
TSLA - Tesla Motors


Palantir


The whole Tesla autonomy event was a bizarre charade. Musk showed up late and, once there, displayed very nervous body language. He tripled down on previous predictions that Tesla would make 500,000 cars in 2019 and also insisted that a vast swathe of robotaxi Teslas would be ready by 2020. His insinuations about customers' self-driving Teslas generating $30,000 in ride-sharing income annually raise the question: why even sell cars at all? Why isn't an FSD fleet the company's entire business? The answer is that he needs some way to pivot the narrative: 'Tesla is selling far fewer cars, but that's OK, we're a mobility company after all!' Only credulous people are willing to accept that Tesla is in the lead on autonomy. Cruise and Waymo have far better tech according to industry experts; Tesla is probably 10th or 15th in autonomy. Calling semi-autonomous features full autonomy will just result in car crashes and unnecessary deaths. Musk is desperate. A capital raise needed to happen yesterday. Is he avoiding a margin call on his enormous loans? Is the company prohibited from raising by the SEC? One wonders.

Link to comment
Share on other sites



Technically, I found it interesting that they were bashing LIDAR, even referring to it as an appendix. I thought Waymo uses LIDAR (which is very good at giving a 3D view of the surroundings independent of viewing conditions), but I could be wrong. Tesla wants to do all this with machine vision and a fairly crude radar. Go figure. Basically, Tesla is trying to do what Waymo does with vastly inferior hardware.

Link to comment
Share on other sites

Musk always presents slightly unpolished or nervous body language and tends to talk off the cuff, based on his knowledge, willingly accepting questions and deviating from any planned script. I think that's just who he is. He's very much thinking as he goes, and thinking like an engineer, calling on his experience of problem solving and recalling parts of the decision-making process where they looked into certain options and then ruled them out.

 

I think it is valuable that different approaches are being tried in this space, and I wouldn't like to call whether Tesla's computer-vision-first approach or the LIDAR-plus-computer-vision approach of many others, such as Waymo, will emerge as a clear winner. Going with the crowd or against it isn't necessarily right, and the answer can't be known until the engineering and economic challenges have been solved and the technology has been turned into a mass-market product. I can think of other situations, such as optical networking switches, where it was valuable to have two approaches and not at all obvious which would turn out to be the most stable, manufacturable and cost-effective; if I'd been asked to choose, I probably would've picked the wrong horse.

 

Humans can perceive depth and distance well enough to drive reasonably safely in most conditions, using eyes operating on reflected light from external sources plus the brain's mental model of the world, without supplementary sensors, so it's theoretically possible to mimic that with cameras and a suitably discerning neural-network vision system. A neural-network vision system is potentially prone to optical illusions in the same way as the human brain, though with enough data on these edge cases, and by using all the cameras and sensors present, it may be possible to make such failures rare; roads are in any case typically designed to avoid the most misleading cues.

 

It's also possible for humans to know when it's too hard to drive safely, such as in very thick fog, when we must either slow dramatically or park up until it's safe to move (the worst I've ever had was probably 10-15 mph (15-25 km/h) at night for about 10 miles/15 km, though it could be worse in smog, I imagine). It should be possible for self-driving systems to make the same calls and slow down greatly, or park up and request assistance, if it becomes unsafe to continue.
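As a thought experiment, here's a minimal sketch of such a "know your limits" policy in Python; every name, number and threshold is invented for illustration, not anyone's actual logic:

```python
from typing import Optional

def max_safe_speed_kph(perception_confidence: float,
                       visibility_m: float) -> Optional[float]:
    """Map perception confidence and estimated visibility to a speed cap.

    Hypothetical policy: cap speed as conditions worsen, and return
    None ("park up and request assistance") below a hard floor.
    """
    if perception_confidence < 0.2 or visibility_m < 10.0:
        return None  # too unsafe to continue: pull over and phone home
    # Don't outrun what you can see: a crude rule of thumb of
    # ~1 km/h of speed per metre of visibility.
    sight_limited = visibility_m
    confidence_limited = 130.0 * perception_confidence
    return min(130.0, sight_limited, confidence_limited)

print(max_safe_speed_kph(0.5, 80.0))   # thick fog, shaky perception -> 65.0
print(max_safe_speed_kph(0.1, 200.0))  # -> None: park up, request assistance
```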

 

It might be useful to fit near-IR full-beam illuminators (infrared light just below the visible spectrum's red in frequency, rather than the typical passive infrared that detects warm objects far below the frequency of red light) in addition to headlights, to extend night vision beyond human levels (or to use LIDAR or RADAR for this). And of course it is potentially better than human to be able to monitor 360 degrees constantly, whether via LIDAR, eight cameras or whatever.

 

LIDAR, especially just outside the visible wavelengths, such as near infrared, could be useful for seeing things at night that headlights cannot illuminate without dazzling other human drivers, and it provides absolute range measurements to supplement and cross-check distance information derived from vision systems. Near-IR illuminators at around 700-850 nm could likewise supplement dipped headlights for computer vision without dazzling human drivers, and many standard CMOS camera sensors work fine at these wavelengths (as some people will notice when video-recording their TV remote controls with their phones).

 

RADAR has similar potential to LIDAR for building a 3D picture of surrounding objects without dazzling drivers. Most LIDAR for self-driving seems to be 360-degree, whilst the RADAR in Tesla's and most manufacturers' traffic-aware cruise control and automatic emergency braking systems is only forward-facing, though forward is usually the most important direction for driving. RADAR wavelengths penetrate fog and dust better than visible or near-infrared light, so they could be a useful supplement, but RADAR is useless for reading signs or traffic lights designed for visible light, so it cannot be used alone for full self-driving.

 

I could see LIDAR systems getting substantially cheaper, more compact and faster at capture as demand increases, meaning they could become cost-effective as part of a self-driving architecture over time. They're not there yet, though, so I'm not entirely convinced that Tesla is right to bash LIDAR.

 

Having watched the entire Tesla Full Self-Driving presentation this morning at 1.5x speed on YouTube, it seems that the forward radar and short-range ultrasound (both of which can measure range) can be used to confirm the range derived from vision, and, as before, to detect collisions and dangers ahead of the car in front (using bounced beams to detect movements of vehicles further ahead).

 

They showed a demonstration video in which the vision system's model of vehicle distances and shapes was compared to radar ranging, which is a relatively precise absolute distance measurement with well-defined uncertainty. The comparison was between the vision system's box around each vehicle and the point range from the forward-facing radar, and they matched very well: the radar produced a spot on the rear of each vehicle, in essence, while the vision system produced a box representing the vehicle, and the spot remained tightly aligned to the box, presumably showing how they'd use the radar data to train the vision-based range-estimation neural networks, and that the vision-based estimation had become very good at ranging too. While they doubtless chose to show a sequence where it tracked four or five other vehicles very well on a three-lane highway without any significant misreading of distances, rather than an edge case where it failed, I think there is certainly scope for computer vision to be good enough for driving once enough edge cases are ironed out.
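To make that auto-labelling idea concrete, here is a rough sketch of how radar returns might be paired with vision detections to produce range labels. Everything here (names, structures, the matching rule) is my guess at the general shape of such a pipeline, not Tesla's actual code:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Box:
    """Vision detection, in image (pixel) coordinates."""
    x_min: float
    x_max: float
    y_min: float
    y_max: float

@dataclass
class RadarReturn:
    """Forward-radar point, already projected into the image plane."""
    u: float        # horizontal pixel coordinate
    v: float        # vertical pixel coordinate
    range_m: float  # absolute distance, with small, well-defined uncertainty

def label_boxes_with_radar(boxes: List[Box],
                           returns: List[RadarReturn]) -> List[Tuple[Box, float]]:
    """Pair each vision box with the radar return falling inside it.

    The resulting (box, radar_range) pairs could supervise a network
    that estimates range from vision alone -- the kind of training the
    demo video appeared to illustrate.
    """
    pairs = []
    for box in boxes:
        hits = [r for r in returns
                if box.x_min <= r.u <= box.x_max
                and box.y_min <= r.v <= box.y_max]
        if hits:
            # The nearest return usually belongs to the rear of the vehicle.
            pairs.append((box, min(h.range_m for h in hits)))
    return pairs
```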

 

I would imagine the cues learned by the neural network would include stereoscopic vision (especially for closer objects), an assumption of consistent vehicle size, relatively consistent vehicle speed and direction, vehicle movement past road features, and perhaps even the shadows a vehicle casts on the road.
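The "consistent vehicle size" cue, for instance, reduces to the pinhole-camera relation; a toy example with made-up numbers:

```python
def distance_from_apparent_size(focal_length_px: float,
                                assumed_height_m: float,
                                bbox_height_px: float) -> float:
    """Pinhole-camera range estimate: distance = f * H / h.

    Only as good as the size assumption, which is presumably why a
    learned system would cross-check it against motion cues, camera
    overlap and (in training) radar ground truth.
    """
    return focal_length_px * assumed_height_m / bbox_height_px

# A car assumed to be 1.5 m tall, spanning 30 px, with a 1200 px focal length:
print(distance_from_apparent_size(1200.0, 1.5, 30.0))  # -> 60.0 metres
```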

 

The exact heuristics are emergent from the neural-network training, and from the weights propagated in retraining as edge cases are presented, rather than being programmed in, so it's not fully possible to know how the network is doing it, only how well it performs in tests.

 

They acknowledged that they need to work in varied lighting conditions and cope with imaging artifacts such as lens flare without causing problems, but having so many independent overlapping cameras ought to help the NNs learn, fairly reliably, to ignore an artifact that appears on only one camera.
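A toy illustration of that cross-camera idea; the structure is invented, and a real system would presumably fuse evidence at the feature level rather than voting on named tracks:

```python
from collections import Counter
from typing import Dict, List

def corroborated_objects(detections: Dict[str, List[str]],
                         min_cameras: int = 2) -> List[str]:
    """Keep object tracks seen by at least `min_cameras` cameras.

    A lens flare tends to appear on only one camera, so a track with a
    single supporting camera can be down-weighted or flagged.
    """
    votes = Counter(obj for objs in detections.values() for obj in objs)
    return [obj for obj, n in votes.items() if n >= min_cameras]

# "flare_17" appears on only one of three overlapping cameras:
print(corroborated_objects({
    "front_main":   ["car_1", "car_2", "flare_17"],
    "front_narrow": ["car_1", "car_2"],
    "pillar_left":  ["car_2"],
}))  # -> ['car_1', 'car_2']
```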

 

I found it interesting that they are able to operate the software in shadow mode and automatically detect edge cases (especially where human intervention occurred) to better train the NN, collecting the particularly unusual situations encountered maybe once every million miles and then seeking out more similar edge cases to further train the network.
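My guess at the general shape of such a shadow-mode trigger, with invented signals and thresholds:

```python
def shadow_mode_tick(nn_steering: float,
                     human_steering: float,
                     nn_brake: bool,
                     human_brake: bool,
                     steering_threshold: float = 0.15) -> bool:
    """Return True when this moment should be captured as an edge case.

    The network "drives" in simulation only; moments where the human
    did something the network would not have done are flagged and
    queued for upload as training material.
    """
    steering_diverged = abs(nn_steering - human_steering) > steering_threshold
    braking_diverged = nn_brake != human_brake
    return steering_diverged or braking_diverged

# Human brakes where the network would not have -> capture this frame:
print(shadow_mode_tick(0.02, 0.03, nn_brake=False, human_brake=True))  # True
```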

 

It was also interesting to see that the system made assumptions about the unseen road ahead on curves and crests in much the way a human driver would, revising those assumptions as new scenery came into view. The general heuristics seem to be approaching the level needed for the cars to drive like a good human driver, though we're always left wondering about the edge cases.

 

I can see the value in not relying on detailed mapping as a crutch to supplement the vision system just to get something out there. There does seem to be value in the system being able to read the road ahead regardless of changes, including stopped vehicles, roadworks and obscured lane markings. I've had the impression that Waymo's cars have been rear-ended quite a few times when they get over-cautious and stop unexpectedly in a way a human driver wouldn't.

 

I've also experienced Nissan Pro-Pilot, in a 2019 Nissan Rogue I recently hired in Canada, slowing unexpectedly on two occasions when the vehicle ahead moved into a left-turn lane, slowed and began its turn, clearing my lane ahead; I had set the following distance to maximum. It was over-cautious even once my lane was clear, and if an inattentive driver had been following me closely, I might have been rear-ended had I not applied more accelerator or disengaged Pro-Pilot myself to compensate. Overall, over about 2,300 km, I was highly impressed with Pro-Pilot as a Level 1/2 driving aid, really useful for drafting trucks at a 2-3 second gap on the highway, but I became aware of its limitations and tended to drive around it, modulating my speed in a more normal fashion in queuing traffic, before I returned the car to the rental company.

 

I think there's still a long way to go for all the self-driving programmes, including Tesla's, but I'm quite impressed with the progress Tesla seems to be making. I remain somewhat skeptical, however, as it's impossible for an outsider to know whether we're seeing typical results or cherry-picked examples where it worked particularly well, or to gauge how many nines of reliability will be required to squeeze out enough edge cases to make it truly ready. Only the Tesla employees who have been trying the unreleased self-driving features will really have a good idea of this.

 

I think there's certainly scope for the processing power of the new FSD computer to be sufficient for them to keep refining the edge cases and get the neural networks trained to be good enough.

 

I think one area of confusion was their statement that they take the raw video input from all the cameras at a high frame rate into on-chip memory and throw it away shortly after processing it, after which the system is ready to process the next video frame from all cameras.

I think they also send a copy of the raw video to the specialised H.265 encoder section of the chip in parallel, but don't use that H.265 stream for the self-driving functions (they'd just have to decode it back to raw video, gaining some distortion in the process). Encoding greatly reduces the data rate, allowing the video to be kept in a buffer of slower, cheaper memory and saved for anonymised transmission to Tesla in the event of an edge-case detection, collision or manual override, so that it can be used to train the systems. I'd imagine other sensor data would accompany the video files. The same H.265 encoder section can be used to save video to a USB flash drive for Sentry Mode and Dashcam mode.
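That description sounds like a rolling clip buffer. A minimal sketch of the pattern, assuming (my assumption, not something they stated) that compressed chunks sit in a fixed-size buffer and are persisted only on a trigger:

```python
from collections import deque

class ClipBuffer:
    """Rolling buffer of encoded video chunks, persisted only on a trigger.

    Raw frames are processed and discarded; a parallel compressed
    stream lands in cheap memory and is saved only when an edge case,
    collision or manual override is detected.
    """

    def __init__(self, max_chunks: int = 600):   # e.g. ~60 s at 10 chunks/s
        self.chunks = deque(maxlen=max_chunks)   # old chunks fall off the back

    def push(self, encoded_chunk: bytes) -> None:
        self.chunks.append(encoded_chunk)

    def flush_on_event(self, reason: str) -> bytes:
        """Freeze the buffered clip for upload (or the USB dash-cam file)."""
        clip = b"".join(self.chunks)
        self.chunks.clear()
        print(f"saving {len(clip)} bytes of video, trigger: {reason}")
        return clip
```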

 

I'm quite impressed by what they presented, but that's far from enough to convince me they'll win in this space.

Link to comment
Share on other sites

TSLA is going to have 1 million self driving cabs on the road next year?

 

This stuff positively writes itself!

 

TSLA car owners will put their robot taxis to work and NET $30k a year from its labors!  You would be STUPID not to buy a Tesla car!

 

Did Mr. Musk have a brain aneurysm? 

 

Somebody holding a family member hostage, forcing him to say crazy things?

 

This stuff positively writes itself!

 

Simply going to be fascinating to see what happens!

Link to comment
Share on other sites


 

I know, right? Classic Musk: always overpromising and then under-delivering.

 

Doesn't stop me from enjoying their cars though!

Link to comment
Share on other sites

China is seemingly moving toward FCVs (fuel-cell vehicles):

https://www.theepochtimes.com/chinas-electric-vehicle-industry-hit-hard-by-sudden-policy-shift-as-beijing-turns-toward-hydrogen-fuel_2865743.html

 

which might explain allowing Tesla to operate its China factory with 100% ownership; the timelines are close:

https://www.bloomberg.com/news/articles/2019-01-11/in-china-elon-musk-sure-felt-the-love-that-was-missing-at-home

https://electrek.co/2018/04/17/tesla-china-factory-ownership/

 

Not sure if these have already been posted before:

https://www.theguardian.com/environment/2012/aug/07/china-rare-earth-village-pollution

https://www.nationalgeographic.com/magazine/2019/02/lithium-is-fueling-technology-today-at-what-cost/

 

The idea that Tesla has a competitive advantage in the race for AVs from accumulating 'real-world' miles via current owners seems pretty compelling, but I don't know that much about autonomous vehicles beyond cursory Google searches. Here is a YouTube video with a co-founder of Waymo that validates EM's claim about LIDAR, although he has a bit of a past:

https://en.wikipedia.org/wiki/Anthony_Levandowski

 

"but I don't know that much about autonomous vehicles beyond cursory google searches"

 

--> The best way to find out is to drive or rent one of the Autopilot 2.5 hardware cars (3, S, X) and check all the nuances and scenarios. With recent updates, the car currently warns the user when approaching an intersection where the light is red. Also check out a few demos of Tesla's Advanced Summon on YouTube (or experience it if possible), and try Navigate on Autopilot changing lanes and taking exits on its own. From freeway entrance to freeway exit, it is currently close to 95% automated; the driver hardly needs to take over on the freeway.

 

The FSD chip upgrade later this year (a 2,000-3,000 upgrade) is a further stepping stone toward turning left at signals, turning right at signals, stopping at stop signs, etc. The FSD chip effort is focused on the surface-street solution.

 

Then compare which other consumer platforms/systems/OTA-updated cars have these features available.

 

Based on your post, I would assume Teslas are capable of autonomous driving (Level 4/5), whereas they are at best driver-assistance systems (Level 2).

 

Try driving in LA or SF traffic and it will feel like Level 4-5; you may assign whatever level matches your perception. The current system is pretty good at taking the stress out of a 1-2 hour daily drive, which is what a lot of people are after. With FSD moving to surface streets and stop signs, that will improve. Progress is gradual, and then comes in leaps. It's the only system capable of mapping the real world close to reality, smooth in handovers, with OTA upgrades and patches, and an internet connection at all times to move data back and forth. Currently, you are training the neural network. Ultimately the repetitive tasks go away, as has already happened on the freeway.

Link to comment
Share on other sites

[...] I found it interesting that they are able to operate the software in shadow mode and automatically detect edge cases (especially where human intervention occurred) to better train the NN [...]

 

From what you have described, Tesla, being the only vertically integrated system running in shadow mode in the real world, may be the first to come up with a working solution, versus NVIDIA-based systems where real-world data is collected in only a handful of cities and not from a live fleet, leaving them lagging in data collection. A growing fleet widens the gap.

Link to comment
Share on other sites

I watched the entire presentation live. I was surprised at how seriously the company took their FSD program internally; I had honestly thought it was mostly a way for Tesla to generate more cash flow and pump the stock. Clearly they're really going for it.

 

I also didn't know they could ask the fleet to store images or video to later be uploaded to Tesla (Twitter user @greentheonly has rooted his car and documented this). I had previously thought the computer just stored data and video surrounding disengagements. While the cars clearly don't upload nearly as much data as some bulls think, this still seems like an advantage.

 

However, they're still really far behind. Most people who took demo rides seem to say they experienced a disengagement. The route was only 12.2 miles around Tesla's HQ (https://goo.gl/maps/ERgLqczBWsoDxPzUA), so it was surely not chosen because it was difficult for the cars. If we assume the demo rides disengaged once every two laps, that's still 246 and 451 times worse than Cruise and Waymo, respectively (arithmetic sketched below). Cruise also operates exclusively inside San Francisco, a far more challenging environment. While I think Tesla's data collection is probably an advantage, their approach needs to be vastly superior to the competition if they are to have any hope of catching up.
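The arithmetic behind those multiples, made explicit; the Cruise and Waymo miles-per-disengagement figures below are the values the multiples imply (roughly in line with the DMV-reported numbers), not independently sourced:

```python
# Implied miles per disengagement (mpd) for the demo rides:
route_miles = 12.2
laps_per_disengagement = 2
tesla_mpd = route_miles * laps_per_disengagement   # 24.4 miles

# Fleet figures the quoted multiples imply (treat as this post's inputs):
cruise_mpd = 6_000
waymo_mpd = 11_000

print(round(cruise_mpd / tesla_mpd))   # ~246x worse than Cruise
print(round(waymo_mpd / tesla_mpd))    # ~451x worse than Waymo
```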

 

Keep in mind GM made almost ten million cars in 2017. If they needed more data, couldn't they find a way to get it?

Link to comment
Share on other sites

From what you have described, Tesla, being the only vertically integrated system running in shadow mode in the real world, may be the first to come up with a working solution, versus NVIDIA-based systems where real-world data is collected in only a handful of cities and not from a live fleet, leaving them lagging in data collection. A growing fleet widens the gap.

 

I think a few scenarios are quite possible, but I have a very hard time assessing the probability of each. Certainly, even if Tesla is feature-complete in 18 months, I can't see them obtaining widespread regulatory approval without a further 12-36 month wait, though pilot-testing approvals may be granted in restricted areas.

 

Scenarios:

1. Tesla is first to market with Level 4 full self-driving, gains an invaluable early lead and may end up licensing its FSD technology to other automakers, potentially restricting licensees to the Tesla Network. It's possible others don't license but instead follow Tesla's model without its IP and get reasonable FSD systems about 3-5 years later, possibly via collaboration among other OEMs, their Tier 1 suppliers and/or tech companies with knowledge in the space.

 

2. Another maker with a LIDAR system is first to market with a regulator-approved system and takes a dominant early lead, with others following one to five years later. Tesla may be among those taking a late position, or may need to rethink and adopt LIDAR or someone else's tech.

 

3. Another maker with LIDAR and a geofenced system is first to market by augmenting its system with high-precision mapping and GPS in specific areas, but Tesla or another competitor comes along a little later with the first general solution that can drive virtually anywhere safely enough to satisfy regulators.

 

It's fascinating to watch, and I think the fierce competition will spur great advances to solve this difficult problem.

 

I'm glad I'm not an investor in this space, being neither long nor short any of the companies fighting it out.

Link to comment
Share on other sites

[...] Keep in mind GM made almost ten million cars in 2017. If they needed more data, couldn't they find a way to get it?

 

The question then would be how many of those millions of cars have cameras, radar, sensors, OTA updates, built-in LTE and all the other tech required for this approach. Vertical integration looks helpful for this effort.

Link to comment
Share on other sites


 

I don't know the answer to this. My point was just that if GM did want to gather this sort of data, they are very well positioned to do so. Google could also pay people to put sensor pods on their cars, with OBD2 data acquisition to capture driver inputs. As far as I know, neither company has expressed any desire to do this.

Link to comment
Share on other sites

Q1 results are out, and what a joke this company is:

- guiding for Q2 deliveries of 90k to 100k, while actual April deliveries do not show a substantial increase over January deliveries. How on earth are they guiding for a Q2 loss with 90k to 100k expected Q2 deliveries anyway?

- Q1 capex of roughly $250 million, which is again well below depreciation and amortisation. But they are still guiding for 2019 capex of $2.5 billion. Definitely not possible without a capital raise.

- going to start their own insurance business next month

- they don't know yet where they are going to build the Model Y (California or Nevada), but they already ordered the tooling :o

 

Link to comment
Share on other sites

On a more serious note, I was a bit surprised to learn that they've managed to secure financing from Chinese banks for their new Gigafactory in Shanghai. Last time I checked, the word on the street was that there was no way they could get any debt financing.

Link to comment
Share on other sites

Q1 results are out, and what a joke this company is: [...]

 

Agreed. Enough of a loss to wipe out the previous two quarters' profits, and just after saying in January that all quarters going forward should be profitable. Somehow in the next 60 days there was a surprise loss of $700 million, and now for Q2 Tesla expects record deliveries yet also a loss (?), then 30+% growth in Q3 and Q4 despite the tax credit falling by half again on July 1.

 

On a more serious note, I was a bit surprised to learn that they've managed to secure financing from Chinese banks for their new Gigafactory in Shanghai. Last time I checked, the word on the street was that there was no way they could get any debt financing.

 

This is one-year construction financing that will have to be renewed or refinanced next year when construction is complete. It's short-term financing in China, where, without the huge liquidity intervention early this year, we might already be seeing a recession.

 

The big picture is that if demand really has hit a wall, then Tesla has only months left. Tesla does everything in its power to spin earnings reports favorably, and this one looks awful no matter how you look at it. The increased AR balance on declining sales looks odd, the capex below depreciation looks odd, and maintaining guidance is lipstick on a pig.

 

It looks inevitable that Q2 results will break the growth narrative completely. I don't see a path for them to sell 90-100k cars this quarter, and projecting big percentage increases from there in Q3 and Q4 is bonkers. The phaseout of the tax credit means Tesla will either have to eat margin or raise prices, neither of which seems likely to drive demand.
Link to comment
Share on other sites

Hey all:

 

WOW!  TSLA reports a BIGLY loss....Musk is talking about raising capital cause TSLA is now such an efficient operation!

 

Stock is down ONLY $1.65/share!  What is it going to take to shake investor confidence?

 

I would have thought the stock would be down $40 or $50/share today.

 

Going to be interesting.

Link to comment
Share on other sites

This is one-year construction financing that will have to be renewed or refinanced next year when construction is complete.

 

That’s good to know.  Just out of curiosity, can I ask you where this was reported?  TIA.

Link to comment
Share on other sites


 

https://www.cnbc.com/2019/03/07/tesla-enters-into-agreement-with-chinese-lenders-for-shanghai-gigafactory.html

 

https://www.sec.gov/Archives/edgar/data/1318605/000156459019006788/tsla-8k_20190301.htm

 

Tesla Shanghai Syndication Loan Agreement

On March 1, 2019, Tesla (Shanghai) Co., Ltd. (“Tesla Shanghai”) entered into a Syndication Loan Agreement (the “China Loan Agreement”) with China Construction Bank Corporation (Shanghai Pudong Branch), Agricultural Bank of China Limited (Shanghai Changning Sub-branch), Industrial and Commercial Bank of China Limited (Shanghai Lingang Sub-branch) and Shanghai Pudong Development Bank Co., Ltd. (Shanghai Branch), as lenders, pursuant to which Tesla Shanghai may draw funds from time to time on an unsecured term facility of up to a total of RMB 3.5 billion (or the equivalent amount drawn in U.S. dollars). The proceeds of such loans may be used only for expenditures related to the construction of and production at our Gigafactory Shanghai.  The China Loan Agreement will terminate and all outstanding loans will mature on March 4, 2020, and the loan facility is non-recourse to Tesla or its assets.

 

 

 

Outstanding borrowings pursuant to the China Loan Agreement accrue interest at a rate equal to: (i) for RMB-denominated loans, 90% of the one-year rate published by the People’s Bank of China, and (ii) for U.S. dollar-denominated loans, the sum of one-year LIBOR plus 1.0%. Tesla Shanghai is subject to certain covenants, including a restriction on liens and other security interests on assets acquired and/or constructed using borrowings under the China Loan Agreement, other than specified exceptions, as well as certain customary covenants and events of default. As of March 7, 2019, RMB 31.1 million in loans were outstanding pursuant to the China Loan Agreement. 
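For concreteness, here are the two pricing formulas from the filing as a quick calculation; the benchmark rates plugged in are illustrative assumptions, not actual fixings:

```python
def rmb_loan_rate(pboc_one_year_rate: float) -> float:
    """RMB-denominated draws: 90% of the one-year PBOC benchmark rate."""
    return 0.90 * pboc_one_year_rate

def usd_loan_rate(one_year_libor: float) -> float:
    """USD-denominated draws: one-year LIBOR plus 100 bps."""
    return one_year_libor + 0.01

# Illustrative benchmarks only (the PBOC one-year lending rate was around
# 4.35% in early 2019; one-year LIBOR assumed at 3.0% for the example):
print(f"{rmb_loan_rate(0.0435):.3%}")  # -> 3.915%
print(f"{usd_loan_rate(0.030):.3%}")   # -> 4.000%
```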

Link to comment
Share on other sites

Hmm, I had an alert set for $250...we've all been building up our Tesla Tithes, right?

 

The China factory is purportedly coming online in 2020?

 

Adam Jonas is speculating they will raise $2.5 bills in equity from Chinese sources?

 

Would be nice to be a Chinese official with a bead on trade negotiations when you were pulling that trigger, eh?

Link to comment
Share on other sites

I just increased my short position to something slightly more meaningful. This seemed to me like a long shot not too long ago, but I think the odds are much better now that the stars (i.e., fundamentals, technicals, sentiment, catalysts) have more or less aligned. It's still a YOLO trade in my book, but anyway.

Link to comment
Share on other sites

I've been doing some research on stock offerings, in light of Tesla needing one and their forward guidance probably being fraudulent.

 

From what I've read, anyone underwriting their offering is liable for fraudulent guidance under Sections 11 and 12 of the Securities Act of 1933. Unlike the more common Rule 10b-5 violations, Sections 11 and 12 don't require the plaintiff to prove scienter (basically, intent to defraud). This makes them fairly easy to prove compared to most securities fraud claims. The underwriters won't want to have to defend the 2019 guidance of 100k cars per quarter in court, so they should either require Tesla to give accurate guidance or no guidance at all. I'm guessing this applies to both delivery and robotaxi guidance.

 

Ergo, I think Tesla will have to walk back their fraudulent guidance, which will negatively affect the stock, or not raise at all, which may drive them into bankruptcy. With this in mind, I think July puts may be a very good bet, as they capture Q2 deliveries and probably an attempt at a capital raise.

Link to comment
Share on other sites


 

This is almost right. Yes, there is strict liability, but it applies to what's in the offering documents, which the underwriter prepares and files with the SEC. This is why there is a quiet period before an IPO, and why the SEC can delay the date you go public for running your mouth before the launch: you would be making statements to potential investors that aren't in the offering documents. People are supposed to buy your stock after reading and understanding your S-1, not after seeing you on CNBC with Jim Cramer.

Link to comment
Share on other sites

Hey all:

 

Where are the bulls on TSLA?

 

Nothing to be heard?

 

I have to admit, I've been very wrong, AND very off on my pricing/valuation for TSLA.  Mr. Musk is indeed an incredible promoter...much, Much, MUCH better than I initially gave him credit for.

 

The upcoming 12-18 months are indeed going to be interesting! They are going to have an insurance company, will have solved the self-driving problem, and will have a million robotaxis rolling about.

 

Obviously, they are going to make so much kash money, they simply won't know what to do with it all!

Link to comment
Share on other sites
