Toyota FAIL DevOps Lessons: Crash-Testing Secrets
Toyota stands accused of lax DevOps standards, after the company revealed it had stored production database credentials in a public GitHub repository. That's bad enough, but it also took five years to detect and fix.
Easy to scoff, but could it happen to you? What DevOps processes do you have in place to prevent a similar incident? And do those processes have management buy-in?
It's not the first time this has happened. In this week's Secure Software Blogwatch, we know it won't be the last.
Your humble blogwatcher has curated these bloggy bits for your entertainment. Not to mention: tomorrow's world.
Do: Detect stupid developers who defy doctrine
What's the craic? Satoshi Sugiyama reports – “Toyota says about 296,000 customers’ information may have been leaked”:
“Possibility of spam, phishing”
Toyota said 296,019 email addresses and customer numbers of those using T-Connect, a telematics service that connects vehicles via a network, were potentially leaked. … He added that third-party access “could not be completely ruled out”. … The affected customers are individuals who have registered on the service site using their email addresses since July 2017.
The Japanese automaker … has warned that there is a possibility of spam, phishing scams and unsolicited emails being sent to users’ email addresses. [It] said that a contractor that developed the T-Connect website accidentally uploaded part of the source code with public access settings.
Looks like the details got mangled in the reports. Bill Toulas managed to dig out the real issue — “Access key exposed on GitHub”:
“GitHub has started scanning published code for secrets”
An access key has been publicly available on GitHub for almost five years. … This gave an unauthorized third party access to the contact details of 296,019 customers between December 2017 and … September 17, 2022, [when] the database keys were changed.
This type of security incident has become a large-scale problem that puts troves of sensitive data at risk of exposure. … It is usually the result of developer negligence: storing credentials in code to make retrieving assets, accessing services, and updating configuration quick and easy while testing multiple iterations of apps. These credentials should be removed when the software is ready for actual deployment.
GitHub has begun scanning published code for secrets and blocking commits of code containing authentication keys to better secure projects. However, if a developer uses non-standard access keys or custom tokens, GitHub will not be able to detect them.
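One way to avoid being caught out by either failure mode — hardcoded secrets that GitHub's scanner may never flag — is to keep credentials out of source code entirely and load them from the environment at runtime. A minimal sketch (the variable names `DB_USER` and `DB_PASSWORD` are illustrative, not anything from Toyota's stack):

```python
import os

def get_db_credentials():
    """Read database credentials from the environment instead of source code.

    Raises instead of silently falling back to a hardcoded default --
    a hardcoded fallback is exactly the kind of secret that ends up
    committed to a public repo.
    """
    user = os.environ.get("DB_USER")
    password = os.environ.get("DB_PASSWORD")
    if not user or not password:
        raise RuntimeError("DB credentials not set in environment")
    return user, password
```

In production the environment would typically be populated by a secrets manager or deployment tooling, so nothing sensitive ever lands in the repository.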
Ouch. How did it happen? Simon Sharwood says – “When your contractor leaks site source code”:
The automaker…explains that an outsourced developer responsible for creating T-Connect uploaded the source code for the site to a public GitHub repository in December 2017.…Fortunately, the customer management numbers stored on the server aren’t very helpful to third parties.
But email addresses are, especially if the criminals decide to run Toyota-themed phishing campaigns. Perhaps the automaker also needs to take a closer look at its own business, given that it suffered a cyberattack in March 2022 that closed its factories, sold cars prone to losing wheels while in motion, and falsified vehicle emissions data.
What a mess. talkative sounds slightly sarcastic:
Oh well, glad all the keys have been changed and now people who had access for 5 years finally don’t. Phew.
When will these companies realize that we don’t care if phone numbers and credit cards are leaked – numbers can be changed and purchases can be cancelled. The exposure of 5 years of behavioral data on nearly 300,000 people is the threat. Behavior dictates economics, politics and everything else. Behavioral data is what real manipulation models are built around.
How can developers avoid this kind of SNAFU? u/sometimesanengineer provides a handy list:
To prevent secrets from being published:
• Pre-commit checks on the IDE side
• Pipeline checks for secrets
• Periodic re-scans of your repositories
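The list above can be sketched in a few lines. Here's a toy regex-based scanner of the kind a pre-commit hook or CI pipeline might run — the patterns are illustrative assumptions only; real scanners such as gitleaks or truffleHog ship far larger rule sets, including entropy checks, and can catch the custom tokens GitHub's own scanning misses:

```python
import re

# Illustrative patterns only (an assumption for this sketch).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS-style access key ID
    re.compile(r"(?i)(api[_-]?key|secret|passwd|password)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
]

def scan_text(text):
    """Return (line_number, line) pairs that look like hardcoded secrets."""
    hits = []
    for n, line in enumerate(text.splitlines(), 1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((n, line.strip()))
    return hits
```

Wired into a git pre-commit hook over `git diff --cached`, this covers the IDE-side check; run against the full tree in CI and on a schedule, it covers the pipeline check and the periodic re-scans.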
And, of course, fix any findings as soon as you discover them. Unlike Toyota. Jamesit asks the obvious question:
Why did it take two days to change the key? I thought changing the key would be a priority.
And what about the five years previously? TwistedGreen calls it “Massive Mismanagement”:
Not only did a developer have access to the credentials for the production database containing customer data, but those credentials hadn’t been rotated in five years? Sorry, but the problem is way bigger than one “messed-up subcontractor.” Heads [should] roll for this.
Still, there’s the rub. u/srgevipr argues that management needs to give developers space to follow strong processes:
It’s more about culture than tools. … Everyone in the business needs to clearly understand the benefits. … And it needs management support, so it can be built into the process.
Meanwhile, Toyota occupies a special place in the heart of drinkypoo:
This is the same Toyota where – when they were accused of unintended acceleration – a review of the code revealed multiple code paths that could have been the cause, partly because Toyota engineers didn’t follow Toyota’s own coding standards – let alone well-established industry standards.
Welcome to the Outernet, Emily-prime. Now it’s the envy of all the dead.
Hat tip: Oysdgp. More information: Wikipedia
Previously in And Finally
You have been reading Secure Software Blogwatch by Richi Jennings. Richi curates the best bloggy bits, finest forums, and weirdest websites … so you don’t have to. Hate mail may be directed to @RiCHi or [email protected]. Ask your doctor before reading. Your mileage may vary. Past performance is not indicative of future results. Do not stare into laser with remaining eye. E&OE. 30.
Image sauce: IIHS.
*** This is a Security Bloggers Network syndicated blog from the ReversingLabs blog written by Richi Jennings. Read the original post at: https://blog.reversinglabs.com/blog/devops-lessons-from-toyota-fail-crash-test-