I Asked ChatGPT a Question About the SVB Closure: Here’s What It Said
Something crazy happened last week in Northern California that has impacted the entire world. If you guessed that Silicon Valley Bank shut down, you get a cookie.
By now, this is old news, and most of us are aware that HSBC stepped in to buy the UK arm of the American juggernaut for a measly £1. But while researching this event I thought, why not ask ChatGPT, and other AI software, some specifics about the situation? After all, these new AI tools have quite an impressive track record.
I decided to ask each of them the same question: can you give me some updates on the Silicon Valley Bank closure?
Here was ChatGPT’s response, one echoed by the other software I asked: "The closure of SVB's main banking subsidiary and industrial bank subsidiary by the California Department of Financial Institutions (DFI), as mentioned in the article you provided earlier, is part of a hypothetical scenario created by a law firm for educational purposes."
Some of you reading this may be shaking your fists at me and shouting, “ChatGPT doesn’t have access to current events or recent news updates!” That’s usually the case: ask about something recent and you’ll get a reply along the lines of “I’m sorry, I don’t have access to that information.” But when asked this specific question about SVB, every AI tool I queried claimed it was a hypothetical situation. I was so convinced that I actually believed it myself, despite having read a slew of global sources confirming the story was real. I pasted multiple articles into the software and asked it to summarise them, and every summary concluded with, “again, this is a hypothetical situation created by law firms in case of such an event occurring.”
Today, when I Google this exact ChatGPT situation and its relation to the event, nothing appears online, yet after this experience I’m hesitant to trust ChatGPT as a factual source. But the reason I make such a point about this isn’t just this specific situation; it’s the inevitable biases that will come from emerging AI as it continues to develop, not to mention AI emerging from countries other than the US.
ChatGPT was launched by San Francisco-based OpenAI, coincidentally just up the road from Silicon Valley itself. ChatGPT passed 100 million users within the first two months of its launch, and as of January 2023 was drawing more than 13 million daily visitors, making it the fastest-growing consumer application in history. The people at SVB surely saw the bank’s closure coming, and with ChatGPT usage this high, did someone program it ahead of time to dissuade inquiring users, especially those who had invested millions and were planning to sell their shares?
AI Bias
Self-driving cars are already a real thing. But what about when they become the only thing? Say you’re in a situation where your car is boxed in, an old lady and a young child are both running across the street, and no matter how technologically sophisticated the car is, a crash is inevitable. Will your car have a built-in bias about who to hit? Will it hit the old lady because she’s old and spare the younger child? Will it hit the child because the child wasn’t following the rules? Or what about you, the driver? What if you haven’t paid your taxes, have a warrant out for your arrest, or just got cancelled on Twitter? Will the car kill you instead? After all, you’re not a law-abiding citizen, so maybe the old lady and the child deserve to be saved and not you.
“The reproduction of harmful ideas is particularly dangerous now that AI has moved from being an experimental discipline used only in laboratories to being tested at scale on millions of people,” warns Kate Crawford, author of Atlas of AI.
Gmail has successfully used AI to filter out spam, and the same approach could most certainly be utilised to remove "fake news" from the internet. The only question is: is fake news fake for everyone, or just for some?
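To make that concrete, here is a minimal sketch, in Python with scikit-learn, of how such a filter learns. To be clear, the example messages, labels and code are entirely hypothetical and are not Gmail’s real system; the point is only that a classifier like this reproduces whatever judgements its training labels encode.

```python
# A toy text filter in the spirit of a spam classifier.
# Purely illustrative: the training messages and labels below are made up,
# and this has nothing to do with Gmail's actual system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Whoever builds the filter decides the labels -- that decision IS the bias.
texts = [
    "Win a free prize now, click here",        # labelled "block"
    "Limited offer, claim your reward today",  # labelled "block"
    "Meeting moved to 3pm tomorrow",           # labelled "allow"
    "Quarterly report attached for review",    # labelled "allow"
]
labels = ["block", "block", "allow", "allow"]

# Turn the text into word counts and fit a simple Naive Bayes classifier.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = MultinomialNB()
model.fit(X, labels)

# The model has no notion of truth; it just reproduces the labels it was given.
new_message = "Claim your free reward now"
print(model.predict(vectorizer.transform([new_message])))  # -> ['block']
```

Swap the labels and the exact same pipeline will block the opposite set of messages just as confidently. Applied to "fake news", that is the whole problem: the software never decides what is true, it only learns whose labelling of "fake" it was trained on.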
If ChatGPT ends up eclipsing Google, Yahoo and Bing altogether as the go-to for public information, was this glitch just a sign that the software isn’t finished yet, or a preview of a future where we only find out what they want us to know?
As with all things, the answers may not be obvious just yet, but it is still important to ask the questions.