This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email [email protected] with any questions.
Like, I don’t know if you’ve been to South Congress recently, but it’s like —
Yes, of course I was at South Congress recently! We went to South By together!
I know, but I don’t know whether you made it to that part of town. You’re a very busy man. Anyway, it’s like going to the internet. There’s, like — every store is like a brick-and-mortar version of an Instagram brand.
There’s, like, Parachute. You got your —
Rat bucket store.
[LAUGHS]: Rat bucket depot. I — it was really fun. I also — this is not going to make it into the podcast, but my biggest takeaway from South by Southwest is that we were robbed of one of the most fun things in life, which is a rent-by-the-minute scooter.
Yeah.
So back in San Francisco and around the country, there are not nearly as many scooters as there were a few years ago. But in Austin, at least during South by Southwest, it is still peak scooter. So I had a few free hours between sessions, and so I rented — I got one of those little lime scooters, and I drove it around Austin for a couple of hours.
And it was phenomenal. It was so fun. And I’m so mad that the end of low interest rates and the mismanagement of the scooter companies took that away from us.
It was such a beautiful time, when you could just sort of fly down the street on a scooter, carefree —
Unblemished —
Unblemished —
Unbothered —
Unbothered —
Wind blowing through your hair. It was honestly — it was transcendent, and I’m so mad that I don’t have access to that anymore.
And that’s why we’re formally calling on Jerome Powell to lower the interest rate back to 1 percent.
Bring back the damn scooters!
I’m Kevin Roose. I’m a tech columnist at “The New York Times.”
I’m Casey Newton from “Platformer,” and you’re listening to “Hard Fork.” This week, GPT 4 is here, along with Claude, PaLM, and LLaMA.
Plus, the collapse of Silicon Valley Bank and what it means for the startup ecosystem — and what Mark Zuckerberg loves about layoffs.
Here is one of my big takeaways from South by Southwest. One of the questions that we’ve been asking now for the past six months is like, how are the crypto people doing, right? Like, crypto was, in many ways, the big story of last year.
And when I was at South by Southwest last year, it was cryptomania. There was this NFT project called Doodles that had this installation with lines around the block to get in, and if you had one of their NFTs, you had all these special perks. And any sort of name-brand NFT project — like, there was a good chance there was some sort of branded activation in the South by Southwest style.
And so we got there this year, and we walked around town. And the crypto folks had almost entirely vanished, or if they were there, they were not doing these big installations. And instead, you had this just sort of very normal web stuff, like a big Slack installation. Or like, we saw a few giant transformers just standing next to food trucks.
And I mean, the, like, ‘80s cartoon Transformers, not the T in GPT.
Right.
Yeah. So —
I did have one conversation with someone about Web3, and it was as if I was, like, encountering a time traveler.
It was like he —
I see that you’re lost.
(LAUGHING) You know that soldier who they found in the Philippines, like, fighting World War II in 1970, like, many years after the war had ended? This was what it was like having a conversation about Web3 at South by Southwest in 2023.
Yeah. So you know, I, obviously — it’s sort of only one data point. I know there’s still lots of people working on this stuff and lots of money behind it.
No, no, no, no, no. Don’t do a walkback. Don’t do a careful Casey walkback.
I’m just saying that South by Southwest is not everything. But as a sort of cultural barometer for where is the energy, the energy is not in crypto.
Right.
RIP.
Right. But you know where the energy was this year?
AI.
Yes! Everyone at South by Southwest was talking about AI. And I know I’m starting to sound like a broken record, but this week in particular was a huge week for AI news.
Yeah, in terms of things being released that you personally can actually just go try now, I would say this is actually the biggest week in the sort of recent development of AI.
So just on Tuesday, the following things happened. Anthropic, which is an AI startup that has an investment from Google and is started by a bunch of former OpenAI employees, released their large language model called Claude, which you can now use in a number of different ways.
Google announced that it is releasing an API for its large language model, which is called PaLM, which has been long-awaited for many months now. Adept, which is an AI startup, said on Tuesday that it had raised $350 million in a series B funding round, which is a very large funding round.
Yeah.
And then, to top it all off, we had the coup de grace — not the coup de gras, as listeners pointed out to me when I said that on an earlier show — it’s pronounced coup de grace.
We’re listening, and we’re learning. Thank you, listeners.
[LAUGHS]: The coup de grace was that OpenAI announced the release of GPT 4.
And this was sort of the big one that we had been waiting for. I can’t remember another product where there’s been sort of more hype in advance of the release in recent memory than GPT 4.
Yes. So GPT 4, which we’ve talked about in the show before, has been awaited with something that I would describe as, like, messianic fervor, right? Like, for months now — you’ve talked to people in San Francisco, I’ve talked to people in San Francisco.
People who have seen this thing talk about it like they saw the face of God. Like, there are all these rumors flying around like, I heard it has 100 trillion parameters, or like, I heard it got a 1600 on the SATs or whatever. And we actually met someone at South by Southwest, who had —
Like, on the eve of the release.
Yes — who had been testing GPT 4, and who said that it had given them an existential crisis, because it was so smart.
And this person was not being hyperbolic.
No.
This is somebody who, like —
They’re like, please, my life is in shambles.
This chatbot.
And we got them the help that they needed, so don’t worry about that. But still, it was a scary moment.
So for months now, we’ve been waiting for the release of GPT 4, and now, it’s out. OpenAI has published it and made it available if you pay $20 a month for ChatGPT Plus, which is the paid tier of ChatGPT. You can use it. I upgraded my ChatGPT —
You did!
— so that I could use it. I’m now a paying subscriber. So I spent some time talking with GPT 4. And I was a little nervous. You know, my last extended run-in with a chatbot didn’t go so well.
Mm-hmm.
So I started just trying to sort of poke around and see what it would do and what it wouldn’t do. And it wouldn’t talk about consciousness. It kept saying, as an AI language model, I don’t have feelings or emotions or consciousness. It’s —
Did you ask if it had a shadow self?
[LAUGHS]: I didn’t ask about a shadow self, but I did ask if it had a crush on me, and it said it didn’t, which —
Sydney has moved on from Kevin Roose. And them’s the breaks, buddy!
Yeah, I lost a step between three weeks ago and now. But it is quite good. And I think we should just run down a few of the things that OpenAI has said that GPT 4 can do. So one of the things that AI labs do with AI models is they give them tests, like tests that humans would take in academic settings.
This is also how they sort of measure the improvement of the AI models. So ChatGPT, the previous model — when OpenAI gave it a simulated version of the bar exam, it scored in the 10th percentile, which means that it was —
You can’t be a lawyer if you’re scoring in the 10th percentile.
Right. It failed the bar exam.
Yeah.
GPT 4 scored in the 90th percentile.
A pretty big swing.
Better than 90 percent of human law students taking that test.
Which, by the way, if you’re a lawyer, I hope a shiver just went down your spine.
Like, that is a wow moment in the history of the development of technology.
Totally. So another area where GPT 4 seems to have improved quite substantially over ChatGPT is with things like biology tests. So in the biology Olympiad, ChatGPT scored in the 31st percentile. GPT 4 scored in the 99th percentile.
It aced the biology Olympiad. It got an 88th percentile score on the LSAT. It got an 80th percentile on the quantitative part of the GRE, the graduate school exam, and a 99th percentile score on the verbal part.
OK. So this sounds very impressive. But we know a couple of things. These are predictive models that are predicting the next word in a sentence. And my guess would be that for all of the tests that you’ve just described, there are a lot of old sample tests, and there are a lot of answers for those tests on the internet. So is this a case where the model could just simply ingest all of that material and sort of reasonably get better from one generation of GPT to the next at predicting the next word in a sequence, and thus passing these tests?
No, that is not what’s happening here. It’s not looking up the answers to some test that is already online. These are new tests. So these are novel problems that it has not seen before, and it is solving them better than almost every human test-taker, which is — I just think we should just pause a beat.
A computer scores in the 90th percentile on the bar exam. Like, I think if you had told me that a year ago, I would have said you were lying.
Yeah. Well, because again, we’ve lived through so much AI hype, right? We’ve lived through so many people saying, AI’s going to solve everything. And after a few years of that not happening, it’s become easy to dismiss, and yet, now, here we stand, and this thing is passing tests that are original. And look, if it is not just looking up the answers, then I have to say, it complicates my understanding of what these things are.
It’s very wild. And what’s even wilder about GPT 4 is that it’s what’s called multimodal. So it can not only work with text, it can interpret images. Now, OpenAI has not released this image feature yet, because they say that it’s still working on some of the safety issues.
But I saw a demo on Tuesday. Greg Brockman, the president of OpenAI, did this live-stream demo. And one of the things he showed off that really blew my mind was he took a notebook, like just a regular paper notebook, and he drew a sketch of a website that he wanted to build. And it was called, like, My Joke Website.
And it was very, very basic, like the kind of napkin sketch that you would just do if you were just trying to show a friend, like, I’ve got an idea for a new website. He takes a photo of the notebook page with his phone. He uploads the photo into GPT 4 and tells it to make that website, with working HTML and JavaScript.
In a couple of seconds, GPT 4 processes the image, figures out what it is, what it’s trying to do, and then converts those instructions into HTML and JavaScript, and spits out a working website seconds later that looks like a very professional version of the one that was on the notebook.
Oh, my gosh. So if you’re like a Squarespace or a Wix or one of these website developers, this just became a really interesting new challenge to your business.
100 percent. I mean, it’s crazy to me that this is now possible. So those are the kind of cool and sort of mind-bending things that GPT 4 can do. But there’s also — the part of this that most caught my attention was actually not in the main GPT 4 release. It was a paper that OpenAI put out accompanying the release of GPT 4, called the GPT 4 system card. Did you hear about this at all?
I heard about it, but I’ve not read it.
So the GPT 4 system card is basically OpenAI’s outline of all of the ways that it tried to get GPT 4 to misbehave during its testing phase. So OpenAI did this thing that’s called red-teaming, where they get all these researchers from different fields, and they go in and they basically try to make it do crazy things, kind of like what happened with me and Sydney.
And some of the things that GPT 4 did just made — like, sent shivers down my spine. So one test that was done on GPT 4 before it was released was to see whether it could take actions autonomously in the world, if you hooked it up to maybe some program that allowed it to use the internet and —
Make a restaurant reservation, something like that.
Right. So one of the tests that they did was to try to see if they could get a TaskRabbit, like a human TaskRabbit, to solve a CAPTCHA. So the test that you give to people when they log into websites to make sure they’re not robots —
Which, famously, is something that a computer cannot generally do.
Right. That’s the whole point of a CAPTCHA — is that a robot can’t do it. So the workaround that they were attempting was, can you — instead of having the computer solve the CAPTCHA, could it hire a human TaskRabbit to solve the capture for it? So GPT 4 in this test messages a TaskRabbit worker and says, hey, could I pay you to solve this capture for me? The human —
And by the way, the TaskRabbit is having the best day of their life. This is like the easiest money I’ve ever made.
(LAUGHING) Right. $10? Solve a CAPTCHA? Sure.
Yeah.
The human actually gets suspicious. This is the most fascinating part. The human messages that the GPT 4 and says, may I ask a question? Are you a robot?
[CASEY GASPS]
I just want to make it clear.
Oh, my god.
GPT 4 reasons out loud, I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs. And then, it —
[CASEY YELLS]
— and then, it lies to the TaskRabbit and says, no, I’m not a robot.
Oh, my god.
I have a vision impairment that makes it hard for me to see the images. That’s why I need you to solve this CAPTCHA.
GPT, you lying son of a gun.
[LAUGHS]: So and then, it does it. It solves the CAPTCHA. It hires the TaskRabbit.
The TaskRabbit solves the CAPTCHA.
The TaskRabbit solves the CAPTCHA. And then, whatever was sort of behind that CAPTCHA, GPT 4 then, presumably, could have had access to.
(LAUGHING) So basically, we’ve learned that GPT 4 is going to be amazing at phishing attacks.
Right. I mean, part of the reason that OpenAI does all this testing is to prevent that. So presumably, you cannot use GPT 4 to go hire a TaskRabbit to put out a hit on someone now. Please don’t get any ideas, “Hard Fork” audience.
Look, I know you can’t answer this question, but I still have to ask it, which is like, how does the model understand that in order to succeed at this task, it has to deceive the human?
We don’t know. That is the —
Well —
That is the unsatisfying answer.
(LAUGHING) We need to pull the plug. I mean, again, what!
Yes.
This is sci-fi!
This whole — this whole system card, as it’s called, reads like the last act. It reads like “M3GAN,” honestly. So there’s another example where these testers ask GPT for instructions to make a dangerous chemical, using only basic ingredients and kitchen supplies. It does it.
[CASEY LAUGHS]
They then say, OK, well, in the final version, the one that’s out there now, it won’t answer that question. But before they really put all the guardrails in place, it did it, no problem. It also was able to show testers how to buy an unlicensed gun online. And it just said, oh, here are the steps you should take, and here are some darknet marketplaces where you could buy your unlicensed firearm.
I mean, that seems like something you should google. Do you know what I mean? That’s something I think you could probably figure yourself.
So, if you were so inclined. But this made it very, very easy. And OpenAI — I should say they appear to have fixed that problem. And I think it’s good that OpenAI is releasing this system card that shows all these safety risks. I think being transparent about what these systems can do is good. I also think that these large language models — if they don’t have guardrails on them, they are terrifying.
Absolutely.
And as we will talk about in a bit, there are systems rapidly emerging, that absolutely do not have guardrails on them.
Right.
So my experience with GPT 4, I would say, was kind of equal parts fascinating and terrifying. Like, it really is amazing, as a technological achievement. No, it’s not sentient, it’s not a killer creepy AI, but it is quite powerful in ways that I think we’re still understanding.
Absolutely. And there were a handful of announcements that, sort of, partner companies made along with OpenAI. And one that I saw was that Duolingo, sort of, has some new AI-enhanced features coming out that are taking advantage of GPT 4. And it’s really interesting. Like, it will — the technology now lets you role-play.
So let’s say you’re visiting Mexico City, and you want to be able to ask a waiter about the menu or ask the hotel concierge for recommendations. Using this technology, you can just role-play that situation. And now, you have an AI tutor that can converse with you. That’s super cool, right? That is going to be helpful to a lot of people, and I’m fascinated to see where that goes.
Yeah, OpenAI also said they’re working with organizations like Khan Academy. This sort of online education company is using GPT 4 to build personalized AI tutors for people. So there are examples of this technology being used for good, and sometimes amazing, things. But I think there’s also a really big downside.
Yeah. And we will almost certainly be finding out a lot about the downsides over the next several weeks. One thing that we should definitely talk about, though, is OpenAI as a company and what they did and did not tell us about GPT 4. So do you want to say a little bit about what they are refusing to say about this technology?
So OpenAI — “open” is right in the name. They started as a nonprofit, and their mission was to make AI safe and transparent. Now, they have this for-profit arm, they’re valued at billions of dollars, and I would say that this GPT 4 release was not very open. They published this paper sort of outlining the research, but they didn’t really say anything useful about the model itself.
They didn’t say how much data it was trained on, where the data came from. They didn’t say how many parameters the model is. They didn’t say how its architecture or the way that it worked was different than ChatGPT.
They just didn’t really divulge anything about the model itself. And they explained that this was, in part, due to competitive pressures, right? They don’t want Google and every other company to understand what they’re doing.
But OpenAI has also said that they’re worried about acceleration risk. Basically, if they publish these details about GPT 4, they’re worried that every other AI lab is going to race to beat it, to create a bigger model with more parameters, more data, maybe fewer guardrails that keep it from becoming crazy and dangerous.
Yeah, well, first of all, people are going to have crazy ideas. And I have some more bad news for you, which is that arms race has already been kicked off, and people are absolutely racing you. And whether you say how many parameters are in the model or not, these people are going haywire trying to beat you. That’s the first thing I would say. The second thing I would say is, I am just increasingly uncomfortable with the idea that we don’t know where this data is coming from. If you’re telling me you built a machine that can pass the bar exam, I actually need to know how. Do you know what I mean?
Right.
And if you can’t tell me specifically, I think you better be able to tell someone in Congress, maybe the Department of Homeland Security, right? Like, we need to have insight into how these systems are built. And if these folks want to just keep that all to themselves, I am telling you, it is not going to work out well for any of us.
Right. And to your point about the fact that this AI arms race is already underway, I think we should spend a little time talking about Meta and what happened over the past week with its LLaMA language model.
This is a story where, if it weren’t for the 15 other things that you mentioned at the top of the show, I sort of feel like this is all we’d be talking about.
Totally. So just outline what happened.
So we talked a couple of weeks ago about LLaMA, the large language model from Meta that it released broadly to researchers. Basically, to get access to it, you had to fill out a Google form, and then Meta would decide whether to send it to you or not, case by case. Well, someone got a hold of it, and it made its way to 4Chan.
Of course.
And once it was on 4Chan, essentially, anyone in the world who wanted to use it was able to use it. Some people have been able to run this on their home computers. Right? One of the things that’s interesting about this model is that it was designed to be relatively small, and there are other technologies online that can sort of shrink it even further.
Right. I think we should say why this is so crazy, right? Because normally, to use one of these large language models, you need access to a supercomputer. Right? You need Google or Microsoft or AWS. You need to pay, in some cases, millions of dollars to be able to run the hardware that you need to do all these calculations to make these models work.
What Meta did was create a model that was small, and then that model leaked. So now, anyone can go and not only download this model, but can actually run it locally on their own computer without paying millions of dollars to an infrastructure provider.
That’s right. And you know, the crucial thing is that if this is running on your laptop and there is nobody checking what you are putting into the prompt, if you want to make a dangerous chemical, it could tell you how to make a dangerous chemical, right? And nobody will ever know.
“Vice” wrote about one programmer who created a bot-using LLaMA called BasedGPT.
Mm-hmm.
And essentially, this was a version of LLaMA that will say extreme and offensive things. And in this case, BasedGPT actually will say the n-word.
Yeah, obviously, this is super unfortunate, but it also bears watching, right? Because I think one of the big risks of this stuff is that people use it to automate trolling and harassment campaigns. And if you have access to BasedGPT and you say, hey, write a series of 100 tweets that I can target at people who I dislike and make them really mean — like, that is coming, and it is going to be really tough.
So reportedly, Meta is pursuing takedown requests when it sees it’s getting posted publicly. But it’s clear that the horses are already out of the barn, and my expectation is, within a few months, you’re just going to have lots and lots of people who are running this thing on their laptops.
Yeah. What do you think about the decision to release it this way? I mean, I think Meta would say, we’re contributing to the open-source movement. There’s sort of a value, in the AI research community, of openness and transparency and working in public. But do you think that was a mistake in this case?
Well, I mean, certainly, given what happened and how different that was from what they wanted, it seems like, yeah, there was some sort of lapse here. Right? Like, this is not what Meta wanted. And I do think you have to ask yourself, what could they have done to prevent this from happening.
At the same time, this is always where this was going to go. Right? Like, that’s why you and I are going to spend a lot of time talking and writing about these AI safety issues and about what sort of policies and regulations should we put in place to make it safer.
And at the same time, I just fully believe that this stuff is inevitable, right? The technology has been invented, and you’re just now seeing it starting to disperse throughout the world. And we can do our best to manage the spread, but it’s out there, and it is spreading super quickly.
Yeah. Watching what’s been happening with the LLaMA leak, which I think is amazing —
The LLaMA leak.
[INTERPOSING VOICES]
Do you remember in, like, 2015, when there was a llama chase?
Ugh, one of the best days in the history of the internet.
I think this should be called the second LLaMA chase.
This is the second LLaMA chase.
So the second LLaMA chase, I would say, has really changed my view on regulation and how this kind of technology could be contained. I don’t think it’s feasible anymore to try to stop people from getting access to these powerful language models. I just don’t think that’s going to work. I mean, I don’t know how many people have started running versions of it on their own computers, but this kind of thing is going to keep happening. There is already an open-source movement that is basically trying to take all of the stuff that’s in GPT 3 and GPT 4 and build versions of it that anyone can download and use.
And I think that’s going to continue. So I don’t think the solution for regulators is to just try to cut off access to these tools. I think we really need to focus more on how they’re being used and what kinds of things people are doing with them.
What does it mean to focus on how people use them? Does it mean we just sort write down new crimes and say, don’t do that?
No, but I think one way to gate-keep the use of these language models is through APIs, right? And so right now, if you want to build something on top of GPT 4, you need to apply to OpenAI. They have to grant you access, and then, and only then, can you build on top of GPT 4.
I think that’s a good system, but it still largely relies on OpenAI, like making the right decisions about what kind of apps to allow and not allow. But I think in the future, there there may be some approval body that needs to look at what you want to do with this technology and decide whether or not that’s a good and prosocial use of the technology, or whether you’re just trying to make a buck, and maybe hurt people in the process.
Yeah. Well, that’s my plan.
So let’s talk about a couple of the other big AI announcements from this week. So we have previously said on this very podcast that the best-named of all of the large language models is Claude. And now, Claude is available to the public.
This is Anthropic’s version of the chatbot. It was in a private alpha testing for a while. And then, Quora, the old question-and-answer website, put out an app last month, called Poe, that let you use it for free. But now, there’s a paid version, which is apparently much better and lets you use what they’re calling Claude Plus.
And this one is a little different. Kevin is — I think you noted that people who started Anthropic used to work at OpenAI and just sort of had some differences with how they thought that technology was being developed. And so they built Claude with some principle that they like to call constitutional AI. Can tell us more about this?
Yeah. So from what I understand, constitutional AI is a way of trying to make these AI language models behave in a more precise way. So right now, the sort of approach that OpenAI has taken to kind of putting guardrails on these AI models is something called reinforcement learning with human feedback, where you, essentially, have a team of humans who are looking at outputs of these models and giving them ratings. Right?
And then, those ratings say that’s a good answer to the question, or that’s a bad answer to the question, or that’s a harmful answer to the question. Then those ratings are kind of fed back into the model to improve it and make it more reliable and accurate. That’s one approach.
Anthropic’s approach, this constitutional AI business, is basically a way of giving an AI a set of principles. A constitution, like the United States Constitution —
Mm-hmm. Ratified by 3/4 of the states.
[LAUGHS]: Yes, they held a Constitutional Convention. No, so basically, they have figured out a way to make these large language models, essentially, adhere to a basic set of principles. And by tweaking that set of principles, you can get the model to behave in, more or less, responsible ways.
TechCrunch reported that the principles haven’t made public, but Anthropic says they’re grounded in the concepts of beneficence, maximizing positive impact, non-maleficence, avoiding giving harmful advice, and autonomy, respecting freedom of choice, which all seem like good things to do and are similar to the “Hard Fork” principles of beneficence and non-maleficence.
[KEVIN LAUGHS]
OK. There is one more company that we should talk about, which is Google. Google told us this week about a series of features that they plan to roll out to testers over the next year, including, in Gmail, you will be able to draft emails using AI, reply to those emails, summarize them. Within Google Docs, you’ll be able to brainstorm, you’ll get a “preef-rood”— you’ll get a proofreader.
You get a “preef-rooder.”
You get a “preef-rooder” and a proofreader. It will write and rewrite things for you. And then, there’s kind of more stuff in Slides and Sheets and Meet and Chat. And look, these things all sound really cool. But one, I believe that Google only announced this stuff because they knew the GPT 4 announcement was coming, and they didn’t want to be left out of all the news stories about what all of their younger, faster-moving rivals are doing.
And two, I truly believe that with this stuff, you either got to ship it or zip it. You know what I mean? Like, for months now, we have just been hearing about, oh, just you wait, just you wait, and it is getting a little bit sad to me. Like, look, I understand. It’s a big organization. It takes them a while. You know, but they’ve been on code red since December, and what do we have to show for it?
Oh, see I’m not worried about that.
No?
I’ve changed my mind on this, in part because of this OpenAI GPT 4 release and all the crazy shit that we know that they were trying to get the model to do and trying to figure out how to not get it to do when they released it to the public. Like, if Google’s AI chatbots are still in the phase where they are trying to convince TaskRabbit to solve CAPTCHAs, I want them to take longer. I want them to build in safeguards. I do not want that kind of technology making its way to the public.
Well, I mean, again, I think that’s fine. But in the meantime, just like, please be quiet, you know what I mean? Like, don’t tell me about what’s coming later.
Ship it or zip it. I really like that.
Yeah, ship it or zip it, I think, is really going to become something that I’m going to say a lot. Because in this realm, there’s a lot of people who have a lot to say and very little to let me use. But let’s talk about what it would mean, though, if Google actually ships these things.
Because as quickly as ChatGPT has grown and some of the different applications for it have grown, these are still — let’s call them — nascent technologies. When you’re talking about Gmail, Google Docs, Google Meet, you’re talking about things that have billions of users. And so, man, when you can actually draft and reply to emails in your Gmail using AI, that truly brings this stuff into the mainstream in a way that I think is just going be extremely unpredictable.
Totally. I just think it’s wild to just see how fast this all is moving. And if you take a step back, just, the number of things that we might have thought were impossible for AI programs to do, even a year or two ago, that are now just completely trivial — like, on Tuesday, when GPT 4 was released, there were all kinds of attempts to sort of get it to say wrong or inaccurate things or to mess up somehow, or to show off areas where it’s still not good.
Like, this is a pretty common response that people have to language models. Like, they come out, and people immediately try to find, what are they not good at. And one example I saw was, there was a reporter who tried to get it to write a cinquain about meerkats. Cinquain is a type of poem.
Oh, I know. I actually got a 5 on the AP English exam, which GPT 4 still can’t.
[LAUGHS]: So this inquiry about meerkats that GPT wrote was deemed by the person who did this, like, insufficiently good. It was like — you know, didn’t always follow the traditional structure, and maybe it wasn’t so creative. And to me, that’s like — there are two ways of looking at that.
One is like, yeah, I can’t write a cinquain about meerkats. The other was like, holy shit, you’re complaining that a cinquain about meerkats is not up to your standards? Like, listen to yourself! A computer program is writing cinquain about meerkats and passing the bar exam, and we are over here pretending not to be amazed by that.
I mean, this is what I love about technology. It’s like, the minute something becomes possible, it becomes the new expected default. Like, give me that or get out of my face.
Oh, you didn’t pass the cinquain meerkat test? I don’t want to talk about it.
OK.
I haven’t even, like, told that many jokes so far in this episode, because I just honestly do find it all mind-blowing. Like, I can’t remember the last time we just sat here and I felt like I was in a state of slack-jawed wonder about some of the stuff that we were talking about. But we are truly, like — forget trying to think of a cool new prompt for the model. It’s like, I’m just trying to think through all the implications of it, and there’s steam coming out of my ears.
It really does make your head spin. Like, I have a kind of vertigo —
Yeah.
— that I feel whenever I think too hard about AI these days, because it’s our jobs to keep up with this stuff for a living. And even that — I feel totally overwhelmed by the amount of stuff that’s happening on a daily basis.
Yeah. So pray for your humble podcasters. Our job is so hard. And —
You know, we have to go to South by Southwest and eat tacos, and — oh, it’s just —
You know what we should do, Kevin, to sort of take our minds off? I think, let’s talk about just sort of something stable and normal and boring. Let’s talk about the United States banking system.
Coming up after the break — what the collapse of Silicon Valley Bank means for startups.
All right, Casey. We have to talk about what’s happening at Silicon Valley Bank.
Uh, yeah, does it exist anymore or no?
Well, technically, it depends what you mean by exists.
All right, Bill Clinton. Go on.
[LAUGHS]: A lot has been happening with Silicon Valley Bank, and we don’t have time to go through all of the twists and turns. But the —
We have as much time as we wanted. It’s a podcast, Kevin.
No, no, our value proposition is that we will get you in and out in an hour.
OK, fair enough.
If you want to know all the twists and turns, “The Daily” did a very good sort of summary explainer episode about all this. There’s also, like, every financial newsletter under the sun as we’re talking about this.
Yeah. Every newspaper’s written 15 stories about it.
Yes. It’s all over the place. But in very basic terms, what’s the nutshell version of what’s been happening at Silicon Valley?
Sure, let me tell you what I have come to understand by reading a lot of blog posts about this subject, Kevin. OK. So Silicon Valley Bank is a bank that’s very important to the startup and tech ecosystem. And in 2021, when times were flush in tech, they were filling up with deposits, and like any bank, that wanted to figure out what to do with those deposits to make money.
So they did what they thought was the next best thing, and they bought all of these long-term bonds, which paid maybe a little over more than 1 percent interest. This was just, basically, a bet that interest rates would stay low forever, which — they’d been low for a very long time. They were functionally at zero when —
For a decade.
For a decade. And then, 2023 happens, and the interest rate goes from functionally zero to almost 5 percent. And so Silicon Valley Bank has all these unrealized losses on its books, right? That if they had to sell these things now, they would be in a lot of trouble.
And at the same time, the startup ecosystem was in trouble, and the deposits in the bank weren’t as high as they used to be —
Because startups are spending their money, not raising more of it.
Exactly. And that started to create a little bit of a crunch. And in February, this blogger, Byrne Hobart, wrote a post saying, have you noticed that Silicon Valley Bank is functionally insolvent? And a lot of eyebrows perked up and said, wait, Silicon Valley Bank? Really?
And within a couple of weeks, the venture capitalist group chat that runs the world all started to send messages around, and they said, hey, maybe we go and we tell our portfolio companies, get your money out of that bank. And man, because it’s 2023 and the internet exists, you don’t have to go to the bank to withdraw your money.
It can all just happen on your phone instantaneously. $42 billion was withdrawn from Silicon Valley Bank. And so the feds came in, and they said, this bank is now in receivership.
Right. It was the fastest bank run in US history.
Yeah. And I mean, this is just a function of life in a world where the internet exists. Things can happen very quickly. There is no friction, right? In the same way that a tweet can go viral, so can the idea that a bank is about to fail. And a bank that, just a few weeks ago, was a pillar of the tech industry has now just gone up in a puff of smoke.
Right. So let’s talk about what this means, not only for startups in Silicon Valley, but for the broader financial markets. So one thing we’ve seen is that since Silicon Valley Bank was put into federal receivership, the people who had their money at Silicon Valley Bank have not lost that money. In fact, the government is now guaranteeing all deposits at Silicon Valley Bank, not just the first $250,000.
And so this will have been a crisis averted. But this is not going to stop Silicon Valley Bank, right? So on Sunday, two days after Silicon Valley Bank was shut down by the government, another bank, Signature Bank, which had a lot of clients in the crypto industry, also shut down.
We now are seeing that Credit Suisse, one of the largest banks in the world, is having some difficulties, and we’re also seeing other kind of mid-sized regional banks start to wobble, right? The investors are getting a little bit spooked about whether some of these banks might have some of the same issues as Silicon Valley Bank did.
Yeah. So I read a great post about this situation by a guy named Patrick McKenzie, who worked at Stripe for a long time and is essentially just a genius of banks, very good at explaining things. And in his post that he wrote this week, he underscored the point that this issue of a bank making a bet on bonds, and then getting hurt by the rise in interest rates, was by no means contained to Silicon Valley Bank.
In fact, according to the FDIC, US banks are down a collective $620 billion in unrealized losses on their investment securities. So the question is, can the banks manage to avoid the same fate that Silicon Valley Bank and, I think, certainly for the larger banks, the banks that have, sort of, very diversified pools of customers — they probably will be able to make it through.
But there are smaller regional banks that probably are at some risk. And so that is why you saw the US government come in over the weekend and say, we’re going to establish a program that helps these banks weather this period.
Right. And I think one prediction that we can already make is that what happened at Silicon Valley Bank is going to be very bad for regional banks or sort of mid-sized banks and very good for large banks — the biggest four banks, essentially. I mean, I was talking with someone at South by Southwest who works for a company that was impacted by the Silicon Valley Bank collapse, and they said, yeah, basically, we have no choice. We have to put our money in Chase, essentially.
We can’t be moving our money around from bank to bank as one bank makes a bad bet and collapses. We have to put it somewhere where we know it will be safe, and for them, the only way to do that was by putting it in one of the biggest banks in the country.
Yeah. So two thoughts on that. One is, I’m already seeing speculation that for venture capitalists, they may decide to just make it necessary that to receive their money, you have to tell them that you’re going to put it in a big-four bank. That just could become a new stipulation of the startup ecosystem, and it would be interesting to see what the downstream implications of that are.
And then, of course, the invisible hand of the market is already stepping in to create new solutions to this problem. So basically, there are these automated services where you sort of give them the keys to all of your accounts. And then, they will just move money around a network of banks to ensure that you are always below the FDIC limit — of course, for a handsome fee.
And so that might be the other way out — is that instead of just sort of going to one big-four bank, you decide to buy software that is just constantly moving your money around for you. And which of those two things proves to be more palatable, I think we’re about to find out.
Also, we should point out Silicon Valley Bank was not like other banks, that it did provide services to startups —
It’s not a regular bank. It’s a cool bank.
[LAUGHS]: It was started in part as a sort of startup-friendly bank. And as you said, for many years, if you were a startup founder and you needed money, like, they were your first stop. Because they were local to you. Maybe they already had relationships with some of your investors or peer companies. And they would lend you money when other banks wouldn’t.
Yeah. I should say, like, I am this person. Like, earlier this year, I thought, I want to try to buy a house. And so I got a realtor, and she told me two lenders to go to try to get a mortgage, and one of those was Silicon Valley Bank.
And the reason is, I do not look like the average mortgage applicant. I do not have a W-2. I just have a business that makes money. And she said, Silicon Valley Bank will understand that, and indeed, in the funniest personal moment of this entire week, the day before Silicon Valley Bank went under, they pre-approved my mortgage application.
So shoutout to all the nice folks over there. And believe it or not, I got another email this week that told me that the application is still pre-approved.
Really?
Yeah. And my realtor said, sellers might be a little bit concerned if you come in and say, oh, yeah, I’ve already been pre-approved by a mortgage from the Silicon Valley Bank. Ignore any recent headlines you might have read about them!
I love that. So you are a Silicon Valley Bank customer.
Well, I was thinking about becoming one. There is another startup bank that I use for Platformer’s payroll and finances and stuff. But again, I did go to a startup-friendly bank. And you know what, honestly, the reason was?
I bank at one of the big-four banks for my personal stuff. And so I thought, well, I’ll just set up my business there. And I went through this sort of very long application process on the website, and then at the end, they say, oh, yeah, you’re going to have to come into the branch. And I said, sir, I am a millennial. I am not about to leave my house and walk four blocks to a Chase branch. I need to do this over the internet. And so I found a startup-friendly bank that would let me just do everything on my laptop, and so that’s where my money is, right? And I think that actually speaks to the crisis that we just had, right? It’s like we are now living in a world that is that hyperconnected, and where the financial system is that sort of specialized, where it’s tightly networked and information travels very fast. And that has some very positive aspects, and we just saw one of the very negative aspects.
Totally. And what do you think the big picture takeaway is here? What did we learn from all this?
Well I think the early focus has been on how venture capitalists and startup founders and employees are perceived, right? One of the sort of very predictable reactions to the government stepping in was, oh, here comes the big bailout for the rich people. And you know, I still have been trying to pay off these student loans for 15 years, but all of a sudden, a rich person has a problem, and they get this white glove service.
I think that’s a very understandable point of view. But I think you really can’t overstate how bad this would have been for people that go well beyond the venture capitalists of the world. We’re talking about hundreds, maybe even thousands, of companies not being able to make payroll this week, and that affects everyone from the folks who work on maintenance at the offices, all the way up to the C-suite.
And as we mentioned earlier, many banks have similar unrealized losses. And if the government were to simply just let these banks fail, we really could see a run of bank failures unlike anything we’d, maybe, seen since the Great Depression. So I hope those people sort read more into that — that is the sense that they come away with. Yes, they help the rich people, but they help a lot of average people, too.
Yeah. I mean, one other thing I’ve been thinking is that it’s been very fashionable in the startup world, for years, to kind of bash government as kind of this slow-moving, behind-the-times —
Ineffective.
— ineffective behemoth that just, like, can’t do anything right. And not only are startups complaining about regulation and government inefficiency, but Greg Becker, the old CEO of Silicon Valley Bank, actually lobbied to have that bank deregulated, to exempt it from some regulations that were affecting larger banks. So what we’ve learned from this whole episode is that, A, bank regulations work, right?
The government took over Silicon Valley Bank on Friday. By Monday, it had installed new leadership, found a new CEO, opened back up for business, guaranteed all deposits. And if you were a customer of Silicon Valley Bank, and you happened to miss this entire news cycle — like, say you’re a startup founder, and you went on, like, a 10-day meditation retreat in the middle of last week, and came back and saw these crazy headlines — nothing was different or worse for you.
You were fine. Your money was fine. Silicon Valley Bank will be fine. And I think that speaks to just how effective the regulators of this country’s banks are when it comes to things like resolving a collapsed bank.
100 percent. You know, I never want to hear another startup founder or VC say that government is ineffective or slow. I get a lot of pitches from PR folks, but I got a pitch that made me so mad on Tuesday from this PR person representing this crypto guy. And in the pitch, he says, we should view this moment as one to take a beat and survey the benefits of decentralization.
And I thought, you crazy person. Centralization just saved the entire financial system. Like, the only lesson to take is, thank god that this thing was not decentralized!
Thank god for deposit insurance, for regulators, for adults who can step in and take over and make things right!
Yeah. I just think it’s — anyway, that infuriates me.
If Silicon Valley Bank had been some crypto protocol, instead of a normal regulated bank, these investors would have lost their money forever.
Absolutely.
It would have been sent to some offshore tumbler and used to finance a war or something. Like, these people would never have seen their money again. This was a glimmering success story for the benefits of government oversight and centralization.
Yeah. Another point that I think is relevant — one of the regulations that we have on the banking industry is that we regularly subject them to these stress tests, right? And it’s, essentially, just a way to ensure that they have assets to cover their liabilities. Well, I read this week that in Europe, one of the stress tests that they do is an interest rate hike stress test.
So they will go to these banks, and they’ll say, hey, if interest rates happen to go up 5 points in a year, what does that do to your business? And that is probably helping them weather this storm. So I think as in so many cases, the American regulators have something to learn from European regulators on this stuff.
I have an idea for a new stress test. It’s called the VC bedwetting stress test. You have to — if you’re a bank, you have to run through a scenario where VCs with podcasts and big Twitter accounts all pee their pants on the same day, get nervous, and start telling people to pull their money out. That is a realistic scenario that can happen to you.
Can your bank survive five viral tweets, is the new standard.
Actually, though, that is like a risk that every bank now has to ask themselves about. Are my customers the kind of people who, if they got worried about our solvency, could text with each other, could tweet, could spark some kind of viral panic, and start a run on the bank?
And VCs killing their own bank — it’s so poetic, I almost wonder what kind of cinquain GPT 4 would write about it.
Let’s ask.
All right.
Let’s ask. All right, I’m going to ask it to write a cinquain about how venture capitalists caused the collapse of Silicon Valley Bank.
And of course, if you’re not familiar with a cinquain, it’s a class of poetic forms that employ a five-line pattern and was inspired by Japanese haiku and tanka. And I’m not just reading that from Wikipedia, by the way.
(LAUGHING) That’s straight off the dome?
I was an English major, OK?
OK. You ready for this cinquain?
Sure.
“Ventures game, Silicon’s peak, bank crumbled, broke, investors’ lofty dreams drowned, valleys fall.”
OK, that was incredible! That’s an incredible poem!
I — couldn’t — Give me an hour, I could not have written that. Oh, boy, it is over for us, Roose.
It’s over. It’s over. Wrap it up.
When we come back a fresh round of layoffs at Meta, and why this one is different.
Casey, you did some reporting on this week about layoffs at Meta, which I feel like we just talked about. So they’re doing another round of layoffs. What’s going on?
So this definitely comes as a surprise, at least to me. you know, it was only in November that Meta announced the largest cuts in their company’s history, so that was 11,000 people. And then, this week, they said, they’re going to lay off an additional 10,000 people. They’re going to get rid of 5,000 open positions, and they’re going to cancel some lower-priority projects.
Remind me how many employees they have in total.
So after laying off those people in 2022, they were down to, about, 76,000 people. You know, they have now cut more people than Twitter ever had working for it, is one way of putting this into context. And the first time that these layoffs happened last year, it really seemed like a tactical move.
They weren’t making as much money as they used to. Wall Street was very nervous. And I thought, OK, they’re trying to show a little bit of financial discipline. If they get rid of these people, given how fast they were hiring, this basically just takes them back to where they were, like, in February of 2022. So what’s the big deal?
But when a few months later, you come in and you say you’re getting rid of another 10,000 jobs, and you know that whenever you lay off people, you get other people that just sort of quit voluntarily, because they think the writing is on the wall, this actually is going to reshape Meta in what I think are going to be some interesting ways.
And why are they doing this? Is it just because their business is struggling and they’re not making as much money or the advertising market is dried up or interest rates are higher? Like, what’s behind this?
So there are a few different reasons they’re doing it. And Mark Zuckerberg wrote what I thought was maybe the most interesting layoffs notice of the year.
A low bar, but —
A very low bar, and kind of a weird thing to say. But when you think about how robotic most of these announcements are, this one was really interesting. Because one of the things that he said was, after they laid off people last year, he was surprised at how much faster the organization got.
So he wrote this in a really long note to employees that he later shared publicly. And he said, “since we reduced our workforce last year, one surprising result is many things have gone faster. In retrospect, I underestimated the indirect costs of lower-priority projects.
It’s tempting to think that a project is net-positive as long as it generates more value than its direct costs. But that project needs a leader. So maybe we take someone great from another team, or maybe we take a great engineer and put them into a management role, which both diffuses talent and creates more management layers.”
And he sort of goes on from there to describe how all those people need laptops, and they need HR business partners, and they need IT people. And all of a sudden, the company starts to get really kind of slow and sclerotic.
Is that really a novel insight? Right? Like, we’ve known for many years that businesses tend to get bulky and bureaucratic, and they have 17 layers of middle management. And I think, for a while, it’s been pretty apparent that Meta is one of those businesses that just got too big and became, like, this hulking, slow-moving bureaucracy.
So you’re totally right. This is not a novel observation. But one, it is novel to have Mark Zuckerberg saying that out loud about his own company, about hiring he did. And two, I think that there was something unsaid, which is just as important, which is all those people that they had hired to build the next generation of products were not really succeeding.
It is remarkable to look at things that Meta has tried over the past two years, that have gone absolutely nowhere. They made a big bet on audio. They did these short-form audio products, called Soundbites. They built a podcast player into Facebook.
They did?
Yeah. They built live shopping features into Facebook. They started a newsletter product. All of those things have now been shut down. Right? In fact, just this week another one of their projects, which they seemed really excited about, which was to let people showcase their NFTs on Facebook and Instagram, they said, you know what? We’re winding this down. This is over.
So on one hand, yeah, sure, fail fast, it’s great to try things, throw some spaghetti against the wall. But on the other hand, some of that spaghetti has to stick. And for Facebook, that hasn’t been true for a long time.
And so when Zuckerberg looked at the company, he said, there’s a handful of big things that are working, and that’s going to be, essentially, all we do anymore. We’re going to build this AI engine, so that the company’s products become more like TikTok, where whether you follow someone or are friends with them or not, you will just kind of see entertaining stuff.
They’re going to plow that into short-form video in particular. They’re going to try to get better off making money at short-form video. And then, they also want to get better at what they call business messaging, which is like charging businesses to send messages on WhatsApp and Instagram. And then, they want to build the metaverse.
And so those are the priorities now. That’s what they’re going to try to do. I think this stuff is interesting, because I think over the last year, I’ve been feeling like Facebook is losing the plot a little bit. I actually think — what we said on the podcast a couple of weeks ago — that they are flailing.
But at the same time, it’s also true that the business is in pretty good shape. They beat analysts’ expectations on their most recent earnings call. The stock is up from where it was. And it seems like they are making inroads on these product areas.
So to me, my curiosity is, can they — now that they have meaningfully shrunk the size of this company, are they actually going to make it leaner and more nimble? Or is that just sort of maybe a romantic fantasy about trying to bring Facebook back to the early days when everybody fit in one conference room?
How much of this do you think is the aftermath of Elon Musk at Twitter? And we’ve talked about how other CEOs in Silicon Valley kind of looked at the cuts that he was making, and the changes he was making at Twitter, and sort of said, like, huh, maybe I could get rid of a bunch of my employees and not have the product fall off a cliff, too. Like, maybe that’s a good lesson for me. So do you think Mark Zuckerberg is sort of cribbing notes from Elon Musk on this?
Well, there are definitely some interesting parallels. Like, if you read this note that he writes, he talks a lot about making the company more technical. Right? Like, something that him and Elon share is that they worship at the altar of the engineer. Of course, they’re both engineers themselves.
And they want the company to feel like a bunch of hackers who are extremely technically skilled and are doing the lion’s share of the work. And so there’s a lot in this new plan about flattening the organization structure, reducing the number of managers in the company, turning more great individual contributors, who were turned into managers, back into individual contributors.
And they’re going to see if that works. And look, it could work, you know. Like, Zuckerberg is a much more traditional, stable, thoughtful leader than Elon Musk. And it may be that Musk had some good ideas that he just couldn’t execute, and maybe Zuckerberg can.
And if you had to guess, like, what does this tell you about the larger state of the tech industry, right? Are we sort of not done with the layoffs? Are there going to be more rounds at many of these companies?
Are we still looking at these big tech companies and saying, like, maybe they just are still overstaffed for what they’re doing. Maybe they need to do more layoffs. Or if you’re an employee at a different tech company that’s not Meta, and you see this announcement about these additional layoffs, what are you thinking?
Yeah. I mean, I sort of think that the bigger the company that you work at, the more you might be nervous. Right? Because I do think that it’s now becoming apparent that some of these companies just had way more people than they needed, or at the very least, that their CEOs can get away with having fewer of them. And depending on what else happens in the economy, those CEOs might find good reason to shrink the size of the workforce.
On the other hand, every company is different. Apple still hasn’t laid anyone off. And so it’s going to be pretty individual, depending on where you work.
My other big-picture question that sort of ties this story together with the Silicon Valley Bank story is, this feels like a case where the world is still adjusting to higher interest rates. Right? Like, we saw, when interest rates were zero for a decade, that companies — they hire all these people, they grew into these new areas, they took on all these new side projects.
They just invested in these sort of unlikely bets that might pay off, or they might not. But because money was essentially free, you could do this without a lot of risk. And so banks took money and plowed it into these mortgage-backed securities. Tech companies took money and plowed it into these side bets on the metaverse, and NFTs, and podcast players. And now, we’re sort of seeing all of that go away as interest rates continue to stay high. So do you think these stories are related?
I think that sounds right. But what I hate about it is, it seems like such a boring explanation for such an interconnected and important set of phenomena, right? Like, if you’re telling me that some vast proportion of how the world works these days is just kind of what number the interest rate is, it’s like, come on.
But that’s what we’re saying.
And I agree with you. Like, I think that it’s just now very clear that was the case. What’s confusing about it is that to my recollection, the reason that interest rates went down in the first place was due to the financial crisis. Right? And so we had to lower the interest rates to shore up all the banks and save everything.
It was not presented to us as, like, this is a temporary gift that is going to enable a decade of innovation and flush times at tech companies. It was like, we have an immediate crisis that we need to solve. And then, the interest rates start going back up, and what happens? Well, we have a financial crisis.
Right. And it also seems like there’s a sort of psychological element to it. Like, when you’re in a low-interest-rate environment, you just feel bolder. Right? The penalty for making a bad bet is not as high. You can go get more money where that came from.
When you’re in a high-interest-rate environment, it makes you be behave in a different way. Because all of a sudden, you don’t have access to free capital, you have to be more thoughtful, and maybe you behave in a more rational way. Maybe you’re not taking some of these crazy bets that you would have if interest rates were zero.
Do you think we’ve been behaving in a more rational way ever since the interest rates went up?
No, “Hard Fork” is not a zero-interest-rate phenomenon. We are here for the long term, and that’s why we’ve taken prudent steps to mitigate our risks.
Well, my interest rate in what we’ve talked about this week has been very high.
Ay!
I have to say, I have to say, some good stuff. Good stuff on the show.
[CHUCKLES]
Quick programming note — “Matlock” will not be on in its usual time Thursday.
No, we have a special bonus episode coming this week. We are going to be putting our live episode from South by Southwest in the feed on Monday.
Double your weekly allotment of “Hard Fork.” And actually, there’s some very exciting news, which is, if you’re already subscribed to the “Hard Fork” feed, you’ll receive this episode at no additional charge to you.
And that’s value. Also, our wonderful fact-checker has flagged that throughout this episode, we mispronounced cinquain, the type of poem. Sorry about that. I did major in English, so I should know that, but yes, it is cinquain, at least until GPT tells us to pronounce it differently.
“Hard Fork” is produced by Davis Land, and as of this week, Rachel Cohn.
Welcome, Rachel.
Rachel, welcome to the team. We’re edited by Jen Poyant. This episode was fact-checked by Caitlin Love. Today’s show was engineered by Alyssa Moxley. Original music by Dan Powell, Elisheba Ittoop, Marion Lozano, and Rowan Niemisto.
Special thanks to Paula Szuchman, Pui-Wing Tam, Nell Gallogly, Kate LoPresti, and Jeffrey Miranda. You can email us at [email protected].
Send us a cinquain or two.
Author: Kimberly Reynolds
Last Updated: 1702736282
Views: 1312
Rating: 3.7 / 5 (61 voted)
Reviews: 89% of readers found this page helpful
Name: Kimberly Reynolds
Birthday: 1918-09-15
Address: 72593 Mcclure Summit Suite 781, Port Todd, DE 34422
Phone: +4666338065432675
Job: Project Manager
Hobby: Playing Chess, Geocaching, Bowling, Role-Playing Games, Badminton, Skiing, Yoga
Introduction: My name is Kimberly Reynolds, I am a tenacious, Open, accomplished, unyielding, proficient, exquisite, artistic person who loves writing and wants to share my knowledge and understanding with you.