This week in AI we have stories about another blowout quarter for NVIDIA, fine-tuning OpenAI’s GPT-3.5, and major companies starting to block OpenAI’s web crawler.
- NVIDIA blows out Wall Street expectations for the quarter
- OpenAI partners with Scale AI for GPT-3.5 fine-tuning
- Amazon, Reuters among companies to block OpenAI web crawler bot
NVIDIA Reports Blowout Quarter and Exceeds Expectations
In one of Wall Street’s most anticipated earnings reports in recent memory, AI-chip leader NVIDIA reported its second-quarter results on Wednesday, and Wall Street was not disappointed.
The company blew away consensus estimates with a staggering 88% revenue growth from the previous quarter. NVIDIA reported earnings of $2.70 per share vs. estimates of $2.09 per share and revenues of $13.51 billion vs. estimates of just $11.22 billion.
Perhaps more impressive was the company’s guidance for the rest of the year. NVIDIA expects its sales to come in at more than $16 billion for the third quarter, which would represent a 170% growth from the third quarter in 2022. Much of this sales growth is powered by its industry-leading AI chips, which help to not only run AI software but develop it too.
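As a quick sanity check on those growth figures, here is a minimal sketch. Note that the prior-period revenues (roughly $7.19 billion for the previous quarter and $5.93 billion for the third quarter of 2022) are assumptions drawn from NVIDIA’s public filings, not figures stated in this article:

```python
# Rough sanity check on the growth percentages quoted above.
# Prior-period revenues (prior_quarter, q3_last_year) are assumptions
# from NVIDIA's public filings, not stated in this article.
q2_revenue = 13.51       # reported Q2 revenue, $ billions
prior_quarter = 7.19     # assumed previous-quarter revenue, $ billions
q3_guidance = 16.0       # guided Q3 revenue, $ billions
q3_last_year = 5.93      # assumed Q3 2022 revenue, $ billions

qoq_growth = (q2_revenue - prior_quarter) / prior_quarter * 100
yoy_growth = (q3_guidance - q3_last_year) / q3_last_year * 100

print(f"Quarter-over-quarter growth: {qoq_growth:.0f}%")      # ~88%
print(f"Projected year-over-year growth: {yoy_growth:.0f}%")  # ~170%
```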
NVIDIA also announced a $25 billion stock buyback plan, a move often used by companies with excess cash to return value to shareholders. Net income for the quarter also improved dramatically as NVIDIA flexed its muscle on gross margins: profits came in at $6.19 billion, up from just $656 million in the same quarter last year.
The company’s data center sales were front and center as the world continues to expand its AI infrastructure. CEO Jensen Huang stated that the world has trillions of dollars’ worth of data centers that will soon transition to accelerated computing and generative AI. NVIDIA’s chips are the brains behind this AI revolution, and most analysts anticipate this to be just the start of what could be a multi-decade AI bull run.
OpenAI Teams With Scale AI to Allow Fine Tuning for GPT-3.5
OpenAI has officially announced that it is partnering with third-party developers to allow customization and fine-tuning of its GPT-3.5 Turbo model. One of the first such partnerships is with San Francisco-based Scale AI, which focuses on data labeling. Scale AI’s customers will be able to enhance and customize their data using the power of GPT-3.5.
What is the significance of fine-tuning GPT-3.5? It allows the model to be tailored to a desired style and to perform specific tasks set out by the user. A fine-tuned GPT-3.5 model can also learn and cite proprietary data belonging to each company, making it a fully customizable tool.
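To make the customization concrete: GPT-3.5 fine-tuning is driven by example conversations supplied as a JSONL file, where each line is a chat transcript in the same role/content format as the Chat Completions API. A minimal sketch of preparing such a file (the example content is, of course, hypothetical):

```python
import json

# Hypothetical training examples in the chat fine-tuning format:
# each record is one conversation with system, user, and assistant turns.
examples = [
    {
        "messages": [
            {"role": "system", "content": "You answer in the company's support style."},
            {"role": "user", "content": "How do I reset my password?"},
            {"role": "assistant", "content": "Head to Settings > Security and choose Reset Password."},
        ]
    },
]

# Fine-tuning data is uploaded as JSONL: one JSON object per line.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# Sanity-check that every line round-trips and each message is well-formed.
with open("train.jsonl") as f:
    for line in f:
        record = json.loads(line)
        assert all("role" in m and "content" in m for m in record["messages"])
```

The resulting file would then be uploaded to the fine-tuning service, which trains a customized copy of the model on these transcripts.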
OpenAI also reported that it is developing its own in-house fine-tuning tool. Given that the Scale AI partnership was reported just one day later, it is safe to assume the OpenAI-developed tool will serve a different purpose.
OpenAI has also indicated that companies will be able to fine-tune its flagship GPT-4 model in the future. According to previous reports from OpenAI, GPT-4 is expected to be ready for customization this fall. As of this writing, no other partnerships had been announced by OpenAI, although more are expected to be on the way.
Major Companies and Media Outlets Ban OpenAI’s GPTBot
From the good news for OpenAI this week to the bad: OpenAI announced earlier this month that it would use its GPTBot web crawler to gather data for its large language models, which power ChatGPT. The bot was only announced about two weeks ago, and yet a rising number of the world’s largest websites have moved swiftly to block it.
These websites include Amazon and many of its international sites, the New York Times, Reuters, Bloomberg, CNN, Lonely Planet, Ikea, and Airbnb. As of this writing, nearly 10% of the world’s 1,000 largest websites by daily traffic have blocked GPTBot.
What’s the big deal about having your site crawled? For starters, a large portion of web content, including images and even text, is copyrighted. Web crawler bots do not recognize whether something is copyrighted and take the information without permission or payment.
One of the only ways for these websites to deter web crawler bots is a long-standing convention called a robots.txt file. Crawlers like GPTBot are supposed to recognize these files and cease taking any information from the site. OpenAI has stated that GPTBot does recognize robots.txt files and will respect the wishes of the sites that publish them.
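The opt-out itself is just two lines in a site’s robots.txt, matching the rule OpenAI has documented for GPTBot. A minimal sketch using Python’s standard-library robots.txt parser to verify its effect:

```python
from urllib.robotparser import RobotFileParser

# The robots.txt rule OpenAI documents for opting a site out of GPTBot:
# block the "GPTBot" user agent from the entire site.
robots_txt = """\
User-agent: GPTBot
Disallow: /
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# GPTBot is disallowed everywhere; crawlers with no matching rule
# fall back to the default, which is "allowed".
print(rp.can_fetch("GPTBot", "https://example.com/article"))        # False
print(rp.can_fetch("SomeOtherBot", "https://example.com/article"))  # True
```

Note that robots.txt is purely advisory: it only deters crawlers that, like GPTBot claims to, choose to honor it.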