
Edition 1

Hello all,

Welcome to the very first edition of The BRAINSTORM Bugle! Who knows, one day it could become a collector's item, so keep it out of direct sunlight and don't remove any original packaging.

Wednesday seemed like the best day to drop this in your inbox. By Friday, you're probably far too busy looking forward to lounging in your penthouse infinity pool with Jeff, Elon, Bill & Mark firmly on mute.

This first edition will focus on the Big Players. It's useful to get a bit of backstory on the dominant Playmakers and a taste of just how much skin they have in the game. Future Bugles will cover a much more diverse range of deep dives and insights, including ethics, philosophy, and futurology. But for now, as this is your first time, I'll be gentle.

"Even if all progress in AI stopped right now, it would still be at least two years before we fully knew what GPT-4 was capable of."

– Overheard at the Ai4 2023 conference.

The Big News

Nvidia, which dominates the AI chip market, has announced its new GH200 'Grace Hopper' Superchip – its most advanced yet, set to launch next year. Remember, Google, AMD, and Amazon are all rushing to get into the chip business. After all, making chips is the modern equivalent of selling pickaxes and tents during the gold rush.

- A CPU+GPU designed for giant-scale AI and HPC.
- A new 900 gigabytes per second (GB/s) coherent interface, 7X faster than PCIe Gen5.
- 30X higher aggregate system memory bandwidth to the GPU compared with the NVIDIA DGX™ A100.

Non-technical version:
It’s really bloody fast.


OpenAI – the organization behind ChatGPT – was reported to have seen a dip in users and is said to be incurring costs of around $700,000 a day to run its services. It's worth remembering that they've been funded to the tune of roughly $13 billion by Microsoft, so I don't think they're lining their shoes with newspaper quite yet. I comment on what the drop in users probably means below.

Non-technical version:
AI ain't cheap.

Google plans to overhaul its Assistant to focus on generative AI technologies similar to those that power ChatGPT and its own Bard chatbot. According to an internal email sent to employees Monday and seen by Axios, "We've seen the profound potential of generative AI to transform people's lives and see a huge opportunity to explore what a supercharged Assistant, powered by the latest LLM technology, would look like."

Non-technical version:

AI will now use the device in your living room to inform you that you need to subscribe to use that particular app or service.

Elsewhere, Google launched AdaTape – an adaptive computation method built on 'adaptive tape tokens' – which basically means the model can gauge the complexity of the task it's been asked to perform and adapt accordingly. A simple request gets fewer resources allotted to it, rather than the current approach where every task, regardless of complexity, gets allotted the same.

Non-technical version:
Imagine that up until now we've paid for every AI transaction with a five-pound note, regardless of the simplicity or complexity of the task carried out. Adaptive tokens let us break the fiver into smaller coins to better reflect the value of the job being done. This makes AI both faster and much more efficient.

Google
Google showed off another huge leap with the demonstration of RT-2, Google DeepMind's latest robotics advance – a 'vision-language-action' (VLA) model – and its capabilities are pretty incredible. Unlike its predecessors, this robot doesn't rely on a complex list of commands. Instead, it learns from web-scale vision and language data, allowing it to recognize objects it has never encountered before.

Non-technical version:
Robots and machines are now capable of learning by sight, and this makes them infinitely more adaptable and capable of carrying out a huge range of tasks previously confined to humans.

Project IDX and Codey
Project IDX introduces a browser-based workspace that lets developers build across any device. Codey, Google's AI coding assistant, is the latest in a series of significant disruptors to coding and development.

Non-technical version:
Coding just got ridiculously faster and easier.

Microsoft
From August Earnings Call: "We had a solid close to our fiscal year. The Microsoft Cloud surpassed $110 billion in annual revenue, up 27% in constant currency, with Azure all-up accounting for more than 50% of the total for the first time... we remain focused on investing to lead in the new AI platform shift by infusing AI across every layer of the tech stack.”

Microsoft also launched Microsoft 365 Copilot tools deliberately built to relieve the burden of frontline workers such as ambulance crews – handling tasks like scheduling. Bing added new features to its AI, including natural language understanding (NLU) and text-to-image generation – basically, you type words in and it generates images. Since its launch, Bing AI has handled 10 million conversations across 100 countries.

Non-technical version:
We are going all in on AI, and frontline workers will be included in that, and Bing is bringing the bling.

RecycleGPT in China
In China, the launch of RecycleGPT is making waves. It claims to reuse and recycle work the AI has already done, skipping repeated computation and making large language models significantly faster and more efficient.

Non-technical version:
AI can self-edit by not repeating itself and reusing its old work rather than constantly reinventing the wheel.

Amazon
From the August earnings call: every single business inside Amazon has multiple generative AI initiatives underway, ranging from streamlining operations to enhancing customer experiences. Alexa is a prime focus, and the possibilities are endless.

Non-technical version:
Amazon will embrace AI to be ever more Amazon.

Apple
From its Q2 2023 earnings call: Apple views AI and machine learning as core fundamental technologies, integral to virtually every product it builds. The company is committed to responsible advancement, enriching lives through innovation.

Non-technical version:
We too are in this AI thing big time.

THE BIG BUZZ TOPICS
Two topics have dominated discussions around AI recently. First, AI eavesdropping: researchers have reported that AI may be able to identify which keys are being pressed just from the sound of keyboard taps, with around 90% accuracy. Until it's confirmed whether this is more than a rumor, it's probably best to avoid typing out any passwords or… well, you know, while you're using AI.

As mentioned above, ChatGPT saw a dramatic rise, reaching 25 million users within a few months of launch – faster than platforms like Instagram and Spotify. However, recent data suggests waning interest: while total users peaked at 97 million in May, there was a 7% drop in June, and engagement on r/ChatGPT, a once-bustling Reddit forum, has declined.

Obviously, this drop in usage has prompted various theories. Many, including myself, believe it's probably due to a lack of understanding of AI's true nature – a tool that increasingly requires mastery for optimal results. I am very quickly learning that "prompt engineering" is absolutely vital to using AI well. Wikipedia defines prompt engineering as "the process of structuring text that can be interpreted and understood by a generative AI model." This really is the crux of everything I've learned over the last nine months: it takes a genuine blend of art and science to get AI to do things that are truly useful, relevant, and innovative. The good news is that once you've mastered it, it can perform genuine wonders. The bad news is that mastering it is getting harder by the day. For example, GPT-4 now has just under 900 'plug-ins' – software that integrates into ChatGPT to massively enhance its functionality.

My own take: I see some version of prompt engineering becoming either a full-time role within organisations and companies, or something someone (like me) is brought in to do on a needs basis. As part of BRAINSTORM's offering, I've been working on developing accessible training. To be honest, I can get almost anyone (even you) up to decent 'conversational AI' in just a few hours. For projects and businesses beyond that, though, it's probably quicker, easier, more convenient, more economically sound, and more sanity-saving to talk with a prompt engineer, scope out your wants, needs, and goals, and let them go away and do the work.

Next week, as well as updates, I'll touch on the increasingly thorny world of AI ethics and regulation. As always, any and all feedback is most welcome.

Now go – your infinity pool awaits.

Vincent

Brainstorm