All of This AI News Is Getting Absolutely Crazy!

Intro

After last week's insane week in the AI world, you would think that things would start to slow down, but this week has already started off just as insane as last week. Last Tuesday, we saw huge announcements in the AI space, from OpenAI launching GPT-4 to Google announcing they're gonna be putting AI inside of their Workspace tools. Last week also saw Midjourney version 5 and Microsoft 365 Copilot, and we ended the week with Stability AI launching Stable Diffusion Reimagine. And this week, it's only Tuesday, and we don't have just one or two announcements; we have five huge, insane AI announcements that have already come out.

Runway Gen-2

Starting with yesterday's announcement that Runway Research was about to drop Gen-2. It feels like only a couple of weeks ago that we just got Gen-1, but Gen-2 is a multimodal AI system that can generate novel videos from text, images, or video clips. This new model promises to have complete text-to-video built in. It says here you can synthesize videos in any style you can imagine, using nothing but a text prompt: if you can say it, now you can see it. They also released a video showing off some of the features of Gen-2, including example text-to-video prompts like "a surfer catching a wave," "a lion in a living room, walking in a rainstorm, cinematic," "a desert landscape," "an apartment interior at sunset," "extreme close-up of an eye," et cetera. We finally have text-to-video, and it's going to be easy for anyone to use inside of Gen-2.

Now, Gen-2 does have a wait list right now. You can get on the wait list, and they say they'll be rolling it out over the coming weeks. And that was just Monday. Everything I'm gonna talk about in this video deserves its own individual video, because everything that's come out in the last couple of days has been absolutely huge news.

NVIDIA Keynote

Now, this week is also the week of Nvidia's GTC conference, where they talk about all of the biggest breakthroughs and advancements they're building at Nvidia, and the theme this year is a lot of AI and a lot of metaverse stuff. And according to Jensen Huang, the CEO of Nvidia, - We are at the iPhone moment of AI.

- Today, Jensen gave his keynote, where he talked about all of the crazy things Nvidia has been working on. One of the big standout moments was the portion of the presentation where they announced Nvidia AI Foundations. Alongside DGX Cloud, it's designed to give really impressive compute power to anybody through the cloud. So you no longer need an insane, crazy system to generate things with AI and run things like large language models; you'll be able to do it with Nvidia's new services.

- DGX Cloud offers customers the best of Nvidia AI and the best of the world's leading cloud service providers. This win-win partnership gives customers instant access to Nvidia AI in global-scale clouds. Today, we announce Nvidia AI Foundations, a cloud service for customers needing to build, refine, and operate custom large language models and generative AI, trained with their proprietary data and for their domain-specific tasks. Nvidia AI Foundations comprise language, visual, and biology model-making services. Generative AI will reinvent nearly every industry.

- Now, what Nvidia AI Foundations is: a set of services designed to let you train your own custom model, whether that's a large text model, a large image model, or a large biomedical model.

- Nvidia AI Foundations comprise language, visual, and biology model-making services. Customers can bring their model or start with the NeMo pretrained language models. And I'm thrilled to announce a significant expansion of our long-time partnership with Adobe to build a set of next-generation AI capabilities for the future of creativity, integrating generative AI into the everyday workflows of marketers and creative professionals.

- Now, Jensen's keynote is really worth watching in full; I highly recommend it. He breaks down all of the crazy and cool innovations Nvidia is working on right now. But them offering GPU cloud compute is probably going to be one of the biggest announcements they made, because it's going to enable so many other companies to start building their own proprietary AI models on top of Nvidia's hardware, which will probably create an explosion of all sorts of new AI products and tools in the coming months.

In fact, here's what Jim Fan, an AI scientist at Nvidia, said after today's keynote: "I can finally discuss something extremely exciting publicly. Jensen just announced Nvidia AI Foundations. Foundation model as a service is coming to enterprise, customized for your proprietary data, multimodal from day one." So text-based large language models are just one part: you're gonna be able to bring your images, videos, and even 3D data, and build custom multimodal large language models and generative models for your own specific use case, opening up this ability to all sorts of companies to build whatever they can imagine on Nvidia's hardware. Some of the day-one partners: Getty Images, Shutterstock, and Adobe. Meaning the AI image generation built into all of this is trained on fully licensed images; there's no gray area around how the training images were obtained. He even says right here, "Don't lose sleep over copyright anymore." Allowing companies to train custom, multimodal large language models on exactly their use cases and needs is going to be huge for the advancement of AI.
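To make that "customized for your proprietary data" idea a bit more concrete, here's a minimal sketch of what fine-tuning a pretrained language model on your own text generally looks like. This uses the open-source Hugging Face stack as a stand-in; it is not Nvidia's NeMo or AI Foundations API, which wasn't publicly documented at the time of the keynote, and the corpus file name is a placeholder:

```python
# Minimal sketch: adapt a pretrained causal LM to proprietary text.
# Hugging Face stand-in only; NOT Nvidia's AI Foundations/NeMo API.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token      # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "my_corpus.txt" is a placeholder for domain-specific, proprietary text.
dataset = load_dataset("text", data_files={"train": "my_corpus.txt"})["train"]
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

# mlm=False selects the causal-LM objective; the collator pads batches
# and derives training labels from the input token IDs.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="custom-llm", num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()
```

Presumably a service like AI Foundations wraps this kind of workflow, plus the GPU infrastructure to run it at scale, behind a managed cloud API.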

Adobe Firefly

So this is really, really big news right here. Now, speaking of Adobe and Nvidia: also today, Adobe announced the Adobe Firefly beta. This is Adobe's new AI art model, which is trained only on licensed images and open-source images. So again, no worry about future copyright issues with any of the images generated by Adobe Firefly. One thing that's really interesting about Adobe Firefly is that they actually plan to compensate artists who allow use of their images inside of their models. Since it's trained on images from Adobe Stock, there's going to be some sort of compensation for the creators who provided the images used to train these models.

Here's a taste of Adobe Firefly: "change scene to winter," generate, boom, it took the same scene and changed it to winter. Look at that. You can actually draw to mask certain areas out, tweak your AI images, and change the color of things. Generate style variations, and it will create a bunch of variations you can choose from. Mask out certain areas, change it to a river, add trees, do image-to-image. It even removed the background on that image there. So Adobe Firefly looks like it's going to be able to do some pretty cool stuff. Look at that: it found the lighthouse in this image and then lets you change the style; it changed that to an underwater scene, and it can generate variations of a vector. So much cool stuff looks to be coming out of Adobe Firefly, including the fonts and text you can create with it. This is absolutely phenomenal technology, and I'm super excited to play with it.

I don't have access to this yet. It is currently in early beta, and they do have a wait list where you can get in and play with it. You can see they've got a button here: if you go to adobe.com/sensei/generative-ai/firefly, you can join the beta. I believe you need an Adobe Creative Cloud ID to do it, but I don't think you have to be a paying member of Adobe Creative Cloud. As long as you have an Adobe Creative Cloud account, you should be able to join the beta.

We've also got text effects, which look really cool to me. Let's take a look at some examples here. You can see it's actually turning these letters into, like, popcorn; P for popcorn. They're turning the letters "yum" into, like, a melting-chocolate font.

Now they're changing "yum" into, like, bread. So, I mean, you can do some really cool stuff with these text effects. They've also got their new text-to-image model here. They're typing "pets reading a book in a magical forest," and this is the image it generates from that prompt. I mean, this looks similar to Midjourney version 4 quality, maybe. Now they change it to a landscape picture with the same prompt and switch the styles over on the right side, different themes. This is where I believe prompt engineering is gonna become less and less of a necessity: you just type what you want, and then you have little additional prompts and buttons you can click on the side of the page to dial in the style you're looking for. So, as you can see, they're doing it as narrow depth of field, golden hour, bioluminescent, concept art, digital art, and it generates a completely different style of image. This is looking pretty impressive to me.
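Under the hood, those style buttons are almost certainly just composing extra modifier text onto your base prompt. Here's a tiny hypothetical sketch of that pattern; the function and the exact modifier strings are mine for illustration, not Adobe's actual implementation:

```python
# Hypothetical sketch: a UI "style button" system that appends modifier
# phrases to the user's base prompt. Names and styles are illustrative
# only, not Adobe Firefly's actual internals.

def build_prompt(base: str, styles: list[str]) -> str:
    """Join the base prompt with whatever style modifiers are selected."""
    return ", ".join([base] + styles)

prompt = build_prompt(
    "pets reading a book in a magical forest",
    ["narrow depth of field", "golden hour", "bioluminescent", "concept art"],
)
print(prompt)
# -> pets reading a book in a magical forest, narrow depth of field,
#    golden hour, bioluminescent, concept art
```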

And then we've also got recolor vectors, which they don't seem to have a sample video of yet; it says coming soon.

My buddy Bilawal got early access to Firefly, and he was able to start generating some early images. Here are a few of the images he created: we've got, like, a UFO here, and a little Chihuahua dog holding a sword. Now, he's saying Adobe is using a diffusion-based model, so it's a similar approach to something like Stable Diffusion or Midjourney, and the images look like something you might expect from, you know, a really good Stable Diffusion or Midjourney prompt. He shows an example here of what some of the text effects look like. You can see he wrote his username, BillyFX, using "quantum computing and microprocessors" as his prompt, and he got this really cool font. He even did a test to see how hands look, and the hands look pretty decent. I mean, maybe this guy's got three fingers on this hand, and there are some funky thumbs going on here, so hands still need a little bit of work, but you can tell they're looking pretty dang decent.

Now, he does point out some of the limitations. At the moment, it does not support upload or export of video content. You can't currently use Firefly to edit or iterate on your own artwork, though I'm assuming we will be able to in the future. And this is an important one: Firefly is for non-commercial use only while it's in beta. So any of the images you generate, you're not going to be able to resell on Adobe Stock, print on a T-shirt, or use in a book you're selling or anything like that. Those terms will likely change, but as of right now, it's non-commercial use only.

And if you're not following Bilawal, make sure you follow him. It's Bilawal Sidhu on Twitter, and he tweets a lot of great stuff in the AI space. Definitely someone you want to be following if you're into AI right now.
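To put a little more meat on that "diffusion-based model" comment above: diffusion models generate images by starting from pure noise and repeatedly denoising it. Here's a deliberately simplified conceptual sketch of that sampling loop in PyTorch. It is not Adobe's (or Stability's) actual sampler; a real one uses a trained U-Net and schedule-derived coefficients rather than the crude constants below:

```python
import torch

# Conceptual sketch of a diffusion sampling loop (DDPM-style), heavily
# simplified. "model" stands in for a trained noise-prediction network.

def sample(model, steps: int = 50, shape=(1, 3, 512, 512)) -> torch.Tensor:
    x = torch.randn(shape)                 # start from pure Gaussian noise
    for t in reversed(range(steps)):       # walk the noise schedule backwards
        predicted_noise = model(x, t)      # network estimates the noise in x
        x = x - predicted_noise / steps    # strip away a fraction of it
        if t > 0:                          # real samplers use schedule-derived
            x = x + 0.01 * torch.randn(shape)  # coefficients, not constants
    return x                               # approximately a clean image
```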

Bing Chat Adds Images

All right, moving on. For the next piece of news, I actually got an email directly from somebody at Microsoft today saying, hey, did you know that you can now create images using Bing's chat? It actually uses a new version of DALL-E to create images. The headline on their blog post from today says, "Create images with your words: Bing Image Creator comes to the new Bing." And they have an example that looks like it's right inside of Bing Chat, where they're creating images directly inside the normal chat window. Now, if you read the article, it does say it will be rolled out to some beta testers to demo it and give feedback before being offered to the wider user base. It also says it's using a more advanced version of DALL-E; so if you've seen DALL-E 2, this isn't necessarily DALL-E 2, it's actually a newer version.

I did manage to get access to this, so let's go ahead and take a peek at it. The first thing I did, because I saw that screenshot, was go straight to my Bing Chat and say, "make me an image of a cat eating tacos." Let's put it on creative mode and submit. And it says: "I'm sorry, but I cannot make you an image of a cat eating tacos. Cats are carnivores and they cannot digest tacos well. It would be harmful to feed them tacos. However, I can make you an image of a cat wearing a taco costume. That would be harmless and cute. Here's an image." And then it made me a sort of text-art picture of a cat, I guess, wearing a taco costume. Not what I was expecting. Obviously, this isn't using an advanced version of DALL-E.

But then I read the email Microsoft sent me a little more closely, and it said you can access it over at bing.com/create. Now, if I come over here and type "a cat eating a taco" and create the image, we've got a pretty dang convincing picture of a cat eating a taco. This, to me, is much better than the types of images we were getting out of DALL-E 2. If I go back to regular DALL-E 2 here and type "a cat eating a taco," it actually won't even let me; it keeps saying the servers are overloaded right now, so I guess I can't really compare it to regular DALL-E at the moment. Let's do "an astronaut floating in space with Earth in the background, you can see his face through the helmet." Let's see what happens with this prompt. Now, that's actually pretty dang impressive. I mean, the reflection's wrong, because Earth is behind him and you can see Earth in his helmet, but that's a pretty decent image for DALL-E, honestly. DALL-E has definitely come a long way.

Let's do "a colorful image of a mad scientist holding a beaker in the lab." I mean, pretty good. I still think we're getting better images out of Midjourney version 4 and version 5 right now, but for DALL-E, it's come a long way; it's definitely getting a lot better than it was before. And now we're getting it straight inside of Bing Chat. So any time now, we're going to be able to just type an image prompt right here inside our normal Bing Chat and get images that look like this.
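Bing Image Creator is a web UI, but the underlying pattern, a text prompt in and an image out, is the same one you get from calling an image-generation API directly. Here's a minimal sketch using OpenAI's public DALL-E endpoint as an illustration; Bing's internal "more advanced version of DALL-E" isn't publicly documented or callable like this:

```python
# Minimal sketch: text-to-image via OpenAI's public images API.
# This illustrates the general prompt-in/image-out pattern; it is NOT
# the model Bing Image Creator runs, which Microsoft hasn't exposed.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.images.generate(
    model="dall-e-2",              # the publicly available DALL-E model
    prompt="a cat eating a taco",  # same prompt as the Bing demo above
    n=1,
    size="1024x1024",
)
print(response.data[0].url)        # URL of the generated image
```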

So that's the fourth announcement that came out today.

Google Opens Bard

And the fifth and final announcement I'm going to talk about is that Google is opening up Bard to the public, so we can finally start testing it. Now, there is a wait list. I joined it and haven't gotten access yet, but I have been seeing people on Twitter show that Google is already starting to roll out access to Bard. If you're not familiar with it, Bard is sort of Google's answer to Bing Chat: a web-connected search AI, similar to what Microsoft built with OpenAI for Bing Chat.

But this is based on Google's LaMDA model instead of the GPT-3 or GPT-4 models. Now, if you want to get access to Bard, you can go to bard.google.com; they are opening it up to people in the US and the UK to start. And from what I understand, they're rolling it out pretty quickly: a lot of the people who got on the wait list today have already gotten access. They haven't given me access yet, but a lot of people are already in. In fact, Ben Tossell, who puts out some amazing AI content on Twitter and also has an awesome daily newsletter called Ben's Bites, has access, and he's been sharing some screenshots on Twitter. Here's one he shared earlier today. He asked, "When is your knowledge cutoff, if any?" And Bard replied, "I do not have a knowledge cutoff. I'm constantly learning and growing, and I'm able to access and process information from the real world through Google Search and keep my responses consistent with search results." He also asked for the pros and cons of the most popular cloud compute services, and surprisingly, it didn't actually answer with Google's first.

It actually started with AWS and gave the pros and cons of that, then went into Microsoft and gave the pros and cons of that, and then talked about Google's cloud, which, you know, you would think it would talk about first, but it came third, wrapping up with some other cloud services like Alibaba, IBM, and Oracle. He also had Bard write some code for him and asked for any coders to look at it and validate whether the code was any good. I myself am not a coder, so I haven't been able to validate whether what it generated is any good or not. The Bard announcement today was pretty dang massive news as well. I'm still waiting for my access, and when I get access to Bard, I'll be doing a deep-dive video about it, giving my full thoughts on it and comparing it to the other chat services out there. Hopefully it'll be any day now; with any luck, I'll be making a video about Bard tomorrow and deep-diving on it for you. To wrap up this insane, insane day of AI, one last thing I'll leave you with here.

Bill Gates

Bill Gates wrote a massive article today over on GatesNotes.com about the age of AI and all of his thoughts on it. He talked about how AI can reduce some of the world's worst inequities, how AI is essentially creating personal assistants for people, and how it's leading to huge boosts in productivity. We're also seeing huge advancements in health as a result of AI, with diffusion models and computer vision being used to essentially detect issues in people's health early.

We're seeing boosts in education from it. And I'll sum it all up with his tweets from today. The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the internet, and the mobile phone. It will change the way people work, learn, travel, get healthcare, and communicate with each other. Everyone should benefit from artificial intelligence and not just people in rich countries.

Final Thoughts / Matt's Rant

AI is moving at an insane pace. Last week was the craziest week I've ever seen.

Today is probably one of the single craziest days I've ever seen in AI, with just news announcement after news announcement: Bill Gates, Jensen Huang and Nvidia, Google releasing Bard, Adobe making announcements, Microsoft making announcements. And it's all just kind of escalating. Every single day we're seeing more and more crazy advancements, and it's just getting crazier and crazier. And I'm absolutely loving nerding out about this stuff.

I am blown away every single day by what I'm discovering. We're talking about things that people have said, we won't see that for years. And then weeks later, we're seeing it. That's how fast things are moving. This is actually becoming exponential and it's so freaking exciting right now. I couldn't be more enthusiastic to be in the AI space right now and deep diving, learning about all this stuff, researching it, figuring out what's going on, and just trying to keep you in the loop with what's happening on a daily basis.

I'm sure there's even news that came out today that I missed in this video, just based on the timing of when I'm recording it, but there is so much happening, and it's so exciting, and I'm just blown away. I just don't know what else to say. I mean, this is a nerdgasm all over right now. There's so much cool stuff happening. I'm on the wait list for Adobe Firefly. I'm on the wait list for Google's Bard. I got early access to Microsoft's new version of DALL-E. I'm on the wait list for Gen-2. So much good stuff.

And I really enjoy the fact that there are more and more people finding this channel, enjoying that I'm nerding out, and nerding out with me. I'm going to keep on bringing you these advancements, showing off what's coming and what's available now, and making tutorials when there's time in between all of these news videos, because there's so much I can't wait to show you. There are so many videos I've been wanting to make but haven't had the time to, because I'm trying to keep up with all the news, and there's just so much good stuff right now. So thank you so much for tuning into this channel and nerding out with me about all of this AI stuff, because it is accelerating at a rapid pace, and I'm hoping to keep up and share with you just how fast all of this is happening.

Future Tools

And if you want to make sure you stay in the loop, head on over to futuretools.io and join the free newsletter. This is where I give you the TL;DR of the week. I try to make a video every single day, but I know it's hard to keep up with; I know a lot of people can't see all the videos I put out, and there are new tools and new advancements and just so much happening. So I want to give you just the TL;DR, the best of the week, every single week. I send it every Friday, and you can find it over at futuretools.io. Click the button that says "Join the free newsletter," and I'll hook you up with everything you'll want to know for the week in AI. So thanks so much for tuning in. I really, really appreciate you. If you want to nerd out with me even more, click the like button and the subscribe button. It really supports the channel, and it will also make sure you see really cool new AI nerdery from me in your news feed. So thanks again.

I really appreciate you. See you guys in the next one. Bye.