Welcome to your

AI Weekly Digest

presented by Louis (What's AI) Bouchard

1️⃣ Google Brain's Answer to DALL-E 2: Imagen 🚀

 

If you thought DALL-E 2 had great results, wait until you see what this new model from Google Brain can do.

DALL-E 2 is amazing but often lacks realism, and this is exactly what the team tackled with this new model, called Imagen.

They share many results on their project page, as well as a new benchmark they introduced for comparing text-to-image models, on which they clearly outperform DALL-E 2 and previous image generation approaches. Learn more in the video or read more here.

Watch the video

2️⃣ AI Ethics with Lauren

What a race is going on in the image generation world of AI! DALL-E 2 already has a competitor, and a very good one at that, in Imagen. I don’t want to go on about all the different ways that image generation can pose ethical risks, as they are well known. Instead, I want to focus on how Google is addressing these risks.

Google offers a robust description of the issues surrounding image generation, namely the two largest risk factors of model misuse and biased data used as inputs in training the models. The blog post addresses these concerns with a focus on the latter, and cites them as reasons for not providing unrestricted access to the model, recognizing the direct bias that Imagen was imbued with:

“While a subset of our training data was filtered to removed noise and undesirable content, such as pornographic imagery and toxic language, we also utilized LAION-400M dataset which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes. Imagen relies on text encoders trained on uncurated web-scale data, and thus inherits the social biases and limitations of large language models.”

I appreciate this level of transparency, even though impact considerations ought to be explored even further than this in all AI research. I’m puzzled, though, by the sentiment that follows, which calls for future work on social and cultural bias mitigation. This is very true, but the logic seems backwards: why train a model on bad data and retroactively call for attempts to fix it, when you could simply not use bad data to train it in the first place?

I’m sure you can hear the whispers of the Collingridge Dilemma here. Once that model learns from toxic data, that toxicity stays and the model will always be affected by it. Retroactive mitigation is good, but the effects of the model cannot fully be rectified. It seems inefficient to break something and try to fix it, rather than be careful and not break it.

Obviously, it’s unrealistic to expect data to be filtered by hand when the data sets used are massive beyond human comprehension. But if Google Research can build an incredibly photorealistic text-to-image generator, they can likely also figure out a way to filter the toxic data points out of that data set. This would proactively mitigate bias and harm not only in their own models, but in any model that uses that data set. Google has already clearly identified the “wide range of inappropriate content” that makes the data set ethically dubious, so it’s not a matter of determining what content is inappropriate; it’s a matter of caring enough to stop that content’s influence on models, even if it costs some time in the text-to-image AI race.

Deferring important sociocultural research to “the future” feels a lot like offloading responsibility until people have forgotten the problem. Lucky for them, dorks like me do not forget these things easily. I look forward to seeing not just retroactive bias mitigation for better social and cultural AI outcomes, but proactive efforts as well!



- AI Ethics segment by Lauren Keegan

3️⃣ And the winner is... 🎉

 
A month ago, we held an NVIDIA GPU giveaway for the GTC event in collaboration with NVIDIA AI. I had one RTX 3080 Ti to give away to anyone in my audience who attended the event and commented under the video with a screenshot as proof of their participation. There were 95 unique subscribers who participated, and I am glad to announce that the winner is...

Fabio Gil! 🔥

Congratulations!! 🎉🚀
 
I am glad that this comment (below) won the giveaway, and I will reach out to you personally. Thank you, and thanks to everyone who participated! Stay tuned for the next giveaway, which might come sooner than you expect! 👀
Want to get into AI or improve your skills? Click here!
We are already at the end of this AI weekly digest! Thank you for reading through this issue! I hope you enjoyed it.

If you have suggestions, comments, or other thoughts, you can reach me by replying to this email or directly on Twitter or LinkedIn. Don't hesitate to come chat with more than 20,000 AI enthusiasts on Discord!
I hope the coming week wipes away some of your stress and brings new opportunities, challenges, and happiness. Happy new week from your friend!

If you would like to support my work financially, you can become a patron on Patreon and receive a cool role on the Discord server at the same time!

Share the knowledge and forward this email to a friend using this link: http://eepurl.com/huGLT5

- Louis Bouchard
Blog
Twitter
LinkedIn
GitHub
Email
Copyright © 2022 What's AI, All rights reserved.


Want to change how you receive these emails?
You can update your preferences or unsubscribe from this list.

Email Marketing Powered by Mailchimp