"Unleash all this creativity": the astounding potential of Google AI
On Wednesday, Google's research division unveiled a dizzying array of artificial intelligence (AI) projects it's developing, with goals ranging from reducing global warming to assisting writers in crafting effective language.
Why it matters: If developed and used responsibly, AI has astonishing potential to improve our lives, but it also carries deeply concerning risks of abuse, intrusion, and harm.
Driving the news: Google Research unveiled about a dozen AI projects at a media event in Manhattan. These projects are in varying phases of development, and their objectives range from social enhancement (such as better health diagnosis) to sheer creativity and fun (text-to-image generation that can help you build a 3D image of a skirt-clad monster made of marzipan).
On the "social good" side:
Wildfire detection: Google's machine-learning model for early detection of wildfires is operational in parts of Australia, Canada, and the United States.
Flood forecasting: A system that sends flood alerts to 23 million people in India and Bangladesh (it accounted for 115 million alerts last year) has since been expanded to 18 other countries: 15 in Africa, plus Brazil, Colombia, and Sri Lanka.
To measure a fetus's gestational age and position in the womb, nurses and midwives in the US and Zambia are testing a technique that pairs Android software with a portable ultrasound device.
Google's Automated Retinal Disease Assessment (ARDA) employs AI to help healthcare professionals identify diabetic retinopathy and prevent blindness. More than 150,000 patients have been screened using eye images captured with a smartphone.
The "1,000 Languages Initiative": Google is developing an AI system that will support the top 1,000 languages used worldwide.
On the riskier and experimental end of the spectrum:
Robots that can write their own code: In a project called "Code as Policies," robots are learning to write new code on their own.
Google's Andy Zeng performed a demonstration in which he told a robot hovering over three plastic bowls (red, blue, and green) and three candies (Skittles, M&M's, and Reese's) that he likes M&M's and that his bowl was the blue one. Even though it wasn't explicitly directed to "put M&M's in the blue bowl," the robot placed the right candy in the right bowl.
Wordcraft: A number of seasoned authors are testing Google's AI fiction-writing tool. It isn't quite ready for prime time, but you can read the stories they wrote with it here.
The broad strokes:
In response to concerns about AI's downsides, including privacy infringement, the spread of false information, and loss of control over personal data, the White House recently released a draft "AI Bill of Rights" pushing technologists to build safeguards into their products.
Other tech companies have followed Google's lead and established their own AI development principles, but there is little to no government oversight.
Despite recent investor reticence toward AI startups, Google's enormous financial resources could give it more time to work on initiatives that won't immediately generate profits.
Yes, but: As they showed off their projects, Google execs issued several notes of caution.
The head of Google Research's center of expertise on ethical AI, Marian Croak, stated that AI "may have enormous social advantages" and "unleash all this creativity."
"But because it affects so many individuals, there is also a very high risk associated. And if we don't get that right, it may be extremely damaging."
Threat level: A recent paper from Georgetown University's Center for Security and Emerging Technology examined how text-generating AI could "be utilized to turbocharge disinformation efforts."
And as Scott Rosenberg of Axios has noted, society is only now starting to address the moral and legal issues brought on by AI's increased ability to produce words and images.
There's fun stuff, too: This summer, Google Research released Imagen and Parti, two AI models that can produce photorealistic images from text prompts (like "a puppy in a nest emerging from a cracked egg").
They're now developing text-to-video:
Imagen Video can turn a sentence like "a giraffe underneath a microwave" into a short clip.
According to Google Research, Phenaki is "a model for producing videos from text, with prompts that can evolve over time and videos that can be as long as multiple minutes."
AI Test Kitchen, a mobile app, shows off this text-to-image functionality through the demos "City Dreamer" and "Wobble" (create friendly monsters that can dance).
The bottom line: Despite recent economic setbacks, AI is advancing apace, and companies like Google are well-positioned to act as its moral arbiters and standard-setters.
In a pre-recorded welcome to the event on Wednesday, Google CEO Sundar Pichai stated that "AI is the most significant technology we are working on, but these are still early days."