Making Art in the Age of Generative AI

Making Art in the Age of Generative AI

By Evan Leonhard

When they told us that AI is coming for people’s jobs, most of us didn’t think that they were talking about artists. Our popular imaginings of artificially intelligent futures often seem to bracket the work of artists as somehow beyond the cold capacities of clever machines. Could AI handle the manual, administrative, and even strategic aspects of human endeavor? Perhaps. Creativity and aesthetic sensitivity, however, were presumed by many to be unprogrammable, too reliant upon emotion and the subtleties of lived experience.

This popular tendency in viewing art and those who make it as exceptionally human is likely a kind of cultural hangover from the aesthetic theories of the nineteenth century—in which art, especially poetry and painting, were widely proposed to be the self-expression of an extraordinary individual, a genius with a uniquely profound or sensitive subjectivity. Many of our culture’s paradigmatic symbols of artistic psychology, from the Vincent Van Gogh to Jim Morrison, have relied heavily on this trope of spiritual, cultural, and, frequently, tragic heroism.

All of these assumptions have been put to the test over the course of the past two years, as innovations in AI have become increasingly accessible masses and an integral facet of public discourse, especially with respect to education and the ethics of things like celebrity ‘deep fakes’.

At the centre of all this is, of course, a particular sub-category of artificial intelligence known as generative AI —the kind of technology famously responsible for everything ranging from uncanny portraits with extra fingers, your favourite popstar’s robot-sounding cover of a song from the 1950s, and, lest we forget, eerily corporate-coded essays from undergrads who haven’t done their reading.

Well-known generative AI interfaces like Chat-GPT, Dall-E, Bard, and Amper all fall under the umbrella of generative AI. Trained on large data sets of text, images, and audio, these systems are capable of generating original images, bodies of texts, and sonic configurations from preexisting materials. While the interfaces easily available for public usage, like Chat-GPT and DALL-E, very rarely produce anything of a quality high enough to raise the eyebrows of human artists, more sophisticated interfaces have produced works of considerable aesthetic merit.

The early alarm bells blared in September of 2022, when artist Jason M. Allen took home first place and $300 cash prize in the ‘digital arts/digitally manipulated photography’ division at the Colorado State Fair Fine Arts Competition for his piece ‘Théâtre D’Opéra Spatial’. The image is an epic scene from a galactic royal court in a style somewhat evocative of nineteenth-century academicism, looking almost as though Sir Lawrence Alma Tadema had painted a scene from ‘Dune’.

However, in the days following Allen’s big win, news broke that image had been generated using Midjourney, a generative AI system that produces incredibly detailed and often hyper-realistic images from written prompts in a manner similar to DALL-E. The program might be best known for its viral 2023 image of Pope Francis sporting an exaggerated, rapper-style puffer jacket. Prominent voices from the art world and mainstream media alike sounded off about Allen’s win, triggering an initial flurry of quasi-philosophical questioning regarding the nature of art in the age of AI.

What about AI art generators trained on works by other human artists? Could this be considered a form of plagiarism? In the vein of representation, how should we be handling situations in which systems seem to have problematic biases in how they depict certain people or particular groups of people, as in the case of Megan Fox’s complaints about AI-generated images of her being excessively sexualised? While all undoubtedly legitimate and incredibly important questions to answer, a critical dilemma that seems consistently absent from this vibrant public discourse is that of artistic practice and how it might be altered or even endangered by the increasingly sophisticated abilities of generative AI.

Even the famously antiquated William Morris – who spent so much of his career trying to reclaim the dignity of artistic labour through the reviving of mediaeval design methods, made discerning use of new technologies in his printing and textile practices – granted that said technologies could be implemented without threatening the integrity of the art’s quality or the labour involved in making it. Therefore, the question clearly remains: how might recent developments in generative AI and the demands of art coexist?

I spoke with Maggie Mustaklem, a doctoral researcher at the Oxford Internet Institute. Her current project, entitled ‘Design Interrupted’, examines the role that AI is increasingly playing in the artistic brainstorming process, particularly with respect to how designers and architects draw inspiration from things they find in the AI-curated feeds of Pinterest and Instagram. Unlike many of the tech-savvy intellectuals who have tended to chime in on this issue, Mustaklem has actually worked in the arts as a knitwear designer, and is well-aware of the expertise such work demands.

She is resistant to the alarmism that pervades much of the popular discussion mentioned above. ‘I think that the scale and reach of generative AI in creative industries is often overblown’, Mustaklem notes. ‘My research focuses on the concept stage of the design process, where designers often pull images from the web for inspiration to present concepts to clients. Gen AI is well suited to assist with this task, and many are starting to experiment with it. However, during my research I conducted workshops with 15 design studios in London and Berlin. All of them were experimenting with gen. AI, but none were using gen AI images to present concepts to clients. It is becoming a tool in the tool kit, but not one that has yet to demonstrably alter the design process’.

This relatively modest impact of generative AI on the concrete practice of the arts is, according to Mustaklem, one of the most common misconceptions floating around this issue at the moment. Like nearly every other sector of work, it seems that the creative industries have undoubtedly experienced increasing interest in the new possibilities presented by generative AI  However, ‘Statistics on job replacement and efficiency’, she notes, ‘often fail to consider points like how much of designing knitwear, or any product, is tangible and embodied, requiring localised skills and experience’.

‘I think new media and technology needs to be considered within the ecosystems it will disrupt,’ Mustaklem goes on to note, ‘A few years ago we thought 3D printers would replace overseas knitwear factories. Even though there’s some really exciting things happening with 3D printing, most knitwear is still produced overseas. Photography didn’t replace painting, but it did change painting. Gen AI will transform creative industries but it is unlikely to reshape them into something entirely different’.

Some artists have already begun to hint at what this ‘entirely different’ future for the arts might look like. While Mustaklem has design in mind, her prediction about the reconfiguration (rather than elimination) of traditional artistic practice also seems to hold for the so-called ‘fine arts,’ like painting and creative writing.

An especially exciting example of this reconfiguration in the world of literature is the magazine Heavy Traffic. A partial product of the pandemic-spawned ‘Dimes Square’ art and intellectual scene in New York City, the magazine has become a burgeoning touchstone of the American literary avant-garde. Distinct from Mustaklem’s vision of AI as a kind of collaborative design or conceptualising tool, writers publishing with Heavy Traffic present a more apophatic path for grappling with AI’s ability to mimic human creativity.

In an interview with ‘Dazed’, editor Patrick McGraw describes the magazine’s signature style as ‘shizzed out gibberish’, citing our culture’s AI-instigated shifting relationship to language as prompt for taking art where computers trained on patterns might have a difficult time following—poetic disruption and instability. As implied by McGraw’s colourful description, the writing in Heavy Traffic is characterised by a jarring, aggressively chaotic tone and even borderline incomprehensibility.

In some respects, a move like this is akin to how painters reacted in the wake photography. No longer needed as a medium for capturing visual reality, the impressionists through to the cubists and abstract expressionists sought to capture what photography could not—subjective sensation, perspective, and pure form.

Whether any of the above methods of grappling with the intersection of art and artificial intelligence can or should sustain our artistic needs into what we can fairly say will be a tech-driven future is by no means evident. However, they are a reminder that ‘human art’ and practice are by no means under existential threat. While the great nineteenth-century myth of singular artistic genius might well wither away in the wake of generative AI, the concrete work of the artist seems entirely capable of adapting for the time being.

Read the full article

Artificial Intelligence Trained to Draw Inspiration From Images, Not Copy Them

Artificial Intelligence Trained to Draw Inspiration From Images, Not Copy Them

By Karen Davidson

Researchers are using corrupted data to help generative AI models avoid the misuse of images under copyright.

Powerful new artificial intelligence models sometimes, quite famously, get things wrong — whether hallucinating false information or memorizing others’ work and offering it up as their own. To address the latter, researchers led by a team at The University of Texas at Austin have developed a framework to train AI models on images corrupted beyond recognition.

DALL-E, Midjourney and Stable Diffusion are among the text-to-image diffusion generative AI models that can turn arbitrary user text into highly realistic images. All three are now facing lawsuits from artists who allege generated samples replicate their work. Trained on billions of image-text pairs that are not publicly available, the models are capable of generating high-quality imagery from textual prompts but may draw on copyrighted images that they then replicate.

The newly proposed framework, called Ambient Diffusion, gets around this problem by training diffusion models through access only to corrupted image-based data. Early efforts suggest the framework is able to continue to generate high-quality samples without ever seeing anything that’s recognizable as the original source images.

Ambient Diffusion was originally presented at NeurIPS, a top machine-learning conference, in 2023 and has since been adapted and extended. The follow-up paper, Consistent Diffusion Meets Tweedie,” was accepted to the 2024 International Conference on Machine Learning. In collaboration with Constantinos Daskalakis of the Massachusetts Institute of Technology, the team extended the framework to train diffusion models on data sets of images corrupted by other types of noise, rather than by simply masking pixels, and on larger data sets.

“The framework could prove useful for scientific and medical applications, too,” said Adam Klivans, a professor of computer science, who was involved in the work. “That would be true for basically any research where it is expensive or impossible to have a full set of uncorrupted data, from black hole imaging to certain types of MRI scans.”

Klivans—along with Alex Dimakis, a professor of electrical and computer engineering, and other collaborators in the multi-institution Institute for Foundations of Machine Learning directed by the two UT faculty members—experimented first by training a diffusion model on a set of 3,000 images of celebrities, then using that model to generate new samples. In the experiment, the diffusion model trained on clean data blatantly copied the training examples. But when researchers corrupted the training data, randomly masking up to 90% of individual pixels in an image, and retrained the model with their new approach, the generated samples remained high quality but looked very different. The model can still generate human faces, but the generated ones are sufficiently different from the training images.

“Our framework allows for controlling the trade-off between memorization and performance,” said Giannis Daras, a computer science graduate student who led the work. “As the level of corruption encountered during training increases, the memorization of the training set decreases.”

The researchers said this points to a solution that, though it may change performance, will never output noise. The framework offers an example of how academic researchers are advancing artificial intelligence to meet societal needs, a key theme this year at The University of Texas at Austin, which has declared 2024 the “Year of AI.”

The research team included members from the University of California, Berkeley and MIT.

Funding for the research was provided by the National Science Foundation, Western Digital, Amazon, Cisco and The University of Texas at Austin. Daras is the recipient of the Onassis Fellowship, Bodossaki Fellowship and Leventis Fellowship, and Dimakis is Stanley P. Finch Centennial Professor in Engineering.

Read the full article

Meta’s new generative AI features aim to make it easier to create ads – and they’re free

Meta’s new generative AI features aim to make it easier to create ads – and they’re free

The company’s new advertiser tool can help you get more out of a single product image.

By Sabrina Ortiz

Social media feeds are an ideal place to advertise, and a well-executed campaign can help businesses grow significantly — but creating them is a lot of work. Meta’s new generative artificial intelligence (AI) tools aim to help make curating the perfect ad easier.

On Tuesday, Meta unveiled new generative AI features and upgrades that build on its current offerings to assist businesses in creating and editing new ad content, aiming to make the process quicker and more efficient.

Meta first introduced generative AI features for advertisers in October, including background generation, which allows users to swap backdrops for their product images; image expansion, which automatically fits creative assets to different aspect ratios; and text variations, which generates multiple ad text options from an advertiser’s original copy.

Now, the company is adding new image and text generation capabilities, the highlight being a new image variation feature that can create alternate iterations of your content based on the original creative.

As seen in the video below, the user’s original ad creative, an image of a cup of coffee with a pasture in the background, was transformed into a set of new images that showcase a cup of coffee in front of lush leaves.

The feature is rolling out to users now. In the upcoming months, it will be upgraded to include user text prompts that can customize what the model generates to better fit a user’s specific vision. Additionally, users can overlay text on those images, selecting from dozens of font typefaces to complete the ad, as seen below.

These features have the potential to bring a lot of value to businesses by helping already-stretched marketers and business owners save time and money on shooting a new product and carrying out an entirely new campaign.

Meta’s new features could also make it easier for businesses to reach their target audiences, as a shoot can be tweaked to suit many different interests without having to go through extensive project planning to bring a new idea to life.

The image expansion feature is being upgraded to include Reels and Feed on both Instagram and Facebook, making it easier for users to adjust the same content across aspect ratios and eliminating the need for manual adjustments.

Similarly, Meta is upgrading the text generation feature to include ad headlines in addition to the primary text. Meta revealed that this text feature will “soon” be built with Llama 3, the company’s most advanced large language model (LLM), making the feature more advanced than it currently is and offering advertisers more comprehensive help.

All of the generative AI features are available in Meta’s Ads Manager through Advantage+ creative, Meta’s hub for optimizing user ad content.

Meta will continue to offer these tools at no additional cost to the user, in the hopes that increased ad performance encourages companies to continue to advertise with Meta. Meta says that companies are already seeing improved ad performance from leveraging some of these tools.

For example, Meta shared that the skincare brand Fresh saw a five-time incremental return on ads spend by running Advantage+ shopping campaigns with Shops ads and generative AI text variations. Similarly, Casetify saw a 13% increase in return on ad spend when testing the background generation feature.

Generative AI is great at churning out quality creative content at impressive speed and scale, so we’ll continue to see more of these applications that support marketers in the coming months. Recently, Adobe announced a suite of generative AI tools marketers can use to help with everything from generating content for a campaign to deploying it.

Read the full article

The top three ways to use generative AI to empower knowledge workers

The top three ways to use generative AI to empower knowledge workers

The future of generative AI is exciting, but there can be implications if innovations are not built responsibly.

By Cynthia Stoddard

Though generative AI is still a nascent technology, it is already being adopted by teams across companies to unleash new levels of productivity and creativity. Marketers are deploying generative AI to create personalized customer journeys. Designers are using the technology to boost brainstorming and iterate between different content layouts more quickly. The future of technology is exciting, but there can be implications if these innovations are not built responsibly.

As Adobe’s CIO, I get questions from both our internal teams and other technology leaders: how can generative AI add real value for knowledge workers—at an enterprise level? Adobe is a producer and consumer of generative AI technologies, and this question is urgent for us in both capacities. It’s also a question that CIOs of large companies are uniquely positioned to answer. We have a distinct view into different teams across our organizations, and working with customers gives us more opportunities to enhance business functions.

Our approach

When it comes to AI at Adobe, my team has taken a comprehensive approach that includes investment in foundational AI, strategic adoption, an AI ethics framework, legal considerations, security, and content authentication. ​The rollout follows a phased approach, starting with pilot groups and building communities around AI. ​

This approach includes experimenting with and documenting use cases like writing and editing, data analysis, presentations and employee onboarding, corporate training, employee portals, and improved personalization across HR channels. The rollouts are accompanied by training podcasts and other resources to educate and empower employees to use AI in ways that improve their work and keep them more engaged. ​

Unlocking productivity with documents

While there are innumerable ways that CIOs can leverage generative AI to help surface value at scale for knowledge workers, I’d like to focus on digital documents—a space in which Adobe has been a leader for over 30 years. Whether they are sales associates who spend hours responding to requests for proposals (RFPs) or customizing presentations, marketers who need competitive intel for their next campaign, or legal and finance teams who need to consume, analyze, and summarize massive amounts of complex information—documents are a core part of knowledge workers’ daily work life. Despite their ubiquity and the fact that critical information lives inside companies’ documents (from research reports to contracts to white papers to confidential strategies and even intellectual property), most knowledge workers are experiencing information overload. The impact on both employee productivity and engagement is real.

Lessons from customer zero

Adobe invented the PDF and we’ve been innovating new ways for knowledge workers to get more productive with their digital documents for decades. Earlier this year, the Acrobat team approached my team about launching an all-employee beta for the new generative AI-powered AI Assistant. The tool is designed to help people consume the information in documents faster and enable them to consolidate and format information into business content.

I faced all the same questions every CIO is asking about deploying generative AI across their business— from security and governance to use cases and value. We discovered the following three specific ways where generative AI helped (and is still helping) our employees work smarter and improve productivity.

  1. Faster time to knowledge
    Our employees used AI Assistant to close the gap between understanding and action for large, complicated documents. The generative AI-powered tool’s summary feature automatically generates an overview to give readers a quick understanding of the content. A conversational interface allows employees to “chat” with their documents and provides a list of suggested questions to help them get started. To get more details, employees can ask the assistant to generate top takeaways or surface only the information on a specific topic. At Adobe, our R&D teams used to spend more than 10 hours a week reading and analyzing technical white papers and industry reports. With generative AI, they’ve been able to nearly halve that time by asking questions and getting answers about exactly what they need to know and instantly identifying trends or surfacing inconsistencies across multiple documents.
  2. Easy navigation and verification
    AI-powered chat is gaining ground on traditional search when it comes to navigating the internet. However, there are still challenges when it comes to accuracy and connecting responses to the source. Acrobat AI Assistant takes a more focused approach, applying generative AI to the set of documents employees select and providing hot links and clickable citations along with responses. So instead of using the search function to locate random words or trying to scan through dozens of pages for the information they need, AI Assistant generates both responses and clickable citations and links, allowing employees to navigate quickly to the source where they can quickly verify the information and move on, or spend time deep diving to learn more. One example of where generative AI is having a huge productivity impact is with our sales teams who spend hours researching prospects by reading materials like annual reports as well as responding to RFPs. Consuming that information and finding just the right details for RPFs can cost each salesperson more than eight hours a week. Armed with AI Assistant, sales associates quickly navigate pages of documents and identify critical intelligence to personalize pitch decks and instantly find and verify technical details for RFPs, cutting the time they spend down to about four hours.
  3. Creating business content
    One of the most interesting use cases we helped validate is taking information in documents and formatting and repurposing that information into business content. With nearly 30,000 employees dispersed across regions, we have a lot of employees who work asynchronously and depend on technology and colleagues to keep them up to date. Using generative AI, employees can now summarize meeting transcripts, surface action items, and instantly format the information into an email for sharing with their teams or a report for their manager. Before starting the beta, our communications teams reported spending a full workday (seven to 10 hours) per week transforming documents like white papers and research reports into derivative content like media briefing decks, social media posts, blogs, and other thought leadership content. Today they’re saving more than five hours a week by instantly generating first drafts with the help of generative AI.

Simple, safe, and responsible

CIOs love learning about and testing new technologies, but at times they can require lengthy evaluations and implementation processes. Acrobat AI Assistant can be deployed in minutes on the desktop, web, or mobile apps employees already know and use every day. Acrobat AI Assistant leverages a variety of processes, protocols, and technologies so our customers’ data remains their data and they can deploy the features with confidence. No document content is stored or used to train AI Assistant without customers’ consent, and the features only deliver insights from documents users provide. For more information about Adobe is deploying generative AI safely, visit here.

Generative AI is an incredibly exciting technology with incredible potential to help every knowledge worker work smarter and more productively. By having the right guardrails in place, identifying high-value use cases, and providing ongoing training and education to encourage successful adoption, technology leaders can support their workforce and companies to be wildly successful in our AI-accelerated world.

Read the full article

OpenAI Shows Off New GPT-4o Generative AI Model and More ChatGPT Upgrades

OpenAI Shows Off New GPT-4o Generative AI Model and More ChatGPT Upgrades

By Eric Hal Schwartz

OpenAI has introduced its latest generative AI model, GPT-4o, which the company describes as a multimodal upgrade to GPT -4’s abilities as a large language model (LLM) GPT-4o integrates voice, text, and visual data, with the “o” stands for omni in reference to its multimodal functionality.


OpenAI CTO Muri Murati shared the details of GPT-4o in a virtual presentation at what looked like the basement from The Brady Bunch. She explained how, while GPT-4 was trained on images and text, GPT-4o added auditory data to its training regimen, allowing for a more complete understanding and interaction with users across multiple media and formats.

Based on tests shared by OpenAI, GPT-4o represents a notable evolution in how AI models interact with people. OpenAI said GPT-4o is as good as GPT-4 Turbo in its performance in English text and code and much better in non-English languages. It’s also faster and costs half as much as the GPT-4 Turbo API.

“GPT-4o provides GPT-4 intelligence, but it is much faster, and it improves on its capabilities across text, vision, and audio. For the past couple of years, we have been very focused on improving the intelligence of these models. And they have gotten pretty good. But this is the first time that we are really making a huge step forward when it comes to the ease of use,” Murati said during the presentation. “This is incredibly important because we are looking at the future of interaction between ourselves and the machines. We think that GPT-4o is really shifting the paradigm into the future of collaboration.”


The company plans to incorporate the new model into ChatGPT, which will up its abilities and responsiveness both through text and voice. The new model can respond to audio at an average speed of 320 milliseconds, essentially the same speed as a human. ChatGPT will converse in a more dynamic way with GPT-40, allowing users to interrupt the AI and enabling the chatbot to detect emotional nuances in the user’s voice and respond in a tone appropriate to what it hears.

Improved visual data processing thanks to GPT-4o will also enhance CHatGPT or applications running the LLM in its speed and accuracy processing images. Users can ask context-specific questions about the content of images, including getting the AI to read code on a screen or identify a brand based on a product in a photo. These advancements aim to facilitate a more natural and intuitive user experience akin to conversing with a human assistant.

In addition to these advancements, OpenAI announced the launch of a desktop version of ChatGPT, as can be seen above. The desktop app is only for macOS for now, but it looks a lot like how Microsoft has been incorporating its Copilot into Windows 11. A Windows ChatGPT desktop app is in the development stage, though there is no mention of a dedicated button like Microsoft gave Copilot. Users can converse with ChatGPT on the computer and even share screenshots. There’s also an audio conversational option with the Voice Mode for ChatGPT available on the web client also available with the desktop app. In other words, Apple users may never bother with whatever generative AI assistant comes native with future versions of Apple computers. The online version of ChatGPT isn’t being left out, either, as it’s getting a facelift in appearance and user interface.

Read the full article

Academic Success Tip: Infusing AI into Curricular Offerings

Academic Success Tip: Infusing AI into Curricular Offerings

By Ashley Mowreader

Faculty members have created special courses and assignments around generative artificial intelligence to prepare students for their lives after college.

Generative artificial intelligence (AI) has been one of the most disruptive technologies in recent years. In fulfilling higher education’s mission to equip students with lifelong skills and learning, faculty members across disciplines are including AI tools in their classrooms.

Inside Higher Ed’s most recent survey of chief academic officers and provosts found 14 percent have reviewed the curriculum to ensure it will prepare students for AI in the workforce and an additional 73 percent plan to do so. Among students, 72 percent of respondents to a winter 2023 Student Voice survey by Inside Higher Ed and College Pulse believe their institution should be preparing them “a lot” or “somewhat” to use AI in the workplace.

Among the possibilities of how generative AI can improve learning for students, faculty members have embedded AI in student supports, as course topics and as research tools.

AI as a teaching tool: One of the most common ways faculty are utilizing AI is to enhance current learning outcomes.

Chatbots are not new AI tools, but generative AI expands the opportunities available to faculty in automated messaging with learners.

Sanghoon Park, a professor of teaching and learning at the University of South Florida, created a chatbot that uses AI to provide motivational messages to students in his online class. The chatbot is connected to the online course page and students can receive academic help and emotional support in a few clicks from the bot.

Similarly, the University of Georgia, Morgan State University and the University of Central Florida will deploy chatbot technology into first-year math and English classes starting in fall 2024 as part of a Department of Education grant program to study the role of chatbots on student outcomes. The chatbots will answer questions about course material, remind students of upcoming assignments as well as provide motivational support.

Students at Hult International Business School and Columbia University are exposed to generative AI through an online digital-marketing simulation. In the game, students can interact with customer personas who are powered by AI to make smarter decisions in their campaigns.

AI as a course topic: Other faculty are engaging with technology directly in the classroom, teaching students how to hone and develop their own AI tools and projects.

A fall 2023 course at the University of Mary Washington taught students how to utilize ChatGPT and generative AI, as well as the challenges and possibilities they pose. The special topics course in digital studies was taught by Anand Rao, professor of communication and chair of the department of communication and digital studies.

Students built their own generative AI tools, using low- and no-code options including edtech platform PlayLab, which helps users create AI chatbots.

American University’s Kogod School of Business announced in March that, starting fall 2024, it would update all curriculum to teach students prompt engineering and programming as well as coding in R, Python and AI/ML models. Course content will vary from using AI in real-world contexts to application in abstract and theoretical manners.

Incoming students will also be required to complete AI courses and workshops as part of the undergraduate core curriculum.

Georgia Tech’s College of Engineering is reimagining course offerings to strengthen artificial intelligence and machine learning education at the university, with six new courses launched in spring 2024. Most of the courses are major-specific, including how to use AI in civil and environmental engineering or bioinformatics, but others are more general, including decision and data analytics.

AI in research: As artificial intelligence tools develop, researchers at Cornell University expect AI to be part of the toolbox of the next generation of researchers, therefore professors should be prepared to engage and lead thoughtful discussions around AI.

Officials at the university also believe it is critical to establish clear policies around using tools in research to protect researchers’ privacy and uphold ethical practices.

Boston University is piloting an initiative to understand how students in a first-year writing course partner with AI in writing and research and to inform program guidelines in the future. The goal is to provide best practices for faculty in teaching ethical and responsible AI use as well as to develop assignments and activities to better teach these concepts.

Read the full article

Generative AI and Creativity

Generative AI and Creativity

By Jayne Roberts

Is generative AI boosting creativity or stifling the human imagination?

As artificial intelligence evolves, the conversations about AI and creativity are becoming more complex. Is creativity something we can study and evolve in a computer, an innate ability unique to human minds, or something in between?

Researchers often link creative potential to the power of memory. Anna Abraham, the director at the Torrance Center for Creativity and Talent Development, has argued that semantic memory—our long-term memory storage for foundational knowledge—is the root of imagination. Her research proposes that we cultivate our imaginations using our experiences of the world.

The Torrance Center was founded more than 40 years ago through the Mary Frances Early College of Education to study and nurture gifted and creative talents. Abraham has been the Center’s director since 2020 and serves as a faculty member in the Gifted & Creative Education program.

“Creativity comes into being when we combine what we know in new ways,” says Abraham. “If you have a concept of the color gold and a concept of an elephant, even though you’ve never seen a gold elephant, you can imagine it based on combining what you know about its separate elements.”

Some scholars believe that advanced AI tools also have semantic memory. How else do we describe the immense databases of conversational jargon and personal chats they use to evolve language? This potentially infinite backlog in AI systems means that human issues like memory loss or a lack of inspiration don’t affect productivity. If you want an image of a gold elephant or a story about one, you simply type in the prompt to tools like:

  • ChatGPT
  • Jasper AI
  • DALL-E
  • Murf

These programs can craft stories, edit copy, create voiceovers and generate images and videos. They “remember” previous prompts and use that data to develop better products. Now, researchers are asking if these AI tools are boosting innovation or reducing our creative impulses?

How is AI used in creative writing?

It is no longer a question of whether writers use generative AI but when in the process. Professional writers and students alike use AI tools like Wordtune and the Microsoft spelling and grammar feature to check for minor mistakes and finetune sentence structure. Writesonic, Anyword and Jasper are some of the popular programs that can create content from scratch.

But crafting a well-written creative piece using AI is more than asking it to “write a best-selling novel” or “create an A+ essay.” A writer from Guardian US experimented with using ChatGPT to create movie scripts after the Writers Guild of America allowed the use of AI for professional submissions.Using prompts like “write me the outline for a movie that will make billions of dollars theatrically” and “write me an Oscar-winning movie,” ChatGPT came up with detailed plots, dialogue, song lyrics and even witty one-liners for a new Marvel Avengers movie.

  • Thor: “Your conqueror game is weak, Kang. You should stick to playing with your toys.”
  • Captain America: “You may be a conqueror, but you’ll never conquer our spirit.”
  • Hulk: “Kang, you wouldn’t like me when I’m angry. Oh wait, you already don’t like me.”

Unsurprisingly, The Guardian writer found the dialogue underwhelming, but he agreed that the software was useful for generating plot ideas and helping to navigate outlines.

Author Rie Kudan recently caused a stir in the literary world when she won Japan’s most prestigious book award, the Akutagawa Prize, for a novel in which she used generative AI. Kudan’s book takes place in a futuristic world where AI is commonplace. She used real-life AI responses to make the AI in her book more realistic but says that she made “appropriate modifications to the story” to make it her own.

Even with the use of AI tools, the quality of the creative work relies on the talent of the author. Though the members of the judging committee wished they knew in advance about the use of AI, they still praised Kudan’s work as “flawless” and “highly entertaining.”

Should AI art be considered “art”?

As AI technology continues to advance, so do the ethical and legal dilemmas involved with using it to create art. Is it creativity or plagiarism?

According to a study from Everypixel Journal, text-to-image algorithms created more than 15 billion images between 2022 and 2023. These images are on social media, website ads, presentations and online marketplaces. They are unavoidable. Artists like Dapo Adeola believes that this surge of AI art “devalues illustration,” but others are experimenting with how this technology can push the creative boundaries of what it means to be an artist.

In 2018, an AI-generated portrait sold for $432,500 at the world-renowned Christie’s auction house. A collective of artists and researchers created the “painting” using an algorithm that pulled data from 15,000 portraits painted between the 14th century to the 20th.

Hugo Caselles-Dupre, one of the researchers who helped create the portrait, says, “If the artist is the one that creates the image, then that would be the machine. If the artist is the one that holds the vision and wants to share the message, then that would be us.”

Using AI for Creativity

As AI and creativity continue to walk hand-in-hand, Abraham cautions creatives against becoming too dependent on AI tools.  But she also knows of avid users who vouch for how generative AI has boosted their intellectual curiosity and helped them overcome obstacles, like mental blocks.

“Like most things, it can be potentially helpful to some until you get to a certain point,” says Abraham. “It’s important to distinguish between tools that make manual labor easier, like a typewriter, from tools that have the potential to deskill us cognitively when we remove the need to develop a complex ability ourselves.”

Choosing to develop basic creative skills is a crucial part of improving any potential or talent. Using AI can help people who have little knowledge at the beginning of their journey or help streamline tasks once they’ve become an expert. But whether using Pinterest to generate color palettes or ChatGPT to jumpstart new ideas, moderation seems to be the key to growing an artist’s creative potential.

“The human brain and AI technology might seem to create a similar product, but the process underlying that generation is entirely different,” says Abraham. “The skill development and the sense of fulfillment, frustration and purpose that we experience as humans when creating art are not experiences that can be replicated by artificial intelligence. That is what it means to be human.”

Read the full article

7 Essential Open-Source Generative AI Models Available Today

7 Essential Open-Source Generative AI Models Available Today

By Bernard Marr

There are many reasons that businesses may want to choose open-source over proprietary tools when getting started with generative AI.

This could be because of cost, opportunities for customization and optimization, transparency or simply the support that’s offered by the community.

There are disadvantages too, of course, and I cover the pros and cons of each option more fully in this article.

With software generally, the term open-source simply means that the source code is publicly available and can be used, free of charge, for pretty much any purpose.

When it comes to AI models, though, there has been some debate about exactly what this entails, as we will get into it as we discuss the individual models covered here. So, let’s dive in.

Stable Diffusion

One of the most powerful and flexible image generation models, and certainly the most widely-used open-source image models, Stable Diffusion 3 (the latest version as of writing) supports text-to-image as well as image-to-image generation and has become well-known for its ability to create highly realistic and detailed images.

As is common with open-source software, using Stable Diffusion isn’t quite as straightforward as using commercial, proprietary tools like ChatGPT. Rather than having its own web interface, it’s accessed through third-party tools built by commercial entities, including DreamStudio and Stable Diffusion Web. The alternative is to compile and run it yourself locally, and this requires providing your own compute resources as well as technical know-how.

Meta Llama 3

This is a family of language models available in various sizes, making it suitable for different applications, from lightweight mobile clients to full-size cloud deployments. The same model that powers the Meta AI assistant available across its social media platforms can be deployed by anyone for many uses including natural language generation and creating computer code. One of its strong points is its ability to run on relatively low-powered hardware. However, as with some of the other models covered here, there is some debate as to whether it can truly be considered open-source, as Meta has not disclosed exact details of its training data.

Mistral AI

Mistral is a French startup that has developed several generative AI models that it has made available under open-source licenses. These include Mistral 7B, which is designed to be lightweight and easy to deploy on low-power hardware, and the more powerful Mistral 8x22B. It has a strong user community offering support, and positions itself as a highly flexible and customizable generative language model.


OpenAI has open-sourced the second version of their LLM – essentially an earlier version of the engines that are now used to power ChatGPT. While it isn’t as big, powerful or flexible as the later GPT-3.5 or GPT-4 (built on 1.2 billion parameters compared to GPT-4’s one-trillion plus), it’s still considered to be perfectly adequate for many language-based tasks such as generating text or powering chatbots. GPT-2 is made available by OpenAI under the MIT license, which is generally considered to be compliant with open-source principles.


BLOOM is described as the world’s largest open, multilingual language model, built on 176 billion parameters. Development was led by Hugging Face, a repository of open-source AI resources working alongside a team of over 1,000 researchers as part of a global collaborative project known as BigScience. The aim was to create a truly open and transparent LLM available to anyone who agrees to the terms of the project’s Responsible AI License. Technically, this means it isn’t quite open source, but it is freely available to use and distribute, as long as it isn’t used for harmful purposes as defined by the terms of the license. This makes it a very interesting experiment in the critically important domain of developing and distributing ethical AI.


This LLM also claims to be the world’s largest open-source model, although again there is some debate as to whether it technically fills all of the criteria for being truly open source.

Grok was designed and built by, a startup founded by Elon Musk following his split from OpenAI. This split has been reported as being caused by disagreements over exactly what “open” means when it comes to AI models.

Rather than using the term large language model, X describes Grok as a “mixture of experts” model, reflecting the fact that the base model is designed to be more general-purpose and is not specifically trained for creating dialogue, as is the case with, for example, ChatGPT.

As with Llama, skepticism of Grok’s open-source status is based on the fact that while has made the weights and architecture of the model publicly available, it hasn’t revealed all of the code or training data.


Two models of this LLM architecture have been made freely available by its developers, the Technology Innovation Institute, a research institution founded by the government of Abu Dhabi. Both models – the more portable Falcon 40B and the more powerful 180B, have been released as open source and reportedly come second only to GPT-4 on Open Face’s leaderboard of LLM performance. While the smaller model is released under the Apache 2.0 license – generally considered to fit the definition of open-source – the larger model has had some conditions attached to its use and distribution.

This exploration into the realm of open-source generative AI tools illuminates the diverse array of options available and underscores the transformative potential these technologies hold for businesses eager to leverage AI’s power while embracing transparency, cost-efficiency, and robust community support.

Read the full article

7 takeaways from a year of building generative AI responsibly and at scale

7 takeaways from a year of building generative AI responsibly and at scale

By Sally Beatty

Last year saw huge advances in generative AI, as people experienced the ability to generate lifelike visuals with words and Microsoft Copilot tools that can summarize missed meetings, help write business proposals or suggest a dinner menu based on what’s in your fridge. While Microsoft has long established principles and processes for building AI applications in ways that seek to minimize unexpected harm and give people the experiences they’re looking for, deploying generative AI products on such a large scale has introduced new challenges and opportunities.

That’s why Microsoft recently released its first annual Responsible AI Transparency Report to help people understand how we approach responsible AI (RAI). The company has also rolled out new tools available in Azure AI for enterprise customers and developers to help safeguard the quality of their AI outputs and protect against malicious or unexpected uses of the systems.

It’s been a momentous year of stress-testing exciting new technology and safeguards at scale. Here are some key takeaways from Natasha Crampton, Microsoft’s Chief Responsible AI Officer, who leads the team defining and governing the company’s approach to RAI, and Sarah Bird, Microsoft’s Chief Product Officer for Responsible AI, who drives RAI implementation across the product portfolio:

#1: Make responsible AI a foundation, not an afterthought

Responsible AI is never about a single team or set of experts, but rather the responsibility of all employees across Microsoft. For instance, every employee who works on developing generative AI applications must follow the company’s Responsible AI Standard, a detailed list of responsible AI requirements for product development. These include instructions for assessing the potential impact of new AI applications, creating plans for managing previously unknown failures that come to light once in use, and identifying limitations or changes so customers, partners, and people using the AI applications can make informed decisions.

Microsoft has also invested in mandatory training to build awareness and advocacy across the company – at the end of last year 99 percent of employees had completed a training module on responsible AI in our annual standards of business conduct training.

“It’s not possible to do responsible AI work as some sort of after-thought bolt on checklist immediately prior to shipping a product,” says Natasha Crampton. “It needs to be integrated into the way in which we build products from the very beginning. d into the way in which we build products from the very beginning. We need everyone across the company to be thinking about responsible AI considerations from the very get go.”

#2: Be ready to evolve and move quickly

New AI product development is dynamic. Taking generative AI to scale has required rapid integration of customer feedback from dozens of pilot programs, followed by ongoing engagement with customers to understand not only what issues might emerge as more people begin using the new technology, but what might make the experience more engaging.

It was through this process that Microsoft decided to offer different conversational styles – more creative, more balanced or more precise mode – as part of Copilot on its Bing search engine.

“We need to have an experimentation cycle with them where they try things on,” says Sarah Bird. “We learn from that and adapt the product accordingly.”

#3: Centralize to get to scale faster

As the company introduced Microsoft Copilot and started integrating those AI-powered experiences across its products, the company needed a more centralized system to make sure everything that was being released met the same high bar. And it didn’t make sense to reinvent the wheel with every product, which is why the company is developing one responsible AI technology stack in Azure AI so teams can rely on the same tools and processes.

Microsoft’s responsible AI experts also developed a new approach that centralizes how product releases are evaluated and approved. The team reviews the steps product teams have taken to map, measure and manage potential risks from generative AI, based on a consensus-driven framework, at every layer of the technology stack and before, during and after a product launch. They also consider data collected from testing, threat modeling and “red-teaming,” a technique to pressure-test new generative AI technology by attempting to undo or manipulate safety features.

Centralizing this review process made it easier to detect and mitigate potential vulnerabilities across the portfolio, develop best practices, and ensure timely information-sharing across the company and with customers and developers outside Microsoft.

“The technology is changing, superfast,” says Sarah Bird. “We’ve had to really focus on getting it right once, and then reuse (those lessons) maximally.”

#4: Tell people where things come from

Because AI systems have become so good at generating artificial video, audio and images that are difficult to distinguish from the real thing, it’s increasingly important for users to be able to identify the provenance, or source, of AI generated information.

In February, Microsoft joined with 19 other companies in agreeing to a set of voluntary commitments aimed at voluntary commitments aimed at combating deceptive use of AI and the potential misuse of “deepfakes” in the 2024 elections. This includes encouraging features to block abusive prompts aimed at creating false images meant to mislead the public, embedding metadata to identify the origins of an image and providing mechanisms for political candidates to report deepfakes of themselves.

Microsoft has developed and deployed media provenance capabilities – or “Content Credentials” – that enable users to verify whether an image or video was generated by AI, using cryptographic methods to mark and sign AI-generated content with metadata about its source and history, following an open technical standard developed by the Coalition for Content Provenance and Authenticity (C2PA), which we co-founded in 2021. Microsoft’s AI for Good Lab has also directed more of its focus on identifying deepfakes, tracking bad actors and analyzing their tactics.

“These issues aren’t just a challenge for technology companies, it’s a broader societal challenge as well,” says Natasha Crampton.

#5: Put RAI tools in the hands of customers

To improve the quality of AI model outputs and help protect against malicious use of its generative AI systems, Microsoft also works to put the same tools and safeguards it uses into the hands of customers so they can build responsibly. These include open-source as well as commercial tools and services, and templates and guidance to help organizations build, evaluate, deploy and manage generative AI systems.

Last year, Microsoft released Azure AI Content Safety, a tool that helps customers identify and filter out unwanted outputs from AI models such as hate, violence, sexual or self-harm content. More recently, the company has added new tools that are now available or coming soon in Azure AI Studio to help developers and customers improve the safety and reliability of their own generative AI systems.

These include new features that allow customers to conduct safety evaluations of their applications that help developers to identify and address vulnerabilities quickly, perform additional risk and safety monitoring and detect instances where a model is “hallucinating” or generating data that is false or fictional.

“The point is, we want to make it easy to be safe by default,” says Sarah Bird.

#6: Expect people to break things

As people experience more sophisticated AI technology, it’s perhaps inevitable that some will try to challenge systems in ways that range from harmless to malicious. That’s given rise to a phenomenon known as “jailbreaks,” which in tech refers to the practice of working to get around safety tools built into AI systems.

In addition to probing for potential vulnerabilities before it releases updates of new AI products, Microsoft works with customers to ensure they also have the latest tools to protect their own custom AI applications built on Azure.

For instance, Microsoft has recently made new models available that use pattern recognition to detect and block malicious jailbreaks, helping to safeguard the integrity of large language models (LLM) and user interactions. Another seeks to prevent a new type of attack that attempts to insert instructions allowing someone to take control of the AI system.

“These are uses that we certainly didn’t design for, but that’s what naturally happens when you are pushing on the edges of the technology,” says Natasha Crampton.

#7: Help inform users about the limits of AI

While AI can already do a lot to make life easier, it’s far from perfect. It’s a good practice for users to verify information they receive from AI-enabled systems, which is why Microsoft provides links to cited sources at the end of any chat-produced output.

Since 2019, Microsoft has been releasing “transparency notes” providing customers of the company’s platform services with detailed information about capabilities, limitations, intended uses and details for responsible integration and use of AI. The company also includes user-friendly notices in products aimed at consumers, such as Copilot, to provide important disclosures around topics like risk identification, the potential for AI to make errors or generate unexpected content, and to remind people they are interacting with AI.

As generative AI technology and its uses continue to expand, it will be critical to continue to strengthen systems, adapt to new regulation, update processes and keep striving to create AI systems that deliver the experiences that people want.

“We need to be really humble in saying we don’t know how people will use this new technology, so we need to hear from them,” says Sarah Bird. “We have to keep innovating, learning and listening.”

Read the full article

Generative AI will be designing new drugs all on its own in the near future

Generative AI will be designing new drugs all on its own in the near future

By Trevor Laurence Jockims

Eli Lilly chief information and digital officer Diogo Rau was recently involved in some experiments in the office, but not the typical drug research work that you might expect to be among the lab tinkering inside a major pharmaceutical company.

Lilly has been using generative AI to search through millions of molecules. With AI able to move at a speed of discovery which in five minutes can generate as many molecules as Lilly could synthesize in an entire year in traditional wet labs, it make sense to test the limits of artificial intelligence in medicine. But there’s no way to know if the abundance of AI-generated designs will work in the real world, and that’s something skeptical company executives wanted to learn more about.

The top AI-generated biological designs, molecules that Rau described as having “weird-looking structures” that could not be matched to much in the company’s existing molecular database, but that looked like potentially strong drug candidates, were taken to Lilly research scientists. Executives, including Rau, expected scientists to dismiss the AI results.

“They can’t possibly be this good?” he remembered thinking before presented the AI results.

The scientists were expected to point out everything wrong with the AI-generated designs, but what they offered in response was a surprise to Lilly executives: ”‘It’s interesting; we hadn’t thought about designing a molecule that way,’” Rau recalled them saying as he related the story, previously unreported, to attendees at last November’s CNBC Technology Executive Council Summit.

“That was an epiphany for me,” Rau said. “We always talk about training the machines, but another art is where the machines produce ideas based on a data set that humans wouldn’t have been able to see or visualize. This spurs even more creativity by opening pathways in medicine development that humans may not have otherwise explored.”

According to executives working at the intersection of AI and health care, the field is on a trajectory that will see medicines completely generated by AI in the near future; according to some, within a few years at most it will become a norm in drug discovery. Generative AI is rapidly accelerating its applicability to the developments and discovery of new medications, in a move that will reshape not only the pharmaceutical industry but ground-level ideas that have been built into the scientific method for centuries.

When Google’s DeepMind broke the protein mold

The moment this trajectory first became clear was years before ChatGPT broke through into the public consciousness. It was “the AlphaFold moment” in 2021, according to Kimberly Powell, vice president of health care at Nvidia, when Google’s DeepMind AI unit — which had become famous for showing how different AI’s creative thinking could be from humans in the Chinese strategy game of Go — pioneered the application of AI large language models to biology. “AlphaFold was this pivotal moment when we could train these transformer models with very large data sets and go from amino acid sequence to a protein structure, which is at the core of doing drug development and design,” Powell said.

The advances related to AI are taking place within a field of biology that has been increasingly digitized at what Powell describes as “unprecedented scales and resolutions.”

It’s a medical revolution that includes spatial genomics scanning millions of cells within tissue, in 3-D, and AI model-building that specifically benefits from a catalog of chemicals already in a digital form which allows generative AI transformer models to now go to work on them. “This training can be done using unsupervised and self-supervised learning, and it can be done not only rapidly but imaginatively: the AI can ‘think’ of drug models that a human would not,” Powell said.

An analogy for understanding the development of AI drugs can be found in the mechanisms of ChatGPT. “It’s essentially been trained on every book, every webpage, every PDF document, and it’s encoded the knowledge of the world in such a way that you can ask it questions and it can generate you answers,” Powell said.

The GPT-version of drug discovery

Drug discovery is a process of witnessing interactions and changes in biological behavior, but what would take months, or years, in a lab, can be represented in computer models that simulate traditional biological behavior. “And when you can simulate their behavior, you can predict how things might work together and interact,” she said. “We now have this ability to represent the world of drugs — biology and chemistry — because we have AI supercomputers using AI and a GPT -like method, and with all of the digital biology data, we can represent the world of drugs in a computer for the very first time.”

It’s a radical departure from the classic empirical method that has dominated the last century of drug discovery: extensive experimentation, subsequent gathering of data, analysis of the data on a human level, followed by another design process based on those results. Experimentation within the walls of a company followed by several decision points that scientists and executives hope will result in successful clinical trials. “It’s a very artisanal process,” Powell said. As a result, it’s a drug discovery process that has a 90% failure rate.

AI backers believe it will save time and improve success rates, transforming the classic process into engineering that is more systematic and repeatable, allowing drug researchers to build off a higher success rate. Citing results from recent studies published in NaturePowell noted that Amgen found a drug discovery process that once might have taken years can be cut down to months with the help of AI. Even more important — given the cost of drug development, which can range from $30M to $300M per trial — the success rate jumped when AI was introduced to the process early on. After a two-year traditional development process, the probability of success was 50/50. At the end of the faster AI-augmented process, the success rate rose to 90%, Powell said, .

“The progress of drug discovery, we predict, should massively go up,” Powell said. Some of the noted flaws of generative AI, its propensity to “hallucinate” for example, could prove to be powerful in drug discovery. “Over the last many decades, we have kind of been looking at the same targets, but what if we can use the generative approach to open up new targets?” she added.

‘Hallucinating’ new drugs

Protein discovery is an example. Biological evolution works by identifying a protein that works well, and then nature moves on. It doesn’t test all the other proteins that may also work, or work better. AI, on the other hand, can begin its work with non-existent proteins within models, an approach that would be untenable in a classic empirical model. By the numbers, AI has a much bigger discovery set to explore. With a potential number of proteins that could act as a therapy essentially infinite, Powell said — 10 to the power of 160, or ten with one hundred and sixty zeroes — the existing limit on working with the proteins nature has given humanity is exploded. “You can use these models to hallucinate proteins that might have all of the functions and features we need. It can go where a human mind wouldn’t, but a computer can,” Powell said.

The University of Texas at Austin recently purchased one of the largest NVIDIA computing clusters for its new Center for Generative AI.

“Just as ChatGPT is able to learn from strings of letters, chemicals can be represented as strings, and we can learn from them,” said Andy Ellington, professor of molecular biosciences. AI is learning to distinguish drugs from non-drugs, and to create new drugs, in the same way that ChatGPT can create sentences, Ellington said. “As these advances are paired with ongoing efforts in predicting protein structures, it should soon be possible to identify drug-like compounds that can be fit to key targets,” he said.

Daniel Diaz, a postdoctoral fellow in computer science who leads the deep proteins group at UT’s Institute for Foundations of Machine Learning, said most current AI work on drugs is centered on small molecule discovery, but he thinks the bigger impact will be in the development of novel biologics (protein-based drugs), where he is already seeing how AI can speed up the process of finding the best designs.

A UT Austin group is currently running animal experiments on a therapeutic for breast cancer that is an engineered version of a human protein that degrades a key metabolite that breast cancer is dependent on — essentially starving the cancer. Traditionally, when scientists need a protein for therapeutics, they look for several features, including stable proteins that don’t fall apart easily. That requires scientists to introduce genetic engineering to tweak a protein, a cumbersome process in lab work — mapping the structure and identifying, from all the possible genetic modifications, the best options.

Now, AI models are helping narrow down the possibilities, so scientists more quickly know the optimal modifications to try. In the experiment Diaz cited, use of an AI-enhanced version that is more stable resulted in a roughly sevenfold improvement in yield of the protein, so researchers end up with more protein to test, use, etc. “The results are looking very promising,” he said. And since it’s a human-based protein, the chances of patients becoming allergic to the drug — allergic responses to protein-based drugs are a big problem — are minimized.

Nvidia’s recent release of what it calls “microservices” for AI healthcare, including for drug discovery — a component in its aggressive ambitions for health sector AI adoption — allows researchers to screen for trillions of drug compounds and predict protein structures. Computational software design company Cadence is integrating Nvidia AI in a molecular design platform which allows researchers to generate, search and model data libraries with hundreds of billions of compounds. It’s also offering research capabilities related to DeepMind’s AlphaFold-2 protein model.

“AlphaFold is hard for a biologist to just use, so we’ve simplified it,” Powell said. “You can go to a webpage and input an amino acid sequence and the actual structure comes out. If you were to do that with an instrument, the instrument would cost you $5 million, and you’d need three [full-time equivalent workers] FTE to run, and you might get the structure in a year. We’ve made that instantaneous in a webpage,” Powell said.

Ultimately, AI-designed drugs will rise or fail based on the traditional final step in drug development: performance in human trials.

“You still have to generate ground proof,” Powell said.

She compared the current level of progress to the training of self-driving cars, where data is being collecting constantly to reinforce and re-enhance models. “The exact same thing is happening in drug discovery,” she said. “You can use these methods to explore new space … hone it, hone it … do more intelligent experimentation, take that experiment data and feed it back into the models, and around the loop goes.”

But the biological space within the broader AI model field is still small by comparison. The AI industry is in the range of a trillion model or more in areas of multi-modal and natural language processing. By comparison, the biology models number in the tens of billions.

“We are in the early innings,” Powell said. “An average word is less than ten letters long. A genome is 3 billion letters long.”

Read the full article