Creative AI newsletter #10 — Art, Design and Music updates from 2020🤖🎨

by Luba Elliott
14 min read · Jan 14, 2021

This is an occasional newsletter with updates on creative applications of artificial intelligence in art, music, design and beyond. Below is issue #10 and you can subscribe here 😀

Art 🎨

The AI Art gallery that I curate every year for the NeurIPS Creativity Workshop includes 100+ new art, music, text and design projects; check it out! This year, I was also part of the jury for the AI Artathon in Saudi Arabia — you can see the winners, hackathon and bootcamp projects. Meanwhile, the Lumen Prize awarded its AI Prize to Christian Mio Loclair’s AI-designed marble sculpture Helin, and Casey Reas & Jan St. Werner won in the Moving Image category with Compressed Cinema, where the videos were generated by AI.

New art projects include Golan Levin and Lingdong Huang’s Ambigrammatic Figures, legible both upside-down and right-side up; Joel Simon and Tal Shiri’s Derivative Works, AI-made image collages of faces; Philipp Schmitt’s Declassifier, overlaying dataset images over objects detected in an image; Everest Pipkin’s Lacework, using AI to reinscribe a video dataset of daily actions; Terence Broad’s Teratome, made with network bending techniques; Daniel Ambrosi’s Abstract Dreams; Vishal Kumaraswamy’s Swaayattate (Autonomy), a set of films on human-machine relationships and gender, caste and labour; driessens & verstappen’s Pareidolia, where facial recognition finds faces in grains of sand; Mario Klingemann’s Appropriate Response, a series of changing letters exploring meaning, expectation and our relationship with AI; Guillaume Slizewicz’s I can remember, about the creative potential of image analysis; Carrie Sijia Wang’s An Interview with ALEX, a simulated interview with an AI HR manager; Holly Grimm’s Aikphrasis Project, where artists respond to AI-generated text; Alexander Reben’s Am I AI (The New Aesthetic), artworks dreamed up by AI and produced in real life by the artist or others; Nina Rajcic’s Mirror Ritual, a project on human-machine co-construction of emotion; Vibe Check by Lauren Lee McCarthy & Kyle McDonald, which enacts another control system through the passive observation of our neighbours; and Entangled Others & Sofia Crespo’s Artificial Remnants, a collection of generated insects.

Writing. 2020 began with features in Art in America: on tools by Jason Bailey and on bots vs AI by Matthew Plummer-Fernandez, as well as an overview of AI art books. Flash Art published an article on Anna Ridler and Roman Lipski’s work, which I co-wrote with Alex Estorick, who has now started a new column on AI and contemporary art; Beth Joachim talks to artists for her AI Art Corner. The New York Times surveyed AI and dance. There were interviews with Entangled Others on ecology and generative arts, Robbie Barrat on his art and fashion design work, Sougwen Chung on collaborating with a robot arm, Christian Mio Loclair on the overlap between human movement and AI, Mario Klingemann on his recent works, Jenna Sutela on connecting humans, cellular forms and the spiritual, and Stephanie Lepp on her deepfake art of public figures.

Research. Devi Parikh predicted a creator’s preferences from interactive generative art; Simon Colton covered adapting and enhancing evolutionary art for casual creation; Ramya Srinivasan reviewed biases in generative art through the lens of art history; Jon McCormack looked at understanding aesthetic evaluation with deep learning; Byunghwee Lee dissected landscape art history with information theory; Ziv Epstein considered credit in AI art; Jason Bailey thought about predicting the price of art at auction; Xi Wang quantified ambiguities in artistic images; Amy Zhao synthesized time lapse videos of paintings; Terence Broad optimised generating images for fakeness; Mark Hamilton’s MosAIc found hidden links between works of art; Aaron Hertzmann thought about why line drawings work; Lou Safra attempted to track changes in trustworthiness by analysing facial cues in paintings.

Museums and more. The State Tretyakov Gallery in Russia enlivened three fairytale paintings with AI, while the search engine Yandex made a gallery of GAN art and Matt Round created AI street art. With the start of the lockdowns, Occupy White Walls helped users fill a virtual art gallery with the help of AI suggestions. Purdue University students made a play for human and AI spectators. Bucharest Biennial will now be curated by an AI. The National Archives began work on a computer vision search platform for identifying and matching images across digitised collections. The Cleveland Museum of Art invites users to upload images and get matched with work from their collection. Fabian Offert developed a dataset-agnostic, deep visual search engine for digital art history based on neural network embeddings. Benjamin Lee’s Newspaper Navigator enables you to explore historic newspaper photographs. Aleksey Tikhonov averaged art from classical paintings and Merzmensch reviewed his experiments from 2020.

Text 📖

Projects with GPT-2 include Hendrik Strobelt’s website that helps you say No; Issue 5 of the music review publication Ear Wave Event; Janelle Shane’s jell-o centric recipes; Elliot Turner’s AI-Dungeon style adventure game based on installing OpenCV; ThisWordDoesNotExist.com with AI-generated dictionary entries; Lucid Beaming’s art review generator; Paola Torres Núñez del Prado’s AI poetry record.

GPT-3. In May, OpenAI released GPT-3, giving some researchers and practitioners access to its private beta. Experiments include a React app, a website designer, a layout generator, a search engine, an SQL query generator and writing such as Twitter personalities, legal language and a new Hamilton song. Matti Allin analysed Mario Klingemann’s GPT-3 Jerome K. Jerome writing. The Guardian even wrote an article using the technology and you can see the variations here. Gary Marcus weighed in on GPT-3 and so did nine philosophers. Or, if you prefer, Philosopher AI can answer your questions. Gwern has generated plenty of creative fiction. A GPT-3-powered bot went undetected for a week on AskReddit. AI Dungeon now runs on GPT-3 and writers are taking note of its interactive storytelling.

Allison Parrish wrote Reconstructions, an infinite computer-generated poem whose output conforms to the literary figure of chiasmus. Funk Turkey wrote a new song for AC/DC using Markov chains. Andreas Refsgaard generated fairy tales from mundane images. Keaton Patti got a bot to write QAnon theories. Read the 78 NaNoGenMo submissions.
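Markov-chain song generators like the one above work by recording, for every word (or word sequence) in a corpus, which words follow it, then random-walking through that table. A minimal sketch in Python — the tiny corpus and parameters here are made up for illustration and are not Funk Turkey’s actual setup:

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each `order`-word prefix to the words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=1, length=8, seed=None):
    """Random-walk the chain to produce a new word sequence."""
    rng = random.Random(seed)
    out = list(rng.choice(list(chain.keys())))
    for _ in range(length - order):
        followers = chain.get(tuple(out[-order:]))
        if not followers:  # dead end: prefix only appears at the corpus tail
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Tiny made-up corpus standing in for scraped lyrics.
corpus = "you shook me all night long back in black shook me again"
print(generate(build_chain(corpus), length=6, seed=42))
```

Raising `order` makes the output more coherent but closer to verbatim quotation of the corpus, which is the basic trade-off of any Markov text generator.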

Books. Aaron A Reed wrote a horror novel Subcutanean, where no two copies are the same; Douglas Summer wrote a book with GPT-3; Alexey Tikhonov wrote a diary of a Paranoid Transformer; K Allado-McDowell came up with Pharmako-AI together with GPT-3. Tom Lebrun and René Audet published a white paper on AI and the book industry.

Vogue interviewed Billie Eilish using an AI bot. A Shenzhen court ruled that an AI-written article has copyright protection. Pold and Erslev explore third-wave electronic literature. The New Yorker considered the overlap between machines and poetry. Ken Liu on writing with AI and Allison Parrish on writing under the control of machine learning.

Google’s Verse by Verse allows you to compose poems inspired by classic American poets, while Fabricius helps you decode ancient Egyptian hieroglyphs. Max Kreminski’s Blabrecs is like Scrabble, but only accepts words that sound like English to the AI. Nikola I. Nikolov developed Rapformer for rap lyrics generation and Mark Riedl has Weird A.I. Yankovic for lyrics. Daphne Ippolito discovered that detecting generated text automatically is easiest when humans are fooled. Max Woolf explains how to make an AI bot to parody any Twitter user and Mark Riedl has an introduction to AI story generation.

Music🎵

New music projects. Together with Sister City hotel and Microsoft, Björk released the AI-powered composition Kórsafn based on her choral arrangements and the sky. Jennifer Walshe’s A Late Anthology of Early Music Vol. 1: Ancient to Renaissance is a speculative history of early Western music, made using machine learning. Robert Laidlow made Alter using several generative models. Moisés Horta Valenzuela’s Transfiguración applied the musician’s own style to Antonio Zepeda’s pre-hispanic sounds. Dadabots made an infinite bass solo in the style of YouTuber Adam Neely. Shimon the Robot released an album on Spotify, music videos and a demonstration of his new rapping talent. Everest Pipkin made Shell Song, an interactive audio-narrative game exploring voice deepfakes and their datasets.

Kjetil Golid’s Sonant creates generative music based on random walks, Beat Sage generates custom beat maps for songs, Trap Factory makes beats and thisdx7cartdoesnotexist produces preset cartridges for the Yamaha DX7. AiMi plays electronic music that adapts to you and your energy. LifeScore created a dynamic soundtrack to ‘Artificial’ based on the audience’s reactions in the chat channel. CantoCocktail is an interactive karaoke generator, composing new medleys based on excerpts of 120 Cantopop songs. Google’s Blob Opera allows you to generate operatic sounds using four blob figures. Infinite Bad Guy brings together fan YouTube covers to make a never-ending music video. The studio IYOIYO shares how they did it. Qosmo developed an automatic music generation system for Shiseido.

Australia’s Beautiful the World won the AI Eurovision song contest, and Ed Newton-Rex discussed the ethical considerations. Bob Sturm wrote up the summary and results of his AI Music Generation Challenge. Deepfakes of Jay-Z performing Hamlet and Frank Sinatra singing Britney Spears reignited copyright debates as some of the videos were removed from YouTube. Damien Riehl and Noah Rubin generated all possible melodies and released them into the public domain in an attempt to stop copyright lawsuits. Meanwhile, The Pudding judges your taste in music.
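The Riehl and Rubin stunt rests on a simple combinatorial fact: a fixed-length melody over a fixed pitch set is one element of a Cartesian product, so all of them can be enumerated mechanically. A toy sketch in Python — the 8-pitch, 12-note parameters are my assumption about the rough scale of their release, not their exact setup:

```python
from itertools import islice, product

# Eight pitches; every 12-note melody over this set is one tuple
# in the Cartesian product PITCHES^12.
PITCHES = ["C", "D", "E", "F", "G", "A", "B", "C'"]

def melodies(length):
    """Yield every possible melody of `length` notes, in lexicographic order."""
    return product(PITCHES, repeat=length)

print(len(PITCHES) ** 12)            # 68719476736 melodies at length 12
print(list(islice(melodies(2), 3)))  # first three 2-note melodies
```

At 8^12 ≈ 68.7 billion tuples, writing them all to disk is a batch job rather than a thought experiment, which is what made the mass public-domain release feasible.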

Research. Facebook released Demucs and Northwestern University released Cerberus, both for music source separation. Earlier in the year, OpenAI released Jukebox, which generates music as raw audio in a variety of genres and artist styles. Here it is continuing the Windows 95 startup sound and Take On Me. Google Magenta has been developing user-friendly music experimentation tools like ToneTransfer and Lo Fi Player. Nao Tokui developed M4L.RhythmVAE, which allows musicians to train and use the rhythm generation model within music production software. Ryan Louie created a set of novice-friendly AI-steering tools for AI music creation. Yu-Siang Huang developed Pop Music Transformer, a model that generates expressive pop piano music. Yi Ren’s DeepSinger generates singing voices in Chinese and English by training on data from music websites. Algoriddim released Neural Mix, which isolates beats, instruments and vocals in real time. Neural Beatbox generates beats and rhythms based on voices and claps recorded by a webcam.

Sander Dieleman wrote about music generation in the waveform domain. Jean-Pierre Briot looks at history, concepts and trends in music generation, Jeremy Freeman surveyed AI music from the 1950s onwards and François Pachet looked at the last 10 years of AI-assisted music composition.

Design 👗

New design experiments include blended pictures of celebrities, real people from drawings, beetle-faces, Great British Bakeoff, feet, fursonas, jellyfish, Pokemon and monsters. Vincent Woo made a website with people that do exist, while Daniel Voshart imagined Roman Emperors. There are machine learning fonts that you can download. Otherwise, Hypha grows letters and GlyphNet generates a visual language. There are music videos for Massive Attack and Freeda Beast by Mario Klingemann, DJ Deep by Sofia Crespo, 0171 by Terence Broad and Qrion and Hiatus by Nathan Shipley.

Google Experiments showcase details in Japanese scrolls and you can get AI-designed manga in the style of Osamu Tezuka. Asahi used an AI-powered design for product packaging with a focus on ‘objectivity’ and ‘originality’. Cunicode created unexpected materials with textures and normal data. Javier Ideami visualised loss landscapes. CookGAN generates meal images from an ingredients list. Acne Studios collaborated with Robbie Barrat on their Fall/Winter 2020 collection. A ceramicist shared her experiments with AI and Jon McCormack covered design considerations for real-time collaboration with creative AI. This New York Times article lets you play with generating fake human faces.

Film. Merzmensch and Kira Bursky shared experiences of making short films with machine learning, and Transitional Forms released Agence, a dynamic film experience. Denis Shiryaev upscaled old black-and-white film footage from 1896. Warner Bros signed a deal for an AI-driven film management system. In the ‘Welcome to Chechnya’ documentary, AI swapped out faces to protect identities. MIT aims to educate the public about deepfakes with its ‘In the Event of Moon Disaster’ project, while the advocacy group RepresentUs made deepfake ads of Vladimir Putin and Kim Jong-un. Actors and filmmakers shared their behind-the-scenes experience. Martin Anderson surveyed AI in the visual effects industry and DitherNet is a very slow movie player.

Face. There are projects replacing your face with a fake, camouflage garments, anti-surveillance make-up (there’s even a tutorial by Pussy Riot!), accessories, face masks and jewellery. There is an app that anonymises photos and videos, and there are websites that make images machine-unreadable, anonymise BLM protesters and scrub images. YR Media’s Erase your Face and Shinji Toya’s Paint Your Face Away let you draw or paint over a face to make it unrecognisable. Devon Schiller considered the development of critical practice using facial recognition, while Kyle McDonald wrote about recent projects around the face and The New Yorker has a piece on dressing for the surveillance age.

Tools. Cyril Diagne launched ClipDrop, an app that lets you capture objects from your surroundings and transfer them into desktop apps. Open Avatarify has photorealistic avatars for video-conferencing apps. AI Gahaku transforms your pictures into Renaissance paintings, AnimeGAN into anime and Toonify into cartoons. Scroobly brings your doodles to life. Disney uses PyTorch for animated character recognition. There are now self-organising floor plans for care homes, as well as city map, house layout and bedroom generators. Danielle Baskin posted a GAN rental unit on Craigslist and there’s a community for AI-generated Russian panel houses. Adobe released a new version of Photoshop with five new AI features.

Resources 📓

We held the 4th edition of the NeurIPS Creativity Workshop this year. You can browse the papers and check out some of the demos, including Musical Speech and Latent Compass. The video recordings should be available later this month here. Conference recordings include EVA London, AI Music Creativity, the Measuring Computational Creativity Workshop at ISEA, the AI for Culture: From Japanese Art to Anime seminar at CODH, xCOAX and EvoMusArt. Magenta and Gray Area organised a machine learning and music series, including participant projects.

Reading. Montreal-based Espace magazine focussed on creating in the age of AI and SciArt had ‘Algorithmic’ as this year’s special topic. Academic publications had their own special issues: Entropy on ‘AI and Complexity in Art, Music, Games and Design’ and Applied Sciences on Deep Learning for Acoustics. There are also essays on the aesthetics of new AI and on post-work, online labour and automation. Marc Garrett made a reading list for AI technology: Art, Academia and Activism, and Alex Champandard has one for texture synthesis. Jason Webb has some resources on digital morphogenesis and Scott Hawley on ML-Audio. Emil Wallner made a database of machine learning art. Sam Lavigne has a guide to web scraping as an artistic and critical practice.

Books. Joanna Zylinska published AI Art: Machine Visions and Warped Dreams. Ben Vickers & K Allado-McDowell edited Atlas of Anomalous AI with contributions from writers, philosophers and curators. Lev Manovich published Cultural Analytics. Vladan Joler and Matteo Pasquinelli published Nooscope, a diagram of machine learning errors, biases and limitations.

Talks. There are talks from Dr Matthew Guzdial at AI and Games, Allison Parrish’s tutorial at PyCon 2020, Refik Anadol’s TED Talk and Melissa Avdeeff on AI, popular music and copyright. There are podcast recordings from Creative Next, Joel Simon on Soft Robotics and Pindar van Arman on The AI Podcast. SensiLab launched its second podcast season with an interview with me.

Courses. FutureLearn released a course introducing creative AI, while Parag Mital made one on cultural appropriation with machine learning. Artificial Images has tutorials on making art with machine learning. Allison Parrish’s course Materials of Language has a reading list and notebooks available online. Dennis Tenen has a course on literature in the age of AI. Jordi Pons put teaching materials on deep neural networks for music online.

Tools. I made a list of 60+ machine learning tools for The Creative AI Lab. Here is my interview providing an overview. There’s now an AI: A Museum Planning Toolkit produced by Oonagh Murphy, to which I contributed a glossary with project examples. There is Anton Marini’s Synopsis, a suite of open source software for computational cinematography; Hugging Face’s Transformers with state-of-the-art NLP models and demos; Kritik Soman’s machine learning for GIMP; Pose Animator, enlivening an illustration from real-time person movements; and PEmbroider, an open library for computational embroidery.

Andrej Karpathy made a minimal GPT training library in PyTorch. Anand Padwara made a real-time image animation application in OpenCV. Anastasia Opara made a genetic algorithm project for drawing. Sensity (formerly DeepTrace) has a deepfake detection tool. Elad Richardson released the official implementation for pixel2style2pixel; Clova AI Research theirs for StarGAN v2, diverse image synthesis for multiple domains. There’s already an implementation by lucidrains of the newly released DALL-E, OpenAI’s text-to-image transformer.

Datasets include 150,000 botanical and animal illustrations from the Biodiversity Heritage Library; 321,178 works of art from Paris Museums, face images from Japanese illustrations, 2.8 million images from the Smithsonian, 2,500 novel English words published in The New York Times; 1,774 images created by computational artist Andy Lomas; 1 million playlist dataset by Spotify; 15 million CAD sketches with geometric constraint graphs by Princeton LIPS; 196,640 books in plain text by Shawn Presser; a repository of motion capture data by the choreographer Alexander Whitley; the Freesound Loop Dataset of annotated music loops and GiantMIDI-Piano of 10,854 MIDI files of 2,786 composers.

Opportunities 🚀

Calls for Papers: ISMIR has a call for papers for its special issue on AI and Musical Creativity until 28th February; The second Workshop on Human-AI Co-Creation with Generative Models has a paper and demo submissions deadline on 15th January; ICCC has a call for papers on computational creativity until 5th March; Urban Assemblage: The City as Architecture, Media, AI and Big Data has a call for abstracts until 1st April; SIGGRAPH has an art papers track; submit by 15th January; Conference on AI Music Creativity has a call for papers and music until 1st April.

Open calls: Science Gallery Detroit seeks artworks, interventions and research projects for its Tracked and Traced exhibition, apply until 5th February. Adobe has a Creative Residency Community Fund for visual creators. MediaFutures is looking for artists and startups eager to reshape the media value chain through applications of data and user-generated content, until 28th January. AI Song Contest is back, submit your songs by 18th May. EuropeanaTech Challenge seeks proposals for the assembly of suitable AI/ML datasets until 31st January.

The School of Digital Arts in Manchester is recruiting a Research Fellow in Digital Arts Practice and AI. Oh, and Runway and Artbreeder are hiring :)

Exhibitions😃

To see in 2021: In Spain, LABoral’s exhibition When the butterflies of the soul flutter their wings, on art, neuroscience and AI, is open until 24th April. London’s Furtherfield Gallery is exhibiting UNINVITED, an art installation by Nye Thompson and UBERMORGEN looking at what happens when networked surveillance tools and AI capabilities get sick in the head, until 31st January. Berlin’s CTM will feature Apotome, a generative music environment for microtonal tuning systems, created by Khyam Allami and Counterpoint. In Melbourne, Refik Anadol’s Quantum Memories is on view for the NGV Triennial, while RMIT Gallery plans the Future U exhibition featuring creative responses to the potential implications of the rapid developments in AI and biotechnology, tentatively scheduled from 25th June until 9th October.

In the US, Honor Fraser Gallery explores the relationship between visual arts and the cybernetic world in Thin as Thorns, In These Thoughts in Us until 20th February. bitforms gallery is hosting Alchemical, a collaborative exhibition by Casey Reas and Jan St. Werner until 14th February. Curatorial A(i)gents, a series of machine-learning-based experiments with museum collections and data is scheduled to run for nine weeks beginning 1st February at Harvard’s metaLAB.

From 2020. In the summer, I co-curated the Real Time Constraints exhibition in the form of a browser plugin with arebyte. Merzmensch did a useful write-up. Other exhibitions in the online format include Datasets vs Mindsets and Silent Works from the Berliner Gazette Winter School Program. AI-related shows included Algotaylorism at Kunsthalle Mulhouse about artists working at the human-machine junction point; Mind the Deep at Shanghai Ming Contemporary Art Museum; Human Learning — What Machines Teach Us by the Canadian Cultural Centre in Paris; GANLand at DAM projects; Neurons, simulated intelligence at Centre Pompidou; Uncanny Valley: Being Human in the Age of AI at de Young museum (review in Art in America); Cyborg Subjects at esc median kunst labor; Real Feelings at HeK Basel; AI: Love and Artificial Intelligence at Hyundai Motorstudio Beijing; A.I., Sunshine Misses Windows at Daejeon Biennale; ARCU & OHM at HALLE 14; Future and the Arts: AI, Robotics, Cities, Life at Mori Art Museum; How to Make Paradise at Frankfurter Kunstverein. Solo shows included Trevor Paglen’s Bloom at Pace Gallery (write-up in Art in America); Hito Steyerl’s I Will Survive at K21; Jenna Sutela’s NO NO NSE NSE at Kunsthall Trondheim and Ben Snell’s Embodiments at Blackbird Gallery.

Thanks for getting this far. It has been a long one. Anything I missed? Drop me a line if you have any updates I should include or if you need any creative AI-related help.

Subscribe to this newsletter HERE
