Creative AI newsletter #9 — Art, Design and Music updates over the past few months 🤖🎨

Oct 28, 2019

by Luba Elliott

This is an occasional newsletter with updates on creative applications of artificial intelligence in art, music, design and beyond. Below is issue #9 and you can subscribe for future editions here 😀

Art 🎨

The Lumen Prize announced its winners on Thursday, with Refik Anadol taking the Gold Award, Sougwen Chung the Still Image Award and Dave Murray-Rust & Rocio von Jungenfeld the BCS AI Award. Competition was strong, and there are plenty of great works on the shortlist and longlist, whose selection committee I sat on. In September, Mozilla announced the recipients of its Creative Media Awards, which deal with how AI intersects with online media and truth.

TU Delft used a convolutional neural network to reconstruct Vincent Van Gogh’s drawings that have deteriorated over time, while University College London researchers applied style transfer to x-radiographs of artwork to retrieve a Picasso painting. Nature wrote about the potential uses of machine learning for attribution, charting style development and preventing deterioration. Monash’s SensiLab have developed Art or Not, an app that judges whether an image is art using visual information rather than philosophical understanding. Google Lens has partnered with Wescover to help users identify local works of art. Artsy found that AI still cannot predict auction prices, at least not for Mark Rothko.
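For readers new to the technique, style transfer in the Gatys et al. tradition optimises an image so that its deep features match one image’s content and another’s style statistics. The sketch below is a generic PyTorch illustration of that idea, not the UCL team’s code; the layer choices and loss weight are common defaults.

```python
# Sketch of classic optimisation-based style transfer (Gatys et al.).
# Generic illustration only; assumes `pip install torch torchvision`.
import torch
import torch.nn.functional as F
from torchvision import models

vgg = models.vgg19(pretrained=True).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv layers used for style statistics
CONTENT_LAYER = 21                  # conv layer used for content

def features(x):
    style, content = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style.append(x)
        if i == CONTENT_LAYER:
            content = x
    return style, content

def gram(x):
    # Gram matrix of feature maps: the "style" statistic.
    b, c, h, w = x.shape
    f = x.view(c, h * w)
    return f @ f.t() / (c * h * w)

content_img = torch.rand(1, 3, 256, 256)  # stand-in for e.g. an x-radiograph
style_img = torch.rand(1, 3, 256, 256)    # stand-in for a reference painting
result = content_img.clone().requires_grad_(True)

target_style, _ = features(style_img)
_, target_content = features(content_img)
optimizer = torch.optim.Adam([result], lr=0.02)

for step in range(200):
    optimizer.zero_grad()
    style, content = features(result)
    loss = F.mse_loss(content, target_content)
    loss = loss + 1e4 * sum(F.mse_loss(gram(s), gram(t))
                            for s, t in zip(style, target_style))
    loss.backward()
    optimizer.step()
```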

New art projects from the last few months include Zach Blas’s The Doors exploring psychedelia, Silicon Valley and AI; Gene Kogan’s autonomous artificial artist Abraham AI; Sofia Crespo and Pinar&Viola’s Panspermia; Jake Elwes’s ‘London’s first AI drag kid’ Zizi; Nicolas Boillot’s Vectoglyphs, which explore vector forms as a foreign language; Asunder by Tega Brain, Julian Oliver and Bengt Sjölen, which simulates a fictitious geoengineering process to preserve our planet; the parody voice assistant Ally AI by American Artist and Rashida Richardson; Joel Simon’s depiction of an emergent neural net visual language in clay tablets in Dimension of Dialogue; Mario Klingemann’s Circuit Training, which generates images that are interesting to humans; the deteriorating face What I saw before the darkness by AI Told Me; Ewa Nowak’s face jewelry against facial recognition; Bloemenveiling by Anna Ridler and David Pfau, an auction of AI-generated tulips on the blockchain; the AI-generated sculpture Dio by Ben Snell; Christian Mio Loclair’s Blackberry Winter, which explores artificial human motion in asymmetry; and an examination of the limits of categorisation in machine learning systems in Layers of Abstraction: A Pixel at the Heart of Identity by Shinji Toya and Murad Khan.

Interviews with Libby Heaney on Euro(re)vision, Casey Reas on AI and computer art, Lauren McCarthy on Lauren, Gretchen Andrew on her search engine art, Scott Eaton on his recent Artist + AI show, and Victor Wong on his AI Gemini, trained in Chinese landscape painting. Gene Kogan introduces the rationale for his Abraham project, and Interesting Engineering profiles Refik Anadol in detail. In sad news, read the obituary of Sascha Pohflepp, a much-loved designer, thinker, artist and member of the AI art community.

Aaron Hertzmann wrote about the aesthetics of neural art, looking at neural network artworks as juxtapositions of natural image cues, Colin G. Johnson considered aesthetics and fitness measures in evolutionary art systems and Myounghoon Jeon reviewed the links between interactive art and HCI. Daniel J. Gervais and Jason Bailey thought about AI art copyright, while Fabian Offert wrote on the past, present and future of AI art. Patricia de Vries considered masks and camouflage in artistic imaginaries of facial recognition algorithms. Kate Crawford wrote with Trevor Paglen about the politics of images in training sets, and with Luke Stark about what artists can teach us about the ethics of data practice.

Text 📖

Salesforce released CTRL, the largest publicly available language model at 1.6B parameters, aiming to provide improved control over text generation. Six months after GPT-2’s debut, OpenAI made available their 774-million-parameter GPT-2 language model along with a report on its social impact, followed last week by a GPT-2 fine-tuned from human preferences. You can experiment with the model on Talk to Transformer, with the autocomplete tool Write with Transformer or with Max Woolf’s gpt2-simple. Hugging Face made PyTorch Transformers, a library of state-of-the-art pre-trained NLP models. Projects involving GPT-2 include a text adventure game, folk music, petitions, a subreddit simulator, alternate Game of Thrones endings and a book with GPT-2 (OpenAI) as credited co-author. The Economist and The New Yorker have experimented with text generation too. Gwern made an overview of GPT-2 poetry, while Robin Hill has a list of generative dictionaries.
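If you’d rather poke at GPT-2 from Python directly, here’s a minimal sampling sketch using Hugging Face’s library (shown with the current transformers package name; the model size, prompt and sampling settings are arbitrary choices of mine):

```python
# Minimal GPT-2 sampling sketch with Hugging Face's library.
# Assumes `pip install transformers torch`; settings are illustrative.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")  # the 774M release
model = GPT2LMHeadModel.from_pretrained("gpt2-large")

prompt = "The exhibition of AI-generated art opened with"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Top-k sampling keeps the continuation varied but coherent.
output = model.generate(
    input_ids,
    max_length=80,
    do_sample=True,
    top_k=40,
    temperature=0.9,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```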

Music 🎵

YACHT released their new album Chain Tripping, for which they collaborated with many leading creative AI practitioners, including Tom White, Magenta and Ross Goodwin. Jenna Sutela’s album nimiia vibié is the audio accompaniment to her earlier video installation work nimiia cétiï. Holly Herndon’s Proto is an album created with Spawn, an AI trained to reproduce different voices. Dadabots have been generating free jazz using AI from aboard NASA space probe Voyager 3. AIVA completed an unfinished piano piece by Antonín Dvořák. Yuri Suzuki reimagined Raymond Scott’s Electronium using Magenta. Google Creative Lab and NOAA trained an AI on humpback whale songs, and Kyle McDonald wrote about his experience on the project. Jai Paul created Bronze AI to generate unique and infinite playback of his piece Jasmine. Julianna Barwick made a sound art installation influenced by its environment for the new Sister City hotel in New York. Earlier this year, Warner Music signed a record deal for 20 albums with an algorithm.

Rebecca Fiebrink’s Sound Control is out, accessible software for making custom musical instruments with sensors. Leandro Garber’s AudioStellar is an open-source, data-driven musical instrument for latent sound structure discovery and music experimentation. There’s also a real-time voice cloning implementation by Corentin Jemine.

OpenAI released MuseNet, which can generate musical compositions with 10 different instruments and combine styles. It’s been used by Ars Electronica to complete an unfinished symphony by Gustav Mahler. Following the Bach Doodle in March, which harmonizes user melodies into Bach’s style using the Coconet model, Magenta have now released the dataset of its 21.6 million harmonizations. The team have also developed GrooVAE for generating and controlling expressive drum performances, MidiMe for personalising machine learning models, and introduced a new Colab notebook for generating piano music with Transformer. Meanwhile, Sony came up with DrumNet, with the aim of creating musically plausible drum patterns. Tero Parviainen’s Counterpoint studio released GANHarp, an experimental musical instrument based on AI-generated sounds, made with Magenta.js and the Magenta GANSynth model trained on acoustic instruments to generate continuously morphing waveform interpolations. Chris Donahue and Vibert Thio made the procedural music sequencer Neural Loops, while Andrew Shaw came up with MusicAutoBot, which uses a transformer to generate pop music. Yi Yu and Simon Canales generated melodies from lyrics using a conditional LSTM-GAN. MIT researchers translated proteins into music and back; you can hear the sounds via their Amino Acid Synthesizer app. Christian S. Perone experimented with turning gradient norms into sound during training.
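If you’d like to play with GrooVAE yourself, here’s a minimal sketch of sampling drum grooves with Magenta’s Python MusicVAE API as it stood in 2019. The config name and checkpoint path are my assumptions based on the released GrooVAE models; check the Magenta repo for the exact checkpoints.

```python
# Sketch: sampling expressive drum grooves from a GrooVAE checkpoint
# via Magenta's MusicVAE API (2019-era `magenta` package). The config
# name and checkpoint path are illustrative assumptions.
from magenta.models.music_vae import configs
from magenta.models.music_vae.trained_model import TrainedModel
import magenta.music as mm

config = configs.CONFIG_MAP['groovae_4bar']  # assumed GrooVAE config name
model = TrainedModel(config, batch_size=4,
                     checkpoint_dir_or_path='/path/to/groovae_4bar.tar')

# Draw four grooves from the latent space; temperature controls variety.
sequences = model.sample(n=4, length=config.hparams.max_seq_len,
                         temperature=0.8)
for i, seq in enumerate(sequences):
    mm.sequence_proto_to_midi_file(seq, f'groove_{i}.mid')
```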

Here’s an overview of using neural networks for music generation that covers major projects from 2016 onwards. The Industry Observer deliberated whether AI-generated music is worth anything, the BBC picked out trends that may shape music in the next 20 years and Bob Sturm looked at copyright law and engineering praxis. Andrew Reed analysed what songs Phish would play during a live performance.

Design 👗

Pinar Yanardag and Emily Salvador have launched their fashion brand Glitch, which allows you to buy AI-generated designs such as the ‘aisymmetric’ little black dress. Earlier this year, you could also buy trousers with a bag attached to the side, a design based on Robbie Barrat’s Balenciaga experiments, and make crochet hats based on Janelle Shane’s HAT3000 project. Or, you could just rely on AI to give you suggestions on outfit improvement, generate models or hide from surveillance with adversarial patches.

New AI-generated design experiments include British snacks, Twitch emotes and metal album covers. RunwayML published a series of online experiments made with its tools, including Generative Engine, a storytelling machine that automatically generates images as you write. Olesya Chernyavskaya experimented with changing design and content based on emotion. AKQA have come up with an AI-generated ball sport, Speedgate. The ‘This does not exist’ family keeps growing, with lyrics, stories and vessels; Kashish Hora has made a website listing them all.

Counterpoint and YACHT made Bit Tripper, an interactive tool for exploring generative typography, while Robert Munro applied StyleGAN to images of Unicode characters to invent new ones and Shuai Yang adapted style transfer to fonts.

Google Creative Lab put together a PoseNet Sketchbook, looking at what you can do when combining movement and machine (you can check out the experiment with Bill T. Jones). Joel Simon’s Artbreeder now includes more models than the earlier GANbreeder, including portraits, albums and landscapes. Kory Mathewson’s talk generator comes up with slides based on a single topic suggestion. Stanislas Chaillou’s ArchiGAN is a generative stack for apartment building design. Nathan Glover and Rico Beti’s Selfie2Anime turns your photos into an anime character. Adobe Machine Intelligence Design wrote about how we are shaped by our creative tools.

In film, shot types can be recognised with ResNets. There’s also an upscaled version of Star Trek: Deep Space Nine and Disney wrote a paper on generating storyboard animations from scripts, while CUHK researchers came up with a flow-guided approach for video inpainting.
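Shot-type recognition like this is essentially standard image classification. As a rough illustration, here’s a sketch of fine-tuning a pretrained ResNet on labelled film stills; the shot classes and training setup below are my own assumptions, not the paper’s code.

```python
# Sketch: shot-type classification by fine-tuning a pretrained ResNet.
# Class list and setup are illustrative. Assumes torch + torchvision.
import torch
import torch.nn as nn
from torchvision import models

SHOT_TYPES = ['close-up', 'medium', 'long', 'extreme-long', 'over-the-shoulder']

model = models.resnet50(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, len(SHOT_TYPES))  # new head

# Freeze the backbone and train only the classifier head.
for name, param in model.named_parameters():
    param.requires_grad = name.startswith('fc')

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

frames = torch.randn(8, 3, 224, 224)   # stand-in for a batch of film stills
labels = torch.randint(0, len(SHOT_TYPES), (8,))
logits = model(frames)
loss = criterion(logits, labels)
loss.backward()
optimizer.step()
```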

Deepfakes 😞

Deeptrace Labs published a report detailing the available tools and current trends. It follows deepfakes’ continued prominence in the news: there’s been a fraud case in which a CEO’s voice was replicated, and spies are apparently using generated photos in LinkedIn requests. Earlier this summer, the artist-made Mark Zuckerberg deepfake raised controversy about whether Facebook should fact-check art, while the Dali Museum in Florida installed a deepfake Salvador Dali for visitors to take photos with. Zao, a Chinese deepfake face-swapping app, faced backlash over the perceived threat to user privacy. Further ethical questions were raised by apps like DeepNude, which was taken offline shortly after its release because of its potential for harm. Plus, Grumpy Cat, now dead, will still live on through AI.

The Washington Post has created a guide to manipulated video and researchers are developing forensic techniques to protect world leaders against deepfakes, though their continued effectiveness is in doubt. In defence against written fake news, we now have the Allen Institute’s Grover and Harvard-MIT-IBM’s GLTR, which detect whether a particular text is AI-generated. Facebook dedicated $10 million to deepfake detection and released an initial dataset before the challenge begins in December.
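GLTR’s underlying idea is neat enough to sketch: run a language model over a text and record how highly each actual token ranks among the model’s predictions, since machine-generated text tends to hug the top of the distribution. Here’s my simplified version of that statistic using GPT-2 and the current Hugging Face transformers API, not the GLTR codebase itself:

```python
# Sketch of GLTR's core statistic: the rank of each observed token
# under a language model. A simplification, not the GLTR codebase.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "The quick brown fox jumps over the lazy dog."
ids = tokenizer.encode(text, return_tensors="pt")

with torch.no_grad():
    logits = model(ids).logits  # shape: (1, seq_len, vocab)

# For each position, find the rank of the token that actually came next.
ranks = []
for pos in range(ids.shape[1] - 1):
    next_id = ids[0, pos + 1]
    order = torch.argsort(logits[0, pos], descending=True)
    ranks.append((order == next_id).nonzero().item())

# Mostly tiny ranks mean the text hugs the model's top predictions,
# a telltale pattern of machine-generated text.
print(ranks)
```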

Resources 📓

Eyeo have released the talk videos from the 2019 festival, including those by Adam Harvey, Mario Klingemann and Helena Sarin. Sonar+D uploaded Madeline Gannon’s talk on robotics and art. Adversarial Fashion published their DEFCON 27 presentation on ‘sartorial hacking to combat surveillance’, featuring art projects and tips on designing your own. SensiLab now run a podcast on Creative AI, discussing everything from deepfakes to music AI.

Interalia magazine dedicated its September issue to AI and Creativity, featuring contributions by Simon Colton, Gene Kogan and Ahmed Elgammal. IEEE has a special issue on AI-based and AI-assisted game design. INSAM Journal put together an issue on AI in music, arts and theory, including my short contribution. There are also reports from the recent Dagstuhl Seminar on Computational Creativity Meets Digital Literary Studies and from ICCC.

David Foster published Generative Deep Learning, which I reviewed earlier this year and can recommend if you’re looking to get started in the field. Artist-published books include Helena Sarin’s The Book of GANesis, Casey Reas’s Making Pictures With Generative Adversarial Networks, Mike Tyka’s Portraits of Imaginary People and Philipp Schmitt’s Learning to See, a colouring book of images used to train AI. Andreas Refsgaard and Mikkel Thybo Loose made Booksby.ai, an online bookstore selling AI-generated science fiction novels. In other book news, Tanya X. Short and Tarn Adams edited Procedural Storytelling in Game Design, while Tony Veale and F. Amílcar Cardoso edited Computational Creativity. Matthew Guzdial published his dissertation on combinatorial machine learning creativity and Matthew Plummer-Fernandez finished his on the art of bots.

Rebecca Fiebrink and Phoenix Perry made InteractML, an interactive machine learning framework for Unity3D. Following the experiment with Pinar & Viola, Alex Mordvintsev released Infinite Patterns as a tool to make your own art. MIT’s GANpaint Studio lets you add, delete, and modify objects in photos. Ben Marriott made a tutorial on psychedelic morphing with GANbreeder.

Mariel Pettee and her team have developed Beyond Imitation, a set of configurable tools to generate novel sequences of choreography and tunable variations on input choreographic sequences. MIMIC Project from Goldsmiths, Durham and Sussex universities groups together various examples, projects and guides around musical machine learning and machine listening. Google Arts & Culture have a dedicated online section for Barbican’s AI: More than Human exhibition, including their recent project with Es Devlin. Ars Electronica have put online their Out of the Box Festival and Cyberarts catalogues, both of which list plenty of AI works.

Research 📋

I’m organising our 3rd NeurIPS Workshop on Machine Learning for Creativity and Design on the 14th December in Vancouver and the ICCV Workshop on Computer Vision for Fashion, Art and Design in Seoul on the 2nd November. Come drop by if you are going to either conference! If not, accepted papers and art will be online in due course.

Reiichiro Nakano wrote up his Neural Painters project, a generative model for brushstrokes learned from a real non-differentiable and non-deterministic painting program. DeepMind released an improved SPIRAL for doodling and painting and also VQ-VAE-2, which generates high fidelity images with more diversity. Using a single image, Adobe researchers build dynamic images and Samsung creates talking heads. Stanford researchers can now edit talking head videos as if they were editing text. MIT’s Speech2Face reconstructs an image of a person’s face from a short audio segment of speech. Google’s VideoBERT predicts what will happen next in videos, SynVAE translates visual art into music and SVG-VAE generates new fonts. NVIDIA’s FUNIT PetSwap allows you to turn your pet into another species and if you missed GauGAN earlier, it’s still happy to turn your doodles into lifelike landscapes. Augustus Odena compiled a list of open problems about GANs.

In terms of datasets, new releases include the Fluent Speech Commands dataset containing 30,043 utterances used for controlling smart-home appliances or virtual assistants; AMASS, a large database of human motion; Celeb-DF, a dataset for deepfake forensics; generated.photos with 100,000 generated faces; the MIMII sound dataset for malfunctioning industrial machine investigation and inspection; the Lyft Level 5 AV dataset with autonomous driving data; and Topical Chat, a knowledge-grounded human-human conversation dataset of 235,000 utterances. Here’s a list of bioacoustics datasets and earth system datasets. Google released two new dialogue datasets, a landmark recognition dataset with over 5 million images of more than 200 thousand different landmarks and a deepfake detection dataset with over 3,000 manipulated videos from 28 actors in various scenes. Meanwhile, Fei-Fei Li and Princeton researchers are aiming to improve the ImageNet dataset by addressing issues of fairness and representation. Microsoft’s MS Celeb dataset has now been deleted.

Opportunities 🚀

Open calls: the International Conference on Live Interfaces 2020 has a focus on AI and seeks papers until 2nd November. The Conference on Computation, Communication, Aesthetics & X (xCoAx) and Algorithms that Matter (ALMAT) are seeking submissions around computational tools, algorithms and media art until 31st January. EvoMusArt is looking for submissions on computational intelligence techniques in artistic fields, deadline 29th April. EPSRC Network Plus in Human Data Interaction (HDI) has a call for projects on HDI, AI and machine learning technologies and their impact on the creative industries until 31st January.

Residencies: European Media Art Platform (EMAP) offers a two-month residency at one of 11 hosting institutions, including Ars Electronica and FACT; apply before 2nd December. European ARTificial Intelligence Lab, in partnership with Experiential AI in Edinburgh, seeks an artist for its Entanglements theme on fair, moral and transparent AI; apply by 29th October. With VisionarIAs, Fundación Zaragoza Ciudad offers a residency on vision and visuality in relation to the creativity of machines; apply by 4th November. Runway are looking for a ‘something-in-residence’; recent residents Philipp Schmitt and Allison Parrish wrote about their experiences. STRP Award for Creative Technology seeks submissions themed around the Post Anthropocene until 10th November.

Academic jobs: UC San Diego is looking for an Assistant Professor: Computational Artist / Designer, apply until 2nd December. The National University of Singapore seeks a post-doc to work on deep learning approaches to models of musical and environmental audio. DXARTS at the University of Washington is looking for an Assistant Professor in Data-Driven Arts Practice until 15th November. MARCS at Western Sydney University is looking for PhD students to work on a deep learning generative model for adventurous keyboard music. NYU Tisch ITP & IMA is looking for an Assistant Arts Professor, apply until 1st December. In Melbourne, the startup move37 is looking for a Research Scientist to create tools to augment human conceptual creativity.

Things to do 😃

London: Trevor Paglen’s new dataset-related work is on at the Barbican until 16th February; Heather Dewey-Hagborg’s commission How do you see me? is on at The Photographers’ Gallery until 14th November as part of its Data/Set/Match programme; the Serpentine Gallery hosts Jenna Sutela’s I, Magma until 12th January. The Design Museum is showing a variety of AI-related projects in its Digital Category for the Beazley Designs of the Year and lets you vote for your favourite. The Lumen Prize is exhibiting its Director’s Showcase at The Cello Factory Gallery between 30th October and 2nd November. Rob Laidlow’s orchestral piece Alter, using and about AI, will premiere at the Barbican on 2nd November. In Margate, Turner Contemporary is showing Welcome Chorus, an interactive AI-based outdoor installation by Yuri Suzuki, until January 2020.

Europe: The Ars Electronica Center reopened earlier this summer with the Understanding AI exhibition, showcasing how the technology works via a multitude of demos and artworks. There’s also an exhibition on AI x Music there. The Unknown Ideal at Edith-Russ-Haus for Media Art presents a survey of Zach Blas’s practice around digital technologies and the cultures and politics behind them. Fondazione Prada in Milan is showing Kate Crawford and Trevor Paglen’s Training Humans until 24th Feb. Last month, Berlin finally saw the opening of the Futurium museum, with works by Gene Kogan and Sofia Crespo.

Elsewhere: In the US, Refik Anadol’s Machine Hallucination installation is at Artechouse in NYC and Cooper Hewitt’s Process Lab has an exhibition around facial recognition until 17th May. In Montreal, Anteism books is hosting Latent See, an open-studio research exhibit resulting from residencies concerning AR, AI and exploratory anomalies, while in South Korea Trevor Paglen exhibits Machine Visions at the Nam June Paik Art Center until 2nd February.

Thanks for getting this far. It has been a long one. Anything I missed? Drop me a line if you have any updates I should include or if you need any creative AI-related help.

Subscribe to this newsletter here


Written by Luba Elliott

All about AI in creative disciplines. Researcher, producer, curator.
