Introduction to Machine Learning and RunwayML
Presentation:
RunwayML
Download: https://runwayml.com/download
Models
Learning
Videos
https://www.youtube.com/channel/UCUBqu_z5uP0AZhYtuyFZB3g/videos
Interfaces
Processing, OpenFrameworks, Photoshop, p5.js, Arduino, Max, …
https://learn.runwayml.com/#/networking/examples
Integrations
Photoshop, Unity3D, …
https://runwayml.com/integrations
Workshop
We each pick a topic from the p5.js examples and familiarize ourselves with it in groups. The goal is to develop our own concepts for an implementation, test them, and present them to the class.
- AttnGAN Generate Images from Text
- CycleGAN Image Translation
- GPT-2 Generate Text
- im2text Generate Text from Images
- PhotoSketch Create Contour Drawings I with PhotoSketch
- StyleGAN Create Animated Image Transitions
- StyleGAN Generate AI Rainbows
https://learn.runwayml.com/#/networking/examples?id=p5js
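All of these p5.js examples follow the same networking pattern: the model runs locally in Runway and is queried over HTTP. A minimal sketch of that pattern, using GPT-2 as an example (the port and the field names "prompt" and "text" are assumptions; the exact specification for each model is shown in its Network tab in Runway):

// Minimal p5.js sketch querying a locally running Runway model over HTTP.
// Assumed: the model listens on http://localhost:8000 and expects/returns
// JSON of the form { prompt: "..." } / { text: "..." }.
const runwayHost = 'http://localhost:8000';

function setup() {
  createCanvas(600, 400);
  background(255);
  // Send one query when the sketch starts.
  httpPost(runwayHost + '/query', 'json', { prompt: 'today' }, gotResult, gotError);
}

function gotResult(result) {
  fill(0);
  // "text" is the assumed name of the output field.
  text(result.text, 20, 20, width - 40, height - 40);
}

function gotError(err) {
  console.error('Runway query failed:', err);
}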
Examples:
https://experiments.runwayml.com/generative_engine/
ML Lyrics by Lea
Since my Interaction Design project for this term was about plants making sound and music, I was interested in creating song lyrics using Machine Learning.
Iron ML
First I wanted to learn how training models with lyrics works in general, and how large the data set and the number of training steps have to be to generate more or less meaningful lyrics in a recognizable style.
I chose to create a data set from Iron Maiden songs, since the band has released an impressive number of songs, which generally contain quite a bit of text.
I collected all of their songs in one text document and separated the lyrics into paragraphs. The song titles were the first line of each paragraph.
Then I started the training using GPT-2. I set 5000 steps, and the training took about four hours.
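The combined text document was put together by hand. A small Node.js sketch of how the same layout could be produced automatically, assuming one .txt file per song named after its title (folder and file names are made up):

// Hypothetical helper, not part of the original workflow: merge one lyrics
// file per song into a single GPT-2 training document. The song title (taken
// from the file name) becomes the first line of each paragraph.
const fs = require('fs');
const path = require('path');

const songDir = 'lyrics';           // assumed folder with one .txt file per song
const outFile = 'iron_maiden.txt';  // combined data set used for training

const paragraphs = fs.readdirSync(songDir)
  .filter((file) => file.endsWith('.txt'))
  .map((file) => {
    const title = path.basename(file, '.txt');
    const lyrics = fs.readFileSync(path.join(songDir, file), 'utf8').trim();
    return title + '\n' + lyrics;
  });

// Blank lines between the paragraphs separate the songs.
fs.writeFileSync(outFile, paragraphs.join('\n\n') + '\n');
console.log('Wrote', paragraphs.length, 'songs to', outFile);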

The results indeed sound like Iron Maiden songs, but whole paragraphs were taken over directly from the lyrics in the data set. Also, the song titles cannot be recognized as such, since I did not mark them in the data set.
Here is one example of lyrics generated from the Iron Maiden data set with the input word „today“:
today
We will ride this thunderbird
Silver shadows on the earth
A thousand leagues away
Our winter times are like a silent shroud
To me the things I see are just
Deceiving me
Making me see things I don't want to see
These are things I cannot see
In the black the moonlight casts a
Shadow on the earth
A thousand leagues away
Our winter times are like a silent shroud
To me the things I see are just
Deceiving me
Making me see things I don't want to see
These are things I cannot see
In the black the moonlight casts a
Shadow on the earth
A thousand leagues away
Our winter times are lik
Happy and sad lyrics
Iron Maiden lyrics were a great start, but they did not help me much with my plant music project.
That's why I decided to create data sets based on emotions, so I could express the wellbeing of a plant. To simplify the whole process, I reduced the emotions to happy and sad.
I asked people on social media to enter happy and sad songs in a spreadsheet. In the end, the sheet contained 191 sad and 124 happy songs. While assembling the data set, I stumbled across the problem that some of the songs were in German or other languages. I ignored that and added those songs to the data set along with the English lyrics. This time I put the song titles in quotes to mark them.
For cost reasons, I reduced the number of training steps to 1000 for each data set.
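The idea behind the two models can be sketched in p5.js like this: a wellbeing value of the plant decides whether the happy or the sad model is asked for lyrics. The endpoints, the field names and the wellbeing scale are assumptions; each trained checkpoint is assumed to run as its own model in Runway:

// Concept sketch, not part of the original project code: pick the "happy" or
// "sad" lyrics model depending on a plant wellbeing value between 0 and 1.
const happyModel = 'http://localhost:8000/query'; // assumed endpoint of the happy model
const sadModel   = 'http://localhost:8001/query'; // assumed endpoint of the sad model

function lyricsForPlant(wellbeing, prompt, callback) {
  const endpoint = wellbeing >= 0.5 ? happyModel : sadModel;
  httpPost(endpoint, 'json', { prompt: prompt }, callback);
}

function setup() {
  noCanvas();
  // Example: a plant with too much light (low wellbeing) asks the sad model.
  lyricsForPlant(0.2, 'too much light', (result) => console.log(result.text));
}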
Here is one example of a sad song created using the input „too much light“:
„Too much Light“
Too much Light
I cannot function
I cannot function
I am broken down
Stacking up problems
Is too much light
Too much light
Suffocating
Don't know how or when
To break away
But I know how
How the darkness conspires
And how the light always wins
I know how the darkness conspires
And how the light always wins
Song to a Man on the Edge
Written in Tears by a Dreamer
In a Mirror by a Dreamer
I saw a vision of a new world
Where colors flowed freely
Where animals and people coexisted
I saw a world in crisis
In a desperate caress
I kissed the sky
A million miles away
We are still falling
When I think of you
I think of all the people I've lost
And all the dreams I've had
I wish that I could share
This moment
The happy songs were a lot less meaningful, since almost half of the lyrics in the data set were in German. But this kind of nonsense also has a comedic character that underlines the happiness of the lyrics. Here is an example with „I like it“ as the input:
„I like it“
You like it
I like it (laughs)
So what do you think?
Should I get a tattoo?
Or should I just say I like it?
Tattoos are pretty cool
They're pretty cool
But I like my ears
They're my reality
You can't push it
You can't push it
You can't say I don't like it
Yeah, I like it
You like it
ML Text Examples
In our workshop we tested AttnGAN. We thought of words and sentences that might have to do with our topic in information/interaction design.
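A rough sketch of how such a test could look in p5.js; the input field "caption", the output field "result" (a base64-encoded image) and the port are assumptions taken from the general Runway networking pattern, not the exact AttnGAN specification:

// Sketch: send a sentence to a locally running AttnGAN model and draw the
// returned image. Field names and port are assumptions; check the model's
// Network tab in Runway for the real specification.
const attnganHost = 'http://localhost:8000';

function setup() {
  createCanvas(256, 256);
  background(220);
  queryAttnGAN('a red flower in a glass of water');
}

function queryAttnGAN(sentence) {
  httpPost(attnganHost + '/query', 'json', { caption: sentence }, (res) => {
    // The model is assumed to answer with a base64-encoded image.
    loadImage(res.result, (img) => image(img, 0, 0, width, height));
  });
}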
ML Training Images
In Runway I tried training on logos with Image Training. For this project I took some well-known sports logos. The result looks a little strange, but you can easily recognize the abstract „Adidas“ logo. It looks like a watercolor image.
Skyline GAN – Overview (By Max Döres)
For „Skyline GAN“ I gave RunwayML black-and-white city skyline pictures to learn from. As a result, I got generated skylines of cities that don't exist. In addition, I generated fictitious city names to complete the images.
Skyline GAN – Results
The final model can be found at:
www.thisskylinedoesnotexist.com
I created a small p5 tool for generating a unique skyline using a custom seed from the user.
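A condensed sketch of that tool's core idea: the user's seed deterministically fills a latent vector, which is then sent to the StyleGAN model running in Runway. The 512-value vector "z", the "truncation" parameter and the output field "image" are assumptions about the model's interface, not the tool's actual code:

// Sketch of the seed-based generation idea, not the full tool.
const styleganHost = 'http://localhost:8000';

function setup() {
  createCanvas(512, 512);
  generateSkyline(42); // any seed chosen by the user
}

function generateSkyline(seed) {
  randomSeed(seed); // same seed -> same latent vector -> same skyline
  const z = [];
  for (let i = 0; i < 512; i++) {
    z.push(random(-1, 1));
  }
  httpPost(styleganHost + '/query', 'json', { z: z, truncation: 0.8 }, (res) => {
    // The model is assumed to return the generated image base64-encoded.
    loadImage(res.image, (img) => image(img, 0, 0, width, height));
  });
}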

Source Material:

Generated results:


Skyline GAN – Process
To get a better understanding of RunwayML, I first experimented with FootwearGAN.
It was fascinating to see ImageGAN generating new buildings, so I tried to adapt this technique to generate new city skylines.
City skyline pictures are easy to find and to edit for ML.
I used a script to scrape the images.
I edited the pictures into perfect squares using Photoshop.
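The original scraping script is not reproduced here; as a hypothetical sketch, the download and the square cropping could also be combined in a short Node.js script (Node 18+ for the built-in fetch, plus the "sharp" package; the URLs are placeholders):

// Hypothetical sketch, not the original script: download skyline images and
// center-crop them to squares, similar to the manual Photoshop step.
const sharp = require('sharp');

const urls = [
  'https://example.com/skyline-01.jpg', // placeholder URLs
  'https://example.com/skyline-02.jpg',
];

async function run() {
  for (let i = 0; i < urls.length; i++) {
    const res = await fetch(urls[i]);
    const buffer = Buffer.from(await res.arrayBuffer());
    // Center-crop to a 1024x1024 square for training.
    await sharp(buffer)
      .resize(1024, 1024, { fit: 'cover' })
      .toFile('skyline-' + String(i).padStart(3, '0') + '.jpg');
  }
}

run().catch(console.error);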
As a basis for StyleGAN2 I used the church preset, because these structures are often similar to city skylines. For the training process I collected approx. 180 pictures and worked with 3500 steps.
After the learning process, I was able to generate new skylines:
The stunning first results:
These were my raw pictures:
In addition, I generated fictitious city names, based on a city name database, to complete the images.
Skyline GAN – Training Process
Album Covers Style Transfer