First draft of Unreal Engine final project “Reality Threshold”
[RTSS] Interaction
Web sketch: LINK
One interaction on the web that I found compelling is the WebGL example for lens flare.
I found the open-world space navigation very fluid and intuitive; it matches the feeling of flying well. To navigate the space, you only need one mouse input, which influences the direction you're looking and whether or not you move forward. To stay still, you keep the mouse at the center. To change the direction of the camera, you push the mouse toward the direction you'd like to pan. The further the mouse is from the center, the faster the camera pans. If you click the mouse, the camera moves forward so that you can travel through the space.
I tried implementing this interaction in my homework assignment from the previous week. I did this by removing the first-person controls and creating my own camera movement. I had a mouse tracker (ev.clientX and ev.clientY) that took note of where the mouse position was in relation to the center of the canvas. Then, depending on whether the mouse was toward the left, right, up, or down, the camera would rotate in that direction. Next, I tried to push the camera forward whenever the mouse was held down. I had a bit of trouble with this: the camera was only pushed once per mouse click, but I wanted the pushing to be continuous while the mouse was held down.
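A minimal sketch of how the continuous push could work, assuming a three.js scene where `camera`, `scene`, and `renderer` already exist (the names and speed values here are placeholders): instead of moving the camera inside the click handler, a flag tracks whether the mouse is held down, and the movement is applied every animation frame.

```js
// Hedged sketch: fly-through controls with a continuous push while the mouse is held.
let mouseX = 0, mouseY = 0;   // offset from the window center, in pixels
let isPressing = false;       // true while the mouse button is held down

window.addEventListener('mousemove', (ev) => {
  mouseX = ev.clientX - window.innerWidth / 2;
  mouseY = ev.clientY - window.innerHeight / 2;
});
window.addEventListener('mousedown', () => { isPressing = true; });
window.addEventListener('mouseup', () => { isPressing = false; });

const forward = new THREE.Vector3();

function animate() {
  requestAnimationFrame(animate);

  // Pan faster the further the mouse is from the center.
  camera.rotation.y -= mouseX * 0.00002;
  camera.rotation.x -= mouseY * 0.00002;

  // Apply the push every frame while the button is held,
  // instead of once per click.
  if (isPressing) {
    camera.getWorldDirection(forward);
    camera.position.addScaledVector(forward, 0.5);
  }

  renderer.render(scene, camera);
}
animate();
```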
Additionally, when I translated this interaction to my bathhouse, I realized that this interaction wasn’t as suitable as it was for the space-like flying movement in the example I found. The anti-gravity, flowy movement didn’t match as well with my box-shaped bathhouse, where the user would expect a more rigid walking movement.
[Storytelling] Retelling a Story
Helen Lin - Storytelling for Project Development - Spring 2025
Brief
For my retelling, renamed "dorm room", I would like to rework this interactive storytelling project that I built with my friends (JZ on code, Kathleen on art direction, Sarah and me on illustration, and Sav Du on music) in my sophomore year of undergraduate study.
“dorm room”, previously called R o o m, takes you through a college dorm room in four different states (one for each season) and encourages you to poke around, gleaning details about what kind of person the student is through their belongings and how those belongings change over time.
Original Version
This original storyboard has a linear story structure, taking the viewer through the room in four different states (one for each season). It starts with the beginning of the semester, unpacking boxes, through winter and spring, and then to the end of the semester, packing the belongings back up.
The original is more of an animated comic strip, but I want to make it more interactive, with the storytelling elements outlined more clearly.
We built this for a hackathon so we were working on a tight deadline and rushed some components. Initially, we wanted this character to be a lot messier, so in the new rendition, I want to exaggerate this aspect.
Reworked Version
I want to rework this piece to have a stronger story by fleshing out more literal glimpses into the owner of the room (who I will call JC). For the new version, I'd like to create a more meandering, spiral story structure by creating more access points for learning about different aspects of JC's life.
Firstly, I mocked up some of the storytelling elements that I wanted to incorporate by mapping out the elements and adding text boxes to narrate for each object. I mostly wanted to focus on some repeating aspects of JC’s life, mainly the physics class, playing lacrosse, and a brief romance with a girl named Justine.
Mockup: dorm room mockup
I want the audience's role to be to visit (1) or spectate (2).
To technically execute this new version, I wanted to practice what I learned last semester in ICM by rebuilding the interactions using p5. I organized my files on my local computer and set up my local working environment using VSCode.
It took quite a bit of time to pin down the exact position where I wanted to place each object, so for the time being, I'm using the exact pixel number of each (x, y) coordinate. To quickly estimate each point, I put a mouseX and mouseY tracker in the top left.
Next, I drafted thought bubble text for each clickable item. These texts only show up when a specific region (mapped to the position of the object) is clicked. I reused the code for detecting a new mouse click from a previous sketch I worked on: whenever there is a newClick(), the code displays or hides a text box when applicable. The text is different for each clickable object. It was also important to me to change the cursor to a hand when hovering over a clickable object to make it intuitive to use.
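For reference, a stripped-down version of this click-region logic might look like the following sketch. The item coordinates and text are hypothetical placeholders, and I'm using p5's built-in mousePressed() here in place of the newClick() helper.

```js
// Hedged sketch of the clickable-object logic with hard-coded pixel regions.
let items = [
  { x: 120, y: 340, w: 80, h: 60, text: "JC's physics problem set, untouched since Tuesday.", visible: false },
  { x: 400, y: 210, w: 100, h: 90, text: "A lacrosse stick leaning against the closet.", visible: false },
];

function setup() {
  createCanvas(800, 600);
}

function draw() {
  background(245);
  // mouseX / mouseY tracker in the top left, used to estimate coordinates
  fill(0);
  text(`${mouseX}, ${mouseY}`, 10, 20);

  // change the cursor to a hand when hovering over any clickable region
  cursor(items.some((it) => overItem(it)) ? HAND : ARROW);

  // thought bubble text for whichever objects are toggled on
  for (let it of items) {
    if (it.visible) text(it.text, it.x, it.y - 10);
  }
}

function mousePressed() {
  // toggle the text box for whichever object was clicked
  for (let it of items) {
    if (overItem(it)) it.visible = !it.visible;
  }
}

function overItem(it) {
  return mouseX > it.x && mouseX < it.x + it.w &&
         mouseY > it.y && mouseY < it.y + it.h;
}
```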
One of the challenges I faced was getting the animated gif to play on each new hover without needing to reload each time. I haven't had the time to research and figure this out, but it would improve the user experience. For the time being, the gif only plays once while actively on the page, and resets once the page is changed.
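One approach that might work, assuming the gif is loaded with loadImage() as a p5.Image (which, as far as I can tell from the p5 reference, has a reset() method for rewinding animated GIFs), is to rewind it whenever a hover newly begins:

```js
// Rough idea for restarting the gif on each new hover; the file path and
// hover region are placeholders.
let gif;
let wasHovering = false;

function preload() {
  gif = loadImage('assets/desk.gif'); // hypothetical asset path
}

function setup() {
  createCanvas(800, 600);
}

function draw() {
  background(255);
  let hovering = mouseX > 100 && mouseX < 300 && mouseY > 100 && mouseY < 300;
  if (hovering && !wasHovering) {
    gif.reset(); // rewind the animated gif to its first frame on a new hover
  }
  if (hovering) image(gif, 100, 100, 200, 200);
  wasHovering = hovering;
}
```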
For this iteration (since I was reusing illustrated assets), a lot of the creative freedom this time went into writing the text. I was able to support my meandering story structure by choosing what to highlight in the text.
All in all, it helped to push myself to throw a prototype together very quickly while focusing on the aspects of interactive storytelling I have the least experience with (writing, coding, interaction design).
[RTSS] Light and Shadows
Link to sketch: LINK
Inspiration image from Library of Congress
I appreciate the ease with which I can navigate the space in newart.city exhibitions (leymusoom, shared by my friend Vinh, being one of my favorites). The UI feels intuitive and familiar. The tutorial page is succinct and placed at a location that feels both non-disruptive and easy to find. I feel excited by the potential of this tool and how it can make 3D environment building more accessible and experimental. I'm especially excited by moments where elements of the space can be exaggerated and abstracted. I'm also fascinated by the idea of walking through wall text as if it were a 3D space. One thing I did notice was how important responsive input is to simulating the feeling of my body / brain / eyes walking through the space itself. Otherwise, the lag made me feel detached from the avatar and I didn't feel "present". It's possible my internet connection wasn't fast enough to render the details of the models.
The artworks listed in Coco Sui's The Third Space looked interesting, but I couldn't make it through the front door.
Shashank Satish's "Digital (Dis)embodiments" made me think a lot about how text can be navigated through 3D space. How does the way it's laid out in this additional dimension change the way we read it?
[RTSS] Public Spaces on the Internet
Link to sketch: LINK
Blog: Write a post on your blog about a (physical) public space you use personally. This might be your local park or green space, a library reading room, a cafe in your neighborhood, or something else entirely. Why do you spend your time in this space rather than any other? What do you like about it? How do you engage with others (if at all) within this space?
The public space that I use most frequently and prominently is the MTA subway. There's a reason why so much media and content about the space is produced by the people who live here. The people who use it often have to use it at a high frequency to carry out their daily tasks. It has such recognizable sounds (the screeching of the train accelerating, the rumbling against the tracks, "stand clear of the closing doors", muffled train conductor announcements) and recognizable sights (faces buried in phones, flashing lights from the windows underground, salt and stains on the floor, littered cups of coffee rolling around). Perhaps because of this overstimulating environment, most people tend to retreat inwards, avoiding as much interaction with each other as possible. Since I got noise-cancelling earphones, my subway experience has improved drastically. It's a public place, yet my daily commutes feel like a very private time to me. Here, I often sit, put my earbuds in, pull up my mask, and escape into my mind to hasten the experience of the ride. It's a place I'm required to be in when I use it, but it's also become sentimental. I've done homework, cried, scarfed down quick meals, written journal entries, crafted gifts, napped, watched videos, and gotten to know people on the train. Despite the lack of comfort, hygiene, and predictability, it's become an important place where people have done so much living and growing up.
Still life study of a teapot on table using three.js
[RTSS] Beyond Being There (Intro to three.js)
How well or poorly do the examples from the article “Beyond Being There” (1992) capture your experience of hybrid social/collaborative/working experiences today? What elements of your hybrid life are missing from this piece?
A lot of the ways that we interpret communication via physical vs. electronic space highlighted in this article feel applicable today, particularly in the way we perform synchronous and asynchronous communication. We continue to meet in person to pursue the intimacy that online communication (synchronous or asynchronous) lacks, and we continue to use email or DM to fill in for the need for asynchronous communication that physical interaction lacks.
The article feels less applicable when speaking about synchronous telecommunication and archiving, as many of the tools developed since are able to capture a wide range of vocal intonations and facial expressions. Remote communication has become widespread even in professional environments, especially since the beginning of the pandemic, when the majority of the population transitioned to working from home. However, the "ease" of synchronous communication assumes that users on both sides have reliable tools to capture and render video and audio I/O and a high-speed internet connection, access that is commonly barred by wealth, resources, or location.
For those of us who are lucky enough to have both options on hand, I find that having electronic tools to chat remotely changes the experience by adding a level of choice in the amount of anonymity we want to preserve. If we deem our appearances unfit for camera, we can simply turn it off or add a filter. If our surroundings are too noisy, we can mute our microphones temporarily. If we want to further conceal our identities while sharing our stories, we can modulate or pitch-shift our voices. Animated characters such as VTuber avatars can replace our corporeal bodies while mimicking our facial expressions and motions with an increasing amount of complexity, allowing us to communicate without needing to reveal any details about our real name, age, or face. I think the telecommunication problem referenced in the article, that "we must develop tools that people prefer to use even when they have the option of interacting in physical proximity as they have heretofore... to do that requires tools that go beyond being there", has been addressed by giving us control over moderating and modifying the way we present ourselves from the convenience of our own homes. All of these potential modifications do decrease the level of intimacy felt in online interactions, but they provide an experience otherwise impossible in physical communication. It is an alternate possibility for, without fully replacing, physical communication... for now.
Adding a strange unicorn spout (to be fixed)
Adding a table (legs to be added)
[ICM] Webpage Final Project
Link to project: LINK
For my final project, I wanted to build a functioning website where I can browse through all the discarded textiles that have been acquired as materials in my studio. For each item acquired, I wanted to show its different states, starting with just three visual representations (the front side, the back side, and the goodbye letter from the person who donated the item). I wanted the layout to change on every click. It was also important for this website to be scalable (since it's likely that I'll have many more textiles added to the fabric stew over time) and responsive to the size of the browser. This work ended up using the DOM, HTML, and CSS more than previous assignments.
Since I was working with a lot of images (120 minimum), I quickly realized I had to move off the p5 web editor and use a local server to preview the interactions on my webpage. Lucia from The Coding Lab quickly introduced me to VSCode and an easy way to go live / preview the webpage (via the "Go Live" button on the lower right). I downloaded my files from the p5 web editor as a package and set up my workspace on my local computer accordingly.
Next, to create a responsive webpage, I used flexbox to organize all the buttons (each button triggers image loading) and made the JavaScript code constantly check whether the window has been resized. The initial setup creates a p5 canvas corresponding to the size of the window and then resizes the p5 canvas if the window has been resized. Then, I made sure that later, when I was setting the positions for each image, they would fall within the range of the most recently sized canvas.
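A minimal sketch of this responsive setup, using p5's windowResized() callback as the resize check (the randomPosition() helper is just for illustration):

```js
// p5 canvas that tracks the browser window size.
function setup() {
  createCanvas(windowWidth, windowHeight);
}

function windowResized() {
  // p5 calls this automatically whenever the browser window changes size
  resizeCanvas(windowWidth, windowHeight);
}

// used later when placing images, so positions always fall within
// the most recently sized canvas
function randomPosition(imgW, imgH) {
  return {
    x: random(0, width - imgW),
    y: random(0, height - imgH),
  };
}
```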
Next, I wanted to make it so that every time one of the buttons was clicked, the images could layer on top of each other and create new, original compositions each time. The positions and sizes for each image are randomly generated with each button click. I set the back photo to always be partially transparent and the blend mode for the goodbye letter to multiply, so that interesting overlays and textures are created on each button click as well.
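Roughly, the layering for one click could look like this sketch, assuming the three images for the current item are already loaded (frontImg, backImg, and letterImg are placeholder names):

```js
// Draws one randomized composition from the three states of a single item.
function drawLayeredComposition(frontImg, backImg, letterImg) {
  let layers = [
    { img: backImg,   alpha: 128, blend: BLEND },    // back side: semi-transparent
    { img: frontImg,  alpha: 255, blend: BLEND },    // front side: opaque
    { img: letterImg, alpha: 255, blend: MULTIPLY }, // goodbye letter: multiply
  ];
  for (let layer of layers) {
    // random size and position within the current canvas
    let s = random(0.3, 0.8);
    let w = layer.img.width * s;
    let h = layer.img.height * s;
    let x = random(0, width - w);
    let y = random(0, height - h);
    blendMode(layer.blend);
    tint(255, layer.alpha); // partial transparency for the back photo
    image(layer.img, x, y, w, h);
  }
  blendMode(BLEND); // restore defaults for the next frame
  noTint();
}
```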
Coding the image handling and positioning was the most challenging part of this project. I didn't want to overload the browser by loading too many images at once, and my teacher Allison helped me a lot in problem-solving this code. With my current model, the sketch only loads the clicked number's corresponding images (just three) into an array when prompted by the button click. The draw() loop renders those images over and over again, and when there is a new button click, the entire image stack is released and a new series of images is pushed in. Initially, I wanted to create DOM images instead of p5 images, so that I could have an isolated sketch just for holding the 3D object for each button click. However, I had trouble switching my code from p5 image objects to HTML elements. This is something I want to look into in the future. At some point, I also want to include more context on this webpage, maybe an about page or similar, and refine the layout design.
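A rough version of that on-demand loading model might look like the following, where the file-naming scheme (assets/007_front.jpg and so on) is hypothetical. A button's mousePressed callback would simply call loadItem() with its item number.

```js
// Only the clicked item's three images are ever held in memory at once.
let stack = []; // the currently loaded images plus their positions

function setup() {
  createCanvas(windowWidth, windowHeight);
}

function loadItem(id) {
  stack = []; // release the previous images so only one item's set is held
  for (let suffix of ['front', 'back', 'letter']) {
    loadImage(`assets/${id}_${suffix}.jpg`, (img) => {
      // pick a position once, when the asynchronous load finishes
      stack.push({ img: img, x: random(width - 200), y: random(height - 200) });
    });
  }
}

function draw() {
  background(255);
  // draw() just keeps re-rendering whatever is currently in the stack
  for (let layer of stack) {
    image(layer.img, layer.x, layer.y, 200, 200);
  }
}
```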
[SEA-DP] Revisiting the three ideas...
Revisit your 3 ideas project. How has this semester’s study shifted your thinking on the ideas? What revisions would you make now, based on this knowledge? Post to your SEA-DP blog.
Revisiting my 3 ideas from the beginning of the semester, I am still intrigued by these topics and want to pursue them. Reflecting back now, close to the end of the class, the main difference is that the way I would approach them is more detailed. I also feel better equipped with reference materials and information. My interests are still rooted in the following topics.
For my first idea, a piece about algorithmic bias in ChatGPT and facial recognition models, I would adapt it to be more interactive (per the feedback I received when presenting it). Rather than a data visualization, I think I would collaborate with my data scientist friend who researches this area to develop a game that teaches participants how artificial intelligence works. That would make it more engaging.
For my second idea, an electronic textile lion dance head sculpture illuminating the textile garment industry and history behind Chinatown, I would adapt the idea to be more realistic. I looked into how to build a lion dance head, and in order to make one safe for dancers to use, it would have to be constructed in a very sturdy way. Making lion dance heads is an artisan craft passed down through families, and if I had a budget, I feel it'd be more meaningful to commission one or create one in conjunction with such artisans rather than build one with my limited knowledge alone. Additionally, a lion dance head adorned with electronic components such as LEDs and other reactive light-up components is nothing new. I think a more meaningful project would be to hold lion dance head making workshops for children. Such workshops would not only be an opportunity for children to reconnect with their heritage, but there also wouldn't be as strong a need to adhere to the safety restrictions of proper lion dance heads.
For my third idea, I am planning to pursue this as my SEA-DP final project. Although a lot of the production work is still ahead, it is meant to be preparation for setting up a studio practice of using discarded textiles and making them parsable in a way that can be iterated on in the future. Awaiting feedback on this idea!
[ICM] Media: Sound
Link to sketch: LINK
For this project, I wanted to play around with music visualization and manipulation. It's more likely that I'll be working with premade .mp3 files in the future rather than creating my own music / sound pieces, so I decided to practice using p5.FFT and loadSound() rather than experiment with p5.Oscillator.
Firstly, I imported all the sounds I'd be using into preload(). I loaded 2 amazing dance tracks from two fave artists that I listened to on repeat during late nights in 2019 and 4 instrumental sound bites I found royalty-free online.
Next, I created two sliders and two buttons. The first slider lets you change the volume, the second slider lets you change the song speed, the play button lets you play/pause the song, and the change-song button lets you cycle between the two tracks. In the future, this project could be expanded by adding more songs to the song[] array. I could even take it one step further by making each song an object that contains String variables for the song title, artist, duration, etc. Then, I could make a music player display so that the user of this interface can see what tracks are available to cycle through and the information for each.
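Condensed, the player controls might look something like this sketch (the track file names are placeholders):

```js
// Two sliders (volume, speed) and two buttons (play/pause, change song).
let songs = [];
let current = 0;
let volSlider, rateSlider;

function preload() {
  songs.push(loadSound('assets/track1.mp3'));
  songs.push(loadSound('assets/track2.mp3'));
}

function setup() {
  createCanvas(400, 200);
  volSlider = createSlider(0, 1, 0.5, 0.01);  // volume
  rateSlider = createSlider(0.5, 2, 1, 0.01); // playback speed
  createButton('play / pause').mousePressed(togglePlaying);
  createButton('change song').mousePressed(changeSong);
}

function draw() {
  background(220);
  songs[current].setVolume(volSlider.value());
  songs[current].rate(rateSlider.value());
}

function togglePlaying() {
  if (songs[current].isPlaying()) {
    songs[current].pause();
  } else {
    songs[current].play();
  }
}

function changeSong() {
  songs[current].stop();
  current = (current + 1) % songs.length; // scales to however many tracks are in songs[]
  songs[current].play();
}
```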
To switch between button states, I reused the togglePlaying() code that Pedro from my Physical Computing class used in his p5 serial communication code. It was an easy way to show functionality as well as song state using the button.
Additionally, I used the keyTyped() code from Allison Parrish’s sound example for triggering events based on keyboard presses. When the user presses the characters ‘a’, ‘d’, ‘s’, or ‘f’, the character’s respective percussion sound plays. Ideally, the user would play around and trigger the drum sounds in time with the music playing (kind of like shaking the tambourine during karaoke).
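A small sketch of that keyboard mapping, with placeholder sample files standing in for the four percussion sounds:

```js
// Each key plays its own percussion sample on top of whatever track is playing.
let drums = {};

function preload() {
  drums.a = loadSound('assets/kick.wav');
  drums.s = loadSound('assets/snare.wav');
  drums.d = loadSound('assets/hat.wav');
  drums.f = loadSound('assets/clap.wav');
}

function keyTyped() {
  // `key` is the character that was just typed
  if (drums[key]) {
    drums[key].play();
  }
}
```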
Next, I wanted to create a visualizer for the song. I've always wanted to try out 3D graphics within the browser; we didn't have time to go into it within the scope of ICM, but I still wanted to try it a bit. I created a 3D cone and enabled orbitControl() so the 3D space can be spun around using the trackpad. I didn't like the look of adding lights(), so I decided to keep it stylized without shading. However, this meant that upon first glance the user wouldn't be able to tell that the space was 3D, so I added a rotation animation to the 3D space.
Then, I used p5.FFT to analyze the song’s sound and create a visual from it. I used the FFT bars example from class and adapted it to create an abstracted spiral using the rectangles. It’s harder to decipher, but does create an interesting visual. I’d like to play around more with the visualizer graphics in the future.
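An abbreviated version of the visualizer might look like the following, assuming a track is already playing through p5.sound (boxes stand in for the rectangles here, and the spiral spacing values are approximations):

```js
// FFT bars laid out along a spiral in a WEBGL canvas.
let fft;

function setup() {
  createCanvas(600, 600, WEBGL);
  fft = new p5.FFT(0.8, 64); // smoothing, number of frequency bins
}

function draw() {
  background(10);
  orbitControl();              // lets the trackpad spin the 3D space
  rotateY(frameCount * 0.005); // slow rotation so the 3D-ness reads at a glance

  cone(60, 120);               // stylized centerpiece, no lights() on purpose

  // lay the frequency bars out along a spiral instead of a straight row
  let spectrum = fft.analyze();
  for (let i = 0; i < spectrum.length; i++) {
    let angle = i * 0.4;
    let radius = 80 + i * 3;
    push();
    rotateY(angle);
    translate(radius, 0, 0);
    box(8, map(spectrum[i], 0, 255, 2, 150), 8); // bar height follows amplitude
    pop();
  }
}
```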
Here it is in motion~
[PCOMP] Synchronous Serial Communication (I2C and SPI)
For this week, I decided to do the OLED lab for I2C and the Playing .WAV Files lab for SPI.
OLED Screen Display using I2C
Firstly, I put together the circuit wiring with the OLED and the potentiometer. To get the OLED to work, I had to import the Adafruit library for the OLED module and the GFX library, and then initialize the screen.
Next, I imported a font library and changed the text size using the display.setFont() function in Arduino code. On the picture to the left, you can see that the text for “sensor” is a lot bigger than it was before.
I also tried out Richard Moore's QR code library by downloading it in the Library Manager. I used the code provided on the lab page to generate a QR code from the string I sent over Serial, and it created a lovely QR graphic to display on the OLED.
This is what I sent through the Serial Monitor. When I scanned the QR code, it sent me to “hi this is my message” on my browser.
Lab: Playing .WAV Files from an Arduino using I2S and SPI
For the sound lab, I checked out a microSD card reader, an audio breakout board, and an I2S amplifier from the shop. I borrowed a microSD card from a friend and loaded my favorite Doja Cat song onto it as a .wav file. Then, I plugged the hardware into the breadboard using the following connections:
SD Card Reader
Vcc – voltage in. Connects to microcontroller voltage out
CS – Chip select. Connects to microcontroller CS (pin D10 on the Nano/Uno)
DI – SPI data in. Connects to microcontroller SDO (pin D11 on the Nano/Uno)
SCK – SPI clock. Connects to microcontroller SCLK (pin D13 on the Nano/Uno)
DO – SPI data out. Connects to microcontroller SDI (pin D12 on the Nano/Uno)
CD – card detect. Not connected in this example
GND – ground. Connects to microcontroller ground
Amplifier
BCLK connects to A3 of the Nano 33 IoT board
LRC connects to A2 of the Nano 33 IoT board
DIN connects to D4 (SDA Pin) of the Nano 33 IoT board
Vin connects to 3.3V
GND connects to ground
+ connects to the left and right sides of a 3.5mm audio jack
– connects to the center pin of a 3.5mm audio jack
Next, I loaded in the code provided from the lab into the microcontroller. When running it, I couldn’t figure out why the SD card wasn’t initializing at first. Then, I realized I forgot to connect the SD card to power and ground.
The picture on the left is the correct wiring! The audio jack plug is the black attachment at the bottom of the breadboard. I tried plugging in my earbuds and couldn't hear anything… The Serial Monitor said the SD card and .wav file were valid and the file was playing, but no sound was coming out, and I couldn't figure out why. Doja Cat and I weren't meant to be today.
[SEA-DP] Final Project Proposal
For my final project, I want to create a database / archive website for the textile contributions collected throughout my studio practice, starting with those collected in "to you, 100 years into the future", a workshop series and exhibition project inviting participants to actively reflect upon our existing belongings and revisit sewing as a time-honored practice toward emotional healing. Textile contributions were documented using embroidered ID numbers so they remain traceable pre-transformation (when they were collected) and post-transformation (after they were turned into sculptures). The identification numbering system will serve as the base organization method for the website.
This archive shows the items that the workshop participants chose to "discard" into this project's collective fabric stew and the goodbye letters they wrote to the "discarded" textiles. Together, they form a fabric stew of nostalgic colors and prints made from silk, polyester, cotton, denim, and other materials. Each discarded textile is given a new character to play (a bowl, a table, a monitor, a potted plant, a teacup) within this newly constructed home. Each household item transformed is called a "titem", a play on the words "(t)ransformed item" and "totem".
As we click through the archive, we'll see different manifestations of the titem's identity. It will display different modes of viewing: the front and back photos of the item, the goodbye letter written, the 3D object of the item, the 3D object of the titem, and the 3D objects' UV unwrapping. In the same way that our contemporary bodies are now distributed, each textile is no longer just tied to its corporeal self, but to the different representations and data that trail behind it. What would it look like if each textile that slips in and out of our lives were traced through the hands it went through: harvesting the wool, spinning the thread, weaving the fabric, sewing the item, and onwards? Would it change the way we obtain, treasure, or trash each textile?
In creating the website, I'm heavily inspired by Laurel Schwulst's work and her essay "my website is a shifting house", where she writes a manifesto on what a website can be and what the web could look like if it were built and guided by individuals rather than corporations. She writes on the capability of a website to be a living, temporal space particularly effective for world-building, and consequently a medium for artwork. Another resource is Aidan Quinlan's course "Handmade Web", which remains an open-access hub of information and references. In Quinlan's words, "The hand has become increasingly less present in the web as we know it today. Websites are largely automated or built from templates, and the knowledge of how to make a website is relegated to a select few. It has only grown easier to learn how to make websites, but the perceived requirements and expectations for a website have become so convoluted and arcane that many avoid the subject."
[PCOMP] Final Project Proposal
Final project idea: “Three Little Pigs” Full Book
Assignment requirements
Microcontroller to PC (Serial Communication)
Physical Interaction Design Principles
Design principles
For my pcomp final project, I'm working with Chris again to refine our midterm project for the Winter Show. We got a lot of good feedback during the critique that we are interested in addressing in this improved version. I am also looking forward to working more with soft materials and exploring e-textile sensors, switches, and conductive thread.
This time, we want to make an interactive book with a minimum of three pages. We are sticking with the OG story of "Three Little Pigs" since we already have a foundation with it, but we want to tell the story from the wolf's point of view. As the reader flips through the pages, they're asked to help the wolf achieve his goals. For the serial communication aspect, it can be an interface for picking the genre in which you read the story. For example, the background music and sfx played when the person flips through the story depend on the mode they click (funny, lighthearted music for comedy; eerie, creepy laughter in the distance for horror).
I think it'd be really cool to have more pages, but not all of them need an interaction. Some of them can be isolated simple circuits or not have any pcomp at all (so the readers can still have a fleshed out story, and are encouraged to slowly discover and find the interactive components over time).
Materials
Arduino
Android phone (to run p5 sketch)
Sewable LEDs
Conductive thread or copper tape
Photoresistors
Interactive experience
Depending on the different ways you interact with it, you get different results.
How can you allow for more discovery and curiosity?
Let the user dictate what they can control.
Keeping it portable
Q for Pedro: What’s the best way to incorporate the serial communication interaction?
Ideas
Serial communication to control the mood of the scene?
if DAY = white led lights + sound of rooster
if NIGHT = Orange led lights + owl hoot
if CALM = motor is off
if WINDY = motor comes on + wind gust
There could be different placements on the page where you can place a character that completes the circuit to perform different actions (i.e. connects the circuit to the lights/to the motor)
if circuit is complete, sound plays or led is on
Start with three pages
If photoresistor 1 has light, page 1 is open
If page 1 is open, page 1 interactions are active
If photoresistor 2 has light, page 2 is open
If page 2 is open, page 2 interactions are active
If photoresistor 3 has light, page 3 is open
If page 3 is open, page 3 interactions are active
Feedback from Pedro:
Look at past interactive book projects because there's a lot out there
Maybe focus more on the interactive element than the pop up book element
Self contained vs. connecting to a computer are conflicting goals
We can use HID with phone to trigger interactions in the book instead of computer?
USB-OTG (on the go) allows android phone to show up as a keyboard that can send input into the arduino. Using a phone will both power and give sound to the story.
Use android from ER and run p5 from browser.
Phone can be connected to arduino inside the book (and play different animations depending on the page that's open).
Can also just use the phone to play the sounds
Light sensors can be used with holes cut out to tell which page has been flipped.
[SEA-DP] mecha mecha mecha - a demo
mecha mecha mecha is a livestreamed performance-lecture and participatory nail reading that explores the networked and disembodied self through the persona of the "girl". Our interpretation of the "girl" is an ungendered model / AI that is stepped into while navigating digital spaces. In mecha mecha mecha's accessories, the performing body is prompted to both move and interact with the world in ways beyond the limitations that societal norms have shaped. We've equipped the hard shell of the fingertips with a soft armor. The ruffle on each nail is coiled and tightened up, but when undone using the drawstring, it reveals an embroidered excerpt from the text to be read out loud into the microphone. mecha mecha mecha livestreamed performance activates the text from every angle, recorded using multi-projection camera captures, amplified by the microphone and pitch shifts, and put into motion with our hands and voices.
This performance was conceived and executed in collaboration with Vinh Mai Nguyen as part of their thesis on the Cute and the Nail.
Girl dinner, hot girl walk, that girl, clean girl, girl boss, girl math, girl blog, hot girl summer, pick me girl, christian girl autumn, vsco girl, e-girl, good girl, bad girl, sad girl, manic pixie dream girl, i’m just a girl, girl’s girl, girl power, rat girl, feral girl, gorgeous gorgeous girls love soup, it girl, cam girl, for the girls, girl code, horse girl, gamer girl, girlypop, tomato girl, olive girl, red onion girl, girl next door, riot grrrl, gremlin girl, girl shopping, daddy’s girl, dream girl, babygirl, girl rot, girl blunt, fangirl, go piss girl, girl pretty, the girl reading this…
This project pulls from Girl theorist Alex Quicho’s use of the mecha as a metaphor for “climb[ing] into and pilot[ing] this already-existing subject that has the unique privilege of being greater than us all, yet thoroughly downplayed and underestimated.” As Quicho writes in Everyone is a Girl Online, “It may well work in our favor to accelerate our way into Total Girl—that is, to consider the girl as a specific technology of subjectivity that maxes out on desire, attraction, replication, and cunning to achieve specific ends—and to use such technology to access something once unknowable about ourselves rather than for simple capital gains, blowing a kiss at individually-scaled pleasures while really giving voice to the egregore, the totality of not just information, but experience, affect, emotion.” Tracing homologies from the Girl to AI brings us to the upstream effects of Total Girl; a perfect model for AGI aspirants; the well-dressed singularity that retroactively writes itself into existence from the future one purchase at a time.
[ICM] Media: Pixels
Link to sketch: LINK
I've always been captivated by remnants of the broken web, finding glitches that create beautiful and seemingly temporal effects. For this project, I wanted to experiment with datamoshing, the process of changing the data within media files to create distorted visual or auditory effects. Currently, many people achieve this effect using software that runs scripts on the media files' data, but is it possible to make something glitch-like using what I've learned through code in p5?
I started with createCapture() and created a Region class, with region objects generated randomly. A region captures a small area of pixels and holds it across multiple frames (while slowly moving towards the lower right). The movement of the region is slightly random (it can either increase, decrease, or stay the same in the x or y dimension). I added a posterize filter so that the region's image could degrade over time.
Next, I thought the image felt too clear and wanted to play around with the capture on a pixel level, so I created a for loop that would pick a random pixel in the pixel array, and draw it on top of the previous image.
Next, I added an if statement that measures the brightness of each random pixel, and makes it so that the square is only drawn if the brightness goes past a certain threshold. I liked how fragmented this made the capture look. It created floating windows and spaces, feeling dream-like and surreal.
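A condensed approximation of the whole sketch (not a one-to-one reproduction; the thresholds, region count, and drift values are placeholders) might look like this:

```js
// Regions re-stamp patches of the webcam feed while drifting toward the lower
// right, and random bright pixels are stamped on top as small squares.
let cam;
let regions = [];

function setup() {
  createCanvas(640, 480);
  cam = createCapture(VIDEO);
  cam.size(640, 480);
  cam.hide();
  for (let i = 0; i < 5; i++) regions.push(new Region());
  background(0); // draw() never clears, so stamps accumulate across frames
}

function draw() {
  cam.loadPixels();

  for (let r of regions) {
    r.update();
    r.show();
  }

  // stamp random pixels from the feed, but only the brighter ones
  if (cam.pixels.length > 0) {
    for (let i = 0; i < 200; i++) {
      let x = floor(random(cam.width));
      let y = floor(random(cam.height));
      let idx = 4 * (y * cam.width + x);
      let c = color(cam.pixels[idx], cam.pixels[idx + 1], cam.pixels[idx + 2]);
      if (brightness(c) > 60) { // threshold keeps only the bright fragments
        noStroke();
        fill(c);
        square(x, y, 6);
      }
    }
  }
}

class Region {
  constructor() {
    this.w = floor(random(40, 120));
    this.sx = floor(random(640 - this.w)); // source patch in the feed
    this.sy = floor(random(480 - this.w));
    this.x = this.sx;                      // destination drifts over time
    this.y = this.sy;
  }
  update() {
    // mostly drifts down and to the right, with a little randomness
    this.x += random(-0.5, 2);
    this.y += random(-0.5, 2);
  }
  show() {
    copy(cam, this.sx, this.sy, this.w, this.w,
         floor(this.x), floor(this.y), this.w, this.w);
  }
}
```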
I’d love to do more research into mimicking datamoshing at one point though. According to Wikipedia (copying and pasting so I can reread this later a couple times),
“In the field of video compression a video frame is compressed using different algorithms with different advantages and disadvantages, centered mainly around amount of data compression. These different algorithms for video frames are called picture types or frame types. The three major picture types used in the different video algorithms are I, P and B.[1] They are different in the following characteristics:
I‑frames are the least compressible but don't require other video frames to decode.
P‑frames can use data from previous frames to decompress and are more compressible than I‑frames.
B‑frames can use both previous and forward frames for data reference to get the highest amount of data compression.”
According to this website datamoshing.com,
“Modern compressed video files have very complex methods of reducing the amount of storage or bandwidth needed to display the video. To do this most formats don’t store the entire image for each frame. Frames which store an entire picture are called I-frames (Intra-coded), and can be displayed without any additional information.
Frames which don’t contain the entire picture require information from other frames in order to be displayed, either previous or subsequent frames, these frames are called P-frames (Predicted) and B-frames (Bi-predictive). Instead of storing full pictures these P-frames and B-frames contain data describing only the differences in the picture from the preceding frame, and/or from the next frame, this data is much smaller compared to storing the entire picture — especially in videos where there isn’t much movement.
If an I-frame is corrupted, removed or replaced the data contained in the following P-frames is applied to the wrong picture. In the above video I-frames have been removed and so instead of scenes changing properly you see the motion from a new scene applied to a picture from a previous frame. This process of corrupting, removing or replacing I-frames is a very popular video datamoshing technique and what this tutorial will focus on.
Another video datamoshing technique involves selecting one or more P-frames and duplicating them multiple times consecutively. This results in the same P-frame data being applied to one picture over and over again, accentuating the movement and creating what’s known as a Bloom effect.”
Maybe next time I can try pixel sorting, using this tutorial as a template: http://datamoshing.com/2016/06/16/how-to-glitch-images-using-pixel-sorting/.
[PCOMP] Two-way Serial Communication Lab
For this week’s lab, we practiced using two-way serial communication between P5 and Arduino.
Firstly, I set up my analog inputs on the breadboard, plugging in two potentiometers and one button. The breadboard can be seen in the left picture below. The Arduino code for testing out the analog and digital inputs (separated with punctuation) can be seen in the right picture below.
Next, I plugged in a p5 sketch importing p5.webserial to take the inputs and translate them into the movement of a circle in the sketch. One pot is tied to the circle's X position, the other to its Y position, and the button makes the circle disappear when pressed. The majority of the interaction code is under serialEvent() and draw().
Next, I adjusted the code so that the Arduino only reads and sends back the input data when prompted (call-and-response / handshaking). The code isn't shown below, but the program prints "hello\n\r" until it receives a prompt (which can be anything; it's mostly just to tell the program to start), and then prints the input data whenever it receives a prompt to do so.
Next, it's time to implement this interaction in the p5 sketch as well. The main change is to add "serial.print('x');" in initiateSerial().
At first, I made this error where I thought initiateSerial() had to be its own independent function. The code did not work.
Then, I realized that the initiateSerial() was within the openPort() function and that’s the one I needed to modify. This code works!
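For my own reference, a stripped-down version of the p5 side of the handshake, following the lab's p5.webserial setup as I understand it (event names and helpers may differ slightly from the actual lab code):

```js
// Call-and-response: p5 sends 'x', the Arduino replies with one line of data.
let serial;
let circleX = 0, circleY = 0, buttonState = 0;

function setup() {
  createCanvas(400, 400);
  serial = new p5.WebSerial();
  serial.getPorts();
  serial.on("portavailable", openPort);
  serial.on("data", serialEvent);
}

function openPort() {
  // initiateSerial lives inside openPort: send the first prompt only once
  // the port has actually opened, which kicks off the call-and-response loop
  serial.open().then(initiateSerial);
  function initiateSerial() {
    serial.print("x");
  }
}

function serialEvent() {
  let inString = serial.readStringUntil("\r\n");
  if (inString) {
    let values = split(inString, ",");
    if (values.length >= 3) {
      circleX = map(Number(values[0]), 0, 1023, 0, width);
      circleY = map(Number(values[1]), 0, 1023, 0, height);
      buttonState = Number(values[2]);
    }
    serial.print("x"); // ask the Arduino for the next reading
  }
}

function draw() {
  background(255);
  if (buttonState === 0) {
    circle(circleX, circleY, 30); // pressing the button hides the circle
  }
}
```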
Next, I explored my own application by adding serial communication to an ICM homework exercise I did in the past. I modified the p5 sketch to respond to a red circle (its position on the sketch controlled by the two pots) instead of mouse interaction. To click a button on the remote, your circle needs to be in the correct position and the physical button needs to be pressed (instead of a mouse click). It took a bit of time to adjust the code, but I was excited to see my fake "mouse" working. The circle was quite jittery in its movement, so if I were to repeat this exercise in the future, I would add some code to filter out the noise and make the circle movement smoother. I accidentally closed this sketch without saving and screenshotting the code. Lesson learned: always save your work!
[PCOMP] Serial Communication
Firstly, I tried communicating with the port (receiving analog input) through the terminal. The code was first uploaded to the Arduino. Then, once the code was uploaded, I was able to read the output using the "cat" command in the terminal.
The ls command in this folder shows the ports on my computer. The "usbmodem" is only visible when the Arduino is connected.
This is the data that was being read from the port. I used Control + C to quit out of the port.
Next, I went back into the Arduino IDE and Serial Monitor to take a look at the output while modifying the code to be easier to read. The output on the left uses "Serial.write", which sends the information as raw binary all on one line and is notably difficult to read. The output on the right uses "\t" tab and println() newline characters to make the data easier to read.
The output is shown in raw binary value, ASCII-encoded binary value, decimal value, hexadecimal value, and octal value respectively.
Next, I put together three sensors to practice serial communication with multiple inputs. I used several different Serial print methods to format the input readings in a readable way.
For this one, the serial data is formatted and differentiated using punctuation.
Next, I learned how to send all of the data received into a CSV file, which can later be read and imported into a spreadsheet. This is useful information for the future when I’d need to document different readings, import readings into other applications, etc. The image to the right of this text shows the CSV file opened.
Next, I worked on reading analog input from a potentiometer as serial input, specifically into p5.webserial. Firstly, I set up my potentiometer.
Next, I imported p5.webserial into the index.html file and changed the p5 sketch code to utilize serial communication.
The inData variable is read in from the sensor and can be interpreted in any number of ways to be used in the p5 sketch. I can imagine this being used in a lot of fun ways to create interactive digital artworks and games with a controller.
The console.log() and print() functions change inData into a String of bytes which can then be read… I think? Still in the process of understanding bytes.
The inData is a byte that can then be converted into a number and charted into a visual.
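As a sketch of that idea, a simple scrolling graph of inData could look like the following, assuming serialEvent() from the lab is already storing the latest byte in the global inData:

```js
// Scrolling graph of the incoming serial byte (0-255).
let inData = 0; // updated by serialEvent() from the lab (not shown here)
let xPos = 0;

function setup() {
  createCanvas(400, 300);
  background(255);
}

function draw() {
  graphData(inData);
}

function graphData(newValue) {
  // map the byte to the canvas height and draw one vertical segment
  let yPos = map(newValue, 0, 255, height, 0);
  stroke(0);
  line(xPos, height, xPos, yPos);
  // wrap back to the left edge once the graph reaches the right side
  if (xPos >= width) {
    xPos = 0;
    background(255);
  } else {
    xPos++;
  }
}
```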
Next, I moved on to the third lab, which involves serial output. I set up a p5 sketch and an LED to communicate with each other.
mouseY is mapped to the brightness (0-255) of the LED.
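The p5 side of that mapping can be as small as this, assuming the p5.webserial port has already been opened as in the earlier labs:

```js
// Send mouseY as a brightness byte; `serial` is the already-opened p5.WebSerial port.
function draw() {
  background(220);
  // map the vertical mouse position to a byte (0-255)
  let brightnessByte = int(map(mouseY, 0, height, 0, 255));
  serial.write(brightnessByte); // the Arduino uses this value in analogWrite()
  text("brightness: " + brightnessByte, 10, 20);
}
```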
PhysicalPixel code (found in the Examples > Communication section of the Arduino IDE). Here, if you send an 'H', the LED turns on, and if you send an 'L', the LED turns off.
The number entered (0-255) will result in the corresponding level of brightness in the LED.
PPT: Link
mecha mecha mecha - a proposal
Summary
Through a combination of livestreamed lecture performance and audience participation with wearable assemblage, the piece seeks to pull together a playful, rigorous engagement with cuteness and clothing as scholarship. mecha mecha mecha pulls from Girl theorist Alex Quicho’s use of the mecha as a metaphor for “climb[ing] into and pilot[ing] this already-existing subject that has the unique privilege of being greater than us all, yet thoroughly downplayed and underestimated.” Taking Quicho’s understanding that everyone is already a Girl online, mecha mecha mecha seeks to model an example of unveiling invisibilized stories of technocapital reproduction and the Girl as decoy technology for the 21st Century.
Performance
Stage 1
Transformation through dress: Corset nails and accessory worn by a contemporary dancer / performer. Volunteers (2-3) must help performer string up the nails. In the background, there is a multi-camera real-time projection of all the hands working.
Multi-camera real-time projection of hands working / livestreamed
Reference: Gordon Hall, elementary school teachers with old school projectors
Some cameras are more affected (i.e. data moshing) than others
Reference: transmediale https://www.youtube.com/watch?v=o20yfTukhKY&t=1805s
Stage 2
The Houses Of The Serpent Bearer. The 3rd House, Noah Klink, Berlin
Gordon Hall, Read me that part a-gain, where I disin-herit everybody, 2014. Wood, paint, and performance-lecture with projected images and colored light, 50 min.
Lecture: Performer wears extremely long nails with small but legible lecture text on each finger
Volunteers (2-3) take turns reading the text from each finger
Text to recite
Excerpts from ρ᥅ꫀꪶ꠸ꪑ꠸ꪀꪖ᥅ꪗ ꪑꪖꪻꫀ᥅꠸ꪖꪶᦓ ᠻꪮ᥅ ꪖ ꪻꫝꫀꪮ᥅ꪗ ꪮᠻ ꪻꫝꫀ ꪗꪮꪊꪀᧁ-ᧁ꠸᥅ꪶ (Preliminary Materials for a Theory of the Young-Girl) by Tiqqun, Alex Quicho’s Collected Girlstack, and Cute Accelerationism by Amy Ireland and Maya B. Kronic
Emphasizing “mecha” metaphor - accessory / nail as armor
Takeaways (zine of the text printed)
Reference: Journey Streams, 2023 @ Blade Study (https://www.are.na/block/23784372)
Practical considerations
Performance / exhibition space (preferably a location home to queer nightlife)
Performance / Exhibition Reception funds (drinks + food)
Transportation costs for performance preparation
Cost of materials for nails and garment (filament, grommets, threads, cords, patterns)
Cost of materials for exhibition space preparation (black curtains, lighting, etc)
Cost of materials for zine (vellum paper, textured paper, ink, printing)
Cost of materials for 3 nail workshops hosted at 370 Jay Street
Cost of materials for nail workshop signage
Reference
Through this project, we will work under the mentorship and consultation of practicing new media artists from the NYU Tisch faculty, such as Clarinda Mac Low, Sharon de la Cruz, Ali Santana, and Patrick Warren. We are currently working from a growing library of resources, looking to works such as Alex Quicho’s “GIRLSTACK”, Bogna Konior’s “Dark Forest Theory of the Internet”, Tiqqun’s Preliminary Materials for a Theory of the Young-Girl, Amy Ireland and Maya B. Kronic’s Cute Accelerationism, and feminist posthuman phenomenology writings such as Bodies of Water by Astrida Neimanis. We also intend to reach out to and attend events hosted by artists / communities who engage with similar topics, such as NYU IMA resident Mishka Morgan who did her thesis on girlhood online in the form of the developing video game “Yucky World”, Chia Amisola who is currently doing a residency at CULTUREHUB, CSM and LCF lecturer Alex Quicho, POWRPLNT, and the Center for Experimental Lectures.
[PCOMP] "Three Little Pigs" Midterm Project
“Three Little Pigs”
by Helen Lin + Chris Weiliang Toh
Description: Based on the childhood story of the Three Little Pigs, this pop-up card brings the scene of the wolf blowing down the three little pigs' house to life, using light-up and interactive components when opened.
Project Interaction:
Flex sensor
When the flex sensor is stretched, book is closed (LED off)
When the flex sensor is loose, book is open (LED on)
Button
Button → change light mode in LED’s
Digital input (button) → Analog Output (RGB LED)
Fan
Analog input (FSR sensor) → Analog output (DC Motor)
Project Pseudocode:
If the flex sensor is loose (low reading), then the LED shines
If the button is pressed the first time, LED shines red
If the button is pressed the second time, LED shines blue
If the button is pressed the third time, LED shines green
If the button is pressed the fourth time, LED shines random color
If the button is pressed the fifth time, LED shines a color changing random color
If the button is pressed again, it repeats the order from red.
If the FSR is pressed, the fan spins.
This was an early stage of the pop up book with the flex sensor and first DC motor installed. Sketches of the four main characters were included to help set the scene.
When trying to use power from an external source, we accidentally fried a series of yellow LED lights strung together and an Arduino! RIP Chris’s Arduino. We could tell it was fried because it was overheating whenever plugged in and wouldn’t show up in the port on the computer.
Wiring with all the soldered RGB LED lights installed. In this version, a speaker was added to make a sound every time the book was newly opened or closed. Later on, the speaker was removed due to changing out to a higher V DC motor.
Close-up for the Arduino wiring. The LED wires can be found in the top right. The DC motor wiring is on the left. The button is in the middle. The arduino with I/O ports is on the bottom.
In the code, the flex sensor reading gave an output from 150 (unbent) to 450 (bent). I mapped this reading to a 0 to 10 scale just to make it easier to decipher in the code. If the mapped flex sensor value is greater than 3, then the declared boolean variable "bookOpen" is set to true. This boolean "bookOpen" helps indicate when the interactive components should be read: the FSR, LED lights, and motor only work if the book is open.
“mode” is a variable that keeps track of the current LED state. The first three modes are red, green, and blue, respectively. Mode four is a random color. Mode five is a random color that changes over time. To make sure mode five works, we added a function checkChangingRGBDir() that not only keeps the incrementing or decrementing value within bounds, but also stores whether the value needs to go up or down (to make the color change feel gradual).
This code checks if the button is clicked (and whether it is a new click). The debounce helps remove digital noise. The random() function is only called when the mode newly becomes the fourth [3] or fifth [4] mode, because those modes call for random colors.
DEMO: Pushing the button allows you to switch between LED light modes.
DEMO: Pressing the FSR sensor allows you to activate the “wolf’s breath” in blowing the house down (DC Motor).
DEMO: The lights and interactivity only work if the book is open (detectable using a flex sensor on the spine).
Feedback (Ideas for Expansion):
Change plug out for power battery to make it more portable
Currently the interaction feels more reactive than interactive. What are some ways to create more discovery and curiosity?
Perhaps have some pages without electric components, so when you enter the interactive page, it’s very noticeable
If you wanted to continue to use soft materials, look into:
Ribbon cable, silicon wire
Copper tape often used for paper projects
Can be cut with vinyl cutter to create complex drawings
Jie Qi: Chibitronics
Z tape (more expensive)
High-low tech (research group at MIT Media Lab):
[ICM] Week 6: Arrays and Objects (Catch the Roach)
Link to sketch: LINK
I used my interactive cloud code from last week's assignment as a base for instances of an object moving left to right. For this assignment, I replaced the clouds with roaches and turned it into a clicker game. When we first moved into my new (current) Astoria apartment a couple of months ago, we had a bit of a roach problem. One of my roommates is very squeamish around roaches, and this game was inspired by him. Usually if he ran into one, I would go in and either chase it out or clean up the corpse.
I used pixel graphics found from online search, which was imported using preload() from an assets folder. When a new roach is “born”, its width, height, type (there are 3 different breeds of roaches in this game), and speed are randomly generated, though within specified ranges.
For this game, I had two different arrays storing information: the Bugs[] array, which kept track of the bug objects on the screen, and another array, typesOfBugs[], which stored the image data for the three bug graphics. Bugs[] was parsed through every draw() cycle: each bug on the screen shows itself, moves to its next x and y position, removes itself if it has gone past the screen, and checks for mouse interaction. typesOfBugs[] was much simpler; it was just used so that the roach image could be chosen by a random index into the three-image array.
For the mouse interaction, checkClicked() is a method of the Bug class that checks whether the mouse click was within the bounds of the bug's image. I added moE (margin of error) to make the game more playable; otherwise, it'd feel like the user was clicking the bug but not getting the squish. The higher the moE, the easier the game. Next, I added a points system so that the smallest bugs give the most points, and small-ish or fast bugs give slightly higher points than the big, slow ones.
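A simplified version of that hit test and scoring might look like the following (the moE value and point amounts are placeholders):

```js
// Minimal Bug class showing the forgiving hit test and the point tiers.
class Bug {
  constructor(img, x, y, w, h, speed) {
    this.img = img;
    this.x = x;
    this.y = y;
    this.w = w;
    this.h = h;
    this.speed = speed;
    this.squished = false;
  }

  checkClicked(mx, my) {
    const moE = 10; // margin of error so near-misses still count as a squish
    if (mx > this.x - moE && mx < this.x + this.w + moE &&
        my > this.y - moE && my < this.y + this.h + moE) {
      this.squished = true;
      return this.points();
    }
    return 0;
  }

  points() {
    // smaller or faster roaches are harder to hit, so they're worth more
    if (this.w < 30) return 3;
    if (this.w < 50 || this.speed > 4) return 2;
    return 1;
  }
}
```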
Lastly, I added a health bar to create some stakes. Whenever a roach passed through the canvas without being clicked, the health bar would deplete. When fully depleted, the health bar and points would reset. The game still keeps track of total bugs squished though, just for pure player satisfaction.
[PCOMP] Transistors and DC Motor Controls
I was doing well in the past two labs, but started to struggle again with this one. Floor resources are low, and I wasn't able to locate some of the materials. I did my best to follow along, but I also feel the concepts behind transistors (NPN, PNP, MOSFET, etc.) are quite complex and I haven't fully absorbed them. I will definitely be asking about them during class today.
Adding a potentiometer
Adding a transistor. They ran out of the TIP120, so I used a TIP102 as a placeholder just so I could practice wiring everything.
It works with a DC motor! The blurry blue is a piece of masking tape spinning.
It does not work with a gear motor… Is it because I have the wrong transistor? I wonder if it’s because the gear motor I borrowed from the shop didn’t have wires attached and I didn’t install them correctly.
I wasn't able to do the portion with a DC incandescent bulb because I wasn't able to find one on the floor. I hope to try the last part of the transistor lab once I find one. This is definitely a lab I want to revisit in the future, especially if I'm going to work with motors.
Next, to start on the second lab, I soldered header pins to the motor driver from my Arduino kit. They were a bit globby; I noticed that soldering this time felt more challenging than connecting two wires together because I had to be more precise with placement. I accidentally soldered two pins together, so I had to practice desoldering as well.
After soldering the motor driver, I plugged it into my breadboard and started working on the connections. There were a lot of cables and connections for the motor driver.
Attempt 1 with additional DC power supply attached. I used a gear motor with this set up (gear motor not pictured).
Attempt 2 with additional DC power supply attached. I used a regular DC motor with this set up.
This time, the motor wouldn't respond… so I was unable to test out the code to control the motor. I feel like the code itself is pretty straightforward; I think I'd need some assistance troubleshooting the circuit-building component.