ITP

[RTSS] Public Spaces on the Internet

Link to sketch: LINK

Blog: Write a post on your blog about a (physical) public space you use personally. This might be your local park or green space, a library reading room, a cafe in your neighborhood, or something else entirely. Why do you spend your time in this space rather than any other?  What do you like about it?  How do you engage with others (if at all) within this space?

The public space that I use most frequently and prominently is the MTA subway. There's a reason why so much media and content about the space is produced by the people who live here: the people who use it often have to use it at a high frequency to carry out their daily tasks. It has such recognizable sounds (the screeching of the train accelerating, the rumbling against the tracks, "stand clear of the closing doors", muffled train conductor announcements) and recognizable sights (faces buried in phones, flashing lights from the windows underground, salt and stains on the floor, littered cups of coffee rolling around). Perhaps because of this overstimulating environment, most people tend to retreat inwards, avoiding as much interaction with each other as possible. Since I got noise-cancelling earphones, my subway experience has improved drastically. It's a public place, yet my daily commutes feel like a very private time to me. Here, I often sit, put my earbuds in, pull up my mask, and escape into my mind to hasten the experience of the ride. It's a place I'm in out of necessity when I use it, but it's also become sentimental. I've done homework, cried, scarfed down quick meals, written journal entries, crafted gifts, napped, watched videos, and gotten to know people on the train. Despite the lack of comfort, hygiene, and predictability, it's become an important place where people have done so much living and growing up.

[ICM] Webpage Final Project

Link to project: LINK

For my final project, I wanted to build a functioning website where I can browse through all the discarded textiles that have been acquired as materials in my studio. For each item acquired, I wanted to show its different states, starting with just three visual representations (front side, back side, and the goodbye letter from the person who donated the item). I wanted the layout to change up on every click, and it was important for this website to be scalable (since it's likely that I'll have many more textiles added to the fabric stew over time) and also responsive to the size of the browser. This work ended up involving the DOM, HTML, and CSS more than previous assignments did.

Since I was working with a lot of images (120 minimum), I quickly realized I had to move off the p5 web editor and use a local host to preview the interactions on my webpage. Lucia from The Coding Lab quickly introduced me to VSCode and an easy way to go live / preview the webpage (via the "Go Live" button on the lower right). I downloaded my files from the p5 web editor as a package and set up my station on my local computer accordingly.

Next, to create a responsive webpage, I used flexbox to organize all the buttons (each button triggers image loading) and made the JavaScript code constantly check whether the window has been resized. The initial setup creates a p5 canvas matching the size of the window, and the p5 canvas is resized whenever the window changes size. Then, I made sure that later, when setting the positions for each image, they would fall within the range of the most recently sized canvas.
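The keep-it-inside-the-canvas idea can be sketched in plain JavaScript (the helper name and size range below are my own illustration, not the project's actual code):

```javascript
// Pick a random position and size for an image so that it always falls fully
// inside the most recently resized canvas. Assumes the canvas is at least
// ~400px wide and taller than half its width; the real sketch tracks the
// current size via p5's windowResized() / resizeCanvas().
function randomPlacement(canvasW, canvasH, rng = Math.random) {
  const w = 100 + rng() * (canvasW / 2 - 100); // width between 100px and half the canvas
  const h = w;                                 // square, for simplicity
  const x = rng() * (canvasW - w);             // x stays inside [0, canvasW - w]
  const y = rng() * (canvasH - h);             // y stays inside [0, canvasH - h]
  return { x, y, w, h };
}
```

Because the bounds are recomputed from the current canvas size on every call, positions generated after a window resize automatically respect the new canvas.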

Next, I wanted to make it so that every time one of the buttons was clicked, the images would layer on top of each other and create new, original compositions. The position and size of each image is randomly generated with each button click. I set the back photo to always be partially transparent and the blend mode for the goodbye letter to multiply, so that interesting overlays and textures emerge with each click as well.

Coding the image handling and positioning was the most challenging part of this project. I didn't want the browser to load too many images at once, and my teacher Allison helped me a lot in problem-solving this code. With my current model, the sketch only loads the images corresponding to a number into an array when prompted by the button click (just three images). The draw() loop renders the images over and over again. Finally, when there is a new button click, the entire image stack is released and a new series of images is pushed in. Initially, I wanted to create DOM images instead of p5 images; that way, I could have an isolated sketch just for holding the 3D object for each button click. However, I had trouble switching my code from p5 image objects to HTML elements. This is something I want to look into in the future. At some point, I want to include more context on this webpage, maybe an about page or such, and refine the layout design.
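The load-on-click strategy described above can be sketched like this (the function and file names are illustrative stand-ins, not the project's actual code):

```javascript
// Minimal sketch of loading only the clicked item's three images and
// releasing the previous stack first, so the browser never holds more than
// three images at a time. `loadImageFn` stands in for p5's loadImage().
function makeImageStack(loadImageFn) {
  let stack = [];
  return {
    select(itemNumber) {
      stack = []; // release the previous item's images
      for (const side of ["front", "back", "letter"]) {
        stack.push(loadImageFn(`${itemNumber}_${side}.jpg`));
      }
    },
    get images() { return stack; },
  };
}
```

In the real sketch, draw() then just iterates over the current stack and renders whatever is there.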

[ICM] Media: Sound

Link to sketch: LINK

For this project, I wanted to play around with music visualization and manipulation. It's more likely that I'll be working with premade .mp3 files in the future than creating my own music / sound pieces, so I decided to practice using p5.FFT and loadSound() rather than experiment with p5.Oscillator.

Firstly, I imported all the sounds I'd be using in preload(). I loaded 2 amazing dance tracks from two fave artists that I listened to on repeat during late nights in 2019, and 4 royalty-free instrumental sound bites I found online.

Next, I created two sliders and two buttons. The first slider changes the volume, the second slider changes the song speed, the play button plays/pauses the song, and the change-song button cycles between the two tracks. In the future, this project could be expanded by adding more songs to the song[] array. I could even take it one step further by making each song an object that contains String variables for the song title, artist, duration, etc. Then, I could make a music player display so that the user of this interface can see what tracks are available to cycle through and the information for each.
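The cycling behavior (plus the song-object expansion I'd like to try) could look something like this; the titles, artists, and field names are made up for illustration:

```javascript
// Hypothetical songs-as-objects array; the current sketch just holds
// p5.SoundFile entries in song[].
const songs = [
  { title: "Track A", artist: "Artist 1" },
  { title: "Track B", artist: "Artist 2" },
];

let current = 0;

// Advance to the next track, wrapping back to the first one at the end,
// so the same button keeps working no matter how many songs are added.
function changeSong() {
  current = (current + 1) % songs.length;
  return songs[current];
}
```

Because the index wraps with the modulo, adding more songs to the array is all it takes to extend the player.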

To switch between button states, I used the togglePlaying() code that Pedro from my Physical Computation class used in his p5 serial communication code. It was an easy way to show functionality as well as song state using the button.

Additionally, I used the keyTyped() code from Allison Parrish’s sound example for triggering events based on keyboard presses. When the user presses the characters ‘a’, ‘d’, ‘s’, or ‘f’, the character’s respective percussion sound plays. Ideally, the user would play around and trigger the drum sounds in time with the music playing (kind of like shaking the tambourine during karaoke).
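The key-to-drum mapping boils down to a simple lookup; this is a hedged sketch with placeholder sound names (the real sketch plays p5.SoundFile objects loaded in preload()):

```javascript
// Map of trigger keys to percussion sounds (names are placeholders).
const drumKeys = { a: "kick", d: "snare", s: "hihat", f: "clap" };

// Modeled loosely after p5's keyTyped(), which reads the global `key`
// variable; here the key is passed in so the logic is easy to test.
function keyTyped(key) {
  const sound = drumKeys[key];
  if (sound !== undefined) {
    return sound; // in the sketch: the matching sound's .play()
  }
  return null; // any other key is ignored
}
```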

Next, I wanted to create a visualizer for the song. I've always wanted to try out 3D graphics within the browser. We didn't have time to go into it within the scope of ICM, but I still wanted to try it out a bit. I created a 3D cone and enabled orbitControl() so the 3D space can be spun around a bit using the trackpad. I didn't like the look of adding lights(), so I decided to keep it stylized without shading. However, this meant that at first glance the user wouldn't be able to tell the space was 3D, so I added a rotation animation to the 3D space.

Then, I used p5.FFT to analyze the song's sound and create a visual from it. I took the FFT bars example from class and adapted it to create an abstracted spiral out of the rectangles. It's harder to decipher, but it does create an interesting visual. I'd like to play around more with the visualizer graphics in the future.
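One way to bend the FFT bars into a spiral is to give each bin an angle and a radius that grow with its index; the math below is my own sketch of that idea, not the class example:

```javascript
// Map an FFT spectrum onto spiral positions. p5's FFT.analyze() returns bin
// energies in the 0..255 range, which is what `energy` assumes here.
function spiralPoints(spectrum, turns = 3) {
  return spectrum.map((energy, i) => {
    const t = i / spectrum.length;         // 0..1 progress along the spiral
    const angle = t * turns * 2 * Math.PI; // sweep `turns` full circles
    const radius = 20 + t * 200;           // spiral outward from the center
    return {
      x: radius * Math.cos(angle),
      y: radius * Math.sin(angle),
      size: (energy / 255) * 30,           // louder bins draw bigger rectangles
    };
  });
}
```

In draw(), each point would become a rect() at (x, y) with the given size, redrawn every frame as the spectrum changes.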

Here it is in motion~

[PCOMP] Synchronous Serial Communication (I2C and SPI)

For this week, I did the OLED lab for I2C and the Playing .WAV Files lab for SPI.

OLED Screen Display using I2C

Firstly, I put together the circuit wiring with the OLED and the potentiometer. To get the OLED to work, I had to import the Adafruit OLED library and the GFX library, and then initialize the screen.

Next, I imported a font library and changed the text size using the display.setFont() function in the Arduino code. In the picture on the left, you can see that the text for “sensor” is a lot bigger than it was before.

I also tried out Richard Moore’s QR code library by installing it through the Arduino Library Manager. I used the code provided on the lab page to generate a QR code from the string I sent over Serial, and it created a lovely QR graphic to display on the OLED.

This is what I sent through the Serial Monitor. When I scanned the QR code, it sent me to “hi this is my message” on my browser.

Lab: Playing .WAV Files from an Arduino using I2S and SPI

For the sound lab, I checked out a micro SD card reader, an audio breakout board, and an I2S amplifier from the shop. I borrowed a microSD from a friend and loaded my favorite Doja Cat song onto it as a .wav file. Then, I plugged my hardware into the breadboard using the following connections:

SD Card Reader

  • Vcc – voltage in. Connects to microcontroller voltage out

  • CS – Chip select. Connects to microcontroller CS (pin D10 on the Nano/Uno)

  • DI – SPI data in. Connects to microcontroller SDO (pin D11 on the Nano/Uno)

  • SCK – SPI clock. Connects to microcontroller SCLK (pin D13 on the Nano/Uno)

  • DO – SPI data out. Connects to microcontroller SDI (pin D12 on the Nano/Uno)

  • CD – card detect. Not connected in this example

  • GND – ground. Connects to microcontroller ground

Amplifier

  • BCLK connects to A3 of the Nano 33 IoT board

  • LRC connects to A2 of the Nano 33 IoT board

  • DIN connects to D4 (SDA Pin) of the Nano 33 IoT board

  • Vin connects to 3.3V

  • GND connects to ground

  • + connects to the left and right sides of a 3.5mm audio jack

  • – connects to the center pin of a 3.5mm audio jack

Next, I loaded the code provided by the lab onto the microcontroller. When running it, I couldn’t figure out why the SD card wasn’t initializing at first. Then, I realized I forgot to connect the SD card to power and ground.

The picture on the left is the correct wiring! The audio jack plug is the black attachment at the bottom of the breadboard. I tried plugging in my earbuds and couldn’t hear anything… The Serial Monitor said the SD card and .wav file were valid and the file was playing, but no sound was coming out, and I couldn’t figure out why. Doja Cat and I weren’t meant to be today.

[SEA-DP] Final Project Proposal

For my final project, I want to create a database / archive website for the textile contributions collected throughout my studio practice, starting with those collected in “to you, 100 years into the future”, a workshop series and exhibition project inviting participants to actively reflect upon our existing belongings and revisit sewing as a time-honored practice towards emotional healing. Textile contributions were documented using embroidered ID numbers to be traceable pre-transformation (when it was collected) and post-transformation (after it was turned into a sculpture). The identification numbering system will serve as the base organization method for the website. 

This archive shows the items that the workshop participants chose to “discard” into this project’s collective fabric stew and the goodbye letters they wrote to the “discarded” textiles. Together, they form a fabric stew of nostalgic colors and prints made from silk, polyester, cotton, denim, and more. Each discarded textile is given a new character to play—a bowl, a table, a monitor, a potted plant, a teacup—within this newly constructed home. Each household item transformed is called a “titem”, a play on the words “(t)ransformed item” and “totem”.

As we click through the archive, we’ll see different manifestations of the titem’s identity. It will display different modes of viewing: the front and back photos of the item, the goodbye letter written, the 3D object of the item, the 3D object of the titem, and the 3D objects’ UV unwrapping. In the same way that our contemporary bodies are now distributed, each textile is no longer tied only to its corporeal self, but to the different representations and data that trail behind it. What would it look like if each textile that slips in and out of our lives were traced through the hands it went through, in harvesting the wool, spinning the thread, weaving the fabric, sewing the item, and onwards? Would it change the way we obtain, treasure, or trash each textile?

In creating the website, I’m heavily inspired by Laurel Schwulst’s work and her essay “my website is a shifting house”, where she writes a manifesto on what a website can be and what the web could look like if it were built and guided by individuals rather than corporations. She writes on the capability of a website to be a living, temporal space particularly effective for world-building, and consequently a medium for artwork. Another resource is Aidan Quinlan’s course “Handmade Web”, which remains an open-access hub of information and references. In Quinlan’s words, “The hand has become increasingly less present in the web as we know it today. Websites are largely automated or built from templates, and the knowledge of how to make a website is relegated to a select few. It has only grown easier to learn how to make websites, but the perceived requirements and expectations for a website have become so convoluted and arcane that many avoid the subject.”

[PCOMP] Final Project Proposal

Final project idea: “Three Little Pigs” Full Book

Assignment requirements

  • Microcontroller to PC (Serial Communication)

  • Physical Interaction Design Principles 

  • Design principles

For my pcomp final project, I’m working with Chris again to refine our midterm project for the Winter Show. We got a lot of good feedback during the critique that we are interested in addressing for this improved version. I am also looking forward to working more with soft materials and exploring e-textile sensors, switches, and conductive thread.

This time, we want to make an interactive book with a minimum of three pages. We are sticking with the OG story of “Three Little Pigs” since we already have a foundation with it, but we want to tell the story from the wolf’s point of view. As the reader flips through the pages, they’re asked to help the wolf achieve his goals. The serial communication aspect can be an interface for picking the genre in which you read the story. For example, the background music and SFX that play as the person flips through the story depend on the mode they click (funny, lighthearted music for comedy; eerie, creepy laughter in the distance for horror).

I think it'd be really cool to have more pages, but not all of them need an interaction. Some of them can be isolated simple circuits or have no pcomp at all (so the readers can still have a fleshed-out story, and are encouraged to slowly discover the interactive components over time).

Materials

  • Arduino

  • Android phone (to run the p5 sketch)

  • Sewable LED’s

  • Conductive thread or copper tape

  • Photoresistors

Interactive experience 

  • Depending on the different ways you interact with it, you get different results. 

    • How can you allow for more discovery and curiosity? 

  • Let the user dictate what they can control.

  • Keeping it portable

  • Q for Pedro: What’s the best way to incorporate the serial communication interaction? 

  • Ideas

    • Serial communication to control the mood of the scene? 

      • if DAY = white led lights + sound of rooster

      • if NIGHT = Orange led lights + owl hoot

      • if CALM = motor is off

      • if WINDY = motor comes on + wind gust

    • There could be different placements on the page where you can place a character that completes the circuit to perform different actions (i.e. connects the circuit to the lights/to the motor)

      • if circuit is complete, sound plays or led is on

    • Start with three pages

      • If photoresistor 1 has light, page 1 is open

        • If page 1 is open, page 1 interactions are active

      • If photoresistor 2 has light, page 2 is open

        • If page 2 is open, page 2 interactions are active

      • If photoresistor 3 has light, page 3 is open

        • If page 3 is open, page 3 interactions are active
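The photoresistor idea above boils down to "whichever sensor sees light marks the open page". A hedged logic sketch (the real version would live in Arduino code, and the threshold is made up):

```javascript
// Given one analog reading per page's photoresistor, return the open page
// (the last one above the light threshold), or null if the book is closed.
// Threshold of 500 on a 0..1023 analog scale is an assumed placeholder.
function activePage(photoResistorReadings, threshold = 500) {
  let page = null;
  photoResistorReadings.forEach((reading, i) => {
    if (reading > threshold) page = i + 1; // pages numbered from 1
  });
  return page;
}
```

Each page's interactions would then be enabled only when activePage() returns that page's number.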

Feedback from Pedro: 

  • Look at past interactive book projects because there's a lot out there

    • Maybe focus more on the interactive element than the pop up book element

  • Self contained vs. connecting to a computer are conflicting goals

    • We can use HID with phone to trigger interactions in the book instead of computer? 

    • USB-OTG (On-The-Go) allows an Android phone to show up as a keyboard that can send input to the Arduino. Using a phone will both power the book and provide sound for the story.

    • Use an Android phone from the ER and run p5 from the browser.

    • Phone can be connected to arduino inside the book (and play different animations depending on the page that's open). 

    • Can also just use the phone to play the sounds

  • Light sensors can be used with holes cut out to tell which page has been flipped.

[SEA-DP] mecha mecha mecha - a demo

mecha mecha mecha is a livestreamed performance-lecture and participatory nail reading that explores the networked and disembodied self through the persona of the "girl". Our interpretation of the "girl" is an ungendered model / AI that is stepped into while navigating digital spaces. In mecha mecha mecha's accessories, the performing body is prompted to both move and interact with the world in ways beyond the limitations that societal norms have shaped. We've equipped the hard shell of the fingertips with a soft armor. The ruffle on each nail is coiled and tightened up, but when undone using the drawstring, it reveals an embroidered excerpt from the text to be read out loud into the microphone. The mecha mecha mecha livestreamed performance activates the text from every angle, recorded using multi-projection camera captures, amplified by the microphone and pitch shifts, and put into motion with our hands and voices.

This performance was conceived and executed in collaboration with Vinh Mai Nguyen as part of their thesis on the Cute and the Nail. 


Girl dinner, hot girl walk, that girl, clean girl, girl boss, girl math, girl blog, hot girl summer, pick me girl, christian girl autumn, vsco girl, e-girl, good girl, bad girl, sad girl, manic pixie dream girl, i’m just a girl, girl’s girl, girl power, rat girl, feral girl, gorgeous gorgeous girls love soup, it girl, cam girl, for the girls, girl code, horse girl, gamer girl, girlypop, tomato girl, olive girl, red onion girl, girl next door, riot grrrl, gremlin girl, girl shopping, daddy’s girl, dream girl, babygirl, girl rot, girl blunt, fangirl, go piss girl, girl pretty, the girl reading this…

This project pulls from Girl theorist Alex Quicho’s use of the mecha as a metaphor for “climb[ing] into and pilot[ing] this already-existing subject that has the unique privilege of being greater than us all, yet thoroughly downplayed and underestimated.” As Quicho writes in Everyone is a Girl Online, “It may well work in our favor to accelerate our way into Total Girl—that is, to consider the girl as a specific technology of subjectivity that maxes out on desire, attraction, replication, and cunning to achieve specific ends—and to use such technology to access something once unknowable about ourselves rather than for simple capital gains, blowing a kiss at individually-scaled pleasures while really giving voice to the egregore, the totality of not just information, but experience, affect, emotion.” Tracing homologies from the Girl to AI brings us to the upstream effects of Total Girl; a perfect model for AGI aspirants; the well-dressed singularity that retroactively writes itself into existence from the future one purchase at a time.

[PCOMP] Two-way Serial Communication Lab

For this week’s lab, we practiced using two-way serial communication between P5 and Arduino.

Firstly, I set up my analog inputs on the breadboard, plugging in two potentiometers and one button. The breadboard can be seen in the left picture below. The Arduino code for testing the analog and digital inputs (separated with punctuation) is in the right picture below.

Next, I plugged in a p5 sketch importing p5.webSerial to take the inputs and translate them into the movement of the circle in the sketch. One pot is tied to the circle’s X position, the other is tied to its Y position, and the button makes the circle disappear when pressed. The majority of the interaction code is under serialEvent() and draw().

Next, I adjusted the code so that the Arduino only reads and sends back the input data when prompted (call-and-response / handshaking). The code isn’t shown below, but the program prints “hello\n\r” until it receives a prompt (it can be anything; it’s mostly just to tell the program to start), and then prints the input data whenever it receives a prompt to do so.
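The p5 side of the handshake can be sketched like this, assuming the Arduino sends readings as "pot1,pot2,button\n" (the exact format follows the lab; `sendPrompt` is a stand-in for serial.print('x')):

```javascript
// Call-and-response sketch: parse one incoming serial line, and always send
// a prompt back so the Arduino knows to take the next reading.
function serialEvent(line, sendPrompt) {
  const parts = line.trim().split(",");
  if (parts.length === 3) {
    const [x, y, button] = parts.map(Number);
    sendPrompt(); // ask the Arduino for the next reading
    return { x, y, button };
  }
  // "hello" or a partial line: just prompt so the handshake starts
  sendPrompt();
  return null;
}
```

Returning null for non-data lines means the sketch keeps its last known circle position until a full reading arrives.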

Next, it’s time to implement this interaction in the p5 sketch as well. The main change is to add “serial.print(‘x’);” in initiateSerial().

At first, I made this error where I thought initiateSerial() had to be its own independent function. The code did not work.

Then, I realized that the initiateSerial() was within the openPort() function and that’s the one I needed to modify. This code works!

Next, I explored my own application by adding serial communication to an ICM homework exercise I did in the past. I modified the p5 sketch to respond to the red circle (its position on the sketch controlled by the two pots) instead of mouse interaction. To click a button on the remote, your circle needs to be in the correct position while the physical button is pressed (instead of a mouse click). It took a bit of time adjusting the code, but I was excited to see my fake “mouse” work. The circle was quite jittery in its movement, so if I were to repeat this exercise in the future, I would add some code to filter out the noise and make the circle movement smoother. I accidentally closed this sketch without saving or screenshotting the code. Lesson learned: always save your work!
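One simple way to smooth the jitter is an exponential moving average over the pot readings; this is the filter I'd try next time (the alpha value and function shape are my own choices, not lab code):

```javascript
// Returns a smoothing function: each call blends the new raw reading with
// the running value. Smaller alpha = smoother but laggier movement.
function makeSmoother(alpha = 0.2) {
  let value = null;
  return function smooth(raw) {
    value = value === null ? raw : alpha * raw + (1 - alpha) * value;
    return value;
  };
}
```

In the sketch, each pot would get its own smoother, and the circle would be drawn at the smoothed positions instead of the raw readings.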