About Me

Cy X is a black queer multidisciplinary artist based in Brooklyn, NY. They received their BA in Film and Media Studies from Colorado College in 2017. They are currently an MPS candidate at the Interactive Telecommunications Program at New York University’s Tisch School of the Arts. Cy is interested in exploring black queer futures and abolitionist possibilities through emerging technology, immersive environments, and performances.

View Cy’s Portfolio Here.

Photo Credit: Meadow Studio (www.meadownyc.com)

Week 6

This week I started to follow Matt’s tutorial for Wrap3. Because the software only works on Windows, I used Paperspace on my Mac, and it worked surprisingly well. There was a tiny bit of lag, and getting files onto Paperspace is slightly more annoying, but it worked and did the things I needed it to do.

For my head scan, I used two iPhone apps: Capture and Scandy Pro; however, both turned out to not be the best options. With Capture, although I can access my scan in the browser and download the OBJ file, the export doesn’t include an actual mesh. This proved to be quite annoying.

Scandy Pro also did not work well for me and made my skin look extremely grey. It also says it exports a mesh, but it seems to just export a PNG screenshot of the model.
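Since an OBJ file can contain nothing but raw vertex data, one quick way to check whether an export actually includes a mesh is to count its face lines. Here is a minimal sketch (the file name is just a placeholder):

```ts
// check-obj.ts: sanity-check that an exported .obj actually contains
// mesh faces ("f" lines) rather than just a point cloud of vertices.
import { readFileSync } from "fs";

const path = process.argv[2] ?? "scan.obj"; // placeholder file name
const lines = readFileSync(path, "utf8").split("\n");

const vertexCount = lines.filter((l) => l.startsWith("v ")).length;
const faceCount = lines.filter((l) => l.startsWith("f ")).length;

console.log(`${vertexCount} vertices, ${faceCount} faces`);
if (faceCount === 0) {
  console.log("No faces found: this OBJ is a point cloud, not a mesh.");
}
```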

Here is an example of the “mesh” that was exported. Looking at other people’s blogs, it seems like the scanner in the ER is the best bet, or using Bellus3D and paying a small fee for the resulting scans.

Here is my scan without the mesh.

Week 5: 3D Scans

3D scanning was super fun to explore and I found myself drawn to the medium, particularly the messier aspects of it…and it was super messy.

I chose to do my scans outside at a park by Jamaica Bay. We were under a bridge, the lighting was low, and it was slightly windy.

I instructed my friend on how to do a 3D scan and coached them along the way, but I found my scan wasn’t as good as my scan of them. I think the lighting wasn’t as good for my skin color and seemed to suit their skin color better.

I tried doing a frontal scan in my room with warm light directly on my face, and that proved to work a lot better.

I tried head scans using both the Scandy Pro app and the Capture app. The Capture app seemed slightly better for the head scan, but it was almost impossible to get a full-body scan with it because it would randomly stop scanning before we made it around the entirety of my body.

Week 4: Sequencer

I’m slowly learning the building blocks of Unreal. Although the basics don’t feel super hard, I’m struggling to think through the full depths of how I can push this medium and this software. I suspect a big part of it is that I don’t yet have characters and an environment that intrigue me.

I’ve been working with this mannequin character for the past few examples and am thinking it might be time to switch it up and see what new stories come out of that.

Next time, I plan to record sequences individually, export them, and put them together in Adobe. It was a bit hard having everything in Unreal: I would set up scenes, and when I went to add new things afterwards, the blocking would be off and events I wanted to keep separate would all run together.

Week 3: More Complex Animations

Unreal is becoming a bit more comfortable! I decided to add a mouse and external keyboard to my setup and it has already improved things exponentially.

I started by going to Mixamo and downloading the key animations for a run cycle.

I then added them to an Animations folder, and for some reason, additional files were created during my import. I’m trying to pay attention to these file issues, especially since my goal is to be more organized with files in Unreal…which is proving to be quite a challenge.

I’m not sure why it added these Idle state animations; I don’t remember importing them or even selecting that option when downloading.

I was able to get everything working, but after going into game mode I realized that the transitions between the different animations were way off and awkward, so I began working on fixing those.

Week 2: Cyborgs 4 WAP

It took me some initial time to get the hang of Unreal, but since I’m a little familiar with Cinema 4D, it hasn’t been too bad of a transition. Mostly I’m realizing I could really benefit from a mouse…using the laptop trackpad makes everything far from smooth.

This week’s assignment was to use skeletal meshes in UE4. I used the mannequin model from Mixamo for my asset. I’m planning on getting a new phone soon so that I can start doing my own scans, as my old iPhone 6 is struggling, but the mannequin was still useful for the assignment.

I tried many variations of moving through the environment: using the preset cyborg character in Unreal, using the mannequin as a controllable character alongside other mannequins in the environment, and a more first-person perspective.

I wanted to create a sense of ritual, with a group of people doing a synchronized dance while the person navigating the space remains an outsider to the ritual. Conceptualizing the build of the environment was my favorite part; I could’ve spent so much time on that and hope to in the future. It’s a bit harder creating in Unreal, though, so I need to figure out a C4D-to-Unreal workflow.

So far I’ve got my animations working. I’m curious what it’ll be like using other assets, or even creating my own character in Cinema 4D and then animating that. I can also do a way better job at organization; so many things are messy because I’m not sure how to categorize everything yet.

In the end, I somehow ended up making a TikTok video.

Week 1: Avatars

The process of making an avatar is always an interesting one and reminds me of my time spent deep in The Sims, Habbo Hotel, IMVU, and other platforms growing up. It is an intimate process, a familiar one, and one that I don’t feel like I navigate in the same way now.

More recently, it feels like my “avatars” more closely mimic my ideas and understanding of myself and feel less playful. In some ways, when I was younger I experimented more with coming up with unique usernames, character names, and ways of being that extended beyond me.

For the process of creating an avatar for the homework assignment, I began with IMVU, a platform I used growing up. It felt very different to be making this character directly in the browser compared to downloading a third-party app, and I wondered how browser-based character creation differs from the character creation I was used to growing up.

Immediately, when creating an avatar here, you are forced to pick from the gender binary, which already makes the options tremendously limiting. As usual, I found it very difficult to pick a hairstyle that felt similar or authentic to my own, so I had to settle on something in between. I also found it very interesting that, in general, there weren’t a lot of customization tools for eye shape, eyebrows, body type, or nose shape, which makes it feel like there’s a certain privileging of a specific body type.

After running up against the limitations of this platform, I decided to try something I viewed as more “simple”: the Memoji sticker and animation feature available on the iPhone. I have been growing particularly interested in Memoji because of its ability to project your features, face movements, eye movements, and tongue movements in animation mode, something I unfortunately have not been able to experiment with because of an older phone model.

Although this technology felt “simpler” upon initial judgement, I actually found the Memoji more expressive. There were more specific customization options for the face, and it forgoes making you pick a gender; in fact, I had access to facial hair options, which isn’t always allotted to me. I also think there’s something interesting about the focus being solely on the face, with the rest of the body mostly invisible unless you choose a specific emoji that incorporates more body elements. In general, I felt like I could get the avatar to feel like a more authentic version of myself, which is important since people may use Memoji as a substitute for more standard emojis, to add personality to a message, or to enhance a video (or voice memo) message.

MlxD – Final Execution Plan

Sound Design

  • Ideal goals
    • Launch interactive website with two to three mixes
  • Realistic goals
    • Launch interactive website with one mix
  • Minimum goal
    • Design website using Sketch + Figma to show User flow

Interactivity + Visual Design

  • Ideal goals
    • Launch full-fledged working website
    • Website is responsive for mobile and web
    • Website allows users to submit knowledge / information about sounds
    • Guest mixes are also showcased
  • Realistic goals
    • Design website using Sketch and Figma and show examples of User Flow
  • Minimum goals
    • Wireframe
    • Design

Exhibition

  • Web or Visual experience to be interacted with online

Final Execution Plan

By April 14

  • Visual
    • Color Scheme decided
    • Landing Page Design
    • Layout for First Mix
  • Music
    • Annotated Notes for First Mix
  • Interaction
    • Animation Designed for Moving from Landing Page to First Mix Page
  • Exhibition

By April 21

  • Visual
    • Design decided for how information flows on screen for First Mix
  • Music
    • Annotated notes for Second Mix
  • Interaction
    • Two minutes of information shown and animated
  • Exhibition

By April 28

  • Visual
    • Design for second mix
  • Music
  • Interaction
    • Two-minute animation shown for navigating to Second Mix
  • Exhibition

By May 5 – Final Presentation

  • Visual
  • Music
  • Interaction
  • Exhibition
    • Animated Video exported and ready for feedback
    • Figma/Sketch design accessible for others to view

Sonic Sessions

Sonic Sessions / Sonic Portals

PROJECT DESCRIPTION

Sonic Sessions is an experiment in listening that seeks to incorporate music theory, history, and exploration into the way we listen to music and mixes.

WHAT

Sonic Sessions will combine music literacy through active listening with the exploration of topics including, but not limited to:

  • Music Theory
  • Music History
  • Activism
  • Collective Knowledge
  • Ethnomusicology

The goal would be to provide an alternative experience to the way we often listen to music. Mixes are stories, woven together with the unique talents of the DJ and the selector. What if we provided insight into the process? What if there were more opportunities to learn the story behind each sample? 

HOW AND WHY ME?

I currently DJ and have had a bi-weekly residency at Playground Coffee Shop. When I make my own mixes, I make sure to include a tracklist, but even that isn’t always helpful unless it’s synced up with the mix and includes the time when each track plays. And depending on the DJ, there can be as many as four samples and effects moving at different moments.
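To make that concrete, here is a minimal sketch of how a synced tracklist could drive annotations on the site; the tracks, timings, and element IDs are placeholders I made up, not the actual design:

```ts
// Time-synced tracklist sketch: as the mix plays, display the
// annotation for whichever entry matches the current playback time.
// All titles, times, and IDs below are placeholders.
interface TracklistEntry {
  startSeconds: number; // when this track enters the mix
  title: string;
  annotation: string;   // the story/theory/history behind the track
}

const tracklist: TracklistEntry[] = [
  { startSeconds: 0, title: "Track One", annotation: "Opening ambient texture." },
  { startSeconds: 95, title: "Track Two", annotation: "Enters under a vocal sample." },
];

const audio = document.querySelector<HTMLAudioElement>("#mix")!;
const display = document.querySelector<HTMLElement>("#annotation")!;

audio.addEventListener("timeupdate", () => {
  // The last entry whose start time has passed is the current track.
  const current = [...tracklist]
    .reverse()
    .find((entry) => audio.currentTime >= entry.startSeconds);
  if (current) {
    display.textContent = `${current.title}: ${current.annotation}`;
  }
});
```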

Additionally, this year I decided that I wanted to become a DJ because I wanted to better my listening practice and actively build a relationship with music. Thinking about music and sound more intentionally often led to a deeper level of research and curation.

TARGETED GROUP(S)

Music Listeners who are looking for a curated music discovery experience that exists beyond/outside the algorithms.

Music Listeners who are interested in learning basic music theory and music history in an approachable way.

Synthesis

For this exploration, I decided to use Max to learn more about synthesis and combine that with learning about UI.

There were a few parameters for the assignment, and I eventually ended up taking an additive approach as I began to work on it.

I was first tasked with controlling an element using a button (or a digital on/off state). I decided to learn about the key-press function in Max and used the pressing of “p” on the keyboard to activate the phasor. I’ve been really drawn to the phasor because of its ability to produce a clicking noise: phasor~ outputs a repeating ramp from 0 to 1, and at low frequencies each reset of the ramp is audible as a click.

I then created another button to start a sine wave. I liked the idea of combining the wave with the clicking of the phasor to see how they interacted, and I sometimes got some pretty interesting results.

For the sliders, I also connected them to buttons and used the built-in slider of live.gain~ as well as other additional sliders, programming their minimum and maximum values so they made sense and reflected each slider’s role.

Max Patch UI:

Max Patch:
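Outside of Max, the same interaction can be sketched with the Web Audio API. This is a rough TypeScript analogue of the patch, not the patch itself: “p” toggles a low-frequency sawtooth that stands in for the phasor’s clicking ramp, “s” toggles a sine wave, and the gain setter clamps its input the way the sliders’ min/max do:

```ts
// Rough Web Audio analogue of the Max patch (an approximation, not
// the patch itself): "p" toggles a 2 Hz sawtooth whose resets click
// like phasor~, "s" toggles a sine wave, and gain is clamped.
const ctx = new AudioContext();
const gain = ctx.createGain();
gain.gain.value = 0.5;
gain.connect(ctx.destination);

let phasor: OscillatorNode | null = null;
let sine: OscillatorNode | null = null;

// Start the oscillator if stopped, stop it if running.
function toggle(
  osc: OscillatorNode | null,
  type: OscillatorType,
  freq: number
): OscillatorNode | null {
  if (osc) {
    osc.stop();
    return null;
  }
  const node = ctx.createOscillator();
  node.type = type;
  node.frequency.value = freq;
  node.connect(gain);
  node.start();
  return node;
}

document.addEventListener("keydown", (e) => {
  if (e.key === "p") phasor = toggle(phasor, "sawtooth", 2); // audible clicks
  if (e.key === "s") sine = toggle(sine, "sine", 220);
});

// A slider value mapped into a range that reflects its role.
function setGain(sliderValue: number): void {
  const MIN = 0;
  const MAX = 1;
  gain.gain.value = Math.min(MAX, Math.max(MIN, sliderValue));
}
```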

Melodic Elements

I spent this past week focusing on effects and research. The last piece I did seemed pretty melody-heavy, so I worked on an additional one and used both effects and sampling to try something new.

The inspiration for this piece was pretty accidental. I was listening to some songs I was thinking of including in a DJ performance, and some of the pieces were more ambient. The dialogue you hear in the beginning was actually playing on my computer in a different window, on Twitter, while I was listening to one song, and I thought it was part of the song.

When I realized the dialogue kept looping, I recognized that it was not actually part of the song at all, but it gave me an idea of what I could experiment with.

I also researched more musical devices that could serve as inspiration as I further develop my own device.

I really like the Roland G-707, a guitar synthesizer controller, and the Hyve, a 60-voice polyphonic analog synthesizer controlled by pressure and touch.