Comments


sonderlingg t1_itvsxn2 wrote

Yeah, the same with music. It would be nice to train AI based on projects in DAWs, but i don't think it's possible

2

solidh2o t1_itwm0i5 wrote

Software engineer / musician here: absolutely possible if you have the tools to edit the files. It would likely be time-consuming to the nth degree, but well within the realm of possible.

Most of what comprises the "DAW files" is links to other file types, plus the timeline sorting. For example: synth tracks in Ableton (or Logic, or any DAW) are a combination of a MIDI file reference and a plugin (via an API interface provided by Ableton) to a proprietary processor that converts the MIDI into a rendered wave. You plug in "on", "a note", and "parameters" such as ADSR/length/velocity/etc. (whatever the plugin supports), and out comes a rendered audio stream sent in real time to the sound card (or a virtual audio bus if you are within a DAW).

Actually very easy to do, just INSANELY complex as you layer on levels of complexity and parallelized streams of audio that then run through post-processors before hitting the sound rendering hardware.
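To make the "parameters" part concrete, here's a minimal sketch of the kind of envelope a synth plugin computes from ADSR settings when it turns a note event into audio. The function name and default values are illustrative, not any DAW's or plugin's actual API.

```python
def adsr_amplitude(t, attack=0.05, decay=0.1, sustain=0.7,
                   release=0.2, note_length=1.0):
    """Amplitude (0..1) at time t seconds for a single note."""
    if t < 0:
        return 0.0
    if t < attack:                      # ramp up from silence
        return t / attack
    if t < attack + decay:              # fall from peak to sustain level
        frac = (t - attack) / decay
        return 1.0 - frac * (1.0 - sustain)
    if t < note_length:                 # hold sustain until note-off
        return sustain
    if t < note_length + release:       # ramp down after note-off
        frac = (t - note_length) / release
        return sustain * (1.0 - frac)
    return 0.0

# sample the envelope at 10 ms steps over the life of a 1-second note
envelope = [adsr_amplitude(i / 100) for i in range(130)]
```

The plugin multiplies this envelope against the raw oscillator output per sample; a model learning DAW projects would be predicting exactly these kinds of parameter values.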

2

sonderlingg t1_itwraxw wrote

Yeah, I've worked in Ableton and FL as well, and now I'm a programmer.

I just think it's hard, because there's no dataset.
Like you can't just write to every musician, "send us your DAW projects to train a neural net."

2

solidh2o t1_itx3d0n wrote

Well, you could - the first 100 or so would take the most time; after that it would only be new parameters to add.

Don't want to pierce the veil of reddit anonymity too much, but I'll say I work @ Amazon leading a team that catalogs ~150,000 apps. We catalog data sources and write ML models that look for privacy data to comply with competing regulatory needs from Legal and Accounting.

After being on this project for 3 years, I'm 100% confident in the ability to dissect a DAW project file and use the MIDI APIs that are already out there :)

1

ReadSeparate t1_itw079x wrote

Maybe an even better idea is img2psd. It would be easy to do unsupervised learning for this. Just start with noise or a random image in a PSD file, make random changes like adding layers, drawing lines, adding text, etc, then output the corresponding PNG.

Then, you tokenize the PNG and the PSD files, and use the PNG as the input and the PSD as the output for the training data.

Could make a shit load of training data effortlessly that way.
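A toy version of that pair-generation idea, with a list of colored rectangles standing in for PSD layers and a flattened pixel grid standing in for the exported PNG. Real PSD/PNG I/O is left out; this is just a sketch of the data flow, and all names here are made up for illustration.

```python
import random

W, H = 32, 32  # toy canvas size

def random_layer():
    """One fake 'layer': a filled rectangle with a random color."""
    x0, y0 = random.randrange(W), random.randrange(H)
    w = random.randrange(1, W - x0 + 1)
    h = random.randrange(1, H - y0 + 1)
    return {"x": x0, "y": y0, "w": w, "h": h,
            "color": random.randrange(1, 256)}

def flatten(layers):
    """Composite layers bottom-to-top into a flat W*H grid (0 = background)."""
    img = [[0] * W for _ in range(H)]
    for layer in layers:
        for y in range(layer["y"], layer["y"] + layer["h"]):
            for x in range(layer["x"], layer["x"] + layer["w"]):
                img[y][x] = layer["color"]  # later layers paint over earlier ones
    return img

def make_example():
    """One training pair: flat image as input, layer stack as target."""
    layers = [random_layer() for _ in range(random.randrange(1, 6))]
    return flatten(layers), layers

flat, layers = make_example()
```

Scaling this up is just a loop: the flat image is what the model sees, and the layer stack is what it learns to reconstruct.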

That way we could use the current prompt-to-image solutions and just plug the resulting image into this new model to output a PSD.

I’m not sure how well it would work, but it would be cool to try. Maybe it would also need some supervised data as well.

1

earthsworld t1_itw1qbv wrote

layered/segmented images have been in the works for years already.

1