Built this after reading one too many "we take your privacy seriously" statements from companies that very clearly do not. Configure the transgression, severity, and industry vertical — AI handles the learnings-forward messaging. No actual accountability was produced in the making of this tool.
I'm an artist with a catalog of published instrumental tracks that, for years, I've wanted to complete with vocals—a part of the production process I always found challenging. I decided to use Suno AI's Remix feature not to create from scratch, but to collaborate with the AI as a vocalist, using my own high-quality productions as the source material. This project is a case study of that workflow, showcasing the "before" instrumentals and the "after" vocal versions.
The most surprising discovery was the psychological impact. Hearing a polished, finished version of an old idea almost instantly is a powerful creative catalyst that has me digging through my entire back catalog. To achieve these results, I developed a system I call "Suno-assist," where I use another LLM to generate detailed style prompts and lyrics with embedded musical directions (like "[full band enters]" or "[ethereal falsetto]"). The page shares this exact workflow and the system prompt for anyone to use.
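To make the embedded-directions idea concrete, here is an illustrative fragment in the format described. The lyrics and section tags are made up for this example; only the bracketed-direction style (e.g. "[full band enters]", "[ethereal falsetto]") comes from the workflow above:

```
[Verse 1] [sparse piano, intimate vocal]
Streetlights hum a borrowed tune
Half-remembered afternoon

[Chorus] [full band enters] [ethereal falsetto]
We were never standing still
```

The bracketed tags ride along with the lyrics, so a single text block carries both the words and the arrangement cues for the generation step.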
Beyond just the music, this experiment has me thinking about the future. The speed of iteration is a superpower for any working composer, but it also points toward a future where we may need a "Proof-of-Human" ledger to verify unassisted art. I'm excited by the possibilities and would love to hear HN's thoughts on where this is all heading.
I started with my own original music (from jam sessions to complete, mixed, unmastered tracks) and used Gemini to generate detailed, comma-separated prompts for Suno's cover/remix feature. The AI was able to generate everything from shockingly accurate Bowie-style vocals to complex Jacob Collier-esque choral arrangements, and even a 70s game show theme. It's become a powerful tool for ideation and finishing stuck projects.
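As a minimal sketch of the "comma-separated prompt" format mentioned above: this is not the author's actual tooling, and the function name and example tags are invented for illustration. It just shows how a list of style descriptors might be assembled into the flat, comma-separated string that gets pasted into Suno's cover/remix prompt field:

```python
# Illustrative sketch only: joining style descriptors into a
# comma-separated prompt string. Tag values here are made up.
def build_style_prompt(genre, vocals, era, extras=()):
    """Join non-empty descriptors with ', ' into a single prompt line."""
    tags = [genre, vocals, era, *extras]
    return ", ".join(tag.strip() for tag in tags if tag.strip())

prompt = build_style_prompt(
    "art rock", "baritone male vocals", "1970s glam production",
    extras=["lush string arrangement", "tape saturation"],
)
print(prompt)
```

In the workflow described, an LLM like Gemini fills in the descriptors; the point is only that the output lands in one flat, comma-separated line.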
"You can create custom ringtones on your iPhone using GarageBand by importing audio, trimming it, and exporting it as a ringtone. First, you'll need the GarageBand app and optionally, an audio file or a song from your Apple Music library. Then, you can import the audio, trim it to under 30 seconds, and export it as a ringtone within GarageBand. Finally, you can set the ringtone in your iPhone's settings under "Sounds & Haptics".
Here's a more detailed breakdown:
1. Get GarageBand and your audio:
Download GarageBand from the App Store if you don't already have it.
If you're using a song from Apple Music, make sure it's downloaded to your iPhone.
You can also import audio files from your Files app or record audio directly in GarageBand.
2. Create a new project in GarageBand:
Open GarageBand and create a new audio recording.
Select the track type (e.g., Files, Music) and import your chosen audio.
If using a song from your library, it must be downloaded to your iPhone.
If the file is dimmed, it is either protected or not downloaded.
3. Edit the audio:
Adjust the start and end points of the audio using the handles to create a 30-second or shorter ringtone.
You can also use the precision editor for more fine-grained adjustments.
If the ringtone is longer than 30 seconds, GarageBand will automatically shorten it when exporting.
4. Export as a ringtone:
Tap the navigation button and then "My Songs".
Select your project, tap the share button, and choose "Ringtone".
Name your ringtone and tap "Export".
5. Set the ringtone:
If the ringtone is less than 30 seconds, you can choose to use it as a standard ringtone, text tone, or assign it to a contact.
To set it as your general ringtone, go to iPhone settings > Sounds & Haptics > Ringtone.
You can also assign the ringtone to a specific contact."
Would love to be able to export videos using this. Is that possible in the future? I have a bunch of original music that I'd like to upload with visuals like this. Nice work!
Thanks! I'm also planning to release a desktop app (Apple only at first). It would actually be quite easy to make, since I've decided not to charge anything for it :) The videos on the front page were made by doing screen capture on my laptop! I just need to figure out how to sign the desktop app properly so it can be distributed for Apple... I might even be able to do a Windows app, since I'm using the JUCE framework. If you want, feel free to subscribe to my email list (link on the app website) so I can send updates. Sorry, didn't make a Twitter account for this :)
I made this initial app just to gauge whether there's any interest in this at all (friends were telling me "no", but they don't know anything about music :)). I'm thinking I could offer some visualisations for free and add the fancy ones as a paid option. There are many visualisation apps out there, but none have the same frequency detection algorithm... Many people are asking me to open source it now, but I don't know how I can do that and still get paid for what I build on top of it... Just trying to make something that would let me leave my (already high-paying!) job... I'll need to think :)
The heavy lifting in game music is not outsourced; it's really only the orchestra musicians who are. The music direction, supervision, engineering, orchestration, score prep and copy services, editing, mixing, mastering, integration, and testing are still done in-house or by a contracted music service, usually US-based. The reason for outsourcing orchestras is, ironically, to avoid overly strict contracts with the AFM, the musicians' union. This is why Nashville is bursting with video game scoring sessions: Tennessee is a right-to-work state.
Thanks for asking. The entire composition, performance, and production is done by Amper. The music on Soundcloud is Amper's direct output without any human intervention.