adrianwaj 16 hours ago [-]
Why not call it Slopify? Humans are the new vinyl.
https://flippa.com/12100071 - I was wondering why SmashHaus was for sale. (no affiliation) Peak value. It's only downhill from here for outsourced music.
wxw 16 hours ago [-]
I asked it to make lofi cafe music and it just made a static web-page. When I asked why there wasn't any music, it said:
> My bad—I forgot to hook up the sound system.
And then it started playing jazz, which I'm not mad about. Nice to see Google trying fun stuff.
johanneskanybal 15 hours ago [-]
"Let's go surfing dude, we'll make some vibes!" This is like if someone lobotomized your granddad and then created a TikTok account for him.
troymc 14 hours ago [-]
Aw, c'mon man, they're just doin their job, ya know? Chill out.
genewitch 15 hours ago [-]
it can't remix. even comfyui can remix on my desktop. I've used udio, suno, comfyui with the music generation models, and one other site that i can't remember the name of since it was through a friend.
They all kinda suck, you do have to run generation many times unless you're very lucky.
I and my friend wrote 10 albums between 1997 and 2007. we went solo for geographical reasons, and i stopped writing music altogether in 2017 or so, only doing arrangements, mashups, mastering.
and now i don't even know if i ever want to get back into music, because i can rapidly generate a "good enough for the moment" track, like when the south korean president tried to coup: https://soundcloud.com/djoutcold/i-aint-even-writing-music-a... Oh by the way the lyrics are in lojban except for "... and the people were pissed"
the oldest stuff on those three sites i mentioned is all hand written by me over the years.
inerte 14 hours ago [-]
I've tried and couldn't make it sound like Angine de Poitrine, it completely ignores the microtones. Sounds more like Polyphia. It does look like AdP is the answer to AI.... or we haven't trained the models with sufficient microtones, likely due to western music influence.
BLKNSLVR 13 hours ago [-]
Three albums not enough to train AI on?
- Vol I (AdP)
- Vol II (AdP)
- Flying Microtonal Banana (KGatLW)
I just wanted to write this comment because it will be almost impenetrable to anyone who doesn't already know.
adw 10 hours ago [-]
Bunch of Balkan and Turkish music has quarter tones too. (And you’re forgetting KG and LW…)
BLKNSLVR 7 hours ago [-]
> (And you’re forgetting KG and LW…)
Now I'm only replying because I'll take any opportunity to prop up King Gizzard and the Lizard Wizard, which is the third item of the original three dot points.
KG = KG and LW = KGatLW = King Gizzard and the Lizard Wizard.
I don't like all of KGatLW's music but, as someone who is also a big fan of Frank Zappa's extensive corpus of works, I love their versatility and their willingness to be versatile.
I think this is more of an AGI benchmark than a pelican.
burner-phone73 7 hours ago [-]
Thank you for bringing Angine de Poitrine to my attention, this is awesome music and performance!
BLKNSLVR 7 hours ago [-]
Not the one who posted it, but welcome to a decreasingly exclusive club!
When I first watched the AdP KEXP performance, it felt like my musical knowledge up to that point was just preparing me for AdP. I've been through micro-tonal (King Gizzard and the Lizard Wizard, especially their Flying Microtonal Banana album), polyrhythms and time changes (Tool, The Mars Volta, Metallica's St. Anger album, various other bits and pieces), looping (Party Dozen -ish-, Adam Page[0][1][2], various Math Rock bands, primarily Battles), and other general "out-there-ness" (Arthur Brown[3], Frank Zappa, Mr. Bungle[4]).
The Adam Page stuff is all, essentially, stream of consciousness, unique to the individual performance. He had a monthly residency at a pub in the city a long while back, and I reckon I went to see him ten times over the course of a year and each performance was unique. I think they're all recorded on Bandcamp somewhere, hey, yes, here: https://adampage.bandcamp.com/music. I even named one of his songs: "Preventing a future disaster" (from September 2014, although it's misspelled). I find part of the enjoyment of looping music is in the performance itself, the combination of conducting and choreography.
BrokenCogs 12 hours ago [-]
AdP is the equivalent of pelicans on bicycles
defrost 12 hours ago [-]
For bands less "out there" than AdP it's also disappointing.
I'm struggling to get a JJJ style "Like a Version" cover of Sleaford Mods doing King Stingray or vice versa .. no joy in the sense of it's a struggle to get anything that sounds anything like either band.
The text it generates midway sounds promising .. and then it plays audio that has zero of the unique elements of either.
It does explicitly state that it can't (or won't) imitate the style of a band(?).
surgical_fire 4 hours ago [-]
I came here looking for that, did not disappoint.
I imagine that AI can be trained on Angine de Poitrine, but it would never really be able to create something like that no matter how many text prompts it was given.
pavel_lishin 17 hours ago [-]
Given how much Google loves to mash their offerings together, and then sunset them, I live in fear of them killing Google Youtube Music (or whatever it's called), in favor of combining functionality with this, and having my music cycle between my actual library, and bespoke AI-generated stuff.
mh- 13 hours ago [-]
If I were a PM at Google trying to connect those two products, the far more obvious approach is the end of the creation pipeline having an "Upload to YouTube Music" button.
ButlerianJihad 11 hours ago [-]
YouTube already hosts significant factories of programmatically generated music; just look for Creators with the 3-hour or 10-hour or livestreams. For YouTube Creators, and also for Photos editors (that's everyone with an unrestricted Workspaces account), they provide a menu of background music that is "royalty-free" and so you can attach it to a montage or your own videos, to avoid awkward silences, or set the mood to vaguely sorta what you were hoping for.
So it's an evolutionary step in my view, rather than a revolution.
gtirloni 17 hours ago [-]
This website looks so terrible that I can't tell if it's really owned by Google or a scam.
breezybottom 15 hours ago [-]
It's still better than NotebookLM. Google is really bad at following their own design guidelines.
verst 15 hours ago [-]
Agreed - I have been checking to verify whether this truly is a Google service or just something that links out to generic Google ToS and Support pages. It looks suspicious.
fg137 16 hours ago [-]
And its animation is terribly buggy on mobile.
giancarlostoro 15 hours ago [-]
Yeah, I had to do a triple take.
unicorn_cowboy 15 hours ago [-]
This is their recent acquisition (producer.ai) rebranded.
Before the purchase, the quality of generations had been going down for a while (IMO; subjective and anecdotal). I tested multiple iterations of their chat interface and was never thrilled with its ability to actually understand or adhere to prompts. I had liked their previous (Suno/Udio-like) iteration (Riffusion).
Curious to hear how it performs for people now and whether anything has improved.
vjay15 13 hours ago [-]
The songs it generates are so corporate-music-pilled and generic; it has no creativity of its own, and even if we try to make it do something creative, it generates the same EDM-style beat with no taste. Well, I guess stock music users rejoice: you don't need to use stock music anymore, you can create endless stock music.
butterlesstoast 13 hours ago [-]
Man, we really took the Google Plus days for granted…
mvkel 13 hours ago [-]
The output is pretty terrible. Multiple attempts to get it to generate ambient music, and every time it pumps out a terrible dubstep beat.
Like a strange form of Gell-Mann amnesia, where all AI output is probably this bad, but if we don't know any better, we don't know just how bad it is.
arcticfox 13 hours ago [-]
What do you mean by the Gell-Mann part? The output from this tool may be bad, but AI music in general is extremely good. My playlist is largely "AI artists" at this point and they're really good, to the point where if you look them up online, it's mostly people finding out they're AI and being sad about it (I also felt this - would love to see them live, but they're not even real).
mvkel 13 hours ago [-]
RE: Gell-Mann.
I know what music I like. I said I wanted "dystopian, ambient, droning music with ear-filling, warm bass. No drums or beat."
What came out was some pretty generic dubstep that one might hear on a Verizon commercial circa 2018. Subjective, sure, but a big miss given the instructions I gave it.
Now let's say I ask it to generate code to scrape all real estate listings that were recently taken off the market. The output looks good enough to me, and I'm happy. But is the underlying architecture just as bad as the music?
arcticfox 56 minutes ago [-]
Got it, I like that. I see what you mean but I think a lot of that is just this tool is bad - like coding models from a year or two ago, they look convincing, enough that you waste your time on their bad decisions.
I think SOTA on both fronts has already reached exceptionally good though.
minikomi 13 hours ago [-]
Would you be willing to share some artists? I'm curious
arcticfox 59 minutes ago [-]
Definitely not going to be for everyone or even many people, but here's an example:
OMG, some of those are legit good. That said, the AI seems minimally guidable. It seems to ignore the majority of instructions in https://suno.com/song/25b16ab7-bfea-451d-abb3-8b52cdd783d0?s... so I guess, like most tools, it's fine if you want to get what you're given but not really control it.
jatora 9 hours ago [-]
Yep agreed. You can guide it only so much and then you're at the mercy of running it a few times to get the closest match
minikomi 6 hours ago [-]
Thank you for sharing.
SwellJoe 12 hours ago [-]
"My playlist is largely "AI artists" at this point and they're really good"
That's the craziest shit I've ever heard.
jatora 11 hours ago [-]
Much more common than you think, and zero rational reason for it not to be the case since suno v5.
archagon 11 hours ago [-]
Zero rational reason?? Are you/they just going to keep going until every bit of input into your/their life is AI-generated?????
jatora 11 hours ago [-]
If the ai generated media is on par or better than human generated media? Yep. As will you. As will everyone
SwellJoe 11 hours ago [-]
That's some dark vibes you're peddling.
nmeagent 7 hours ago [-]
> As will everyone
Nope. Not for me, not ever.
archagon 11 hours ago [-]
Uh, nope, absolutely fucking not!
alfiedotwtf 11 hours ago [-]
Just wait till you meet your AI GP and Surgeon
lukan 11 hours ago [-]
Or till that Asimov story happens to you in real life: you're in a room with people arguing about who is AI and who is not - and in the end you were actually the only human left.
alfiedotwtf 4 hours ago [-]
Sadly what you're describing is online forums and comment discussions these days!
bl4ckneon 13 hours ago [-]
I have found that every AI music model I have tried (about 6 of them) cannot for the life of it generate any good ambient music, or sad, non-upbeat music. They all try to revert to some sort of uptempo beat or beat drop. They can't just "chill out", so to speak.
jatora 11 hours ago [-]
definitely agreed on ambient music, but not on sad music. suno v5 can do that all day
nutjob2 13 hours ago [-]
Ask it to generate some music for airports.
a2128 12 hours ago [-]
> To use our Service, please first verify your age with Google.
I guess to protect kids, we need to restrict them from... *checks notes* ...music?
SwellJoe 12 hours ago [-]
This isn't music.
jatora 12 hours ago [-]
why not?
SwellJoe 11 hours ago [-]
Music, like all art, is a human expression. AI has no desires, it feels nothing, it believes in nothing, thus it has nothing to express. It may imitate music, but it's not music.
jatora 11 hours ago [-]
Would you consider a beautiful sunset as art? is the value of music found in the source or the listener? I argue it is only the listener. The source is irrelevant. That is surely the case for me, and I dont think I'm unhinged or insane. I have a strong feeling I am not a minority in this regard.
Pythagoras argued that music is essentially number and proportion. If beauty is found in the geometry of sound, then the "belief" of the architect is secondary to the elegance of the structure.
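The Pythagorean claim about number and proportion can be made concrete: the consonant intervals correspond to small whole-number frequency ratios. A minimal sketch of that arithmetic (the A4 = 440 Hz reference pitch is an arbitrary illustrative choice, not something from this thread):

```python
# Consonant intervals as small whole-number frequency ratios,
# computed from an assumed reference pitch of A4 = 440 Hz.
base = 440.0

intervals = {
    "unison": (1, 1),
    "major third": (5, 4),
    "perfect fourth": (4, 3),
    "perfect fifth": (3, 2),
    "octave": (2, 1),
}

for name, (num, den) in intervals.items():
    # e.g. the perfect fifth above A4 is 440 * 3/2 = 660.0 Hz
    print(f"{name}: {num}/{den} -> {base * num / den:.1f} Hz")
```

This is just-intonation arithmetic; equal-tempered instruments approximate these ratios, which is part of why microtonal music (as discussed above) sits outside the usual 12-note grid.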
SwellJoe 10 hours ago [-]
Upon hearing mashed up pop music with almost coherent lyrics, "Shall I compare thee to a sunset?" What is going on here?
Have you not heard music before? Is Suno your first experience with music-shaped sounds? Because, buddy, this is wild. You're not getting Rumours out of an AI. You're not getting Time (The Revelator) out of AI. London Calling does not spring from the geometry of sound.
jatora 10 hours ago [-]
Maybe I'm just not as emotional as you. Could definitely be the case. Even before AI music I never cared much about lyrics. Nor artist names beyond finding similar music to a song I like. I listen to music for the sound, which does elicit emotion and feelings that are more enjoyable or less depending on my mood, but I don't care about the story being told.
I still don't think you're saying anything that refutes the geometry of sound argument, however. If you heard an AI song you liked, and didnt know it was AI, and found out after the fact, would you be rational enough to accept you could be wrong? Or would it turn you off to the song irrationally?
socalgal2 6 hours ago [-]
Tangential but AFAICT most people don't care about lyrics. If they did, so many hit songs would not be hits.
To name one, "Saving All My Love For You" should never be played at a wedding because the song is about having an affair with a married guy with kids. But no one listens to the lyrics. They just hear the chorus. It's a hit for other reasons, not because of the lyrics.
Similarly, few people listen to the lyrics of "Rainy Day Women #12 & 35" (everybody must get stoned). It is not about drugs.
Heck, famously there's Bush Jr. (or more likely some PR person) using "Born in the USA" as a pro-American song. It's not a pro-America song at all. I wouldn't call it anti-American but it's definitely a song entirely about problems in America. Not praise.
I was going to say... if you were an early 90s kid, there was plenty of "don't let the kids be exposed to today's music".
Admittedly I went to a Christian high school, but we actually had a school intervention about kids listening to "dangerous music" like Nine Inch Nails.
I don't think any of us had on our bingo card that 30 years later Nine Inch Nails would be writing soundtracks for Disney movies.
MrZander 17 hours ago [-]
Really bad at prompt adherence. Was trying to get it to compose a solo old time banjo piece.
Couldn't get it to stop adding in backing instrumentals at all and it sounded too much like bluegrass style.
"solo banjo instrumental, strictly no other instruments" ... ten seconds later: drums, a fiddle, and a guitar join in.
p1mrx 13 hours ago [-]
I tried to make an orchestra where they smash household objects, and a synthpop song where all the lyrics are burped. Didn't work. Wake me up when we're in the future for real.
r0ckarong 9 hours ago [-]
Don't forget to save your Google Flow Music collection before we shut down this service.
jmull3n 15 hours ago [-]
It can generate something well produced, but it's really bad at applying taste or direction in the way a human does.
The workflow feels wrong. It should be closer to a DAW with chat, where the model outputs stems, samples, and arrangement parts instead of one finished track. Then you could target a specific sound, section, or idea and actually develop it.
filoleg 14 hours ago [-]
I agree with your DAW UX suggestion very much. I think the writing is on the wall, and Suno is doing exactly that with their Suno Studio.
mh- 13 hours ago [-]
I think this wasn't done earlier because the Suno (etc.) models couldn't output stems.
They could attempt messy stem splitting like all of the other tools have done for a few years now, but those aren't really usable in a production setting beyond small samples you were already going to chop/distort.
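For context, the naive version of the stem splitting mentioned above is frequency masking on a spectrogram; the actual tools use learned source-separation models and are far more sophisticated. A toy sketch of the masking idea, using made-up test tones rather than real music:

```python
import numpy as np
from scipy.signal import stft, istft

# Toy illustration of mask-based "stem splitting", hugely simplified:
# mix a low tone with a high tone, then keep only spectrogram bins
# below a cutoff frequency to recover the low "stem".
fs = 8000
n = 8192                              # samples (a multiple of the STFT hop)
t = np.arange(n) / fs
bass = np.sin(2 * np.pi * 110 * t)    # low "stem"
lead = np.sin(2 * np.pi * 1760 * t)   # high "stem"
mix = bass + lead

f, _, Z = stft(mix, fs=fs, nperseg=512)
mask = (f < 500)[:, None]             # binary mask: keep only low-frequency bins
_, bass_est = istft(Z * mask, fs=fs, nperseg=512)

# The masked reconstruction should track the true bass closely
corr = np.corrcoef(bass_est[:n], bass)[0, 1]
print(round(corr, 2))
```

Real mixes overlap heavily in frequency, which is exactly why simple masking produces the "messy" artifacts the comment describes.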
filoleg 12 hours ago [-]
I agree, the tech was likely not there at the time.
I am not sure if it is there yet either, but imo your UX vision for it is the correct one, so if the tech is still not quite there yet, it is just a matter of time. But the AI-powered DAW UX is imo where it will eventually end up.
dwb 6 hours ago [-]
I’m really not the kind to use this sort of language much at all (I’m agnostic), but AI-generated music is sacrilegious to me. Satanic. I didn’t love humans making soulless corporate muzak, but at least they were getting paid for it and it was mostly ignorable.
sublinear 6 hours ago [-]
All music on major labels has been formulaic "corporate muzak" for a long time.
If the humans of the 2020s hadn't given up their souls long before this era of AI came out, we wouldn't be discussing this ad nauseam every single day for years on HN. There's no comparison. Humans are vastly superior at music.
People need to get a grip and get out there. I know a significant majority of people reading this are into enough hobbies to know at least one instrument well or can sing. They should just do it for fun and as loudly as possible everywhere they can get away with it. We used to celebrate that sort of thing as a necessary part of maintaining a civil and thoughtful world. Music is an activity before it is a product to be sold. AI is a recording technique built from lots of recordings.
One might have a silly counterargument like "what if there are microphones everywhere stealing my work?", but not enough people consider these days that the corporate world is absolutely terrified of trying to sell what is beautiful. It was considered too risky even when they believed in it with all their heart and were deliberately trying to. What makes the most money is the average, not the exceptional. There's no good excuse people put up against pursuing music other than neurotic irrationalities that come from being chronically online.
(edited for pronouns - overuse of "you")
dwb 5 hours ago [-]
I don’t know what you think my position is? I play multiple instruments myself. Not that well, but enough to have fun. I am, uh, not worried about hidden microphones - not sure where you got that from. I agree that music should be an activity first, but I still love the recorded form too.
I’m certainly no fan of the economic structure of major labels (or any capitalist entity), but despite that they do sometimes still release some good stuff.
sublinear 4 hours ago [-]
I thought I was agreeing with you.
I edited my comment to remove all the "you"s. I have a bad habit of using it in the "fourth person", so to speak.
> hidden microphones
I meant bootlegs. There was a story making the rounds recently about a huge archive of them, and then some discussions went off the rails about AI.
dwb 3 hours ago [-]
Right. I didn’t think you were disagreeing as such but thanks for the clarity. Personally I’m less peeved about people training models than I am people using them.
TheAceOfHearts 15 hours ago [-]
It's good, I have this song I generated last year with Suno which stuck with me and I just tried having Flow generate a variant and it was acceptable. Sometimes the lyrics get modified for no reason. It would be better if you could control emphasis by specifying tags or something along those lines, but it seems fun to play around. If there was an intermediate step where a symbolic or partial processing of your input was shown for tweaking, it would be immensely powerful.
One of the key issues that I encountered after a few song generations is that it feels very rushed, like it's constrained to this 3 minute limit per song so it forces every section of the song to conform to a very specific structure. I tried increasing the limit to 4 minutes but it still gave me 3 minute songs.
Honestly, I feel like this product is showing up a bit late to the party and it's not really feeling particularly innovative. There's nothing egregiously bad about it, but it doesn't seem to add anything new or special that I could notice.
I don't have a microphone hooked up so I can't try the voice interface, but it would be really fun if you could sing to it in order to iteratively compose a song. It could clean up your voice a bit and add music. Or being able to hum out a beat which it converts into a track which you slowly build up. Is anyone able to try if those capabilities are possible with the existing product?
Overall, I'm not sure if a chat interface is the best way to produce a song. It feels very restrictive to have full songs as the primary iteration mechanism. In a text file and in code you can inspect or modify different sections or components very easily. I think a more human-focused tool would provide an on-ramp towards full music production, where you can focus on the parts that you care about and enjoy, while the AI tool fills the other parts with sausage. Right now you can chat with the tool but it appears to be quite limited in the kind of changes that it can make.
dgalati 11 hours ago [-]
I just built a really cool synth using this tool just sitting on my couch casually checking this out. You can blend waves together and change their frequencies to create drones.
This is really sweet as a tool for creating ways to create music. It's a shame you can just prompt -> finished project when it's so easy for anyone to create their own music with the right tools and motivation.
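The blend-waves-and-detune-them drone described above is just additive synthesis. A minimal sketch (every frequency and amplitude here is an arbitrary illustrative choice):

```python
import numpy as np

# Additive-synthesis drone: sum a few sine partials. Slightly detuned
# oscillators (55.0 vs 55.3 Hz) beat against each other, giving the
# slow movement characteristic of drones.
fs = 44100
dur = 2.0
t = np.arange(int(fs * dur)) / fs

def sine(freq, amp=1.0):
    return amp * np.sin(2 * np.pi * freq * t)

drone = sine(55.0, 0.5) + sine(55.3, 0.5) + sine(110.0, 0.25) + sine(164.8, 0.15)
drone /= np.max(np.abs(drone))  # normalize to [-1, 1]

print(drone.shape)  # (88200,) samples, ready to write out as a WAV file
```

Swapping sines for saw or square partials, or modulating the frequencies over time, gives the "change their frequencies" part of the comment.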
LogicFailsMe 14 hours ago [-]
Seems to be where Udio and Suno were 2 years ago, but with a better initial UI than they had. I'm sure Google will discontinue this in a year or two. Suno has since pulled significantly ahead. This isn't another Songsmith, but it's behind the curve right now.
dinobones 14 hours ago [-]
I've noticed that all of these music generators suffer from something like "mean" collapse (as opposed to mode collapse; you do get variance, but all results are highly centered around similar-sounding songs).
The music is all just very average, it sounds like the most average song with the most average chord progression/drum pattern per genre.
I guess that makes sense if these are most likely next audio token predictors... but it'd be cool if there was a way to inject some type of creativity/novelty into these, or at least tune up the temperature.
Everything so far just sounds like stock library music to me.
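For reference, "tuning up the temperature" on a next-token predictor means dividing the logits by T before the softmax: T > 1 flattens the distribution so rarer tokens get sampled more often, T < 1 sharpens it toward the most average choice. A sketch with made-up logits:

```python
import numpy as np

# Temperature-scaled softmax over a toy vocabulary of three tokens.
# The logits below are arbitrary illustrative values.
def softmax(logits, temperature=1.0):
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = [2.0, 1.0, 0.1]
print(softmax(logits, temperature=0.5))  # sharper: mass concentrates on the top token
print(softmax(logits, temperature=2.0))  # flatter: low-probability tokens gain mass
```

Whether any of these music products actually expose a temperature control is another question; most hide sampling parameters entirely.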
digitaltrees 14 hours ago [-]
Isn’t that what the billboard charts are effectively?
deferredgrant 17 hours ago [-]
I can see the appeal if this ends up being good at iteration rather than just first-pass generation. A lot of AI music products look impressive for five minutes, but the real test is whether they help someone get closer to a specific thing they actually wanted to make.
twobitshifter 15 hours ago [-]
Ok, someone explain the use case for this? Jingles? Making a song about your friend / sig other? Are people thinking they are going to sell these songs and create an AI artist?
kelseyfrog 14 hours ago [-]
I've summarized Supreme Court cases into Broadway musicals. The thing about memory is that novel input increases retention. So now I know about grouse hunting and explosives that fall off trains and their constitutional implications.
Another was a set of songs that helped me emotionally regulate on the drive home after couples therapy. The lyrics contained grounding exercises that helped maintain awareness and presence and contained mindfulness practices.
Both did their job, but they were also music for utility, not necessarily for artistic enjoyment. So it's not entirely an apples to apples comparison.
_sys49152 13 hours ago [-]
turning class notes into songs for study purposes sounds like genius. never would've thought of it, but i could definitely see value. catchy 1950's-style radio commercials advertising highlights of case law to remember for an exam coming up.
mh- 13 hours ago [-]
I think you've got a great app idea on your hands.
By the time I finish writing this comment - yours is 10 minutes old - someone will have vibe coded one, probably.
Also feels like an easy feature for someone like Suno to add, to help subscription retention.
But something like NotebookLM emphasizing subtle mnemonic devices set to music..
lern_too_spel 15 hours ago [-]
Marketing jingles for video ads.
torben-friis 15 hours ago [-]
People like to think they're successful but they don't like effort. So this will let them pretend they're artists.
Also, probably someone will game an algorithm to get revenue from a bajillion tracks of lofi slop.
SmirkingRevenge 13 hours ago [-]
Yep, I'm going to say the overwhelming use-case will be slop-4-revenue.
Slop is starting to dominate uploads to some music services, so I think it will only get worse from here
dabinat 18 hours ago [-]
I’m a little confused about the pricing packages. In what scenario would being able to create 600 songs a month (20/day) be too few?
I could understand if this was an API that people built products around, but it seems to be geared directly at consumers.
smallerfish 17 hours ago [-]
If it's anything like suno, it probably takes you 30 to 40 attempts to dial in what you were looking for. (And don't get me wrong, the results can be great with suno - there's just a lot of trial and error, and dice rolling.)
numpad0 15 hours ago [-]
I don't know anything about these AI tools, but it seems to me like the yield rates of all these AI media generators are exactly in the range of lootbox games. Kids "pull" it like slot machines for a set prompt, keeping no more than 1% of outputs. The rest is just thrown away, only potentially useful as negative data. So 600 per month total is probably just a couple of usable tracks per month.
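The implied arithmetic, spelled out (both figures are this comment's assumptions, not measurements):

```python
# Expected usable tracks per month, given a generation cap and keep rate.
monthly_cap = 600   # the plan's stated limit (20/day)
keep_rate = 0.01    # "keeping no more than 1% of outputs"

expected_keepers = monthly_cap * keep_rate
print(expected_keepers)  # only a handful of keepers per month
```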
janalsncm 15 hours ago [-]
That’s a huge amount of messing around to get those handful of songs then. If only 1% are good, you’re pulling the lever 100x more than you should need to.
mh- 13 hours ago [-]
Putting commentary about AI media aside:
How many iterations (arrangements and recordings) do you think a typical Billboard pop song goes through before it's ready for a final mix and mastering?
Go find a YouTube of someone doing this work, it is kind of mind blowing. Given how expensive studio time is, you realize why it costs so much for a popular artist to produce a polished album.
999900000999 16 hours ago [-]
Eminem allegedly has hundreds of songs in the vault.
Odds are for every 200 AI songs you generate, 2 or 3 are decent.
Anyway. UMG will probably force you to sign over training rights in future record deals.
The models still can't rap. Sounds like if you asked someone who didn't know what rap was to read a script
quux 13 hours ago [-]
My prompt was "give me the most generic playa house beat you can imagine" and I gotta say, it nailed it
I especially love the glitchy UI sounds, although I suspect they're hardly intentional.
mayukh 14 hours ago [-]
Wow, such hate for anything AI. I haven't used it, I don't care to. But I have teenaged kids and I've seen them hang out and tinker with tech and AI. They seem to love it.
Sloppy humans create sloppy output. The AI is just an amplifier, it has no motive
luqtas 14 hours ago [-]
> Sloppy humans create sloppy output. The AI is just an amplifier, it has no motive
yes! go tell teenage kids to harmonize a 4+ instrument melody by typing, rather than editing a sheet score, without any music theory background or ear training
pgoggijr 13 hours ago [-]
There are things to hate about AI besides the slop it outputs. To name a few:
- The training set being mostly art made by real humans without their consent or compensation
- The work that is taken away from skilled, talented musicians
- The environmental impact of global AI deployments
- The further consolidation of power in the hands of the very few who DO have a motive
AI being "fun" for kids who aren't able to grasp the negative externalities of using AI doesn't make it something to admire
nwhnwh 13 hours ago [-]
Wow, such failure to learn from the mistakes of the past.
Hammershaft 14 hours ago [-]
I'm sure you're right that AI augmented workflows can (& do?) produce beautiful works that I would call art... it's just that the overwhelming majority of AI 'art' I experience on the internet is slop.
mh- 13 hours ago [-]
I feel like I make this comment a lot, but: how do you know?
You recognize obvious slop as slop. But this is a survivor bias-like phenomenon. You have no idea what goes unnoticed.
If you're talking about stuff posted to communities that are self-selecting for AI art submissions, that's another kind of fallacy.
hexaga 11 hours ago [-]
Because good things are few and far between and it's pretty easy to discern provenance out of band in almost all of those cases.
philringsmuth 17 hours ago [-]
What I really hate about all of this, whether it’s music, images, video or anything else, is how much they all use the word “create.” As in, you can create the music you’ve always imagined.
You. Are. Not. Creating. Anything.
You are prompting. Then tweaking, changing, adjusting, etc. The tech is incredible, don’t get me wrong, but it’s advertised so blatantly as the user doing the creating.
Use it as a creativity tool, but don’t get caught up in the false belief that what it spits out is something you created.
Old man yells at cloud. Going back to my cave now.
rdiddly 16 hours ago [-]
It's worse than that: the creativity and originality I put in my prompts, it extinguishes, and instead churns out unoriginal formulaic crap. The crap sounds exquisite and realistic though.
thorum 16 hours ago [-]
The models are primitive right now, but we’re clearly heading toward “AI as sound synthesis, human as artist” - much like how producers currently use a DAW to assemble premade loops and sounds from Splice, but with the producer now able to prompt any sound, filter, or effect they can imagine into existence and then rearrange them into a song.
See for example Suno Studio, which is not very good in my opinion, but shows the direction they’re going.
gnopgnip 17 hours ago [-]
How does that work with using a camera to take photos?
philringsmuth 15 hours ago [-]
I’m a photographer, I have almost a quarter million photos in my archives. When I take a photo, 90% of it is composition, during which I move around, analyze lighting, background, aperture, shutter speed, exposure and a whole lot of “what do I want to capture here?”
The other 10% is editing, which for me involves minor color adjustments, highlights, shadows, cropping, etc. I make all the decisions.
AI can generate an image based on a prompt, and that’s fine, but I would never, never claim to have created that output myself.
wirgil1 14 hours ago [-]
So the only difference is the amount of decisions and iteration. If someone spends 5 hours iterating with AI vs 5 minutes on a photo, which one has the better claim to being a creative work?
jdiff 14 hours ago [-]
If someone spends 5 hours communicating with an artist they're commissioning vs 5 minutes on a sketch on a napkin, I think the napkin has a stronger claim to creativity.
numpad0 13 hours ago [-]
Dragging the camera to where it shall be.
I'm no photographer, but does anyone not have that Sidewinder seeker head in the brain that gives you the blaring tone when a great composition is in front of you (including 3D off-boresight warnings)?
recursive 17 hours ago [-]
You press the button to capture the photo. As you note, a different verb is used. When I order take-out, I'm not "creating" it.
projektfu 14 hours ago [-]
Who is the artist? Mr. Brainwash or the artists he hired?
xnx 17 hours ago [-]
Where do you draw the line? Do composers create?
cwillu 17 hours ago [-]
It is not necessary to draw a sharp line that clearly divides everything before saying “this is too far” about something that has, in fact, gone too far.
kibibu 16 hours ago [-]
Yes.
Does the guy who tells the composer "write a song" create?
No.
The line is somewhere in the middle
_DeadFred_ 16 hours ago [-]
At what point of detail/complexity does my restaurant order transform me into the cook?
xnx 14 hours ago [-]
Good question. Maybe not cook, but consider someone who picked just the right ingredients and preparation for a sandwich. Combining flavors and textures in novel ways that are as surprising as they are delicious. I would ascribe more of the creative credit to that person vs. the one cutting the bread.
somekyle2 16 hours ago [-]
[dead]
kpozin 12 hours ago [-]
Gemini's built-in song generator seems to have better prompt adherence, higher audio quality, and better diction.
ttul 14 hours ago [-]
Here's my personal take on what I'll call the new realm of "AI art". Whether it's prompting a music model or an image model, there is a huge space for creative output, limited only by the human imagination. Sure, tossing in a single prompt and letting the model crap out something will produce "slop". But if you pour your heart into exploring the high-dimensional landscape of the model, you can find truly amazing stuff. This is no different than exploring the creative landscape of music, photography, and other forms of art in the pre-LLM era.
I find that people who rush to negative judgement of LLM-generated art are not going far enough in the creative process to properly judge just how much juice there is to be squeezed out of those 50-billion-dimensional spaces.
minikomi 16 hours ago [-]
The descriptions generated from the prompts are almost always great, but the generated music is always terrible. The sound palette seems so limited.
DiabloD3 17 hours ago [-]
Why did Google bother?
They're a music store: they sell music to own, and they also rent out their vast library.
Google should learn not to shit where they eat.
wirgil1 14 hours ago [-]
big tech companies are 50 companies in a trench coat, there isn't some great aligning directive. Feels like some random side project some employees felt like making.
inerte 15 hours ago [-]
Because of ads and background music for YouTube.
tredre3 16 hours ago [-]
Welcome to 2026's reality, most new music is already AI-generated. I don't like it, but it is what it is. YT Music is already full of AI slop, those tools aren't changing that.
If anything it gives Google control of the entire production->sale->delivery process.
I'm honestly not seeing a downside for Google here, can you elaborate?
jdiff 14 hours ago [-]
Most new music by what definition? I'm certain more stuff is being churned out by these automated tools than genuine human creativity, but that doesn't make it economically relevant if the only use it's seeing is random high school kids' YouTube channels. It's not seeing streams on services, it's not bringing in revenue once created.
DiabloD3 16 hours ago [-]
I just keep reporting AI slop videos (incl music) on YT, and sometimes the videos or even entire channel vanish. I hope I'm contributing to this process to keep YT safe, but I'm just one guy, and they probably have a much bigger effort internally.
The downside for Google is, ultimately, the death of the company. Nobody wants AI slop; people go out of their way to actively avoid it and to punish companies that promote it. Google is already running a huge risk by pushing Gemini into every service and permanently burning customers and users with it.
Microsoft is already seeing the downside of trying to Copilot everything. Their software is now partly slop, shit randomly breaks, and companies are cancelling Azure/Office subscriptions and moving to on-prem, FOSS, etc. They've pumped the brakes quite a lot, but the damage may be too great to mitigate now.
If Google wants to lose money in the long run, then by all means, please continue.
somewhatgoated 16 hours ago [-]
The people in charge here don’t give a fuck about the long term.
Reap as much profit for yourself as you can before everything inevitably collapses - that's the prevailing trend.
Let the lizard brain take over and just feel good in the moment, why worry about the future.
DiabloD3 15 hours ago [-]
Unfortunately, this is probably true for Google.
Once you have that particular brand of cancer, it's too late to save the company without drastic measures.
This would have been amazing to see 2 years ago; now it's just a thing?
zackify 16 hours ago [-]
This thing is insane. I already made multiple songs in English and Spanish, in different genres.
giancarlostoro 16 hours ago [-]
This is how I use Suno, guess I'll give this a fair try.
ryanwhitney 17 hours ago [-]
> Application error: a client-side exception has occurred (see the browser console for more information).
Nice
6stringmerc 4 hours ago [-]
I know my comment will get buried but I’m trying an A/B test first with the same prompts I gave NapsterAI. Eventually I’ll write this up too. I wonder how many I can make before I run out of credits.
What would make these Udio/Suno-type services better? Tighter training data? Excluding the muzak? How would one even go about doing that?
throwatdem12311 16 hours ago [-]
Tried to prompt some instrumental progressive metal with keyboard and guitar unison solos and some back-and-forth call-and-response riffing, and eventually it just kinda forgot that there were supposed to even be keyboards in the song. Basically a slop knockoff of Liquid Tension Experiment.
The sound of the guitar is good, but the keyboard sounded really awful, like a Casio toy keyboard pretending to be a piano. Truly awful sounding, which is when I prompted the AI to try to fix the tone, and then it basically just removed it.
The drums were also waaaaay too prominent, so I asked it to make them a bit more subdued in the mix, and it just ended up slowing everything down to the point it kinda sounded like generic radio alt-rock instead.
But basically once the keyboards were forgotten no amount of prompting could “convince” it to bring them back.
I tried Suno a few months ago out of morbid curiosity and it was waaaay better than this. Actually got something that made my musician friends actually kinda nervous.
vendemiat 16 hours ago [-]
It requires age verification
curvaturearth 15 hours ago [-]
This is the type of thing that really doesn't interest me. Algorithmic junk that sounds "good", similar to the kind of writing an LLM generates. Sure you can do it, but the main use cases are AI Slop (IMO). Slopify is a great name
alfiedotwtf 11 hours ago [-]
Was the webpage vibe coded? The pause and back buttons only worked once the song had ended. Seems like Gemini doesn't know how to do async JavaScript.
bentt 13 hours ago [-]
I hate this and I hate that they think it's what they should be doing.
If Google can't see the difference between this and useful, moral AI tools then I worry for their path forward.
tlhunter 13 hours ago [-]
I hate how these tools ask me to type in some long prompt and then once I finish they tell me that I need to make an account.
fortran77 16 hours ago [-]
I wanted it to make a "PAMS" or "JAM Creative" style radio jingle, like the radio jingles of 70s radio stations but for my website. It failed miserably.
throwawayk7h 14 hours ago [-]
Google already killed their main music service in 2020. Why should I trust them with another?
giancarlostoro 16 hours ago [-]
Really not a fan of the "ChatGPT" style UI, did they even look at something like Suno at all? This is a little silly to me.
The music sounds decent, but I feel like it's missing some things; to be fair, Suno still doesn't know what a Puerto Rican guiro is. I assume a lot of these AI platforms will take many iterations.
Things Suno needs to figure out, and maybe Google now too: how to let someone pick a specific voice, and get a genuinely unique voice. I've heard a few songs in Suno with voices similar to my own songs, and it's kind of weird.
I do love making the songs as a hobby, so not a big deal. All in all, AI music is really fun to toy with, especially blending genres together.
One very noticeable difference against Suno is Google Flow Music lets you make Music Videos, which I have yet to test. I wonder if I can use my Suno songs to make music videos for them, not sure I'm vibing with Google's Music AI yet.
Aside: it makes me chuckle a little, since "Flow Music" is a reggaeton catchphrase from Arcangel, who would always say "Flow Music" even though it was actually called "Flow Factory".
Edit:
There are some awkward factors Google will need to work out. While the instruments and voices sound nice and clear, the rhythm feels weirdly off for some songs; it's like the voices aren't matching the genre mix. It's also missing some nuances I've asked for; I assume it does not know what "wobble bass" means. Suno lets you describe nuanced, specific sounds or instruments and uses them how you describe.
I told it to have a dubstep breakdown in the middle, and it keeps the artist singing / rapping through it, which is bizarre; that's not how a breakdown works...
Suno takes great care to make sure the voice always matches whatever is going on with the beat, including humming the beat / bass / brass / whatever instruments are being played.
Glad Suno is going to have some real competition, I just hope Google doesn't kill Suno with its bigger wallet, would be a shame.
Edit:
Final verdict from me: it feels less polished than Suno in terms of music, but it has more features. Suno lacks music video creation, which still annoys me; they let you make a lyric video in only one orientation / resolution, and you have zero control over it otherwise.
There's a "workspace builder" which you prompt and it builds a web app that lets you create songs and whatnot; not sure what all of its features are, but it is interesting as well.
If they get this more on par with Suno, they might for the first time ever take money from me since I left the Android / Google platform many moons ago.
nothinkjustai 13 hours ago [-]
[flagged]
giancarlostoro 13 hours ago [-]
Hmm? I make music that I enjoy for myself, is there a problem with that? I've also made music with a DAW over the years. There's plenty of people who use real equipment to make music or DJ who use Suno. If you're going to comment like this is reddit, maybe this isn't the site for you.
I can't use this google product to make a song. source for credentials is my youtube, soundclick, and soundcloud accounts, e.g. https://www.youtube.com/watch?v=HXro-e0e7aA
- Vol I (AdP)
- Vol II (AdP)
- Flying Microtonal Banana (KGatLW)
I just wanted to write this comment because it will be almost impenetrable to anyone who doesn't already know.
Now I'm only replying because I'll take any opportunity to prop King Gizzard and the Lizard Wizard, which is the third item of the original three dot points.
KG = KG and LW = KGatLW = King Gizzard and the Lizard Wizard.
I don't like all of KGatLW's music but, as someone who is also a big fan of Frank Zappa's extensive corpus of works, I love their versatility and their willingness to be versatile.
One of my favourite performances of KG: https://www.youtube.com/watch?v=MI_XU1iKRRc (I can't believe this is from ten years ago!)
https://www.youtube.com/watch?v=0Ssi-9wS1so
I think this is rather an AGI benchmark than a pelican.
When I first watched the AdP KEXP performance, it felt like my musical knowledge up to that point was just preparing me for AdP. I've been through microtonal (King Gizzard and the Lizard Wizard, especially their Flying Microtonal Banana album), polyrhythms and time changes (Tool, The Mars Volta, Metallica's St. Anger album, various other bits and pieces), looping (Party Dozen -ish-, Adam Page[0][1][2], various Math Rock bands, primarily Battles), and other general "out-there-ness" (Arthur Brown[3], Frank Zappa, Mr. Bungle[4]).
[0]: https://www.youtube.com/watch?v=95h3M6BG2QM
[1]: https://www.youtube.com/watch?v=kr1ykpNPOSg (17 years ago, holy shit)
[2]: https://www.youtube.com/watch?v=Er--olP_8wQ
[3]: https://www.youtube.com/watch?v=yXYgLPBfF_w
[4]: https://www.youtube.com/watch?v=E4YvvhKW7bA
The Adam Page stuff is all, essentially, stream of consciousness, unique to the individual performance. He had a monthly residency at a pub in the city a long while back, and I reckon I went to see him ten times over the course of a year and each performance was unique. I think they're all recorded on Bandcamp somewhere, hey, yes, here: https://adampage.bandcamp.com/music. I even named one of his songs: "Preventing a future disaster" (from September 2014, although it's misspelled). I find part of the enjoyment of looping music is in the performance itself, the combination of conducting and choreography.
I'm struggling to get a JJJ style "Like a Version" cover of Sleaford Mods doing King Stingray or vice versa .. no joy in the sense of it's a struggle to get anything that sounds anything like either band.
The text it generates midway sounds promising .. and then it plays audio that has zero of the unique elements of either.
It does explicitly state that it can't (or won't) imitate the style of a band(?).
I imagine that AI can be trained on Angine de Poitrine, but it would never really be able to create something like that no matter how many text prompts it was given.
So it's an evolutionary step in my view, rather than a revolution.
Before the purchase, the quality of generations had been going down for a while (IMO; subjective and anecdotal). I tested multiple iterations of their chat interface and was never thrilled with its ability to actually understand or adhere to prompts. I had liked their previous (Suno/Udio-like) iteration (Riffusion).
Curious to hear how it performs for people now and whether anything has improved.
Like a strange form of Gell-Mann amnesia: all AI output is probably this bad, but if we don't know any better, we don't know just how bad it is.
I know what music I like. I said I wanted "dystopian, ambient, droning music with ear-filling, warm bass. No drums or beat."
What came out was some pretty generic dubstep that one might hear on a Verizon commercial circa 2018. Subjective, sure, but a big miss given the instructions I gave it.
Now let's say I ask it to generate code to scrape all real estate listings that were recently taken off the market. The output looks good enough to me, and I'm happy. But is the underlying architecture just as bad as the music?
I think the SOTA on both fronts has already reached "exceptionally good", though.
https://www.youtube.com/watch?v=tpSC3XxhRwQ
This genre barely even exists from human artists AFAIK; Blackmore's Night (https://www.youtube.com/watch?v=U8mcqTScQoY) and Celtic Woman (https://www.youtube.com/watch?v=dhW1mh7U6-U) are the closest human examples I can think of to cross-reference against. I like those artists too but they have very few songs even remotely similar.
various genres: https://suno.com/s/Oc5842XzzuBTk4Ma https://suno.com/s/RdmFOKpbi4zyVbRf https://suno.com/s/J4Z8t8jU9JXVJ1DB https://suno.com/s/OhfzCYkmcZhFf1Pk https://suno.com/s/VYHHLW7Hkw2uHjrb https://suno.com/s/cTu7AkoOdAyi0eWz https://suno.com/s/QvOExImOVzo1b2Gl https://suno.com/s/MASINon9lGr9JPLS https://suno.com/s/ujpTfZwVdAKy9W0h https://suno.com/s/DwekDLuEzgyNpYGQ https://suno.com/s/psWqWzDQa6Aq96Pk https://suno.com/s/JEM8G2RxD35ZUpGy
also if you like enders game lol: https://suno.com/s/gQ8eGNgnkfktl0Xq
That's the craziest shit I've ever heard.
Nope. Not for me, not ever.
I guess to protect kids, we need to restrict them from... *checks notes* ...music?
Pythagoras argued that music is essentially number and proportion. If beauty is found in the geometry of sound, then the "belief" of the architect is secondary to the elegance of the structure.
Have you not heard music before? Is Suno your first experience with music-shaped sounds? Because, buddy, this is wild. You're not getting Rumours out of an AI. You're not getting Time (The Revelator) out of AI. London Calling does not spring from the geometry of sound.
I still don't think you're saying anything that refutes the geometry of sound argument, however. If you heard an AI song you liked, and didnt know it was AI, and found out after the fact, would you be rational enough to accept you could be wrong? Or would it turn you off to the song irrationally?
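The Pythagorean "music is ratio" claim above is easy to make concrete. As a minimal illustration (my own sketch, unrelated to any AI model in this thread): stacking perfect fifths (3:2) and folding them back into a single octave (2:1) generates a seven-note scale from small integers alone.

```python
# Pythagorean tuning: every interval is a ratio of small integers,
# built by stacking perfect fifths (3:2) and folding back into one octave.
def pythagorean_scale(steps=7):
    ratios = []
    r = 1.0
    for _ in range(steps):
        ratios.append(r)
        r *= 3 / 2        # stack a perfect fifth
        while r >= 2:     # fold back into the octave (2:1)
            r /= 2
    return sorted(ratios)

base = 440.0  # A4, in Hz
freqs = [base * r for r in pythagorean_scale()]
print([round(f, 1) for f in freqs])
```

The fifth degree lands on exactly 3/2 of the tonic (660 Hz over A440), which is the kind of "number and proportion" Pythagoras was pointing at.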
To name one, "Saving All My Love For You" should never be played at a wedding because the song is about having an affair with a married guy with kids. But no one listens to the Lyrics. They just hear the chorus. It's a hit for other reasons, not because of lyrics.
Similarly, few people listen to the lyrics of ""Rainy Day Women #12 & 35" (everybody must get stoned). It is not about drugs.
Heck, famously there's Bush Jr. (or more likely some PR person) using "Born in the USA" as a pro-American song. It's not a pro-America song at all. I wouldn't call it anti-American but it's definitely a song entirely about problems in America. Not praise.
https://en.wikipedia.org/wiki/Parents_Music_Resource_Center
Admittedly I went to a Christian high school, but we actually had a school intervention about kids listening to "dangerous music" like Nine Inch Nails.
I don't think any of us had on our bingo card that 30 years later Nine Inch Nails would be writing soundtracks for Disney movies.
"solo banjo instrumental, strictly no other instruments" ... ten seconds later: drums, a fiddle, and a guitar join in.
The workflow feels wrong. It should be closer to a DAW with chat, where the model outputs stems, samples, and arrangement parts instead of one finished track. Then you could target a specific sound, section, or idea and actually develop it.
They could attempt messy stem splitting like all of the other tools have done for a few years now, but those aren't really usable in a production setting beyond small samples you were already going to chop/distort.
I am not sure if it is there yet either, but imo your UX vision for it is the correct one, so if the tech is still not quite there yet, it is just a matter of time. But the AI-powered DAW UX is imo where it will eventually end up.
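For what it's worth, the stem-centric data model this sub-thread describes could be as simple as the hypothetical sketch below. All names here are illustrative; this is not how Flow, Suno, or any real product is actually built.

```python
from dataclasses import dataclass, field

@dataclass
class Stem:
    instrument: str    # e.g. "keys", "drums"
    section: str       # e.g. "intro", "chorus"
    audio_path: str    # rendered clip for this stem


@dataclass
class Arrangement:
    stems: list[Stem] = field(default_factory=list)

    def replace(self, instrument: str, section: str, new_path: str) -> bool:
        """Swap out one stem without regenerating the rest of the mix."""
        for s in self.stems:
            if s.instrument == instrument and s.section == section:
                s.audio_path = new_path
                return True
        return False
```

The point of the design is that a chat instruction like "redo just the chorus keys" maps to one `replace` call, rather than a full-track regeneration that risks forgetting the keyboards entirely.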
If the humans of the 2020s hadn't given up their souls long before this era of AI came out, we wouldn't be discussing this ad nauseam every single day for years on HN. There's no comparison. Humans are vastly superior at music.
People need to get a grip and get out there. I know a significant majority of people reading this are into enough hobbies to know at least one instrument well or can sing. They should just do it for fun and as loudly as possible everywhere they can get away with it. We used to celebrate that sort of thing as a necessary part of maintaining a civil and thoughtful world. Music is an activity before it is a product to be sold. AI is a recording technique built from lots of recordings.
One might have a silly counterargument like "what if there are microphones everywhere stealing my work?", but not enough people consider these days that the corporate world is absolutely terrified of trying to sell what is beautiful. It was considered too risky even when they believed in it with all their heart and were deliberately trying to. What makes the most money is the average, not the exceptional. There's no good excuse people put up against pursuing music other than neurotic irrationalities that come from being chronically online.
(edited for pronouns - overuse of "you")
I’m certainly no fan of the economic structure of major labels (or any capitalist entity), but despite that they do sometimes still release some good stuff.
I edited my comment to remove all the "you"s. I have a bad habit of using it in the "fourth person", so to speak.
> hidden microphones
I meant bootlegs. There was a story making the rounds recently about a huge archive of them, and then some discussions went off the rails about AI.
One of the key issues that I encountered after a few song generations is that it feels very rushed, like it's constrained to this 3 minute limit per song so it forces every section of the song to conform to a very specific structure. I tried increasing the limit to 4 minutes but it still gave me 3 minute songs.
Honestly, I feel like this product is showing up a bit late to the party and it's not really feeling particularly innovative. There's nothing egregiously bad about it, but it doesn't seem to add anything new or special that I could notice.
I don't have a microphone hooked up so I can't try the voice interface, but it would be really fun if you could sing to it in order to iteratively compose a song. It could clean up your voice a bit and add music. Or being able to hum out a beat which it converts into a track which you slowly build up. Is anyone able to try if those capabilities are possible with the existing product?
Overall, I'm not sure if a chat interface is the best way to produce a song. It feels very restrictive to have full songs as the primary iteration mechanism. In a text file and in code you can inspect or modify different sections or components very easily. I think a more human-focused tool would provide an on-ramp towards full music production, where you can focus on the parts that you care about and enjoy, while the AI tool fills the other parts with sausage. Right now you can chat with the tool but it appears to be quite limited in the kind of changes that it can make.
https://www.flowmusic.app/space/27dcebeb-7aae-4a0d-9031-308b...
This is really sweet as a tool for creating ways to create music. It's a shame you can just prompt -> finished project when it's so easy for anyone to create their own music with the right tools and motivation.
The music is all just very average, it sounds like the most average song with the most average chord progression/drum pattern per genre.
I guess that makes sense if these are most likely next audio token predictors... but it'd be cool if there was a way to inject some type of creativity/novelty into these, or at least tune up the temperature.
Everything so far just sounds like stock library music to me.
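"Tuning up the temperature" is a real knob in token-based generators, borrowed from text LLMs: logits are divided by a temperature before the softmax, which flattens or sharpens the sampling distribution. A stdlib-only toy sketch (the logits and vocabulary here are made up; real audio models sample over learned codec tokens):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    """Sample one token index from temperature-scaled softmax(logits)."""
    rng = rng or random.Random(0)
    t = max(temperature, 1e-6)
    scaled = [x / t for x in logits]
    m = max(scaled)                          # subtract max for stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    u = rng.random()                         # inverse-CDF sampling
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if u <= acc:
            return i
    return len(probs) - 1
```

Near-zero temperature collapses to argmax (the "most average" continuation every time); higher temperature picks rarer tokens more often, trading coherence for novelty, which is roughly the dial being asked for.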
Another was a set of songs that helped me emotionally regulate on the drive home after couples therapy. The lyrics contained grounding exercises that helped maintain awareness and presence and contained mindfulness practices.
Both did their job, but they were also music for utility, not necessarily for artistic enjoyment. So it's not entirely an apples to apples comparison.
By the time I finish writing this comment - yours is 10 minutes old - someone will have vibe coded one, probably.
Also feels like an easy feature for someone like Suno to add, to help subscription retention.
But something like NotebookLM emphasizing subtle mnemonic devices set to music..
Also, probably someone will game an algorithm to get revenue from a bajillion tracks of lofi slop.
Slop is starting to dominate uploads to some music services, so I think it will only get worse from here
I could understand if this was an API that people built products around, but it seems to be geared directly at consumers.
How many iterations (arrangements and recordings) do you think a typical Billboard pop song goes through before it's ready for a final mix and mastering?
Go find a YouTube of someone doing this work, it is kind of mind blowing. Given how expensive studio time is, you realize why it costs so much for a popular artist to produce a polished album.
Odds are, for every 200 AI songs you generate, 2 or 3 are decent.
Anyway. UMG will probably force you to sign over training rights in future record deals.
The models still can't rap. Sounds like if you asked someone who didn't know what rap was to read a script
I especially love the glitchy UI sounds, although I suspect they're hardly intentional.
Sloppy humans create sloppy output. The AI is just an amplifier, it has no motive
Yes! Go tell teenage kids to harmonize a melody for more than 4 instruments by typing, rather than by editing a sheet score, without any music theory background or ear training.
AI being "fun" for kids who aren't able to grasp the negative externalities of using AI doesn't make it something to admire
You recognize obvious slop as slop. But this is a survivor bias-like phenomenon. You have no idea what goes unnoticed.
If you're talking about stuff posted to communities that are self-selecting for AI art submissions, that's another kind of fallacy.
You. Are. Not. Creating. Anything.
You are prompting. Then tweaking, changing, adjusting, etc. The tech is incredible, don’t get me wrong, but it’s advertised so blatantly as the user doing the creating.
Use it as a creativity tool, but don’t get caught up in the false belief that what it spits out is something you created.
Old man yells at cloud. Going back to my cave now.