24 December 2009
23 December 2009
Ron Carter valiantly inserts some jazz into the scene, and it’s not as bad as what they did to Hendrix, so that’s something.
This is pretty cool (from Hack A Day): http://hackaday.com/2009/12/22/subcycles-multitouch-music-controller/. He mainly seems to manipulate timbres with it, but not so much melodies. Maybe that can come later? Anything to reduce the laptop staring phenomenon.
05 July 2009
Raising prices is fine, and I’m really glad for them that they’ve managed to expand their catalog. That must have involved intense negotiations with the major label goats.
However, even high bit-rate MP3s are noticeably worse than CDs or WAVs — do a back-to-back listening test on decent home stereo gear, it’s pathetic — but emusic sells only 192Kbps (VBR) MP3s. Sorry, guys... CDs really were an improvement over cassette tapes.
A couple years ago I did an MP3 (192Kbps) vs. WAV listening test with some metal (Gojira’s “Ocean Planet”), and had a friend do the same experiment. I didn’t tell him what I heard, but he responded that he heard exactly what I did: the bass frequencies were “richer and fuller” on the WAV (ripped from CD). I also thought the stereo imaging suffered in the MP3. The end result was that the tricky interplay between the drums and the bass guitar was muted in the MP3 — I think the sound difference actually affected the musical content of the song. And that was metal; the problem is worse for more subtle music.
Emusic subscribers are asking about quality, too. They aren’t getting any answers.
Lossy compression is dead. It was a solution to a problem that no longer exists: poor bandwidth and expensive storage. In 2009, we get megabits per second to the home and a GB of storage is $0.10 or less.
Add to this the fact that quality control suffers (almost every audiobook I’ve downloaded from emusic has had at least one terrible error — the emusic commenters complain too) and parts of their catalog have disappeared (I can’t re-download some stuff I got before), and emusic is no longer looking like such a good deal. I’ll spend my $30 per month at Amoeba instead.
In fact, even “CD-quality” sound (44,100 16-bit samples per second) is really not good enough to capture all the sound a human can hear. Producer and musician T-Bone Burnett has started releasing albums on DVDs with 96,000 24-bit samples per second. I hope, although doubt, that it will take off.
According to my handy Computer Music Tutorial, the dynamic range (in decibels) is roughly 6 times the sample width in bits, and to avoid aliasing (high frequencies mangled into lower frequencies) you need to sample at a rate at least twice as high as the highest frequency you’re trying to record. (See here for more nerd details.) Although the theoretical maximum of 16/44.1 recording is pretty damn good, and although the loudness war does more damage than the digitization process does, you never really get the theoretical maximum. Digital recording is done at 24/96 (or even better), and it’s only downsampled and truncated in the last stage to fit the CD format.
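For the number-curious, here’s a quick sketch of that rule of thumb in Python (my own illustration, using the slightly more precise 6.02 dB-per-bit figure rather than the tutorial’s round 6):

```python
# Back-of-the-envelope numbers behind the 16/44.1 vs. 24/96 comparison.
# Dynamic range of linear PCM is roughly 6.02 dB per bit of sample width,
# and the Nyquist criterion says the sample rate must be at least twice
# the highest frequency you want to capture.

def dynamic_range_db(bits):
    """Approximate dynamic range of linear PCM, in decibels."""
    return 6.02 * bits

def max_frequency_hz(sample_rate_hz):
    """Highest frequency representable without aliasing (Nyquist limit)."""
    return sample_rate_hz / 2

for bits, rate in [(16, 44_100), (24, 96_000)]:
    print(f"{bits}-bit / {rate} Hz: ~{dynamic_range_db(bits):.0f} dB "
          f"dynamic range, frequencies up to {max_frequency_hz(rate):.0f} Hz")
```

So CD format tops out around 96 dB and 22,050 Hz — just past the edge of human hearing — while 24/96 leaves generous headroom for the recording and mixing stages.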
16/44.1 can sound very good indeed... but in 2009, that’s the minimum.
We keep talking about artists who are connecting with fans, and giving them a reason to buy, and it seems like every day we hear of more and more new and creative ways that artists are doing this — even as the naysayers stop by daily to insist it’s impossible for such things to scale. It’s a blast to see it scale more and more every day and prove them wrong. The latest example comes from Amanda Palmer — who we’ve written about a few times before. She's the singer who has been fighting with her major record label (Warner Music’s Roadrunner) for not just being a pain to deal with, but for making it harder for her to both connect with fans and give them reasons to buy. For example, she got caught in Warner's stubborn decision to fight YouTube over payments, and had all her videos taken down from YouTube against her wishes. So, at a concert, she told fans to upload the video to YouTube as she sang a song begging her label to drop her.
However, now she’s going much further, much of it using Twitter to closely connect with fans. She recently explained three separate experiments, all done on a whim this month, which allowed her to bring in $19,000.
In related news, I recently saw Suffocation and Necrophagist (Suffocation is on Roadrunner). Awesome show. I went with Phil, who had a video on YouTube of himself drumming along to a Suffocation song. He got caught in the Warner jihad, and only after contacting Warner, Roadrunner, the founders of YouTube, and Suffocation did he finally get his video back up. More than 3 million views, and Warner wants to shoot themselves in the foot. So Suffocation put him on the guest list and gave him some all-access badges, and they were his biggest fans! It was an adorable love-fest. Fuck you, Warner!
On the other hand, this story on Techdirt leaves a lot of questions unanswered, as the commenters point out.
21 February 2009
My current projects are to (a) transcribe the entirety of Vernon Reid’s tune “Afrerika”, including the guitar solo (I’ve got the tune and the first couple bars of the solo); (b) to try to match some hip new chords to Ornette Coleman’s “Jayne”. (Fsus9, #4 under a G melody? Maybe!); and (c) to learn a chord-melody arrangement of “Naima” by Coltrane.
Over the holiday break I took a bass lesson from my friend Al Vorse in Minneapolis, and that was helpful too: a good technique exercise and some tips for walking over changes.
My old friend John was asking why I would take lessons again, after I’ve already taken so many. It’s a reasonable question, because after all you can play lots of good music and have good fun with much less education than I have. Many people do.
But I’m into music for the long haul, and there is always something more to learn. In my case, tons more to learn. My jazz education is pretty incomplete, and there are lots of technique things I’d like to clean up, such as playing everything without a pick.
It’s also important for me to concentrate on something outside of work, because my work is pretty involved and I could easily spend every waking hour doing software security engineering stuff. (Like music, it’s bottomless.) Gotta keep the brain flexible!
Today I was jamming with a new group, and I brought only a fuzz box, my tuner, and the venerable Boss PS-2 Pitch Shifter/Delay. Three pedals feels about right. This evening I rolled them all up into a proper pedal board. I was thinking, “Perfect... but a chorus pedal would be nice.”
Then I realized that with the pitch shifter, I can get a chorus effect. I put it into manual pitch shift mode, then dial up the fine-tuner knob to unison harmony. Then, extremely carefully, I dial it ever so slightly flat. It’s easy to dial it a hair too far and get a deep warble swim effect (also cool).
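For the curious, here’s a tiny numpy sketch of why that hair-flat unison copy sounds like chorus. (This is my own illustration of the principle, not the PS-2’s actual algorithm.)

```python
import numpy as np

SR = 44_100  # sample rate, Hz

def detuned_mix(freq_hz, cents_flat, seconds=1.0):
    """Mix a tone with a copy detuned slightly flat.

    The flat copy drifts continuously in and out of phase with the dry
    signal, so the two alternately reinforce and cancel — the slow
    sweeping shimmer we hear as chorus. Dial the detune too far and the
    drift speeds up into that deep warble."""
    t = np.arange(int(SR * seconds)) / SR
    ratio = 2 ** (-cents_flat / 1200)      # flat by `cents_flat` cents
    dry = np.sin(2 * np.pi * freq_hz * t)
    wet = np.sin(2 * np.pi * freq_hz * ratio * t)
    return 0.5 * (dry + wet)

# A 440 Hz tone with a copy 10 cents flat beats about 2.5 times a second:
sig = detuned_mix(440.0, 10.0)
beat_hz = 440.0 * (1 - 2 ** (-10 / 1200))
print(f"beat rate: {beat_hz:.2f} Hz")
```

On a real guitar signal every harmonic beats at its own rate, which is why the result sounds like a swirl rather than a simple tremolo.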
For non-music-nerd readers, you’ll likely recognize the chorus effect as the sound Nirvana used in their song “Come As You Are”. I recorded a snippet that starts without chorus; the chorus kicks in halfway through.
Here is another, more complete example (an excerpt from the song “Nouvelle Chanson” by my old band Boshuda), again with chorus off at first and then on:
So now the Pitch Shifter/Delay is really three effects: echo, harmonizer, and a credible chorus. Three is a good number!
02 February 2009
Should the data survive, even if new software is required to interpret it? And/or should the software itself survive, still functional? I think it depends on the nature of the application: whether it is productive (Microsoft Word) or performative (a video game). The real hard problems arise when the data format is so complex and/or incompletely specified as to require an essentially performative application (examples: Microsoft Word, web applications).
With a vast number of resources being committed to reformatting into digital form, we need to begin considering how we can assure that that digital information will continue to be accessible over a prolonged period of time. In this chapter we will first outline the general problem of information in digital form disappearing. We will then look closely at 5 key factors that pose problems for digital longevity. Finally, we will turn our attention to a series of suggestions that are likely to improve the longevity of digital information, focusing primarily on metadata. Though this chapter was written for the digital imaging community, the observations here will be useful for all communities wishing to assure the longevity of any type of digital information.
In particular, this tragedy makes me sick:
Though the advent of electronic storage is fairly new, a substantial amount of information stored in electronic form has deteriorated and disappeared. Archives of videotape and audiotape such as fairly recent interviews designed to capture the last cultural remnants of Navajo tribal elders may not be salvageable (Sanders 1997).
How can we ever hope that the files we create today will be readable in our information environments 100 years from now?
The basic principle is that we ensure that the data survives, and an application is a transient thing anyway. It could die any time. Upon restart it will be able to work with the data again. Some data could be lost, but this is a known risk that should have been calculated.
Now that virtual machines are killing the hardware replacement cycle I’m left with only my software lifecycles, which really aren’t all that much better than hardware cycles. If those get longer, and I can guarantee an operating environment for 15 years, the amount of staff time and effort it takes to maintain these operating environments will drop rapidly. I’ll be able to upgrade when it makes more business sense for me, like when I’m replacing an application, or I decide it’s too much work to support 7 different versions of Red Hat Enterprise Linux. Not just when a vendor decides they’re done with an OS.
Software lives longer than most organizations expect — a mean age of 9.4 years for applications of fundamental importance to the organization, according to one study. And it is living longer than before, up from 4.75 years in 1980. Nonetheless, software should live longer yet. Long-living software has many advantages. First, as a software application survives, it works. It benefits the organization that created it and the users that use it, and it pays back its development cost. Second, as a software application survives, it changes continually, functionality being added and modified to meet changing needs. In this continual evolution or maintenance, software fulfills one of its characterizing functions: its modifiability, its capacity for change, its softness. Functions are embodied in software instead of in hardware expressly because they can be changed. Change, and the resources that go into change, are its mission. Finally, as a software application survives, its quality improves. Errors are encountered or found, and removed. An operational profile emerges, and the software is adapted to it. The users who access it and the applications that connect to it explore, exploit, and optimize its capabilities.
Interesting remarks on slide 1.
Vice tried to help resettle the members to Canada and Germany, and kept them afloat with cash — as much as $40,000 paid from Vice’s own coffers, sponsors and donations collected online, according to Suroosh Alvi, a founder of the company and one of the directors of the film.
“We had outed them and endangered their lives,” Mr. Alvi said on the way to the Prudential Center, where a small Vice crew was filming every handshake and wide-eyed glimpse of Metallica’s mountains of equipment. “They were receiving threats from Iraq while they were in Syria.” He added, “We had a responsibility.”