24 December 2009

De-Googlizing

Greets peeps. I’m moving this blog to my home server as part of my general de-Googlization. Its new and permanent home will be

http://noncombatant.org/blog/

Comments don’t work right now, but maybe they will someday!

23 December 2009

Grooveshark

Grooveshark seems pretty cool. Hopefully the ads (or the $3/month fee to get rid of them) will earn Grooveshark a profit. I still don’t see why we can’t just pay $N per month for unlimited legal downloading in lossless, but hey...

http://listen.grooveshark.com/#/album/Monk+Suite+Kronos+Quartet+Plays+Music+Of+Thelonious+Monk+With+Special+Guest+Artist+Ron+Carter/3092114

Ron Carter valiantly inserts some jazz into the scene, and it’s not as bad as what they did to Hendrix, so that’s something.

Multitouch Display as Music Interface

This is pretty cool (from Hack A Day): http://hackaday.com/2009/12/22/subcycles-multitouch-music-controller/. He seems to use it mainly to manipulate timbres, not so much melodies. Maybe that can come later? Anything to reduce the laptop-staring phenomenon.

05 July 2009

T-Bone Burnett Interview

In this interview, T-Bone talks about his new audio delivery system and music in general. Awesome!

Cancelling My Emusic.com Account; Sound Geekery

I’ve been a big fan of emusic.com for years, but it’s time to let them go. Not because they recently raised prices (they also expanded their catalog), but because all they offer is shitty MP3s.

Raising prices is fine, and I’m really glad for them that they’ve managed to expand their catalog. That must have involved intense negotiations with the major label goats.

However, even high bit-rate MP3s are noticeably worse than CDs or WAVs (do a back-to-back listening test on decent home stereo gear; it’s pathetic), yet emusic sells only 192Kbps (VBR) MP3s. Sorry, guys... CDs really were an improvement over cassette tapes.
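For scale, here’s the back-of-envelope arithmetic (my own numbers, using the standard CD format):

    # How much data does a 192Kbps MP3 keep relative to CD PCM?
    cd_bitrate = 44100 * 16 * 2       # samples/s x bits/sample x channels
    print(cd_bitrate)                 # 1411200 bits/s, about 1411 kbps
    print(cd_bitrate / 192000.0)      # ~7.35: the MP3 discards roughly
                                      # six-sevenths of the data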

A couple years ago I did an MP3 (192Kbps) vs. WAV listening test with some metal (Gojira’s “Ocean Planet”), and had a friend do the same experiment. I didn’t tell him what I heard, but he responded that he heard exactly what I did: the bass frequencies were “richer and fuller” on the WAV (ripped from CD). I also thought the stereo imaging suffered in the MP3. The end result was that the tricky interplay between the drums and the bass guitar was muted in the MP3 — I think the sound difference actually affected the musical content of the song. And that was metal; the problem is worse for more subtle music.

Emusic subscribers are asking about quality, too. They aren’t getting any answers.

Lossy compression is dead. It was a solution to a problem that no longer exists: poor bandwidth and expensive storage. In 2009, we get megabits per second to the home and a GB of storage is $0.10 or less.
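To put numbers on that (a sketch with my own assumptions: a 50-minute album, $0.10 per GB, an 8 Mbit/s home connection):

    album_seconds = 50 * 60
    wav_bytes = album_seconds * 44100 * 2 * 2    # 16-bit stereo PCM
    print(wav_bytes / 1e9)                       # ~0.53 GB uncompressed
    print(wav_bytes / 1e9 * 0.10)                # ~$0.05 to store it
    print(wav_bytes * 8 / 8e6 / 60)              # ~8.8 minutes to download;
                                                 # lossless FLAC roughly
                                                 # halves both figures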

Add to this the fact that quality control suffers (almost every audiobook I’ve downloaded from emusic has had at least one terrible error; the emusic commenters complain about this too) and that parts of their catalog have disappeared (I can’t re-download some stuff I got before), and emusic no longer looks like such a good deal. I’ll spend my $30 per month at Amoeba instead.

In fact, even “CD-quality” sound (44,100 16-bit samples per second) is really not good enough to capture all the sound a human can hear. Producer and musician T-Bone Burnett has started releasing albums on DVDs with 96,000 24-bit samples per second. I hope, though I doubt, that it will take off.

According to my handy Computer Music Tutorial, the dynamic range (in decibels) is roughly 6 times the sample width in bits, and to avoid aliasing (high frequencies mangled into lower frequencies) you need to sample at a rate at least twice the highest pitch you’re trying to record. (See here for more nerd details.) Although the theoretical maximum of 16/44.1 recording is pretty damn good, and although the loudness war does more damage than the digitization process does, you never really get the theoretical maximum. Digital recording is done at 24/96 (or even better), and it’s only downsampled and truncated in the last stage to fit the CD format.
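Those two rules of thumb are easy to check (a quick sketch; the per-bit figure is more precisely 6.02 dB):

    def dynamic_range_db(bits):
        # roughly 6 dB of dynamic range per bit of sample width
        return 6.02 * bits

    def min_sample_rate(highest_hz):
        # Nyquist: sample at least twice the highest frequency you want
        return 2 * highest_hz

    print(dynamic_range_db(16), dynamic_range_db(24))  # ~96 dB vs ~144 dB
    print(min_sample_rate(20000))  # 40000; 44.1 kHz clears the ~20 kHz
                                   # limit of human hearing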

16/44.1 can sound very good indeed... but in 2009, that’s the minimum.

Does Twitter Have a Better Business Model Than Warner Music?

From Techdirt (http://techdirt.com/articles/20090623/2337095343.shtml):

We keep talking about artists who are connecting with fans, and giving them a reason to buy, and it seems like every day we hear of more and more new and creative ways that artists are doing this — even as the naysayers stop by daily to insist it’s impossible for such things to scale. It’s a blast to see it scale more and more every day and prove them wrong. The latest example comes from Amanda Palmer — who we’ve written about a few times before. She’s the singer who has been fighting with her major record label (Warner Music’s Roadrunner) for not just being a pain to deal with, but for making it harder for her to both connect with fans and give them reasons to buy. For example, she got caught in Warner’s stubborn decision to fight YouTube over payments, and had all her videos taken down from YouTube against her wishes. So, at a concert, she told fans to upload the video to YouTube as she sang a song begging her label to drop her.

[...]

However, now she’s going much further, much of it using Twitter to closely connect with fans. She recently explained three separate experiments, all done on a whim this month, which allowed her to bring in $19,000 [...]

Oops.

In related news, I recently saw Suffocation and Necrophagist (Suffocation is on Roadrunner). Awesome show. I went with Phil, who had a video on YouTube of himself drumming along to a Suffocation song. He got caught in the Warner jihad, and only after contacting Warner, Roadrunner, the founders of YouTube, and Suffocation did he finally get his video back up. More than 3 million views, and Warner wants to shoot themselves in the foot. So Suffocation put him on the guest list and gave him some all-access badges, and they were his biggest fans! It was an adorable love-fest. Fuck you, Warner!



On the other hand, this story on Techdirt leaves a lot of questions unanswered, as the commenters point out.

21 February 2009

Music and Learning

I recently started taking guitar lessons with the patient and wise Luke Westbrook. I’ve probably taken about 8 years of private music lessons, on and off, since I was 12. It’s great to be back into it!

My current projects are (a) to transcribe the entirety of Vernon Reid’s tune “Afrerika”, including the guitar solo (I’ve got the tune and the first couple bars of the solo); (b) to match some hip new chords to Ornette Coleman’s “Jayne” (Fsus9, #4 under a G melody? Maybe!); and (c) to learn a chord-melody arrangement of Coltrane’s “Naima”.

Over the holiday break I took a bass lesson from my friend Al Vorse in Minneapolis, and that was helpful too: a good technique exercise and some tips for walking over changes.

My old friend John was asking why I would take lessons again, after I’ve already taken so many. It’s a reasonable question, because after all you can play lots of good music and have good fun with much less education than I have. Many people do.

But I’m into music for the long haul, and there is always something more to learn. In my case, tons more to learn. My jazz education is pretty incomplete, and there are lots of technique things I’d like to clean up, such as playing everything without a pick.

It’s also important for me to concentrate on something outside of work, because my work is pretty involved and I could easily spend every waking hour doing software security engineering stuff. (Like music, it’s bottomless.) Gotta keep the brain flexible!

Making the Most Out of Simple Gear

After too long a hiatus, I’ve been playing music with people again lately. Last weekend I jammed with my co-worker Chris, and I brought almost all of my effects pedals. It was a mess! Too many wires, too much complexity, not enough reliability. Part of the problem was that I had them all loose, not mounted on a pedal board, but the rest of the problem was just the sheer number. Like Floyd Rose wang bars, that kind of setup is for people with roadies!

Then today I was jamming with a new group, and I brought only a fuzz box, my tuner, and the venerable Boss PS-2 Pitch Shifter/Delay. Three pedals feels about right. This evening I rolled them all up into a proper pedal board. I was thinking, “Perfect... but a chorus pedal would be nice.”

Then I realized that with the pitch shifter, I can get a chorus effect. I put it into manual pitch-shift mode and dial the fine-tune knob up to a unison harmony. Then, extremely carefully, I dial it ever so slightly flat. It’s easy to dial it a hair too far and get a deep warble swim effect (also cool).
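For the curious, here’s roughly what the pedal is doing, as a toy NumPy sketch of my own (certainly not Boss’s actual algorithm): play a copy of the signal a few cents flat and mix it with the dry signal.

    import numpy as np

    def detune_chorus(x, sr=44100, cents=-8.0, delay_ms=15.0, mix=0.5):
        # Mix a mono signal with a copy played back a few cents flat.
        ratio = 2.0 ** (cents / 1200.0)  # -8 cents -> ~0.9954x playback speed
        pos = np.arange(len(x)) * ratio  # read pointer, creeping behind
        lo = np.clip(np.floor(pos).astype(int), 0, len(x) - 2)
        frac = pos - lo
        wet = (1 - frac) * x[lo] + frac * x[lo + 1]  # linear interpolation
        delay = int(sr * delay_ms / 1000.0)          # small fixed pre-delay
        wet = np.concatenate([np.zeros(delay), wet])[:len(x)]
        # The flat copy slowly falls behind the dry signal, as in a real
        # pitch shifter's buffer; a pedal wraps its buffer to bound that
        # lag, which this toy version skips.
        return (1 - mix) * x + mix * wet

Push cents further flat and you get the deep warble swim mentioned above.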

For non-music-nerd readers: you’ll likely recognize the chorus effect as the sound Nirvana used in their song “Come As You Are”. I recorded a snippet of it, starting without chorus; the chorus kicks in halfway through.

http://www.noncombatant.org/audio/come-as-you-are-excerpt.mp3

Here is another, more complete example (an excerpt from the song “Nouvelle Chanson” by my old band Boshuda), again with chorus off at first and then on:

http://www.noncombatant.org/audio/nouvelle-chanson-excerpt.mp3

So now the Pitch Shifter/Delay is really three effects: echo, harmonizer, and a credible chorus. Three is a good number!

02 February 2009

100-year Software: Half-baked musings and some citations

Imagine a computer program which, when run 100 years from now on whatever computers they have in 100 years, produces the same output (given the same input) as that program does today.

Should the data survive, even if new software is required to interpret it? And/or should the software itself survive, still functional? I think it depends on the nature of the application: whether it is productive (Microsoft Word) or performative (a video game). The really hard problems arise when the data format is so complex and/or incompletely specified as to require an essentially performative application (examples: Microsoft Word, web applications).



  • http://www.gseis.ucla.edu/~howard/papers/sfs-longevity.html

    With a vast number of resources being committed to reformatting into digital form, we need to begin considering how we can assure that that digital information will continue to be accessible over a prolonged period of time. In this chapter we will first outline the general problem of information in digital form disappearing. We will then look closely at 5 key factors that pose problems for digital longevity. Finally, we will turn our attention to a series of suggestions that are likely to improve the longevity of digital information, focusing primarily on metadata. Though this chapter was written for the digital imaging community, the observations here will be useful for all communities wishing to assure the longevity of any type of digital information.


    In particular, this tragedy makes me sick:

    Though the advent of electronic storage is fairly new, a substantial amount of information stored in electronic form has deteriorated and disappeared. Archives of videotape and audiotape such as fairly recent interviews designed to capture the last cultural remnants of Navajo tribal elders may not be salvageable (Sanders 1997).


    How can we ever hope that the files we create today will be readable in our information environments 100 years from now?


  • http://constantine-plotnikov.blogspot.com/2007/02/software-system-longevity-paradigms.html

    The basic principle is that we ensure that the data survives, and an application is a transient thing anyway. It could die any time. Upon restart it will be able to work with the data again. Some data could be lost, but this is a known risk that should have been calculated.


  • http://lonesysadmin.net/2008/05/20/java-se-for-business-software-longevity/

    Now that virtual machines are killing the hardware replacement cycle I’m left with only my software lifecycles, which really aren’t all that much better than hardware cycles. If those get longer, and I can guarantee an operating environment for 15 years, the amount of staff time and effort it takes to maintain these operating environments will drop rapidly. I’ll be able to upgrade when it makes more business sense for me, like when I’m replacing an application, or I decide it’s too much work to support 7 different versions of Red Hat Enterprise Linux. Not just when a vendor decides they’re done with an OS.


  • http://ieeexplore.ieee.org/Xplore/login.jsp?url=/iel4/52/15098/00687939.pdf?temp=x

    Software lives longer than most organizations expect — a mean age of 9.4 years for applications of fundamental importance to the organization, according to one study. And it is living longer than before, up from 4.75 years in 1980. Nonetheless, software should live longer yet. Long-living software has many advantages. First, as a software application survives, it works. It benefits the organization that created it and the users that use it, and it pays back its development cost. Second, as a software application survives, it changes continually, functionality being added and modified to meet changing needs. In this continual evolution or maintenance, software fulfills one of its characterizing functions: its modifiability, its capacity for change, its softness. Functions are embodied in software instead of in hardware expressly because they can be changed. Change, and the resources that go into change, are its mission. Finally, as a software application survives, its quality improves. Errors are encountered or found, and removed. An operational profile emerges, and the software is adapted to it. The users who access it and the applications that connect to it explore, exploit, and optimize its capabilities


  • http://portal.acm.org/citation.cfm?id=284308.284365&dl=GUIDE&dl=GUIDE&CFID=20195306&CFTOKEN=59168537

  • http://www.cs.berkeley.edu/~yelick/cs267-sp04/lectures/08/lect08-mpi-intro.pdf

    Interesting remarks on slide 1.

Acrassicauda: Iraqi metal band

The NYT has this fun article about Iraqi metalheads Acrassicauda:

Vice tried to help resettle the members to Canada and Germany, and kept them afloat with cash — as much as $40,000 paid from Vice’s own coffers, sponsors and donations collected online, according to Suroosh Alvi, a founder of the company and one of the directors of the film.

“We had outed them and endangered their lives,” Mr. Alvi said on the way to the Prudential Center, where a small Vice crew was filming every handshake and wide-eyed glimpse of Metallica’s mountains of equipment. “They were receiving threats from Iraq while they were in Syria.” He added, “We had a responsibility.”

31 December 2008

That Just About Sums It Up

Following the Wikipedia links around intertwingularity, I visited Ted Nelson’s page.

Ted Nelson promotes four maxims: “most people are fools, most authority is malignant, God does not exist, and everything is wrong”.


But, Ted, some things aren’t wrong...

Meshuggah and Cynic (!!!) at Slim's on 4 Feb 2009

OH HELLZ YES. I just bought my ticket. Actually I bought 3 dinner tickets — the sound is better in the back where the foodz is.

Intent on decimating the boundaries of extreme music with their metric art, Sweden’s Meshuggah will be returning to North America in February on a 17-show headlining tour presented by MySpace Music. Direct support to Meshuggah will be provided by the legendary progressive metal band Cynic. Opening all shows will be technical progressive death metallers The Faceless from LA.

More on Digital Archival Storage

“Bit Preservation: A Solved Problem?” by David Rosenthal discusses the problems with our current understanding of the reliability of data storage systems. He examines the (comical) claims of storage system vendors and of optimistic researchers, shows that those claims are meaningless and untestable, and proposes a new metric: “bit half-life”.

The most abstract model of a bit preservation system is as a black box, into which a string of bits S(0) is placed at time T(0) and from which at subsequent times T(i) a string of bits S(i) can be extracted. The system is successful if S(i) = S(0) for all i.

No real-world system can be perfect and eternal, so real systems will fail. The simplest model of these failures is analogous to the decay of radioactive atoms. Each bit in the string independently is subject to a random process that has a constant small probability per unit time of causing its value to flip. The time after which there is a 50% probability that a bit will flip is the “bit half-life”.

[...]

There is no escape from the problem that the size of the data collections to be preserved and the times for which they must be preserved mean that experimental confirmation that the technology chosen is up to the job is not economically feasible. Even if it was the results would not be available soon enough to be useful. What this argument demonstrates is that, far from bit preservation being a solved problem, it is in a very specific sense an unsolvable problem.
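To make “bit half-life” concrete, here is a back-of-envelope sketch of my own (not from Rosenthal’s paper): under his radioactive-decay model, how long a half-life do you need for a petabyte to have even odds of surviving a century?

    import math

    def required_half_life(n_bits, years, p_survive=0.5):
        # Each bit survives time t with probability 2**(-t/h), so the
        # whole string survives with probability 2**(-n*t/h) = p,
        # giving h = -n * t * ln(2) / ln(p).
        return -n_bits * years * math.log(2) / math.log(p_survive)

    petabyte = 8e15   # bits
    print("%.1e years" % required_half_life(petabyte, 100))
    # 8.0e+17: even odds of keeping one petabyte intact for a century
    # needs a bit half-life of about 8e17 years, far beyond anything an
    # experiment could confirm, which is exactly Rosenthal's point.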