- Ugh, my circadian rhythm is all off today, what with daylight saving time. Yawn
- Circadian situation perhaps worsened by the weekend’s fun: On Saturday, got together with 5 other Los Angeles tea-drinkers, and together we held a tasting of 12 different teas, drinking ~50 cups per person. The “theme” of the day was supposedly “light wuyi”, but we sampled a large range, from da hong pao to tgy to puerh to dancong. Will hopefully compile some more coherent tasting notes soon, as it was quite an experience. The thing that struck me most of all was how counter-culture it all felt—how does drinking traditional Chinese teas become a thing like punk rock, in the right cultural context?
- This also marks the second weekend of my decision to take a leave of absence from my studies at USC. I am totally enjoying a life where I can enjoy my weekends without guilt. Between tea on Saturday, doing a disc brake job on my old Civic Sunday, and cooking dinner with Mindy for my folks down in OC Sunday evening, it’s great to be able to enjoy life a bit. Being a grad student offers lots of time freedoms on the day-to-day scale (want to take a random morning off to run errands? want to take a random afternoon off to go to the beach? both not a problem), but it’s pretty draining on the macro level (want to enjoy a full weekend stress-free? good luck with that impending thesis or conference paper floating over your head!). This new life is a nice change…
Time
12-Mar-07
Magritte Thoughts
03-Mar-07
Went to see the Magritte exhibit at LACMA today. Thoughts:
- I’d never thought of the parallels between The Red Model and cyborgs/cybernetic enhancement before =).
- Likewise, I stared at The Human Condition for a good 20 minutes, and couldn’t help thinking about AI agents and world models
- Everything in the Treachery of Images section made me really appreciate the classes I’ve taken on semiotics/semantics. If I ever teach a linguistics course, I’d like to spend a class day looking at Magritte paintings, discussing the relationship between label, portrayal, and thing, and their respective roles in communication.
Darjeeling Oolongs
28-Feb-07
As a break from Artificial Intelligence ponderings, I was recently mailed a sampler of darjeeling oolong teas so that I could participate in an “on-line tasting”. Thanks to T-Ching and Phyll Sheng for arranging the tasting. And, finally, a huge thanks to Lochan Tea for providing the leaves, and for pioneering this new form of tea production.
First, an explanatory note: Oolong teas are “partially fermented”, in that they’re halfway between unfermented green tea and completely fermented black tea. Typically, Oolongs are produced in Taiwan and Southern China. Darjeeling, India, by contrast, has traditionally produced black teas. A while ago, the enterprising and globalizing Lochan growers decided to try preparing their darjeeling leaves using traditional Chinese methods to produce Oolongs.
The three samples I tasted were quite exciting. Definitely a fusion of tastes, different from both typical Darjeeling and typical Oolong, but maintaining enough qualities of each that you can tell that it’s a mix of the two.
I brewed all three of these by Lochan’s provided recommendations (1 cup water with 1 teaspoon dry leaves, brewed for 3-4 minutes, 3 brewings) rather than typical Chinese style (higher leaf:water ratio, shorter brewing time, more brewings). Once I have a bit more time I’d like to go back and try brewing again, but using the Gong Fu method. Just curious.
All three were brewed using Glacier Springs water (good water is a requisite when tasting tea, and I like the flavors that a high mineral content water brings out). The teas were rated on a scale of 1 (worst) to 5 (best), with a focus on flavor/smell/huigan rather than leaf appearance.
Here are my tasting notes:
nutty oolong
- glacier springs water, 100 degrees, 240ml, 1 teaspoon nutty oolong
- first brew (4 minutes)
- leaves broken
- smell very sweet (over-ripe fruit) with a little grassiness.
- definitely tell it’s darjeeling, though not as astringent smelling.
- definitely not “oolong” tasting (or, not like the formosan oolongs I drink). I wouldn’t call it oolong. it’s like darjeeling with that same fruity/flowery overtone
- it’s quite smooth, but much too light for my liking (love the full body of darjeeling). will try increasing steeping time, see if i get more body with the same smoothness.
- astringent huigan with a flavor that lingers for half a minute, but not more
- dirty gold color
- score 3
- second brew (5 minutes)
- maintains color, but despite longer brewing, even less flavor than before.
- astringency more pronounced
- honestly, not really enjoyable.
- score 1
- third brew (4 minutes, 120ml)
- more vibrant flavor—a little more of the fruitiness has returned (should have only used 2/3 cup water all along perhaps), but you can still tell that the leaves are “used”. astringent.
- score 2
moonlight oolong
- glacier springs water, 100 degrees, 240ml, 1.5 teaspoons moonlight oolong
- (after the prior tasting, decided to increase the amount of leaves per water)
- first brew
- (4 minutes)
- very light aroma, not as overtly fruity/fragrant as the nutty oolong. more subdued, more of a “classic oolong” smell.
- lighter amber color with a touch of green.
- unfortunately, misses on both the full-body of the darjeeling and the complexity of oolongs.
- aftertaste is pleasant, pure darjeeling, with a slightly longer huigan than a typical darjeeling. nice. fades quickly.
- like last, very dry mouthfeel.
- score 2
- second brew
- (4 minutes)
- the leaves have “woken up” now. sweeter, richer, fuller taste. Darjeeling, with a little bit of fruity/nuttiness above it. Still very straightforward, though (not as complex as I’d hope a good oolong to be). Pleasant astringency like a good oolong. The aftertaste is still short and has lots of “darjeeling” flavor.
- score 3.75
- third brew
- wish I knew what the difference was the second time; the leaves are back to the banality of the first brew.
- little bit of sweetness in the taste at the onset, but fades to very astringent, lightly colored water. You can tell the leaves are used up.
- not too much aftertaste/huigan.
- score 2
- first and third brewings are disappointing, but the second one was so good, I’d like to try this again using “gong fu” style and see what happens.
snow oolong
- glacier springs water, 100 degrees, 240ml, 1.5 teaspoons snow oolong
- flowery/nutty smell rather than fruity
- first brew (3 minutes 20 seconds)
- this is by far the best of the 3. glad I saved it for last.
- starts with a rich flowery/nutty taste and finishes with classic darjeeling taste. complex and sweet.
- slight huigan, which is the only thing really lacking in this tea. will try brewing it for 4 minutes the next time to attempt to elicit more.
- dries the mouth, but pleasantly.
- score 4.5. Really enjoyed this one.
- second brew (4 minutes)
- the 4 minutes did its work, the tea is dark.
- floral taste still very strong at the onset, but the nuttiness and oolong tastes are faded quite a bit. mediocre finish.
- likewise still a mediocre huigan (a bit richer than before, with the lingering sweetness from the beginning plus a bit of darjeeling flavor, but definitely not as much aftertaste as I’d hope for from a formosan oolong brewed this long)
- score 3 (still decent)
- third brew (3.5 minutes)
- moderate floral taste, tea is obviously faded (perhaps the longer brewings took a toll on the leaves?). still smooth, not bitter. but very toned down compared to the first brewing. The aftertaste is a little better this time, a little more pleasantly permanent compared to the prior two brewings.
- tea is ok. better than “restaurant tea” but definitely faded quickly compared to the later brewings of formosan oolongs. that seems to be a trend with these darjeeling oolongs.
- score 2.5
“Where is my cure for this disease?”
12-Feb-07
Thinking a bit about AI this weekend. 30 years ago, we tried to imagine what life would be like in 2010. Intelligent Agents, Strong AI, etc etc. It’s a bit disheartening that the pinnacle of AI we have to show for our efforts is things like PageRank and phrase-based statistical machine translation.
Not to say that either of these algorithms is bad—on the contrary, they accomplish exactly what they set out to do, and they do it well. But there’s no magic to them. No glimmer of human-like intelligence behind them. They show us that the major accomplishment of AI for these past few decades is one of statistics (modeling measurable phenomena, like the trust metric of a website or how a word in one language corresponds to a word in another) rather than one of intelligence in the more generalizable sense.
I suppose this isn’t a bad thing—the things we build do what they were built to do, after all—but still, the idealistic part of me that grew up reading Asimov and Heinlein… that part of me can’t help but wish that submarines could swim.
Building a Smarter Feedreader
10-Feb-07
I ran across Leonard Richardson’s Ultra Gleeper again yesterday. I hadn’t seen it for a year, and it’s been good looking at it with new eyes since I’ve begun hacking in earnest on machine learning problems and on measuring the “interestingness” of RSS posts.
The project is interesting because he aims to solve the same problem I do (automatically find interesting/relevant things on the net), but goes about it in a COMPLETELY different way: different in the “automatic finding” and also different in the “deriving interestingness”.
For automatic harvesting of possible material, Richardson looks to a number of resources (not only whatever is pointed at by his RSS feeds, but also whatever is pointed at by what his RSS feeds point at—basically, he wants to search not only his information sources, but also what his own information sources treat as sources themselves). He also harvests from technorati, some custom google queries, delicious (until Joshua got mad =) ), etc.
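The two-hop harvesting idea can be sketched in a few lines of Python. This is just my own toy reconstruction, not the Ultra Gleeper’s actual code; the `fetch` callback is a hypothetical stand-in for real feed/page scraping:

```python
def harvest(my_feeds, fetch):
    """fetch(url) -> iterable of outbound URLs (hypothetical helper;
    a real one would use a feed parser and an HTML link extractor)."""
    # first degree: everything my own feeds point at
    first_degree = set()
    for feed in my_feeds:
        first_degree.update(fetch(feed))
    # second degree: everything THOSE pages point at
    second_degree = set()
    for url in first_degree:
        second_degree.update(fetch(url))
    return first_degree, second_degree

# toy example with a hard-coded link graph standing in for the web
toy_links = {"myfeed": ["a", "b"], "a": ["c"], "b": ["c", "d"]}
first, second = harvest(["myfeed"], fetch=lambda u: toy_links.get(u, []))
# first == {"a", "b"}; second == {"c", "d"}
```

The second hop is where the candidate set balloons: each first-degree page contributes its own list of outbound links, which is exactly the recall/precision trade-off discussed below.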
Good stuff. This harvesting technique definitely broadens the search space. In natural language processing terms, I would say it increases “recall”: over the set of all possible interesting articles, looking at a larger set of articles on the whole increases our chances of finding interesting stuff that a more conservative algorithm wouldn’t find. However, if we take this approach then we’d need much stronger algorithms for guessing interestingness—otherwise precision will suffer. The RSS feeds that I already subscribe to are nearly guaranteed to point to “interesting” pages—otherwise I wouldn’t subscribe to them.
In other words, first-degree information sources (what I treat as a source) give high precision but low recall. Second-degree sources (what my sources treat as sources), by contrast, give much higher recall (on the order of n-squared candidates vs. n), but take a hit to precision.
With an order of magnitude (or more!) more information, we’d need big changes: changes in GUI, changes in expectations of the program, changes in ranking algorithm.
Richardson also addresses the last of these points: the change in ranking algorithm. To compute interestingness, he does something smart that’s almost like a “reverse pagerank”. He says “things are interesting if they point to interesting things” (contrast this with Google’s PageRank, which says “things are interesting if they are pointed to by interesting things”).
The good thing about this is that, once you have some initial seed data, it becomes a sort of passive goodness metric. It bootstraps off of the initial data you provide so that it can continue learning even after you stop providing it data. Unsupervised machine learning, in other words.
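Here’s a toy sketch of that “interesting if it points to interesting things” scoring, as I read it. The damping constant and the averaging update rule are my own assumptions for illustration, not Richardson’s actual algorithm:

```python
def reverse_rank(links, seed_scores, damping=0.5, iters=20):
    """Score pages by what they point TO (the reverse of PageRank,
    which scores pages by what points AT them).
    links: {page: [outbound pages]}
    seed_scores: hand-labeled interestingness for a few seed pages."""
    scores = {p: seed_scores.get(p, 0.0) for p in links}
    for _ in range(iters):
        new = {}
        for page, outs in links.items():
            # average interestingness of everything this page links to;
            # pages outside `links` fall back to their seed score
            pointed = [scores.get(q, seed_scores.get(q, 0.0)) for q in outs]
            propagated = sum(pointed) / len(pointed) if pointed else 0.0
            new[page] = seed_scores.get(page, 0.0) + damping * propagated
        scores = new
    return scores

# a page linking to a seeded-interesting page outranks one linking to junk
links = {"a": ["good"], "b": ["bad"]}
scores = reverse_rank(links, {"good": 1.0, "bad": 0.0})
```

Once seeded, the scores propagate without further labels, which is the passive, bootstrapped quality described above.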
Lots of things to think on.
Car Buying
07-Feb-07
Bought a car over the last two weekends, and it was surprisingly painless. Here’s the 4-step process:
- 2 weekends ago we set aside Sunday afternoon to see what make/model/color/options we wanted. We knew we weren’t going to buy anything that weekend, just see exactly what we wanted.
- Having decided on a Civic, we compiled a list of all the Honda dealers in our area, with their email addresses. Most dealerships have an “Internet Sales” department which is supposedly “no haggle”, but is really just “less haggle”. They won’t barter directly, but are happy to match/beat a competing offer. Emailed all of them with a very simple/direct message (here’s exactly what we want, what’s your final out-the-door price including all your damn bogus fees, and how soon will you have it in stock?). Emailed about 15 dealerships by simple copy-pasting. By Saturday night 4/5 of them had replied. The best price was offered by a dealership 1.5 hours away—a good price ($100 below invoice), but much too far away. Basically, just good for bartering.
- Saturday night took the best of the quotes and emailed everybody else back. “Dear XX dealer, Here’s the best price we were able to get, but of course we’d rather buy from YOU”. All of the nearby dealerships but two said “that’s impossibly low, we can’t do that”. We called the closest “yes” dealership, had him fax over an official offer, warned him we’d walk out if there was any bait-and-switch.
- Sunday afternoon drove to the dealership, brought the fax, they honored the price, we paid in cash, and walked out with a new car. Pretty painless.
Things I learned
- The best part about this process was that we were able to buy on our own terms, our own time frame, and without any manipulative sales tactics on the part of the dealerships. I was able to fit the emailing/waiting for replies from the dealerships into my normal Saturday schedule, so there was effectively no time wasted.
- It was really important to ask everyone for their post-tax, post-fees, out-the-door price. The “invoice price” that the dealerships quote is a highly variable thing (we found it ranged by $1000 depending on who we asked), and has nothing to do with what they actually paid Honda for the car in the first place. And almost all the other fees that the dealerships will tack on (“destination fee”, “advertisement fee”, etc.) are really just another word for “profit”. The only exceptions to this are the document and registration fees. By asking for the out-the-door price we were able to compare apples to apples.
Visualization of Academic Papers
01-Feb-07
Earlier this morning I found a wonderful bit of information visualization, graphing the patterns in academic publication over 3 centuries (!): Chris Harrison’s “Visualizing the Royal Society Archive” (found via Information Aesthetics).
Beautiful graphs.
I have a little criticism of his methodology (why not remove stop words? why graph on a 45 degree slant? why not log your unigrams instead of square-root?), but regardless this is both neat and beautiful.
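To make those quibbles concrete, here’s a toy comparison (made-up word counts, stub stop-word list) of square-root vs. log scaling of unigram counts:

```python
import math

STOP_WORDS = {"the", "of", "and", "a", "in"}  # tiny illustrative list

def scaled_counts(counts, scale):
    """Drop stop words, then compress the huge dynamic range of
    unigram counts with the given scaling function."""
    return {w: scale(c) for w, c in counts.items() if w not in STOP_WORDS}

# made-up counts: stop words dwarf content words by orders of magnitude
counts = {"the": 100000, "philosophical": 100, "transactions": 10}
by_sqrt = scaled_counts(counts, math.sqrt)   # sqrt(100)  = 10.0
by_log = scaled_counts(counts, math.log10)   # log10(100) = 2.0
```

Log scaling flattens the head of the frequency distribution much more aggressively than square root, which is why it tends to keep rare-but-meaningful words visible in a plot like this.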
There is a soft spot in my heart for information treated as art.
Yawn.
05-Jan-07
**jetlagged**.
phew
29-Dec-06
The last few weeks have been nonstop.
- First it was an excellent conference on computer aided language learning, hosted by the computational linguistics folks over at Ohio State University. Lots of interesting ideas gleaned and interesting people met. Perhaps more on this later.
- Then it was 2.5 days back in Los Angeles, enough time to adjust to Pacific Standard Time, run some errands, wrap up the semester of research, unpack my conference bags and pack my international bags…
- Now it’s 1 week into a 2 week trip in Taiwan. Playing translator/tour guide to my parents, and having a 1-year wedding anniversary dinner for all of the friends and family over here who couldn’t attend our original wedding ceremony and reception. Taking lots of pictures, maintaining a constant state of being totally stuffed with food, and having a great time in general. But it’s totally draining—so far I’ve had more than one night filled with dreams in which I’m constantly translating what is happening to the people around me, into either Chinese or English. And these kinds of dreams make me wake up even more tired!
- Still haven’t made my requisite “tea run” this trip—my stashes at home in Los Angeles are running low, and the selection here is wonderful, of course. I’m especially interested in stocking up this trip, because I’ve found my tastes have matured quite a lot lately…
- After I get back, it’s a weekend of getting over jetlag, then I’m up to Mountain View for “new employee training week” at Google.
Life is anything but dull, at least. Phew!
The Problem with Linguistics
10-Dec-06
via LanguageLog:
Linguistics will become a science when linguists begin standing on one another’s shoulders instead of on one another’s toes.
–Stephen R. Anderson, A-Morphous Morphology (Cambridge University Press, 1992)