computer – sardonick
http://motespace.com/blog
Disclaimer: The following web space does not contain my own opinions, merely linguistic representations thereof.
Fri, 14 Oct 2011 16:26:45 +0000

Visualizing Command Line History
http://motespace.com/blog/2011/03/13/visualizing-command-line-history/
Sun, 13 Mar 2011 07:12:34 +0000

So, after documenting how I save a timestamped log of my bash history, I got curious about what kinds of analysis I could pull out of it.

(Caveat: I only started this logging about a month ago, so there aren’t as many data points as I’d like. However, there are enough to see some interesting trends emerging.)

Day of Week

First, here is the spread of activity over day-of-week for my machine at home. I found this surprising! I’d expected my weekend hacking projects to show a significant weekend effect, but I hadn’t anticipated the Thursday slump. It’s interesting when data shows us things about ourselves that we didn’t realize. I have no idea what causes the Tuesday mini-spike.
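For anyone curious, a day-of-week tally like the one above can be pulled straight out of the log with standard tools. This is just a sketch, assuming the log format from my previous post (lines beginning with a YYYYMMDD.HHMM.SS timestamp) and GNU date for the weekday conversion:

```sh
# Count logged commands per day of week from ~/.shell.log.
# Assumes lines start with a YYYYMMDD.HHMM.SS timestamp (see previous post)
# and that GNU date is available for the weekday-name conversion.
cut -d' ' -f1 ~/.shell.log \
  | cut -d. -f1 \
  | while read -r day; do LC_ALL=C date -d "$day" +%a; done \
  | sort | uniq -c | sort -rn
```

Pasting the resulting counts into a spreadsheet for charting is trivial from there.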

Next, I have activity per hour-of-day, broken up by weekends-only and weekdays-only (because my behavior differs significantly between these two sets).

Weekends

Both charts clearly show my average sleeping times. Weekends show a bump of morning hacking and evening hacking, with less computer time than I’d have expected in the middle of the day.

Weekdays

I love the evening just-got-home-from-work-and-finished-with-dinner spike for the weekdays, followed by evidence of late-night hacking (probably too late for my own good).

Where to go from here

I wonder if the unexpected Tuesday and 6pm-weekday spikes are legitimate phenomena or artifacts of data sparsity. It will be interesting to check back in with this data in a few more months to see how it smooths out. (Ugh, daylight saving time is going to mess with this a bit =/ ).

Also, this only measures one aspect of my activity in a day–stuff typed at the command line, which is mostly programming-related. I would love to plot other information alongside it (emails sent, lines of code written, instant messages sent, songs played, GPS-based movement). I’m tracking much of this already. I’ll need a good way of visualizing all of these signals together, as the graph is going to get a bit crowded. Maybe I’ll pick up that Tufte book again…

(And, speaking of visualization, I think a heatmap of activity per hour of the week would be interesting as well… Google Spreadsheets doesn’t do those, though, so while I have the data I couldn’t whip one up easily tonight).
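In the meantime, the pivot table behind such a heatmap is easy to generate at the shell. A rough sketch, under the same assumptions as my logging post (YYYYMMDD.HHMM.SS-prefixed lines, GNU date available):

```sh
# Emit "count day hour" rows: commands per (day-of-week, hour) bucket.
# Not a heatmap itself, but ready to paste into any charting tool.
# Assumes log lines start with a YYYYMMDD.HHMM.SS timestamp.
while read -r stamp _rest; do
  day=$(LC_ALL=C date -d "${stamp%%.*}" +%a)               # e.g. "Sun"
  hour=$(printf '%s' "$stamp" | cut -d. -f2 | cut -c1-2)   # e.g. "18"
  printf '%s %s\n' "$day" "$hour"
done < ~/.shell.log | sort | uniq -c
```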

Lastly, what’s the purpose of all this? What do I want to accomplish with this analysis? They’re nice-looking graphs, for sure. And honestly, there is a bit of narcissistic pleasure in self-discovery. And I suppose it’s good to realize that things like the mid-week slump (exhaustion from work? external calendar factors?) are happening.

But I’m eventually hoping for something less passive than just observation. I look forward to using this data to change myself: setting goals (in bed by a certain hour, up by a certain hour, no coding on day x vs. more coding on day y) and letting the statistics show my progress toward them.

Saving Command Line History
http://motespace.com/blog/2011/03/12/saving-command-line-history/
Sun, 13 Mar 2011 00:19:04 +0000

I’ve never been satisfied with the defaults for the way Linux & OS X save command line history. For all practical purposes, when we’re talking about text files, we have infinite hard drive space. Why not save every command we ever type?

First, A Roundup of What’s Out There

Here’s the baseline of what I started with, in bash:

declare -x HISTFILESIZE=1000000000
declare -x HISTSIZE=1000000

But there are a few problems with this: bash and zsh sometimes corrupt their history files, and multiple terminals sometimes don’t interact properly. A few pages have suggested hacks to PROMPT_COMMAND to get terminals to play well together:

briancarper.net

  • relatedly, shopt -s histappend (for .bashrc)
  • export PROMPT_COMMAND="history -n; history -a" (upon every bash prompt, read in the latest history and write out the current session’s). While this works, it feels a bit hacky

tonyscelfo.com has a more formalized version of the above.

Further down the rabbit hole, this guy has a quite complicated script that outputs each session’s history to a uniquely ID’d .bash_history file. Good, but it only exports upon exit from a session, which I rarely do: for me, sessions either crash (which doesn’t trigger the write) or I never close them. Still, it’s an interesting idea.

(Aside: shell-shink was an interesting solution to this issue, though it had its own set of problems, chiefly privacy implications: in case I ever type passwords at the command prompt, I would really rather not have this stuff live on the web. Also, it’s now obsolete and taken down, so it’s not even an alternative anymore.) Links, for posterity:
[1] [2] [3]

Now, what I finally decided to use

Talking to some folks at work, I found this wonderful hack: modify $PROMPT_COMMAND to output to a history file manually… but also output a little context — the timestamp and current path, along with the command. Beautiful!

export PROMPT_COMMAND='if [ "$(id -u)" -ne 0 ]; then echo "`date` `pwd` `history 1`" >> ~/.shell.log; fi'

ZSH doesn’t have $PROMPT_COMMAND but it does have an equivalent.

For posterity, here’s what I ended up with:

  • zsh:

    function precmd() {
        if [ "$(id -u)" -ne 0 ]; then
            FULL_CMD_LOG=/export/hda3/home/mote/logs/zsh_history.log
            echo "`/bin/date +%Y%m%d.%H%M.%S` `pwd` `history -1`" >> ${FULL_CMD_LOG}
        fi
    }

  • bash:


    case "$TERM" in
    xterm*|rxvt*)
    DISP='echo -ne "\033]0;${USER}@${HOSTNAME}: ${PWD/$HOME/~}\007"'
    BASHLOG='/home/mote/logs/bash_history.log'
    SAVEBASH='if [ "$(id -u)" -ne 0 ]; then echo "`/home/mote/bin/ndate` `pwd` `history 1`" >> ${BASHLOG}; fi'
    PROMPT_COMMAND="${DISP};${SAVEBASH}"
    ;;
    *)
    ;;
    esac

This gets ya a wonderful logfile, full of context, with no risk of corruption:

20110306.1819.03 /home/mote/dev/load 515 ls
20110306.1819.09 /home/mote/dev/load 516 gvim run_all.sh
20110306.1819.32 /home/mote/dev/load 517 svn st
20110306.1819.35 /home/mote/dev/load 518 svn add log_screensaver.py
20110306.1819.49 /home/mote/dev/load 519 svn ci -m "script to log if screensaver is running"

(As an aside, you’ll notice that these commands are all timestamped. Imagine the wealth of personal infometrics data we can mine from here! When am I most productive, as measured by command density per time of day? What really are my working hours? When do I wake? Sleep? Lunch?)
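As a first stab at the sleep/wake question, the first and last logged command of each day already bound my waking hours. A sketch, assuming the YYYYMMDD.HHMM.SS log format above and that the log is appended in chronological order:

```sh
# Print "day first-command-time last-command-time" for each day in the log.
# Assumes YYYYMMDD.HHMM.SS-prefixed lines appended in chronological order.
awk -F'[. ]' '!($1 in first) { first[$1] = $2 }
              { last[$1] = $2 }
              END { for (d in first) print d, first[d], last[d] }' \
    ~/.shell.log | sort
```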

Next up, need to make a `history`-like command to tail more copy-pastable stuff out of this file.
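A minimal version of that might look like the function below. (histtail is just a name I made up, and the field split assumes the logged paths contain no spaces.)

```sh
# histtail [N]: show the last N logged commands (default 20) with the
# timestamp/path/history-number prefix stripped, so they're copy-pastable.
# Hypothetical helper; assumes the log's paths contain no spaces.
histtail() {
  tail -n "${1:-20}" ~/.shell.log | cut -d' ' -f4-
}
```

Then `histtail 5` prints just the last five commands, ready to paste.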

Consolidating Music Metadata
http://motespace.com/blog/2006/12/04/consolidating-music-metadata/
Mon, 04 Dec 2006 21:48:41 +0000

Finally finished a script a couple of weekends ago to synchronize data between Amarok, Rhythmbox, and iTunes. I now use Amarok exclusively, and it’d been bugging me for a long time that my old metadata from multiple machines and multiple apps was locked away and unexploitable. So I fixed that, for myself at least. I harvest everything into a common format and populate a big ol’ database with everything. Then I merge all the metadata together (averaging and adding, whatever, where necessary).

The code is ugly for now, so no public release. I might clean it up some time if anybody else wants it. Just ask.

Migrating From Quicken
http://motespace.com/blog/2006/07/08/migrating-from-quicken/
Sat, 08 Jul 2006 16:52:30 +0000

A short while back, my wife appropriated my iBook. She couldn’t resist its OSX-y goodness. Until now, I’d been tracking my finances in Quicken on the thing, and, as I have very little love for Quicken on the Mac, this gave me a good excuse to finally migrate away.

My primary machine is a Gentoo box, so that shaped my options somewhat.

First, I tried a few versions of Quicken (2003, 2005) under Wine. Neither worked; neither even installed. Googling showed mixed reports of successes and failures. Perhaps if I invested in CrossOver or Cedega I’d have better luck, but for now I’m a poor PhD student, so I’d rather go the Free-as-in-Beer route. Scratch the easy solution.

There seem to be two main free personal finance apps for Linux: KMyMoney (v0.8.4) and GnuCash (v1.8.11).

KMyMoney

  • First step was to import my years of Quicken data. Its QIF import was finicky: it took me quite a while manually regexp-tweaking the Quicken data file before I could get the data to import correctly. (This step alone told me it’s not ready for the everyday user.)
  • The UI is pretty friendly (lots of icons, non-imposing). I’ll need to try it out more before I can make a judgement on big-picture usability.
  • Handles OFX imports (through AQBanking) even better than Quicken does.
  • It’s missing graph-based reports. The text-based reports are nice, but they don’t compare.
  • The real game-ender, though, was its epilepsy-inducing flashing red text. (The first Google hit for “kmymoney flashing red” was this same question, greeted by an RTFM and the original poster deciding to try GnuCash.) Sigh. Maybe that’s what I’ll try next.

GnuCash

  • I imported my QIF data without problem (and without data tweaking; GnuCash +1).
  • The UI looks like it was made circa the late 1980s. It is ugly and imposing. The feeling I get is more “generic industrial-strength cleaning product” than “friendly personal finance management”.

I’ll update this entry as I hack around with both over the weekend. I’m not too impressed with the Free Software solutions for this. Maybe I’ll try Moneydance; I’ve heard good things about it.

update 20060709:

  • Entries in KMyMoney flash red because they are missing categories. While I agree with the spirit of this “feature” (I do eventually want to categorize everything), it doesn’t make the implementation any less epilepsy-inducing. I long for the day when I can’t say “Good idea, ugly UI. What else can you expect from Free Software?”.
  • I ended up using KMyMoney because it seems to be more actively developed than GnuCash. For Free Software, I consider forward momentum to be just as important as the current feature set, and it appears KMyMoney has won here. (GnuCash is still on GTK1? Maybe that bit about a “late 1980s UI” for GnuCash wasn’t as large an exaggeration as I’d thought! Phew!)
  • a few more reviews

update 20060710:

  • How timely: GnuCash released 2.0 just last night, and it appears to be using a modern version of GTK as well. So much for that last comment about “forward momentum”. I’ll give the new version a try.
  • Half an hour later: Errr, this isn’t that much better. Oh well…

update 20070131:

  • I’ve been using KMyMoney for half a year now, and for the most part it’s been good to me. OFX support is decent, and it hasn’t disappeared any of my data. Functionally, the only things it’s really lacking are decent graph visualization and budgeting. However, with the new year, I’ve found that the real deal-breaker for me is the lack of cross-platform support. KMyMoney is all right if it’s only me hacking on the finances, but if I want to get my wife involved then we’ll need something that works on Mac and/or Windows.
  • Enter Moneydance. It’s written in Java with cross-platformness in mind, which means it works equally well on Mac/Linux/Windows/Solaris/whatever.
  • It has the decent graphing visualization and budgets that KMyMoney lacks. Its UI is better than either KMyMoney’s or GnuCash’s… but the best part is that it has API hooks into python (!!!), which means (hopefully) I can automate lots of the drudgery of, say, categorizing stuff.
  • The downside to Moneydance is that it’s neither free nor Free, but: the python API allows me to get my data out if I need it (mitigating the data lock-in and the fact that it’s not libre Free)… and as far as gratis-free goes, $30 for a license that allows free upgrades for multiple years isn’t that bad.
Gentoo 2006.0
http://motespace.com/blog/2006/03/08/gentoo-20060/
Wed, 08 Mar 2006 22:02:42 +0000

Dusted off my old 5-year-old laptop, thinking it might make a good server: low power consumption, built-in UPS. The new Gentoo Live CD is very slick. Too bad it doesn’t quite work:

  • The LiveCD auto-starts a GNOME session. Too bad it doesn’t allow the screen resolution to be any higher than 640×480, which is unfortunately too small for the graphical auto-installer to be usable.
  • The command line auto-installer is brittle. After 6 failed tries with an error dump that closes too quickly to read, I’m going for the old-school command-line install. Oh well.
Information, Knowledge as Art
http://motespace.com/blog/2005/11/28/information-as-art/
Mon, 28 Nov 2005 22:02:27 +0000

Came across Newsmap this afternoon, a Google Maps mashup by Ben O’Neill that plots locations mentioned in BBC news on a map of the world.

The implementation was neat, but I can’t help but dream of what this could be like. Imagine a map of the world (perhaps OLED, mounted on your wall), with regional coloring based on the density of news events in each area. You’d need a few hacks to make things look nice: normalization against the baseline level of news per area (different parts of the world have different minimum levels of media coverage), and smoothing so that local news concentration influences regional news concentration. And a gradient would do a lot more for visualization than these discrete news-event bubbles (but I realize the Google Maps API limits you to location-bubble markers, and remixers are limited to the tools at hand).

I love to see non-art becoming art. To this day, one of my favorite random conversations was with Tom and danah at a conference a year or so ago, where we discussed a fellow information addict who had covered all the walls of his house with bookshelves full of books. Information, knowledge became art, and it evolved both organically and unobtrusively.

AI Scaremongering
http://motespace.com/blog/2005/11/16/ai-scaremongering/
Wed, 16 Nov 2005 21:44:03 +0000

This post on boingboing, “Google: our print scan program has no hidden AI agenda”, which points to this ZDNet story, cracks me up.

Talk of a “hidden AI agenda” feels like scaremongering: visions of some lumbering, Lovecraftian, inhuman artificial intelligence.

When questioned on whether a renaissance of the general paranoia about omnipotent and malign computers was underway now, Levick admitted that such concerns were more abundant, but insisted that Google’s core philosophy of “Don’t be evil” guides all its actions.

“I think that goes back to the concept that these technologies can actually be empowering and good for the world if the companies implementing them are good,” he said. “Could some of these technologies be used for bad purposes? Yes. But will they by us? No.”

Hehe. As someone who works with AI every day and knows the prenatal state of natural language processing and so-called “strong AI”, I can only laugh at public fears of “omnipotent and malign computers”.

Sigh.

Bibliographic Management
http://motespace.com/blog/2005/11/13/bibliographic-management/
Sun, 13 Nov 2005 07:13:17 +0000

Bibliography management linkdump:

  • BibDesk: an excellent BibTeX database management system. Beautiful, but Mac only.
  • JabRef: an open-source Java BibTeX database management system. Lacks BibDesk’s panache, but not bad.
  • bibtexml: an excellent tool. Takes .bib files, converts them to XML, and then uses DTDs or XSLTs to mark them down to HTML in APA, MLA, or whatever style. This is the type of thing XML was made for. Requires Sablotron or another XSLT engine to work.
  • CiteULike: folksonomy + bibliography. A del.icio.us clone built to manage academic-paper metadata. Good for storing data, finding new papers to read, and making what I’ve read public.
Painfully Learning Zope
http://motespace.com/blog/2005/10/25/zope/
Tue, 25 Oct 2005 23:06:40 +0000

My research demands that I write an interface for native speakers to annotate sound files of learner speech. Up until now, my poor annotators had been using an Excel sheet I generated via a python script, with one column that pointed to sound files on the web. It worked, and it had nifty features like Excel’s builtin autocomplete, but it was easy to run into versioning problems with the halfway-completed Excel sheets floating around.

Now, much of our project’s work is done in python, so the powers that be said “hey, write us a web app in python that does this job”. No prob; python has lots of web app frameworks (CherryPy, Twisted, Django, Snakelets, mod_python (and .psp pages)). And it was actually a Good Thing, because I’d always wanted to learn web app programming. (It’s embarrassing, actually. My ivory-tower programming experience has been a lot of work with statistics, machine learning, and natural language processing, but I’ve never done things like web programming or database programming; I’ve read php and mod_perl code, but reading is of course much different from writing.) So, mod_python and psp it was. They proved intuitive enough to get some working teach-myself-how-to-do-this code going in a couple of hours.

However, project requirements change. “We want you to do it in Zope or Plone” became the new order of the day. I’ve been wrestling with learning Zope/Plone for the past 4 days or so… The architecture has a lot of promise, but in many ways it’s frustratingly immature. It can make things look really slick… but the documentation is disappointingly unclear and convoluted. There are many links out there for learning this stuff, but very few good ones.

After much searching, Dev Shed ended up having a high concentration of good links. I wish I had found this howto, Creating Basic Zope Applications, in particular, 5 days ago.

Zope seems full of inconsistencies, and it’s not very pythonic. Take, for instance, the mishmash of “here”, “this”, and “self” used hodgepodge to fulfil the function of python’s “self”. What’s up with that?

things elecronic
http://motespace.com/blog/2005/10/06/things-elecronic/
Thu, 06 Oct 2005 21:51:24 +0000

I think part of the reason I enjoy programming so much is that there is no entropy in what I create. Information gets out of date, standards change, yes, but that’s just a synchronization problem, not a Universe problem.

Or is entropy a bad thing? It’s the enemy of order and structure, yes. But it does keep things clean, and as the old stuff wears out, it gives us an excuse and a drive to innovate the new.

Huh.
