It was about 75 degrees Fahrenheit in Pittsburgh today, which means it was time to go to Dave and Andy’s to get dinner.
Dave and Andy’s is a venerable ice cream store that sits in the neighborhood that also holds the campus of the University of Pittsburgh. They opened sometime in the mid-80s when I was an undergraduate at nearby Carnegie Mellon, and so it was a fixture of the hot days in the summer. We’d venture down when it cooled off a bit in the evening, after doing our summer work-study jobs, and get a (then) new-fangled waffle cone and ice cream with M&M’s, or Reese’s cups, smashed into it. They made heavy, creamy ice cream, often with fruit that they got on discount from the Strip District. It was great.
At some point after moving back to Pittsburgh we made an informal tradition of getting to Dave and Andy’s as early in the spring as possible. Eventually the rule became this: the first day it was warm enough to walk from CMU to Dave and Andy’s in short sleeves with no jacket, we would get Dave and Andy’s for dinner. And sometimes a hot dog at the O (RIP) for “dessert.”
We’ve kept this up at least since the early years of the 21st century. And so tonight we went again.
Then we got some Chinese savory grilled crepes (jian bing) for dessert.
The Konica Hexar is a fixed lens rangefinder style 35mm film camera that was sold new from around 1993 until roughly 1999 or 2000. It has a Leica-ish shell but with more automated internals, and the lens is glued in place. That lens, though, a 35mm F2, is exactly what you would use on a Leica M most of the time. Like the Leica the Hexar has a giant beautiful bright optical viewfinder on the left side of the camera. Unlike the Leica it has autofocus, auto-exposure, auto-wind, and you don’t have to take the fucking bottom of the camera off to load it with film.
I bought a Hexar at the peak of my pretentious “Tri-X 35mm black and white film shot at ISO 320 for better shadow detail” pseudo-artistic phase. It’s always been one of my favorite cameras.
One reason I bought the camera was a very expansive review that was written by one Richard Caruana and published at photo.net. That web site was one of the most active forums for photo nerds during the Internet 1.0 period. The review was so good that you could use it as a quick start manual for the camera when you bought it.
I had reason recently to try to find the old review, but I could not find it at photo.net anymore. Sad. Luckily archive.org had a snapshot of it. But that’s not the ideal way to read the page.
Happily, it turns out that I spent some time working in the same lab as Richard Caruana. So when I could not find the text on the live Internet anymore I asked him if I could put it here. What follows is the original review, reformatted to look more like my web site. All of the text is by Rich. All of the new typos are by me.
There are also some extra notes at the end of the review about what has happened to these cameras since the 1990s.
Review by Richard Caruana, 1996.
The Hexar is an excellent camera aimed at “real” photography. It’s one of the fastest operating cameras I’ve used, and thus can serve double duty as a point-and-shoot. But it’s not a point-and-shoot; if you want something completely automatic that will fit in your pocket, you’ll probably be happier with a Yashica T4, Nikon 35Ti, or Contax T2. What makes the Hexar stand out is its f2.0 lens, excellent viewfinder, smooth shutter release, almost spooky quietness, and operating modes designed to aid serious photography.
The Hexar is well thought out and well executed. To me, it feels almost like an automated Leica M6. I prefer it to the Contax G1 because the Hexar is faster and quieter with a better viewfinder. That the $500 Hexar compares this well to cameras costing much more is impressive. But the M6 and G1 have one big advantage – interchangeable lenses. If you find a fixed 35mm lens too limiting, don’t buy a Hexar, except as a second camera. At the price of a lens for these other cameras, though, it makes a great second camera. It’s so pleasant to use, you’ll end up taking more pictures with it than you think.
Before jumping into specifics, let me describe the three basic operating modes: P(rogram), A(perture preferred), and M(anual).
The Hexar’s P mode is like the program mode on most cameras except that the exposure settings are biased by the preferred aperture and minimum shutter speed you set. This biasing makes P mode more useful than the program mode on other cameras. Here’s how it works: If there is enough light for the camera to use the aperture you set at shutter speeds as fast as the user-set minimum, it uses the aperture you set and raises the shutter speed. It starts closing the aperture past your setting only after it hits the camera’s top shutter speed. If there is not enough light to use your set aperture at the user-set minimum shutter speed, it starts opening up the lens, keeping the shutter speed at the user-set minimum. I wish all cameras had a mode like this. The Hexar’s P mode lets you bias the settings, but is also fairly foolproof. I even prefer this to Nikon’s exposure shift because the Hexar lets me bias the settings before making an exposure reading.
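[Ed: to make the decision procedure concrete, here is a minimal sketch of the P mode logic as Rich describes it. This is my reconstruction for illustration, not Konica’s firmware; the exposure model and the clamp values are assumptions.]

```python
import math

# A sketch of Hexar P mode as described above: hold the preferred aperture and
# raise the shutter speed while there is room, pin the shutter at the top speed
# and close down only when it is too bright, and hold the user-set minimum
# speed and open up when it is too dark. Exposure model: EV = log2(N^2 / t).

TOP_SPEED = 1 / 250   # camera's fastest shutter speed, in seconds
MAX_APERTURE = 2.0    # lens wide open at f/2.0

def p_mode(ev, preferred_n, min_speed):
    """Return (f_number, shutter_seconds, underexposure_warning)."""
    t = preferred_n ** 2 / 2 ** ev   # speed needed at the preferred aperture
    if t < TOP_SPEED:
        # Too bright: only now close the aperture past the user's setting.
        return math.sqrt(2 ** ev * TOP_SPEED), TOP_SPEED, False
    if t <= min_speed:
        # Enough light: keep the preferred aperture, just raise the speed.
        return preferred_n, t, False
    # Too dark: keep the shutter at the user-set minimum and open up the lens.
    n = math.sqrt(2 ** ev * min_speed)
    if n < MAX_APERTURE:
        # Even wide open it will underexpose; warn, but take the picture anyway.
        return MAX_APERTURE, min_speed, True
    return n, min_speed, False

# Example: dim room (EV 6), preferred f/4, minimum hand-held speed 1/30.
print(p_mode(6, 4.0, 1 / 30))   # -> (2.0, 0.033..., True)
```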
Cleverly, in P mode the top-deck LCD displays the shutter speed if the camera can use the aperture you set, but displays the aperture when it can’t (the shutter speed is at the max or user-set min so you don’t need to see it). This way of keeping the photographer informed while minimizing display clutter works well. Note that in P mode the Hexar will not use a shutter speed slower than the user-set minimum, even when one is required for proper exposure; it still takes the picture. While this prevents blur, it can lead to underexposure. The camera warns you of this by flashing the underexposure warning light in the finder and the LCD display on the top deck. You can set the minimum shutter speed as slow as 1/8, and there’s little reason to use P mode when not using the camera hand-held, so this isn’t a problem. The other exposure modes give you back full control when you want it.
In A mode, you set the aperture and the camera sets the shutter speed. The camera flashes a warning light in the viewfinder if the shutter speed falls below the user-set minimum or rises above the max, but the camera takes the picture anyway. In A mode the Hexar assumes you know what you are doing – unlike P mode, it will use a shutter speed as long as 30 seconds if necessary. Comment: With other cameras I usually use aperture preferred or manual exposure; I rarely use program modes. The Hexar’s P mode, however, is such an excellent marriage of program and aperture preferred automation that I use P mode on the Hexar more often than A mode. In both P and A modes, exposure locks when you partially depress the shutter release.
In M mode, you set the aperture and shutter speed. Red plus and minus signs in the viewfinder act as a match needle; when both light up you’re within a third of a stop. In P and A modes, the camera uses centerweighted metering. In M the meter switches to spot metering. Personally, I like this design, but you do need to keep it in mind when metering manually. Shutter speeds are set via up/down buttons. I’m not in love with buttons in general, but the Hexar’s up/down buttons are well located and easy to use. They also allow you to set shutter speeds in 1/3 stops over a range from 30 seconds to 1/250. Note that most point-and-shoots, including the T4/35Ti/T2, do not have manual exposure. I usually find manual exposure is the easiest way to handle tricky lighting. And sometimes there just isn’t any other way. This is an important advantage of the Hexar over its competition.
The Hexar allows the user to set the minimum shutter to be used for hand-held photography in P mode. The camera comes preset to a minimum speed of 1/30. Any speed between 1/8 and 1/60 can be set. The Hexar will not allow the shutter speed to fall below this in P mode. In A mode it will set the shutter speed as slow as 30 seconds if required, but the minus sign will flash in the viewfinder to warn you when the speed falls below your set minimum.
The accessory flash is small, lightweight, and moderately powerful. The guide number is 43 at ISO 100, which translates to 21 feet at f/2.0 (or 43 feet with ISO 400 film). It is up to you to attach the flash and turn it on or off; the camera does not make this decision for you.
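[Ed: the range figures are just guide number arithmetic: range = guide number ÷ f-number, and the guide number scales with the square root of the ISO ratio. A quick check of the numbers above:]

```python
import math

# Flash range = guide number / f-number; GN scales with sqrt(ISO ratio).
gn_100 = 43        # guide number in feet at ISO 100
f_number = 2.0

print(gn_100 / f_number)                          # 21.5 ft at ISO 100
print(gn_100 * math.sqrt(400 / 100) / f_number)   # 43.0 ft at ISO 400
```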
In P mode the Hexar uses flashmatic, like almost all point-and-shoot cameras. The flash fires at full power and the camera sets the aperture based on the focussed distance. The shutter speed is automatically adjusted to balance the background exposure with the flash exposure, but never goes below the user-set minimum speed to minimize ghosting. This system works well. It provides maximum depth-of-field up close where it is needed most, but allows the camera to use maximum aperture to achieve maximum flash range. Like Nikon’s 3-D flash system, it is also more likely to give correct flash exposure with off-center subjects than the TTL flash metering in most SLRs. However, because each shot fully discharges the flash, you have to wait 5-10 seconds for the flash to recycle and batteries are consumed faster.
In A mode, you can keep the flash on full power, in which case it is up to you to calculate and set the correct aperture. Alternatively, you can set the flash to its single auto-aperture mode, which uses an in-flash sensor and auto-thyristor to quench the flash when there is enough exposure. The auto-aperture is f/4.0 for ISO 100, f/8.0 for ISO 400, etc. (It’s still up to you to set the camera’s aperture. There is little integration between the flash and camera.) In A mode the shutter speed will be whatever is needed to balance background exposure at that aperture; this can easily be longer than you want to hand hold. For this reason I usually use M mode when using the flash on auto aperture. Konica foresaw this, and added a twist to flash in A mode to make it more useful: they moved flash synchronization to the rear curtain. If the exposure is long enough to bring the background exposure into balance, moving subjects may be blurred; rear curtain synch means that the blur is “behind” the sharp image captured by the flash, which fires just before the shutter closes.
There is no red-eye reduction mode; the flash is far enough away from the lens that it isn’t needed. Also, unlike the T4/35Ti/T2, the Hexar can be attached to standard on and off-camera flashes. There’s no PC socket, however, so you have to use a flash with a hot foot, or use a hot shoe-to-PC adapter. Unlike the Leica M6 or Contax G1, the Hexar’s leaf shutter can flash synch at any speed up to 1/250, making daylight fill-flash easier.
The Hexar focusses to 2 feet using active multibeam autofocus. Unlike other multibeam systems, the goal of the multiple beams is not to provide wide area focus, but to provide very accurate single spot focus. As with other AF cameras, focus locks when you partially depress the shutter release. Distance is indicated by a scale in the viewfinder (more on this later), by a scale on the lens that rotates as the lens focusses, and can be displayed digitally on the LCD on the top deck if you switch to manual focus. The viewfinder scale is sufficient for most purposes; I rarely look at the lens scale or LCD.
Under testing with resolution targets, the Hexar’s autofocus proved highly accurate. I may have observed a small bias towards focussing closer than the target, but I’m not sure. If there is a focus shift, it is less than 1/2 the depth-of-field at f/2.0 at both 3 feet and 10 feet. (I used three targets at different distances and autofocussed on the middle one. The middle target was always sharpest, but the closer target appeared sharper than the distant one. The distance between targets was less than half the DOF at f/2.0, so the effects I’m describing are small. Moreover, I’m not sure my setup is accurate enough for me to be confident of the findings.) In real photography, I do not observe any focus shift problems. In fact, I suspect the Hexar focusses more reliably than I do manually with an SLR.
The Hexar’s active focus seems to conk out somewhere around 20-30 feet, depending on the subject and ambient illumination. When focussing on progressively further targets, my camera jumps from a reported focus distance of 7 meters to 20 meters. At f/2.0, the DOF for these two distances barely overlaps, and the DOF for 20 meters barely includes infinity (i.e., the hyperfocal distance at f/2.0 is about 20-25 meters). This suggests that you may have trouble at f/2.0 with subjects at 10-12 meters and infinity.
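[Ed: the hyperfocal figure is consistent with the standard approximation H ≈ f²/(N·c). A quick check, using the usual circle-of-confusion values for 35mm film:]

```python
# Hyperfocal distance H = f^2 / (N * c), ignoring the small "+ f" term.
def hyperfocal_m(focal_mm, f_number, coc_mm):
    return focal_mm ** 2 / (f_number * coc_mm) / 1000.0

for coc in (0.025, 0.030):   # common circle-of-confusion choices for 35mm film
    print(f"c = {coc} mm -> H = {hyperfocal_m(35, 2.0, coc):.1f} m")
# c = 0.025 mm -> H = 24.5 m
# c = 0.030 mm -> H = 20.4 m   (hence "about 20-25 meters")
```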
Actually, things aren’t as bad as a simple DOF calculation suggests because at f/2.0 lens quality limits on-film sharpness more than the potential focus discrepancy – the focus accuracy necessary to achieve near-optimal on-film resolution drops when lens quality becomes more of a limiting factor than the DOF. But the Hexar is close to the margin for objects at 10-12 meters and at infinity when used wide open. I feel more secure closing down to f/2.8 if possible for objects near these distances. Most of the performance improvement you’ll get stopping down from f/2.0 for subjects at these distances will be due to improved lens performance, though, and not to increased DOF. (You might expect the T4/35Ti/T2 which have smaller maximum apertures to have less difficulty here. Because they are smaller and use smaller batteries, however, they have shorter baselines and may project dimmer focus beams to conserve power. Thus it’s not clear that the T4/35Ti/T2 do better with distant subjects at f/3.5/2.8/2.8 than the Hexar does at f/2.0. Plus the Hexar focusses more accurately than it needs to when closed down to f/2.8.)
You can manually set focus in finer increments for distant subjects. At the far end of the scale, the available manual settings are 5m, 7m, 10m, 20m, 40m, and 999m (infinity). The DOF ranges at f/2.0 for these distances overlap considerably. When using the lens at f/2.0, you might get slightly better performance by focussing manually for subjects beyond 7 meters. I must admit, I’ve yet to bother doing this myself, in part because I’m rarely at f/2.0 for subjects this far away, and also because the difference in performance would be so small. The Hexar lets you switch to infinity focus by pressing a single button, so I usually do this for distant subjects.
So what about manual focus? In autofocus, the camera focusses when you partially depress the release. If you press the MF button while holding the release partway down, the Hexar switches to manual focus with the distance set to the autofocussed distance. The distance is displayed digitally (in meters) on the LCD display and also on the lens barrel, and is adjusted via the Up/Down buttons. To switch back to autofocus, hold the MF button down for a second; the LCD displays “AF” and the lens rotates back to its AF position. If you press the MF button without holding the shutter release part way down, the camera goes to infinity focus.
Manually focussing the Hexar is not nearly as fast, convenient, or pleasant as with a Leica M6, or perhaps even a Contax G1. First, adjusting focus with up/down buttons feels awkward to me. Second, there is no confirmation when the distance you set manually matches what the autofocus system detects. That said, however, I don’t find the Hexar’s manual focus to be a drawback, largely because I rarely have to use it. The Hexar’s multiple beam autofocus is reliable, both indoors and out, and with a variety of subjects (active focus does not depend on subject contrast). The Hexar even seems to focus through most glass, including smoked glass. Add to this reliability the ease with which you can get to infinity focus, or freeze focus at the currently sensed distance, and you quickly find that 99.9% of the time what you want to do with focus is fast and easy.
The viewfinder is excellent. It is large, bright, distortion free, and I don’t have to take off my glasses to use it. Basically, it’s the next best thing to the finder on a Leica M6. It’s much nicer than the finders on the T4/35Ti/T2, or even on the Contax G1 (which I find too small and too dim). The lens does not block the finder image, even with the lens hood extended.
The Hexar’s frame lines move to compensate for parallax and image scale. The framelines are moved by motor instead of using a projected LCD display. At first I thought this old tech solution would be inferior. Now I don’t. Believe it or not, the nicest thing about motorized framelines is that you can see them move! LCD framelines pop instantly into place. Seeing the framelines move makes for much surer focus confirmation and composition. You learn very quickly to judge how far away the camera has focussed by how far the framelines move. In the rare cases where focus is not what you wanted, you know immediately that it is wrong. You don’t have to look at a scale, you don’t have to see if a focus indicator light is blinking, you just know. It’s remarkably intuitive.
The top frameline slides along a distance scale so you can read the approximate focus distance from the frame’s position on the scale. You’ll rarely look at the scale. After a few rolls of film you just know about how far away the camera has focussed by seeing how far the framelines moved. It doesn’t take any conscious effort. You’ll just know. I really like this. Projected LCD frameline systems would do well to emulate this by scanning their framelines to the right place instead of jumping there right away.
One other thing the framelines do is go back to their infinity setting as soon as the exposure is done. This might not seem useful, but the camera is so quiet in stealth mode that immediate visual confirmation that the picture has been taken can actually be useful. (In manual focus the framelines always stay at the position determined by the distance you set.)
The framelines on my Hexar are very accurate. The lack of accurate framelines is one of the reasons I decided against the T4/35Ti/T2 – I do a lot of work in medium format and want to make maximum use of the smaller negative when shooting 35mm. I also want accurate framing when shooting slides.
The viewfinder manages to convey a lot of info with a few unobtrusive indicators. In P and A modes, the match needle indicators are used to warn against over and underexposure, and if the shutter speed falls below the user-set minimum. Focus lock is indicated by a separate LED, and the approximate focus distance can be read from the frameline’s position on a distance scale. The LEDs are usually visible, but not so bright as to be distracting.
Ok, so the exposure modes are well thought out, the focus system is accurate, and the viewfinder is good. This is all for naught if the lens isn’t good. Guess what? The lens is good. Very good. I’ve compared it with several prime 35mm lenses, both using resolution targets and by looking at pictures. The Hexar’s lens is as good as or better than anything else I’ve used. It is good at f/2.0, better at f/2.8, and outstanding at f/4.0 and beyond. Not only is sharpness high, but contrast, color fidelity, and evenness of illumination are excellent, too. I really like the pictures this lens takes. The smooth, predictable shutter release and vibration-free shutter help you get the most out of the lens when using the Hexar handheld.
Unlike the T4/35Ti/T2, the lens is threaded for filters (46mm). The threads do not rotate as the camera focusses so using a polarizing filter is easier. Keep in mind, though, that using polarizers on a non-SLR is tough because you aren’t viewing through the lens and can’t easily judge the effect. When the lens focusses, it moves inside a solid, fixed barrel. Because of this, the lens and focus mechanism are protected from abuse, even when in use. There is a traditional focus scale on the lens visible through a clear window similar to the windows found on many SLR autofocus lenses. There aren’t any controls on the lens, however; focus and aperture are both controlled from the top deck.
The lens has an abbreviated depth-of-field scale with marks for f/8.0 and f/16.0. I’d prefer a more complete scale. It also has an infrared focussing mark, though this is unnecessary with the Rhodium and Classic models, which can be programmed to AF with infrared film. The retractable lens hood operates smoothly and extends far enough to do a reasonable job of minimizing flare.
The Hexar’s autofocus system seems to be about as fast as other autofocus systems. But there is a big difference between the Hexar and most point-and-shoots. The Hexar moves the lens when it measures distance, i.e., when you partially press the release. When you press the release the rest of the way to take the picture, the lens is already in place and the aperture is already stopped down, so there is no noticeable delay. Many point-and-shoots do not focus the lens until you take the picture, introducing considerable delay between when you press the release and when the picture is taken. To me, this delay is too long for pictures of people or animals – expressions can change a lot in a half of a second. Although I haven’t tried measuring it, the Hexar’s shutter lag feels as short as any other camera I’ve used, including the Leica M6. In other words, shutter lag isn’t a problem if you pre-focus. If you don’t pre-acquire focus, but just fire the release all the way in one shot, lag seems about average.
The ergonomics are great. The rubber coating looks good and feels wonderful. When I hand the Hexar to experienced photographers, I often get the same sequence of reactions. First they comment on how nice it feels in their hands. Then they look through the viewfinder and comment on how nice the finder is. Then they fire a few shots and comment on how quiet it is and how good the shutter release feels. Then they ask how good the pictures are.
Which brings us to the shutter release. The Hexar’s shutter release is very good. I might even prefer it to the release on the M6. The Hexar release is in exactly the right place for my hand, and I have no difficulty depressing the release half way for focus/exposure lock, and then smoothly and predictably pressing it the rest of the way to take the picture. The nicest thing about it is that it feels like I can fire the shutter without shaking the camera at all. The shutter release and controls, however, are awkward with thick gloves. The knurled aperture wheel around the shutter release is large, easy to read, easy to turn with your forefinger (but not so easy that you’ll do it accidentally), and has half-stop indents that make it easy to adjust without removing the camera from eye level.
The Hexar is quiet. Incredibly quiet. People often comment on how quiet it is – and that’s when I use it in the normal “noisy” mode! When turned to the quiet mode, it is almost inaudible, even to the photographer. My wife and I do a lot of photography, yet we often can’t tell when the other has taken our picture with the Hexar. In manual focus the camera is even quieter.
In “stealth” mode, film advance and rewind are also extra quiet. One nice feature is that if the camera starts to rewind the film you can turn the camera off and rewind will stop. Rewind continues when you turn the camera back on. If you turn the camera back on in quiet mode, rewind continues quietly. Very nice! Note: as the instructions explain, the camera can have trouble rewinding some thick films at slow speed. For example, my camera sometimes has trouble with 36 exposure rolls of P3200 (TMZ). When this happens, rewind stops and the LCD flashes instead of continuing to count down to zero. The LCD display continues to flash even when you turn the camera off to remind you there’s unfinished business – wouldn’t want to open the back prematurely! Restarting the camera in normal mode finishes the rewind.
The LCD frame counter counts backwards as the film rewinds. When it hits zero, the camera pauses for a second and blinks the LCD display several times — if you open the camera back then the film leader is left out. If you don’t, the camera pulls the leader in a moment later. There is no special mode to remember to set or unset.
Unlike the T4/35Ti/T2, the Hexar lets you override the DX film speed and set the ISO manually. This lets you shoot P3200 (TMZ), which is DXed for 3200, at something more realistic like 1250. I find being able to adjust film speed a much more convenient way of compensating for how I shoot B&W film and some slide films than using exposure compensation. The Hexar also has exposure compensation (±2 stops in 1/3 stop increments), but I use this to handle tricky lighting, not to correct an entire roll’s film speed. That’s just as well – the Hexar’s exposure compensation resets when you turn the camera off.
A nice touch is that the Hexar remembers the last film speed you set manually and automatically uses that when you put in a non-DX roll of film. This is helpful if you often load your own film into non-DX cartridges. When you put in a roll of DX film, the Hexar uses and displays the DX speed. If you always use the same film and always shoot it at a speed different from the DX speed, you have to manually reset the speed every time you reload. Annoying, but it prevents you from accidentally shooting rolls of DX film at an ISO you manually set for some other roll.
Pop-Photo reported that the Hexar they tested underexposed 2/3 of a stop. My Hexar is within 1/3 of a stop of the other meters I use. I can’t assess how much variation there is model to model, but my Hexar is accurate. In any case, manual ISO setting allows one to bias exposure the way one would with most SLRs.
The Hexar uses center weighted metering for P and A, and spot metering for M. I like this, but not everyone will. A separate averaging/spot switch would give more control, but would also slow you down and maybe increase the number of mistakes. I can live with either approach.
In real picture taking situations, the Hexar’s metering appears to be consistent and reliable. It does not have multi-pattern metering, though, so it is up to you to recognize and compensate for situations that will fool it. Personally, I prefer this, not because I don’t believe multi-pattern meters are accurate in more situations, but because I don’t know how to predict when a multi-pattern meter will not be accurate, or how to compensate it when it isn’t. Center weighted metering is simple enough that I know when it will work and what to do in those situations when it won’t. And, because I understand it, I find it easier to modify a centerweighted reading to achieve a special exposure effect.
A series of pictures taken by varying the aperture and shutter speed to provide constant exposure indicate that both the aperture and shutter are accurate – I saw no difference in exposure between the frames when comparing them side-by-side.
The Hexar Is Not A Point-And-Shoot
In Program mode the Hexar acts almost like a point-and-shoot, but not quite: it is up to you to attach the flash and decide whether or not to use it, exposure is affected by the aperture you set and the minimum shutter speed you allow, and the autofocus system is designed more for precise control than to be foolproof (autofocus on some point-and-shoots is made more foolproof, but also less controllable, by using multiple target areas and focussing on whatever is closest, which is often, but not always, the right thing to do).
You have to remember to take the lens cap off, and then not lose it. You can take a whole roll of pictures with the lens cap on and not know it – everything still works.
The camera is solidly built, but it’s not protected by a clamshell or porthole cover. I’m not sure it would survive as much abuse as a closed T4/35Ti/T2. I don’t think I want to fall on my Hexar. On the other hand, the lens on an open T4/35Ti/T2 is much more fragile than the lens on the Hexar.
Top Shutter Speed is 1/250
The top shutter speed is 1/250. Actually, the 35Ti and T2 apparently go to speeds higher than this only in special situations. I suspect all three manufacturers had difficulty making small, quiet, low power leaf shutters that are fast enough when the lens is wide open – even the leaf shutters on most pro cameras top out at 1/500. Maybe the 35Ti and T2 achieve their top speeds only when the lens is closed down enough that they don’t have to fully open the shutter? Anyone know? [Ed: it is certainly tougher to build a leaf shutter for an f/2 lens than for an f/3.5 lens since the area to cover/uncover is much larger.]
Anyway, the Hexar’s top speed of 1/250 is a problem, not because it is not fast enough to freeze action, but because it limits your choice of aperture when using fast film outdoors. The M6 and G1, which both have focal plane shutters, don’t have this problem. I considered carrying a 3 or 4 stop neutral density filter just for those times when I’m stuck with fast film in sunlight, but I found a better solution: the Hexar lets you leave the leader out when rewinding. The winding mechanism is repeatable enough that you can reload a roll and leave only one frame blank. (Actually, loading is repeatable enough to leave no blank frame if you are careful to load the leader slightly further when reloading – then you only lose part of a frame, which works fine with 8x10 contact sheets of strips of six frames.) Because the Hexar has a “manual” lens cap and manual exposure, advancing past used frames is not the problem it would be with most point-and-shoots.
So I carry different films for indoors and outdoors and just switch rolls when necessary. Actually in some ways it is nice to be forced to use the right film for each situation. One trick I’ve found is to place a roll of film into the camera, but to not turn the camera on unless I want to take a picture. The Hexar does not advance the film to the first frame until you turn the camera on. This lets me put in a roll of film so that the camera is ready, but switch the roll to something else without rewinding if I haven’t taken a picture yet. BTW, mid-roll rewind requires a pen or similar instrument to press the small, recessed rewind button. After getting caught a few times without a pen, I bought one of those “space pens” to keep in the camera case. It’s small, and writes on film cartridges well enough to let me note the last exposed frame on the roll.
No Cable Release
The Hexar doesn’t have provision for a cable or electric release. Yes, the self timer helps fill the void, but sometimes nothing but a cable release will do. Given how well they did everything else, I don’t know how Konica left this out.
[Ed: you can build your own cable release for the Hexar.]
No Case
The accessory flash comes with a small case, but the camera does not. There is an optional leather case for the camera that costs about $50. The optional case is soft, high quality leather, and appears to be well made, but has openings on each side for a camera strap. If you don’t use a camera strap, the openings are so large that they let dirt in and won’t adequately protect the camera near the strap lugs.
Battery Dependence
Like most modern cameras, the Hexar is useless without a battery, so carry a spare. Fortunately, the Hexar is rated for more than 200 rolls of 24 exposures, turns itself off if accidentally left on, and doesn’t use the camera battery to power the flash.
Small Buttons
The buttons used to switch the camera to manual focus mode, to manual ISO mode, to exposure compensation mode, and to self timer mode are small and hard to press. Note that these are not the buttons used to turn the camera on and off or to select P, A, or M mode. They are also not the up/down buttons used to set the shutter speed, exposure compensation, and distance for manual focus. The on/off/exposure mode switch and up/down buttons are very nice and easy to use. It’s the small buttons that you use less frequently that are the problem.
No Continuous Firing/Focussing Mode
The Contax G1 lets you take multiple pictures after you lock focus and exposure by lifting the shutter release only half way between exposures in single-shot mode, or by holding the release down in continuous mode. With the Hexar, you must lift the release all the way before the film advances, so focus and exposure lock are lost and must be reacquired. Switching to manual focus or manual exposure solves this problem for those situations where repeatedly reacquiring focus or exposure would be awkward, but the G1’s solution is sometimes more convenient.
Viewfinder (the downside)
I really like the viewfinder and moving framelines, but sometimes the framelines are not easy to see. This isn’t a big problem, but it could be better. The aperture and shutter speed are not displayed in the viewfinder. Nor is exposure compensation or flash information. I can’t decide if this is a bug or a feature. It keeps the finder uncluttered, but you have to look at the LCD display on the top deck to check things. When taking the first picture in a new setting, I often end up removing the camera from eye level to see what’s up. I don’t like having to do that. Yet, for subsequent pictures in the same setting, I really like the fact that the finder is “quiet” and lets me concentrate on composition and timing. Hard call.
Instruction Manual
The instruction manual is poor. All the info seems to be there, but it isn’t easy to follow. Fortunately, the Hexar has few modes, and the control sequences are pretty logical once you understand the philosophy behind the camera. This is one of those cameras where the more you understand about photography, the more you understand why the camera works the way it does. There’s a thin plastic wallet reference card that summarizes all the control modes. In contrast to the manual, this card is very well done, and actually manages to explain almost everything. I don’t carry this card with me because the camera makes so much sense that it was easy to learn how to do everything the first night.
Size and Weight
The Hexar is significantly bigger and heavier than T4/35Ti/T2 class cameras. It’s not big, but it’s not small, either. Although it fits in a coat pocket, it’s not really pocketable. The Hexar is similar in size to a Contax G1 with a 45mm lens – the G1 with lens is only slightly bigger and thicker. The M6 is only a little larger than the G1, but is heavier.
To me, the real competition for the Hexar is the Leica M6 and Contax G1, not the T4/35Ti/T2. The fact that I even mention the Hexar in the same class as the M6 and G1 is testament to how well done it is. Add to this the fact that it costs about $500, and in some ways is more pleasant to use than the M6 and G1, and the Hexar starts to look very attractive. The main loss is the lack of interchangeable lenses. This is a big loss; if you can’t live with the 35mm focal length, this isn’t the camera for you. (But at a price less than or equal to a 35mm lens for an M6 or G1, it’s one hell of an extra camera!)
Why did I buy the Konica Hexar instead of the T4/35Ti/T2, Contax G1, or Leica M6? Here’s my reasoning. Keep in mind that I already own several cameras – what I was looking for was something small enough to carry around most of the time, yet good enough to let me do some “serious” photography when the opportunity arises.
I’ve never used a Yashica T4/T4 Super. I hear they’re great. I just knew that being stuck in program mode all the time was going to be too limiting for me for anything other than snapshots.
Deciding between the Hexar and the 35Ti/T2 was difficult. I picked the Hexar over the 35Ti for the following reasons (presented in order of their importance to me):
More accurate framing and better viewfinder; the 35Ti only has a single close-up frame and its viewfinder is too busy.
Manual exposure; the 35Ti does have exposure shift, though.
Manual ISO – I don’t like having to use exposure compensation to adjust film speed, e.g., once you use Nikon’s ±2 stop exposure compensation to bring P3200 down to ISO 1000, there’s little room left to compensate further, and you can’t switch to manual exposure to solve the problem.
f/2.0 vs. f/2.8 – an extra stop of light makes a big difference if you are trying to take pictures in available light.
The Hexar works with any flash; the 35Ti is limited to its in-body flash which to me is only really useful for fill.
But the 35Ti has a few pluses of its own.
I almost bought the 35Ti, mostly because it is truly pocketable. In the end, however, I decided that it would frustrate me too often. Under its beautiful clothes, the 35Ti is still a point-and-shoot. I have nothing against point-and-shoots; they’re great for snapshots. Unfortunately, they’re not usually great for much more than snapshots. But if I were going to buy a point-and-shoot, I’d probably buy the Nikon. If it had manual ISO, manual exposure, and more accurate framing, I’d buy one tonight!
Deciding between the Hexar and G1 was also difficult. The G1 is almost as light and compact as the Hexar, the G1’s automatic and manual modes are well done, the lenses are excellent, and the lenses are interchangeable. I finally selected the Hexar because it was smaller, lighter, and faster to operate, and because I didn’t like the G1’s small, dim viewfinder. (I could also buy four Hexars for the price of a G1 and lens.)
The Leica M6 is a great camera with great lenses. For me, though, it’s a little too big and too heavy to carry around all the time, even with a collapsible lens. It also lacks autofocus and auto-exposure. Although I often prefer manual exposure and manual focus, automation is nice sometimes, particularly autofocus. If I had both an M6 and a Hexar, I’d use both. The Hexar, however, is probably what I’d carry around in my daypack, take to the office, and use most around the house.
(2024 Note: None of this is true anymore).
There are three Hexar models currently available: the original, the Rhodium, and the Classic. I bought the original. For $30 you can have the original upgraded to all the features of the Rhodium. This adds infrared autofocus, manual GN entry for flashmatic with flashes other than the Hexar flash, one touch exposure correction (not sure what that is), and multiple exposures. The Classic has all the features of the Rhodium, plus auto-bracketing. I’d like to have bought the Classic, but it just didn’t seem worth the extra money to get auto-bracketing. I’ll “upgrade” my Hexar to the Rhodium specs as soon as I’m willing to part with it for a few weeks.
Why is it so hard to find a Hexar in a store? I asked several pro shops. Each said the same thing: the Hexar’s niche is too small; they don’t expect to sell enough of them to make it worth keeping on the shelf. Someone wanting the ultimate luxo point-and-shoot is going to buy a 35Ti or T2, someone wanting the ultimate rangefinder is going to buy a Leica M6 or Contax G1, and almost everyone else is going to buy a much more compact, and more automatic, point-and-shoot. Some of the camera stores that used to carry the Hexar stopped carrying it when the 35Ti and G1 were introduced.
Many people at the stores have never actually seen a Hexar. This is unfortunate because I think the niche for this camera would be larger if stores carried it and the people behind the counter promoted it properly. The folks at Konica obviously put a lot of thought into the Hexar. (I suspect designing the Hexar was a “reward” to the designers who stayed with the company when it stopped making non-point-and-shoot cameras.) It’s a shame their labor has not been rewarded with more market share – they did such a nice job! I enjoy using the Hexar. Every now and then when I’m stuffing it into a coat pocket or into my backpack, I wish it were smaller. But when it’s in my hand, I’m very satisfied. I’m thinking of buying a second one for my wife; she likes the Hexar more than I do.
The Hexar was sold sporadically through the 1990s. I don’t know exactly when they stopped making them, although production had definitely ended by the time Konica and Minolta merged in 2003.
In 1999 Konica released another relatively modern rangefinder camera that was manual focus and used the Leica M lens mount. This they called the Hexar RF. Most of the Internet chatter about this camera was dedicated to anxiety about whether it was really compatible with the Leica lenses (hundreds of thousands of words were spilled on the subject of flange to film plane distance). But, the Hexar RF is also a great camera, with many of the same qualities that the Hexar has. I (psu) carried one around for a while in my “would like to use a rangefinder, but am too cheap to buy a Leica” phase and I really enjoyed it. But, I never managed to take as many good pictures with fancier rangefinder cameras as I did with my SLRs and my Hexar. So I eventually sold it.
Of course these days I use neither my old film SLRs nor the Hexar. But, every time nostalgia overtakes me and I want to see if any of the old cameras that I have still work, the one I reach for is the Hexar.
I just did this for the first time in 10 years or so a couple of weeks back. Which is why this page now exists. Here’s the camera, still working and ready to go with some Tri-X in it.
Twenty years ago, in January of 2004, I first set up a web site that you could call a “blog”, or “weblog” as I liked to say back then. It’s here.
The first post there is dated January 18, 2004, but it’s just a link to another thing I had put on my personal web site in November 2003. That piece was an article about digital cameras and is recognizably in the overall style that I’d end up adopting for all of my bloggy stuff. I hesitate to use as pretentious and self-important a term as “style” or “voice” for the rudimentary writing that I do here. “Persona” might be more accurate although over the years even that bitter little man has toned down a bit (I hope).
Anyway, if I had been paying more attention I’d have done the “Hey I’ve been posting dumb things on the Internet for 20 years” post in November of 2023 instead of January of 2024. But I must have been busy with something else that day, so this page will be the 20 year marker instead.
The “Mixed Logs” site lasted about five months, after which I noticed that my buddy Pete had started his own site and then we jointly decided to merge the two places into one. About half the stuff I have written since 2004 started out on tleaves, which has had various homes and is now mostly dormant, but still a point of reference. I copied a lot of my favorite stuff from there to here, but not all of it. I have left some of the shorter fill pieces that we used to write just to keep up the cadence over there. In addition, I have not moved some of the more pointless ruminations about video games. Of course there are a few notable exceptions to the video game rule, but overall I am not as personally connected to that material as I used to be, so I can’t tell if I just think it’s pointless or if it’s actually really pointless. Either way it’s fine where it is.
Of course I moved my new stuff off of tleaves and to this site just over 10 years ago. The marker for that change started out as a fairly recent divider between the large pile of old stuff and the relatively shallow collection of new stuff. Now it sits almost exactly halfway between the two piles, which is wild.
The new pile has more ruminations about music, software and programming, mathematics, physics, and some weird mashups of everything at once. And of course when I play the one sort of video game that I play, you are doomed to read about it here.
I think, but I’m not sure (because I don’t collect the data), that I’ve been going at a higher pace since 2020 than I had been in the few years before. I guess I’ve had more on my mind since the great stupidity.
A back of the command line shell estimate indicates that this site contains between three and four hundred thousand words of material, give or take a few tens of thousands. On some absolute scale that’s not all that much. On the other hand, I’m just sort of an introverted nerd, and this is a lot more writing than I ever thought I’d put into the world. This stuff has now been around long enough to maybe worry about what to do with it when I stop paying the hosting fees for the web site. Then again, everything is backed up on my personal computing devices, and on GitHub. So who knows.
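That estimate, by the way, was nothing fancier than something like the following sketch. The directory layout and file extension here are assumptions about how the site is stored, not the actual commands I ran.

```python
# Count the words across all the posts; good to within a few tens of thousands.
from pathlib import Path

total = sum(len(p.read_text(encoding="utf-8").split())
            for p in Path("posts").rglob("*.md"))
print(f"about {total} words")
```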
On our recent trip to Vancouver I told the person serving us food at Vij’s that the last time we were in their place in 2014 it was at their previous location. She sighed and said something like “wow that was forever ago” then paused and added “see you in 2034 I guess!” With any luck maybe I’ll be doing another one of these in 2044. But I would probably not bet on it.
A friend of mine who went to graduate school in Pittsburgh before moving back to Singapore once gave me a valuable piece of advice that every food tourist should take to heart. The “Rajesh Menu Rule” is that if you see an item on the menu that you’ve never seen before in a place of that kind, you should order it immediately. The context for his application of this rule was Indian restaurants in the U.S., many of which have a fairly standard menu template. Any deviation from this standard usually indicates some true piece of creativity that the owners of the place want to point out, so it’s good to investigate.
On my first trip to Vancouver, BC in 2010 I sadly did not follow this advice when getting dinner at Kintaro Ramen, a small ramen shop that sits in a cluster of them in the western part of the downtown area. On that trip I spied the item they have on their menu that you probably won’t see in most other shops: “Cheese Ramen”. They even call it out as a special thing of their own original creation. I should have ordered it on sight, but at the time I wanted the more straight up version of the dish, since Pittsburgh is something of a ramen wasteland. That’s fine.
What is inexplicable is that I could have gone back to this shop in 2014 and again neglected to try it. I had always felt low level regret about this and had always hoped to get back to Vancouver and honor Rajesh by correctly following his advice at this place.
So this year when we decided to go back to Vancouver for an end of up-and-down year trip, to hopefully end the up-and-down year on an up note, my mission was clear. On our fourth night in town (there is a lot of food in this town, it took until then for a spot to clear up in our schedule, and even then we were still semi-stuffed with dim sum lunch) we headed back to the shop and I got my bowl. And it was good.
The bowl is based on their miso broth with slices of pork, plus shredded cheese (maybe mild Swiss, maybe mozzarella) and a slice of cheese (probably mozzarella):
As you eat it the cheese melts into the soup and all over the noodles. It’s nice! You can kind of see the cheese melted into the noodles and broth here:
Of course, the more straight up ramen at Kintaro is also still great:
And they also had a cool onigiri extra, which Karen had been craving.
At this point I could make the rest of this post a lightly annotated list of about 125 food pictures. But that would be pretty tedious. So I’ll try to edit this down to the top ten or fifteen highlights along with where the dishes came from.
Our main sources for food info were a list of Downtown area places that I got from a colleague at the local Apple office, and the food blog Sherman’s Food Adventures, which I found out about through a friend of my brother’s who lives in town and plays hockey with Sherman. Sherman apparently produces that web site as a “hobby”, which gives you an idea of what hobby-level eating is like in Vancouver, BC (remember: it’s cooler than Seattle).
If you want casual Japanese bar food look no further than two local chains: Guu and Zukkushi. The Guu places are izakayas with a wide range of small plates. Highlights here were the sashimi and karaage of various sorts.
Scallop sashimi was great at Guu with Garlic on Robson:
The cauliflower karaage was great at Guu Toramasa on Seymour Street:
And that location had a great yellowtail sashimi as well:
Meanwhile, at Zukkushi on Main we got dozens of different kinds of grilled skewers and also this chicken yakitori rice bowl that was absolutely perfect in every way. The rice was perfect. The egg was perfect. The chicken was perfect. And the auxiliary toppings were the perfect balance. Beautiful.
They also make an Udon Carbonara that you should try. Especially if you tried the Cheese Ramen above and liked it.
We also overloaded on fried meat at Saku on Robson, which has the best (and most) katsu I’ve ever had. This is their sampler platter, which was a lot bigger than we thought it would be before we ordered it. Oops.
Finally, our sushi slot was taken up by a fancy lunch at a fancy spot called Miku which you should also go to if you can afford it.
Again I got a rice bowl that was perfect in every way. This is tuna on an absolutely perfect bed of white rice.
Close up:
The rest of the sushi was just as good:
Somehow I have no photographic record of the inari sushi that I got, which for once in North America was better than the stuff you get at a 7-11 in Japan. I know the Miku people would be insulted by this comparison, but they have to understand that where I am coming from this is the highest possible compliment.
We broke with tradition and got dim sum in the city this trip instead of our normal mode of going to Richmond. I think we did OK.
New Mandarin Seafood Restaurant had the only shu mai that Karen ever ate on purpose. They had scallop and quail egg in them.
They also had these seafood tofu rollups that were similarly excellent.
And finally, the ever popular sleeping teddy bear dessert.
Meanwhile, we went to Royal Palace Seafood Restaurant on Christmas. As one does. There were a few people there.
Everything here was great as well. But the highlight was the last thing we ordered after we were already full. This crab meat and assorted seafood egg fried rice was just unhinged:
The clay pot with the steamed chicken on rice in it was also great:
We had also never had well executed pan fried steamed buns with pork in them like these:
Things like this don’t make it to Pittsburgh.
In between all of these things, we did head to Richmond to walk around the malls and to get the roast meat at Hong Kong BBQ Master. Neither was a disappointment, but I’m only showing pictures of the roast meat.
They have two kinds of pork:
And also pork, duck, and chicken:
Finally, the fancy Chinese this time was a Michelin-starred Peking Duck place that only has a score of 3.5 on the local Yelp. iDen & Quanjude Beijing Duck House is probably not the best value for money, but I found the food and the service to be hard to criticize. I can’t say why the Yelp people are so mad about it.
Get the smoked fish appetizer:
And of course the duck:
And the special duck fried rice with crispy rice bits and foie gras:
Of course we went back to Vij’s. This experience is summed up by the vegetarian Thali.
My brother’s friend suggested that we get together and have lunch at a local favorite Vietnamese spot, Ahn and Chi. This place has a nice story behind it, and it has really good soft-shell crab fried rice.
The vermicelli bowl is also great. But we should have split one.
OK. I’ve about doubled my ten to fifteen highlights quota. You missed out on the chicken buns that have ham in them at the Kam Do bakery, Hei Hei rice rolls in the Richmond Public Market food court, DingDing, a Taiwanese hole in the wall with omelet covered rice, Lee’s Donuts, and the bagels at Siegel’s. Finally, there was the corner with at least 5 or 6 coffee shops near Hastings and Cambie. We tried Nemesis, Timber Train, and Revolver, but just go to Revolver because it’s the best one.
I’ll end with a gratuitous sunset picture:
A sunrise picture:
And whatever this is:
I like to take pictures on trips. I even still bother to pack a camera different from my phone with which to do this. I’ve told the story before about how I went from giant cameras to smaller cameras and then back and forth again.
This trip might have convinced me to go even smaller on the non-phone camera. This seems paradoxical but it really isn’t. Smaller cameras can now do what you used to need a giant camera to do. So you might as well take advantage of that if the main motivation for your trip is something besides maximizing photo quality (like stuffing yourself with dim sum).
You’ll notice that almost all of the pictures on this page were taken with my phone (the file names start with the standard iPhone prefix). And the ones that were not could (mostly) have just as well been done with a high quality smaller fixed lens camera like the Sony RX100 model whatever. So I think I’m going to go that way.
Back in 1999 the dream was born. This was the year the Tivo device first shipped. These days we use the much more sterile and generic term “DVR” to refer to things like the Tivo. But back then it was just Tivo.
What the Tivo did was capture broadcast TV as a cleverly encoded digital video file so that it could fit on the comparatively tiny disk that sat in the device. Then you could watch your TV shows whenever you wanted. You could even pause the show while it was being broadcast, because the machine kept a rolling buffer of maybe half an hour of the broadcast, so that if you had to go get snacks, or pee, or whatever, you did not have to miss anything that happened while you were gone.
Every nerd who used a Tivo or saw a Tivo being used instantly knew what they wanted: A Tivo in their computer that let them pick whatever shows they wanted on some kind of pay as you go basis. And, the ability to play back the resulting video files anywhere they happened to be on whatever device they happened to be sitting in front of. And finally, a way to not be beholden to the particular schedule the TV people wanted to dictate. Instead, the machine would just save the file for you whenever (time) and wherever (channel) the content happened to appear and you would have one single user interface to interact with to watch it.
Nerds who also liked sports (a minority to be sure, but not non-existent) especially wanted this. No more worrying about being in front of the TV when the game is on. Just fire up your computer sometime after it has started and there it is.
Netflix, of course, started the flood of streaming services for scripted and other pre-recorded programming. A few years after it was clear that Netflix was about to eat everyone else’s lunch, all of the major media networks shipped streaming services of various kinds. But, what remained missing to a large degree was live sports. Or at least live sports that I was interested in.
My recent dive into the soccer rabbit hole has changed all of this for me. If you want to be able to watch the widest possible range of European football games, you have to be signed up to almost every streamer that exists: Paramount+, Peacock, ESPN+, and Apple TV+, at a minimum.
There are probably a few more I missed.
These services have various delivery systems that run on your phone, or your iPad, or your web browser, or in your Apple TV/Smart TV thingy/Whatever. They are all pretty bad, and mostly don’t come close to the “giant Tivo in the sky” ideal.
All of the various interfaces have weak and tedious facilities for navigating to the game you want to watch, especially if you are trying to watch the game after it has started but before it has finished. Most (particularly Paramount+ and Peacock, where most of the soccer is) attempt to simulate everything bad about broadcast TV since they will only allow you to play a game live or later play a “replay” of the game several hours after the fact. Apparently serving a video stream to you “in real time” is somehow different than serving one that is delayed a bit. I don’t begin to understand the complications here, but I do know that YouTube and Twitch can both do this with video game streamers so … to coin a phrase … why don’t they just do it?
ESPN+ is so incoherent that I can’t remember if it has this problem as well (I think it does not), because most of the time I can’t navigate to the game I want to watch under the sea of all the other sports and ESPN shows listed in the browser interface, with no way to organize any of it at a finer granularity. I can usually only get there from the ESPN web site, where you’ll of course see the score while navigating to the game. Good job.
Apple TV+ has the distinction of shipping with a UI that shows you every score in every game whether you wanted to know them or not. Which is another great thing to happen to you when you are a bit late to the start time.
Finally, after watching all the other soccer coverage for a year, the stuff on FOX is just really awful. I wish they could just get the British crew from NBC to do it. Dare to dream.
From the standpoint of a sports fan, all of these services suffer from the fact that their navigation interfaces are built to find shows, so the sports content is organized the same way instead of according to the structure of the leagues and competitions that the games belong to. What you want is something like this page from ESPN that has every game being played on any given day … and then just add links to where each game is on “TV”. Sadly no one has any of this.
The one small exception to many of my complaints is, of all things, YouTube TV. I imagine FuboTV would be similar, but I have not tried their service. While the general navigation interface at YouTube TV is pretty much just as poor as the rest, the fact that you can easily mark various classes of shows and events as things to “record” in your “library” at least makes it easy to find recent games that you were interested in watching. Also, the user model here is in fact “a Tivo run by a web server.” So for things that are available on the service the system comes very close to the “giant Tivo in the sky” thing that we all dreamed about back in the day. It’s probably as close as one can expect to manage, given the realities of media rights, software engineering, and so on.
The main downside is that it is very expensive. And who knows what data they are collecting on you. And the Apple TV app for YouTube TV, while thankfully not the same as the awful YouTube app, is still not the best. Still, I’d pay all the money and more if the rest of the streaming services just appeared as channels in the YouTube TV service so I’d be able to “record” the sports stuff that’s there instead of waiting for the replays to appear a few hours after the game ends. I would also much rather everyone just standardized on the YouTube playback interface, even though it’s far from perfect, because it’s closer to good than any of the others. At least scrubbing forward and backward in time usually works in YouTube whereas I’m pretty sure there are circumstances under which pausing the video doesn’t even work in Paramount’s god-forsaken app.
I find myself without any real conclusion about all of this except to say that we appear to have come full circle in the sports on TV business. Before, you could only really be happy if you spent all the money for everything whether you needed it or not. Internet TV is now in exactly the same spot, except that instead of one interface that doesn’t do the right thing you now have to use 15 different navigation and video interfaces, none of which do the right thing either. As usual the nerds dreamed of something and when it was actually built it turned out kind of all wrong.
Maybe in another 10 years everything will just be a channel in the YouTube TV service of the future. On the other hand the number of ways that could go wrong is truly mind boggling.
I have worked in software now for a long time. And for a long time I have observed user complaints about and desires for the software that I and others build. Every software project has its own unique requirements and issues, but there is an almost universal complaint that one will observe from almost every user community. I don’t even have to know the particular feature that the person complaining wants. I can write a generic template for the thing. It is always said, or posted online, with a slight sideways sneer and goes like this: “Why don’t they just (something), how hard could that be?”
The (something) here can be almost anything. But the assumption of the outside observer is that the thing is obviously completely trivial to implement. Various large categories of such features include:
And so on.
I have long fantasized about producing a podcast that I will never produce where each set of episodes starts with a question like this, which I present to actual engineers on the product in question. Then the show would run as many two to three hour installments as necessary to explain, in a horrific amount of detail, exactly why “they don’t just” do the thing.
I have complete confidence that I could fill hundreds of hours of riveting1 podcast content with the answer to even the most apparently trivial question about the most apparently trivial feature.
I know this because if there is one thing that more than twenty years of working on application software has taught me, it is that there is nothing that is easy in large software products. In addition, there is an order of magnitude jump in difficulty at each step from realizing that you want to be able to do a certain thing X, to figuring out (maybe) how to do X for yourself, to figuring out how to do X in the context of a giant complicated application stack that has to then ship to thousands or millions of people and work for all of them too.
Here is a list of issues that your favorite “why don’t they just” feature probably has to deal with in order to work in a modern application. This is just off the top of my head.
Now, on the one hand, it is an old maxim that no one should care, least of all the end user, how hard you worked to put the product on the table. That is not the end user’s concern.
But, on the other hand, I can’t help but think about how hard literally everything is to do when some arrogant but incredibly ignorant tech-bro dude gets up on his (almost always this person is male) Internet bullhorn and declares how completely trivial it should be to just do X, whatever X is.
I can almost tolerate specific complaints about specific systems. But the very worst sort of “why don’t they just” whining is the kind that makes broad and historical declarations about the nature of software development. Usually these are filled with derision and snark but actually do little but illustrate how little the commenter understands about how we got to this particular place in the history of end user software.
A recent set of such takes on twitter (I will not use the other stupid name) took one of the standard forms: “boy programmers these days sure are stupid. All they do is write giant slow bloatware. Can you imagine them trying to do anything useful on some old machine from the past with less memory in it than a modern microwave oven?”
Which is of course why I am writing this page now, after sitting on it for years.
The most obvious and shallow thing that statements like this generally ignore is that while software developers of the past certainly did a lot of interesting things in assembly language on that machine that only had 1MB of memory, nothing that they did even comes close to reaching the bare minimum level of user expectations for the simplest free application that runs on your phone today.
Here are some things old applications were really bad at:
Oh hey that list looks familiar.
It would take a dozen more web pages to cover everything else that is wrong with these statements, and I don’t have the energy for that today. There are sports to watch.
Anyway. The final thing to say is that I am in no way denying the existence of true lazy shitware. All over the software universe there is stuff that is put together by people who truly don’t care and barely even do anything. These things obviously do and will continue to exist. I will also confess to being guilty of asking the question I am complaining about here dozens or hundreds of times in my life. But I like to think that I’ve learned my lesson over the years.
I am just making a gentle (and useless) plea for the Internet commentariat to think harder the next time they decide to rip into some long-standing product built by a large team of people with a searing hot take about what they should “just do.” The people on the team probably want to do the thing you want even more than you want them to do it. But nothing is easy.
OK it would not be riveting. OTOH have you seen what some podcasts these days use their hundreds of minutes of runtime to talk about? It would be better than that.↩︎
People who know me know that I complain a lot about the computer programs that we use these days to find, buy, catalog, and play recorded music. It used to be simple: you’d read about a recording that you might like and then dutifully trudge to the local record store to try and buy it. But, unless the local record store was in NYC, SF, Boston, or some other major city, chances are you would not find it there. This was especially true for recordings from lesser known artists, recordings in less popular genres, or some combination of both. You would then trudge home, wondering what you were missing out on.
These days it’s different. Almost everything that was ever recorded and managed to survive in the culture is now encoded somewhere on the Internet and available to listen to for either a small nominal fee or, usually, for free if you don’t mind ads. It’s crazy. Describe this world to any music dork in the 80s and they would think you were talking about heaven on Earth.
But, it’s not all great. Records are now just so much digital data, “software” if you will (although I hate that term for it, because it’s just data). As such you have to use other software to access it, and as we all know the software that you have to use sometimes seems designed to actually prevent you from finding the thing that you know is there. This aspect of the new world has always been puzzling but with enough experience it becomes clear why it has to be that way.
So, story time.
One of the things that is allegedly dead in the stark new musical landscape of faceless server in the sky streaming (almost) all the music ever recorded by man directly to the Internet connected device in your pocket and then to your earholes is the idea of spontaneous and serendipitous discovery.
The notion is that when we used to have to interact with the world in order to obtain recorded music we would often be confronted with random and strange items that we would not have otherwise sought out on our own, thus expanding our horizons in useful and educational ways.
While I love and miss the act of browsing in a record store as much as the next record nerd I would also say that the modern soulless mechanized music consumption experience is not completely without natural (not machine generated, sort of) serendipity.
Around ten years ago I was complaining on some online chat system that modern pop music seemed to have lost the capability to make a record with something as simple as a well recorded female singing voice on it. I griped for a while about over-processed and probably auto-tuned vocals that sound more like a machine than a person and someone told me I should look up the band … and I thought I read it right in the chat … “Churches.”
So I dutifully went to whatever music search engine I had available to me in 2014 and typed in “Churches”. I got no records by a band by this name, obviously, because I should have typed in “Chvrches”. To this day I still haven’t gotten around to finding out about Chvrches. But! Instead a record caught my eye by an artist called Aby Wolf. Probably this one:
Following some links around youtube, I next found that Aby Wolf appeared to often work with another artist named Dessa. Maybe it was this next video, maybe not, I don’t remember for sure:
In any case I eventually ended up on this “Tiny Desk Concert” video:
After seeing/hearing this I went out and bought everything Dessa had ever done. And I have continued to do so for the last ten years. And you should too.
This was serendipity unmediated by automatic music delivery “algorithms” as we know them today. Not even the idiotic Youtube Play Next queue. Just a bit of Internet search.
Oh, this video about Dessa’s then new record (Parts of Speech), which is now ten years old, was also fun:
Here is a similar story to the one above, but it is about what happens when you can’t figure out what you just heard.
Around the beginning of 2019, when we were all feeling carefree and full of youthful energy, I went to see a saxophone trio at a venue in Pittsburgh and they played an encore that I knew I knew. It was one of those late 50s/early 60s classic Charlie Parker/Thelonious Monk-style bebop refactors of a pop song of the time. Or so I thought. But I could not remember the name of the tune off the top of my head. This happens to me a lot, especially with classical encores. I hate it.
Being an idiot I was sure I’d be able to place it later using my vast library of classic jazz recordings. So I went home and played every Charlie Parker recording that I had, or that was in Apple Music. No dice. This drove me nuts for a week and I finally gave up.
At some point I finally figured out that it was this tune by Monk called Rhythm-a-ning:
Then I went on a Rhythm-a-ning jag, and found lots of other versions. I was having a great time until I heard this:
At first this seems like a straightforward fast boppy run-through of the tune. But wait. What’s that thing they play in the recap after the bridge? It’s supposed to be the A in the AABA, but that’s not the original tune! That is clearly some other generic Monk-ish or Charlie Parker riff. I was right back in the same hole.
For weeks I tried to find it and could not. I even posted on an Internet forum full of old men who listen to Jazz and got no answer. Then finally one night I was reading some category theory tutorial and absent-mindedly listening to an Art Pepper record that I had picked up … and I hear this:
MYSTERY SOLVED.
I probably originally heard it on this album, which I have on vinyl but not iTunes:
You all should go buy all these records now.
The above stories illustrate a big part of how I interact with music in my life. I think I see music not just as a thing to consume and appreciate as art. I also have a side interest in collecting it. Or at least collecting other information about it. It can be like a fun intellectual puzzle.
Before all the music was stored in data centers, collecting records was not quite such a weird idea. The main way to hear the music you wanted to hear was to buy the record. Sure, you could sit next to your radio and wait for the next time your favorite single of the time came up in the station’s rotation. But that is tedious and time consuming, so once you had the resources to do better you would go buy stuff. There were catalogs, and magazines, and the local gurus at the record store to tell you what you might like, and the best instances of those things. It was a whole ecosystem of material outside of the music itself that provided more context and insight into the material.
This is important, because as someone with limited resources you could not buy all of it. You had to know which were the most important things to get. Which recordings were the “best.” Which you should buy first. Which could wait. For me this was especially true with recordings of Classical music.
What I will never understand about Classical music recordings is why there are so many of them. But, given that there are, it’s not enough to know you want to listen to a recording of, say, Bruckner’s Third symphony. You have to narrow it down. There are four or five different editions of the work that were published at various times. Different conductors and orchestras make different choices about which one to play and how they play it. Here is a list of (maybe) every known recording of this piece, organized by edition:
My best guess, based on trying to parse the HTML of that page, is that there are around 256 performances listed, each with multiple released recordings.
In this context what is important to the listener is being able to find the specific performance out of this huge pool that is the one that some trusted reference has declared as being worth your time, and more importantly your money. When you walk into the cathedral-like Tower Records in Boston or NYC you want to be able to scan the racks and find exactly the thing you wanted. Jochum, say, from 1976, but not 1967.
From the 50s until around the 2010s this was how Classical music buyers (and also anyone interested in historic jazz, blues, or even many old pop records) were conditioned to interact with the music and music information systems available to them. The knowledge of specific recordings made at specific times by specific people is very important. I have come to realize that this discographical mode of interaction is particularly prevalent among those who used to have any interest at all in collecting records as opposed to just listening to music.
This scene from that classic movie about record collecting, High Fidelity, where the main character talks about reorganizing his record collection in autobiographical order sums up this point of view in 75 short seconds:
Now, it might be that obsession on this level is not all that healthy. It’s certainly the case that a lot of people, maybe most people, don’t interact with music in this multi-leveled way anymore. It’s definitely the case that the music services don’t think they do. The music services are oriented around just playing the songs, or if you are lucky, the albums, or if you are really lucky some specific performances. But it is very hard to extract any other context from the service itself, because that context is not encoded in the data models and user interfaces of the service the way it used to be encoded in the record catalogs, magazines and liner notes. You can’t organize the world, or at least your music collection, the way you want. It can only be the way the service has already put it together for you. If you want proof open up Apple Music or Spotify or whatever and go try to find any specific recording of the Bruckner third and then verify that you found the right one from the right year. It will make you want to pull your own eyes out.
This is frustrating for those who miss the old ways even if the old ways weren’t really that much better. This is why grumpy old men are always screaming at clouds and choking out the words “meta-data” from their parched throats. The side data is as important as the music itself. And the perception is that it has been taken away. This is not really true. Much of the raw data is there. It’s just not organized in a way that makes it easy to find.
The one oasis in this desert of context free recorded music is the grand and noble project called Discogs.com. Here is a place on the Internet where those who are obsessed with meta-data can again congregate and collect all of their obscure nuggets of information. And as a happy synergy they also finance the whole thing by mediating the exchange of used LPs and CDs for money over the Internet. The perfect crime!
Now all we need is for the music services to realize that this site is important: finance its further development into a comprehensive discography of everything ever recorded (it has obvious holes, especially in the Classical areas, and some Jazz too), and then add UI to their music players to automatically look up the “liner notes” on the site so you can know who was playing flute on that one Freddie Hubbard quintet date that you were just listening to.
Of course, if this happened some asshole VC would buy it up and then lay everyone off. Because that’s how the music business works. But we can dream.
Here are some recordings of the Bruckner 3rd to try out. The 1889 edition is the most played, followed by the 1877. The 1873 is a bit of a draft and an interesting niche product. I kinda like it.
1873 edition: Simone Young
1877 edition: Haitink, Barenboim
1889 edition: Jochum, 1977, Bohm on Decca
This post is brought to you by my over-developed sense of nerd engineer pedantry.
My dear friend and fellow nerd peterb recently posted something on twitter that I could not let go of. He said:
I am begging you, when introducing the topic of pointers to new learners, start off BEFORE drawing boxes and arrows by explaining that, in a very real sense, a pointer is just an integer.
I, of course, told him he was wrong. Pointers are a thing that is much more fundamental and much more complicated. We screamed at each other online and offline for a while and came to an understanding that appeared to give us both the last word. But the pedantic asshole in me still felt like this was a statement worth clarifying, if for no other reason than the fact that pointers are a conceptual boundary that trips a lot of beginning programmers hard enough that they just give up completely.
Pointers are a type of value that programming languages use to store a name of or reference to some other value.
In high level languages these names can be conceptually high level and abstract. They might even be something as esoteric as a function that actually computes the value.
In low level languages we have traditionally thought of these names as the machine level address at which a particular value is stored in memory.
It is from this second view of what a pointer might be that we can come to the idea that it might just “be an integer”. But to explain why we have to take one or two steps back.
Most programming languages define some built-in types for simple values that can be stored and manipulated relatively quickly by the underlying machine architecture. You know, things like integers, floating point numbers, maybe simple characters and strings but probably not these days since those are too complicated (thanks, unicode).
Most languages also let you construct new types of things out of the simple values by combining them in various ways. One of those ways tends to be to construct something called an array or vector out of them. So, instead of a single integer (say) we can tell the language that we want a whole bunch of them all stored sequentially. So here is an ASCII picture of a small array that holds five small integers:
[ 0 | 11 | 13 | 10 | 3 ]
If this array is called A, then we can reference each element, or member, of A using a small index between 1 and 5. So A[1] is 0, A[2] is 11, and so on. Here the small integer is called an “index”, and we also say that we use that value to “index” into the array1.
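For the nerds who want something to type in, here is that same array as a minimal C sketch. The only wrinkle, as the footnote below admits, is that C insists on indexing from zero, so the text’s A[1] is C’s A[0]:

#include <stdio.h>

int main(void)
{
    /* The small array from the ASCII picture above. */
    int A[5] = { 0, 11, 13, 10, 3 };

    printf("%d\n", A[0]); /* prints 0  (the text's A[1]) */
    printf("%d\n", A[1]); /* prints 11 (the text's A[2]) */
    return 0;
}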
Now the nerds in the room already know where I am going with this. If you take a fairly simple-minded view of how memory in a computer works, you can think of it as a giant array of storage cells, each of which stores (usually) a value that by convention fits into 8 bits (so a small integer between 0 and 255). Computer memory systems are actually a lot more complicated than this, but let’s talk about that later.
In this picture, pointers are just a special kind of index into this giant pool of memory. The size of the index is defined by how many bits you need to represent all possible addresses in the machine architecture. These days this index would be either 32 or 64 bits long. Then to see what value a pointer is referencing you ask the memory system to hand you back the value that is stored at the index that the pointer holds into the giant pool of bytes. Symmetrically, if you are holding on to some value, you can construct a pointer to that value by asking the system to tell you what index that value is stored at.
Voila, in each case you can kind of think of pointers as being this special kind of integer index. Nice.
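In C this round trip looks like the following minimal sketch (the printed address will of course vary from run to run):

#include <stdio.h>

int main(void)
{
    int value = 42;

    /* Ask the system for the "index" at which value is stored. */
    int *p = &value;

    /* Ask the memory system for the value stored at that index. */
    printf("%d\n", *p);         /* prints 42 */

    /* The index itself, printed as an address. */
    printf("%p\n", (void *)p);
    return 0;
}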
This picture of how pointers work has a lot going for it, which is why Pete was cheering for it on the Internet. The nice things about it are:
Of course, there is more to the story than this.
First, what’s with the boxes and arrows anyway? You know the drill. You’ll often see a picture like this to indicate a simple pointer to an integer like we talked about above:
Here the idea is that the box holds an arrow that represents the name, or reference to the actual value. Follow the arrow from the box to get to the actual value. This is all a bit abstract, which is what Pete was really complaining about. Like all overly abstract notions the thing makes perfect sense but only if you already know what it means.
In particular, if you think of the arrow (or reference, or pointer) as the index that we talked about above, then the picture becomes more down to earth and easy to understand. The arrow reaches into the big pool of memory and pulls out the value you want. Easy.
What the box and arrow picture does capture is the higher level view of what pointers are. Here they are more than just an index into a pool of low level machine values. Instead they really are more of a reference or a name for something you want to operate on indirectly. This is especially true in languages with “advanced” type systems that don’t really make contact with memory at the machine level, but do want to provide the programmer with an abstraction for dealing with references for various important use cases like:
In other words the box and arrow tries to get at the idea that the arrow can be pointing at a generic value of any type, including more complicated structured types, like this:
In this context the idea of a reference as a sort of abstract box that holds an “arrow”2, which in turn provides a recipe for finding the value that the reference is “holding”, sort of makes sense. But what is also clear is that this picture is most important after you already understand the lower level and more concrete picture of how memory allegedly works in an actual computer.
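To make the box and arrow picture concrete in C terms, here is a small sketch where a struct stands in for the “more complicated structured type” (the names are made up for illustration):

#include <stdio.h>

/* A structured value for the arrow to point at. */
struct point { int x; int y; };

int main(void)
{
    struct point pt = { 3, 4 };

    /* The "box" holds the arrow; following the arrow (the ->
       operator) gets you to the value it references. */
    struct point *arrow = &pt;
    printf("%d %d\n", arrow->x, arrow->y); /* prints 3 4 */
    return 0;
}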
Of course, many languages, like Lisp and Scheme (sort of3), Smalltalk, Java, Python, Swift, and Haskell, have no explicit high level notion of a reference. Instead these are replaced by objects or classes or other types that might be represented internally as references (or something more complicated), but this fact is not really made explicitly visible to the programmer.
And here we can cue up all the pearl clutching old nerds complaining about how Gen-Z programmers don’t know how any basic thing works anymore. Which is of course nonsense.
The last part of the story I want to tell is to say that even if we stay close to the “machine” level the addresses of memory locations are still not really the same as integers, even if they share a similar representation in the machine.
All simple values in computers are stored as (mostly) fixed size bit strings. This picture from the classic book about 1970s minicomputers, Soul of a New Machine, can get us started. At one point one of the engineers draws a picture of what a basic value looks like in the machine they are building, which uses 32-bit values:
Here, as I said, every value is 32 bits long and those bits are stored such that the higher order bits come first (index 1) in memory and the lower order bits come last (index 32). These things can go the other way too.
Later on, the same engineer draws the following picture for how the data in a machine word is interpreted as an address or pointer:
What’s happened here is that they have specified that the top four bits of each address are interpreted not just as an index into the giant pool of memory data, but also as what is called a “security” or “privilege” ring, which is used for access control.
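A hedged sketch of the idea in C follows. The field widths below are illustrative only, not the actual layout of the machine in the book, but they show how “address” bits can carry meaning beyond a plain index:

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: pretend the top 4 bits of a 32-bit address
   name a privilege ring, and the rest is the actual index. */
#define RING_SHIFT 28

static unsigned ring_of(uint32_t addr)
{
    return addr >> RING_SHIFT;
}

static uint32_t index_of(uint32_t addr)
{
    return addr & ((1u << RING_SHIFT) - 1);
}

int main(void)
{
    uint32_t addr = 0x30001234u;
    printf("ring %u, index 0x%x\n", ring_of(addr), index_of(addr));
    /* prints: ring 3, index 0x1234 */
    return 0;
}

Note that ordinary integer arithmetic on addr here can silently change the ring field, which is exactly the trap described next.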
Thus, while the address is stored as a 32-bit string just like a machine integer might be, you can’t actually use the value like an integer, since some standard operations on the value (like addition, or multiplication) can change its meaning in ways that have nothing to do with its “integer”-ness. There is even an episode in the book where the computer fails because one of the test programs that was running took an address and incremented it until it fell off the end of the range of addresses that were actually physically available in the computer.
Over time, and especially as machines moved to 32-bit and then 64-bit basic values, even more semantics have been layered over the “integer” representation of machine addresses. Things like:
All of these things should give one pause before trying to manipulate pointers like integers.
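One tiny concrete example of that pause, in C: arithmetic on a pointer is scaled by the size of the thing pointed to, so a pointer does not behave like the integer it superficially resembles:

#include <stdio.h>

int main(void)
{
    int A[5] = { 0, 11, 13, 10, 3 };
    int *p = &A[0];

    /* p + 1 moves forward by sizeof(int) bytes (usually 4), not
       by 1: pointer arithmetic is not integer arithmetic. */
    printf("%p\n", (void *)p);
    printf("%p\n", (void *)(p + 1));
    return 0;
}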
Finally, we can’t let this subject go without pointing out that by far the dominant systems programming language and runtime (C and C++) has always taken a very laissez faire attitude towards the relationship between pointers and integers. Who can forget that code from K&R that copied strings like this:
while (*s++ = *t++);
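For those who have forgotten the context, that one-liner is the body of the K&R pointer-style string copy. Spelled out as a complete (if dangerous) program it looks roughly like this, renamed here so it doesn’t collide with the standard library’s strcpy:

#include <stdio.h>

/* The K&R pointer-style string copy: copy t into s, including the
   terminating '\0'. All the work happens in the loop condition.
   Note there is no bounds check anywhere, which is rather the
   point of the next paragraph. */
void kr_strcpy(char *s, const char *t)
{
    while ((*s++ = *t++))
        ;
}

int main(void)
{
    char buf[32];
    kr_strcpy(buf, "hello");
    printf("%s\n", buf); /* prints hello */
    return 0;
}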
In the fullness of time it’s become pretty clear that treating pointers and integers this way has gone very badly.
To end, my goal here was not really to dunk on my friend Pete’s basic statement. This is because I agree with it. Instead all these words were really just a warning not to take the idea that a pointer is “just a number” too seriously. Pointers are a lot more than just numbers and if you are not careful to remember this they have a way of sneaking up behind you and cutting your code’s throat open and laughing while the code bleeds to death on the floor … or at least in the debugger.
Be careful out there.
For reasons that are too complicated and/or stupid to go into, most programming languages actually index arrays starting at zero rather than one. But I never liked that so I’m going to be difficult about it.↩︎
Category theory nerds represent!↩︎
Lisp and Scheme have an idea of cons cells, which store pairs of values that are implicitly stored by reference. You use these to construct lists and things, and you can also use them for side effects. These are usually implemented using pointers, but it’s not an explicit part of the language model.↩︎
Today some short things that I could not turn into longer things.
Today is Talking Heads day for me, as this long time favorite film comes back around again. I am now old enough to have seen all three incarnations of this movie as they hit theaters. Which is … something.
I have nothing more to say about this but plan to have a really good time.
Edit: I actually do have one more thing to say about this. Seeing it again I’m struck by how unbelievable it is that they could shoot this the way they did with the giant movie cameras of the time. And that this might not be appreciated enough in these days when you could just walk around the stage with a phone (or, more likely, any of the smaller digital cinema cameras, that are comparatively tiny) and do it (if you knew what you were doing).
Aside from some light use of the classic and now mostly forgotten iPod Shuffle I have mostly avoided the smaller auxiliary computing devices that Apple makes. I carry their small pocket camera and Internet communication device around, because if you have to carry one you might as well carry that one. But otherwise I had never seen a reason to use their ear mounted or wrist mounted machines because they did not do anything that I was interested in doing.
The headphones never fit, and the large ones that did fit felt more like two small computers pretending to be headphones than actual headphones. And, I don’t like watches.
But as we have learned in other contexts, you never know how these things are gonna go. So lately I have found myself using both the ear pods and the watch, for various reasons that are too boring to go into.
The watch is fine, I guess, for a watch. The bike ride tracking is finally good enough for me to get rid of a dedicated machine just for bike ride tracking. So that’s good. I guess I don’t miss as many text messages. Which is a mixed blessing. Everything else is a collection of incoherent and inscrutable machine-learning driven heuristics that try to dictate how to live your life (“time to stand up! time to get moving!”) and make you feel bad for not fitting into the bell curves defined by the models. Also, we’re up to version 10 of the software and it still can’t do windowed averages for all of the exercise metrics? Really?
Oh well. At least it has some comfy bands that are infinitely adjustable.
The new ear pods were a surprise. I decided to try them on a whim after getting some for my brother. I expected them to not fit, just like every Apple earbud product since the original iPod. But surprise, they fit perfectly. I also expected that the idiotic bluetooth connection dance would again be 15 times worse than just unplugging from one headphone jack and plugging into another. But surprise again … it mostly just works. I’ve still had it decide to randomly not talk to my Mac once in a while. So it’s only 1.5 times worse.
Still, these are the best “listen to things in your brain holes while walking around” headphones I’ve ever used. So kudos. Now I can use my walks back and forth to the new office to polish off all those 57-disc boxed sets of the Bach Cantatas that I keep buying.
Back in the early pandemic we bought a new TV that hangs on the wall where my iMac used to be. The most nerve-wracking part of the process at the time was letting a guy come into the house to hang it for us.
The TV works great except for the fact that it’s actually a fucking computer that runs linux, so every few months it just hangs instead of turning on, and you have to unplug it from the wall to get it going again. Since the wall plug is actually on the floor, under a tall desk, and difficult to reach, I wanted to get a thing to do this by remote control.
Fortunately, a lot of folks appear to sell remote control power plugs. Unfortunately the ones they want you to buy these days are all “Smart Home” devices, which means they don’t fucking work.
Here is how it goes:
Take the plug out of the box.
Plug it into the wall and hold your phone near it so that it can set itself up using NFC.
Just kidding that shit never works. So unplug it from the wall and turn it over on its ass and take a picture of a small 4 digit code that has been silk-screened on the bottom of the device in a 7pt font in beige lettering that exactly matches the beige color of the body of the device.
Type that code into your phone to pair it up.
Spend 15min adding the device to your “Home” app.
Three months later when the TV goes wack attempt to use the “Home” app to flip the switch inside the plug by remote control. This won’t work because the device will have long since fallen off of your home Wifi network, so you won’t be able to talk to it without going through the whole setup process again.
Because I’m a dipshit I did this with two different wifi power plugs. They were probably just the same hardware and software stack with different brand labels on them … they both behaved in exactly the same useless way.
Having learned my lesson, I bought this thing:
This is the dumbest of all possible plugs with a switch connected to the dumbest of all possible RF receivers.
It just sits there waiting for a radio signal to come around and tell it to do its thing. No NFC, no pairing, no wifi, nothing “smart”. Now, every few months when my “smart” TV goes catatonic I just hit the button on the thing and it reboots perfectly. This has happened twice since I bought it in the spring.
So to summarize:
Brainless RF controlled simple power plug with a switch in it: 2
Fancy WiFi Internet of Things Home Control Bullshit Devices that try to do the same thing: -3
The Smart Home shit gets an extra point off for the initial setup pain. Why anyone would trust anything in their house that they actually want to work most of the time to one of these moronic Internet Home devices remains unfathomable to me.
I wrote this silly page about the Yoneda Lemma a couple of years back. Since then I have continued to noodle on it and fix a never ending list of small errors and nitpicks, including one where I left out an entire third of one of the statements of the result. I’ve noodled with the pdf version too. It remains the reference version and the best to read.
In fact even as I type this I just found two more horrible typos. Sigh.
I got a new bike last year. It is a new-fangled “gravel” bike. Which I guess means the road bike people finally decided to make road bikes with wider more comfortable tires and a decent gearing range on them. But to do so they had to put the bike in a completely different category so the road bike people don’t feel like their manhoods are being threatened. Whatever.
I didn’t get to ride it much last year for reasons. But this year I’ve had it out a lot and it’s great. If I get my ride in today I’ll make 600 miles in one summer for only the second time in the last 10 years. Good times.
I also got some weird new bike shorts with pockets in them. I wonder why it took like 50 years of bike shorts for someone to finally try this. They are great.
Here is the bike in its peak natural habitat on a local road that has just been “resurfaced” (wink wink).
Happy fall.
As predicted, Stop Making Sense was great.
I got the 15 miles I needed today, so happy 600.
This one is an intellectual companion to the food processor salsa from last time. The soup is based on a recipe for “Green Corn Soup” from the Fields of Greens cookbook. This is a weird name for the dish because it doesn’t really come out green. Especially if you use red peppers. Their way was also very tedious and fussy so I have streamlined it here.
One special thing you do need for this soup is a “food mill”. A food mill is like a strainer with a spinning handle that pushes stuff through the sieve part of the strainer. Like this.
Collect the following:
6 ears of corn. Get someone to cut all the kernels off the corn for you.
6-12 tomatillos, peeled and cut into big chunks.
Around three hot peppers if you get hot peppers from where I get hot peppers. Cut the seeds out of two of them. If you want more heat, keep more seeds. This year the red cayennes have been great.
1 pretty big yellow or red onion, chopped small.
2 or 3 cloves of garlic, diced.
A big bunch of cilantro. Including the stems.
First take the peppers and roast them in a 425F oven for 10-15 minutes or until they start to scorch.
Meanwhile, get out your soup pot and heat it up on medium. Add oil and the onion and garlic. Stir that around with some salt and pepper until the onion gets soft. Don’t burn it.
Now add the corn and a quart or two of water … you need just enough so the corn is mostly submerged. Bring this to a boil and then turn the heat down and simmer around 10-15 minutes or until the corn is pretty soft. When this is done scoop about 2 ladles worth of the corn into a bowl. Set the bowl aside.
Take the tomatillos and add them to the pot along with a bit more water if you need it.
While the pot comes back to heat, take the peppers you had in the oven out and cut them into little pieces and throw them in the pot.
Finally, chop up 90% of the cilantro, including all the stems. Keep some of the leaves for later.
When the tomatillos are soft turn the heat off and take an immersion blender and liquify everything in the soup pot. This will take about 5 or 6 passes around all over and several minutes. Be patient.
Now get someone to run the contents of the soup pot through a food mill. This will make the smooth part of the soup even smoooother. Throw the even smoother soup back in the pot.
Now take the bowl of corn kernels you kept aside before and throw them back in the pot along with some more liquid if you need it. Heat everything back up. Adjust the salt and pepper so it tastes good. For an Asian twist add some chicken powder and/or MSG and a dash of fish sauce for an extra umami jolt. Maybe some white pepper too if you want.
Let it simmer on low heat for five or ten minutes to let everything melt together and you are done.
Notes: You could make this with chicken stock, and it would be fine. But it doesn’t really need it. The original recipe is from a vegetarian cookbook so they obviously did not do this. The book suggests you make a “corn stock” from the ears that you cut the kernels off of. If you feel like killing some time doing this go ahead but I am completely not convinced that it’s worth it. Using red hot peppers gets you a bit of extra sweetness on top of the corn which is nice. The original recipe called for jalapenos, but jalapenos kind of suck now so I would suggest getting better stuff.
Tomato and pepper season is upon us. So it’s time to make salsa. Luckily I am the laziest cook on Earth and I will show you how to make the laziest salsa on Earth.
First, get you to the farmer’s market and collect the following items:
4 or 5 good tomatoes or even not the best tomatoes. It doesn’t matter. Get some that are a bit less juicy if you want.
One small green farmer’s market box of tomatillos. This will have between 5 and 10 tomatillos in it.
Cilantro if you like it.
Scallion. Or shallots. Or red onion. Whatever is your favorite.
3 or 4 of your favorite peppers. Peppers are tricky to get these days because the peppers that used to be reliably hot (jalapeno, say) are no longer so. This year I have found that the jalapenos are pretty good. As are the red cayenne. The red cayenne are also pleasingly sweet in addition to having that chili-ness. Hatch and Serrano are also nice. This year the local market has also had what I think are red Fresno peppers that were good. In any case, get a few of what you like. Enough so you can add enough heat to the final mix.
Optional: cut the chilis open and roast them a bit in the oven. This makes them nice.
To balance the heat, remove some of the seeds and veins from the peppers if you want. But leave some of the seeds in so the final result is hot enough. This is the hardest thing to guess at until you get it right. After you have decided what to do, chop the peppers into pretty big pieces.
Peel and quarter the tomatillos.
Chop the tomatoes into quarters or big chunks. Some people like to peel the tomatoes too. Do that if you are not as lazy as me.
Coarsely dice up whatever onion you picked.
Chop up the cilantro.
Now get out your food processor and into it throw:
Grind that up to make more room.
Toss in the rest of the stuff a bit at a time, starting with the tomatoes. As you add each thing pulse the food processor to make more room.
Finally add a few dribbles of rice vinegar to add a bit of acid. Finish with the dreaded “salt to taste”. Two or three pinches of kosher salt should do it.
Now continue to food process it until it’s nice and pulverized. You should end up with little bits of tomato and such … the food processor doesn’t liquify stuff the way a blender might.
If the resulting mash is too watery pour it through a strainer to get rid of some of the juicy juice. When it’s a good consistency put it in a large bowl and cool it off in the fridge for an hour or two.
It will take you a bit of practice to get the balance between the tomatoes and tomatillos right, and to get the heat like you want it. I have found that the salsa gets less hot as it sits in the fridge, so making it a bit too hot to start is a good move. Do this once a week until you are good at it. Then every year you can do it again and make salsa that’s better than the rest.
When I was a graduate student in the late 1900s (if you must know, it was 1987 until around 1993) I used to use an old programming language called perl1 to do a lot of my “work”. That is, I wrote programs in perl to automate various tedious tasks that I had to do a lot as a graduate student: converting text files from one sort of format to another, reducing and summarizing data from some “experiments” that I ran, generating pictures in LaTeX for my thesis. That sort of thing.
At the time perl had a reputation for being a very powerful but difficult to learn and use system. It had some strange syntax, and for any given problem there always seemed to be five to ten different ways to write down an idiomatic solution in perl. People also made claims about how perl was hard to learn because it had no simple to understand conceptual core. It was just a lot of different utilities thrown together into one big mess.
These things never bothered me. My approach to using perl was to always use it on a “need to know” basis. That is, never learn anything new about the tool unless you really need to know. This saves you a lot of time and mental energy. You avoid the intellectual paralysis that comes with trying to understand the “big picture” about a system for which there is no big picture. This policy also lets you avoid some of the darker and sharper edges of the system that you should have really stayed away from anyway (see also: typeglobs).
I think using most computer systems like this is a rational way to go about your life. Unless you have some sort of unhealthy and uncontrollable intellectual curiosity about the things computers do computers are not really fun things to learn about, and you should minimize the amount of time in your life you spend learning to interact with them. God knows you should never get into programming unless you have just lost all hope and have nothing better to do. Or maybe you just like the punishment.
Of course, I am one of the cursed ones who enjoys suffering. So I have spent most of my adult life programming computers for money. But this has just made my attachment to the “need to know” method for managing computer knowledge even stronger. I spend my life knowing stupid and obscure things about a giant meta-recursive ball of mud made out of code that is now almost old enough to drink in most states. I need to allocate my brain space very carefully.
So it is surprising to me that there is so much recent discourse and pearl clutching about how “the youngs” don’t understand computers “the right way”. The thinking goes, apparently, that modern computer systems are so good at hiding the obscure and archaic mechanisms by which they actually work that many younger users of these systems have little or no understanding of these critical bits of ancient lore. This, people say, means that they will be mentally and intellectually crippled in their later lives because they lack all of this foundational knowledge. What, you may ask, are these poor people missing out on? I’m not exactly sure but a few things I’ve seen include:
Primitive UI for organizing their data into ad hoc and impossible to navigate hierarchical databases made up of “files” and “directories”, but no real way to actually find the data you want at any given time by any attribute except maybe the name of the file you happened to put it in.
Building up their core intellectual strength by understanding the ancient language of “the command line shells” and pondering why we still have to put up with syntax built when creating a parser for something usable would have used more than the 64K of memory available to us at the time.
Automating simple things using “batch files”.
Special syntax in search engines that does not actually work anymore.
Learning to manually allocate and keep track of memory buffers using a language and runtime that will silently shoot you in the head when you fuck up is apparently very important for your development as a software engineer.
Pick any tool stack and you will find dozens of people saying that it is the most important thing in the world to know all of the details of how said stack can make your life a living hell.
If we were living in the year 2045, I imagine that the olds then would be scolding the youngs about how they have no understanding about how basic and foundational things like @-replies in chat rooms and #-tags work because the interfaces for online interaction have gotten so advanced … wait that will never happen never mind.
What is going on here is yet another instance of the universal standard nerd intellectual interaction, which in this form is phrased: “What I had to learn, and I found very interesting, must be the most foundational and important thing in the world and everyone must also learn it.”2
Of course this is just not true. Most of these things were just bad puzzles that people had to solve back in the day because nothing better had been implemented yet. Folks were forced to use a lot of awkward implementations of a lot of marginal ideas. There is nothing good or foundational about them. Like perl, they were a set of imperfect and often painful to use tools that happened to solve some problems people needed solving.
However, often old nerds just can’t tell the difference between having learned something actually interesting and having been beaten in the face with the limitations of the stupid machines and learning how to work around the pain. It changes your perspective if you think of all the weird and complicated technical things that you have learned over the years as brain damage/trauma instead of intellectual trophies to be proud of. The truth is that most things are the former instead of the latter.
So as I said in the open, it is perfectly rational for people to ignore this bullshit until such time as they are forced to deal with it. There is no reason to waste brain space on stupid technical puzzles from the past when you could be using it to store the things you really need to solve your problems.
With that main point out of the way you might now be wondering to yourself: “But psu, what are the important things related to computer systems that you should waste brain space on”? If you have read this web site for a long time you would have a pretty good idea about what my answer would be. But here are a few things:
If you find these ideas too technical and theoretical I would not blame you. They do lean to that side. But what makes them important to me is that they are all general conceptual issues rather than solutions to specific technical puzzles. I like having a nice set of general boxes to put problems into rather than just a list of specific solutions to things. There are a lot of patterns in computing and programming and thinking that way lets you take advantage of that.
Most other things you run into in our industry are just hype for some specific solutions to a particular set of specific issues, and are thus less interesting. Learn that stuff if you need to know it, but otherwise there is no shame in ignoring it. With the right foundations you can figure it out when forced to. And if you are never forced to then you will not have wasted your time.
It is an old joke that “perl” stands for “Pathologically Eclectic Rubbish Lister”. Which fits. See https://en.wikipedia.org/wiki/Perl for details.↩︎
Other forms of this line of thinking include: “this works fine for me, it should be fine for you too” and also “oh, we designed the system that way because we don’t see a reason to want to be able to do X easily, so you should not either”, and finally, “making sure that the system design maintains technical attribute Y is more important than anything else even if no other single person in the universe cares about that.” Finally, I guess in the FromSoft community there is a more concise way of saying this, which is just “git gud, scrub.”↩︎
Some time ago I wrote a somewhat tongue in cheek bit about why football in the NFL is the best sport to watch. I still stand by some of it, but the NFL has had its problems over the last almost 15 years and for various other reasons I have also had less of a personal interest in the game lately.
The opening of that piece also has this quip in it:
If your favorite sport is soccer then we can just agree now that you will hate me and I will feel sorry for you.
Which, well, I guess brings us to the subject at hand. As we wind down to the end of English and European football seasons, let us reflect on how in the fullness of time one can change one’s mind about things, and discuss what is great about soccer.
As before this is mostly sort of serious, but not really. And as before only some of the reasons for liking the other football now have anything to do with the game itself.
I mentioned this in my previous short thought about English football. The timing of the games on this side of the Atlantic is just great. You can enjoy a few nice games every weekend morning, and then go about the rest of your day without sports getting in the way. Even during the NFL football season, you can easily watch both the Premier League and Red Zone on Sundays. Since no Premier League game ever runs longer than two hours (great!) you always know that all the soccer games will be over by the time “the witching hour” comes around on Red Zone. The perfect crime.
Note that this guaranteed fixed time window is the best thing about actually watching the games. Even in other competitions where draws are not allowed, like various tournaments, the World Cup, and so on, the games will never go more than three hours. Meanwhile there is no game in American sports that does not get stretched to two and a half to three hours with commercials and such. Even the new fast baseball is still much longer than a soccer game.
But the very best thing is that you never have to be up near midnight watching some stupid sports game because of unavoidable psychological programming that you picked up in your childhood. This is even better than consuming American sports on the American west coast.
While I’m sure some of this is just the relative novelty of my experience, I feel like the commentary on Premier League football games is in general much better than the current crop of announcers in the NFL, NBA, and national MLB. The NBA in particular is in a horrible drought right now, with everyone waiting and praying for Mark Jackson and Jeff Van Gundy to just fucking get out of the way so at least Doris Burke can keep Mike Breen company on the big games. But this will never happen.
The NFL commentary is on the whole better than the NBA, but also generally anonymous and without style. Certainly nowhere in American sports will you get the literary turns of phrase that happen half a dozen times in every game where Peter Drury is doing the call. Every game he does generates at least one or two twitter clips from the Men In Blazers. No American announcer even comes close.
The analysts for the Premier League broadcasts are also mostly better than the NFL and NBA ones. Not only do they all seem able to provide you with early Tony Romo levels of insight into what’s going on in the game, they also have no fear when it comes to criticizing the players, coaches, and refs doing the playing, coaching, and reffing. No missed strike is a good try, it’s always “he should have done better”. Questionable tactical changes are called out. Dives are called dives and dodgy decisions by the ref in the penalty box are given no mercy. Very refreshing.
In almost all of the international soccer leagues relegation means that teams cannot tank. Let us review.
In American sports, because they are communist, when you do badly and come in last in the league, the league tries to help you out by giving you the best shot at new young players to make your team good again. No one would actually publicly condone losing on purpose to get good draft picks, but at least in the NBA there is a verb (tanking) that means basically “lose as on purpose as you can to get good picks”. In addition, there was even a team (the Philadelphia 76ers) that engaged in what they called “the process”, which involved tanking over multiple seasons to get many good draft picks to become more competitive. They did this for about five years and obtained a generational player (Joel Embiid) who has led them to several thrilling exits in the early rounds of the playoffs. So there is that.
NFL teams can’t really explicitly tank, but under good management bad teams can get good if they understand how to use the draft system to their advantage. As far as I know it’s impossible to tank in baseball. But baseball sucks anyway.
International football teams cannot tank at all. If you lose you don’t get second chances. You get kicked out of the league into a lower tier league. You lose a huge percentage of your revenue. You have to fire everyone. You lose all your players. And then you have to win again in the lower league to get back into the higher tier. This system is called relegation and promotion. Typically the bottom three teams in the league get kicked out (relegation) and replaced by the top three teams in the tier below (promotion). In English soccer there are 6 or 7 levels of professional and semi-professional leagues … so the potential for collapse is almost infinite.
This is a brutal and Darwinian way to run a sports league, and it’s also genius for two reasons:
Now there is a super compelling reason to watch the bad teams as well as the good teams.
Watching the promotion fights for the lower tier leagues is just as fun as watching the championship fights for the top tier.
This year we didn’t find out which teams got dropped from the Premier League until nearly the very end of the very last day of the season. As a neutral fan you can’t beat this for entertainment. As an actual fan it must be the absolute worst.
Again, I think some of this feeling is just the relative novelty of my experience, but I feel like the drama in the Premier League is unmatched. Certainly this year was crazy in ways I have never seen in other sports. Every game seemed to have high stakes of some kind. More teams fired their head coaches than did not fire their coaches. Several teams fired their managers several times. It was unreal.
And then on top of all of this was the relegation drama, plus the drama of the evil Death Star of a Man City team just inevitably chasing down Arsenal and crushing them and everyone else to win the league.
And then on top of all that is Peter Drury and his buttery delivery making even a throwaway game between Man City and a bottom feeder into the thing that you must watch this morning even though you know you don’t need to watch it.
I wonder if next year will be like this?
I have, of course, spent most of my time here covering the structure of the league and the aspects of that structure that make for good entertainment. I haven’t said too much about the game itself because honestly it kind of baffles me. I stare at it and I can make certain obvious observations about what is going on. Team A may be extremely effective at passing the ball around while not losing it, creating scoring chance after scoring chance. That means they are, in some sense, “winning”. Or Team B may be defending for their lives just trying to survive, and they often make it!
What I don’t really understand are the mechanisms that cause these game states to happen. I can watch (say) an NBA defense and tell you if they are trying within 5 or 10 minutes of game time. Similarly, I can explain to you why the Patriots are pathetic, or the Buffalo Bills dominant, in any given week. These things are intuitive to me.
I don’t really understand how soccer works internally; I only understand the outcomes. What exactly happens in “the midfield” seems very important, but I have no idea what it means. Also, seeing bad defense in real time seems to be out of my cognitive reach. Earlier this year when Tottenham gave up five goals in twenty minutes to Newcastle (just wow) all I saw were the balls going into the back of the net. The analyst on the TV went on and on about how the Tottenham defenders were completely clueless, but I didn’t really see it like I would see the same thing in an NBA game. I suppose with time this will get better.
Happily, I can fall back on the endless analysis and gossip cycle, which the Premier League has brilliantly imported from the NFL and made their own. Coaches are tactical geniuses or slobbering morons, depending on their results. Players move from team to team, or more importantly, are rumored to be moving from team to team for various reasons bordering on soap opera, all of which mostly just keep the endless news cycle churning along. The most amazing thing is the extent to which the language and narratives are all the same as in the American sports even though the game itself is so completely different than what we have here.
It makes me feel very at home.
Of course nothing is perfect, so I have gripes. They are relatively few.
Timekeeping. One aspect of soccer that I just will never understand is how people put up with a game that is supposed to have a fixed time limit, but actually will just keep going until the ref decides things are done. It’s horrifying.
Offside. Just how does this rule work? Also why, in this game where when the game ends is managed with all the precision of a six year old deciding when to get dressed for school, do we have computers figuring out whether a player was offside to a level of precision that is apparently measured in millimeters? This makes no sense.
Similarly, does anyone understand what a hand ball is anymore? This is almost as complicated as the catch rules in the NFL.
I don’t like penalty shootouts. But what else are you going to do? I don’t know. But I still don’t like them.
What is the deal with substitutions? Why is the game structured so that the last 10-15 minutes of every match seem to be played with 10 zombies with leg cramps and a goalie against 10 other zombies with leg cramps and a goalie? OK sometimes it’s maybe 7 zombies and a few fresh subs against 6 zombies and a few more fresh subs. But all of these situations bring up the question: why not just have more subs? Or, even more radically: why not allow a player that you subbed off to come back later when they have had some rest? Baffling.
Finally, like hockey, soccer seems to have a fundamental balance issue wherein a team that is missing one player does not immediately lose. I will forgive the game this foible, as it is likely a result of the following core fact: actually scoring a goal is a nearly impossible act requiring incredible feats of athleticism and coordination along with the luck to have 15 external things that you can’t control go exactly right all at the same time.
And yet Liverpool can beat Man United 7-0 on a random day, when all of those things go exactly right seven times in a row.
What a crazy game. Even when it sucks it’s inexplicably cool.
For some insight into where the Premier League came from you should read The Club: How the English Premier League Became the Wildest, Richest, Most Disruptive Force in Sports. The inspiration that it drew from American sports, and in particular the NFL, is fascinating.
Football Cliches is a great podcast about the language and culture of English football. Very interesting side dish to the main diet of news and analysis.
Shout out to the NBC Premier League team. It is a great injustice in the world that these people don’t get to do the World Cup and we get that team from Fox instead, which is mostly forgettable at best.
I am apparently in a relatively small minority of humans who use a camera other than a phone for taking pictures. I do this a lot less than I used to, because the phones have gotten really good. But I still do it.
Most of my time with non-phone digital cameras has been spent using Nikon DSLRs, which were mostly great, but always too big. I have also used the Olympus (now “OM Digital Solutions”, but I’ll still use the old brand name) mirrorless cameras, which were great once you went through the pain of setting up the four modes you want to use them in, and also deliciously small by comparison to the Nikons.
Then Nikon got into the mirrorless game, making cameras almost as small as the Olympus, and generally about as great as the DSLRs. At this point I figured my time with Olympus was over, especially since the Olympus company sold their camera business to some nameless private equity firm which brought the brand back as “OM System”, which is a piece of branding no one can love. I concluded that given that Nikon was still making new things, and “OM System” was maybe not, eventually I’d be getting back into the Nikon stuff anyway. So I jumped back in.
The Nikon Z6II is a mirrorless camera body that feels almost exactly like a Nikon DSLR to use (which in turn felt almost exactly like a classic Nikon autofocus film body to use, only better). It can really do almost anything you want. And it unquestionably does two things much better than the Olympus body that I had been using the last 5 or 6 years: autofocus (tracking autofocus in particular), and image quality in bad light.
The tracking AF is what sold me. I don’t actually shoot a lot of moving things, but the convenience of being able to put the focus box on a thing, hit the focus button and then recompose however I want while the camera just stays locked on to the focus spot was just great. In most situations this gets you results that are identical to what you used to get by locking focus and recomposing. But if the thing in the box moved around at all, the camera would generally hold on to focus like magic. I loved this every time I used it.
So I spent a few months happily taking the Nikon and the kit lens (an eminently practical 24-70mm F4 zoom lens) around and plugging away with the focus box. Then I put the camera in a box and barely used it.
Later I started thinking about expanding the lens collection. I like zoom lenses. They are convenient and at this point in my life I am too old and lazy for the hassle of constantly changing single focal length lenses around. I generally want three zoom lenses to carry around, even though I usually only use two of them.
Most of the major camera lines have these lenses. But I hesitated getting the Nikon wide zoom and long zoom. To explain my hesitation, I need to explain some boring technical things.
When camera nerds talk about lenses they tend to refer to the lenses in terms of their focal length. The focal length of a lens is nominally the distance from the optical center of the lens to the capture plane (the sensor, say). Shorter focal lengths have wider fields of view. Longer ones have narrower fields of view.
The Nikon Z6II has a sensor in it that is the same size as a piece of 35mm film from the old days, so we can use the standard focal length terminology from those days. So the range of focal lengths for the three zooms listed above would be
The Olympus cameras that I use have a sensor that is smaller than a piece of 35mm film. In fact, it is strategically set up so that if you are using a lens with some given focal length, the field of view you see is the same as using a lens with double that focal length on a 35mm camera. This rule is easier to show with an example than to explain: a 12mm lens becomes 24mm, 50mm becomes 100mm, etc.
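If you want the arithmetic spelled out, here is a trivial Python sketch of the crop factor math. The 2x factor is the standard Micro Four Thirds number; the focal lengths in the loop are just examples, not a specific lens lineup:

```python
# Micro Four Thirds sensors have a 2x "crop factor" relative to 35mm film:
# a lens gives the same field of view as a lens with twice its focal
# length would on a 35mm camera.
CROP_FACTOR = 2.0

def equivalent_focal_length(actual_mm: float) -> float:
    """35mm-equivalent focal length for a Micro Four Thirds lens."""
    return actual_mm * CROP_FACTOR

# Example focal lengths only, chosen for illustration.
for mm in (8, 12, 25, 40, 100, 150):
    print(f"{mm}mm lens -> {equivalent_focal_length(mm):.0f}mm equivalent")
```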
This means that the three lenses I want above are something like this:
Because the “actual” focal lengths of the lenses are shorter than what you would build for a 35mm camera, the lenses themselves are inherently smaller for the same range of field of view. Plus, the lenses have to cover a much smaller chip area with a nice image, which lets you make them a lot smaller by volume.
Meanwhile, even though the Nikon mirrorless body is in fact a lot smaller than the old DSLRs, the lenses you put on it are basically the same size, since they have to cover the same size of sensor. In fact they are usually even a bit bigger than the old F-mount 35mm format lenses, because the Z-mount has grown in diameter for various technical reasons.
In any case, the result is that the Z lenses are much bigger than the Olympus stuff. So in the end the lenses become the limiting factor in size, weight, and volume when carrying the camera around.
If you compare the size of camera+lens for the two systems your universal conclusion is that the Nikon lenses are longer, fatter, and heavier than the Olympus lenses that cover similar equivalent fields of view.
Here is each camera with a midrange zoom attached. The Olympus is a 12-40mm F2.8 (24-80mm equivalent) and the Nikon is the 24-70 F4.
As we observed above, the bodies are not that different in size (although the Olympus is a bit smaller in every dimension, which adds up). But the Olympus lens is a lot smaller than the Nikon even though it’s a stop faster, and this is one of the more compact Nikon lenses.
Next, the Nikon has the same zoom as before, and the Olympus has a zoom lens with twice the range as before (12-100mm F4 (24-200 equivalent)). The total Olympus package is still a lot smaller because the width of the camera body and the diameter of the lens is so much smaller.
Here is each camera wearing the telephoto zoom I’d use on them. The Nikon lens is a 70-300mm F4.5/5.6 F lens with an adapter, and the Olympus lens is a 40-150 (80-300mm equivalent) constant F4.
The Olympus zoom expands a bit when you actually use it, but is still less than half the size and much lighter. And it’s a stop faster at the long end to boot.
Unrelated related note: I also have the 40-150 F2.8 Olympus lens, which is an incredible lens. Even at two stops faster on the long end it’s still a bit smaller than the Nikon. It also has a bad retracting lens hood design that caused me to smash the filter ring on it a couple of years ago and I could not figure out how to get it fixed. This might have made me mad enough to try the Nikon stuff at the time, or it might be completely unrelated. I’m not sure.
At this point the reply-guys in the audience are telling me that real men shoot with prime (single focal length) lenses, and that surely I can find a small Nikon body/lens combination there to make me happy. To which I say … remember how small the Olympus primes are? You can’t win there either.
Here is a Nikon 35mm prime on the Z6 (with an adapter, because I don’t have the Z lens, which is a lot bigger than this one). The Olympus E-M5 body is smaller than my other Olympus, but is the one I would use with this lens because the colors match.
Here is a picture I copied from the internet that has the Z version of the lens on some anonymous Z body. Looks about the same size as my monstrosity above.
Conclusion: the prime lenses will lose too.
Of course, size isn’t everything. In theory you give up a lot going to a smaller sensor, especially in terms of sharpness and noise performance in bad light. In addition, Nikon, for all its faults, is pretty good at building autofocus systems and reasonably straightforward interfaces to run them. The tracking focus system that I described above is great. Olympus has never really been able to keep up. And, there is the overwhelming possibility that by the time I post this page, “OM System” will announce that it is disappearing into the trash bin of camera brand history.
And yet I dithered. I spent almost two years using the Nikon, and never felt like I wanted to keep going with it. It turned out that if you examine the technical advantages the Nikon allegedly has a bit more carefully, you can kind of tell they are just the kinds of ghosts that nerds like me chase as an excuse to spend money.
Yes, the autofocus is great. But I am really bad at actually taking advantage of the things that it is great at. Shooting good in-focus pictures of things that are in motion is a skill that is not easy to learn and certainly not easy to keep if you don’t practice a lot. I mostly shoot pictures of stuff that is either standing still or close to it. So I never practice that stuff. So even if the camera were perfect I’d still fuck everything up because the framing, or the timing, or something else would be wrong.
Also yes, the bigger sensor is better. Especially in low light. But honestly I take most bad light pictures with my phone now. It’s easier.
So in the end it turned out that size actually is everything. The Nikon Z lenses are, simply put, really large. Even the prime lenses are big. And the wide/tele zoom lenses that I would have wanted to get are super large. I knew I would want to use them. But I also knew I would never want to carry them. So for almost two years I sat around paralyzed. I really liked the camera, but could not get myself to buy the lenses.
And finally, to end the story … during this time that I was dithering Olympus, sorry “OM Digital Solutions”, released not only a new body that is a bit better at the things my current body is bad at, but also a set of newer and even more deliciously small lenses: the small 40-150mm (80-300mm equivalent) telephoto zoom in the comparison photo above, and a really useful super wide to normal 8-25mm (16-50mm equivalent) zoom. So I picked up the lenses on sale, and will keep them in my bag for those situations where the 12-100mm won’t do the job (almost never).
Inevitably I’ll probably pick up the new OM body, even though it’s not that much better at this stuff, and the UI is still a trash fire. Hopefully the OM system keeps its head above water for as long as I need it to. Or at least long enough to sell me the one body that I’ll use until Apple figures out how to fold a 200mm lens into an iPhone. At that point I’ll well and truly give up on carrying cameras around for good.
Late Appendix:
Aside from the size and handling, in retrospect another reason I went back to the Olympus cameras is that with the Z bodies Nikon decided to standardize on yet another useless and idiotic card storage format which makes me cart around yet another card reader instead of using the SD slot that’s built in to my computer. The only saving grace is that the Z6II has a single stupid card slot and another spare SD card slot for making backup copies, so I ended up using the “spare” slot all the time and completely ignored the expensive main card slot. I’d have rather paid $200 less for a camera that had two “inferior” SD card slots instead.
And yes, we are at a point in the camera industry where the storage cards that a body uses are probably the most interesting distinguishing characteristic between different brands. If I try full frame again I’ll probably defect to Canon just because they don’t use the stupid cards.
In Pittsburgh, Chicken Latino is a long-time favorite Peruvian-style roast chicken joint that also serves a variety of other kinds of things, all in portions that are too large.
Chicken Latino is also, paradoxically, the home of the cheeseburger in Pittsburgh that is probably second on my list by overall objective “quality” but first on my list of emotional favorites. In these times when people can be remarkably pretentious and self-centered about getting burgers and fries made from only ingredients of the highest quality and correct origins, Chicken Latino takes frozen patties and “steak fries” that just fell off the Sysco truck and turns them into a burger and fries better than almost any other in town. The fries are the most puzzling part of this equation. They really should not be good, but they really are better than almost any other fries in this city, where almost no one seems to know how to make fries.
Anyway, Latino has been open for about 15 years, but it wasn’t until about five years ago (I think) that they started serving a dream dish for anyone who loves Chinese food, Peruvian food, double starch, and Chinese/Peruvian/Pittsburgh fries fusion cuisine. I refer, of course, to Lomo Saltado, which is basically a sort of beef stir fry served on top of yellow rice and french fries. Brilliant.
Here is what it looks like:
Who can’t love this?
If I’m honest, the beef stir fry part of this dish is its weakest aspect by far … but the fries, and most importantly, the rice mostly make up for it.
Of course, even something as perfect as this combination can be ruined if you try hard enough. I bring this up because at some point last year I was excited to finally sit down in a new place in town, one that a lot of local foodies appear to like, that had an upscale version of this dish on the menu. So of course I ordered it.
In this implementation, the meat was a lot better than the cheaper cut you get at Latino, but overall the dish was bad. The dish was bad for two reasons. The first was that the fries were bad.
I will not rant here about the bad fries. Bad fries are just a fact of American life, I think. Whereas in (say) France there are places whose entire existence is dedicated to serving nothing but steak with perfect french fries, even fancy places run by fancy chefs in the U.S. will serve you sub-par french fries on a routine basis. This is kind of unforgivable, but I guess fries are also technically a tiny bit demanding to do well on a large scale. But really it’s still unforgivable.
The second thing that ruined the expensive plate was that the rice was unforgivably bad. This I will rant about. It tasted like day-old rice that you have left partially uncovered in the fridge and then reheated in the microwave for about 30 seconds while forgetting to add a small dribble of water. You bite into it and the kernels break off in your mouth in a mealy, semi-crunchy, tasteless mess. But you are either too lazy, too tired, too hungover, or too hungry to fix it now, and just dump your food on top hoping the sauce from the food will finish the job of bringing the rice back to life. But it does not.
All this for $29 a plate.
This is, of course, not the first time I have gotten unforgivably bad rice in a restaurant. I sent the rice back at a fancy French place in Paris once, and they sent me back a perfectly great risotto, which for some reason they could make better than plain white rice. I also got the single worst bowl of white rice that I ever paid money for in my life at a fancy special dinner at a long standing and well loved local Asian fusion joint (it was Soba) in Pittsburgh. That rice was like the fridge rice above, except they hadn’t even really tried to reheat it, I think. This happened a long time ago, and people say I should be over it by now. But I’m not.
I am here to say that it does not have to be this way. Unlike fries, rice is completely trivial to cook well. Here is what you do: buy a rice cooker, rinse the rice, add the amount of water the cooker tells you to, and push the button. That’s it.
The cooker will even keep the perfect rice warm and perfect for the entire restaurant service. I doubt that there is any single food product that requires fewer brain cells to do well than perfect white rice in a rice cooker. And yet the evidence before us is that people care so little about rice that they won’t even do the bare minimum amount of work needed to make it decent.
At this point in the article at least 15 reply-boys (and girls) will stand up and declare that no functioning human being should need a dedicated kitchen appliance taking up their precious counter space for the sole purpose of making sure that the rice is good every time. I am here to say that these people are wrong, because their framing of the question is wrong. The question is not “can I cook OK rice on the stove (or more recently, in the microwave)?”. The question is: “can I push a single button and get perfect rice of any kind any time I want without looking at the cooker again until it’s done?” … and then also keep it warm and perfect for between 8 and 24 hours afterwards.
The second thing is what rice cookers do, and if rice is important enough to you that it will be the main starch in more than 15 out of 35 meals every week, then you will do the right thing and just buy the machine.
But, this attitude about rice is rare here, which is why rice always sucks in the U.S. In the U.S. (and most of Europe, really) rice is at best a second tier auxiliary starch that is only used once in a while. In baseball terms, it’s not “an every day player”. So no one actually cares if it’s good or not.
Let us contrast this situation with Japan (and most other places in Asia). In Japan you can walk into any 7-11 store anywhere in the country and walk over to a cooler with pre-made sushi things in it and you will get better rice, even though it’s cold, than what is served in about 99% of all places that serve rice in the West. In particular the rice wrapped in sweet tofu skin is always great, from the middle of Tokyo to any random small town with a train station 7-11 no matter how few people live there. This is because they have a rice cooker and they give a shit.
I have often joked that one of my food dreams would be for a single Japanese 7-11 to open within a reasonable driving distance from my home. Not only would this greatly improve upon the quality of snack foods available in the area, it would also instantly become the best sushi restaurant in town and the best cheap East Asian fast food in town. But really this dream is more about just having a place somewhere that cares about the rice as much as you are supposed to.
The rice is important. In sushi, it’s as important as the fish (remember: sushi means rice). In Chinese food it’s as important as all the other dishes on the table, because all of them are improved when put on top of rice. Rice should not be an overlooked side dish that is little more than some extra food cost. We need it to be elevated to the same level as potatoes, pasta, the fancy sourdough bread, and all those other hideous whole grain products that, unlike rice, don’t really taste that good. It should be a whole menu unto itself.
The rice is important.
For the record, here is how some local places do on the rice question:
Chicken Latino.
Rose Tea in Oakland.
Cafe 33 in Squirrel Hill.
Chengdu Gourmet/Chengdu 2 (could be better, but not bad).
Mola. Decent sushi rice. Maybe as good as Chaya was, which was the standard back in the day.
Penn Avenue Fish Company. Their rice is good enough for the rice bowls and stuff but the sushi rice is sub par.
Salem’s.
Turkish Kebab House.
Most of the Indian restaurants, although Indian rice is a different aesthetic than East Asian rice (it’s not sticky, what the hell?).
Well, this aged well.
The last few years have been mostly down for the systems that we call “social media.” These systems were once thought to be the pinnacle of the late-capitalist machine for consumer surveillance and arbitrarily profitable advertising. Then people finally seemed to be having second thoughts about using systems that record and broadcast their every thought and action to the entire Internet at once, all to sell clicks and ads for someone else’s profit. They didn’t think too hard about not doing this anymore, but the thought did cross through the collective consciousness, if only for a microsecond. Then the kid pictures continued to go up and the “day in the life” videos continued to be posted. Ah well.
Then in a fun twist, a self-centered billionaire narcissist dipshit asshole made a joke about buying twitter for 44 billion dollars, and then it turned out that, joke’s on him, he had actually made an offer that he could not back out of. And that went really well.
So now we are in a situation summed up by this message:
This same sentiment is true for most of the major social platforms. Certainly the “big three”, Facebook, Instagram, and Twitter, are now mostly brands shilling brands mixed with ads for other brands, and then if you are really lucky there will be a message once in a while written by someone you actually told the system you wanted to follow. Tiktok is also like this, but there the difference is most of the stuff is still fun.
I am not too sad about these things. I think these systems were bad. They were a bad idea. They were badly designed, badly architected, badly implemented, and badly managed. If in fact we somehow manage to make them die, the world will be better without them.
Consider that the pitch for these systems is as follows:
Get a lot of people to sign up to provide you with all of your content for free.
When one out of every several million “creators” gets popular, give them channels to make “deals” with “brands” so they can get paid a pittance or maybe if they are really lucky find a real job in some other part of the industry where they don’t have to crank out a few minutes of content every single day to feed the beast.
Convince your real customers to pay you money for ads to feed to the people while they scroll the free content.
Implement all of this with some of the worst UI ever conceived by man.
Wrap it in a gift box labeled “public town square” or “important intellectual engagement” to make people think they are doing something more than just scroll cat videos, sports highlights, and soft core porn.
It has always been puzzling to me that this pitch worked so well that no one can apparently do any kind of marketing without it. How did we go so wrong?
Now I’m going to pick on twitter some more, because it’s just that easy. Twitter, to me, reached its peak sometime between 2010 and 2015 when it was good for one single thing: watching online commentary on a live event that everyone you follow on twitter was watching at the same time. For me this was NFL football games and sometimes the NBA playoffs. I imagine for a lot of the rest of the world the coverage of European and World soccer had a similar feel. For the non-sports nerds maybe it was TV shows, although no one watches those “live” anymore.
What made this fun was:
Fun interesting people were commenting on what you just saw.
You could throw your dumb thoughts into the firehose and feel like someone might be seeing them.
That’s it. That’s what twitter was good for.
Twitter is bad at literally everything else. It’s not a good chat system (all the threads are upside down). It’s not a good way to manage your content streams (the single timeline makes it too easy to miss things). It’s not good for posting anything longer than a text message, and even if it were it would be a terrible place to read such things. Finally, its implicit broadcast structure makes it too easy, in fact almost inevitable, for any random post of yours to get seen by too many of the wrong people, with the result being that your account is irreparably destroyed by spam from every single asshole on the Internet.
So yeah, twitter sucked in all ways. But at least it was good at that one thing.
But now it’s not even good at that one thing. With the Chief Dipshit Officer in charge all your journalist “friends” (or their bosses) who used to watch football with you have finally noticed that it’s not good for them to be broadcasting their thoughts in this way so that entire river of takes has dried up. So now twitter is just bad.
But you might have heard of a new kind of system named after an elephant that the nerds have been toiling over for the last decade while everyone else mostly either ignored them or just pointed and laughed. Yes indeed this system exists, but don’t get too excited about the hype. They did fix one or two bad things about twitter. Let’s see if you can guess which ones.
No, the threads are still upside down.
No, the reading interface is still terrible.
No, it’s still all timelines, so everything you might have wanted to see just gets washed away if you miss it.
No, the UI for “conversations” there is still mostly the same level of painful.
Yeah, they got rid of search, which you never once used on twitter ever.
The one thing they did do was break up the back end into a lot of separate servers run by independent individuals. So instead of broadcasting your precious thoughts to every asshole on the Internet everywhere, all you do now is broadcast them to just the assholes on the server you picked to join. To get more assholes to see it, users from the other servers have to follow you to open up a tube to get your posts from one server to the other.
This, the nerds say, will fix everything, and create a system that can truly fulfill the great potential for … something … that systems like … checks notes … twitter … could have reached.
I’m here to say that this is not true. The best case for these systems is the following:
A bit less of the asshole wave from twitter. But if your account is popular, not that much less.
None of the fun things, because the big media platforms have realized that social media is a dead end, and social media with deliberately limited audience reach is even more of a dead end.
So, no more watching TV with the whole Internet for you. Instead all that they have built is, best case, a local chat room with one of the worst interfaces for a chat room that you can possibly imagine building.
Here is the thing. And please believe me when I say I am 100% serious about this take:
Everything these systems are trying to be already existed 40 years ago, and it was called USENET.
That’s really all you need to know about this.
The most amazing thing about USENET is that even with only ASCII terminals to work with, the news reading interface is still better than every Internet forum and shared media site that I have ever used. If you think about what devices could do then (ASCII only) versus what a phone can do now, the fact that 40 years later everything is still worse is a pretty damning condemnation of the intellectual capacity of the human race.
Anyway. My only deep thought about why social media systems suck is essentially the same thought as in my original piece. You can’t talk to the whole Internet at once and have a good time. It’s just not possible. So whatever interface we build for this is going to have to be cut up into smaller pieces organized by common interest, much like the old Internet Forums and USENET newsgroups (and these days, slack servers, and discords run by brands). Yes yes, this is what “federation” does, in theory. But again, this is the most obvious fact in the entire world, since even I thought it up.
It remains to be seen if humanity can come up with a better way to interact with itself online. Obviously I am skeptical that it can be done. But I’m not the one to build it anyway. I post words on the Internet that you can’t even write comments on, because comments are stupid.
A few years ago at “peak pandemic” I wrote a short blurb about fried rice where, among other things, I praised the tireless work that Uncle Roger had been doing to defend this staple of East/South Asian cuisine against a seemingly never-ending onslaught of stupidity, overthinking, and general cluelessness.
Sadly, his work has not been enough. Even now in 2023 we who love this dish are still under a constant barrage of bad fried rice recipes. So I felt I had to act. Here I will repeat my long standing simple fried rice recipe, but with a few refinements to reflect the added insight about the dish that I have gained since I wrote that down more than 15 years ago. In addition I’ll provide some reference links to other places to look for good fried rice advice. With all of this material in hand you can now safely ignore any new suggestions for how to make fried rice coming from the mainstream food media of the damned (New York Times, Bon Appetit, etc) and just bookmark this page instead.
Fried rice is easy. Don’t overthink it. Almost every fried rice recipe that is posted on the internet is written for equipment and quantities of food that you do not have and cannot handle.
You do not have a gigantic 16-20 inch restaurant wok sitting on top of a rocket engine burner. You have (maybe) a 12 inch skillet or (if you have been listening to me) a 12 inch non-stick wok. Maybe you have a 14 inch wok. Great for you. They still tell you to put too much shit in that pan. This means you end up steaming a big pile of rice instead of frying it.
A recent recipe in the NYT told you to pile all of the following material into your poor 12 inch skillet:
And this is for “four servings”. NYT servings are huge.
Anyway, you are now doomed. It doesn’t matter what else you do. You will have a mess.
So, on this page we are going to start with the easiest recipe, with a volume of food that is manageable. Then I’ll tell you some cool variations, including one of the best fried rices ever, which was published in the New York Times, of all places, more than 10 years ago. They have this recipe in the bag and still trot out all kinds of terrible bullshit anyway. I don’t get it. Anyway, here we go.
In its simplest form all you need for fried rice is this: a few cups of cooked rice (day-old and cold is ideal), two eggs, a scallion or two, a bit of neutral oil, salt, white pepper, soy sauce, MSG if you have it, and optionally some minced garlic and ginger.
Here is what you do.
First, dice the scallion into little pieces and put them in a bowl. If you are using garlic and ginger, mince that stuff too.
Second, heat your pan on medium-high heat. When it’s hot, add a teaspoon or two of oil (or if you want to live large, use lard) and crack the two eggs into the pan. Stir them around until they are 1/2 cooked.
Now add your scallion/ginger/garlic. Mix.
Now add your rice, break it up into little pieces, and mix it up with everything else. Work really hard at this; you don’t want any big lumps of rice, but rather all separate kernels.
When the rice is good and broken up add salt to taste, a few sprinkles of white pepper and the MSG if you have it. Then toss a 1/2 to 1 teaspoon of soy around the side of the pan and mix that in. If you have dark soy add a tiny bit to get a deeper brown color.
Mix mix mix mix mix until it looks like fried rice.
You are done.
It will look like this
OK. The first variation is to add meat. Whatever you pick, you don’t need much. 3 or 4 oz is usually enough. If you want something really meaty, you could go up to 6 oz or maybe half a pound. You can add a lot of different kinds of meat:
The game here is always the same. First fry/saute/brown off the meat so it’s completely cooked. Put it into a bowl. Then do the same thing as we did above, and at some point mix in the meat.
Next, do everything we just did. But at the end mix in frozen peas (maybe even peas and carrots). Classic Chinese American staple:
Next, we can do more interesting vegetables than just the scallions above. Shred up any sort of green veg. that cooks fast:
Saute the vegetable in the pan first, like you did with the meat. When the vegetable is done, do the whole egg fried rice thing above just piling everything on top of it. It will be great. Here we have put all these ideas together for Chinese sausage and cabbage fried rice, with a fried egg on top and chili crisp:
Here is another example with kielbasa and cabbage in it:
And now you might be wondering about the egg on top with the crispy nuggets of something.
This is one of my favorite versions, which comes from Mark Bittman at the NYT, via the Jean-Georges restaurant in NYC. That a French person has one of the best fried rice recipes in the world is certainly … something.
Anyway, you can read the recipe here. Basically you take the minced ginger and garlic and brown them in a small pan until crispy. Then you do the fried rice above, but without the egg, and with leeks as the main vegetable. Then you assemble it by putting a fried egg and the crispy garlic on top. Fry the eggs in the oil you used to brown the garlic and ginger. Stupendous.
Bittman made a good video about it too. Watch that here.
There are a few more fancy techniques for incorporating eggs into fried rice that I have not gone over here, but the references below will show you how to do that. Especially Chef Wang. Go to town and have some fun.
For more fried rice insight start at these places:
Chef Wang. His channel has 4 or 5 great videos on this subject, including one on the fanciest most expensive fried rice ever. The egg strand technique in that video is incredible. I wish I could do it, but I’m too lazy.
Also, the Chinese Cooking Demystified people have their own insights, including how to make fried rice without waiting for the rice to sit overnight. Watch their stuff too.
Finally, here is a link to my dumb idea for a fusion fried rice food truck.
My plan today had been to just say “Happy March 1107 2020”, as we have yet again passed one more go around the sun since that great stupidity started. But instead I have a different and unexpected grudge to finally let go of.
In 1981, which is, I guess, 42 years ago, I was in high school and the best movie of the year was Raiders of the Lost Ark. Raiders is still a pretty fun watch even today, if you can look past the somewhat primitive visual effects and some of the problematic cultural politics. But, the film did not get much love at the Oscars that year, because movies that get love at the Oscars have to be solemn and serious affairs, usually involving a lot of white people drama. So instead of an actual good time, all the awards went to a dour and boring British film about some Olympic athletes or something.
The most insulting aspect of this was handing the Best Original Score award to a collection of dour and boring electronic pap instead of John Williams. Be honest, when was the last time you thought about the theme to Chariots of Fire without falling asleep? Now run that trumpet fanfare from Raiders through your head … see? OK whatever.
Anyway. At the time teen high school me was furious, and concluded that the Oscars was a huge scam run by money and old people. And I have never changed my mind.
But, this year I feel like I have to forgive them. About a year ago when I first saw the trailers to Everything Everywhere All At Once I immediately went around calling it The Best Film of 2022. I did this more after actually seeing it (twice!). And finally last night the Oscars did the right thing and picked a movie with a fun and actually enjoyable energy over a lot of dour and depressing dramas for best picture. More importantly, they finally gave Michelle Yeoh her statue after robbing her in 2000 (!!) and not nominating Crouching Tiger for any acting awards (you can’t act in Chinese, you see). Oh, and all the other winners from the movie were great too.
So, good job Oscars. You are off the hook now. I bet you feel a great sense of relief.
I had a bit of a forced break from most of the regular things I do. An amazing thing about the world is that doing a job, and hobbies, that mostly involve sitting around and typing at a keyboard can still open you up to severely crippling orthopedic injuries in your hands and wrist area. I have been lucky to mostly avoid issues like this for the last 30 years (knock wood), so of course the thing that took me down ends up being “new mother’s thumb”. I have had this before, and always escaped with just a cortisone shot. But I was not so lucky this time and had to have a “minor” surgery, which took months to recover from.
Note: I guess this can also happen to people who use gamepads too much. But I don’t use gamepads too much … I spread the thousands of hours of Fromsoft addiction over many weeks and months and am careful to play no more than an hour or two at a time. Oh well. Who knows.
What’s important is that the result was me sitting at home unable to type, cook, shower, or do lots of other stuff with my right hand. So I was really fun to be with. Here are some thoughts on things I did instead.
I buy too much music from Mosaic Records. I have since the late 80s. The result is that I have a huge number of ripped tracks from various CD collections that they have put out over the years, but I have actually only listened to a fraction (maybe slightly more than half) of all the tracks.
So I made a playlist of unplayed Mosaic tracks and started shuffling my way through it while I sat in my house either rehabbing the hand or trying not to think about how uncomfortable the hand was.
Note: If this ever happens to you, and you end up needing surgery of any kind … sign up for the post-surgery PT and OT before the procedure even happens. Trust me on this. The therapy people know so much more about how shit will go after the surgeon is done it’s not even funny.
Since November I have shuffled through about 2,500 tracks, and I have about 1,000 to go. I have not listened to all of them deeply and seriously, but I have listened enough to know which I like more than others. Their bread and butter has always been the classic Blue Note and more modern material. And IMHO it still is. The more historical stuff from the Swing period is good, but not to the same level.
One interesting side effect of this exercise is that it turned the music listening part of my brain back on. It had been taking a bit of a vacation lately and I had not been really motivated to engage with the huge amount of recorded music in the world. Getting this engagement back has improved my life.
Another thing I engaged in more than I have in the past is soccer, which in this section of the page I will call football since it seems more appropriate.
The beginning of my break landed right on the beginning of the World Cup which happened in the winter this year because it was held in a part of the world where if you played football in the summer you would fall over dead on the field. For more on this look at this set of videos.
So anyway, the World Cup was cool, especially all the Messi brilliance, the entire world making fun of Ronaldo, and that craaayyyyyyzy final game. But the difference this year was that it came right in the middle of all the major English and European professional seasons. So, after a short break the curious could dive immediately into the Premier League rat hole. In the past I had thought about doing this, but it all seemed too complicated. This year I literally had nothing better to do. But it’s still too complicated.
The first thing you notice watching league, or “club” football, as they call it, is that the games are very different from the World Cup. This should not be surprising. The World Cup is a giant high pressure all star game where the happiness of entire nations depends on good performances from teams that have played together for a comparatively short time. So the games tend to be a bit, for lack of a better description, stiff and tight. While it’s an incredible event the quality of the football games is a bit variable. This is me saying what hundreds of millions of people have known for decades.
The Premier League is actual teams doing their actual everyday jobs, and the games are a lot more fun to watch. Especially the opening minutes of games. The ball goes up and down and back and forth. People run for their lives. There are early scores. Lots of trash talking and drama. It’s great.
Of course the Premier League is, in a literal sense, only the very tip of the top of the iceberg. If you are not careful you will also find yourself diving into the other Euro leagues (Ligue 1, Bundesliga, La Liga, Serie A, etc), the secondary English leagues, FA Cup, League Cup (which for sponsorship reasons is really the Carabao Cup right now), Champions League, Europa League, and who knows what else. One could retire and do nothing but watch English and European football 7 days a week, 10 hours a day, for the rest of one’s life. If I did that for 5 or 10 years I might finally figure out how offside works.
The best thing about these games is the timing. Because of the time zones involved there are no night games that go until midnight. Great! And, the games always finish on time. Wonderful! And, you can play almost two complete English football games in the time it takes to get through a regular season MLB or NFL game … or sometimes two and a half games for a playoff baseball or NFL game. I’m pretty sure if English football had been invented in the U.S. we’d get ad breaks before every corner and free kick. And the free kicks would have a sponsor (and next, the Visa Interest Free Free kick!). Instead we get these weird surreal 3-d cans on the field:
The next best thing about all these games is the songs. After that the best thing is the shirts with collars. But finally the actual best thing is how high variance they are. It seems fairly rare for things to actually go the way people would expect them to, given whatever the current league standings (er, table) are at the time. The bottom of the league will often outplay and destroy the top. Or, some team from the third or fourth tier league will play a team two or three tiers above it to an exciting draw. Exciting draws are not a thing I would have expected to either exist or enjoy. But there you go.
Finally, to answer the obvious question: I have not picked a team yet. I’m going to bandwagon the front runners (Arsenal, Man United) this year. And I like the Crystal Palace shirts and team name the best. Wolverhampton Wanderers and Nottingham Forest are also strong contenders in the name contest. Oh, and I like the bubble machines at West Ham.
Izola’s is a buffet style restaurant in Hinesville, Georgia. It serves food that is best described as “Classic American Southern”. They were one of my early finds on TikTok during the great stupidity. Every day they would pan a phone camera down the buffet while a friendly voice narrated the menu. Typical items included, say, chicken and rice, chicken and dumplings, fried chicken, baked chicken, panko crusted fried fish, BBQ meatballs, Swedish meatballs, smothered pork chops, collard greens, green beans, fried cabbage, mashed potatoes, dressing, rice, 2 or 3 kinds of gravy, Mac and Cheese (“scoop that mac”), and so on.
During the pandemic we thought: “when this stupidity is over we should go down there”. Around the time my hand finally started feeling a bit normal we decided that while stupidity in the world would never end, we should go down there for some warmer weather and the food, so we did.
And it was just as glorious as in the videos.
You should go. It’s close to Savannah, which is also a neat place.
The voice input stuff in the Apple operating systems is just good enough to be really annoying when it misses, and not really good enough to use on a regular basis if you can actually type at the keyboard. Not surprisingly it’s especially bad with jargon and other domain specific language. I used it a lot when my hand was at its most useless. But now I’ve mostly stopped again.
Finally, once things improved a lot, I made chili for the Super Bowl game (and the two or three Premier League and related games that played before it on Sunday morning and afternoon). I am pretentiously snobby about my chili because I make my own chili powder and use that as a base.
This year I learned I should be using those dried peppers to make a chili pepper sludge, and then make chili out of that. Kenji does it here.
So I did this with most of the peppers. Then I used two or three more to also make some powder to spread around in the meat, and kept the rest of my method the same. And it worked pretty well. The scheme is a bit more variable because you never quite use the same amount of water to make the sludge. I might keep doing this or not; I have not decided.
That’s all I got for now. It’s good to be back. And I’m gonna make that chili again.
Note: whatever you do, don’t use the recent NYT chili recipe. The fact that they would call the result of that recipe “spicy” is an insult to the word spicy.
See you next time. Oh! It’s football time!
In part 1 and part 2 we tried to set up enough of the mathematical formalism of quantum mechanics to be able to talk about quantum measurement in a reasonably precise way. If you were smart and skipped ahead to here you can now get the whole answer without reading through all that other tedious nonsense.
For reference, here are the rules that we currently know about quantum mechanics:
States are vectors in a Hilbert space, usually over \mathbb C.
Observables are self-adjoint linear operators on that space.
The possible values of observables are the eigenvalues of the corresponding operator, and the eigenvectors are the states that achieve those values. In addition, for the operators that represent observables, we can find eigenvectors that form an orthonormal basis of the underlying state space.
There is a special observable for the energy of the system whose operator we call H, for the Hamiltonian. Time evolution of states is then given by the Schrödinger equation.
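For reference, since we lean on it below, that equation (with the Hamiltonian H generating time evolution of the state) is:
i \hbar {d \over dt} | \psi(t) \rangle = H | \psi(t) \rangle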
Now we’ll finally talk about measurement.
As before, I am the furthest thing from an expert on this subject. I’m just trying to summarize some interesting stuff and hoping that I’m not too wrong. I’ll provide a list of more (and better) sources at the end.
In quantum mechanics measurements are the connection between eigen-things and observables. We interpret the eigenvalues of the operator representing an observable as the values that we can see from that observable in experiments. In addition, if the system is in a state which is an eigenvector of the operator, then the value you get from the observable will always be the corresponding eigenvalue.
The simplest model of measurement in quantum systems is to say that a measurement is represented by acting with a single operator representing the observable on a single vector representing the state of the system. In this simple model we are doing “idealized” measurements (simple operators) on “pure” states (simple vectors). There are generalizations of both of these ideas that you can pursue if you are interested. See the further reading.
If we perform a measurement on a system that is in a state represented by an eigenvector of the operator, we always get absolutely determined and well defined answers.
For example, let’s say we are in a system where the Hilbert space \cal H is two dimensional, so we can represent it as \mathbb C^2, with scalars from \mathbb C. So, any basis that we define for the space needs only two vectors: | 0 \rangle = \begin{pmatrix}1\\ 0\end{pmatrix} and | 1 \rangle = \begin{pmatrix}0\\ 1\end{pmatrix}
Let’s say we have some operator S such that | 0 \rangle and | 1 \rangle are its eigenvectors with eigenvalues \lambda_0 and \lambda_1. Then we know that if we measure some system whose state we (somehow) know to be either | 0 \rangle or | 1 \rangle with S we’ll get some number with probability 100%:
That is:
S | 0 \rangle = \lambda_0 | 0 \rangle
and
S | 1 \rangle = \lambda_1 | 1 \rangle
But, quantum states come in Hilbert spaces, which are linear. This means that we also have to figure out what to do if our state vector is any linear combination of the eigenvectors. So what if we had a state like this:
c_0 | 0 \rangle + c_1 | 1 \rangle
where c_0 and c_1 are arbitrary constants? In this case the result of doing a measurement will then either be the eigenvalue \lambda_0 with some probability p_0 or \lambda_1 with some other probability p_1.
The Born rule then states that the probability of getting \lambda_0 is
p_0 = { |c_0|^2 \over |c_0|^2 + |c_1|^2 }
and the probability of getting \lambda_1 is
p_1 = { |c_1|^2 \over |c_0|^2 + |c_1|^2 } .
We have seen a version of this rule before, in part 1, but this time I normalized the probabilities like a good boy (so that they add up to 1).
One last puzzle that should be bothering you is the question of whether we can represent any state as a linear combination of the eigenvectors of the operator. It turns out we can, because we specified that observables are self-adjoint, so we can invoke the spectral theorem from part 2 which says that given an arbitrary state \psi \in \cal H we can always write the state as a linear combination of the eigenvectors.
In summary: given an arbitrary state vector \psi \in \cal H and an observable represented by an operator S you can calculate the behavior of S on \psi by first expressing \psi as a linear combination of eigenvectors of S (because you can find eigenvectors that form a basis) and then applying the Born rule.
So in our example above, where the operator S has eigenvectors | 0 \rangle and | 1 \rangle, we can first write \psi like this:
\psi = c_0 | 0 \rangle + c_1 | 1 \rangle
And then we use the Born rule to compute the measurement probabilities.
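To make that procedure concrete, here is a tiny numerical sketch in Python. The operator and state here are made up for illustration, and numpy’s eigh routine is doing the spectral theorem work of finding an orthonormal eigenbasis:

```python
import numpy as np

# A made-up observable: any self-adjoint (Hermitian) operator on C^2 will do.
S = np.array([[1.0, 1.0],
              [1.0, -1.0]])

# An arbitrary, deliberately unnormalized state vector.
psi = np.array([3.0 + 1.0j, 1.0 - 2.0j])

# Spectral theorem: eigenvalues plus an orthonormal eigenbasis
# (np.linalg.eigh is the routine for Hermitian matrices).
eigenvalues, eigenvectors = np.linalg.eigh(S)

# Express psi in the eigenbasis: c_i = <e_i | psi>.
c = eigenvectors.conj().T @ psi

# Born rule: p_i = |c_i|^2 / sum_j |c_j|^2.
p = np.abs(c) ** 2
p = p / p.sum()

for lam, prob in zip(eigenvalues, p):
    print(f"outcome {lam:+.3f} with probability {prob:.3f}")
```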
The most famous two-state system in the quantum mechanics literature is the so-called “spin 1\over 2” system. The behavior of these systems was first explored in the Stern-Gerlach experiment. In this experiment you shoot electrons (really atoms with a single free electron) through a non-uniform magnetic field, and see where they end up on a screen on the other side. You would expect them to end up in some continuous distribution of possible points, but it turns out they end up in only one of two points, which we will call “up” and “down”. We’re just going to take this result for granted rather than trying to explain it right now.
We can imagine spin as being like a little arrow over the top of the electron pointing either “up” or “down” along a certain spatial axis (e.g. x, y, or z). The Stern-Gerlach device determines the state of this “arrow” by measuring the behavior of the electron in a magnetic field. So it’s sort of like a magnet … but not really.
The state space for this system is just \mathbb C^2. Each one of the spin states is some linear combination of | 0 \rangle and | 1\rangle above.
It also turns out that there are four convenient operators that we can use as observables: the identity, and a spin operator for each spatial axis which we will call S_x, S_y and S_z. For all the details of where these come from, you can read about the Pauli matrices.
The Pauli matrices are called \sigma_1, \sigma_2 and \sigma_3. And the spin operators S_x, S_y, and S_z are defined as
S_x = {\sigma_1 \over 2}, \quad S_y = {\sigma_2 \over 2}, \quad S_z = {\sigma_3 \over 2} .
I can’t decide if it’s a deep mathematical fact or just a strange coincidence of nature that \mathbb C^2 should have exactly three operators for spin measurements, one in each direction that we need. It seems a bit spooky that it worked out that way.
Note: in all of the computations below I’m leaving out factors of \hbar. This is a standard trick in physics texts … you can use units where \hbar = 1 and then put it back later if you want.
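To make these matrices concrete, here is a quick numpy sketch (my own illustration, using \hbar = 1 as above) that spells out the Pauli matrices and checks the eigen-facts we are about to use:

```python
import numpy as np

# The Pauli matrices.
sigma_1 = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
sigma_3 = np.array([[1, 0], [0, -1]], dtype=complex)

# The spin operators, in units where hbar = 1.
S_x, S_y, S_z = sigma_1 / 2, sigma_2 / 2, sigma_3 / 2

# Each one has eigenvalues +1/2 and -1/2 ...
for name, S in (("S_x", S_x), ("S_y", S_y), ("S_z", S_z)):
    print(name, "eigenvalues:", np.round(np.linalg.eigvalsh(S), 3))

# ... and |0> and |1> are exactly the eigenvectors of S_z.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)
print(np.allclose(S_z @ ket0, +0.5 * ket0))  # True
print(np.allclose(S_z @ ket1, -0.5 * ket1))  # True
```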
We measure spin using a box with a magnetic field in it. So, imagine that we have some box with one hole on the left, and two holes on the right. We send an electron in the left hole and it comes out the top hole if the spin is up, and the bottom hole if the spin is down. We have three kinds of boxes that each measure the spin in a different direction (again: x, y or z).
So the S_z box looks like this:
We start with a beam of particles where each particle is in a completely random state. Electrons (say) go in the left hole and the spin up stuff is directed out the top right hole and the spin down stuff comes out the bottom right hole. We can then consider what happens if we take a bunch of devices like this, chain them together, and take sequential measurements.
First suppose we put another S_z box right after the first one so that all of the particles that enter the second box come out of the {\small +} hole of the first box. What will happen here is that 100% of this beam will come out the {\small +} hole of the second box. This seems very reasonable, since they were all z-spin up particles.
This behavior might make you think that z-spin is a property that we can attach to the electron, perhaps for all time, like classical properties, and that this box acts like a filter that just reads off the property and sends the particles the right way. Keep this thought in your brain.
Next, we can see that the relationship of S_z to S_x is also straightforward. A particle that has a definite z-spin still has an undefined x-spin:
So here, when we put an S_x box right after the S_z box and send all the z-spin up particles through, we will get x-spin up half the time and x-spin down half the time. If you study the material on the Pauli matrices above this will make sense, because it turns out that the eigenvectors of S_z can be written as a superposition of the S_x eigenvectors with coefficients that make these probabilities 1/2 (and vice versa). In particular:
|z_+\rangle = | 0 \rangle = \begin{pmatrix} 1 \\ 0 \end{pmatrix}, \, {\rm and}\,\, |z_-\rangle = | 1 \rangle = \begin{pmatrix} 0 \\ 1 \end{pmatrix}
|x_+\rangle = {1 \over \sqrt{2}} \begin{pmatrix} 1 \\ 1 \end{pmatrix}, \, {\rm and}\,\, |x_-\rangle = {1 \over \sqrt{2}} \begin{pmatrix} 1 \\ -1 \end{pmatrix}
From this we can figure out that:
|x_+\rangle = {1 \over \sqrt{2}} (|z_+\rangle + |z_-\rangle)
and
|z_+\rangle = {1 \over \sqrt{2}} (|x_+\rangle + |x_-\rangle)
The Born rule then tells us that measuring the x-spin of a z-spin up particle will get you x-spin up half the time and x-spin down half the time. Similarly, measuring the z-spin of an x-spin up particle will get you z-spin up half the time and z-spin down half the time.
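You can verify these 50/50 numbers directly from the eigenvectors above with a couple of inner products:

```python
import numpy as np

z_plus = np.array([1, 0], dtype=complex)
x_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
x_minus = np.array([1, -1], dtype=complex) / np.sqrt(2)

# Born rule: P(outcome) = |<eigenvector|state>|^2.
# np.vdot conjugates its first argument, which is exactly <a|b>.
print(abs(np.vdot(x_plus, z_plus)) ** 2)   # 0.5
print(abs(np.vdot(x_minus, z_plus)) ** 2)  # 0.5
```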
Relationships like this also happen to hold for all of the eigenvectors of all the spin operators. Some of the references at the end go into these details.
Finally, we can push on this idea a bit more by adding yet another S_z box on the end of the experiment above. When we do this we get a result that is somewhat surprising.
We might think that all of the particles coming out of the S_x box should be z-spin “up”, since we filtered for those using the first box. Sadly, this is not the case. Measuring the x-spin seems to wipe away whatever z-spin we saw before. Somehow going through the S_x box has made the z-spin undefined again, and we are back to 50/50 instead of 100% spin up.
So now our problem is this: what is going on in the last spin experiment?
We can interpret the first two experiments as behaving like sequential filters. The first z-spin box filters out just the particles with spin-up, and then we feed those to the second box (either z or x) and get the expected answer.
In order to make sense of the third experiment it seems like we need to posit that measurements in quantum mechanics have side effects on the systems that they measure. How else can we account for the fact that the z-up property the particles had before we measured the x-spin seems to disappear after that measurement?
The standard answer to this question is to add another rule to the four we already had for how quantum mechanics works:
Suppose we have a quantum system that is in some state \psi and we perform a measurement on the system for an observable O. Then the result of this measurement will be one of the eigenvalues \lambda of O with a probability determined by the Born rule. In addition, after the measurement the system will evolve to a new state \psi', which will be the eigenvector that corresponds to the eigenvalue that we obtained.
This is, of course, the (in)famous “collapse of the wave function”, and with the background that I have made you slog through, it should really be bothering you now.
We seem to need this rule, along with the original rule about eigenvalues and eigenvectors, to make our formalism agree with the following general experimental fact:
Whenever we measure a quantum system we always get one definite answer, and if we measure the system again in the same way, we get the same single answer again.
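Taken together, the Born rule plus the collapse rule are enough to reproduce all three box experiments. Here’s a little Monte Carlo sketch; the measure helper and the trial counts are my own, not anything standard:

```python
import numpy as np

rng = np.random.default_rng(0)

def measure(state, eigvecs):
    """Born rule picks the outcome; collapse replaces the state."""
    probs = [abs(np.vdot(v, state)) ** 2 for v in eigvecs]
    k = rng.choice(len(eigvecs), p=probs)
    return k, eigvecs[k]

z = [np.array([1, 0], dtype=complex), np.array([0, 1], dtype=complex)]
x = [np.array([1, 1], dtype=complex) / np.sqrt(2),
     np.array([1, -1], dtype=complex) / np.sqrt(2)]

trials, z_again, z_after_x = 10_000, 0, 0
for _ in range(trials):
    state = z[0]                   # came out of the + hole of the first S_z box
    k, state = measure(state, z)   # second S_z box: always up again
    z_again += (k == 0)
    _, state = measure(state, x)   # the S_x box wipes out the z result...
    k, state = measure(state, z)   # ...so the last S_z box is 50/50
    z_after_x += (k == 0)

print(z_again / trials)    # 1.0
print(z_after_x / trials)  # ~0.5
```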
The problem is that the collapse rule completely contradicts our existing time evolution rule, which says that everything evolves continuously and linearly via the Schrödinger equation:
i \hbar \frac{\partial}{\partial t} | \psi(t) \rangle = H | \psi(t) \rangle .
This equation can do a lot of things, but the one thing it cannot do is take a state like this
|\psi\rangle = c_1|\psi_1 \rangle + c_2|\psi_2 \rangle
and remove the superposition. With that equation we can only ever end up in another superposition state, like this:
|\psi'\rangle = c_1' |\psi_1'\rangle + c_2' |\psi_2'\rangle .
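That’s just linearity. If you want to see it numerically, here’s a sketch with a made-up 2×2 Hamiltonian; I build U = e^{-iHt} from the spectral decomposition of H, which is one standard way to exponentiate a Hermitian matrix:

```python
import numpy as np

# A made-up Hermitian Hamiltonian and time step.
H = np.array([[1.0, 0.3], [0.3, -1.0]], dtype=complex)
t = 0.7

# U = exp(-iHt), built from the spectral decomposition of H.
evals, evecs = np.linalg.eigh(H)
U = evecs @ np.diag(np.exp(-1j * evals * t)) @ evecs.conj().T

psi_1 = np.array([1, 0], dtype=complex)
psi_2 = np.array([0, 1], dtype=complex)
c1, c2 = 0.6, 0.8

# Linearity: evolving a superposition gives the superposition of the
# evolved parts. The superposition never just disappears.
lhs = U @ (c1 * psi_1 + c2 * psi_2)
rhs = c1 * (U @ psi_1) + c2 * (U @ psi_2)
print(np.allclose(lhs, rhs))  # True
```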
To bring this back to our example, suppose our S_x box is modeled as a simple quantum system with three states: |m_0\rangle for when the box is ready to measure something, |m_+\rangle for when it has measured spin up, and |m_-\rangle for when it has measured spin down. Here the m is for machine, or measurement.
In our experiment, at the second box, we start with a particle in the state
|z_+\rangle = {1 \over \sqrt{2}} (|x_+\rangle + |x_-\rangle)
and send it into the S_x box, which starts in the state |m_0\rangle. So the state of the composite system becomes the superposition:
{1 \over \sqrt{2}} (|x_+\rangle + |x_-\rangle)|m_0\rangle .
This state means “the particle is in a superposition of x-spin up and x-spin down, and the measuring device is ready to measure it.”1
If we believe that Schrödinger evolution is the only rule we have, then this state can only evolve like this:
{1 \over \sqrt{2}} (|x_+\rangle + |x_-\rangle)|m_0\rangle \quad \xrightarrow{\hspace 20pt} \quad {1 \over \sqrt{2}} ( |x_+\rangle|m_+\rangle + |x_-\rangle|m_-\rangle ) .
That is, the box and the particle must evolve to a superposition of “spin up” and “measured spin up” with “spin down” and “measured spin down”. The Schrödinger equation never removes the superposition.2
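To see this concretely, here’s a numpy sketch with np.kron standing in for the product-state notation. The interaction U below is a toy I made up: I only define how it acts when the device starts in |m_0\rangle, which is all we need for the demonstration:

```python
import numpy as np

kron, outer = np.kron, np.outer

x_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)
x_minus = np.array([1, -1], dtype=complex) / np.sqrt(2)
m0 = np.array([1, 0, 0], dtype=complex)       # |m_0>: device ready
m_plus = np.array([0, 1, 0], dtype=complex)   # |m_+>: device read "up"
m_minus = np.array([0, 0, 1], dtype=complex)  # |m_->: device read "down"

# A toy measurement interaction: it sends |x+>|m0> -> |x+>|m+> and
# |x->|m0> -> |x->|m->. (This only pins down U on the "ready" subspace;
# it extends to a full unitary, but we don't need the extension here.)
U = (kron(outer(x_plus, x_plus.conj()), outer(m_plus, m0.conj())) +
     kron(outer(x_minus, x_minus.conj()), outer(m_minus, m0.conj())))

z_plus = (x_plus + x_minus) / np.sqrt(2)
out = U @ kron(z_plus, m0)

# Linearity forces the entangled superposition, never a single outcome.
entangled = (kron(x_plus, m_plus) + kron(x_minus, m_minus)) / np.sqrt(2)
print(np.allclose(out, entangled))  # True
```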
But we never see states like this. Particles go into measuring devices, and those devices give us a single answer with a single value. The world is not full of superposed Stern-Gerlach devices, or CCDs, or TV screens. Furthermore: cats, famously, are never both alive and dead.
Instead, the particle enters the device and we see a universe where the device tells us a single definitive answer: either spin up or spin down. That is, using our notation above, the real world time evolution seems to always look like this:
{1 \over \sqrt{2}} (|x_+\rangle + |x_-\rangle)|m_0\rangle \quad \xrightarrow{\hspace 20pt} \quad |x_+\rangle|m_+\rangle
or
{1 \over \sqrt{2}} (|x_+\rangle + |x_-\rangle)|m_0\rangle \quad \xrightarrow{\hspace 20pt} \quad |x_-\rangle|m_-\rangle
So we seem to have a fundamental conflict: the Schrödinger equation says we should see superpositions, but in our experiments we never see superpositions.
This, dear friends, is the measurement problem. It is a fundamental contradiction between the observed behavior of real systems in the world, and what the Schrödinger equation dictates.
The literature on the “interpretation of quantum mechanics” is of course full of deep thoughts about the questions that the measurement problem raises. I could not possibly do more than unfairly caricature the various possible stances that one could have about this question, so that’s what I will do. Here are some things we can do:
We can take the collapse rule as a postulate and, until we understand how measurement works, just use the rules and try to be happy. This view is often called the “Copenhagen” interpretation, although that’s not really right; the actual Copenhagen story is a lot more complicated than this. A better name for this view is the “standard” or “textbook” viewpoint.
We can say that quantum states are mainly a tool for describing the statistical behavior of experiments. Ballentine’s book, which I referenced in part 2, has a careful exposition of one version of this line of thought, where the wave function only describes statistical ensembles of systems. There is, of course, a spectrum of different opinions about whether quantum mechanics describes any physical reality at all, or just the behavior of experiments.
We can say that the collapse rule is either not needed or not contradictory because quantum states are not really things that exist in the world. Rather, the quantum state is just a way of describing what we, or some set of rational agents, believe about the world. The most recent version of this idea is probably QBism.
We can think that wave functions do not describe the entire state of the system. Instead, there is some other part of the state that gives systems definite measured properties. The most popular version of this idea is the “pilot wave” or “Bohmian” version of quantum mechanics.
We can decide that superpositions don’t actually collapse; we just can’t see the other branches. This is the Everett and/or “Many Worlds” idea.
We can say that wave functions actually collapse through some random physical process, and we can use this fact to derive the measurement behavior (and perhaps the Born rule). The most famous theory like this is the GRW stuff.
There are dozens more ideas that I will not list here because I don’t understand them well enough to list them.
If forced to take a stance I would probably say that I am most sympathetic to the more “ontological” theories, like Bohm or Everett. My least favorite idea is probably QBism, because I have a hard time being enthusiastic about a world where everything is just the knowledge and credences of rational actors. But, in between these two extremes, I enjoy the careful and pragmatic thinking that’s been done about the nature of experiments and measurement in quantum theory. I used Ballentine’s book as an example of this, but there is a lot more where that came from (see Peres, for example). I feel like what we really need to do is attack the core question of what is really happening in quantum measurements and in quantum/classical interactions. Until we have a better understanding of that I think we’ll never figure out this puzzle.
When in doubt, I will just appeal to my favorite quantum computing nerd, Scott Aaronson, for his point of view, which seems right.
I left out a lot of important details related to the structure of Hilbert space. In the finite dimensional case they don’t matter too much but they are critical in the infinite dimensional case. Watch Schuller’s lectures on quantum mechanics to fill those in.
I really only covered the simplest possible models of quantum states, observables, and measurements. Mixed states, density operators, POVMs, and all that are missing. Schuller’s lectures or any of the more mathematical books that I listed cover this.
I left out the uncertainty principle, which is kind of a big part of the story to skip. You can talk about it in the context of the spin operators but it’s a lot of work and not directly related to the puzzle that I was trying to get to.
I left out the entire huge world of entangled states because I did not want to introduce any more formalism. Entanglement, Bell’s theorem, and all that is also just too big a subject to mention without going into it, so I left it out. Maybe we’ll cover that in a future part 4.
I never mentioned decoherence. I am a bad person.
I played fast and loose with normalization when talking about quantum states and operators. I should have been much more careful, but I’m lazy.
I wish I could have talked about the two slit experiment. But, I’d have done a lousy job so go read Feynman instead.
Finally, you can do an experiment similar to the chained spin-box experiment with polarized light. Watch here.
Some more reading for you:
If you want to go all the way back to the beginning with the original sources, both the book by Dirac (or look at the Google Books link, which is likely to be more reliable) and the one by von Neumann are still pretty readable.
Travis Norsen’s Foundations of Quantum Mechanics is a great introduction to this material. A good combination of nuts and bolts physics and discussions of the conceptual issues.
David Albert’s Quantum Mechanics and Experience (also at amazon) has a nice abstracted description of the spin-box experiment that I have butchered above. This one goes well with Norsen.
Sakurai’s Modern Quantum Mechanics starts with a good discussion of the spin experiments I used as an example.
An older book, Quantum mechanics and the particles of nature, by Sudbery, goes at this from a point of view that I like. Hard to find though.
Hughes’ The Structure and Interpretation of Quantum Mechanics also starts with spin but is a more philosophical look at the material.
The Stanford Encyclopedia of Philosophy has a lot of material on quantum mechanics and its interpretation. Their summary page is a bit shorter, yet more detailed, than my effort here.
You should read this paper by Leifer just for the delicious pun in the title. But it’s also a great breakdown of the various ways that people talk about and interpret the quantum state.
This much more technical paper by Landsman also addresses the very complicated question of how classical and quantum states are related. He has an open access book that expands on these ideas, especially in the chapter on the measurement problem. I don’t really understand any of this, but it seems like the kind of work that needs to be done.
Those in the know will notice that I have not really explained what this notation for product states that I am using here means. I did not have the space to explain tensor products and entanglement, which is a shame because, along with measurement, entanglement is the second huge conceptual puzzle in quantum mechanics.↩︎
For those keeping track, this is the formula I’ve been trying to get to this whole time. Was the 9000 words worth it?↩︎