When AI is Artificial I: humanity, culture, art, emotion

Doesn’t all AI end up being a reflection of who we are as humans? On the practical side, I’ve mentioned bias in how we build AIs and the prevalence of conversational bots. And we all know the endless number of books and movies with stories of AI becoming something we cannot distinguish from humans.

Is this simply the Pygmalion in all of us? We turn to external expressions to make sense of the human condition, through art, religion, science, sport, politics. Why not AI? And with so much expression imbued in an AI, might we not fall in love with it, or want it to be something we can fall in love with?

I’m not going to go over all the examples of AI in art. But I would like to point you to a very interesting short movie written by an AI. Fed a corpus of sci-fi scripts and given a seed of an idea, the AI wrote a short sci-fi script of its own. The video is the director’s and actor’s interpretation of the script.

The interesting thing is that it comes off as an off-beat movie, but with a touch of something deep underneath. And if you think the dialogue is too off-beat, read something like Naked Lunch, or Kafka.

And here’s a recent article on a performance of pieces from various genres of music, written by AI but performed by humans. It sparks a very interesting discussion on the balance between statistically generating music (the AI) and the human touch. The example used in the article is a pair of Mozart pieces – the one that’s all AI is all over the place, but the one with a bit of human intervention begins to have small stretches that feel like Mozart. Of course, a fully Mozart-style piece does not emerge from the machine.

Though one of the composers sees the AI as a collaborator rather than a composer in its own right, and that’s what is exciting to some musicians:

He points out that although the music sounds like Miles Davis, it feels like a fake when he plays it. “Some of the phrases don’t quite follow on or they trip up your fingers,” he says. This makes sense, as this isn’t music written by a human with hands sitting at a keyboard; it’s the creation of a computer. Artificial intelligence can place notes on a stave, but it can’t yet imagine their performance. That’s up to humans.

Source: A night at the AI jazz club – The Verge

My Fair AI
I think we approach the Ultimate Pygmalion in our desire to create simulacra of emotive, interactive beings. There is no end to the wee AI-imbued gizmos we create to interact with us. Will these gizmos be as smart as a puppy, or try to do more and end up annoying? Anki’s Cozmo is the latest I’ve seen, and a lot of work went into the toy’s emotional intelligence.

And then there’s this very interesting story about an AI bot maker who lost a dear friend and used the texts her friend left behind to create a conversational memorial to him. The author of the article is sensitive to the emotional impact of this AI memorial, but also branches off into questions of authenticity, grieving, personality, and the role of language.

Art is meant to get us to think about who we are as humans. The bot creator only wanted to build a digital monument, to have one last conversation with a dear friend. Yet she touched a nerve that we could not have touched without her skill in AI and in capturing a voice. Rather than create something that helps us do something or cope with something, her digital monument raises many thoughts on humanity, culture, art, and emotion. Should we build bots grounded in real personalities, as derived from their digital textual contrails? What happens to one’s voice when one has died? If our voice can persist, what does it mean for who we are, our mortality, and the ones we leave behind?

What do you think?

Image Pygmalion by Jean-Baptiste Regnault, 1786, Musée National du Château et des Trianons, from WikiCommons

Come down to earth: some hidden truths about AI

You know a tech trend is growing when there are more conferences and training programs than you can shake a stick at. And when the trend gets picked up by the amazing Science Friday, and you get to hear some interesting developments and future directions.

One thing you really don’t hear about often is the “hidden truths.” The Verge recently wrote a very nice article highlighting three places where AI falls short: training, the bane of specialization, and the black box of how the AI works.

Machine learning in 2016 is creating brilliant tools, but they can be hard to explain, costly to train, and often mysterious even to their creators.

Source: These are three of the biggest problems facing today’s AI – The Verge

I had the good fortune to work with some very talented data scientists who were regularly using machine learning on healthcare data to understand patient behavior. Also, at IBM, I was able to learn a lot about how Watson thought and how well it worked. In all cases, the three hidden truths that The Verge had commented on were evident.

Teach me
The Verge article starts by pointing out the need for plenty of data to train models. True. But for me, the real training issue is that it’s never “machine learning” in the sense of the machine learning on its own. Machine learning always requires a human: to provide training and test data, to validate the learning process, to select parameters to fit, to nudge the machine to learn in the right direction. This inevitably leads to human bias in the system.
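
To make that concrete, here is a minimal sketch (Python with scikit-learn, on made-up synthetic data) of how many human choices hide inside even a tiny “machine learning” pipeline. Every value below is a decision a person made, not the machine:

```python
# A sketch of the human fingerprints on a "machine-learned" model.
# The data is synthetic and the parameter values are arbitrary --
# which is exactly the point: a person chose all of them.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A human decided what the features and labels mean (and labeled the data).
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# A human chose how to split training data from test data...
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# ...and chose the model family and the regularization strength.
model = LogisticRegression(C=1.0, max_iter=1000)
model.fit(X_train, y_train)

# And a human judges whether this score is "good enough",
# or nudges the choices above and tries again.
print("accuracy:", model.score(X_test, y_test))
```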

“The irony is that the more we design artificial intelligence technology that successfully mimics humans, the more that A.I. is learning in a way that we do, with all of our biases and limitations,” said University of Utah computer science researcher Suresh Venkatasubramanian in a recent statement.

Source: Computer Programs Can Be as Biased as Humans

This bias means that no matter how well built or how smart, the AI will show the bias of the data scientists involved. The article quoted above raises the issue in the context of resume scanning. No, the machine won’t be less biased than the human.

Taking that thought further, I am concerned not only with bias, but with the possibility that, using the methods we currently have, the AI cannot be smarter than the human. Yes, an AI can see patterns across huge sets of data, automate certain specific complex actions, come to conclusions – but I do not think these conclusions are any better than a well-trained human’s. Indeed, my biggest wonder with machine learning in healthcare is whether all the sophisticated models and algorithms are any better than a well-trained nurse. And Watson really isn’t better than a doctor.

But that’s OK. These AIs can help humans sift through huge data sets, highlight things that might be missed, point humans to more information to help inform the human decision. Like Google helps us remember more, AIs can help us make more informed decisions. And, yes, Watson, in this way, is actually pretty good.

The hedgehog of hedgehogs
The Verge also points out that AIs need to be hyper-specialized to work. Train the AI on one thing and it does it well. But then the AI can’t be generalized or repurposed to do something similar.

I’ve seen this in action: we had a product that was great at mimicking the medical billing coding a human could do. After training the system for a specific institution, using that institution’s own data, the system would perform poorly when given data from another institution. We always had to train to the specific conditions to get useful results. And this applied to all our machine learning models: we always had to retrain for the specific (localized) data set. Rarely were results decent on novel though related data sets.

Alas, this cuts both ways. Training on local data gives us the best results, but it also means we need people and time (and money) every time we shift to another data set.
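
Here’s a toy illustration of that retraining tax. The two “institutions” below are just synthetic distributions that differ by a shift – a stand-in for different local conditions, nothing like real billing data – but the pattern is the one we kept seeing: great scores locally, poor scores elsewhere, fixed only by retraining.

```python
# Train on institution A, test on institution B: watch the accuracy drop.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_institution(shift, n=2000):
    """Synthetic stand-in for one institution's data."""
    X = rng.normal(loc=shift, scale=1.0, size=(n, 5))
    y = (X.sum(axis=1) > 5 * shift).astype(int)  # boundary moves with the shift
    return X, y

X_a, y_a = make_institution(shift=0.0)
X_b, y_b = make_institution(shift=1.5)  # same task, different local conditions

model = LogisticRegression().fit(X_a, y_a)
print("institution A:", model.score(X_a, y_a))  # looks great locally (in-sample)
print("institution B:", model.score(X_b, y_b))  # roughly a coin flip

# The fix we always reached for: retrain on B's own data.
print("retrained on B:", LogisticRegression().fit(X_b, y_b).score(X_b, y_b))
```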

This reminds me of Minsky’s Society of Mind. Often we can create hybrid models that offer multiple facets to be fitted to the data, letting the hybrid collection decide which sub-models reflect the data better. Might we not also use a society of agents – a hybrid collection of highly specialized AIs that collaborate and promote the best of the collection to provide the output?
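
Here’s a back-of-the-envelope version of that idea, just to make it tangible: fit a few different model “facets” to the same data and let a simple referee promote whichever member of the society is most confident on each input. A sketch of the notion, not a real mixture-of-experts:

```python
# A toy "society of agents": different model facets fitted to the same
# data, with a referee that promotes the most confident specialist.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=1)

# Each agent brings a different modeling "facet" to the same data.
society = [
    LogisticRegression(max_iter=1000).fit(X, y),
    DecisionTreeClassifier(max_depth=5, random_state=1).fit(X, y),
    GaussianNB().fit(X, y),
]

def society_predict(x):
    """Ask every agent; promote the answer of the most confident one."""
    votes = [agent.predict_proba(x.reshape(1, -1))[0] for agent in society]
    best = max(votes, key=lambda vote: vote.max())
    return int(best.argmax())

print("prediction:", society_predict(X[0]), "| actual:", y[0])
```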

Black box AI
The third and last point the Verge article makes is about showing your work. I’ve been in many customer meetings where we were asked: what are the parameters, what is the algorithm, how does the model think? We always waved our hands: “the pattern arises from the data,” “the model is so complex, it matches reality in its own way.” But at the same time, the output we’d see – the things the machine would say – clearly showed that sometimes the model could approximate the reality of the data, but not reality itself. We’d see this in the healthcare models, and would need to have the output validated and the model tweaked (by a human, of course) to better reflect reality.

While black-boxing the thinking in AI isn’t terrible, it makes it hard to correct any misconceptions. The example in the Verge article on recognizing windows with curtains is a great one: the AI wasn’t recognizing windows with curtains, it was correlating rooms containing beds with windows with curtains.
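
One cheap way to peek inside the box is permutation importance: scramble one feature at a time and see how much the model degrades. Below is a contrived, synthetic version of the curtains story – the label is “has curtains,” but the only real signal I give the model is “has a bed,” and the probe duly shows the model leaning on the bed:

```python
# Probing a black box with permutation importance, on fake data where
# "room has a bed" co-occurs with the "has curtains" label 90% of the time.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(7)
n = 5000
has_bed = rng.integers(0, 2, n)
# Curtains mostly co-occur with beds in this synthetic world.
has_curtains = np.where(rng.random(n) < 0.9, has_bed, 1 - has_bed)

# Feature 0 is the bed; the rest is noise. There are no "curtain pixels".
X = np.column_stack([has_bed, rng.normal(size=(n, 3))])
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, has_curtains)

result = permutation_importance(model, X, has_curtains, n_repeats=10, random_state=0)
print("feature importances:", result.importances_mean.round(3))
# The bed feature dominates: the model "recognizes curtains" by spotting beds.
```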

AI is not about the machine
The human is critical in the building and running of AIs. And, for me, AIs should be built to help me be smarter and make better decisions. Some of the hidden truths listed above become less concerning when we realize we should, for now, stick to making AIs as smart as a puppy, rather than imbuing them with supposed powers of cognition beyond their human creators. AIn’t gonna happen any time soon. And it will only annoy the humans.

Image from glasseyes view

Talk to the hand – and the computer, and the home device, and the…

At the core of what is going to drive this new wave of AI adoption is the “conversational interface.”* We spoke a bit about bots, and how the more sophisticated ones are attempting to be conversationally savvy. But the real action around conversational interfaces is best shown by Google’s recent hardware moves.

Talk to the Oracle
I’ve always thought of Google as the Oracle, in the antiquity sense of “a person or agency considered to provide wise and insightful counsel or prophetic predictions or precognition of the future, inspired by the gods.” Ask and ye shall learn (don’t tell me you haven’t asked Google random questions and wondered what you would find out).

AI is driven by knowledge and learning, which Google, as the Oracle, has been amassing for years. Google has made it clear that their new hardware will be the gateway to that Oracle, that AI. And by making that hardware, they can define the experience of conversing with that AI.

Google getting into hardware is a wakeup call for device and software manufacturers who have been dabbling with things that could be driven by AI. There was an expectation that Google would eventually start building its own hardware, going beyond just Android licensing, though even Walt Mossberg admits he missed AI as the motivation behind Google’s hardware interest.

As an aside, wondering from my perspective as a software person at a hardware manufacturer: is the hardware simply Google’s way to set a reference for how it sees the future of AI, of the Google Oracle of knowledge, or will Google actually build out a whole range of products? How far will Google go with hardware?

Siri-ously great conversation
The other dabblers in AI have been building capabilities into their voice-driven interfaces in their devices – Apple with Siri, Microsoft with Cortana, and others. But these guys do not necessarily have the knowledge and learning that Google has amassed.

Also, Apple squandered its AI lead with Siri. It seems that Apple has been focused on Siri as a UI tool rather than as an extension of users wanting to do things, find things, know things. As Google’s move shows, Siri’s future is all about the capability of the AI behind it.

I’m not bad. I’m only drawn that way.
I am perhaps being unfair to Apple. Keep in mind the origins and soul of Google and Apple. Google is the ultimate bot, amassing information and making it available to the world. They really do not care about the humans, except insofar as Google can serve them and answer their questions.

Apple, on the other hand, is about crafting the amazing experience I have with my photos, my music, and, increasingly, my people (though I don’t see Apple getting social communication any more ‘right’ than Google can). Yet – and this is relevant for building AI – Apple is not about communicating with data, information, knowledge.

So I ask: Google is great at knowledge, but can it nail the hardware experience? Apple is great at the hardware experience, but can it nail the AI experience?

Talk to the morsels
Amazon and Facebook also want to get into the hardware and AI game, but how does that fit their origins, that narrative they have written for themselves? Facebook does have a lot of knowledge, but of people and their actions and the social tokens they exchange, not the hardcore information that Google has. Amazon also has amassed information, but of what folks buy and consume; there is no insight into communication between people.

Yes, each of these behemoths has a piece of the puzzle. Of course, having been outside them my whole life, I have always had the philosophy of tying the morsels together. That also presents another path, though I’m not sure how we’ll traverse the various flavors of AI and repositories of knowledge and actions.

Image from Mary McCoy. Hat-tip to Stephanie Rieger for the conversational interface link.

*OK, I’ve been burning to say this about conversational interfaces: speech doesn’t work in many of our daily places. We’ll still need to keep on writing our questions to the Oracle – conversationally, of course. That’d be one heck of a command line. Though Google is already showing much of this powerful command line in its simple but everlasting search box.



Bots all the way down, and a puppy reference

Bots have been around for a long time. Anyone who knows the history of AI remembers ELIZA, a conversation-mimicking AI that made you feel you were talking to a doctor or, more like, an amateur psychologist. We should not be surprised that, since then, every new interactive platform has seen a proliferation of bots.

A botsplosion
In the past year, there’s been a renewed interest in bots on various messaging platforms. These have gone beyond the automated accounts that tweet for the Tower Bridge or the Shipping Forecast, or announce the arrival of dictators at Geneva Airport. Infused with more understanding of language and a dash of AI, these bots can now bait bigoted extremists, or tweet negative Trump quotes – with the source.

Science Friday did a great segment on AI bots and talked to some of their creators. While some of these bots are not, by definition, utilitarian, they are quite imaginative and creative, able to spark wonder and make you think, despite how their output comes out.

Getting beyond amusement, WeChat (no surprise) and Baidu are taking it up a notch. On WeChat there are bots that can do image recognition or mimic a voice-recognition assistant (I’ll get to voice-driven AI agents in a later post). And Baidu, to circle back to ELIZA, has created a docBot to help start the process (triage?) for folks looking for a doc or medical info.

Getting as smart as a puppy
Things are getting interesting. Duolingo is building chat bots to help folks learn a language. These are basically tutorBots that users can message with to practice. The idea is that the bot is an “eternally patient, nonjudgmental, on-demand instructor.” Though, from what I hear about some “patient and nonjudgmental” AI assistants (*cough* Amy), they can be extremely annoying.

As these bots get more AI power, I’ll be on the lookout for those that try too much. So far, my experience with machine learning has been that AI systems are usually not smarter than their creators.* But that’s fine. Long ago, a bunch of us concluded that getting too smart could be an annoyance, hence we suggested that ‘smart’ systems should Be as Smart as a Puppy.

For me, the best systems have been the ones that have augmented my intelligence – by being able to sift through large data sets, make broad connections, and present insights in a way to inform me (or the doc, or the data scientist) – rather than trying to supplant my intelligence.

Rather than try to build a bot that thinks it’s as capable as a human, make me a bot that makes the humans in the mix work better. Don’t try to outthink me.

Are you using bots in your daily routine, or are they still a curious creature evolving in curious directions?

Image of my BASAAP drawing that Matt Jones was kind enough to immortalize back in 2007.

*Oh, and if you think that kids end up smarter than their creators, you’re right. But I don’t think any of these bots understand what makes kids smarter than their parents. That, I think, is at the core of human intelligence and will still take some time to sort out.

Hey, John McCarthy, AI has finally gone mainstream

You know the feeling when someone mentions something and then you see it everywhere? Well, that’s what happened to me with AI. I wasn’t giving it any attention until someone pointed out a few weeks back that it was a big up-and-coming topic (to be fair, they had already pointed it out to me as a big up-and-coming topic more than two years ago). OK, machine learning was a big part of what I was selling these past two years, and I was at IBM when Watson hit the stage, so it’s not like I was totally clueless. But no sooner do I start doing some research than a bunch of big announcements (like the one above) happen. So excuse me if I sound a bit out of touch at the start. I’m playing catch-up to smarties like you.

Say it is so, John
John McCarthy was one of the founders of AI. Back in the summer of 1956 he organized the Dartmouth Conference that kicked off AI as a field.

By the time I started reading about AI back in the ’80s, the field had come a long way, but I wouldn’t say it was mainstream. Nonetheless, AI has been simmering in the background, and the age of Big Data seems to have brought it to the fore and ushered in the Era of Mainstream AI.

AI now
In the past few weeks, there have been many announcements around AI, and, with the Partnership on AI (logo above, announcement link below), large corporations are putting their money where their mouths are. What’s more, these corporations are also releasing useful products that can truly claim a foundation in AI.

A most telling comment for me has been Google pointing out that for years now, products have been pushed to be “mobile-first.” Now, as Google CEO Sundar Pichai has been saying for a few months, “We will move from mobile-first to an AI-first world.”

Why is the message louder now? A slew of AI-based products has been released in the past few months by Apple, Google, Microsoft, Amazon, and Facebook. The time for talk is over – real, AI-based conversational agents, backed by troves of data and responsive software, hardware, and networks, are here. And, delightfully, competition will accelerate the usefulness of these products.

OK, I might be late to the party, but I have noticed a growth in the number of browser tabs I have open to AI topics, products, and people; I can no longer sit quietly as these exciting developments happen.

But I am not interested in understanding AI only from the perspective of daily news. I want to understand AI in action, the demos and stories of the uses of AI in all forms. I want to understand the interaction of people and AI, now and in the past, the movers and shakers, and those affected, whether for good or bad. I want to understand the culture of AI, how it is portrayed in movies, books, and popular culture. I want to understand the science of AI, no matter how unintelligible. And I want to understand how to use AI-driven tools in my exploration of AI (a dog-food kinda thing; let’s see how that goes).

The AI story is right in front of me and, for some strange reason, I now feel compelled to share this story from this perspective.

Let’s see. For sure I don’t need another compulsion I just have to write about. Expect a barrage of posts as I clear out my tabs. 🙂

September 28, 2016, NEW YORK — Amazon, DeepMind/Google, Facebook, IBM, and Microsoft today announced that they will create a non-profit organization that will work to advance public understanding of artificial intelligence technologies (AI) and formulate best practices on the challenges and opportunities within the field.

Source: Industry Leaders Establish Partnership on AI Best Practices | Partnership on Artificial Intelligence to Benefit People and Society

Thoughts on Aetna’s Apple Watch move

Aetna announced they are going to subsidize the Apple Watch for select large employers and individual customers this open enrollment season. Aetna will also provide the Apple Watch to nearly 50,000 employees in their wellness reimbursement program.

This is big.

For this to succeed and stick, Aetna will need to be able to measure the impact of these devices on their business and, of course, on the health of people. Now that these devices are starting to get attention, we’re starting to truly discover whether what we believe is true or based on conjecture (the 10,000-steps myth, for example). I am not convinced there’s enough data to show that devices alone are the answer.

I feel these devices are only useful in the context of full engagement with the patient and person – so far, these devices have been seen, incorrectly, as point solutions (did you see that Microsoft exited the fitness-band hardware biz?). My thoughts tie into a holistic digital therapy: these devices work best as part of a care plan, with coaches, data, insight, and understanding. It’ll be interesting to see how Aetna melds these devices with its apps.

[OK, and I haven’t even begun to scratch the surface on biz models and whatever engagement model Aetna or their customer organizations might do to engage with the members. Lots of room for innovation there.]

Aetna’s iOS-exclusive health apps will aim to simplify the healthcare process through a number of features, including:
– Care management and wellness, to help guide consumers through health events like a new diagnosis or prescription medication with user-driven support from nurses and people with similar conditions.
– Medication adherence, to help consumers remember to take their medications, easily order refills and connect with their doctor if they need a different treatment through their Apple Watch or iPhone.
– Integration with Apple Wallet, allowing consumers to check their deductible and pay a bill.
– Personalized health plan on-boarding, information, messaging and decision support to help Aetna members understand and make the most of their benefits.

Source: News Releases – Investor Info | Aetna [Hat-tip to Rock Health for the link]

What’s the healthcare equivalent of reach, throw, row, go?

The other day we were talking about my wife’s mobile veterinary practice, and I started mapping what she does to human healthcare – and reach, throw, row, go popped into my head.

My wife used to be a pool lifeguard. She told me that if something happened in the water, the level of engagement was reach (can you use a pole or arm to grab the swimmer?), throw (are they close enough for you to throw a lifesaver?), row (can you take a boat or board to the swimmer?), go (if all else fails, go in after the swimmer).

I’ve been thinking quite a bit about patient engagement lately. I truly believe that the way out of the mess we have in healthcare is through deeper engagement with the healthcare system. I call it high-touch healthcare. But I don’t mean bigger hospitals and more doctors. I’ve always taken a broader, multi-channel and longitudinal view of patient engagement.

What do I mean?

The thought I’ve been developing is that there is a gradient of involvement in the healthcare system, from independent (for example, looking up information) to complete (for example, surgery). And there are layers to the involvement, taking in the patient, their circle of support, clinicians, clinics and hospitals, visiting caregivers, and, for me, data.

And that’s where the reach, throw, row, go comes in. Each patient is at some level of need (from none to complete dependence) and we need to decide if we need to reach (self-serve websites, mobile devices), throw (visiting caregivers, training for family members), row (clinics and urgent care), go (hospitals, hospices).

I think this framing comes from my background in marketing and in product development, where you can’t just do one thing, but need to think of the user journey and all the touch-points, and provide the right engagement for the right issue.

To me this sounds obvious, but I am never sure if healthcare systems really get it. What do you think? Do they? Do you have examples?

Image from Vasse Nicholas, Antoine

AIs, writing, and computational literature

A few months back, I stumbled upon Inkitt. Well, more like they stumbled upon me – they were looking for someone with a background in analytics and in writing to build models around the stories in their community. The goal was to build analytic models that would understand what a good story is – basically, an AI submissions editor, an AI slush-pile reader.

On the one hand, Inkitt is building a community of writers (much like Wattpad). On the other (the business model), they are selecting the top novels to offer to book publishers. Should a publisher not take the manuscript, Inkitt, because they already think the novel is good enough for a publisher, will publish the book themselves. If the book sells well, Inkitt will return to the publisher once more. If the book doesn’t sell well, the rights revert to the author.

Of course, the key is to find the good novels (isn’t success in publishing always about good stories?). The community will bubble some of this up, but perhaps a model that learns from the community what is good could accelerate the discovery of new novels.
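
To show the shape of the idea (and only the shape – the features below are my guesses at the kind of engagement signals such a community might expose, not Inkitt’s actual data model), here’s a hedged sketch of an AI slush-pile reader that learns from community behavior and then ranks new submissions:

```python
# A sketch of a slush-pile ranker: learn "publishable" from community
# engagement signals, then surface the most promising manuscripts.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n_stories = 500

# Hypothetical engagement signals per story (all invented for illustration).
features = np.column_stack([
    rng.random(n_stories),      # completion rate
    rng.poisson(2, n_stories),  # re-reads per reader
    rng.poisson(5, n_stories),  # shares
])

# Pretend editors hand-labeled past stories as publishable or not.
publishable = (features[:, 0] + 0.1 * features[:, 1]
               + rng.normal(0, 0.3, n_stories) > 0.9).astype(int)

ranker = LogisticRegression().fit(features, publishable)

# Score the slush pile and read the top of it first.
scores = ranker.predict_proba(features)[:, 1]
print("top of the pile:", np.argsort(scores)[::-1][:5])
```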

Building models of literature
I found this intriguing and started looking into computational literary analysis, also known as Digital Humanities (there’s even a journal). I uncovered a long history of work to make sense of different forms of writing, being able to analyze writing as a scholar would (here’s a recent article from Berkeley).

IBM has championed the concept of “cognitive computing“, a third wave of computing after the first two waves of tabulation and programmatic computing. In cognitive computing, systems are no longer programmed by human-generated rules, but are taught through machine learning and models trained from real data (and plenty of nudging from human specialists).

We do this at work – we feed a corpus of text into our system, along with whatever ontologies our experts have, to give some semblance of meaning to the text (that’s the hard work some people gloss over), and the system builds a model of understanding, pulling together the relevant topics (you can see it in action here). This is how organizations are getting better at understanding sentiment and tracking leading topics, going beyond keywords and rules to build a responsive system that no human alone could build (though don’t get swayed by the hype, as this very good article warns).
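
For a small taste of what “pulling together the relevant topics” looks like in code – minus the expert ontologies, which are the hard part – here is textbook topic modeling on a toy corpus. This is the standard scikit-learn technique, not our production system:

```python
# Textbook topic extraction: count the words, fit LDA, read off the topics.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "the patient was prescribed medication for chronic pain",
    "the doctor reviewed the patient chart and lab results",
    "the team shipped the new software release on friday",
    "the release fixed bugs in the mobile software build",
]

vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)

vocab = vec.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_words = [vocab[j] for j in topic.argsort()[::-1][:4]]
    print(f"topic {i}:", top_words)
```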

So how are folks teaching systems about story? By giving them something to read. Facebook is teaching its system by feeding it children’s books (see the reading list here). Google has been feeding a system thousands of romance novels. Alas, these two companies are not necessarily trying to build a model of what a great children’s book or romance novel is. They are trying to teach their systems how humans converse, to better provide conversational services (bots!). Though, as many parents of early readers know, what goes in is what comes out, and young conversationalists are quite impressionable (read about the Microsoft bot). But these systems will end up being as smart as a puppy. Here’s Google’s system with some exercises that look like beatnik poetry.

Folks have also been going beyond conversation and having such systems actually write novels. For example, for NaNoWriMo (National Novel Writing Month), writers spend the month of November writing a 50,000-word story (quantity over quality). NaNoGenMo (National Novel Generation Month) is a riff off of NaNoWriMo: participants build programs that generate 50,000-word stories (hm, I wonder if something mechanical would count). The exercise generated some quite fun results (not to mention the call-outs to @hugovk, whom I know, and I’m not surprised he dove into this). I am not sure how many of these were programmatic rather than cognitive-like – more human programming cleverness than machine originality.
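
If you’re curious what the “human programming cleverness” flavor looks like, the classic NaNoGenMo starting point is a word-level Markov chain – a few lines of plain Python that will happily march toward 50,000 words. This is my own illustrative sketch, not any particular entry:

```python
# A word-level Markov chain: map each run of words to what follows it,
# then walk the map, reseeding whenever we hit a dead end.
import random
from collections import defaultdict

def build_chain(text, order=2):
    """Map each `order`-word run to the words observed after it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, order=2, n_words=50):
    """Walk the chain until we have n_words of 'novel'."""
    out = list(random.choice(list(chain)))
    while len(out) < n_words:
        successors = chain.get(tuple(out[-order:]))
        if not successors:                       # dead end: fresh seed
            out.extend(random.choice(list(chain)))
            continue
        out.append(random.choice(successors))
    return " ".join(out)

corpus = ("the river ran past the town and the town grew "
          "around the river and the river kept running ") * 20
print(generate(build_chain(corpus), n_words=40))
```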

Does it matter who writes it?
I think the distinction between an all-machine, all-human, or hybrid writer is irrelevant. Already, financial and sports news reports are written by machines. I received spam that reminded me of Burroughs’ cut-up fiction. A machine-generated novel recently made it through the first round of a literary contest. To me, if the story is good, does it matter who wrote it? Rather than ponder _if_ an AI can write a novel, we should be thinking about how we live in a world where AIs write or help write novels.

I have just spent the past many years feeding and encouraging a writer (human, that is). It’s a joy to share books, writing, discuss plot and style, and practice, practice, practice. AIs will be the same – we, the humans, will give them the tools to learn and grow and find their voice. What’s wrong with that?

I, for one, welcome my new novelist overlords.

Now, excuse me, as I point my AI to go play on Wattpad.

Image by Tony Delgrosso

Recipe: How I make yogurt

I’ve been meaning to post this for a long time. The way I make yogurt was inspired by Vaughn Tan (from a meet-up back in 2012!). The philosophy he shared, and to which, as a biologist, I already subscribed, was to understand fermentation as something a community of different organisms does. For yogurt, different bacteria have peak activity at different temperatures, each eating different sugars in the milk matrix and preparing the matrix for the next bacteria as the temperature declines. That is why I choose to wrap the fermenting bugs in a towel and let the temperature fall naturally, rather than use some machine that keeps the temperature at one setting.*

The recipe
I usually make yogurt 1 gallon at a time. Just simpler for me, and matches how much we eat. I also put the yogurt in Ball (or Mason) jars, but of course, you can put it into any container you are comfortable with. And, as most yogurt folks do, I get my starter from the previous batch; though, sometimes the wife buys some different yogurt and I mix that in as well.

– 1 gal whole milk
– 4 heaping tablespoons of starter culture from the previous batch (one tablespoon per quart)

1) Heat milk
Pour the milk into a pot. Set to mild heat and set the temp alarm to 75-77°C (about 167-171°F).

OK, call me crazy, but I read some weird suggestion to rub an ice cube on the inside bottom of the pot to avoid scorching. It seems to work. I don’t think it has anything to do with cooling, of course; I think it has to do with not pouring the milk into a dry pot – the water forms a layer (adhesion?) so the milk isn’t the only thing touching the metal. I don’t know. But it seems to reduce scorching.
[Image: heating the milk]

One other thing that I can’t recommend strongly enough: get a digital thermometer. I got this Polder from Amazon. I like it because it can do °F and °C and has a temperature alert. As you can see, I use °C for making yogurt (I learned about bacteria only in °C, though I learned beer brewing in °F – crazy, I know). In any case, the digital thermometer has given me very good control of the temperature and has facilitated production and improved quality and repeatability.

I set an alert to 75-77°C so that I don’t forget about the milk and let it boil over.

2) Cool milk
When the temperature of the milk hits 77-80°C (170-176°F), I take it off the heat and let it sit until the temp comes down to around 55°C (about 130°F). For a gallon of milk, this usually takes 20 minutes. I like this slow cooling because (I think) it lets the milk proteins and oligosaccharides slowly loosen up and get intertwined, so you have a well-set yogurt. Indeed, milk heated and cooled like this usually sets and tastes better than when I boil the milk.

While milk is cooling, I prepare the jars and warm the starter (next steps).

3) Prepare containers
I usually use quart-sized Ball jars. You can also reuse quart-sized plastic commercial yogurt containers. Just make sure the containers were washed in a dishwasher.

To prep the clean containers for yogurt, I give them a rinse with the hottest water I can handle and let them drip dry. For 1 gallon, I use four jars and have an extra smaller one ready for any overage. This works well, as we eat the yogurt from the larger jars and when we open the smaller jar, it’s time for a new batch.

Note: unlike when making preserved foods in Ball jars, I reuse the covers, so long as they are not rusted. I do this because the dominant bugs are your yogurt bugs, and you’re not preserving things for months. But if that gives you the heebie-jeebies, then do what works for you.

[Images: clean jars; jars drip-drying]
4) Warm starter
As the milk comes down to 55°C (130°F), you will want to bring your starter to temperature. I use 1 heaping tablespoon of starter (saved from the previous batch) per quart (therefore, 4 for a gallon of milk). Put the starter in a bowl, pull out a cup of the heated milk (I usually rinse the cup first in hot water), let it cool enough to handle (less than 60°C/140°F), then add it to the starter to warm it up. Gently mix to smoothness.
[Image: warming the starter]
Note: That gunk on the cup is milk skin. Some folks like to skim it off (just touch it with a spoon and pull it out). I don’t mind and just leave it.

5) Inoculate milk
You will inoculate at 55°C (130°F). When your milk is around 55°C, add the bowl of warmed starter and gently mix it in.

6) Pour into container
As soon as you can, pour the inoculated milk into your containers and close them.

[Images: full jars; capped jars]
7) Store containers
Wrap the containers together in a towel and store them in a place where they won’t get disturbed for at least 9 hours. Wrapping them together lets them share the heat, the towel doing enough of a job to keep the heat in and let it fall gradually (remember the bacteria having their own peak activity temperatures?). I usually leave the containers overnight, in a closet or cabinet (note the home brew peeking from behind the towel).
[Images: jars going into the towel; wrapped bundle]
8) When incubation is done, put containers in fridge
I usually tip the jars to see how well the yogurt has set. And, for me, one of the most exciting parts is that in the morning the jars are still warm from the fermentation activity.

You could enjoy them before putting them in the fridge, but I always refrigerate them for a few hours to firm them up before eating.


And remember to save a bit for the next batch.

– The digital probe thermometer is key. Mine’s a Polder Digital In-Oven Thermometer – $25 from Amazon. I use it for more than yogurt, such as for grilling and making home-brew.
– I started my culture with Stonyfield, which has six strains in it, and have since added Chobani and Fage along the way. You can use any yogurt with live cultures (it’ll say so on the container); though, my inclination is the more strains the better.
– Anything that touches non-inoculated milk and isn’t being heated, I rinse in hot water. You don’t need to be sterile, just clean. The cultures work fast and overwhelm anything else. Indeed, like I said, the containers should be warm when you check them at the end of fermentation – evidence of lots of bacterial activity!



*How others make yogurt: I saw this video yesterday and could not figure out why one would spend $50 on an incubator timer when a simple towel would do.

Human permanence and nature’s flow

When I fly, I try to sit by the window. Night or day, the world from the plane is quite interesting, providing perspectives on humanity, the planet, time, and space.

Flying over the plains of the US, one can see large expanses of regularly shaped farms, straight roads, and lots of flat territory. One time, I was following a river cutting through that order, a meandering river much like the one in the photo here. And, like the one attached (alas, I didn’t have decent photos of my own), one could see the history of the river: its current oxbows as it meanders across the flat land; the broken-off oxbows, now lakes, where the river once flowed; and evidence, in the color and curve of the land, of where the river once ran, now covered by land subdivided into neat little human-understandable chunks.

This got me thinking of places where we have created walls on the sides of rivers (like in practically every European town), of how humans have always tried to force rivers to do their bidding, trying to freeze for all eternity what the rivers unconsciously have been doing for millennia.

And this controlling of the rivers provides a false permanence. The extent to which the rivers have left their mark on the land, even where we try to obliterate it with channels or drainage, shows a permanence of nature’s flow that we are foolish to think we can stop.

Image from Tim Gage