Talk to the hand – and the computer, and the home device, and the…

At the core of what is going to drive this new wave of adoption of AI will be the “conversational interface.”* We spoke a bit about bots, and how the more sophisticated ones are attempting to be conversationally savvy. But the real action around conversational interfaces is best shown by Google’s recent hardware moves.

Talk to the Oracle
I’ve always thought of Google as the Oracle, in the antiquity sense of “a person or agency considered to provide wise and insightful counsel or prophetic predictions or precognition of the future, inspired by the gods.” Ask and ye shall learn (don’t tell me you haven’t asked Google random questions and wondered what you would find out).

AI is driven by knowledge and learning, which Google, as the Oracle, has been amassing for years. Google has made it clear that their new hardware will be the gateway to that Oracle, that AI. And by making that hardware, they can define the experience of conversing with that AI.

Google getting into hardware is a wakeup call for device and software manufacturers who have been dabbling with things that could be driven by AI. There was an expectation that Google would eventually start building its own hardware, going beyond just Android licensing, though even Walt Mossberg admits he missed AI as the motivation behind Google’s hardware interest.

As an aside, I wonder, from my perspective as a software person at a hardware manufacturer: is the hardware simply Google’s way of setting a reference for how it sees the future of AI, of the Google Oracle of knowledge, or will Google actually build out a whole range of products? How far will Google go with hardware?

Siri-ously great conversation
The other dabblers in AI have been building capabilities into the voice-driven interfaces on their devices – Apple with Siri, Microsoft with Cortana, and others. But these guys do not necessarily have the knowledge and learning that Google has amassed.

Also, Apple squandered its AI lead with Siri. It seems that Apple has been focused on Siri as a UI tool rather than as an extension of users wanting to do things, find things, know things. If Google’s move shows anything, it’s that Siri’s future is all about the capability of the AI behind it.

I’m not bad. I’m only drawn that way.
I am perhaps being unfair to Apple. Keep in mind the origins and soul of Google and Apple. Google is the ultimate bot, amassing information and making it available to the world. They do not really care about the humans, except insofar as Google can serve and answer the humans’ questions.

Apple on the other hand is about crafting the amazing experience I have with my photos, my music, and, increasingly, my people (though I don’t see Apple getting social communication any more ‘right’ than Google can). Yet, and this is relevant for building AI, Apple is not about communicating with data, information, knowledge.

So I ask: Google is great at knowledge, but can it nail the hardware experience? Apple is great at the hardware experience, but can it nail the AI experience?

Talk to the morsels
Amazon and Facebook also want to get into the hardware and AI game, but how does that fit their origins, that narrative they have written for themselves? Facebook does have a lot of knowledge, but of people and their actions and the social tokens they exchange, not the hardcore information that Google has. Amazon also has amassed information, but of what folks buy and consume; there is no insight into communication between people.

Yes, each of these behemoths has a piece of the puzzle. Of course, having been outside them my whole life, I have always had the philosophy of tying the morsels together. That also presents another path, though I am not sure how we’ll traverse the various flavors of AI and repositories of knowledge and actions.

Image from Mary McCoy. Hat-tip to Stephanie Rieger for the conversational interface link.

*OK. I’ve been burning to say this on conversational interfaces. Speech doesn’t work in many of our daily places. We’ll still need to keep on writing our questions to the Oracle, conversationally, of course. That’d be one heck of a command line. Though Google is showing much of this powerful command line in their simple but everlasting search box.



Bots all the way down, and a puppy reference

Bots have been around for a long time. Anyone who knows the history of AI remembers ELIZA, a conversation-mimicking AI that made you feel you were talking to a doctor, or, more like, an amateur psychologist. We should not be surprised that, since then, with every new interactive platform, there has been a proliferation of bots.

A botsplosion
In the past year, there’s been a renewed interest in bots on various messaging platforms. These have gone beyond the automated accounts that tweet the Tower Bridge openings or the Shipping Forecast, or tweet the arrival of dictators at Geneva Airport. Infused with more understanding of language and a dash of AI, these bots now can bait bigoted extremists, or tweet negative Trump quotes – with the source.

Science Friday did a great segment on AI bots and talked to some creators of them. While some of these bots are not, by definition, utilitarian, they are quite imaginative and creative, able to spark wonder and make you think, despite how they come out.

Getting beyond amusement, WeChat (no surprise) and Baidu are taking it up a notch. On WeChat there are bots that can do image recognition or mimic a voice-recognition assistant (I’ll get to voice-driven AI agents in a later post). And Baidu, to circle back to ELIZA, has created a docBot to help start the process (triage?) for folks looking for a doc or medical info.

Getting as smart as a puppy
Things are getting interesting. Duolingo is building chat bots to help folks learn a language. These are basically tutorBots that users can message with to practice. The idea is that the bot is an “eternally patient, nonjudgmental, on-demand instructor.” Though, what I hear about some “patient and nonjudgmental” AI assistants (*cough* Amy) is that they can be extremely annoying.

As these bots get more AI power, I’ll be on the lookout for those that try too much. So far, my experience with machine learning has been that AI systems are usually not smarter than their creator.* But that’s fine. Long ago, a bunch of us concluded that getting too smart could be an annoyance, hence we suggested that ‘smart’ systems should Be as Smart as a Puppy.

For me, the best systems have been the ones that have augmented my intelligence – by being able to sift through large data sets, make broad connections, and present insights in a way to inform me (or the doc, or the data scientist) – rather than trying to supplant my intelligence.

Rather than try to build a bot that thinks it’s as capable as a human, make me a bot that makes the humans in the mix work better. Don’t try to outthink me.

Are you using bots in your daily routine, or are they still a curious creature evolving in curious directions?

Image of my BASAAP drawing that Matt Jones was kind enough to immortalize back in 2007.

*Oh, and if you think that kids end up smarter than their creators, you’re right. But I don’t think that any of these bots understand what makes kids be smarter than their parents. That, I think, is at the core of human intelligence that will still take some time to sort out.

Hey, John McCarthy, AI has finally gone mainstream

You know the feeling when someone mentions something and then you see it everywhere? Well, that’s what happened to me with AI. I wasn’t giving it any attention until someone pointed out a few weeks back that it was a big up-and-coming topic (to be fair, they had already pointed it out to me as a big up-and-coming topic two-plus years ago). OK, machine learning was a big part of what I was selling these past two years, and I was at IBM when Watson hit the stage, so it’s not like I was totally clueless. But no sooner did I start doing some research than a bunch of big announcements (like the one above) happened. So excuse me if I sound a bit out of touch at the start. I’m playing catch-up to smarties like you.

Say it is so, John
John McCarthy was one of the founders of AI. Back in the summer of 1956 he organized the Dartmouth Conference that kicked off AI as a field.

By the time I started reading about AI back in the ’80s, the field had come a long way, but I wouldn’t say it was mainstream. Nonetheless, AI has been simmering in the background, and the age of Big Data seems to have brought AI to the fore and ushered in the Era of Mainstream AI.

AI now
In the past few weeks, there have been many announcements around AI, and, with the Partnership on AI (logo above, announcement link below), large corporations are putting money where their mouths are. What’s more, these corporations are also releasing useful products that can truly claim a foundation in AI.

A most telling comment for me has been Google pointing out that for the last many years products have been pushing to be “mobile-first.” Now, and Google CEO Sundar Pichai has been saying this for a few months, “We will move from mobile-first to an AI-first world.”

Why is the message louder now? A slew of AI-based products have been released in the past few months from Apple, Google, Microsoft, Amazon, and Facebook. The time for talk is over – real, AI-based, conversational agents, backed by troves of data and responsive software and hardware and networks are here. And, delightfully, competition will accelerate the usefulness of these products.

OK, I might be late to the party, but I have noticed a growth in the number of browser tabs I have open to AI topics, products, and people; I can no longer sit quietly as these exciting developments happen.

But I am not interested in only understanding AI from the perspective of daily news. I want to understand AI in action, the demos and stories of the uses of AI in all forms. I want to understand the interaction of people and AI, now and in the past, the movers and shakers, and those affected, whether for good or bad. I want to understand the culture of AI, how it is portrayed in movies, books, and popular culture. I want to understand the science of AI, no matter how unintelligible. And I want to understand how to use AI-driven tools in my exploration of AI (a dog-food kinda thing, let’s see how that goes).

The AI story is right in front of me and, for some strange reason, I now feel compelled to share this story from this perspective.

Let’s see. For sure I don’t need another compulsion I just have to write about. Expect a barrage of posts as I clear out my tabs. 🙂

September 28, 2016NEW YORK — Amazon, DeepMind/Google, Facebook, IBM, and Microsoft today announced that they will create a non-profit organization that will work to advance public understanding of artificial intelligence technologies (AI) and formulate best practices on the challenges and opportunities within the field.

Source: Industry Leaders Establish Partnership on AI Best Practices | Partnership on Artificial Intelligence to Benefit People and Society

Thoughts on Aetna’s Apple Watch move

Aetna announced they were going to subsidize the Apple Watch for select large employers and individual customers this open enrollment season. Aetna will also provide the Apple Watch to nearly 50,000 employees in their wellness reimbursement program.

This is big.

For this to succeed and stick, Aetna will need to be able to measure the impact of these devices on their business and, of course, on the health of people. Now that these devices are starting to get attention, we’re starting to truly discover whether what we believe is true or based on conjecture (the 10,000 steps myth, for example). I am not convinced there’s enough data to show that devices alone are the answer.

I feel these devices are only useful in the context of full engagement with the patient and person – so far these devices have been seen, incorrectly, as point solutions (did you see Microsoft exited the fitness band hardware biz?). My thoughts tie into a holistic digital therapy: these devices work best as part of a care plan, with coaches, data, insight, and understanding. It’ll be interesting to see how they meld these devices with the Aetna apps.

[OK, and I haven’t even begun to scratch the surface on biz models and whatever engagement model Aetna or their customer organizations might do to engage with the members. Lots of room for innovation there.]

Aetna’s iOS-exclusive health apps will aim to simplify the healthcare process through a number of features, including:
– Care management and wellness, to help guide consumers through health events like a new diagnosis or prescription medication, with user-driven support from nurses and people with similar conditions.
– Medication adherence, to help consumers remember to take their medications, easily order refills, and connect with their doctor if they need a different treatment, through their Apple Watch or iPhone.
– Integration with Apple Wallet, allowing consumers to check their deductible and pay a bill.
– Personalized health plan on-boarding, information, messaging, and decision support to help Aetna members understand and make the most of their benefits.

Source: News Releases – Investor Info | Aetna [Hat-tip to Rock Health for the link]

What’s the healthcare equivalent of reach, throw, row, go?

The other day we were talking about my wife’s mobile veterinary practice, and I started mapping what she does to human healthcare, and reach, throw, row, go popped into my head.

My wife used to be a pool lifeguard. She told me that if something happened in the water, the level of engagement was reach (can you use a pole or arm to grab the swimmer?), throw (are they close enough for you to throw a lifesaver?), row (can you take a boat or board to the swimmer?), go (if all else fails, go in after the swimmer).

I’ve been thinking quite a bit about patient engagement lately. I truly believe that the way out of the mess we have in healthcare is through deeper engagement with the healthcare system. I call it high-touch healthcare. But I don’t mean bigger hospitals and more doctors. I’ve always taken a broader, multi-channel and longitudinal view of patient engagement.

What do I mean?

The thought I’ve been developing is there is a gradient of involvement in the healthcare system from independent (for example, looking up information) to complete (for example, surgery). And there are layers to the involvement, taking in the patient, their circle of support, clinicians, clinics and hospitals, visiting caregivers, and, for me, data.

And that’s where the reach, throw, row, go comes in. Each patient is at some level of need (from none to complete dependence) and we need to decide if we need to reach (self-serve websites, mobile devices), throw (visiting caregivers, training for family members), row (clinics and urgent care), go (hospitals, hospices).

I think this thinking comes from my background in marketing and in product development where you can’t just do one thing, but need to think of the user journey, all the touch-points, and provide the right engagement for the right issue.

To me this sounds obvious, but I am never sure if healthcare systems really get it. What do you think? Do they? Do you have examples?

Image from Vasse Nicholas, Antoine

AIs, writing, and computational literature

A few months back, I stumbled upon Inkitt. Well, more like they stumbled upon me – they were looking for someone with a background in analytics and in writing to build models around the stories in their community. The goal was to build analytic models that would understand what a good story was – basically, to create an AI submissions editor, an AI slush-pile reader.

On the one hand, Inkitt is building a community of writers (much like Wattpad). On the other (the business model), they are selecting the top novels to offer to book publishers. Should a publisher not take the manuscript, Inkitt – because they already think the novel is good enough for a publisher – will publish the book themselves. If the book sells well, Inkitt will return to the publisher once more. If the book doesn’t sell well, the rights revert to the author.

Of course, the key is to find the good novels (isn’t success in publishing always about good stories?). The community will bubble some of this up, but perhaps having a model that learns from the community what is good could accelerate the discovery of new novels.

Building models of literature
I found this intriguing and started looking into computational literary analysis, also known as Digital Humanities (there’s even a journal). I uncovered a long history of work to make sense of different forms of writing, being able to analyze writing as a scholar would (here’s a recent article from Berkeley).

IBM has championed the concept of “cognitive computing,” a third wave of computing after the first two waves of tabulation and programmatic computing. In cognitive computing, systems are no longer programmed by human-generated rules, but are taught through machine learning and models trained on real data (and plenty of nudging from human specialists).

We do this at work – we feed a corpus of text into our system, along with what ontologies our experts have to give some semblance of meaning to the text (that’s the hard work some people gloss over), and the system builds a model of understanding, pulling together the relevant topics (you can see it in action here). This is how organizations are getting better at understanding sentiment, tracking leading topics, going beyond keywords and rules to build a responsive system that no human alone can build (though, don’t get swayed by the hype, as this very good article warns).
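As a miniature of that corpus-to-topics idea, here is a toy sketch – purely illustrative, nothing like the actual system described above – that weights terms by how distinctive they are in each document (a bare-bones TF-IDF), which is roughly where "going beyond keywords" begins:

```python
import math
from collections import Counter

def tfidf_top_terms(docs, n=3):
    """Toy TF-IDF: score each term by its frequency in a document
    times how rare it is across the corpus; return top terms per doc."""
    tokenized = [doc.lower().split() for doc in docs]
    df = Counter()                      # document frequency per term
    for tokens in tokenized:
        df.update(set(tokens))
    n_docs = len(docs)
    results = []
    for tokens in tokenized:
        tf = Counter(tokens)
        scores = {t: tf[t] * math.log(n_docs / df[t]) for t in tf}
        results.append(sorted(scores, key=scores.get, reverse=True)[:n])
    return results

# Hypothetical mini-corpus for illustration
docs = [
    "patient care plan and patient engagement",
    "machine learning model training data",
    "care plan data for the model",
]
print(tfidf_top_terms(docs))
```

Real systems layer expert-built ontologies and trained models on top of this kind of term weighting; the point of the sketch is only that the signal starts with what is distinctive in the text, not with hand-written rules.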

So how are folks teaching systems about story? By giving them something to read. Facebook is teaching its system by feeding it children’s books (see reading list here). Google has been feeding a system with thousands of romance novels. Alas, these two companies are not necessarily trying to build a model for what a great children’s book or romance novel is. They are trying to teach their systems how humans converse, to better provide conversational services (bots!). Though, as many parents of early readers know, what goes in is what goes out, and young conversationalists are quite impressionable (read about the Microsoft bot). But these systems will end up being as smart as a puppy. Here’s Google’s system with some exercises that look like beatnik poetry.

Folks have also been going beyond conversation and having such systems actually write novels. For example, for NaNoWriMo (National Novel Writing Month), writers spend the month of November writing a 50,000-word story (quantity over quality). NaNoGenMo (National Novel Generation Month) is a riff off of NaNoWriMo – participants build programs that create 50,000-word stories, using computers (hm, I wonder if something mechanical would count). The exercise generated some quite fun results (not to mention the call-outs to @hugovk, whom I know – I am not surprised he dove into this). I am not sure how many of these were programmatic rather than cognitive-like – more human programming cleverness than machine originality.

Does it matter who writes it?
I think the distinction between an all-machine, all-human, or hybrid writer is irrelevant. Already, financial and sports news reports are written by machines. I received spam that reminded me of Burroughs’ cut-up fiction. A machine-generated novel recently made it through the first round of a literary contest. To me, if the story is good, does it matter who wrote it? Rather than ponder whether an AI can write a novel, we should be thinking about how we live in a world where AIs write or help write novels.

I have just spent the past many years feeding and encouraging a writer (human, that is). It’s a joy to share books, writing, discuss plot and style, and practice, practice, practice. AIs will be the same – we, the humans, will give them the tools to learn and grow and find their voice. What’s wrong with that?

I, for one, welcome my new novelist overlords.

Now, excuse me, as I point my AI to go play on Wattpad.

Image by Tony Delgrosso

Recipe: How I make yogurt

I’ve been meaning to post this for a long time. The way I make yogurt was inspired by Vaughn Tan (from a meet-up back in 2012!). The philosophy he shared, and to which I, as a biologist, already subscribed, was to understand fermentation as something a community of different organisms does. For yogurt, different bacteria have peak activity at different temperatures, each eating different sugars in the milk matrix, preparing the matrix for the next bacteria as the temperature declines. That is why I choose to wrap the fermenting bugs in a towel and let the temperature fall naturally, rather than use some machine that keeps the temperature at one setting.*

The recipe
I usually make yogurt 1 gallon at a time. Just simpler for me, and matches how much we eat. I also put the yogurt in Ball (or Mason) jars, but of course, you can put it into any container you are comfortable with. And, as most yogurt folks do, I get my starter from the previous batch; though, sometimes the wife buys some different yogurt and I mix that in as well.

– 1 gal whole milk
– 4 heaping tablespoons of starter culture from the previous batch (one tablespoon per quart)

1) Heat milk
Pour the milk into the pot. Set to mild heat, and set the temp alarm to 75-77°C (about 170°F).

OK, call me crazy, but I read some weird suggestion to rub a cube of ice at the inside bottom of the pot to avoid scorching. It seems to work. I don’t think it has anything to do with cooling, of course, but I think it has to do with not pouring the milk into a dry pot, the water forming a layer (adhesion?) so the milk isn’t the only thing touching the metal. I don’t know. But it seems to reduce scorching.
[Photo: heating the milk]

One other thing that I can’t suggest strongly enough: get a digital thermometer. I got this Polder from Amazon. I like it because it can do °F and °C and it has a temperature alert. As you can see, I use °C for making yogurt (I learned about bacteria only in °C; though I learned beer brewing in °F – crazy, I know). In any case, having the digital thermometer has allowed me to have very good control of the temperature and has facilitated production and improved quality and repeatability.

I set an alert to 75-77°C so that I don’t forget about the milk and let it boil over.

2) Cool milk
When the temperature of the milk hits 77-80°C (170-175°F) I take it off the heat and let sit until the temp comes down to around 55°C (about 130°F). For a gallon of milk, this usually takes me 20 minutes. I like this slow cooling because (I think) it lets the milk proteins and oligosaccharides slowly loosen up and get intertwined, so you have a well-set yogurt. Indeed, my milk heated and cooled like this usually sets and tastes better than when I boil the milk.

While milk is cooling, I prepare the jars and warm the starter (next steps).

3) Prepare containers
I usually use quart-sized Ball jars. You can also reuse quart-sized plastic commercial yogurt containers. Just make sure the containers were washed in a dishwasher.

To prep the clean containers for yogurt, I give them a rinse with the hottest water I can handle and let them drip dry. For 1 gallon, I use four jars and have an extra smaller one ready for any overage. This works well, as we eat the yogurt from the larger jars and when we open the smaller jar, it’s time for a new batch.

Note: unlike when making preserved foods in Ball jars, I reuse the covers, so long as they are not rusted. I do this because the dominant bugs are your yogurt bugs, and you’re not preserving things for months. But if that gives you the heebie-jeebies, then do what works for you.

[Photos: clean jars; drip-drying jars]
4) Warm starter
As the milk comes down to 55°C (130°F), you will want to bring your starter to temperature. I use 1 heaping tablespoon of starter (saved from the previous batch) per quart (therefore, 4 for a gallon of milk). Put the starter in a bowl, pull out a cup of the heated milk (I usually rinse the cup in hot water first), let it cool enough to handle (less than 60°C/140°F), then add it to the starter to warm it up. Gently mix to smoothness.
[Photo: warming the starter]
Note: That gunk on the cup is milk skin. Some folks like to skim it off (just touch it with a spoon and pull it out). I don’t mind and just leave it.

5) Inoculate milk
Inoculate at 55°C (130°F). When your milk is around 55°C, add the bowl of warmed starter and gently mix it in.

6) Pour into container
As soon as you can, add inoculated milk to your containers and close.

[Photos: full jars; capped jars]
7) Store containers
Wrap the containers together in a towel, and store them in a place where they won’t get disturbed for at least 9 hours. Wrapping them together lets them share the heat, the towel doing enough of a job to keep the heat in and let it fall gradually (remember the bacteria having their own peak activity temperatures?). I usually leave the containers overnight, in a closet or cabinet (note home brew peeking from behind the towel).
[Photos: jars going into a towel; wrapped]
8) When incubation is done, put containers in fridge
I usually tip the jars to see how well the yogurt set. And, for me, one of the most exciting parts is that in the morning, the jars are still warm from the fermentation activity.

You could enjoy them before putting in fridge, but I always put them in fridge for a few hours to firm them up before eating.


And remember to save a bit for the next batch.

– The digital probe thermometer is key. Mine’s a Polder Digital In-Oven Thermometer – $25 from Amazon. I use it for more than yogurt, such as for grilling and making home-brew.
– I started my culture with Stonyfield, which has 6 strains in it, and have since added Chobani and Fage along the way. You can use any yogurt with live cultures (it’ll say on the container); though, my inclination is the more strains the better.
– Anything touching non-inoculated milk and not being heated I rinse in hot water. You don’t need to be sterile, just be clean. The cultures work fast and overwhelm anything else. Indeed, like I said, the containers should be warm when you check them at the end of fermentation. Evidence of lots of bacterial activity!
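For quick reference, the numbers in this recipe boil down to two bits of arithmetic: the °C/°F conversion and the tablespoon-per-quart starter ratio. A small sketch (illustrative only, not part of the recipe):

```python
def c_to_f(c):
    """Convert degrees Celsius to Fahrenheit."""
    return c * 9 / 5 + 32

def starter_tbsp(quarts, per_quart=1):
    """One heaping tablespoon of starter per quart of milk."""
    return quarts * per_quart

# The recipe's two temperature checkpoints:
for label, temp_c in [("take off heat", 77), ("inoculate", 55)]:
    print(f"{label}: {temp_c} °C ≈ {c_to_f(temp_c):.0f} °F")

print(f"starter for 1 gallon (4 qt): {starter_tbsp(4)} tbsp")
```

This is why 77°C lines up with the ~170°F mark and 55°C with ~130°F mentioned in the steps above, and why a gallon batch takes 4 tablespoons of starter.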



*How others make yogurt: I saw this video yesterday and could not figure out why one would spend $50 on an incubator timer when a simple towel would do.

Human permanence and nature’s flow

When I fly, I try to sit by the window. Night or day, the world from the plane is quite interesting, providing perspectives on humanity, the planet, time and space.

Flying over the plains of the US, one can see large expanses of regularly shaped farms, straight roads, and lots of flat territory. One time, I was following a river cutting through that order, a meandering river much like the one in the photo here. And, like the one I attached here (alas, I didn’t have decent photos of my own), one could see the history of the river – its current oxbows as it meanders across the flat land; the broken-off oxbows, now lakes, where the river once flowed; and evidence, in the color and curve of the land, of where the river once ran, now covered by land subdivided into neat little human-understandable chunks.

This got me thinking of places where we have created walls on the sides of rivers (like in practically every European town), of how humans have always tried to force rivers to do their bidding, trying to freeze for all eternity what the rivers unconsciously have been doing for millennia.

And this controlling of the rivers provides a false permanence. The extent to which the rivers have left their mark on the land, even when we try to obliterate them with channels or drainage, shows a permanence of nature’s flow that we are foolish to think we can stop.

Image from Tim Gage

Hey Bruins, it’s concentração time

Every Bruins fan knows that this year our favorite team has been struggling at home. On the road, the Bruins are 23-7-3, which, by this table, ranks as the second-best on-the-road record in the league. At home, it’s a vastly different story. With a record of 15-16-5, Boston is in the bottom quartile, sitting at 23rd out of the 30 teams in the league.
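The home/away gap can be put on one scale with the NHL’s points percentage (two points per win, one per overtime loss, divided by the points available). A quick sketch using the records quoted in this post:

```python
def points_pct(wins, losses, otl):
    """NHL points percentage: earned points over available points
    (2 per win, 1 per overtime/shootout loss, 2 available per game)."""
    games = wins + losses + otl
    return (2 * wins + otl) / (2 * games)

road = points_pct(23, 7, 3)   # 23-7-3 on the road ≈ .742
home = points_pct(15, 16, 5)  # 15-16-5 at home ≈ .486
print(f"road {road:.3f} vs home {home:.3f}")
```

A .742 team on the road playing like a .486 team at home is exactly the skew the rest of this post worries about.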

Concentration time
I grew up in Brasil, living near the stadium of one of the football powerhouses, Flamengo. I recall that before every game, the men would spend the night at the stadium, a sort of retreat, or as they called it “concentração”, “concentration”.

I did a quick search on the topic this morning, wondering if teams still do it, and found out that not only does the concept have a Wikipedia entry, but it’s still a going practice in Brasil. It seems that some football teams in Europe do this to some degree as well, for example Man City. And what was really interesting: I found out even college football teams do it.

Really, I have no idea if there is any positive effect. Back in the 50s, a famous equipment manager for Botafogo (at the time a powerhouse from Rio), quipped “Se concentração ganhasse jogo, o time do presídio não perdia uma partida” – “If concentração would win games, then the prison team would never lose.”

At times, the players rebel against it (especially when their salaries aren’t being paid), but in Brasil, it’s in the contract that they need to abide by the concentração rules. And orgs still do complain that it’s a luxurious cost to place players in hotels near the home stadium the night before a home game.

In this excellent article on the topic (in Portuguese), players and coaches discuss the pros and cons and the culture around the concentração before the game. It’s not a simple decision. Some teams call players in 2 days before; when you add home games to away games, players are never in their own bed for most of the year. Then there are the technological changes that have made concentração more individual than team-based, solo activities on electronics versus group activities around games or movies.

Perhaps it is also culture that keeps Brasilian coaches more connected to this practice, worried that their boys will be out partying [of note, during the last Cup run, Germany and Holland did no such concentração – player discipline?]. Many coaches have tried to mess with the formula (as have big teams in Europe). But there is such a strong expectation of a positive effect from squirreling away the players before a home game that players, coaches, and fans point to a lack of concentração on losses.

Need to crack the problem, Claude
OK, so perhaps I, too, have a bias towards this idea of some sort of retreat before a home game. Especially when I see the difference between the Bruins’ home and away records.

What’s it going to take for the Bruins to shake this poor at home record? No one knows what’s causing this skew in the record between home and away, but we all know it needs to be solved. Going into the playoffs, the ability to win at home is even more important, especially for player confidence.

I’m not suggesting that the Bruins start acting like they are on the road for home games. Or, perhaps, I am. Perhaps what I am suggesting is that Claude, Don, and the amazing John Whitesides give this a ponder as the Bruins try to hold on to our standing through the end of the season and all the way to that final game in June.

Image from last night’s loss at home (which oddly put us in 1st place): ESPN

Pause for station identification

As I am sort of feeling my brain starting to reactivate my writing muscles, I thought now would be a good time for a station identification. No need to panic. I’ve done this before. This is my 10th pause for station identification on this site since the first one in March 2005 (and #10 seemed like an interesting milestone to point out).

Hello. My name is Charlie Schick. I am Senior Director, Healthcare, at Atigeo. I started there in a biz dev and sales role, but the role morphed into a client exec role and then into a product leadership role – stepping in where I’m needed, where I can best apply my skills, to keep the gears turning. One of the coolest offerings I am working on is building a catalog of healthcare and cyber data as part of our platform, to enrich analytics and build new insight using external data. I’ll be giving talks about this throughout the year, it seems. I’m not sure I’ll be posting much about it here, though, so feel free to ping me for more info.

Prior to Atigeo, I was at IBM, Nokia, and Boston Children’s Hospital in various roles in research, product development, sales consulting, and customer-focused go-to-market activities. During this time, I’ve designed and launched web and mobile products; provided internet, social media, and content strategy consulting; written numerous articles for online and print publications; published several biomedical research papers in leading journals; and co-authored a book on advanced phone systems. 

Biologist at heart
Oh, and I am a bio-nerd, mostly validated by my PhD in molecular and cellular biology from UMass Amherst. My bio-nerdiness is expressed in my love of certain fermented foods (I am a fermentos) and in my interest in the practical use of microbes in food, health, and interesting products. For the last few years, I have posted many things on this site and on Twitter around this fascinating topic.

And I enjoy being surrounded by PhDs at work (though the others have much more impressive PhDs, and our chief scientist has two, from MIT, no less).

Thinking and speaking and helping
This background should give you an idea of my interests and why I say what I say. Therefore, it should not be surprising that I share this experience, advising healthcare start-ups on mobile, marketing, and analytics. If you’re interested in knowing more about this, feel free to invite me to lunch or beer.

I also regularly speak in front of large audiences, sharing my experience and interests through various forms of media and design, and in the office of CxOs. Send me a note if you want to know more.

And of course, my standard disclaimer (riffing off of an ancient Cringely disclaimer)
Everything I write here on this site is an expression of my own opinions, NOT of my employer, Atigeo. If these were the opinions of Atigeo, the site would be called ‘Atigeo something’ and, for sure, the writing and design would be much more professional. Likewise, I am an intensely trained professional writer :-P, so don’t expect to find any confidential secret corporate mumbo-jumbo being revealed here. Everything I write here is public info or readily found via any decent search engine or easily deduced by someone who has an understanding of the industry.

If you have ideas that you think I might be interested in please contact me, Charlie Schick, at, for Atigeo-related matters; via my profile on LinkedIn; or via @molecularist on Twitter.

Image from William Warby