What is 777labs?

For the past 20 years, I have been helping folks in marketing and sales identify, target, build, and nurture customer relationships, market opportunities, and brand growth. I have either led or heavily influenced sales strategies, marketing efforts, or solution design and development, giving me a unique perspective on how strategy and execution cut across key areas of an organization and affect its customers.

My goal is to make this experience available through 777labs. I want to help my clients build an engagement strategy, whether the customer is another business or a consumer of a service or product. And I want to help build the content that enables the client to deliver on that strategy, be it sales content to provide the sales staff competency and credibility, or clever tweets and blog posts.

This is what I have been doing for decades, and this is what I enjoy doing.

What I offer
Marketing: Digital marketing strategy, Content strategy, Social media strategy, Marketing strategy, Marketing content, Brand building, Marketing analytics, Community management

Sales: Customer engagement strategy, Sales strategy, Sales content, Sales training, Sales analytics

Solution design: Mobile service design strategy, Web service design strategy, Product and solution marketing, Solution design strategy, Data enrichment strategy

Healthcare, in particular
While I can do these things for companies in practically any industry, I’d like to focus on one industry I have extensive experience in: healthcare. I’m particularly interested in providing guidance to clients who are not traditional healthcare companies, but who are building a healthcare vertical or are interested in figuring out how to enter the healthcare market.

Contact me
If you are a company looking to take your product or service into healthcare, or you want to grow your digital health or patient engagement activities, 777labs can help. You can contact me, Charlie Schick, at firstname.lastname@777labs.co.

Pause for station identification

aut-viam-inveniam-aut-faciam
“I will find a way or make one” – on my Harvard University chair kindly given by Gary Silverman on my departure from his lab

Through the years, each of these pauses has been a definition of where I am in that sliver of time. Alas, currently I’m exploring a few potential paths, so defining where I am in this sliver of time is important to me.

So here we go.

Me
Hello. My name is Charlie Schick. I’m passionate about the intersection of healthcare, mobile, and data; particularly how we can improve the way healthcare organizations engage with customers, patients, and families. I also advise companies on mobile, marketing, and analytics.

I have 20 years of experience engaging with customers through various roles in marketing, sales, solution design and development, and research at major brands such as IBM, Nokia, and Boston Children’s Hospital. I have also been influential in leading these brands toward innovative ways of engaging with customers, particularly through digital solutions.

What I’m doing now. Again.
My first gig out of the lab was my own company, Edubba, offering editorial consulting – running proto-blog sites, writing columns for some magazines, wordsmithing product reviews and marketing material.

That independent effort quieted down when I moved to Nokia, though I did keep working on the side – writing feature articles for organizations, a biz plan here or there. The bulk of my writing and strategy work in the past 20 years, though, has really been corporate – the Beagle; Hello Direct; the Nokia Cloud project; the Nokia corporate blog; Children’s Facebook page and blog; sales consulting and occasional writing for IBM; trying to make a difference at Atigeo.

Consultant-reborn
Now that I am on my own again, I’m going back to my first job out of the lab: I’m launching a new consultancy, 777labs. This time the scope will be broader than before, tapping into my many years of experience in the corporate world and focused on where I want to make an impact.

777labs is a customer engagement strategy consultancy helping clients identify, target, build, and nurture customer relationships, market opportunities, and brand growth. Our services cut across sales, marketing, and solution design strategy and also include the necessary tools, analytics, and content development. Our primary focus is in healthcare, including providing value to non-healthcare companies who are entering the healthcare market.

I’m excited to get back into leading this work full-time, for myself.

Thinking and speaking and helping
Beyond the new consultancy, I want to continue giving talks and running panels. I regularly speak in front of large audiences and in the offices of CxOs, sharing my experience and interests through various forms of media and design. Send me a note if you want to know more.

And of course, my standard disclaimer
(riffing off of an ancient Cringely disclaimer)
Everything I write here on this site is an expression of my own opinions, NOT of any of my clients. If these were the opinions of my clients, the site would be called ‘777labs client’s something-or-other’ and, for sure, the writing and design would be much more professional. Also, I am an intensely trained professional writer :-P, so don’t expect to find any confidential secret corporate mumbo-jumbo being revealed here. Everything I write here is public info, or readily found via any decent search engine, or easily deduced by someone who has an understanding of the industry.

If you have ideas or projects that you think I might be interested in please contact me, Charlie Schick, at firstname.lastname@molecularist.com; via my profile on LinkedIn; or via @molecularist on Twitter. And if you’re interested in working with 777labs, you can contact me at firstname.lastname@777labs.co.

Peanut butter and chocolate moment: AI goes great with…?

I have an ideation game I play called “Peanut Butter and Chocolate.” Basically, it’s mashing two seemingly unrelated things to think of how they would go together (I’m sure others have a similar technique). For example, most recently, we wondered about toilet paper (everyone needs toilet paper) and how it might go with religion (very popular) or 3D printing (also popular, though not as much as toilet paper or religion).
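(If you want to play along, the game is trivial to automate. A throwaway sketch – the idea pools are mine, pick your own:)

```python
import random

# Two unrelated idea pools; mash one from each and see what sticks.
EVERYDAY = ["toilet paper", "umbrellas", "coffee", "doorbells"]
TRENDS = ["religion", "3D printing", "AI", "drones"]

def peanut_butter_and_chocolate(seed=None):
    """Pick one item from each pool to force an odd pairing."""
    rng = random.Random(seed)
    return (rng.choice(EVERYDAY), rng.choice(TRENDS))

pair = peanut_butter_and_chocolate()
print(f"What happens when you mash {pair[0]} with {pair[1]}?")
```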

So, as is evident by the title of this post, what happens when we add AI to something? For me, I turn to two areas that are never far from my mind: healthcare and mobile.

Healthcare
I have seen machine learning being used to develop better models around readmission (yawn, isn’t it always readmissions?). What I’d like to see are more optimization solutions, such as optimizing staff, equipment, or drug usage. Or how about helping patients choose the best health plan based on their medical and resource usage history (this is a dear one to me).

Another area where I would like to see AI applied is behavioral health: can we help patients manage their mental health, and what can we provide caregivers to better manage relapses or even violence? I think we spend so much time on the Big Three – heart disease, obesity, diabetes – that we fail to hit the places that are not getting attention, such as mental health, geriatrics, or the impact of poverty on health.

Though I always come back to my original concern with AI in healthcare – will it ever be better than a good nurse armed with some good data? Watson, what’s your comment on this?

Mobile
I think back to my early years in mobile and how I used to talk about the mobile lifestyle. The success of AI in mobile will also be related to how it flows in with the mobile lifestyle. Though I think these days folks are a bit more savvy with mobile than way back when.

But there’s been an inordinate amount of focus on speech-driven agents that are really clever assistants. Yes, I am looking forward to agents talking to agents to schedule meetings, booking tickets or restaurants, and the like. Yet these agents require me to stop what I am doing and talk to them, breaking the mobile flow.

I want AI to recede into the background. I don’t want to tell the AI what to do, it should know. For example, when I schedule a meeting, don’t just tell me about the participants, but learn from me what is the usual info I collect before a meeting and summarize it for me. Or, learn from me what I like to know at the start of the day and summarize that for me. Or pay attention to what I am doing and where I am and make sure I get things done, based on my email or based on my calendar.

OK, so I am not so clear on where AI can go in mobile, but I do see we need to get beyond our fixation with bots and speech-driven agents.

Have you seen anything interesting around AI in mobile?

Image from Graham Hellewell

When AI is Artificial I: humanity, culture, art, emotion

Doesn’t all AI end up being a reflection of who we are as humans? On the practical side, I have mentioned bias in how we build AIs and the prevalence of conversational bots. But we all know of the endless numbers of books and movies with stories of AI becoming something we cannot distinguish from humans.

Is this simply the Pygmalion in all of us? We turn to external expressions to make sense of the human condition, through art, religion, science, sport, politics. Why not AI? And with so much expression imbued in an AI, might we not fall in love with it, or want it to be something we can fall in love with?

Cre-ai-tivity
I’m not going to go over all the examples of AI in art. But I would like to point you to a very interesting short movie written by an AI. Fed a corpus of sci-fi scripts, the AI, given a seed of an idea, wrote a short sci-fi script of its own. The video is the director’s and actor’s interpretation of the script.

The interesting thing is that it comes off as an off-beat movie, but with a touch of something deep that must be there. And if you think the dialogue is too off-beat, read something like the Naked Lunch, or Kafka.

And here’s a recent article on a performance of various pieces from various genres of music written by AI but performed by humans. This sparks a very interesting discussion on the balance between statistically creating music (the AI) and the human touch. The example used in the article is a pair of Mozart pieces – the one that’s all AI is all over the place, but the one with a bit of human intervention begins to have small stretches that feel like Mozart. But, of course, a fully Mozart-style piece does not emerge from the machine.

Though, one of the composers sees the AI as a collaborator rather than a composer in its own right, and that’s what is exciting to some musicians.

He points out that although the music sounds like Miles Davis, it feels like a fake when he plays it. “Some of the phrases don’t quite follow on or they trip up your fingers,” he says. This makes sense, as this isn’t music written by a human with hands sitting at a keyboard; it’s the creation of a computer. Artificial intelligence can place notes on a stave, but it can’t yet imagine their performance. That’s up to humans.

Source: A night at the AI jazz club – The Verge

My Fair AI
I think we approach the Ultimate Pygmalion in our desire to create simulacra of emotive, interactive beings. For example, there is no end to the wee AI-imbued gizmos we try to create to interact with us. Will these gizmos be as smart as a puppy, or try to do more and end up annoying? Anki’s Cozmo is the latest I’ve seen and a lot was put into the emotional intelligence of the toy.

And then there’s this very interesting story about an AI bot maker who lost a dear friend and used the texts her friend left behind to create a conversational memorial to him. The author of the article is sensitive to the emotional impact of this AI memorial, but also branches off into the areas of authenticity, grieving, personality, and the role of language.

Art is meant to get us to think about who we are as humans. The bot creator only wanted to build a digital monument to have that one last conversation with a dear friend. Yet, she touched a nerve that we could not have touched without her skill in AI and capturing a voice. Rather than create something that helps us do something or cope with something, her digital monument brings up many thoughts on humanity, culture, art, emotion. Should we build bots grounded in real personalities, as derived from their digital textual contrails? What happens to one’s voice when one has died? If our voice can persist, what does it mean to who we are, our mortality, to the ones we leave behind?

What do you think?

Image Pygmalion by Jean-Baptiste Regnault, 1786, Musée National du Château et des Trianons, from WikiCommons

Come down to earth: some hidden truths about AI

You know that a tech trend is growing when there are more conferences and training programs than you can shake a stick at. And when the trend is picked up by the amazing Science Friday, you get to hear some interesting developments and future directions.

One thing you really don’t hear often, though, is the “hidden truths.” The Verge recently wrote a very nice article highlighting three places where AI falls short: training, the bane of specialization, and the black box of how the AI works.

Machine learning in 2016 is creating brilliant tools, but they can be hard to explain, costly to train, and often mysterious even to their creators.

Source: These are three of the biggest problems facing today’s AI – The Verge

I had the good fortune to work with some very talented data scientists who were regularly using machine learning on healthcare data to understand patient behavior. Also, at IBM, I was able to learn a lot about how Watson thought and how well it worked. In all cases, the three hidden truths that The Verge had commented on were evident.

Teach me
The Verge article starts by pointing out the need for plenty of data to be able to train models. True. But for me, the real training issue is that it’s never “machine learning” in the sense of the machine learning on its own. Machine learning always requires a human – for example, to provide training and test data, to validate the learning process, to select parameters to fit, to nudge the machine to learn in the right direction. This inevitably leads to human bias in the system.
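To make that concrete, here’s a minimal sketch (scikit-learn on synthetic data; the labeling rule and parameters are my invention) where every commented step is a human decision, not the machine’s:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Human choice #1: what data to collect and how to label it.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # labels encode the labeler's rule

# Human choice #2: how to split training from test data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Human choice #3: which model family and which parameters to fit.
model = LogisticRegression(C=1.0).fit(X_tr, y_tr)

# Human choice #4: deciding whether this score counts as "good enough."
score = model.score(X_te, y_te)
print(f"held-out accuracy: {score:.2f}")
```

Every one of those choices is a place where the humans, not the machine, shape what gets learned.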

“The irony is that the more we design artificial intelligence technology that successfully mimics humans, the more that A.I. is learning in a way that we do, with all of our biases and limitation,” said University of Utah computer science researcher Suresh Venkatasubramanian in a recent statement.

Source: Computer Programs Can Be as Biased as Humans

This bias means that no matter how well created or how smart, the AI will show the bias of the data scientists involved. The article quoted above references the issue in the context of resume scanning. No, the machine won’t be less biased than the human.

Taking that thought further, I am concerned not only with bias, but that the AI possibly cannot be smarter than the human, using the methods we currently have. Yes, an AI can see patterns across huge data sets, automate certain specific complex actions, come to conclusions – but I do not think these conclusions are any better than a well-trained human’s. Indeed, my biggest question with machine learning in healthcare is whether all the sophisticated models and algorithms are any better than a well-trained nurse. And indeed, Watson really isn’t better than a doctor.

But that’s OK. These AIs can help humans sift through huge data sets, highlight things that might be missed, point humans to more information to help inform the human decision. Like Google helps us remember more, AIs can help us make more informed decisions. And, yes, Watson, in this way, is actually pretty good.

The hedgehog of hedgehogs
The Verge also points out that AIs need to be hyper-specialized to work. Train the AI on one thing and it does it well. But then the AI can’t be generalized or repurposed to do something similar.

I’ve seen this in action, where we had a product that was great in mimicking medical billing coding that a human could do. After training the system for a specific institution, using that specific institution’s data, the system would then perform poorly when given data from another institution. We always had to train to the specific conditions to get useful results. And this applied to all our machine learning models: we always had to retrain for the specific (localized) data set. Rarely were results decent on novel though related data sets.
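That experience is easy to reproduce on toy data. A hedged sketch (purely synthetic, nothing like real billing data) where a second “institution” happens to code the same signal the opposite way, and the model trained on the first falls apart:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def institution(n, flip):
    """Same features, but institution B applies the opposite coding convention."""
    X = rng.normal(size=(n, 3))
    y = (X[:, 0] > 0).astype(int)
    return X, (1 - y if flip else y)

X_a, y_a = institution(400, flip=False)  # the institution we trained on
X_b, y_b = institution(400, flip=True)   # a novel but related institution

model = LogisticRegression().fit(X_a, y_a)
acc_local = model.score(X_a, y_a)  # great on the data it was trained for
acc_other = model.score(X_b, y_b)  # poor on a related but different source
print(f"local: {acc_local:.2f}, other institution: {acc_other:.2f}")
```

The fix, as we always found, is retraining on the new institution’s data – people, time, and money each time.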

Alas, this cuts both ways. This allows us to train systems on local data to get the best result, but it also means we need people and time (and money) every time we shift to another data set.

This reminds me of Minsky’s Society of Mind. Often we can create hybrid models that provide multiple facets to be fitted to the data, allowing the hybrid collection to decide which sub-models reflect the data better. Might we not also use a society of agents, a hybrid collection of highly specialized AIs that collaborate and promote the best of the collection to provide the output?
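To make the idea concrete, a naive sketch (the agents, their names, and the confidence rule are all mine, purely illustrative): narrow specialists each report a confidence, and the collection promotes whichever one is most sure:

```python
# Each "agent" is a narrow specialist: it answers only within its specialty
# and reports how confident it is in that answer.

def billing_agent(query):
    conf = 0.9 if "billing" in query else 0.1
    return "route to billing-code model", conf

def triage_agent(query):
    conf = 0.9 if "symptom" in query else 0.1
    return "route to triage model", conf

AGENTS = [billing_agent, triage_agent]

def society_answer(query):
    """Promote the answer of the most confident specialist in the collection."""
    answers = [agent(query) for agent in AGENTS]
    return max(answers, key=lambda a: a[1])[0]

print(society_answer("patient reports a new symptom"))
```

Real hybrid models are far subtler, but the shape is the same: many specialized sub-models, plus a mechanism for deciding which one reflects the data best.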

Black box AI
The third and last point the Verge article makes is about showing your work. I’ve been in many customer meetings where we are asked what are the parameters, what is the algorithm, how does the model think? We always waved our hands: “the pattern arises from the data,” “the model is so complex, it matches reality in its own way.” But at the same time, the output we’d see, the things the machine would say, clearly showed that sometimes the model could approximate the reality of the data, but not reality itself. We’d see this in the healthcare models and would need to have the output validated and model tweaked (by a human, of course) to better reflect the reality.

While black-boxing the thinking in AI isn’t terrible, it makes it hard to correct any misconceptions. The example in the Verge article on recognizing windows with curtains is a great one. The AI wasn’t actually recognizing windows with curtains; it had learned that rooms with beds tend to have curtained windows, and was keying on the beds.
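The curtains example can be reproduced in miniature. In this sketch (made-up features, not the Verge’s actual data), “has a bed” and “has curtains” always co-occur in training, so the model leans on the bed and is wrong the moment the correlation breaks:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Training rooms: curtains and beds always co-occur (bedrooms vs. offices).
# Columns: [has_bed, has_window]
X_train = np.array([[1, 1]] * 50 + [[0, 1]] * 50)
y_train = np.array([1] * 50 + [0] * 50)  # 1 = has curtains

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

# The model predicts "curtains" for any room with a bed, because the bed is
# the only feature that varied; it never learned anything about curtains.
print(model.predict([[1, 1]]))  # a hotel room with a bed and bare windows
```

Without opening the box, there is no way to see that the model has latched onto the wrong feature; only out-of-distribution failures reveal it.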

AI is not about the machine
The human is critical in the building and running of AIs. And, for me, AIs should be built to help me be smarter, make better decisions. Some of the hidden truths listed above become less concerning when we realize we should, for now, stick to making AI as smart as a puppy, rather than imbue them with supposed powers of cognition beyond the human creators. AIn’t gonna happen any time soon. And will only annoy the humans.

Image from glasseyes view

Talk to the hand – and the computer, and the home device, and the…

5756932852_de6e6ebf5b_zAt the core of what is going to drive this new wave of adoption of AI will be the “conversational interface.”* We spoke a bit about bots, and how the more sophisticated ones are attempting to be conversationally savvy. But the real action around conversational interfaces is best shown by Google’s recent hardware moves.

Talk to the Oracle
I’ve always thought of Google as the Oracle, in the antiquity sense of “a person or agency considered to provide wise and insightful counsel or prophetic predictions or precognition of the future, inspired by the gods.” Ask and ye shall learn (don’t tell me you haven’t asked Google random questions and wondered what you would find out).

AI is driven by knowledge and learning, which Google, as the Oracle, has been amassing for years. Google has made it clear that their new hardware will be the gateway to that Oracle, that AI. And by making that hardware, they can define the experience of conversing with that AI.

Google getting into hardware is a wakeup call for device and software manufacturers who have been dabbling with things that could be driven by AI. There was an expectation that Google would eventually start building its own hardware, going beyond just Android licensing, though even Walt Mossberg admits he missed AI as the motivation behind Google’s hardware interest.

As an aside, wondering from my perspective as a software person at a hardware manufacturer: is the hardware simply Google’s way to set a reference for how it sees the future of AI, of the Google Oracle of knowledge, or will Google actually build out a whole range of products? How far will Google go with hardware?

Siri-ously great conversation
The other dabblers in AI have been building capabilities into their voice-driven interfaces in their devices – Apple with Siri, Microsoft with Cortana, and others. But these guys do not necessarily have the knowledge and learning that Google has amassed.

Also, Apple squandered its AI lead with Siri. It seems that Apple has been focused on Siri as a UI tool rather than as an extension of users wanting to do things, find things, know things. What Google’s move makes clear is that Siri’s future is all about the capability of the AI behind it.

I’m not bad. I’m only drawn that way.
I am perhaps being unfair to Apple. Keep in mind the origins and soul of Google and Apple. Google is the ultimate bot, amassing information and making it available to the world. It does not really care about the humans, except insomuch as it can serve and answer the humans’ questions.

Apple, on the other hand, is about crafting the amazing experience I have with my photos, my music, and, increasingly, my people (though I don’t see Apple getting social communication any more ‘right’ than Google can). Yet, and this is relevant for building AI, Apple is not about communicating with data, information, knowledge.

So I ask: Google is great at knowledge, but can it nail the hardware experience? Apple is great at the hardware experience, but can it nail the AI experience?

Talk to the morsels
Amazon and Facebook also want to get into the hardware and AI game, but how does that fit their origins, that narrative they have written for themselves? Facebook does have a lot of knowledge, but of people and their actions and the social tokens they exchange, not the hardcore information that Google has. Amazon also has amassed information, but of what folks buy and consume; there is no insight into communication between people.

Yes, each of these behemoths has a piece of the puzzle. Of course, having been outside them my whole life, I have always had the philosophy of tying the morsels together. That also presents another path, though I’m not sure how we’ll traverse the various flavors of AI and repositories of knowledge and actions.

Image from Mary McCoy. Hat-tip to Stephanie Rieger for the conversational interface link.

*OK. I’ve been burning to say this on conversational interfaces. Speech doesn’t work in many of our daily places. We’ll still need to keep on writing our questions to the Oracle, conversationally, of course. That’d be one heck of a command line. Though Google is showing much of this powerful command line in their simple but everlasting search box.


Bots all the way down, and a puppy reference

Bots have been around for a long time. Anyone who knows the history of AI remembers ELIZA, a conversation-mimicking AI that made you feel you were talking to a doctor or, more like, an amateur psychologist. We should not be surprised that, since then, with every new interactive platform there has been a proliferation of bots.
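For readers who never met ELIZA, the trick was little more than pattern matching and pronoun reflection. A toy reconstruction (a few rules of my own, not Weizenbaum’s original script):

```python
import re

# ELIZA-style rules: match a pattern, reflect the captured text back.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(text):
    """Swap first-person words for second-person ones."""
    return " ".join(REFLECTIONS.get(w, w) for w in text.lower().split())

def eliza(utterance):
    for pattern, template in RULES:
        m = pattern.search(utterance)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please, go on."  # stall when nothing matches

print(eliza("I feel ignored by my doctor"))
# → "Why do you feel ignored by your doctor?"
```

A handful of regexes and a fallback line, and people in the 1960s poured their hearts out to it.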

A botsplosion
In the past year, there’s been a renewed interest in bots on various messaging platforms. These have gone beyond the automated accounts that tweet Tower Bridge openings or the Shipping Forecast, or the arrival of dictators at Geneva Airport. Infused with more understanding of language and a dash of AI, these bots can now bait bigoted extremists or tweet negative Trump quotes – with the source.

Science Friday did a great segment on AI bots and talked to some creators of them. While some of these bots are not, by definition, utilitarian, they are quite imaginative and creative, able to spark wonder and make you think, despite how they come out.

Getting beyond amusement, WeChat (no surprise) and Baidu are taking it up a notch. On WeChat there are bots that can do image recognition or mimic a voice-recognition assistant (I’ll get to voice-driven AI agents in a later post). And Baidu, to circle back to ELIZA, has created a docBot to help start the process (triage?) for folks looking for a doc or medical info.

Getting as smart as a puppy
Things are getting interesting. Duolingo is building chat bots to help folks learn a language – basically tutorBots users can message to practice with. The idea is that the bot is an “eternally patient, nonjudgmental, on-demand instructor.” Though, what I hear about some “patient and nonjudgmental” AI assistants (*cough* Amy) is that they can be extremely annoying.

As these bots get more AI power, I’ll be on the lookout for those that try too much. So far, my experience with machine learning has been that AI systems are usually not smarter than their creator.* But that’s fine. Long ago, a bunch of us concluded that getting too smart could be an annoyance, hence we suggested that ‘smart’ systems should Be as Smart as a Puppy.

For me, the best systems have been the ones that have augmented my intelligence – by being able to sift through large data sets, make broad connections, and present insights in a way to inform me (or the doc, or the data scientist) – rather than trying to supplant my intelligence.

Rather than try to build a bot that thinks it’s as capable as a human, make me a bot that makes the humans in the mix work better. Don’t try to outthink me.

Are you using bots in your daily routine, or are they still a curious creature evolving in curious directions?

Image of my BASAAP drawing that Matt Jones was kind enough to immortalize back in 2007.

*Oh, and if you think that kids end up smarter than their creators, you’re right. But I don’t think that any of these bots understand what makes kids be smarter than their parents. That, I think, is at the core of human intelligence that will still take some time to sort out.

Hey, John McCarthy, AI has finally gone mainstream

You know the feeling when someone mentions something and then you see it everywhere? Well, that’s what happened to me with AI. I wasn’t giving it any attention until someone pointed out a few weeks back that it was a big up-and-coming topic (to be fair, they had pointed it out to me as a big up-and-coming topic already two-plus years ago). OK, machine learning was a big part of what I was selling these past two years, and I was at IBM when Watson hit the stage, so it’s not like I was totally clueless. But no sooner did I start doing some research than a bunch of big announcements (like the one above) happened. So excuse me if I sound a bit out of touch at the start. I’m playing catch-up to smarties like you.

Say it is so, John
John McCarthy was one of the founders of AI. Back in the summer of 1956 he organized the Dartmouth Conference that kicked off AI as a field.

By the time I started reading about AI back in the ’80s, the field had come a long way, but I wouldn’t say it was mainstream. Nonetheless, AI has been simmering in the background, and the age of Big Data seems to have brought AI to the fore and ushered in the Era of Mainstream AI.

AI now
In the past few weeks, there have been many announcements around AI, and, with the Partnership on AI (logo above, announcement link below), large corporations are putting money where their mouths are. What’s more, these corporations are also releasing useful products that can truly claim a foundation in AI.

A most telling comment for me has been Google pointing out that for many years products have been pushing to be “mobile-first.” Now – and Google CEO Sundar Pichai has been saying this for a few months – “We will move from mobile-first to an AI-first world.”

Why is the message louder now? A slew of AI-based products have been released in the past few months from Apple, Google, Microsoft, Amazon, and Facebook. The time for talk is over – real, AI-based, conversational agents, backed by troves of data and responsive software and hardware and networks are here. And, delightfully, competition will accelerate the usefulness of these products.

Story-telling
OK, I might be late to the party, but I have noticed a growth in the number of browser tabs I have open to AI topics, products, and people; I can no longer sit quietly as these exciting developments happen.

But I am not interested in only understanding AI from the perspective of daily news. I want to understand AI in action, the demos and stories of the uses of AI in all forms. I want to understand the interaction of people and AI, now and in the past, the movers and shakers, and those affected, whether good or bad. I want to understand the culture of AI, how it is portrayed in movies, books, and popular culture. I want to understand the science of AI, no matter how intelligible. And I want to understand how to use AI-driven tools in my exploration of AI (a dog-food kinda thing, let’s see how that goes).

The AI story is right in front of me and, for some strange reason, I now feel compelled to share this story from this perspective.

Let’s see. For sure I don’t need another compulsion I just have to write about. Expect a barrage of posts as I clear out my tabs. 🙂

September 28, 2016, NEW YORK — Amazon, DeepMind/Google, Facebook, IBM, and Microsoft today announced that they will create a non-profit organization that will work to advance public understanding of artificial intelligence technologies (AI) and formulate best practices on the challenges and opportunities within the field.

Source: Industry Leaders Establish Partnership on AI Best Practices | Partnership on Artificial Intelligence to Benefit People and Society

Thoughts on Aetna’s Apple Watch move

Aetna announced it was going to subsidize the Apple Watch for select large employers and individual customers this open enrollment season. Aetna will also provide the Apple Watch to nearly 50,000 of its employees through its wellness reimbursement program.

This is big.

For this to succeed and stick, Aetna will need to be able to measure the impact of these devices on its business and, of course, on the health of people. Now that these devices are starting to get attention, we’re starting to truly discover whether what we believe is true or based on conjecture (the 10,000-steps myth, for example). I am not convinced there’s enough data to show that devices alone are the answer.

I feel these devices are only useful in the context of full engagement with the patient and person – so far these devices have been seen, incorrectly, as point solutions (did you see Microsoft exited the fitness-band hardware biz?). My thinking ties into a holistic digital therapy: these devices work best as part of a care plan, with coaches, data, insight, and understanding. It’ll be interesting to see how Aetna melds these devices with its apps.

[OK, and I haven’t even begun to scratch the surface on biz models and whatever engagement model Aetna or their customer organizations might do to engage with the members. Lots of room for innovation there.]

Aetna’s iOS-exclusive health apps will aim to simplify the healthcare process through a number of features, including:
Care management and wellness, to help guide consumers through health events like a new diagnosis or prescription medication with user-driven support from nurses and people with similar conditions.
Medication adherence, to help consumers remember to take their medications, easily order refills and connect with their doctor if they need a different treatment through their Apple Watch or iPhone.
Integration with Apple Wallet, allowing consumers to check their deductible and pay a bill.
Personalized health plan on-boarding, information, messaging and decision support to help Aetna members understand and make the most of their benefits.

Source: News Releases – Investor Info | Aetna [Hat-tip to Rock Health for the link]

What’s the healthcare equivalent of reach, throw, row, go?

The other day we were talking about my wife’s mobile veterinary practice, and as I started mapping what she does to human healthcare, reach, throw, row, go popped into my head.

My wife used to be a pool lifeguard. She told me that if something happened in the water, the level of engagement was reach (can you use a pole or arm to grab the swimmer?), throw (are they close enough for you to throw a lifesaver?), row (can you take a boat or board to the swimmer?), go (if all else fails, go in after the swimmer).

I’ve been thinking quite a bit about patient engagement lately. I truly believe that the way out of the mess we have in healthcare is through deeper engagement with the healthcare system. I call it high-touch healthcare. But I don’t mean bigger hospitals and more doctors. I’ve always taken a broader, multi-channel and longitudinal view of patient engagement.

What do I mean?

The thought I’ve been developing is that there is a gradient of involvement in the healthcare system, from independent (for example, looking up information) to complete (for example, surgery). And there are layers to the involvement, taking in the patient, their circle of support, clinicians, clinics and hospitals, visiting caregivers, and, for me, data.

And that’s where the reach, throw, row, go comes in. Each patient is at some level of need (from none to complete dependence) and we need to decide if we need to reach (self-serve websites, mobile devices), throw (visiting caregivers, training for family members), row (clinics and urgent care), go (hospitals, hospices).
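The gradient can even be written down as a toy routing rule (the need levels and the channels behind each rung are mine, purely illustrative):

```python
# Map a patient's level of need (0 = independent, 3 = fully dependent)
# onto the lifeguard ladder: reach, throw, row, go.
ENGAGEMENT = {
    0: ("reach", "self-serve websites, mobile devices"),
    1: ("throw", "visiting caregivers, training for family members"),
    2: ("row", "clinics and urgent care"),
    3: ("go", "hospitals, hospices"),
}

def engage(need_level):
    """Clamp the need level to the ladder and return the matching rung."""
    level = min(max(need_level, 0), 3)
    return ENGAGEMENT[level]

print(engage(1))  # a patient needing some support at home
```

Of course, real patients move up and down this ladder over time, which is exactly why the engagement has to be longitudinal, not a one-off decision.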

I think this thinking comes from my background in marketing and in product development where you can’t just do one thing, but need to think of the user journey, all the touch-points, and provide the right engagement for the right issue.

To me this sounds obvious, but I am never sure if healthcare systems really get it. What do you think? Do they? Do you have examples?

Image from Vasse Nicholas, Antoine