Every tech investor looks forward to the classic “ground-floor opportunity,” the chance to invest in a young company, inches from going public, developing a new, compelling technology that has every chance of wide adoption.

A company like that can bring gains that just keep getting bigger and bigger for years. Take Netflix Inc. (Nasdaq: NFLX), for instance, the undisputed king of streaming video.

If you’d bought shares of the company back in 2002, when streaming was still on the drawing board and the firm was a DVD-only mail rental business, you would have seen gains as high as 11,388%, not accounting for splits.

Today I want to tell you about another ground-floor opportunity waiting in the wings.

I’ve been following it closely, and I’ve been in close touch with one of the chief architects of this exciting new technology.

I couldn’t be more fired up about the profit potential here.

Let me be clear: you can’t invest in it right now, but soon you’ll be able to grab shares at a real bargain. I’m going to let you know right away when that happens.

But in the meantime, here’s why I’m so excited about “Deep Learning”…

This Is Disruptive, Ubiquitous, and Hugely Profitable

We can get on the Internet anytime and see that Facebook “knows” who our friends are and what topics we’re interested in. And on Amazon, the service always seems to “know” exactly what we want – and, more importantly, how much we’re willing to pay for it.

What’s more, we can talk to our Android, iOS, and Windows mobile phones in plain English, and Google Now, Siri, and Cortana always understand what we’re saying.

It’s because the companies behind these products are all using a new technology that is becoming more common and powerful by the day – one many experts believe will be the most “disruptive” technology of the next decade and beyond.

It’s a form of artificial intelligence (AI) called “Deep Learning.” And it’s a technology I too am incredibly excited about.

Not just because this technology has the ability to dramatically improve scores of businesses, in industries as diverse as medical research and drug development… healthcare… software… military training… driverless cars… industrial design… and more.

But because this new trend is presenting us with some huge money-making opportunities.

Over the past six months, I’ve been spending a lot of time researching Deep Learning so I can bring you the best opportunities in this developing field, including talking to a visionary R&D executive at a growing startup called OrCam in Israel. I’ll tell you more about him in a moment.

But first I want to show you why I’m so fired up over this.

The ABCs of “Deep Learning”

Deep Learning, a branch of the broader field of “machine learning,” is emerging as the dominant technique in artificial intelligence. It makes use of specialized programs – and even dedicated electronic circuits – called “neural networks.”

These neural networks mimic the behavior of the human brain by “memorizing” images, ideas, and sounds (such as voices) right in their internal connections.

What makes this so remarkable is that this method, in development since the 1970s, has finally leaped ahead of all other machine learning techniques – and now solves tasks that were not considered solvable just five years ago.

Even more amazing, the computer continues to learn as it goes along… just like the human brain.

The world got to see Deep Learning up close in March 2016, when a computer program called AlphaGo, developed at Google’s DeepMind division, defeated Lee Sedol, one of the world’s top professional players, at the ancient game of Go.

That feat was widely considered impossible, because with so many move combinations to consider, previous “brute force” programming approaches had all failed. But the Deep Learning approach succeeded.

Deep Learning is also showing up in the pharmaceutical industry. In 2012, a team led by George Dahl, a Ph.D. student, used Deep Learning neural networks to win the “Merck Molecular Activity Challenge,” a competition based around predicting which targets a therapeutic compound will be active against and which ones the compound will ignore.

Deep Learning is also used in robotics, automotive safety, and text and speech recognition, to name just a few fields.

Here’s why this is a truly amazing evolution in technology…

One way to have a computer handle tasks is to build a large number of specific “rules” into your program. Your rules tell the computer what kind of “output” to produce based on a given “input.” An input might be a number, for example.

This brute-force approach is not well suited if you want a computer to, say, recognize handwriting, or faces, or human voices. The reason is that visual or sonic “inputs” are not as clear-cut as numbers are.

So to do those tasks properly, a computer has to take raw information – such as sound or light waves – find subtle but consistent patterns… then make predictions and perform operations based on those predictions.

The computer repeats this input-output process, getting closer and closer to the optimal output.

As this learning process takes place, the network itself changes – the connections that aren’t “helping” get weaker, while the connections that “help” get stronger.

That’s not a figure of speech. The numerical “weight” given to each connection between the network’s units – called “neurons” – really does shift and rebalance depending on whether that connection is helping solve the task at hand.

The practical, real-time result of those actions feeding back into the neural network is that the next time the computer comes across a similar situation, it knows how to react without having to be reprogrammed or “told.” It doesn’t have to repeat the steps.
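To make that weight-shifting idea concrete, here is a toy sketch in Python – not OrCam’s or Google’s actual code, just a single artificial “neuron” learning the logical OR function. All names here are my own illustration; the point is that only the numeric weights change as the network learns, never the program itself.

```python
import math

def sigmoid(z):
    """Squash any number into the range (0, 1) - the neuron's 'output'."""
    return 1.0 / (1.0 + math.exp(-z))

def train(samples, epochs=5000, lr=0.5):
    """Nudge the connection weights toward better outputs, one sample at a time."""
    w1, w2, bias = 0.0, 0.0, 0.0          # start with "neutral" connections
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = sigmoid(w1 * x1 + w2 * x2 + bias)
            err = target - out            # how far off the prediction is
            # Connections that reduce the error are strengthened; ones that
            # hurt are weakened. No code is rewritten - only these numbers move.
            w1 += lr * err * x1
            w2 += lr * err * x2
            bias += lr * err
    return w1, w2, bias

# The logical OR function: output 1 if either input is 1.
samples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w1, w2, bias = train(samples)

def predict(x1, x2):
    return round(sigmoid(w1 * x1 + w2 * x2 + bias))

for (x1, x2), target in samples:
    print((x1, x2), "->", predict(x1, x2))
```

Real deep networks stack millions of these neurons in layers, but the principle is the same: feedback from each mistake rebalances the weights, so the system improves without anyone reprogramming it.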

This is clearly revolutionary, and the “Who’s Who” list of companies using Deep Learning reads like a list of some of our best recommendations: Facebook Inc. (Nasdaq: FB), Amazon.com Inc. (Nasdaq: AMZN), Alphabet Inc. (Nasdaq: GOOG), Microsoft Corp. (Nasdaq: MSFT), and Baidu Inc. (Nasdaq ADR: BIDU).

Here’s the Company I’m Watching Closely

Let me begin now with the Israeli company that recently caught my attention: OrCam.

As I said, this company is not yet publicly traded, but its branch of Deep Learning – called Artificial Vision – is nothing short of mind-blowing. In fact, I believe it could have a profound impact on society and the markets as a whole.

Artificial Vision is exactly what its name suggests: a way for a machine to “see” on behalf of a human user. A tiny wearable camera looks out at the world and tells the wearer, in plain English, what is “out there.”

I’m not exaggerating. OrCam’s flagship product, MyEye, not only recognizes what it sees, it also verbalizes it to the wearer. It functions as a second pair of eyes to provide new hope and independence to people who – because of glaucoma, macular degeneration, retinitis, or any other cause – have suffered a debilitating loss of vision.

Ziv Aviram and Prof. Amnon Shashua, the two founders of OrCam, are “visionaries” in more ways than one.

Before founding OrCam in 2010, they co-founded Mobileye NV (NYSE: MBLY), a company that develops systems that prevent collisions in automobiles.

Aviram and Shashua both serve in parallel capacities at OrCam and Mobileye.

Dr. Yonatan Wexler is the head of Research & Development at OrCam.

For his part, he’s spent years researching Artificial Vision at the University of Maryland, Oxford University, the Weizmann Institute of Science, and Microsoft.

He is a recognized expert on efficient ways to extract useful information from images and video.

In 2003, in recognition of his stellar scholarship and contribution to his field, Dr. Wexler received the prestigious David Marr Prize, which is given out every two years by the International Conference on Computer Vision and is considered a top honor for computer vision researchers.

Wexler had a big role in creating MyEye, OrCam’s flagship product.

The way Wexler explains it, a tiny camera clips to the stem of a wearer’s eyeglasses. A pocket-sized processor interprets visual information from the camera and “tells” the wearer what the camera is looking at.

When I asked Dr. Wexler how he got involved in Artificial Vision and Deep Learning, he told me about an early job he held for a company that needed to scan airline tickets.

To his surprise, the computers couldn’t tell what was on the ticket. “I pursued a career in understanding how to make a computer see like a person,” he said.

Dr. Wexler has been deeply involved with MyEye every step of the way. “I was there from the beginning,” he said.

Dr. Wexler gave me a glimpse of the thinking that went into the MyEye…

“What information do people really need? What’s important? This is a very personal question,” he said. “If you have a device that just describes what it sees, it’ll never stop talking. So we created a device that is very intuitive. We tried to be very selective. What’s the most natural way to indicate what you’re interested in? You point at it.”

This is a huge improvement over other Artificial Vision systems that require the user to first snap a photo of something he or she needs to “see.”

It also saves a ton of power, which is crucial because a wearable Artificial Vision device has to be able to go all day on a single battery charge.

“The battery has to last all day. For [visually impaired] people to have the courage to leave home, the device has to be with you all day,” he said. “It can’t connect to the Internet, because you could lose the connection. It would take too long to have to ask Google what you’re pointing to. The device has to be localized.

“We want to give you the information that’s exactly what you need – useful and actionable – reliably using a very little amount of energy in a very short time,” he continued. “The added value is huge. That’s where we want OrCam to be.”

MyEye uses a specialized earpiece that provides clear audio to the wearer by way of a mini speaker near the ear, without blocking the wearer’s normal hearing – which is very important to the visually impaired.

Make no mistake. There is a massive need for this technology. More than 25 million people struggle with vision impairment in the United States alone.

Worldwide that number climbs well past 320 million.

Wexler spoke with passion and pride about what a difference MyEye makes for vision-impaired children, who too often find themselves isolated by their impairment.

Now, with MyEye, “They’ll excel and do well because they can read any printed text as well as recognize people in social situations.”

Finally, I asked Dr. Wexler how large a market share he envisioned OrCam capturing.

“A $750 billion market in five years,” he said. “We’re going to make the device recognize all languages and speak all languages. We’re going to address the full worldwide market.”

I think Dr. Wexler is on the money with that forecast.

OrCam is not alone in working to bring this disruptive tech into the mainstream. Many publicly traded firms are also emerging leaders in this field. I’ll be watching each of them closely, and as soon as I spot a great opportunity, I’ll talk about it here.

— Michael A. Robinson


Source: Money Morning