AI is reshaping programming as we know it, creating an intersection of the old and the new. For programmers, this brings new challenges, but ones with a familiar ring to them. Here are four ways that I’ve found to help me stay on top of these demands.
#1 Containers matter—and debugging needn’t be hell
Containers play a key role in AI, providing a unified, simplified platform to collect, organize, and analyze data. But when something goes wrong and we need to debug an application running inside a container, tooling has been lacking. Have you experienced the “hell” of finding that using containers meant giving up on the best analysis tools? Well, that has changed. In general, tools now offer strong support inside containers, as long as we use the latest versions. Intel has extended its profiling tools to help with enterprise applications inside Docker and Mesos containers, applications using Python, and Java services and daemons. For instance, the Intel® VTune™ Amplifier 2019 (the latest version) supports profiling code within containers; see the latest user’s guide for more information.
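To make this concrete, here is a toy CPU-bound script of the kind you might practice profiling on. The file name and workload are mine, not from Intel’s docs; amplxe-cl is VTune Amplifier’s command-line tool and “hotspots” is one of its standard analysis types, but consult the user’s guide for the exact container-related options in your setup.

```python
# workload.py: a deliberately CPU-bound toy script to practice profiling on.
# Illustrative invocation (options vary; consult the VTune Amplifier 2019
# user's guide for the container-specific flags):
#   amplxe-cl -collect hotspots -- python workload.py

def estimate_pi(terms: int) -> float:
    """Estimate pi with the (slow) Leibniz series; a predictable hotspot."""
    total = 0.0
    for k in range(terms):
        total += (-1.0) ** k / (2.0 * k + 1.0)
    return 4.0 * total

if __name__ == "__main__":
    print(estimate_pi(20_000_000))
```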
#2 Python matters—more accelerated Python
The need for speed felt by programmers doing classic machine learning (as well as other numerically intensive applications) in Python can be satisfied by using accelerated Python packages. I’ve written about efforts to accelerate Python in “How Does a 20X Speed-Up in Python Grab You?” The latest 2019 release of accelerated Python adds scikit-learn optimizations for the SVM classification algorithm, a new XGBoost package with a Python interface to the library (Linux only), and support for Python 3.6 and 2.7 (aren’t there enough Pythons?). Best of all, Intel no longer requires a separate download and install to get accelerated Python (it’s still available by itself for free); it simply comes with the product installs for Intel® Parallel Studio XE 2019.
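The appeal of these packages is that ordinary scikit-learn code runs unchanged; the optimized implementations are simply picked up when the accelerated distribution is installed. Here’s a minimal sketch (the synthetic data and parameters are mine) of the kind of SVM training that benefits:

```python
# Plain scikit-learn code: no Intel-specific imports are needed, because the
# accelerated distribution substitutes its optimized SVC under the hood.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# A synthetic binary classification problem, big enough to feel the speed-up.
X, y = make_classification(n_samples=20_000, n_features=40, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = SVC(kernel="rbf")  # SVM classification is among the optimized algorithms
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```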
#3 Data analytics matters—BIG Data, BIG problems
Intel has long had great libraries for performance, but we had to pay for them. Don’t get me wrong, they’re worth the money, but none of us enjoys asking our managers to buy them or opening our own wallets to pay. Not only has Intel layered in amazing support for data analytics, it offers it open source and for free these days. Times are changing!
One thing that has not changed is the attention to detail that Intel performance libraries have always shown in features and performance. Check out the DNN additions to the Intel® Math Kernel Library (Intel® MKL), and the Intel® Data Analytics Acceleration Library (Intel® DAAL). I provide links at the end of this article to the GitHub repositories, but you can also download ready-to-go binaries from the Intel® Performance Libraries website.
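If you want a feel for Intel DAAL from Python, daal4py is its Python interface. Here’s a minimal sketch, assuming daal4py is installed; the toy data is mine, and the training/prediction pattern follows the daal4py documentation:

```python
import numpy as np
import daal4py as d4p

# Toy regression data: y = 3*x0 - 2*x1 plus a little noise.
rng = np.random.RandomState(0)
X = rng.rand(1000, 2)
y = (3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.01 * rng.randn(1000)).reshape(-1, 1)

# Train a linear regression model with DAAL's batch algorithm...
train_result = d4p.linear_regression_training().compute(X, y)

# ...then feed the trained model to the matching prediction algorithm.
X_new = rng.rand(5, 2)
pred = d4p.linear_regression_prediction().compute(X_new, train_result.model)
print(pred.prediction)
```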
#4 You matter—set aside time to learn more AI/ML (it’s NOT rocket science)
Here are my small, big, and medium (just right?) options for investing time in learning from the convenience of your computer. Because you’re investing in yourself, you can’t go wrong!
A. Small (ease in): Dip your toe in gently with a Kaggle competition turned into a “cut your teeth” step-by-step exercise. Try out “Titanic: Machine Learning from Disaster” (see the starter sketch after this list). If you can’t find time to do this, then you really aren’t taking time to invest in yourself.
B. Go big (and fairly academic): Jump in deep with Stanford Professor Andrew Ng. Check out “Machine Learning on Coursera.” This is a full college-course-style class that you do online “at your own pace.” It offers a firm foundation, includes some great programming exercises, and requires more focus to complete than many online courses. With an average rating of 4.9 out of 5.0 stars—and with more than 90,000 reviews—it’s well worth a look.
C. Medium-sized (less academic): three step-by-step courses in bite-sized pieces, courtesy of Intel’s Nervana team.
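For option A, here’s one possible starter for the Titanic exercise, assuming you’ve downloaded train.csv from the competition page; the feature choices and model are mine, just enough to get a first score on the board:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

train = pd.read_csv("train.csv")  # from the Kaggle competition's Data tab

# Minimal feature prep: encode sex as 0/1, fill missing ages with the median.
train["Sex"] = train["Sex"].map({"male": 0, "female": 1})
train["Age"] = train["Age"].fillna(train["Age"].median())

features = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare"]
X, y = train[features], train["Survived"]

model = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())
```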
Thrive in 2019
My wish for you in 2019: Invest in yourself and your tools. Then the intersection of AI with everything we do, including programming, can be not only survivable but fun. Here are some links to tool downloads that I mentioned: