Spyder Typing Delay

Recently, the Spyder IDE started exhibiting a serious typing delay: after a keystroke, it takes almost half a second for the character to appear on the screen.

This seems to have been triggered by installing macOS Big Sur.

Spyder slow on Mac

After intensive googling, we stumbled upon this GitHub page detailing several possible solutions.

The one that worked for us was installing PyQt5 and PyQtWebEngine. Type the following commands in the terminal:

pip install PyQt5
pip install PyQtWebEngine

The above solution should be very safe since it is just installing Python packages.

Spyder lagging

The above solution solved the troubling issue of Spyder lagging. Since Spyder uses Qt for its GUI, it is critical to keep the various Qt-related packages updated and at the correct versions. This may be why installing PyQt5 and PyQtWebEngine removes the lag in Spyder.
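Since the fix revolves around getting the right Qt package versions, it can help to check what is actually installed. The following stdlib-only sketch (the names used are the PyPI distribution names) prints the installed version of each relevant package:

```python
# Check installed versions of the Qt-related packages (and Spyder itself).
# Uses only the standard library; names are PyPI distribution names.
from importlib import metadata

versions = {}
for pkg in ("PyQt5", "PyQtWebEngine", "spyder"):
    try:
        versions[pkg] = metadata.version(pkg)
    except metadata.PackageNotFoundError:
        versions[pkg] = "not installed"

print(versions)
```

If a package shows up as "not installed" or clearly outdated, the pip commands above are the place to start.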

Spyder very slow

There seem to be many reasons, other than the above, why Spyder can be very slow. One useful tip: never update to the latest version of Spyder, macOS, or Anaconda immediately after release, unless it is absolutely necessary. Most bugs appear in the newest releases and can cause multiple problems, including making Spyder very slow. By updating at a later date, most of the bugs will have been fixed, which is a much safer approach.

Previously, updating to the latest Spyder 4.1.5 also caused several problems, including lag, slowness, or even Spyder simply crashing.


Python matplotlib Plot Multiple Figures in Separate Windows

Matplotlib is a popular plotting package for Python. There are some things to note when plotting multiple figures in separate windows.

A wrong approach may lead to matplotlib showing a black screen, or plotting two figures superimposed on each other, neither of which is usually the desired outcome.

Sample Matplotlib Code for Plotting Multiple Figures in Separate Windows

import matplotlib.pyplot as plt

plt.figure()
plt.hist([1, 2, 2, 3, 3, 3])  # plotting function here, e.g. plt.hist()
plt.savefig('filename1')

plt.figure()
plt.hist([4, 5, 5, 6, 6, 6])  # plotting function here, e.g. plt.hist()
plt.savefig('filename2')

plt.show()

One way to do this is to call plt.figure() for each figure that you want to plot in a separate window. Optionally, you can use plt.savefig() if you wish to save each figure to the working directory.

At the end, use the plt.show() command. The plt.show() command should only be used once per script.

Updating Spyder takes forever

Spyder is a Python IDE that is bundled together with the Anaconda distribution.

There are some problems that are commonly faced when updating Spyder. One way to update Spyder is to open Anaconda Navigator and click the settings button, which has an option to update Spyder. The problem is that the process can take a very long time, stuck showing “loading packages of /User/…/opt/anaconda3”.

Updating Spyder is constricted by …

Another way to update Spyder is to type “conda update spyder” in the terminal. A problem that can crop up is the error message: “updating spyder is constricted by …”

Anaconda stuck updating Spyder [Solved]

In my case, it turned out that the version of Anaconda Navigator was outdated. Hence, I first updated Anaconda Navigator to the latest version.

Then, instead of clicking “Update application” (which still didn’t quite work), I clicked “Install specific version” and chose the latest version of Spyder (Spyder 4.1.5 in this case).

Then, the updating of Spyder in Anaconda Navigator worked perfectly!

How to update Spyder using Anaconda-Navigator: Click “Install specific version” instead of “Update application”.

Best Udemy Data Science / Machine Learning / AI Courses

During the current lockdown period, it is a good idea to pick up a data science skill. Most occupations can benefit from such a skill, including engineers, accountants, teachers, and even students. Who knows, one day you may find deep learning useful!

On this page we introduce various Udemy courses (which come with certificates that you can put on your LinkedIn profile) that are the best in their class, whether for data science, machine learning (including deep learning), or AI (artificial intelligence).

Best Udemy Python Course

Currently, Python is the most popular language for data science and machine learning.  R is the second most popular language, and is especially good for statistics.

Hence, this Machine Learning A-Z™: Hands-On Python & R In Data Science Course is perfect as it introduces two of the most popular programming languages in one course! You will learn Machine Learning (ML) in the process as well, which is a great bonus.

If you only want to focus on Python, then check out 2020 Complete Python Bootcamp: From Zero to Hero in Python. It is designed to bring you from zero knowledge to a respectable expert in Python if you complete the course and exercises.

Best Udemy courses for data science

In the Python for Data Science and Machine Learning Bootcamp course, students can learn how to use NumPy, Pandas, Seaborn, Matplotlib, Plotly, Scikit-Learn, TensorFlow, and more! The aforementioned packages are all classic and popular in data science, data analysis and data visualization.

The Data Science Course 2020: Complete Data Science Bootcamp is another bootcamp style course that gives you complete Data Science training in: Mathematics, Statistics, Python, Advanced Statistics in Python, Machine & Deep Learning. It is especially suitable for beginners, as well as intermediate students who need to brush up on their skills.

Best Udemy course for Deep Learning

Deep learning (DL) is a subbranch of machine learning that is recently very hot and popular due to its superior accuracy in tasks such as image classification and NLP (natural language processing).

The Deep Learning A-Z™: Hands-On Artificial Neural Networks course lets students learn how to create deep learning algorithms in Python from two machine learning & data science experts. Templates are included, which is very important: you can use and modify the templates to suit your individual task at hand.

Complete Guide to TensorFlow for Deep Learning with Python is a course that teaches you how to use Google’s deep learning framework, TensorFlow, with Python, and to solve problems with cutting-edge techniques. TensorFlow is one of the more popular deep learning frameworks, and is slightly ahead in popularity of its closest rival, PyTorch.

Udemy course benefits

The first benefit of Udemy courses is that you get to learn content from top trainers. Often, these courses are superior to free YouTube content, and may even be better than the courses at your school.

The second benefit is that Udemy provides a certificate upon completion that you can list in your CV, as well as put in your LinkedIn profile. This is especially important if you are trying to transition into a data scientist job from another field, like engineering or physical sciences.

What is your favorite Udemy course for AI/ML/DL? Feel free to comment below!

Python (Anaconda) does not work with MacOS Catalina!

This is just to highlight that the Anaconda Python distribution does not work with the latest macOS Catalina. I only realized this when trying to open Anaconda Navigator after installing Catalina.

The only (good) solution seems to be reinstalling Anaconda.

Source: https://www.anaconda.com/how-to-restore-anaconda-after-macos-catalina-update/

MacOS Catalina was released on October 7, 2019, and has been causing quite a stir for Anaconda users. Apple has decided that Anaconda’s default install location in the root folder is not allowed, and moves that folder into a folder on your desktop called “Relocated Items”, in the Security folder. If you used the .pkg installer for Anaconda, this probably broke your Anaconda installation. Many users discuss the breakage at https://github.com/ContinuumIO/anaconda-issues/issues/10998.

Best Pattern Recognition and Machine Learning Book (Bishop)


Pattern Recognition and Machine Learning (Information Science and Statistics)

The above book by Christopher M. Bishop is widely regarded as one of the most comprehensive books on Machine Learning. At over 700 pages, it has coverage of most machine learning and pattern recognition topics.

It is considered very rigorous for a machine learning (data science) book, yet has a lighter touch than a pure mathematics or theoretical computer science text. Hence, it is perfect as a reference book or even a textbook for students self-learning the subject from the ground up (i.e. students who want to understand the algorithms instead of just blindly applying them).

A brief overview of the contents covered (taken from the contents page of the book):

  1. Introduction

  2. Probability Distributions

  3. Linear Models for Regression

  4. Linear Models for Classification

  5. Neural Networks

  6. Kernel Methods

  7. Sparse Kernel Machines

  8. Graphical Models

  9. Mixture Models and EM

  10. Approximate Inference

  11. Sampling Methods

  12. Continuous Latent Variables

  13. Sequential Data

  14. Combining Models

2 types of chi-squared test

Most people have heard of the chi-squared test, but not many know that there are (at least) two types of chi-squared tests.

The two most common chi-squared tests are:

  • 1-way classification: Goodness-of-fit test
  • 2-way classification: Contingency test

The goodness-of-fit chi-squared test is used to test proportions, or to be precise, to test whether an observed distribution fits an expected distribution.

The contingency test (the more classical type of chi-squared test) tests whether two random variables are independent or related.
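As a rough Python illustration of the two tests (a minimal sketch with made-up counts), scipy.stats provides both:

```python
from scipy import stats

# 1-way (goodness-of-fit): does an observed distribution fit an expected one?
observed = [20, 12, 16, 16, 16, 16]
expected = [16, 16, 16, 16, 16, 16]  # note: totals must match
stat, p = stats.chisquare(observed, f_exp=expected)
print(stat, p)  # statistic = sum((obs - exp)^2 / exp) = 2.0 here

# 2-way (contingency): are two categorical variables independent?
table = [[10, 20],
         [30, 40]]
chi2, p2, dof, exp = stats.chi2_contingency(table)
print(chi2, p2, dof)  # dof = (rows - 1) * (cols - 1) = 1
```

Note that chi2_contingency computes the expected counts from the table's margins for you, while chisquare expects you to supply them (or defaults to uniform).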

The best website I found regarding how to practically code (in R) for the two chi-squared tests is: https://web.stanford.edu/class/psych252/cheatsheets/chisquare.html

I created a PDF copy of the above site, in case it becomes unavailable in the future:

Chi-squared Stanford PDF

Best Videos on each type of Chi-squared test

Goodness of fit Chi-squared test video by Khan Academy:

Contingency table chi-square test:

Popular packages in R and Python for Data Science

Most of the time, users of R and Python rely on packages and libraries as far as possible, in order to avoid “reinventing the wheel”. Established packages are also often superior and preferred, due to a lower chance of errors and bugs.

We list down the most popular and useful packages in R and Python for data science, statistics, and machine learning.

Packages in R

  • arules
  • arulesViz
  • car
  • caret
  • cluster
  • corrplot
  • ggplot2
  • lattice
  • perturb
  • psych
  • readr
  • recommenderlab
  • reshape2
  • ROCR
  • rpart
  • rpart.plot
  • tidyverse

Python Packages

  • factor_analyzer
  • math
  • matplotlib
  • numpy
  • pandas
  • scipy
  • seaborn
  • sklearn
  • statsmodels

pip install keeps installing old/outdated packages

This article is suitable for solving the following few problems:

  1. module ‘sklearn.tree’ has no attribute ‘plot_tree’
  2. pip install (on Spyder, Anaconda Prompt, etc.) does not install the latest package.

The leading reason for “module ‘sklearn.tree’ has no attribute ‘plot_tree’” is that the sklearn package is outdated.

Sometimes “pip install scikit-learn” simply does not update the sklearn package to the latest version. Type “print(sklearn.__version__)” (after “import sklearn”) to get the version of sklearn on your machine; it should be at least 0.21.
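To check this programmatically, here is a small stdlib-only sketch (the helper name is my own) that compares a dotted version string against a minimum; it assumes plain numeric version components:

```python
def version_at_least(ver: str, minimum: tuple) -> bool:
    """Return True if dotted version string `ver` is at least `minimum`.

    Assumes plain numeric components (e.g. "0.24.2"), not pre-releases.
    """
    parts = tuple(int(p) for p in ver.split(".")[:len(minimum)])
    return parts >= minimum

# plot_tree was added in scikit-learn 0.21
print(version_at_least("0.24.2", (0, 21)))  # True: new enough
print(version_at_least("0.20.3", (0, 21)))  # False: too old for plot_tree
```

You would call it as version_at_least(sklearn.__version__, (0, 21)) on your own machine.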

The solution is to force pip to install the latest package:

pip install --no-cache-dir --upgrade <package>

In this case, we would replace <package> with “scikit-learn”.


Sometimes, pip install does not work in the Spyder IPython console; it displays an error to the effect that you should install “outside the IPython console”. This is not normal (i.e. it should not happen), but as a quick fix you can try “pip install” in Anaconda Prompt instead. It is likely that something went wrong during the installation of Anaconda/Python, and the long-term solution is to reinstall Anaconda.

How to save sklearn tree plot as file (Vector Graphics)

The Scikit-Learn (sklearn) Python package has a nice function sklearn.tree.plot_tree to plot (decision) trees. The documentation is found here.

However, the default plot just by using the command

tree.plot_tree(clf)

could be low resolution if you try to save it from an IDE like Spyder.

The solution is to first import matplotlib.pyplot:

import matplotlib.pyplot as plt

Then, the following code will allow you to save the sklearn tree as .eps (or you could change the format accordingly):

plt.figure()
tree.plot_tree(clf, filled=True)
plt.savefig('tree.eps', format='eps', bbox_inches='tight')

To elaborate, clf is your decision tree classifier, which must be defined and fitted before plotting the tree:

# Example from https://scikit-learn.org/stable/modules/generated/sklearn.tree.plot_tree.html
from sklearn import tree
from sklearn.datasets import load_iris
iris = load_iris()
clf = tree.DecisionTreeClassifier(random_state=0).fit(iris.data, iris.target)

The outcome is a vector graphics (.eps) tree that retains its full resolution when zoomed in. The bbox_inches="tight" argument prevents the image from being truncated; without it, the sklearn tree is sometimes cropped off and incomplete.
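If EPS is inconvenient, a common alternative (not part of the original snippet) is to keep a raster format like PNG but raise the resolution via savefig's dpi argument; sketched here with a stand-in plot instead of a tree:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend, so this also runs headless
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0, 1, 4])  # stand-in for tree.plot_tree(clf, filled=True)
fig.savefig("plot.png", dpi=300, bbox_inches="tight")  # 300 dpi vs. ~100 default
```

A high-dpi PNG is not infinitely zoomable like EPS, but it is usually sharp enough for reports and slides.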

Making big data a little smaller

While this result is nice, it also seems to mean that, theoretically, we have already reached the limit of dimensionality reduction for data compression.

Source: Science Daily

Harvard computer scientist demonstrates 30-year-old theorem still best to reduce data and speed up algorithms

Date:
October 19, 2017
Source:
Harvard John A. Paulson School of Engineering and Applied Sciences
Summary:
Computer scientists have found that the Johnson-Lindenstrauss lemma, a 30-year-old theorem, is the best approach to pre-process large data into a manageably low dimension for algorithmic processing.

When we think about digital information, we often think about size. A daily email newsletter, for example, may be 75 to 100 kilobytes in size. But data also has dimensions, based on the numbers of variables in a piece of data. An email, for example, can be viewed as a high-dimensional vector where there’s one coordinate for each word in the dictionary and the value in that coordinate is the number of times that word is used in the email. So, a 75 Kb email that is 1,000 words long would result in a vector in the millions.

This geometric view on data is useful in some applications, such as learning spam classifiers, but, the more dimensions, the longer it can take for an algorithm to run, and the more memory the algorithm uses.

As data processing got more and more complex in the mid-to-late 1990s, computer scientists turned to pure mathematics to help speed up the algorithmic processing of data. In particular, researchers found a solution in a theorem proved in the 1980s by mathematicians William B. Johnson and Joram Lindenstrauss, working in the area of functional analysis.

Known as the Johnson-Lindenstrauss lemma (JL lemma), the theorem has been used by computer scientists to reduce the dimensionality of data and help speed up all types of algorithms across many different fields, from streaming and search algorithms to fast approximation algorithms for statistics and linear algebra, and even algorithms for computational biology.
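To make the JL lemma concrete, here is a minimal numpy sketch (the dimensions and seed are arbitrary choices of ours): project points through a scaled random Gaussian matrix and check that a pairwise distance is approximately preserved:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, k = 50, 5000, 400          # n points in d dimensions, reduced to k

X = rng.normal(size=(n, d))      # original high-dimensional data
R = rng.normal(size=(d, k)) / np.sqrt(k)   # JL-style random projection
Y = X @ R                        # projected data, now n x k

# Pairwise distances are approximately preserved after projection.
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
print(orig, proj)  # the ratio proj/orig should be close to 1
```

The lemma guarantees that, with high probability, all pairwise distances are distorted by at most a small factor once k is on the order of log(n) divided by the squared tolerance.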

Source:

Harvard John A. Paulson School of Engineering and Applied Sciences. “Making big data a little smaller: Harvard computer scientist demonstrates 30-year-old theorem still best to reduce data and speed up algorithms.” ScienceDaily. ScienceDaily, 19 October 2017. <www.sciencedaily.com/releases/2017/10/171019101026.htm>.