Singular Learning
Classical laws break down near statistical singularities.
Distributed Intelligence
Towards model-parallelism and Hebbian learning.
Type-Theoretic Language
Communication and reasoning between intelligent entities.

2016 Future of AI

2020 Statistical and Machine Learning

Singular Learning

Events

20161111 ‘Deep Probability Flow,’ Brain Lab, SUTD.
20161028 ‘Deeper Learning for Smarter Cities,’ NVIDIA-SUTD Deep Learning Day, SUTD.
20161027 ‘From Deep Learning to Minimum Probability Flow,’ ZJU Data Science and Engineering Research Center, Hangzhou.
20161026 ‘Smarter Cities through Distributed Artificial Intelligence,’ SUTD-ZJU IDEA Workshop, Hangzhou.
20160715 ‘Deep Distributed Intelligence,’ Brain Lab, SUTD.
20160713 ‘Distributed Intelligence,’ WNDS Group, SUTD.
20160713 NRF Workshop on AI, CREATE, Singapore.
20160701 Future of AI, National Library, Singapore.
20160509 ‘The Singularity is Near: When Machines Transcend Data,’ Applied Algebra Seminar, Berkeley.
20160504 Panellist at ICCCRI 2016, Suntec Convention and Exhibition Centre, Singapore.
20160316 ‘Lessons on Statistical Singularities from Deep Learning,’ Yale-NUS Math Seminar.
20160309 6th Singapore Conference on Statistical Sciences, NUS.
20160304 ‘Deep Learning,’ NUS Guest Lecture.
20160226 ‘Big Data and Data Analytics,’ CSD&M Asia, SUTD.
20160203 ‘IoT Analytics,’ SMU Guest Lecture.

We Are Hiring!

I’m looking for mathematically minded postdocs and Ph.D. students who are interested in working on spiking neural networks and machine reasoning. A strong background in statistical learning, symbolic logic, abstract algebra, or computer science is preferred. Please email me for more information.

Enabling Spiking Neuromorphic Computation with On-Board Learning Through Algorithm and Hardware Co-design

Position

Research fellow. 1-2 years.

Job Description

  • Develop a spiking neural network hardware simulation platform using CMOS circuit design techniques and memristor device modeling.
  • Develop novel artificial neuron and synapse models based on different technologies, including CMOS and memristors (a minimal illustrative sketch follows this list).
  • Work with a cross-disciplinary research team on joint algorithm-hardware design to evaluate various spiking-based learning methods and artificial neuron and synapse models, and to investigate the trade-offs among prediction performance, hardware cost, and power consumption.
  • Perform simulation and verification from the algorithm level through the architecture level down to the circuit level.
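
As a rough illustration of the neuron models referenced above, the following Python sketch implements a plain leaky integrate-and-fire neuron. The function name and all parameter values (tau, v_thresh, and so on) are assumptions made for this example only; they do not describe an existing platform or a reference design for the project.

```python
import numpy as np

def lif_simulate(input_current, dt=1e-3, tau=20e-3, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Minimal leaky integrate-and-fire neuron (illustrative sketch).

    Integrates dv/dt = (-(v - v_rest) + I) / tau with forward Euler and
    records a spike whenever the membrane potential crosses v_thresh.
    All constants are assumed values, not tuned to any hardware target.
    """
    v = v_rest
    spikes = np.zeros_like(input_current)
    trace = np.zeros_like(input_current)
    for t, i_in in enumerate(input_current):
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:
            spikes[t] = 1.0
            v = v_reset  # reset membrane potential after a spike
        trace[t] = v
    return spikes, trace

# Usage: a constant drive above threshold yields a regular spike train.
if __name__ == "__main__":
    current = np.full(1000, 1.5)  # 1 s of constant input at dt = 1 ms
    spikes, _ = lif_simulate(current)
    print("spike count:", int(spikes.sum()))
```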

Qualifications

  • PhD in Electrical Engineering, Computer Engineering, Computer Science, or a related field
  • In-depth knowledge of digital, analog, and mixed-signal circuit design
  • In-depth experience and hands-on coding skills in MATLAB, Verilog/Verilog-A, and SPICE
  • Experience designing and simulating circuit building blocks such as op-amps, ADCs, DACs, and sense amplifiers in the Cadence Virtuoso environment
  • Knowledge of machine learning, spiking neural networks, and memristor technology is preferred
  • Experience with Python programming and the TensorFlow platform is preferred
  • Strong teamwork ethic and good interpersonal skills

Scalable Distributed Learning and Semantics in Internet-of-Data

Position

Research fellow. 1-2 years.

Job Description

  • Design software implementations for a distributed, extensible and lightweight semantic framework
  • Develop an Internet-of-Data that promotes self-discovery and real-time delivery of information
  • Leverage ideas from linked data, publish-subscribe protocols, and a form of symbolic reasoning known as dependent type theory (see the sketch after this list)
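
To make the publish-subscribe ingredient concrete, here is a minimal in-memory broker in Python. The Broker class, the topic string, and the message format are hypothetical illustrations for this sketch, not part of any existing framework or API.

```python
from collections import defaultdict
from typing import Any, Callable

class Broker:
    """Minimal in-memory publish-subscribe broker (illustrative sketch).

    Subscribers register a callback under a topic string; publishing to
    that topic pushes the message to every callback registered for it.
    """

    def __init__(self) -> None:
        self._subscribers = defaultdict(list)  # topic -> list of callbacks

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        for callback in self._subscribers[topic]:
            callback(message)

# Usage: a hypothetical sensor stream with one analytics subscriber.
if __name__ == "__main__":
    broker = Broker()
    broker.subscribe("sensors/temperature", lambda msg: print("received:", msg))
    broker.publish("sensors/temperature", {"value_celsius": 31.2, "site": "lab"})
```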

Qualifications

  • PhD in computer science, electrical engineering, statistics, mathematics, or a related field
  • Proficiency in programming languages such as Python
  • Experience with the semantic web and with mathematical logic is highly desirable