ayan@website $ ./project --list _

[1] rlx: A modular Deep RL library for research

Author: Ayan Das
Dated: 27 Jun 2020
Tags: Deep Learning, Reinforcement Learning, Open Source, Research,

rlx is a Deep RL library written on top of PyTorch & built for educational and research purposes. The majority of libraries/codebases for Deep RL are geared towards reproducing state-of-the-art algorithms on very specific tasks (e.g. Atari games), but rlx is NOT. It is meant to be more expressive and modular. Rather than treating RL algorithms as black boxes, rlx adopts an API that exposes more granular operations to the user, which makes writing new algorithms easier. It is also useful for adding task-specific engineering to a known algorithm (RL is well known to be very sensitive to small implementation details).
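
To give a feel for what "granular operations" means here, the following is a minimal, hypothetical sketch (written against plain PyTorch, not rlx's actual API) where rollout data, return estimation and the policy-gradient update are all separate, user-visible steps rather than being hidden inside a single black-box training call:

import torch
import torch.nn as nn

class Policy(nn.Module):
    """Tiny categorical policy over a discrete action space (illustrative only)."""
    def __init__(self, obs_dim, n_actions):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(),
                                 nn.Linear(32, n_actions))

    def forward(self, obs):
        return torch.distributions.Categorical(logits=self.net(obs))

def discounted_returns(rewards, gamma=0.99):
    """Granular op: turn a reward sequence into returns-to-go."""
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.insert(0, g)
    return torch.tensor(returns)

# One REINFORCE-style update, written out explicitly with placeholder data.
policy = Policy(obs_dim=4, n_actions=2)
optim = torch.optim.Adam(policy.parameters(), lr=1e-2)

obs = torch.randn(5, 4)              # dummy 5-step rollout
dist = policy(obs)
actions = dist.sample()
rewards = [1.0, 0.0, 1.0, 1.0, 0.0]  # placeholder rewards

loss = -(dist.log_prob(actions) * discounted_returns(rewards)).mean()
optim.zero_grad()
loss.backward()
optim.step()

Keeping each of these steps exposed is what makes it easy to swap in a different return estimator, add a baseline, or inject task-specific tweaks without rewriting the whole algorithm.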

[2] Project MIRIAD: Intel India Pvt. Ltd.

Author: Ayan Das
Dated: 14 Feb 2018
Tags: Intel, Healthcare, Deep Learning,

Providing quality health services and screening to rural populations in a nation as large as India can be extremely challenging. For example, India has only three accredited radiologists per million people. Using AI technology to provide more extensive, effective radiological screening has the potential to save lives and improve health outcomes across the country. A unified approach to handling diverse medical images spanning multiple modalities presents a distinct challenge to researchers and developers, one requiring a compute-intensive processing platform and an innovative approach to deep neural network modelling.

[3] Imagenary: A computer-vision-based mouse and keyboard interface for PCs

Author: Ayan Das
Dated: 26 Nov 2017
Tags: Gesture Control, Keyboard, Mouse, PC Interface,

A computer-vision-based, cost-effective user interface (UI) for computers. The project consists of two modules: a finger-movement-based mouse and a keyboard. A camera mounted above the user's hand, resting in its usual position in front of the PC, captures a video feed. The feed is then processed with several state-of-the-art computer vision algorithms that extract and recognize finger movements in order to predict the intended user control. A rough sketch of this pipeline is shown below.
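
The sketch below illustrates the general idea only and is not the project's actual code: the overhead camera feed is thresholded for skin colour, the largest contour is assumed to be the hand, and its topmost point is treated as the fingertip driving the cursor. The HSV bounds and the fingertip-to-cursor mapping are illustrative assumptions.

import cv2
import numpy as np

LOWER_SKIN = np.array([0, 40, 60], dtype=np.uint8)     # assumed HSV skin range
UPPER_SKIN = np.array([25, 255, 255], dtype=np.uint8)

cap = cv2.VideoCapture(0)  # overhead camera looking down at the hand
while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, LOWER_SKIN, UPPER_SKIN)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        hand = max(contours, key=cv2.contourArea)   # largest blob ~ the hand
        x, y = hand[hand[:, :, 1].argmin()][0]      # topmost point ~ fingertip
        print(f"fingertip at ({x}, {y})")           # map to cursor movement here
    cv2.imshow("mask", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()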