Python
Object Detection
OpenCV
NLTK
Keras
NumPy
Pandas
EDA
spaCy
Apache Kafka
Data Visualization
GAN
Generative Art using AI
LSTM
Chatbot Development
TensorFlow
RNN
API
Model Deployment
OCR
Computer Vision
NLP
MLOps
Docker
AWS
Cloud Computing
Flask API
MySQL
Genie Bot
Tools & Technologies: Python, Data Extraction, LSTM, NLP, Data Cleaning, Jupyter, Apache Spark, NLTK, TensorFlow, Scikit-learn.
Description: Built an automation script for an in-house conversational chatbot base model, reducing production time by 80%. Performed data cleaning after creating queries for the project. Developed an end-to-end conversational chatbot from the base model for 3+ international clients, implementing advanced NLP techniques including domain classification, intent classification, and entity recognition; LSTM and BiLSTM models were used for entity detection. Built a prototype of a conversational and rule-based chatbot to meet the clients' requirements. Created a POC of the base-model chatbot on the MLOps platforms Airflow and MLflow.
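As a rough illustration of the entity-detection component, below is a minimal Keras BiLSTM sequence-tagging sketch; the vocabulary size, tag set, sequence length, and random data are assumed placeholders rather than the production configuration.

import numpy as np
from tensorflow.keras import layers, models

VOCAB_SIZE = 5000  # assumed vocabulary size
NUM_TAGS = 7       # assumed number of entity tags (e.g. a BIO scheme)
MAX_LEN = 50       # assumed padded sequence length

# BiLSTM tagger: one entity tag predicted per token position
model = models.Sequential([
    layers.Embedding(input_dim=VOCAB_SIZE, output_dim=64),
    layers.Bidirectional(layers.LSTM(64, return_sequences=True)),
    layers.TimeDistributed(layers.Dense(NUM_TAGS, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Dummy batch of integer-encoded tokens and per-token tag labels
x = np.random.randint(0, VOCAB_SIZE, size=(8, MAX_LEN))
y = np.random.randint(0, NUM_TAGS, size=(8, MAX_LEN))
model.fit(x, y, epochs=1, verbose=0)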
Human-to-Anime for Metaverse Product
Tools & Technologies: GAN, CNN, Data Transformation, Object Detection, Data Gathering, OpenCV, Data Extraction, PyTorch, Keras, Scikit-learn, Matplotlib.
Description: Created human-to-anime images for the Metaverse product using U-GAT-IT, an unsupervised image-to-image translation model, and achieved 75% accuracy through AdaIN. Extracted human and anime images from different open-source platforms, then cleaned the raw images and filtered the data in the database.
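For context, AdaIN (Adaptive Instance Normalization) is the feature-statistics alignment used inside U-GAT-IT-style translation. The sketch below shows only the operation itself, with illustrative tensor shapes, not the project's full network.

import torch

def adain(content: torch.Tensor, style: torch.Tensor, eps: float = 1e-5) -> torch.Tensor:
    """Align per-channel mean/std of content features to those of style features."""
    # content, style: (N, C, H, W) feature maps
    c_mean = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content - c_mean) / c_std + s_mean

# Example: push a human-photo feature map toward anime-style statistics
content_feat = torch.randn(1, 256, 32, 32)
style_feat = torch.randn(1, 256, 32, 32)
print(adain(content_feat, style_feat).shape)  # torch.Size([1, 256, 32, 32])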
Artificial NFTs for XANALIA Product
Tools & Technologies: Python, Data Cleaning, Pandas, Data Gathering, CNN, OpenCV, YOLO.
Description: Built a CNN-based model for a decentralised NFT marketplace using the TensorFlow neural style transfer technique and generated 10k+ bizarre NFTs for the XANALIA platform, whose market cap exceeds 10 million USD.
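As an illustration of that kind of TensorFlow style transfer, here is a minimal sketch using a pre-trained arbitrary-style-transfer model from TF Hub; the model choice and file names are assumptions, not the project's actual pipeline.

import tensorflow as tf
import tensorflow_hub as hub

def load_image(path, max_dim=512):
    img = tf.io.read_file(path)
    img = tf.image.decode_image(img, channels=3, dtype=tf.float32)
    img = tf.image.resize(img, (max_dim, max_dim))
    return img[tf.newaxis, ...]  # add batch dimension

# Pre-trained arbitrary image stylization model (assumed choice)
stylizer = hub.load("https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")

content = load_image("base_art.jpg")   # hypothetical file names
style = load_image("style_ref.jpg")
stylized = stylizer(tf.constant(content), tf.constant(style))[0]

tf.keras.utils.save_img("nft_output.png", stylized[0])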
Multiple POC Projects
Tools & Technologies: Python, GAN, Data Cleaning, Pandas, Data Gathering, CNN, OpenCV, Data Pipeline, TensorFlow, SQL, ETL, PyTorch.
Description: Improved low-quality images to high quality using SRGAN and basic OpenCV functions. Achieved 50% accuracy with the virtual try-on model "Down to the Last Detail", made possible by three inputs: pose keypoints, semantic parsing, and a cloth mask; the SSIM score was 0.71. Contributed to the Unity team on 3D mesh generation, running experiments with models such as PIFuHD and PyTorch3D and reducing mesh-generation time by 30%. Delivered 20+ POCs and their technical documentation with the CTO and team lead. Ideated a new approach for product automation and visualisation using ML/DL, reducing time by 70%. Created an image-classification script for data cleaning, then passed on the cleaned dataset.
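As a pointer to how the SSIM figure above can be computed, here is a minimal evaluation sketch with scikit-image and OpenCV; the file names are hypothetical.

import cv2
from skimage.metrics import structural_similarity as ssim

# Hypothetical paths to a ground-truth image and the try-on model output
reference = cv2.imread("ground_truth.png", cv2.IMREAD_GRAYSCALE)
generated = cv2.imread("tryon_output.png", cv2.IMREAD_GRAYSCALE)
generated = cv2.resize(generated, (reference.shape[1], reference.shape[0]))

score = ssim(reference, generated)
print(f"SSIM: {score:.2f}")  # the project reported an SSIM of 0.71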