Code Logic

Sarvesh Bhatnagar

Code Logic is all about logic development and aims to improve problem-solving skills. By listening to this podcast, you will learn how to think through and decompose a difficult problem into multiple simpler problems...

  • 7 minutes 28 seconds
    Collocations, Part Two (S3E2)

    Hey guys, this is Sarvesh again. I hope you enjoyed the recent episodes and, as promised, this is another episode on collocations. The techniques covered here are the ones most commonly used to decide whether words form a collocation. The general principle behind them is hypothesis testing, with the null hypothesis being that the words are not a collocation. To find out more, go ahead and listen to the episode.
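    The t-test is one standard way to run this hypothesis test on a candidate bigram. Below is a minimal sketch in Python following the usual formulation; the counts are made up purely for illustration:

```python
import math

def bigram_t_score(bigram_count, w1_count, w2_count, n_tokens):
    """t-score for a candidate bigram.

    Null hypothesis: w1 and w2 are independent, so the expected
    probability of the bigram is P(w1) * P(w2).
    """
    observed = bigram_count / n_tokens                        # sample mean
    expected = (w1_count / n_tokens) * (w2_count / n_tokens)  # mean under H0
    # For rare events the sample variance is approximately the sample mean,
    # so the standard error is sqrt(observed / n_tokens).
    return (observed - expected) / math.sqrt(observed / n_tokens)

# Made-up counts: a pair occurring far more often than chance predicts.
t = bigram_t_score(bigram_count=500, w1_count=2000, w2_count=1000,
                   n_tokens=1_000_000)
# t is well above the 2.576 critical value, so we would reject the
# null hypothesis and call the pair a collocation.
```

A t-score above roughly 2.576 rejects the null hypothesis at the 0.5% significance level.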

    #NLP #NaturalLanguageProcessing #Learn #Something #New

    --- Send in a voice message: https://podcasters.spotify.com/pod/show/sarvesh-bhatnagar/message
    20 January 2022, 7:46 am
  • 13 minutes 46 seconds
    Collocations, Part One (S3E1)

    Hey guys, sorry for the long break. I was working from home, and it's difficult to record audio at home. Anyway, the podcast is back. I think we will discuss word embeddings and other embeddings later, because we have a lot to learn before that.

    About this episode: it focuses on collocations, what a collocation comprises, and two methods for finding them, the first based on frequency and the second based on mean and variance. I hope you learn something from this episode and enjoy it. See you in the next episode.
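    The mean-and-variance method can be sketched as follows: collect the signed offsets at which the second word appears around the first; a peaked offset distribution (low variance) suggests the pair is a collocation. A minimal illustration in Python, with a toy sentence and an assumed window size of 5:

```python
from statistics import mean, pvariance

def offset_stats(tokens, w1, w2, window=5):
    """Collect signed offsets of w2 relative to w1 within a window.
    A low variance suggests the pair forms a collocation."""
    offsets = []
    for i, tok in enumerate(tokens):
        if tok != w1:
            continue
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i and tokens[j] == w2:
                offsets.append(j - i)
    return mean(offsets), pvariance(offsets)

tokens = "she knocked on his door they knocked at the door".split()
m, v = offset_stats(tokens, "knocked", "door")
# "door" appears at offsets 3, -2, 3 relative to "knocked" here.
```

The higher the variance, the less the two words behave like a fixed phrase.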

    We are starting afresh with season 3, because I think we were learning in a disorganized manner. I always strive to make good content for my listeners and will try to make it as polished as possible. I might re-upload previous episodes. Good luck, have fun, and happy new year!


    #collocations #learn #learnsomethingnew #NLP

    3 January 2022, 10:59 am
  • 4 minutes 2 seconds
    Word Embeddings - A simple introduction to word2vec

    Hey guys, welcome to another episode on word embeddings! In this episode we talk about another popular word embedding technique known as word2vec. We use word2vec to capture contextual meaning in our vector representations. I've found this useful reading for word2vec; do read it for an in-depth explanation.
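    Not the model itself, but a sketch of the training data word2vec's skip-gram variant learns from: each word predicts the words around it, so the corpus is turned into (center, context) pairs:

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) pairs: the training examples
    that word2vec's skip-gram model learns from."""
    pairs = []
    for i, center in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

pairs = skipgram_pairs("the cat sat on the mat".split(), window=1)
```

Training a neural network to predict context from center (or vice versa) is what produces the dense vectors; libraries such as gensim handle that part.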

    P.S. Sorry for always posting episodes after a significant delay. I am learning various things myself, I have different blogs to handle, and multiple projects in progress, so my daily schedule is kinda packed. I hope you all get some value from my podcasts and that they help you build an intuitive understanding of various topics.

    See you in the next podcast episode!

    13 January 2021, 11:04 pm
  • 2 minutes 37 seconds
    Introduction to word embeddings and One hot encoding in NLP

    In this podcast episode we discuss why word embeddings are required and what they are, and we also cover one-hot encoding. In the next episode we will talk about specific word embedding techniques individually. Stay tuned.
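    One-hot encoding is small enough to show in full: each word maps to a vector of zeros with a single 1 at its vocabulary index (the vocabulary here is just an example):

```python
def one_hot(vocab, word):
    """One-hot vector: all zeros except a 1 at the word's index."""
    vec = [0] * len(vocab)
    vec[vocab.index(word)] = 1
    return vec

vocab = ["cat", "dog", "fish"]
# one_hot(vocab, "dog") -> [0, 1, 0]
```

The drawback that motivates real embeddings: vectors grow with vocabulary size, and every pair of words is equally distant, so no similarity is captured.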

    Sponsored by www.stacklearn.org

    22 December 2020, 9:14 pm
  • 1 minute 45 seconds
    Learn about the TF-IDF model in Natural Language Processing

    In this podcast episode we talk about the TF-IDF model in Natural Language Processing. TF-IDF stands for term frequency-inverse document frequency. We use the TF-IDF model to give more weight to important words compared with common words like the, a, in, there, where, etc. To learn Python programming visit www.stacklearn.org. See you in the next podcast episode!
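    A minimal sketch of the computation, using the plain logarithmic idf formulation (it assumes the term occurs in at least one document; the toy corpus is made up):

```python
import math
from collections import Counter

def tf_idf(term, doc, docs):
    """Score = term frequency in the document x inverse document
    frequency across the corpus."""
    tf = Counter(doc)[term] / len(doc)
    df = sum(1 for d in docs if term in d)  # documents containing the term
    idf = math.log(len(docs) / df)
    return tf * idf

docs = [["the", "cat", "sat"], ["the", "dog", "ran"], ["the", "cat", "ran"]]
# "the" appears in every document, so its idf (and score) is 0;
# "cat" appears in only two, so it scores higher.
```

This is exactly why stop words like "the" get down-weighted: appearing in every document drives the idf term to zero.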

    13 December 2020, 10:41 pm
  • 4 minutes 5 seconds
    Bag of Words in Natural Language Processing

    In this podcast episode we talk about the bag-of-words model in natural language processing. The bag-of-words model is simply a feature extraction method used in NLP. We mainly discuss why it is needed and what it is. In summary, BOW is simply a set of (word, frequency) pairs.
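    That summary translates almost directly into code: count each word and keep the (word, frequency) pairs, discarding order entirely:

```python
from collections import Counter

def bag_of_words(tokens):
    """BOW representation: the set of (word, frequency) pairs,
    with word order discarded."""
    return set(Counter(tokens).items())

bow = bag_of_words("the cat and the hat".split())
# contains ("the", 2), ("cat", 1), ("and", 1), ("hat", 1)
```

Losing word order is the model's main weakness, which is part of why embeddings come up later in the series.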

    To learn more about BOW: visit this

    Gensim introduction: visit this

    Also, to support me do visit www.stacklearn.org

    9 October 2020, 10:25 pm
  • 3 minutes 2 seconds
    Review of Preprocessing steps in NLP and More!
    In this episode we review preprocessing steps such as lowercasing text, removing unwanted characters, and other related cleaning tasks discussed in the previous episodes. We also talk about the Gensim package in Python and how it simplifies preprocessing in Natural Language Processing.
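    As a rough pure-Python stand-in for what gensim's `simple_preprocess` utility does (lowercase, keep alphabetic tokens, drop very short ones), the steps reviewed above can be sketched like this:

```python
import re

def simple_preprocess(text, min_len=2):
    """Rough stand-in for gensim.utils.simple_preprocess:
    lowercase, keep alphabetic tokens, drop very short ones."""
    return [t for t in re.findall(r"[a-z]+", text.lower())
            if len(t) >= min_len]

toks = simple_preprocess("Hello, World!! NLP in 2020 is fun :)")
# punctuation, digits, and emoticons are all stripped away
```

The real gensim function handles more cases (accent stripping, a max token length, and so on); this sketch only shows the core idea.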
    24 September 2020, 11:22 pm
  • 1 minute 15 seconds
    Lemmatization in Natural Language Processing

    In this podcast episode we talk about lemmatization in natural language processing. It is a text normalization step we perform to normalize words. Lemmatization improves on a shortcoming of stemming, and in this episode we talk about that shortcoming and about how to lemmatize using the nltk library.
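    To make the idea concrete, here is a toy lemmatizer: a tiny exception dictionary plus one crude plural rule. nltk's WordNetLemmatizer does this properly against WordNet's full lexicon; the entries below are purely illustrative:

```python
# Irregular forms that no suffix-stripping rule could recover --
# exactly the cases where lemmatization beats stemming.
EXCEPTIONS = {"better": "good", "ran": "run", "geese": "goose"}

def lemmatize(word):
    """Toy lemmatizer: exception lookup first, then a crude plural rule."""
    if word in EXCEPTIONS:
        return EXCEPTIONS[word]
    if word.endswith("s") and not word.endswith("ss"):
        return word[:-1]  # cats -> cat, but glass stays glass
    return word
```

Mapping "better" to "good" is something a stemmer cannot do, since no suffix of "better" yields "good"; that is the shortcoming the episode refers to.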

    Learn Python: www.stacklearn.org

    Python package to save snippets: PyPI - codesnip

    23 September 2020, 3:32 pm
  • 1 minute 24 seconds
    Stemming in Natural Language Processing
    In this podcast episode we talk about stemming in natural language processing. It is a text normalization step we perform to normalize words so that run, runs, and running count as the same word... stemming involves chopping off affixes such as -ing, -ly, etc.
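    The suffix-chopping idea can be sketched in a few lines. This is a deliberately crude toy, not the Porter stemmer nltk provides, which applies its rules in careful, measured phases:

```python
def stem(word):
    """Crude suffix-stripping stemmer: chop the first matching
    suffix, but never leave a stem shorter than 3 letters."""
    for suffix in ("ingly", "edly", "ing", "ed", "ly", "s"):
        if word.endswith(suffix) and len(word) - len(suffix) >= 3:
            return word[:-len(suffix)]
    return word

# stem("runs") -> "run", but stem("running") -> "runn":
# crude chopping leaves non-words, which is the rough edge
# that lemmatization (next episode's topic) addresses.
```

Even real stemmers produce non-word stems by design; the goal is only that related forms collapse to the same string.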
    17 September 2020, 10:38 pm
  • 2 minutes 12 seconds
    Tokenization in Natural Language Processing

    In this episode we discuss tokenization in Natural Language Processing. As discussed in the previous episode, tokenization is an important step in data cleaning; it entails dividing a large piece of text into smaller chunks. In this episode we cover some of the basic tokenizers available in nltk.tokenize.
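    A minimal regex tokenizer gives a feel for what these tools do: split out words (keeping internal apostrophes) and treat punctuation as separate tokens. This is only a rough approximation of nltk's `word_tokenize`, which handles many more cases:

```python
import re

def word_tokenize(text):
    """Minimal regex tokenizer: words (with an optional internal
    apostrophe) or single punctuation marks."""
    return re.findall(r"\w+(?:'\w+)?|[^\w\s]", text)

toks = word_tokenize("Don't split NLP, please!")
# keeps "Don't" whole, but separates the comma and exclamation mark
```

nltk's tokenizers additionally split contractions ("Don't" into "Do" and "n't"), handle abbreviations, and more, which is why the episode recommends using them rather than rolling your own.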

    If you liked this episode, do follow, and connect with me on Twitter @sarvesh0829.

    Follow my blog at www.stacklearn.org.

    If you sell something locally, do it using the BagUp app available on the Play Store. It would help a lot.

    14 September 2020, 10:30 pm
  • 2 minutes 32 seconds
    Data Cleaning in Natural Language Processing
    In this episode we talk about the various steps of the data cleaning process in Natural Language Processing. Data cleaning is almost a given whenever you want to perform natural language processing on a given text. It involves tokenization, lowercasing words, lemmatization, and so on. Besides that, we also briefly talk about how you can implement those steps. To install the codesnip package mentioned in the last part, open your terminal and run pip install codesnip.
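    Chained together, the steps named above form a small pipeline. A minimal sketch (the stopword list here is a tiny illustrative sample, not a real one; lemmatization would slot in as a final step):

```python
import re

STOPWORDS = {"the", "a", "in", "is", "and"}  # tiny illustrative list

def clean(text):
    """Minimal cleaning pipeline: lowercase -> tokenize -> drop
    stopwords."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]

cleaned = clean("The cat IS in the hat!")
# only the content words survive
```

In practice you would pull a full stopword list from nltk or gensim rather than hard-coding one.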
    13 September 2020, 9:36 pm