GeekCoding101

Machine Learning
Master machine learning with hands-on tutorials, AI models, and real-world applications.

Overfitting! Unlocking the Last Key Concept in Supervised Machine Learning – Day 11, 12

I finished the course! I really enjoyed the learning experience in Andrew's course. Let's see what I've learned over these two days!

Overfitting - The Last Topic of this Course!

Overfitting occurs when a machine learning model learns the details and noise in the training data to the extent that it hurts the model's performance on new data. The model is great at predicting or fitting the training data but performs poorly on unseen data, because it fails to generalize from the training set to the broader population of data. The course explains several ways overfitting can be addressed. We can't bypass underfitting either: overfitting and underfitting are both undesirable effects that suggest a model is not well-tuned to the task at hand, but they stem from opposite causes and have different solutions. Below are two screenshots captured from the course for my notes:

Questions that help me master the content

Words From Andrew At The End!

"I want to say congratulations on how far you've come and I want to say great job for getting through all the way to the end of this video. I hope you also work through the practice labs and quizzes. Having said that, there are still many more exciting things to learn." Awesome! I'm ready for the next machine learning journey!
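To make the generalization gap concrete, here is a minimal NumPy sketch of my own (a toy example, not from the course) that fits a straight line and a high-degree polynomial to the same noisy linear data, then compares errors on held-out points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy samples of an underlying linear trend y = 2x + 1.
x = rng.uniform(0, 1, 30)
y = 2 * x + 1 + rng.normal(0, 0.2, size=30)

# Hold out a third of the data to measure generalization.
x_train, y_train = x[:20], y[:20]
x_test, y_test = x[20:], y[20:]

def train_and_test_mse(degree):
    # Least-squares polynomial fit on the training split only.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

for degree in (1, 9):
    train_mse, test_mse = train_and_test_mse(degree)
    print(f"degree={degree}  train MSE={train_mse:.4f}  test MSE={test_mse:.4f}")
```

The flexible degree-9 fit chases the noise: its training error drops below the straight line's, while its held-out error is typically worse. That split between training and test error is exactly what overfitting means.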

May 12, 2024

Grinding Through Logistic Regression: Exploring Supervised Machine Learning – Day 10

Let's continue! Today was mainly about "Decision boundary", "Cost function of logistic regression", "Logistic loss", and "Gradient Descent Implementation for logistic regression". We found out the decision boundary is where z equals 0 in the sigmoid function, because at that point the sigmoid's output sits exactly at the neutral value of 0.5. Andrew gave an example with two variables: z = x1 + x2 - 3 (w1 = w2 = 1, b = -3), so the decision boundary is the line x1 + x2 = 3. I'd say "Cost function for logistic regression" is the hardest topic I've seen in week 3 so far. I haven't quite figured out why the squared error cost function isn't applicable and where the loss function came from. I'll have to re-watch the videos. The lab is also useful. This particular cost function is derived from statistics using a statistical principle called maximum likelihood estimation (MLE).

Questions and Answers

Some thoughts of today

Honestly, it feels like it's getting tougher and tougher. I can still get through the equations and derivations alright; it's just that as I age, I feel like my brain is not keeping up. At the end of each video, Andrew always congratulates me with a big smile, saying I've mastered the content of the session. But deep down, I really think what he's actually thinking is, "Ha, got you stumped again!" To be fair, though, Andrew really does explain things superbly well. I hope someday I can truly master this knowledge and use it effortlessly. Fighting! Ps. Feel free to…
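Here is a tiny sketch of my own pulling today's pieces together, reusing the numbers from Andrew's example (w1 = w2 = 1, b = -3): the sigmoid, the z = 0 neutral point on the decision boundary, and the logistic loss for a single example.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def logistic_loss(f, y):
    # Loss for one example: -log(f) if y == 1, -log(1 - f) if y == 0.
    return -y * np.log(f) - (1 - y) * np.log(1 - f)

# Andrew's example: z = x1 + x2 - 3, i.e. w1 = w2 = 1 and b = -3.
w = np.array([1.0, 1.0])
b = -3.0

# Any point on the line x1 + x2 = 3 gives z = 0, the neutral position:
x = np.array([1.0, 2.0])
z = np.dot(w, x) + b
print(sigmoid(z))               # 0.5 -- right on the decision boundary

# The loss punishes confident wrong predictions much harder:
print(logistic_loss(0.99, 1))   # ~0.01, confident and correct
print(logistic_loss(0.01, 1))   # ~4.6, confident and wrong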

May 10, 2024

Master Gradient Descent and Binary Classification: Supervised Machine Learning – Day 9

A break due to sickness

Oh boy... I was sick for almost two weeks 🤒 After that break, I'm back to dive deep into machine learning, and today we'll revisit one of the core concepts in training models: gradient descent. This optimization technique is essential for minimizing the cost function and finding the optimal parameters for our machine learning models. Whether you're working with linear regression or more complex algorithms, understanding how gradient descent guides the learning process is key to achieving accurate predictions and efficient model training. Let's dive back into the data-drenched depths where we left off, shall we? 🚀

The first coding assessment

I couldn't recall all of the material, actually. The assessment tests your implementation of gradient descent for one-variable linear regression. I walked through the previous lessons and found this summary really helpful: this exercise reinforced what I've learned about gradient descent this week.

Getting into Classification

I started week 3. Looks like it will be more interesting. I made a few notes: the model outputs the probability that y is 1, given input vector x and parameters vector w, b. I couldn't focus on this for too long; I need to pause after watching a few videos. Bye now. Ps. feel free to check out my other posts in my Supervised Machine Learning Journey.
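As a memo to myself, here's a minimal version of the kind of loop the assessment asks for (my own sketch, not the lab's starter code): gradient descent for one-variable linear regression.

```python
import numpy as np

def gradient_descent(x, y, alpha=0.1, iters=5000):
    """Fit f(x) = w*x + b by gradient descent on the squared-error cost."""
    w, b = 0.0, 0.0
    for _ in range(iters):
        err = w * x + b - y          # prediction error for every example
        dj_dw = np.mean(err * x)     # dJ/dw = (1/m) * sum(err * x)
        dj_db = np.mean(err)         # dJ/db = (1/m) * sum(err)
        w -= alpha * dj_dw
        b -= alpha * dj_db
    return w, b

x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([3.0, 5.0, 7.0, 9.0])   # exactly y = 2x + 1
print(gradient_descent(x, y))        # converges to roughly (2.0, 1.0)
```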

May 8, 2024

Master Learning Rate and Feature Engineering: Supervised Machine Learning – Day 8

Today I started with "Choosing the learning rate", reviewed the Jupyter lab, and learned what feature engineering is.

Choosing the learning rate

The graph taught in "Choosing the learning rate" is helpful when developing models; a numerical version is sketched below.

Feature Engineering

When I first started Andrew Ng's Supervised Machine Learning course, I didn't really realize how much of an impact feature engineering could have on a model's performance. But boy, was I in for a surprise! As I worked through the course, I quickly realized that the raw data we start with is rarely good enough for building a great model. Instead, it needs to be transformed, scaled, and cleaned up — that's where feature engineering comes into play. Feature engineering is all about making your data more useful for a machine learning algorithm. Think of it like preparing ingredients for a recipe — the better the quality of your ingredients, the better the final dish will be. Similarly, in machine learning, the features (the input variables) need to be well-prepared to help the algorithm find patterns more easily. Without this step, even the most powerful algorithms might not perform at their best. In the course, Andrew Ng really breaks it down and explains how important feature scaling and transformation are. In one of the early lessons, he used the example of linear regression — a simple algorithm that relies on understanding the relationship between input features and the output. If the features are on vastly different scales, it can throw off the whole process and make training the model take much longer. This…
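The learning-rate graph from the video is easy to reproduce numerically. Below is a toy sketch of my own (not the lab's code): the same gradient descent loop run with three different values of alpha, reporting whether the cost J is shrinking or blowing up.

```python
import numpy as np

# Toy one-feature dataset: y = 2x + 1 exactly.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = 2 * x + 1

def final_cost(alpha, iters=50):
    """Run gradient descent and return the cost J after `iters` steps."""
    w, b = 0.0, 0.0
    for _ in range(iters):
        err = w * x + b - y
        w -= alpha * np.mean(err * x)
        b -= alpha * np.mean(err)
    return np.mean((w * x + b - y) ** 2) / 2

for alpha in (0.001, 0.03, 0.2):
    J = final_cost(alpha)
    verdict = "decreasing" if J < final_cost(alpha, iters=1) else "DIVERGING"
    print(f"alpha={alpha:<5}  J after 50 steps = {J:.3e}  ({verdict})")
```

With alpha = 0.2 the steps overshoot and J grows without bound, while the tiny alpha = 0.001 decreases J painfully slowly: the two failure modes the course's graph warns about.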

April 26, 2024

Finished Machine Learning for Absolute Beginners - Level 1

As you know, I've been working through Andrew Ng's Supervised Machine Learning: Regression and Classification, and it's so dry! So I also set aside some time for easier ML courses to aid my understanding. Today I came across Machine Learning for Absolute Beginners - Level 1 and it's really easy and beginner-friendly. Finished in 2.5 hours - maybe it felt easy because I've already made some good progress in Supervised Machine Learning: Regression and Classification. I want to share my notes in this blog post.

Applied AI or Shallow AI

An industrial robot that handles only the specific, narrow task it was programmed for is an example of Applied AI, or Shallow AI. Underfitting and overfitting are challenges for generalization.

Underfitting

The trained model does not work well on the training data and can't generalize to new data. Reasons may be:

An ideal training process would look like: underfitting… better fitting… good fit.

Overfitting

The trained model works well on the training data but can't generalize well to new data. Reasons may be:

Training dataset (labeled) -> ML training phase -> trained model
Input (unlabeled dataset) -> trained model (inference phase) -> output (labeled dataset)

Approaches, or learning algorithms, of ML systems can be categorized into:

Supervised Learning

There are two very typical tasks performed using supervised learning:

Shallow Learning

One of the common classification algorithms in the shallow learning category is the Support Vector Machine (SVM); see the sketch after these notes.

Unsupervised Learning

The goal is to automatically identify meaningful patterns in unlabeled data.

Semi-supervised…
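The labeled-training-then-inference pipeline above maps directly onto a few lines of code. Here's a minimal sketch of my own using scikit-learn's SVM classifier (assuming scikit-learn is installed; the course itself shows no code):

```python
from sklearn import datasets
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Labeled training dataset -> ML training phase -> trained model.
X, y = datasets.load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = SVC(kernel="linear")   # a classic shallow-learning classifier
model.fit(X_train, y_train)    # training phase on labeled data

# Unlabeled input -> trained model (inference phase) -> predicted labels.
print(model.predict(X_test[:5]))
print("accuracy:", model.score(X_test, y_test))
```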

April 24, 2024

Master Feature Scaling & Gradient Descent: Supervised Machine Learning – Day 7

Welcome back

I didn't get much time to work on the course in the past 5 days!!! Finally resuming today! Today I reviewed "Feature scaling part 1" and learned "Feature scaling part 2" and "Checking gradient descent for convergence". The course is getting harder: for a 20-minute video, I spent double the time and needed to check external articles to understand it better.

Feature Scaling

Trying to understand what "feature scaling" is... What are the features and parameters in the formula below? The predicted price ŷ = w1·x1 + w2·x2 + b. Here x1 and x2 are features; the former represents the size of the house, the latter the number of bedrooms. w1 and w2 are parameters. When the possible range of values of a feature is large, a good model is more likely to learn a relatively small parameter value for it. Likewise, when the possible values of the feature are small, like the number of bedrooms, a reasonable value for its parameter will be relatively large, like 50. So how does this relate to gradient descent? At the end of the video, Andrew explained that the features need to be rescaled or transformed so that the cost function J computed on the transformed data has a better shape, letting gradient descent find a much more direct path to the global minimum.

"When you have different features that take on very different ranges of values, it can cause gradient descent to run slowly, but rescaling the different features so they all take on comparable ranges of values can speed up gradient descent significantly." (Andrew Ng)

One key aspect of feature engineering is…
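Z-score normalization is one of the scaling methods covered in these videos; as a reminder to myself, here is a minimal sketch (my own toy numbers) that brings house size and bedroom count onto comparable ranges:

```python
import numpy as np

# Two features on wildly different scales: size in sq ft, number of bedrooms.
X = np.array([[2104.0, 5.0],
              [1416.0, 3.0],
              [852.0,  2.0]])

def zscore_normalize(X):
    """Rescale every column (feature) to mean 0 and standard deviation 1."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    return (X - mu) / sigma, mu, sigma

X_norm, mu, sigma = zscore_normalize(X)
print("feature ranges before:", X.max(axis=0) - X.min(axis=0))   # [1252. 3.]
print("feature ranges after: ", X_norm.max(axis=0) - X_norm.min(axis=0))
```

After normalization both feature ranges are of order 1, so the cost contours are closer to circles and gradient descent takes a much more direct path to the minimum.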

April 24, 2024

Supervised Machine Learning – Day 6

Today I spent 10... 30... no, 60 mins reviewing previous notes, and just realized that's a lot. I am amazing 🤩 Today starts with "Gradient descent for multiple linear regression".

Gradient descent for multiple linear regression

Holy... At the beginning Andrew threw out the formula below and said he hoped we still remembered it. I didn't:

Why couldn't I recognize it... Where does this come from? I spent 30 mins reviewing several previous videos, then found it... The important videos are: 1. Week 1 "Implementing gradient descent", where Andrew just wrote the formula down without explaining it (he explained it later); 2. "Gradient descent for linear regression".

Holy! Found a mistake in Andrew's course! In the screenshot above, Andrew dropped the x^(i) at the end of the first line! WOW! I ROCK! Spent almost 60 mins! I am done for today!

The situation reversed an hour later

But I felt uneasy; I was NOT convinced that I had found such a simple mistake in Andrew's most popular machine learning course! I started trying to derive the formula myself. And... I found out I was indeed too young, too naive... Andrew was right... I got help from my college classmate, who has been dealing with calculus every day for more than 20 years. This is his derivation of the formula:

He said to me, just like my math teacher in college: chain rule, deriving step by step. If you still remember, I mentioned Parul Pandey's Understanding the Mathematics behind Gradient Descent in my previous post Supervised Machine Learning – Day 2 & 3 – On My Way To Becoming A Machine Learning Person; in her post she…
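Since the screenshot of his handwritten derivation isn't in this excerpt, here is my own LaTeX reconstruction of the chain-rule steps for the one-variable case, which shows exactly where the x^(i) factor comes from:

```latex
% Cost function for one-variable linear regression, as in the course:
J(w,b) = \frac{1}{2m}\sum_{i=1}^{m}\left(wx^{(i)} + b - y^{(i)}\right)^2

% Chain rule: differentiate the square first, then the inner expression.
\frac{\partial J}{\partial w}
  = \frac{1}{2m}\sum_{i=1}^{m} 2\left(wx^{(i)} + b - y^{(i)}\right)
    \cdot \frac{\partial}{\partial w}\left(wx^{(i)} + b\right)
  = \frac{1}{m}\sum_{i=1}^{m}\left(wx^{(i)} + b - y^{(i)}\right)x^{(i)}

\frac{\partial J}{\partial b}
  = \frac{1}{m}\sum_{i=1}^{m}\left(wx^{(i)} + b - y^{(i)}\right)
```

The inner derivative of wx^(i) + b with respect to w is x^(i), which produces the trailing factor; with respect to b it is 1, which is why that line has no x^(i) at the end.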

April 18, 2024

Mastering Multiple Features & Vectorization: Supervised Machine Learning – Day 4 and 5

So difficult to find time for this. Day 4 was a long day for me; I got just 15 mins before bed to quickly skim through the videos "Multiple features" and "Vectorization part 1". Day 5 was an even longer day... went to urgent care in the morning... then back-to-back meetings after coming back... lunch... back-to-back meetings again... had to step out again... Anyway, that's life.

Multiple features (variables) and Vectorization

In "Multiple features", Andrew crisply explained how to simplify the multiple-features formula using vectors and the dot product. In "Vectorization part 1", Andrew introduced how to compute a dot product with NumPy and noted that this type of calculation suits parallel hardware; NumPy can use such hardware to make the dot product fast. In "Part 2", Andrew further explained why computers can do dot products fast, using gradient descent as an example. The lab was informative; I walked through all of it, though I already knew most of the material. More links about vectorization can be found here.

Questions for helping myself learn

I created the following questions to test my knowledge later. What is x₁⁽⁴⁾ in the graph above?

Ps. feel free to check out the series of my Supervised Machine Learning journey.
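To see the speed-up for myself, here is a small timing sketch of my own comparing a plain Python loop to np.dot on the same vectors (exact numbers vary by machine):

```python
import time
import numpy as np

rng = np.random.default_rng(0)
w = rng.random(1_000_000)
x = rng.random(1_000_000)

# Unvectorized: one multiply-add per Python loop iteration.
start = time.perf_counter()
total = 0.0
for i in range(len(w)):
    total += w[i] * x[i]
loop_seconds = time.perf_counter() - start

# Vectorized: NumPy runs the whole dot product in optimized native code.
start = time.perf_counter()
total_np = np.dot(w, x)
np_seconds = time.perf_counter() - start

print(f"loop:   {loop_seconds:.4f}s")
print(f"np.dot: {np_seconds:.4f}s")   # typically orders of magnitude faster
```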

April 17, 2024

Supervised Machine Learning – Day 2 & 3 - On My Way To Becoming A Machine Learning Person

A brief introduction: on Day 2 I was busy and managed only 15 mins on the "Supervised Machine Learning" videos, plus 15 mins watching But what is a neural network? | Chapter 1, Deep learning from 3blue1brown. On Day 3 I managed 30+ mins on "Supervised Machine Learning", and spent some time reading articles, like Parul Pandey's Understanding the Mathematics behind Gradient Descent, which is really good. I like math 😂 So these notes are mixed.

Notes

Implementing gradient descent

Notation

I was struggling to write LaTeX for the formulas, then found this table useful (source is here). Andrew said I don't need to worry about derivatives and calculus at all. I trust him; still, I dove into my bookcase, found the advanced mathematics books I used in college, and spent 15 mins reviewing. Yes, I don't need to worry. I snapped two epic shots of my college "Advanced Mathematics" book to show off my killer skills in derivatives and calculus - pretty sure I've unlocked Math Wizard status!

Reading online

Okay. Reading some online articles. "If we are able to compute the derivative of a function, we know in which direction to proceed to minimize it (for the cost function)." From Parul Pandey, Understanding the Mathematics behind Gradient Descent. Parul briefly introduced the power rule and the chain rule; fortunately, I still remember them from college. I am so proud. After reviewing various explanations of gradient descent, I truly appreciate Andrew's straightforward and precise approach! He was playful at times, drawing a stick man walking down a hill…
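Parul's point (the derivative tells you which direction to step) fits in a few lines. A toy sketch of my own, where the power rule and chain rule give f'(w) = 2(w - 3) for f(w) = (w - 3)²:

```python
# Minimize f(w) = (w - 3)^2 by repeatedly stepping against the derivative.
def f_prime(w):
    # Power rule plus chain rule: d/dw (w - 3)^2 = 2 * (w - 3).
    return 2 * (w - 3)

w, alpha = 10.0, 0.1
for _ in range(50):
    # f'(w) > 0 means the function rises to the right, so step left
    # (and vice versa): subtracting the derivative always moves downhill.
    w -= alpha * f_prime(w)

print(w)  # ~3.0, the minimizer of f
```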

April 15, 2024

Supervised Machine Learning - Day 1

The Beginning

I've been advancing the technology of my AI-powered product knowledge base chatbot, which is based on Django/LangChain/OpenAI/Chroma/Gradio and sits at the AI application/framework layer. I've also kept an eye on how to build a pipeline for assessing the accuracy of machine learning models, which is part of AI DevOps/infra. But I realized that I have no idea how to measure a model's accuracy. That upset me. So I started looking for answers. My first Google search was "how to measure llm accuracy", which brought me to Evaluating Large Language Models (LLMs): A Standard Set of Metrics for Accurate Assessment, and it's informative. It's not a lengthy article, and I read it through. It opened a new world to me: there is a standard set of metrics for evaluating LLMs. I don't know all of them or where to start! I had to tell myself, "Man, you don't know machine learning..." So my next search was "machine learning course", and Andrew Ng's Supervised Machine Learning: Regression and Classification came up at the top of the Google search results! It's famous, and I knew of it before! So I made a decision: take action now and finish it thoroughly! I immediately enrolled in the course. Now let's start the journey!

Day 1 Started

Basics 1. What is ML? Defined by Arthur Samuel back in the 1950s 😯 "Field of study that gives computers the ability to learn without being explicitly programmed." The above definition gives the key point (the highlighted part), which could answer the question from…
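For a plain classifier, the question that kicked all this off has a one-line answer: accuracy is the fraction of correct predictions. A toy sketch of my own (LLM evaluation needs far richer metrics than this, which is the point of that article):

```python
import numpy as np

# Ground-truth labels vs. a model's predictions on eight examples.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])

accuracy = np.mean(y_true == y_pred)  # fraction of matching labels
print(f"accuracy: {accuracy:.2f}")    # 6 of 8 correct -> 0.75
```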

April 13, 2024

COPYRIGHT © 2024 GeekCoding101. ALL RIGHTS RESERVED.
