I applied for and was recently awarded the Bertelsmann Technology Scholarship, in which a group of students takes part in an Artificial Intelligence track made up of 5 parts to be completed in 3.5 months.
As part of taking the class, we participate in a Slack channel where we post our daily studies for 60 days, reflecting on what we have learned. This is a transcription of those 60 days. The public GitHub wiki is located here https://github.com/chromilo/udacity-bertelsmann-scholarship/wiki
Day 1: I am in p3 (Datasets) doing the x-ray annotation project. I have created the Appen job using the "Image Categorization" template. I uploaded the x-ray image data and modified the CML to make the questions specific to checking for pneumonia. Also updated the Examples section. I am still working out the usage of conditional only-if in checkboxes to determine what other smarts to include when annotators go through the page. Created one question so far and will continue working on the other questions next (this took longer than 30 min, so I'll continue tomorrow; have to make the 60 days last). I updated the files on my GitHub here https://github.com/chromilo/udacity-bertelsmann-scholarship #60daysofudacity
Day 2: I just finished setting up my TopGreener smart plug IoT device to automate turning on the outdoor Christmas lights at sunset. I believe it connects to a weather network to estimate when sunset is for my city. That API call itself is a simple lookup, but the astronomy service on the other side may be using AI/ML to correctly annotate when sunset occurs for a location (I doubt they use human annotation). This service is one example https://services.timeanddate.com/api/services/astronomy.html
I updated my notes here on my GitHub wiki https://github.com/chromilo/udacity-bertelsmann-scholarship/wiki
Day 3: Updated the Appen job with 5 questions for the p3 project. What ratio of test questions to the 117-image dataset is needed to generate an unbiased result set? My draft project proposal documentation so far is here https://github.com/chromilo/udacity-bertelsmann-scholarship/blob/main/p3-creating%20dataset/project-proposal-xraydataset.pdf #60daysofudacity
Day 4: Started p4 lesson 1 and completed the two quizzes there. I went back to p2 and realized I had missed the quiz in section 11 (Unsupervised vs. Supervised Approaches), so I completed that as well. I think I missed it because I initially did the lessons from my phone, which makes it harder to see all the material, instead of from my desktop. I also added another question to my p3 project in the Appen dashboard for a total of 6 questions, to ensure I include a test case for every 19 data points, i.e. 117 / 19 ≈ 6.2, so at least 6 questions are needed in total. I have updated my notes here https://github.com/chromilo/udacity-bertelsmann-scholarship/wiki #60daysofudacity
Day 5: Finished p4 lesson 1 including the confusion matrix quiz. Starting to see practical uses of AI everywhere, including this robotic pool cleaner at the YMCA. I think this pool cleaner is similar to the iRobot vacuum cleaner in that the same sensors are used to sense wall obstructions. "Transfer learning" is probably used here: the first few layers are copied from a pre-trained network while the output layer is tuned to work in a pool environment, for example checking for mold. The YMCA pool is still closed due to COVID, but I am excited nevertheless to see some activity on the pool deck.
Updated my GitHub wiki notes here https://github.com/chromilo/udacity-bertelsmann-scholarship/wiki #60daysofudacity
I was just googling pool-cleaning robots and the embedded processors they use, which need to run on low power to operate safely underwater. I don't know how TinyML works yet, but if it changes the model/code size then power consumption may change as well. Will be reading up more on TinyML.
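For reference, this is roughly what the transfer-learning idea above looks like in code. It is only a minimal sketch assuming a TensorFlow/Keras setup; the pool-specific classes and training data below are hypothetical examples, not anything a real pool-cleaner vendor publishes.

```python
# Minimal transfer-learning sketch (assumes TensorFlow/Keras is installed).
# The pool-specific classes and dataset are hypothetical examples.
import tensorflow as tf

# Reuse a network pre-trained on ImageNet for the early feature-extraction layers.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pre-trained layers

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. "clean tile" vs "mold"
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# train_ds would be a tf.data.Dataset of pool-deck images and labels (hypothetical)
# model.fit(train_ds, epochs=5)
```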
Day 6: Started training my model for the p4 lesson 2 project. I signed up for a Google Cloud Platform free trial: 90 days and $300 to spend, expiring in Mar 2021. Created an AutoML Vision classification project and set up a standard Google Cloud Storage free-tier bucket in the us-west1 region (single region only), which will be used to store the CSV dataset that I upload to GCP. I picked 100 chest x-ray images each for normal/pneumonia for my new AutoML dataset and uploaded the zipped file to GCP, which is now importing the images into the dataset.
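For my own notes, the import file AutoML Vision expects is a simple CSV: an optional split column (TRAIN/VALIDATION/TEST), the gs:// image URI, and a label. A small sketch of generating it is below; the bucket name and file layout are made-up examples.

```python
# Sketch of building the AutoML Vision import CSV (bucket/paths are hypothetical).
# Each row: optional split (TRAIN/VALIDATION/TEST), gs:// image URI, label.
import csv

rows = []
for i in range(100):
    rows.append(["TRAIN", f"gs://my-xray-bucket/normal/normal_{i}.jpeg", "normal"])
    rows.append(["TRAIN", f"gs://my-xray-bucket/pneumonia/pneumonia_{i}.jpeg", "pneumonia"])

with open("xray_import.csv", "w", newline="") as f:
    csv.writer(f).writerows(rows)
```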
Day 7: Completed the first model, "Binary Classifier with Clean/Balanced Data", for the p4 lesson 2 project. The wording "score threshold" appears to have changed to "confidence threshold" on the GCP Vision dashboard, so that confused me a little. The "TEST & USE" tab also shows as "Predict" in the lesson graphic, which is out of date. I don't see "Upload Image", so I assume I have to click on "Deploy Model" to actually run the test. This is where it starts incurring costs. So far I have completed the AutoML Model Report for the Binary Classifier with Clean/Balanced Data only. I will add 3 additional models to the project next, with cleanunbalanced, dirtybalanced, and 3class datasets.
Day 8: Created another dataset containing 100 "normal" images and 300 "pneumonia" images to train the "Binary Classifier with Clean/Unbalanced Data" model. Also prepared the zip files for the next two datasets, i.e. dirtybalanced and 3class. Just waiting for the cleanunbalanced dataset to finish training, which could take a few hours. I joined Codewars and finished a kata in Python. Currently considering the https://adventofcode.com/ challenge for Day 4.
Day 9: Completed training the remaining 3 datasets (cleanunbalanced, dirtybalanced, and 3class) inside GCP. I recorded the confusion matrix for each one and answered the questions asked in the project report for each model. I am still working out how to correctly identify the binary classification (positive vs. negative) for a matrix with 3 classes. The F1 score is also not calculated yet, so I believe I need to upload a new dataset that only includes normal and one of the pneumonia classes in order to get a correct confusion matrix established. I'll do that next. I also completed two Python challenges on https://adventofcode.com/ with the results posted on my GitHub here https://github.com/chromilo/adventofcode. I might also set up my Raspberry Pi 3 in order to run some Kaggle challenges next.
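A note to self on the F1 score: for a binary confusion matrix it falls straight out of precision and recall, and a 3-class matrix can be reduced to binary by treating one class as positive and pooling the other two as negative (one-vs-rest). Quick sketch with made-up counts:

```python
# F1 from a binary confusion matrix; the counts below are made-up examples.
tp, fp, fn, tn = 92, 8, 5, 95   # e.g. "pneumonia" as positive, "normal" as negative

precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(f"precision={precision:.3f} recall={recall:.3f} f1={f1:.3f}")

# For a 3-class matrix (normal / viral / bacterial), pick one class as "positive"
# and pool the other two as "negative" (one-vs-rest), then reuse the same formulas.
```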
Day 10: Uploaded new datasets to my GCP project to train and evaluate. I split the 3class model into 3classviral and 3classbacterial, comparing each one to a normal dataset to ensure there is less confusion. Calculated the F1 score given the new datasets. I have now completed my project report for p4 lesson 2 and am looking to figure out how to do some peer reviewing, if anyone is interested. I posted a question on a UoPeople subreddit asking what their process is, because grading of projects and assignments there is done by classmates. I asked if they use any digital rights management product to protect the projects but haven't heard back yet.
Day 11: Started p5 lesson 1 on "Measuring Impact and Updating Models". Got to item 13, or 76% done. I followed up on peer reviews with UoPeople and it seems they use ProctorU, which isn't free. An alternative is using Google Drive to apply rights management to the project document to prevent copying, printing, and downloading. This would also require a Google account, so plagiarism can be tracked that way. Just some thoughts.
Day 12: Drafted goals and objectives for the study group #sg_pinoi_and_pin_ai for my fellow Filipinos. Also added a poll asking for a good time to meet. Read some of the SG Toolkit to get some ideas. Also reviewed the first half of p5 lesson 1.
Day 13: Completed all the p5 lessons and quizzes. Started a draft of the capstone project but am still unsure what to use for the business proposal: an actual use case from work, one I can use personally at home, or something we can use at school for this Udacity course.
Day 14: Continued working on the capstone project. Also tried to set up some last-minute calls with the #sg_pinoi_and_pin_ai study group, but will likely continue next week as it's Christmas in the Philippines now.
Day 15: Reread the p5 lessons and updated my GitHub wiki https://github.com/chromilo/udacity-bertelsmann-scholarship/wiki. Trying to find another free ML service as an alternative to GCP AutoML, as I have only $200 in credits left. Planning to use the Titanic dataset from Kaggle.
Day 16: Created a new Google Colab notebook to test training the Titanic dataset from the Kaggle challenge. Still undecided on the business case for the p5 project.
Day 17: Reviewed great summary notes from @Winsome Yuen for p5. Trying to get the train.csv dataset working from the Kaggle Titanic challenge.
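Rough sketch of what I'm trying in the Colab notebook to get train.csv loaded. The feature choices here are just a simple starting point for illustration, not the final notebook:

```python
# Sketch of loading and preparing the Kaggle Titanic train.csv with pandas.
import pandas as pd

train = pd.read_csv("train.csv")

# A few simple features; Age has missing values, so fill with the median.
train["Age"] = train["Age"].fillna(train["Age"].median())
train["Sex"] = train["Sex"].map({"male": 0, "female": 1})

features = ["Pclass", "Sex", "Age", "Fare"]
X = train[features].values
y = train["Survived"].values  # the label column only exists in train.csv
```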
Day 18: Will use the company's mid-year real estate prospectus generated by the R&D team to come up with a capstone project proposal. Given that the COVID pandemic is unlike any other pandemic, the idea is to use AI/ML to forecast the service-oriented jobs most heavily affected in the coming months. US: https://bit.ly/3hDB8v3 CAN EN: https://bit.ly/2Y8Bulz
Day 19: Close to getting the Titanic dataset working for the Kaggle challenge, using the Keras API to train, evaluate, and predict survivors from the given test set. I just don't know how to set the value of the label for the test dataset, so I joined the Kaggle study group. I also set up a kickoff meeting for the #sg_pinoi_and_pin_ai study group tomorrow afternoon at 3:30pm Pacific.
Day 20: Finished first page of capstone project. Updated SG charter with meeting minutes from today.
Day 21: Finished and submitted my first Kaggle challenge using the Titanic datasets, detailed in my GitHub repo and wiki. I used Keras on top of TensorFlow, pandas DataFrames, and NumPy. I submitted a survivor count of 184 out of 418, which scored 0.74162 on the Kaggle leaderboard. Opened another Zoom bridge at 10:30am Pacific today for our #sg_pinoi_and_pin_ai study group. Will hopefully start with project peer-review initiatives next week.
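For anyone curious, the workflow was roughly the sketch below (a simplified version, not the exact code in the repo). Note that test.csv has no Survived column; the label there is exactly what you predict and submit.

```python
# Simplified sketch of the Titanic Keras workflow (not the exact repo code).
import pandas as pd
import tensorflow as tf

def prepare(df):
    df = df.copy()
    df["Age"] = df["Age"].fillna(df["Age"].median())
    df["Fare"] = df["Fare"].fillna(df["Fare"].median())
    df["Sex"] = df["Sex"].map({"male": 0, "female": 1})
    return df[["Pclass", "Sex", "Age", "Fare"]].values

train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")   # no "Survived" column: that is what we predict

X, y = prepare(train), train["Survived"].values

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=30, batch_size=32, verbose=0)

# Threshold the sigmoid output at 0.5 and write the Kaggle submission file.
preds = (model.predict(prepare(test)) > 0.5).astype(int).ravel()
pd.DataFrame({"PassengerId": test["PassengerId"], "Survived": preds}) \
  .to_csv("submission.csv", index=False)
```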
Day 22: Gathered the three project rubrics and am working to assign points to each row in preparation for project peer grading in study group #sg_pinoi_and_pin_ai. Head on over to that channel to get yours graded, fellow Pinoys and Pinays. Happy New Year to all!
Day 23: Updated study group charter for #sg_pinoi_and_pin_ai with 2 initiatives. Customized project 1 rubrics for peer grading next week. Updated my personal blog.
Day 24: Completed all other sections of the capstone project except for the MVP and post-MVP-deployment sections. Registered for a free Figma.com account, as suggested by @Winsome Yuen, to try and generate a sketch of my product. Trying to figure out how to do that, as it's not as easy as it looks.
Day 25: Updated the Project 1 rubrics and scheduled a Zoom call for tomorrow for the #sg_pinoi_and_pin_ai study group. For the capstone project, still trying to figure out what to wireframe with Figma, because the proposal outcome is not an online website but a PDF report. #60daysofudacity
Day 26: Completed the MVP and post-MVP-deployment sections of the capstone project. I was able to figure out how to use the Figma wireframing app to generate a layout of the business outcome in the form of a line graph. I had to go back to A/B testing using @Rohit Roy Chowdhury's great handwritten notes, as I couldn't quickly find it in the Udacity classroom lessons. Excited to get it peer-reviewed when we have the rubrics ready from our study group. We have a meetup scheduled today in a few hours, so I hope to see some of the members there.
Day 27: Started working on the project 2 rubrics for peer review. Updated the SG charter and posted meeting minutes from yesterday's meetup for our #sg_pinoi_and_pin_ai channel. Joined the featured SG #sg_ai_practioners and read about random forests.
Day 28: Continuing to work on project 2 rubrics.
Day 29: Attended the meetup “Learn Data Science by doing Kaggle Competitions”, where they looked at prostate cancer detection from pictures of cancer tissue.
Day 30: Half-way through Pluralsight course “Building Bots with Microsoft’s Bot Framework”.
Day 31: Continued listening to the Pluralsight course “Building Bots with Microsoft’s Bot Framework”. Now at lesson 5 of 7, topic is “The Dialog of Bots”. Complicated stuff.
Day 32: Completed the project 2 peer review rubrics template for our study group #sg_pinoi_and_pin_ai, and set up the next weekly meeting. Finished lesson 6 of 7 of the Pluralsight course “Building Bots with Microsoft’s Bot Framework”; the topic is "Adding Natural Language Processing through LUIS AI".
Day 33: Prepared meeting minutes for study group weekly calls. Updated project 1 rubrics for the sg_canada study group.
Day 34: Attended kickoff for mentorship program at BCIT, which I am hoping to use for our study groups as a possible initiative, if the members want to.
Day 35: Continued listening to lesson 6 of 7 of the Pluralsight course “Building Bots with Microsoft’s Bot Framework”; the topic is "Adding Natural Language Processing through LUIS AI". Did the intro to ML and started reading through the syllabus for the CS 7638 Artificial Intelligence for Robotics class.
Day 36: Finished the first of 4 challenges in the Veracode Secure Coder competition, titled "OWASP #2: Broken Authentication", running until midnight Jan 15. Doing the challenge in Python, of course.
Day 49: Completed the get_laser_action method but am still failing on 2/8 cases. I need to further tune the heuristics to prevent execution timeouts.
Day 50: Signed up for OpenAthens and linked Google Scholar to the GT Library for access to research papers on localization and Kalman filters. Have to find a relevant research paper and write it up in 300-600 words, due by Monday. Yikes.
Day 53: Started the next chapter, on particle filters. Apparently they are more accurate than either histogram or Kalman filters; as I understand it, they work in a continuous state space like a Kalman filter while still being able to represent multimodal beliefs like a histogram filter. Currently at 13/28 lessons in this chapter.
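To make sure I understand the predict/weight/resample cycle, here is a toy 1-D particle filter sketch; the Gaussian motion and measurement models are made up for illustration and are not the course's robot problem.

```python
# Toy 1-D particle filter: predict, weight by measurement likelihood, resample.
# The Gaussian motion/measurement models here are made-up for illustration.
import numpy as np

rng = np.random.default_rng(0)
N = 1000
particles = rng.uniform(0, 100, N)   # initial belief: uniform over a 100 m corridor
true_pos = 30.0

for step in range(10):
    # Motion update (prediction): move +1 m with some noise.
    true_pos += 1.0
    particles += 1.0 + rng.normal(0, 0.5, N)

    # Measurement update: noisy position reading, weight particles by likelihood.
    z = true_pos + rng.normal(0, 2.0)
    weights = np.exp(-0.5 * ((particles - z) / 2.0) ** 2)
    weights /= weights.sum()

    # Resample: draw particles in proportion to their weights.
    particles = rng.choice(particles, size=N, p=weights)

print("estimate:", particles.mean(), "truth:", true_pos)
```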
Day 59: Hosted two lightly attended events today and covered the #tech_help channel for an hour as part of Study Jam 1.0. Also spent some time debugging my meteorite Python project, due Tuesday. Stuck on the defense method; too many meteorites are hitting Earth.