1: [Bifarin] I won another academic award at Georgia Tech a few weeks ago.
2: [AI] DALL·E 2 is a new AI system that can create realistic images and art from a description in natural language (courtesy of OpenAI). You can explore the system on their webpage or watch the video, also on their webpage. Pretty wild stuff. Also, Google Docs just released “a machine learning (ML) model that comprehends document text and, when confident, generates a 1-2 sentence natural language description of the document content.” Read about it on the Google blog. It’s currently available for Google Workspace business customers.
3: [Startups] Dalton Caldwell and Michael Seibel of Y Combinator sat down to talk about common mistakes start-ups make. They mentioned the boiling-the-ocean problem: building multiple features at once without yet delivering value to a single user. And if a scientist is allowed to pontificate on startups, I will say that new startup founders on the block should start with ONE thing that is ridiculously straightforward and reasonably impactful, and that at least one of the founders should be able to create (or participate meaningfully in creating) a unit of the core product being offered. I know that is a mouthful, but anything short of that will probably not end well. Dalton and Michael also talked about the fact that getting VC money or winning a start-up competition is no validation that your startup will work. The customers are the ultimate judges.
4: [Machine Learning] An excellent thread on whether the number of estimators (trees in the forest) should depend on the number of predictors in the Random Forest machine learning algorithm. Link. (A quick empirical sketch follows the quote below.)
“During classification the subspace dimensionality is √p (rather small, p is the total number of predictors) by default, but a tree contains many nodes. During regression the subspace dimensionality is p/3 (large enough) by default, though a tree contains fewer nodes. So the optimal number of trees in a random forest depends on the number of predictors only in extreme cases.”
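To poke at this empirically, here is a minimal sketch of my own (not from the thread), assuming scikit-learn: it compares out-of-bag accuracy across a few forest sizes for synthetic datasets with different numbers of predictors. The sample size, predictor counts, and tree grid are arbitrary illustrative choices.

```python
# Sketch: does the number of trees needed depend on the number of predictors p?
# Uses out-of-bag (OOB) accuracy as a cheap proxy for generalization error.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

for n_features in (10, 100, 1000):              # few vs. many predictors
    X, y = make_classification(
        n_samples=2000,
        n_features=n_features,
        n_informative=5,                        # keep the signal fixed across runs
        n_redundant=0,
        random_state=0,
    )
    for n_trees in (50, 200, 500):              # candidate forest sizes
        rf = RandomForestClassifier(
            n_estimators=n_trees,
            max_features="sqrt",                # the sqrt(p) default for classification
            oob_score=True,
            n_jobs=-1,
            random_state=0,
        )
        rf.fit(X, y)
        print(f"p={n_features:4d}  trees={n_trees:3d}  OOB accuracy={rf.oob_score_:.3f}")
```

If the quoted answer holds, the OOB accuracy should level off at a similar number of trees whether p is 10 or 1,000, except in extreme cases.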
5: [Movies] In Thread and Links 03: Number 11 (TaL 03:11, for short), I mentioned and shared a link to Mark Wahlberg’s Father Stu. I saw the movie a couple of weeks ago, and I think it’s great. See Bishop Barron’s review. Mark also sat down with Cardinal Dolan of NY to discuss the movie.
6: [STEM] A friend told me about this PARPA thing (Private Advanced Research Project Agency). A new organizational system that prides itself on taking on impactful work that is “too researchy for startups, too engineering-heavy for academia, and too weird for governments to take on.” Here is the synoptic paper. I didn’t get enough meat from the paper, but I think I get the idea of where this is going. There is also a Private ARPA user manual here, which I didn’t read, but it should give more context.
“To give you a flavor of the types of programs that we foresee undertaking: would it be possible to unlock ‘precision chemistry’ with a molecular 3D printer? What would happen if there was an experimental platform for general purpose telerobotics? Could robotics, atmospheric control, and other mechanisms cross an efficiency threshold for vertical farming? Could much more flexible simulators unlock new kinds of science? What would it take to create general-purpose humanless factories?”
7: [Nigeria Politics] Bode George on Tinubu. Serious dirty details about the culture of Nigerian politics. (This is a must-watch for students of contemporary Nigerian politics.) Amongst other obscene political theatrics, he said, “you can only rig where you are popular.” Also see Politicking in Nigeria, by Danladi Bako; Sowore presidency (1); Sowore presidency (2). And finally, Dele Farotimi, talking some sense into everybody.
8: [Education]
9: [Cryptocurrency] Peter Thiel’s Bitcoin keynote at the Bitcoin 2022 conference. He had a Bitcoin enemy list. Enemy No. 1: Warren Buffett.
10: [Social Media] WSJ’s TikTok Brain Explained: Why Some Kids Seem Hooked on Social Video Feeds. I am not a big fan of social media; it’s an incredibly powerful technology, don’t get me wrong. But should stuff like this just be ignored?
“Many parents tell me their kids can’t sit through feature-length films anymore because to them the movies feel painfully slow. Others have observed their kids struggling to focus on homework. And reading a book? Forget about it.” …
“Emerging research suggests that watching short, fast-paced videos makes it harder for kids to sustain activities that don’t offer instant—and constant—gratification.” …
“TikTok is a dopamine machine,” said John Hutton, a pediatrician and director of the Reading & Literacy Discovery Center at Cincinnati Children’s Hospital. “If you want kids to pay attention, they need to practice paying attention.”
11: [Data science] If you are a data scientist, or just a plain researcher, at some point you might want to find the intersections of, say, seven sets. You are probably thinking of using Venn diagrams, but that would be a mistake; use an UpSet plot instead (a short sketch follows the quote below).
“Understanding relationships between sets is an important analysis task. The major challenge in this context is the combinatorial explosion of the number of set intersections if the number of sets exceeds a trivial threshold. The most common set visualization approach – Venn Diagrams – doesn’t scale beyond three or four sets. UpSet, in contrast, is well suited for the quantitative analysis of data with more than three sets.”
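For a concrete feel, here is a minimal sketch assuming the Python upsetplot package (my choice for illustration; the post itself just links to UpSet, which also exists as an R package and a web tool). The seven sets and their members are made-up placeholders.

```python
# Sketch: visualizing intersections of seven sets with an UpSet plot
# instead of a Venn diagram. Requires: pip install upsetplot matplotlib
import matplotlib.pyplot as plt
from upsetplot import from_contents, plot

# Seven placeholder sets keyed by name; in practice these would be, e.g.,
# gene lists or feature sets from different experiments. Multiples of i
# are used here just to create overlapping memberships.
contents = {
    f"set_{i}": [f"item_{j}" for j in range(i, 40, i)]
    for i in range(1, 8)
}

# from_contents() builds the membership structure upsetplot expects;
# plot() draws intersection sizes as bars over a membership matrix.
data = from_contents(contents)
plot(data, sort_by="cardinality")
plt.show()
```

Only the non-empty intersections get a bar, so even the 2^7 − 1 = 127 possible combinations of seven sets stay readable, which is exactly where a Venn diagram gives up.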
12: [Food/Biology] Ultra-Processed Junk Food Put to the Test. “The health risks of highly processed foods. What happened when ultra-processed foods were matched for calories, sugar, fat, and fiber content in the first randomized controlled trial?” See the scientific paper here.
“Hall et al. investigated 20 inpatient adults who were exposed to ultra-processed versus unprocessed diets for 14 days each, in random order. The ultra-processed diet caused increased ad libitum energy intake and weight gain despite being matched to the unprocessed diet for presented calories, sugar, fat, sodium, fiber, and macronutrients.”