# rmarcksharpdown

This is an R Markdown document…

```r
library(tidyverse)
data_frame(X = rnorm(1000)) %>%
  ggplot(aes(X)) +
  geom_histogram()
```

And this is some C# code…

```csharp
Console.WriteLine("Hello World!");
```

```
## Hello World!
```

🤩 …that the document just executed! 🤩 And here's some more C# code that talks across different Rmd code blocks…

```csharp
var greatDay = "What a great day!";
greatDay = greatDay + " I hope yours is good too! ❤️🧡💚💙💜";
Console.WriteLine(greatDay);
```

```
## What a great day! I hope yours is good too!
```

# Gratitude

I’ve been meditating lately. I started in December, and I haven’t done it every day, but I enjoy it. The topic of this morning’s meditation was gratitude. It led me through feeling grateful for different things. Something someone did for me. Something someone I don’t know did. Something from nature. Something I did. Something small. Something big. My task for the week was to deploy an A/B test of a new job recommendation algorithm.

# Fun With Random Numbers: More Random Projection

Last time we learned about a method of dimensionality reduction called random projection. We showed that with random projection, the number of dimensions required to preserve the distances between the points in a set depends only on the number of points in the set and the maximum distortion the user is willing to accept. Surprisingly, it does not depend on the original number of dimensions. The proof that random projections work is hard to understand, but the method is very simple to implement in just a few steps.
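The "few steps" really are few: draw a random Gaussian matrix, multiply, rescale. Here's a minimal sketch in Python (not from the post; the array sizes are illustrative, and the seed is fixed only for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)

n_points, d, k = 100, 10_000, 1_000    # points, original dims, target dims
X = rng.normal(size=(n_points, d))     # some high-dimensional data

# Random projection: a Gaussian matrix scaled by 1/sqrt(k)
R = rng.normal(size=(d, k)) / np.sqrt(k)
X_proj = X @ R                         # shape (n_points, k)

# Distortion on one pair of points: the projected distance
# should be close to the original distance
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(X_proj[0] - X_proj[1])
ratio = proj / orig                    # near 1 for a suitable k
```

Note there is no training and no dependence on the data itself: the projection matrix is pure noise, which is exactly what makes the method so cheap.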

# Fun With Random Numbers: Random Projection

So, there’s this bit of math called the Johnson-Lindenstrauss lemma. It makes a fairly fantastic claim. Here it is in math-speak from the original paper… Fantastic, right? What does it mean in slightly more lay speak?

## The Fantastic Claim

Let’s say you have a set of 1000 points in a 10,000-dimensional space. These points could be the brightnesses of the pixels in 100x100 grayscale images. Or maybe they’re the counts of the 10,000 most frequent terms in each document of a corpus.
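To make the claim concrete, one common statement of the JL bound (the form scikit-learn uses, not necessarily the constant from the original paper) gives the target dimension k needed for n points at distortion ε — with no reference to the original dimension at all:

```python
import math

def jl_min_dim(n_points: int, eps: float) -> int:
    """A common lower bound on the target dimension k from the
    JL lemma: k >= 4 ln(n) / (eps^2/2 - eps^3/3)."""
    return math.ceil(4 * math.log(n_points) / (eps**2 / 2 - eps**3 / 3))

# 1000 points, at most 10% distortion of pairwise distances.
# The original 10,000 dimensions never enter the formula.
k = jl_min_dim(1000, 0.10)
```

For 1000 points at 10% distortion this comes out to a few thousand dimensions — a real reduction from 10,000, and the same answer whether you started with 10,000 dimensions or 10 million.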

# Providence: Failure Is Always An Option

This post is part of a series on the Providence project at Stack Exchange. The first post can be found here. The last five blog posts have been a highlight reel of Providence’s successes. Don’t be fooled, though: the road to Providence was long and winding. Let’s balance out the highlight reel with a look at some of the bumps in the road. I said the road was long. Let’s quantify that.

# Providence: Architecture and Performance

This post is part of a series on the Providence project at Stack Exchange. The first post can be found here. We’ve talked about how we’re trying to understand our users better at Stack Exchange and seen just how big an impact it’s had on our pilot project, the Careers Job Ads. Let’s take a look at the architecture of the system.

## Hardware

This is the easy part, so let’s just get it out of the way.

# Providence: Testing and Results

This post is part of a series on the Providence project at Stack Exchange. The first post can be found here. The Providence project was motivated by our desire to better understand our users at Stack Exchange. So we think we’ve figured out what kind of developers come to our sites, and what technologies they’re using. Then we figured out a way to combine all our features into the Value Function.

#### Jason Punyon

Chaotic Good w a splash of Data. Data x2. Stack Overflow. He/him.

Principal Developer at Stack Overflow