2018: A retrospective

At the end of last year’s retrospective, I set a number of goals for myself. It feels (really) bad to look back and realize that I did not complete a single one. I think it’s important to reflect on failures and shortcomings in order to understand them and hopefully overcome them going forward.

Goal 1: Write one blog post every week

Result: 13 posts / 52 weeks

In January 2018 I began the blog series Learn TensorFlow Now which walked users through the very basics of TensorFlow. For three months I stuck to my goal of writing one blog post every week and I’m very proud of how my published posts turned out. Unfortunately during April I took on a consulting project and my posts completely halted. Once I missed a single week I basically gave up on blogging altogether. While I don’t regret taking on a consulting project, I do regret that I used it as an excuse to stop blogging.

This year I would like to start over and try once again to write one blog post per week (off to a rough start considering it’s already the end of January!). I don’t really have a new strategy other than I will resolve not to quit entirely if I miss a week.

Goal 2: Read Deep Learning by Ian Goodfellow

Result: 300 pages / 700 pages

When I first started reading this book I was very intimidated by the first few chapters covering the background mathematics of deep learning. While my linear algebra was solid, my calculus was very weak. I put the book away for three months and grinded through Khan Academy’s calculus modules. I say “grinded” because I didn’t enjoy this process at all. Every day felt like a slog and my progress felt painfully slow. Even knowing calculus would ultimately be applicable to deep learning, I struggled to stay focused and interested in the work.

When I came back to the book in the second half of 2018 I realized it was a mistake to stop reading. While the review chapters were mathematically challenging, the actual deep learning portions were much less difficult and most of the insights could be reached without worrying about the math at all. For example, I cannot prove to you that L1 regularization results in sparse weight matrices, but I am aware that such a proof exists (at least in the case of linear regression).

This year I would like to finish this book. I think it might be worth my time to try to implement some of the basic algorithms illustrated in the book without the use of PyTorch or TensorFlow, but that will remain a stretch goal.

Goal 3: Contribute to TensorFlow

Result: 1 Contribution?

In February one of my revised PRs ended up making it into TensorFlow. Since I opened it in December of the previous year I’ve only marked it as half a contribution. Other than this PR I didn’t actively seek out any other places where I could contribute to TensorFlow.

On the plus side, I recently submitted a pull request to PyTorch. It’s a small PR that helps bring the C++ API closer to the Python API. Since it’s not yet merged I guess I should only count this as half a contribution? At least that puts me at one full contribution to deep learning libraries for the year.

Goal 4: Compete in a more Challenging Kaggle competition

Result: 0 attempts

There’s not much to say here other than that I didn’t really seek out or attempt any Kaggle competitions. In the later half of 2018 I began to focus on reinforcement learning so I was interested in other competitive environments such as OpenAI Gym and Halite.io. Unfortunately my RL agents were not very competitive when it came to Halite, but I’m hoping this year I will improve my RL knowledge and be able to submit some results to other competitions.

Goal 5: Work on HackerRank problems to strengthen my interview skills

Result: 3 months / 12 months

While I started off strong and completed lots of problems, I tapered off around the same time I stopped blogging. I don’t feel too bad about stopping these exercises (I had started working, after all), but I am a little sad because it didn’t really feel like I improved at solving questions. This remains an area I want to improve in but I don’t think I’m going to make it an explicit goal in 2019.

Goal 6: Get a job related to ML/AI

Result: 0 jobs

I did not receive (or apply to) any jobs in ML/AI during 2018. After focusing on consulting for most of the year I didn’t feel like I could demonstrate that I was proficient enough to be hired into the field. My understanding is that an end-to-end personal project is probably the best way to demonstrate true proficiency, and it’s something I want to pursue during 2019.

 

Goals for 2019

While I’m obviously not thrilled with my progress in 2018 I try not to consider failure a terminal state. I’m going to regroup and try to be more disciplined and consistent when it comes to my work this year. One activity that I’ve found both fun and productive is streaming on Twitch. I spent about 100 hours streaming and had a pretty consistent schedule during November and December.

  • Stream programming on Twitch during weekdays
  • Write one blog post every week
  • Finish reading Deep Learning by Ian Goodfellow

LTFN 10: CIFAR-10

Part of the series Learn TensorFlow Now

Over the last nine posts, we built a reasonably effective digit classifier. Now we’re ready to enter the big leagues and try out our VGGNet on a more challenging image recognition task. CIFAR-10 (Canadian Institute For Advanced Research) is a collection of 60,000 cropped images of planes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks.

  • 50,000 images in the training set
  • 10,000 images in the test set
  • Size: 32×32 (1024 pixels)
  • 3 Channels (RGB)
  • 10 output classes
Sample images from CIFAR-10

CIFAR-10 is a natural next-step due to its similarities to the MNIST dataset. For starters, we have the same number of training images, testing images and output classes. CIFAR-10’s images are of size 32x32 which is convenient as we were padding MNIST’s images to achieve the same size. These similarities make it easy to use our previous VGGNet architecture to classify these images.

Despite the similarities, there are some differences that make CIFAR-10 a more challenging image recognition problem. For starters, our images are RGB and therefore have 3 channels. Detecting lines might not be so easy when they can be drawn in any color. Another challenge is that our images are now 2-D depictions of 3-D objects. In the above image, the center two images represent the “truck” class, but are shown at different angles. This means our network has to learn enough about “trucks” to recognize them at angles it has never seen before.

Loading CIFAR-10

The CIFAR-10 dataset is hosted at: https://www.cs.toronto.edu/~kriz/cifar.html

In order to make it easier to work with, I’ve prepared a small script that downloads, shuffles and caches the dataset locally. You can find it on GitHub here.

After saving this file locally, we can use it to prepare our datasets:


import tensorflow as tf
import numpy as np
import cifar_data_loader
(train_images, train_labels, test_images, test_labels, mean_image) = cifar_data_loader.load_data()
print(train_images.shape)
print(train_labels.shape)
print(test_images.shape)
print(test_labels.shape)
print(mean_image.shape)


Running this locally produces the following output:

Attempting to download: https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
0%....5%....10%....15%....20%....25%....30%....35%....40%....45%....50%....55%....60%....65%....70%....75%....80%....85%....90%....95%....100%
Download Complete!
(50000, 32, 32, 3)
(50000,)
(10000, 32, 32, 3)
(10000,)
(32, 32, 3)

The above output shows that we’ve downloaded the dataset and created a training set of size 50,000 and a test set of size 10,000. Note: Unlike MNIST, these labels are not 1-hot encoded (otherwise they’d be of size 50,000x10 and 10,000x10 respectively). We have to account for this difference in shape when we build VGGNet for this dataset.

Let’s start by adjusting input and labels to fit the CIFAR-10 dataset:


input = tf.placeholder(tf.float32, shape=(None, 32, 32, 3)) #Input is of size 32x32x3 (RGB images)
labels = tf.placeholder(tf.int32, shape=(None), name="labels") #Labels are single integers (tf.int32)


Next we have to adjust the first layer of our network. Recall from the post on convolutions that each convolutional filter must match the depth of the layer against which it is convolved. Previously we had defined our convolutional filter to be of shape [3, 3, 1, 64]. That is, 64 3x3 convolutional filters, each with a depth of 1, matching the depth of our grayscale input images. Now that we’re using RGB images, we must define it to be of shape [3, 3, 3, 64]:


layer1_weights = tf.get_variable("layer1_weights", [3, 3, 3, 64], initializer=tf.contrib.layers.variance_scaling_initializer())


Another change we must make is the calculation of cost. Previously we were using tf.nn.softmax_cross_entropy_with_logits() which is suitable only when our labels are 1-hot encoded. When we represent the labels as single integers, we can instead use tf.nn.sparse_softmax_cross_entropy_with_logits(). It is otherwise identical to our original softmax cross entropy function.


cost = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=labels))


Finally, we must also modify our calculation of correct_prediction (used to calculate accuracy) to account for the change in label shape. We no longer have to take the tf.argmax of our labels because they’re already represented as a single number:


correct_prediction = tf.equal(labels, tf.argmax(predictions, 1, output_type=tf.int32))


Note: We have to specify output_type=tf.int32 because tf.argmax() returns tf.int64 by default.

With that, we’ve got everything we need to test our VGGNet on CIFAR-10. The complete code is presented at the end of this post.

After running our network for 10,000 steps, we’re greeted with the following output:

Cost: 470.996
Accuracy: 9.00000035763 %
Cost: 2.00049
Accuracy: 25.0 %
...
Cost: 0.553867
Accuracy: 82.9999983311 %
Cost: 0.393799
Accuracy: 87.0000004768 %
Test Cost: 0.895597087741
Test accuracy: 70.9400003552 %

Our final test accuracy appears to be approximately 71%, which isn’t too great. On one hand this is disappointing as it means our VGGNet architecture (or the method in which we’re training it) doesn’t generalize very well. On the other hand, CIFAR-10 presents us with new opportunities to try out new neural network components and architectures. In the next few posts we’ll explore some of these approaches to build a neural network that can handle the more complex CIFAR-10 dataset.

If you look carefully at the previous results you may have noticed something interesting. For the first time, our test accuracy (71%) is much lower than our training accuracy (~82-87%). This is a problem we’ll discuss in future posts on bias and variance in deep learning.

Complete Code


import tensorflow as tf
import numpy as np
import cifar_data_loader
(train_images, train_labels, test_images, test_labels, mean_image) = cifar_data_loader.load_data()
print(train_images.shape)
print(train_labels.shape)
print(test_images.shape)
print(test_labels.shape)
print(mean_image.shape)
graph = tf.Graph()
with graph.as_default():
    input = tf.placeholder(tf.float32, shape=(None, 32, 32, 3))
    labels = tf.placeholder(tf.int32, shape=(None), name="labels")
    layer1_weights = tf.get_variable("layer1_weights", [3, 3, 3, 64], initializer=tf.contrib.layers.variance_scaling_initializer())
    layer1_bias = tf.Variable(tf.zeros([64]))
    layer1_conv = tf.nn.conv2d(input, filter=layer1_weights, strides=[1,1,1,1], padding='SAME')
    layer1_out = tf.nn.relu(layer1_conv + layer1_bias)
    layer2_weights = tf.get_variable("layer2_weights", [3, 3, 64, 64], initializer=tf.contrib.layers.variance_scaling_initializer())
    layer2_bias = tf.Variable(tf.zeros([64]))
    layer2_conv = tf.nn.conv2d(layer1_out, filter=layer2_weights, strides=[1,1,1,1], padding='SAME')
    layer2_out = tf.nn.relu(layer2_conv + layer2_bias)
    pool1 = tf.nn.max_pool(layer2_out, ksize=[1,2,2,1], strides=[1,2,2,1], padding='VALID')
    layer3_weights = tf.get_variable("layer3_weights", [3, 3, 64, 128], initializer=tf.contrib.layers.variance_scaling_initializer())
    layer3_bias = tf.Variable(tf.zeros([128]))
    layer3_conv = tf.nn.conv2d(pool1, filter=layer3_weights, strides=[1,1,1,1], padding='SAME')
    layer3_out = tf.nn.relu(layer3_conv + layer3_bias)
    layer4_weights = tf.get_variable("layer4_weights", [3, 3, 128, 128], initializer=tf.contrib.layers.variance_scaling_initializer())
    layer4_bias = tf.Variable(tf.zeros([128]))
    layer4_conv = tf.nn.conv2d(layer3_out, filter=layer4_weights, strides=[1,1,1,1], padding='SAME')
    layer4_out = tf.nn.relu(layer4_conv + layer4_bias)
    pool2 = tf.nn.max_pool(layer4_out, ksize=[1,2,2,1], strides=[1,2,2,1], padding='VALID')
    layer5_weights = tf.get_variable("layer5_weights", [3, 3, 128, 256], initializer=tf.contrib.layers.variance_scaling_initializer())
    layer5_bias = tf.Variable(tf.zeros([256]))
    layer5_conv = tf.nn.conv2d(pool2, filter=layer5_weights, strides=[1,1,1,1], padding='SAME')
    layer5_out = tf.nn.relu(layer5_conv + layer5_bias)
    layer6_weights = tf.get_variable("layer6_weights", [3, 3, 256, 256], initializer=tf.contrib.layers.variance_scaling_initializer())
    layer6_bias = tf.Variable(tf.zeros([256]))
    layer6_conv = tf.nn.conv2d(layer5_out, filter=layer6_weights, strides=[1,1,1,1], padding='SAME')
    layer6_out = tf.nn.relu(layer6_conv + layer6_bias)
    layer7_weights = tf.get_variable("layer7_weights", [3, 3, 256, 256], initializer=tf.contrib.layers.variance_scaling_initializer())
    layer7_bias = tf.Variable(tf.zeros([256]))
    layer7_conv = tf.nn.conv2d(layer6_out, filter=layer7_weights, strides=[1,1,1,1], padding='SAME')
    layer7_out = tf.nn.relu(layer7_conv + layer7_bias)
    pool3 = tf.nn.max_pool(layer7_out, ksize=[1,2,2,1], strides=[1,2,2,1], padding='VALID')
    layer8_weights = tf.get_variable("layer8_weights", [3, 3, 256, 512], initializer=tf.contrib.layers.variance_scaling_initializer())
    layer8_bias = tf.Variable(tf.zeros([512]))
    layer8_conv = tf.nn.conv2d(pool3, filter=layer8_weights, strides=[1,1,1,1], padding='SAME')
    layer8_out = tf.nn.relu(layer8_conv + layer8_bias)
    layer9_weights = tf.get_variable("layer9_weights", [3, 3, 512, 512], initializer=tf.contrib.layers.variance_scaling_initializer())
    layer9_bias = tf.Variable(tf.zeros([512]))
    layer9_conv = tf.nn.conv2d(layer8_out, filter=layer9_weights, strides=[1,1,1,1], padding='SAME')
    layer9_out = tf.nn.relu(layer9_conv + layer9_bias)
    layer10_weights = tf.get_variable("layer10_weights", [3, 3, 512, 512], initializer=tf.contrib.layers.variance_scaling_initializer())
    layer10_bias = tf.Variable(tf.zeros([512]))
    layer10_conv = tf.nn.conv2d(layer9_out, filter=layer10_weights, strides=[1,1,1,1], padding='SAME')
    layer10_out = tf.nn.relu(layer10_conv + layer10_bias)
    pool4 = tf.nn.max_pool(layer10_out, ksize=[1,2,2,1], strides=[1,2,2,1], padding='VALID')
    shape = pool4.shape.as_list()
    newShape = shape[1] * shape[2] * shape[3]
    reshaped_pool4 = tf.reshape(pool4, [-1, newShape])
    fc1_weights = tf.get_variable("layer11_weights", [newShape, 4096], initializer=tf.contrib.layers.variance_scaling_initializer())
    fc1_bias = tf.Variable(tf.zeros([4096]))
    fc1_out = tf.nn.relu(tf.matmul(reshaped_pool4, fc1_weights) + fc1_bias)
    fc2_weights = tf.get_variable("layer12_weights", [4096, 10], initializer=tf.contrib.layers.xavier_initializer())
    fc2_bias = tf.Variable(tf.zeros([10]))
    logits = tf.matmul(fc1_out, fc2_weights) + fc2_bias
    cost = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(logits=logits, labels=labels))
    learning_rate = 0.001
    optimizer = tf.train.AdamOptimizer(learning_rate).minimize(cost)
    #Add a few nodes to calculate accuracy and optionally retrieve predictions
    predictions = tf.nn.softmax(logits)
    correct_prediction = tf.equal(labels, tf.argmax(predictions, 1, output_type=tf.int32))
    accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session(graph=graph) as session:
    tf.global_variables_initializer().run()
    num_steps = 10000
    batch_size = 100
    for step in range(num_steps):
        offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
        batch_images = train_images[offset:(offset + batch_size)]
        batch_labels = train_labels[offset:(offset + batch_size)]
        feed_dict = {input: batch_images, labels: batch_labels}
        _, c, acc = session.run([optimizer, cost, accuracy], feed_dict=feed_dict)
        if step % 100 == 0:
            print("Cost: ", c)
            print("Accuracy: ", acc * 100.0, "%")
    #Test
    num_test_batches = int(len(test_images) / 100)
    total_accuracy = 0
    total_cost = 0
    for step in range(num_test_batches):
        offset = (step * batch_size) % (train_labels.shape[0] - batch_size)
        batch_images = test_images[offset:(offset + batch_size)]
        batch_labels = test_labels[offset:(offset + batch_size)]
        feed_dict = {input: batch_images, labels: batch_labels}
        c, acc = session.run([cost, accuracy], feed_dict=feed_dict)
        total_cost = total_cost + c
        total_accuracy = total_accuracy + acc
    print("Test Cost: ", total_cost / num_test_batches)
    print("Test accuracy: ", total_accuracy * 100.0 / num_test_batches, "%")


2017: A Retrospective

2017 saw a lot of change for me. I left Microsoft, returned to Toronto and shifted my focus from developer tools to machine learning. Following in patio11’s footsteps, I wanted to take a moment to reflect on the year and clarify my thoughts in written form.

Microsoft

July 2016 – July 2017

In July 2016, my company Code Connect joined Microsoft with the stated goal of integrating our Alive extension into Visual Studio 2017. Unfortunately our visas were delayed and we weren’t able to begin work until early September. It didn’t make sense to rush Alive into Visual Studio 2017 and risk introducing stability problems so we held off for a few months.

In the meantime, we conducted a number of experiments to try to quantify demand for Alive so we could compare it to other potential features. When the experiments concluded, the data didn’t demonstrate an immediate need for a product like Alive. Alive was put on ice and we worked on features that were immediately pressing to Visual Studio’s success. At the time this meant focusing on accessibility, a top-down directive from Satya Nadella himself.

After Alive was sunset, I did some reflection on where I wanted to take my career and what I wanted to focus on. For a long time I’ve worried that the knowledge I’ve accumulated writing Visual Studio extensions is not immediately applicable outside of Visual Studio. Despite working on developer tools for almost five years, I would be on near-equal footing with a junior developer when it comes to developing extensions for VS Code or Eclipse. Much of my expertise comes in the form of random bits of trivia about Visual Studio. The problems I was faced with were challenging, but not very interesting.

The more I complained, the more I realized something had to change. In July 2017 I left Microsoft and applied for a position at a company for which I’d previously been an intern. However I didn’t have the C++ experience required for the roles they were staffing so we concluded it probably wouldn’t be a great fit.

While at Microsoft I began to learn about machine learning in my free time. In 2015 I’d watched in awe from the sidelines as AlphaGo crushed its human counterparts and I wanted in. I decided that now was the time for me to focus exclusively on machine learning and deep learning in particular.

Would I do it again?

Yup. Alive’s best chance for the long-term was a home at Microsoft. We had a modest number of paying customers but required an order of magnitude more in order to grow and continue full-time work on Alive. We had grand dreams of bringing Alive to other languages and we wouldn’t have been able to do so without hiring more developers. Microsoft came to us at the perfect time and gave Alive one last shot at success. I’m sad it didn’t work out, but I’m eternally grateful to everyone involved in getting us to Microsoft and to the Visual Studio editor team for being my home for the year.

Machine Learning

August 2017 – Ongoing

The hardest thing about starting something brand new is figuring out where to start. I settled on a mix of linear algebra, Coursera, Kaggle and open-source work.

DeepLearning.ai

In September Andrew Ng launched a new deep learning specialization that covered the following courses:

  • Neural Networks and Deep Learning
  • Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization
  • Structuring Machine Learning Projects
  • Convolutional Neural Networks
  • Sequence Models

I’ve completed the first four and am waiting for the final course on Sequence Models to launch in the coming month. These courses paired nicely with Andrej Karpathy’s CS231n videos. This specialization will be my go-to answer for the question “I want to learn about neural networks, where do I start?”

Kaggle

I wanted to apply the lessons I learned in Andrew’s videos to my own neural networks. I set out to compete in the introductory Kaggle competition “MNIST Digit Recognizer”. As I learned more and more about neural networks I would apply these lessons to my network and watch the score improve. Being told “batch normalization will improve your results” is one thing, but watching your score tick higher is something else altogether. As of this writing my best submission puts me in the top 25% of submissions with 99.171% accuracy.

My top submission.

 

TensorFlow

I set a personal goal to contribute at least one pull request to TensorFlow so as to better understand the tool I was using. Coming from a .NET Desktop background, there was a bit of a learning curve when it came to tools like bazel and Docker. However, like most things in software development these tools just require a bit of time and focused energy to understand.

I’ve seen mixed success with my contributions. My first pull request was a correction to TensorFlow’s implementation of the Inception network. The reviewers agreed that the initial model was incorrect, but were hesitant to change it due to backward-compatibility concerns.

My second pull request improved support for various image operations in TensorFlow. In short, it made it easier to augment multiple images at once (e.g. randomly flipping images left-to-right). Unfortunately, I introduced some performance regressions and my changes had to be reverted. 😢

My third pull request is a re-implementation of the last, while avoiding the performance regressions. It remains open, but I’m confident that after some work it will be accepted.

On the whole, I’m pleased with the progress I made with TensorFlow. The API surface is massive and I have a lot to learn, but I’m making real, measurable progress. I’ll continue to contribute back code where appropriate.

ICLR 2018 Reproducibility Challenge

At the end of each course, Andrew Ng took time to interview famous names in machine learning such as Geoffrey Hinton and Ian Goodfellow. One piece of advice they shared for newcomers was to “reproduce papers”. Around the same time, I stumbled upon the ICLR 2018 Reproducibility Challenge where students are challenged to reproduce the results of papers submitted to the ICLR conference.

I signed up and chose the paper “Super-Convergence: Very Fast Training of Residual Networks Using Large Learning Rates”. This paper proposed a method for training certain neural networks an order of magnitude faster than previous methods allowed. Their approach involved varying the learning rate linearly between (what are typically considered) large values throughout training.

This was the hardest portion of my work thus far and forced me to delve into the details of TensorFlow. In December I made my report available in the comments of their paper’s submission. The TensorFlow portion of my work is available on GitHub at: http://github.com/JoshVarty/ReproducingSuperconvergence

Blogging

My only regret during 2017 was that I published zero blog posts. As such, this was the first year that traffic to my blog decreased.

2017 saw a modest decline in blog traffic

 

I often tell others to start blogging, and this year my actions didn’t match my words. I attribute my poor track-record to one-part laziness and one-part lack of confidence. It’s surprisingly difficult to work up the courage to write about a subject when you’re brand new to it.

Goals for 2018

  • Follow Jeff Atwood’s advice for bloggers and stick to a schedule for blogging. I want to buckle down and write one blog post a week in 2018
  • Read Ian Goodfellow’s Deep Learning Book
  • Contribute to TensorFlow
  • Compete in a more challenging Kaggle competition
  • Work on HackerRank problems to strengthen my interview skills
  • Get a job related to ML/AI (preferably some kind of research role)

 

 

EnC Part 3 – The CLR

In the last post, we looked at using Roslyn to generate deltas between two compilations. Today we’ll take a look at how we can apply these deltas to a running process.

The CLR API

If you dig through Microsoft’s .NET Reference source occasionally you’ll come across extern methods like FastAllocateString() decorated with a special attribute: [MethodImplAttribute(MethodImplOptions.InternalCall)]. These are entry points to the CLR that can be called from managed code. Calling into the CLR can be done for a number of reasons. In the case of FastAllocateString it’s to implement certain functionality in native code for performance (in this case without even the overhead of P/Invoke). Other entry points are exposed to trigger CLR behavior like garbage collection or to apply deltas to a running process.
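
For reference, here is roughly what such a declaration looks like; I’m quoting it from memory of the reference source (the real one lives on System.String), so treat the exact modifiers as an approximation:

// An FCall entry point: there is no managed body because the implementation
// lives inside the CLR itself. The attribute tells the runtime to wire the
// call directly to its internal native implementation.
[MethodImplAttribute(MethodImplOptions.InternalCall)]
internal static extern string FastAllocateString(int length);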

When I started this project I wasn’t even aware the CLR had an API. Fortunately Microsoft has recently released internal documentation that explains much of the CLR’s behavior including these APIs. Mscorlib and Calling Into the Runtime documents the differences between FCall, QCall and P/Invoke as entrypoints to the CLR from managed code.

Managing the many methods, classes and interfaces is a huge pain and too much work to do manually when starting out. Luckily Microsoft has released a managed wrapper that makes a lot of this stuff easier to work with. The Managed Debug Sample (mdbg) has everything we’ll need to attach to a process and apply changes to it.

The sample has a few extra projects. For our purposes we’ll need:

  • corapi – The managed API we’ll interact with directly
  • raw – Set of interfaces and COMImports over the ICorDebug API
  • NativeDebugWrappers – Functionality for low level Windows debugging

Game Plan

At a high level, our approach is going to be the following:

  1. Create an instance of CorDebugger, a debugger we can use to create and attach to other processes.
  2. Start a remote process
  3. Intercept loading of modules and mark them for Edit and Continue
  4. Apply deltas

CorDebugger

Creating an instance of the debugger is fairly involved. We first have to get an instance of the CLR host based on the version of the runtime we’re interested in (in our case anything after v4.0 will work). Working with the managed API is still awkward: certain types are created based on GUIDs that seem to be undocumented outside of sample code. Nonetheless, the following code creates an instance of a managed debugger we can use.

In the following we get a list of runtimes available from the currently running process. I can’t offer insight into whether this is “good” or “bad” but it’s something to be aware of.


private static CorDebugger GetDebugger()
{
    Guid classId = new Guid("9280188D-0E8E-4867-B30C-7FA83884E8DE");
    Guid interfaceId = new Guid("D332DB9E-B9B3-4125-8207-A14884F53216");
    dynamic rawMetaHost;
    Microsoft.Samples.Debugging.CorDebug.NativeMethods.CLRCreateInstance(ref classId, ref interfaceId, out rawMetaHost);
    ICLRMetaHost metaHost = (ICLRMetaHost)rawMetaHost;

    var currentProcess = Process.GetCurrentProcess();
    var runtime_v40 = GetLoadedRuntimeByVersion(metaHost, currentProcess.Id, "v4.0");

    var debuggerClassId = new Guid("DF8395B5-A4BA-450B-A77C-A9A47762C520");
    var debuggerInterfaceId = new Guid("3D6F5F61-7538-11D3-8D5B-00104B35E7EF");

    //Get a debugger for this version of the runtime.
    Object res = runtime_v40.m_runtimeInfo.GetInterface(ref debuggerClassId, ref debuggerInterfaceId);
    ICorDebug debugger = (ICorDebug)res;

    //We create CorDebugger that wraps the ICorDebug stuff making it easier to use
    var corDebugger = new CorDebugger(debugger);
    return corDebugger;
}

public static CLRRuntimeInfo GetLoadedRuntimeByVersion(ICLRMetaHost metaHost, Int32 processId, string version)
{
    IEnumerable<CLRRuntimeInfo> runtimes = EnumerateLoadedRuntimes(metaHost, processId);
    foreach (CLRRuntimeInfo rti in runtimes)
    {
        //Search through all loaded runtimes for one that starts with v4.0.
        if (rti.GetVersionString().StartsWith(version, StringComparison.OrdinalIgnoreCase))
        {
            return rti;
        }
    }
    return null;
}

public static IEnumerable<CLRRuntimeInfo> EnumerateLoadedRuntimes(ICLRMetaHost metaHost, Int32 processId)
{
    List<CLRRuntimeInfo> runtimes = new List<CLRRuntimeInfo>();
    IEnumUnknown enumRuntimes;

    //We get a handle for the process and then get all the runtimes available from it.
    using (ProcessSafeHandle hProcess = NativeMethods.OpenProcess((int)(NativeMethods.ProcessAccessOptions.ProcessVMRead |
                                                                        NativeMethods.ProcessAccessOptions.ProcessQueryInformation |
                                                                        NativeMethods.ProcessAccessOptions.ProcessDupHandle |
                                                                        NativeMethods.ProcessAccessOptions.Synchronize),
                                                                  false, // inherit handle
                                                                  processId))
    {
        if (hProcess.IsInvalid)
        {
            throw new System.ComponentModel.Win32Exception(Marshal.GetLastWin32Error());
        }
        enumRuntimes = metaHost.EnumerateLoadedRuntimes(hProcess);
    }

    // Since we're only getting one at a time, we can pass NULL for count.
    // S_OK also means we got the single element we asked for.
    for (object oIUnknown; enumRuntimes.Next(1, out oIUnknown, IntPtr.Zero) == 0; /* empty */)
    {
        runtimes.Add(new CLRRuntimeInfo(oIUnknown));
    }

    return runtimes;
}


Starting the process

Once we’ve got a hold of our debugger, we can use it to start a process. While working on this I learned that we (in the .NET world) have been shielded from some of the peculiarities of creating a process on Windows. These peculiarities start to bleed through when creating processes with our custom debugger.

For example, if we want to send the argument 123456 to our new process, it turns out we have to send the process’ filename as the first argument as well. So the call to ICorDebug::CreateProcess(string applicationName, string commandLine) ends up looking something like:


var applicationName = "myProcess.exe";
var commandLineArgs = "myProcess.exe 123456"; //Note: Repeat application name in arguments
debugger.CreateProcess(applicationName, commandLineArgs, ... ); //Ignoring other arguments for simplicity


For more on this Mike Stall has a post on Conventions for passing the arguments to a process.

We also have to manually pass process flags when creating our process. These flags dictate various properties for our new process (Should a new window be created? Should we debug child processes of this process? etc.). Below we start a process, assuming that the application is in the current directory.


private static CorProcess StartProcess(CorDebugger debugger, string programName)
{
    var currentDirectory = Directory.GetCurrentDirectory();
    //const CREATE_NO_WINDOW = 0x08000000 Use this to create process without a console
    var corProcess = debugger.CreateProcess(programName, "", currentDirectory, (int)CreateProcessFlags.CREATE_NEW_CONSOLE);
    corProcess.Continue(outOfBand: false);
    return corProcess;
}


Mark Modules for Edit and Continue

By default the CLR doesn’t expect that EnC will be enabled. In order to enable it, we’ll have to manually set JIT flags on each module we’re interested in. CorDebug exposes an event that signals when a module has been loaded, so we’ll use this to control the flags.

A sample event handler for module loading might look like:


private static void CorProcess_OnModuleLoad(object sender, CorModuleEventArgs e)
{
    var module = e.Module;
    if (!module.Name.Contains("myProcess.exe"))
    {
        return;
    }

    var compilerFlags = module.JITCompilerFlags;
    module.JITCompilerFlags = CorDebugJITCompilerFlags.CORDEBUG_JIT_ENABLE_ENC;
}

Notice in the above that we’re only setting the flag for the module we’re interested in. If we try to set the JIT flags for all modules we’ll run into exceptions when working with NGen-ed modules. The exception is a little cryptic and complains about “Zap Modules” but this turns out just to be the internal name for NGen modules.

Applying the Deltas

Finally. After three blog posts we’ve arrived at the point: Actually manipulating the running process.

In truth, we don’t apply our changes directly to the process, but to an individual module within it. So our first task is to find the individual module we want to change. We can search through all AppDomains, assemblies and modules to find the module with the correct name.

Once we find the module, we need to request its metadata. This turns out to be a weird implementation detail: the CLR assumes you can’t possibly want to apply changes unless you’ve requested this info previously. We put this all together into the following:


//See part two for how to generate these two
byte[] metadataBytes = ...;
byte[] ilBytes = ...;
//Find module by name
var appDomain = corProcess.AppDomains.Cast<CorAppDomain>().Single();
var assembly = appDomain.Assemblies.Cast<CorAssembly>().Where(n => n.Name.Contains("MyProgram")).Single();
var module = assembly.Modules.Cast<CorModule>().Single();
//I found a bug in the ICorDebug API. Apparently the API assumes that you couldn't possibly have a change to apply
//unless you had first fetched the metadata for this module. Perhaps reasonable in the overall scenario, but
//its certainly not OK to simply throw an AV exception if it hadn't happened yet.
//
//In any case, fetching the metadata is a thankfully simple workaround
object import = module.GetMetaDataInterface(typeof(IMetadataImport).GUID);
corProcess.Stop(1);
module.ApplyChanges(metadataBytes, ilBytes);
corProcess.Continue(outOfBand: false);


Remapping

I should at least touch on one more aspect of EnC I’ve glossed over thus far: remapping. If you are changing a method that has currently active statements, you will be given an opportunity to remap the current “Instruction Pointer” based on line number. It’s up to you to decide on which line execution should resume. The CorDebugger exposes OnFunctionRemapOpportunity and OnFunctionRemapComplete as events that allow you to guide remapping.

Here’s a sample remapping event handler:


private static void CorProcess_OnFunctionRemapOpportunity(object sender, CorFunctionRemapOpportunityEventArgs e)
{
    //A remap opportunity is where the runtime can hijack the thread IP from the old version of the code and
    //put it in the new version of the code. However the runtime has no idea how the old IL relates to the new
    //IL, so it needs the debugger to tell it which offset in the updated IL is the semantic equivalent of the
    //old IL offset the IP is at right now.
    Console.WriteLine("The debuggee has hit a remap opportunity at: " + e.OldFunction + ":" + e.OldILOffset);

    //I have no idea what this new IL looks like either, so let's resume at the same IL offset in the updated code
    int newILOffset = e.OldILOffset;

    var canSetIP = e.Thread.ActiveFrame.CanSetIP(newILOffset);
    Console.WriteLine("Can set IP to: " + newILOffset + " : " + canSetIP);

    e.Thread.ActiveFrame.RemapFunction(newILOffset);
    Console.WriteLine("Continuing the debuggee in the updated IL at IL offset: " + newILOffset);
}


We’ve now got all the pieces necessary to manipulate a running process and a good base to build off of. Complete code for today’s blog post can be found here on GitHub. Leave any questions in the comments and I’ll do my best to answer them or direct you to someone who can at Microsoft.

Edit and Continue Part 2 – Roslyn

Our first task is to coerce Roslyn to emit metadata and IL deltas between two compilations. I say coerce because we’ll have to do quite a bit of work to get things working. The Compilation.EmitDifference() API is marked as public, but I’m fairly sure it’s yet to be actually used by the public. Getting everything to work requires reflection and manual copying of Roslyn code that doesn’t ship via NuGet.

The first order of business is to figure out what it takes to call Compilation.EmitDifference() in the first place. What parameters are we expected to provide? The signature:


public EmitDifferenceResult EmitDifference(
    EmitBaseline baseline,                              //Input: Information about the baseline compilation
    IEnumerable<SemanticEdit> edits,                    //Input: A collection of edits made to the program
    Stream metadataStream,                              //Output: Contains the metadata deltas
    Stream ilStream,                                    //Output: Contains the IL deltas
    Stream pdbStream,                                   //Output: Contains the .pdb deltas
    ICollection<MethodDefinitionHandle> updatedMethods) //Output: Collection of the methods that changed


So based on the above, the two input arguments that we need to worry about are EmitBasline and IEnumerable<SemanticEdit>. We’ll approach these one at a time.

EmitBaseline

An EmitBaseline represents a module created from a previous compilation. Modules live inside of assemblies and for our purposes it’s safe to assume that every module relates one-to-one with an assembly. (In reality multi-module assemblies can exist, but neither Visual Studio nor MSBuild support their creation). For more see this StackOverflow question.

We’ll look at the EmitBaseline as representing an assembly created from a previous compilation. We want to create a baseline to represent the initial compiled assembly before any changes are made to it. Roslyn can compare this baseline to new compilations we create.

A baseline can be created via EmitBaseline.CreateInitialBaseline():


public static EmitBaseline CreateInitialBaseline(
    ModuleMetadata module,
    Func<MethodDefinitionHandle, EditAndContinueMethodDebugInformation> debugInformationProvider)

Now we’ve got two more problems: ModuleMetadata and a function that maps between MethodDefinitionHandle and EditAndContinueMethodDebugInformation.

ModuleMetadata simply represents summary information about our module/assembly. Thankfully we can create it easily by passing our initial assembly to either ModuleMetadata.CreateFromFile (for assemblies on disk) or ModuleMetadata.CreateFromStream (for assemblies in memory).

Func<MethodDefinitionHandle, EditAndContinueMethodDebugInformation> proves much harder to work with. This function maps between methods and various debug information including a method’s local variable slots, lambdas and closures. This information can be generated by reading .pdb symbol files. Unfortunately there’s no public API for generating this function. What’s worse is that we’ll have to use test APIs that don’t even ship via NuGet so even Reflection is out of the question.

Instead we’ll have to piece together bits of code from Roslyn’s test utilities. Ultimately this requires that we copy code from the following files:

We’ll also need to include two NuGet packages:

It’s a bit of a pain that we need to bring so much of Roslyn with us just for the sake of one file. It’s sort of like working with a ball of yarn; you pull on one string and the whole thing comes with it.

The SymReaderFactory coupled with the DiaSymReader packages can interpret debug information from Microsoft’s PDB format. Once we’ve copied these files to our project we can use the SymReaderFactory to create a debug information provider by feeding the PDB stream to SymReaderFactory.CreateReader().

IEnumerable<SemanticEdit>

SemanticEdits describe the differences between compilations at the symbol level. For example, modifying a method will introduce a SemanticEdit for the corresponding IMethodSymbol, marking it as updated. Roslyn will end up converting these SemanticEdits into proper IL and metadata deltas.
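
If you already know exactly which symbol changed, constructing a single edit by hand is straightforward. Here’s a minimal, hedged sketch; the type “C” and method “F” are placeholders, and I’m assuming you already hold the old and new Compilation objects (oldCompilation and newCompilation below):

//Find the old and new symbols for the method that was edited
var oldMethod = (IMethodSymbol)oldCompilation.GetTypeByMetadataName("C").GetMembers("F").Single();
var newMethod = (IMethodSymbol)newCompilation.GetTypeByMetadataName("C").GetMembers("F").Single();

//An "Update" edit tells Roslyn the method body changed but the signature did not
var edits = new[] { new SemanticEdit(SemanticEditKind.Update, oldMethod, newMethod, preserveLocalVariables: true) };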

It turns out SemanticEdit is a public class. The problem is that they’re difficult to generate properly. We have to diff Documents across different versions of a Solution which means we have to take into account changes in syntax, trivia and semantics. We also have to detect invalid changes which aren’t (to my knowledge) officially or completely documented anywhere. In this Roslyn issue, I propose three potential approaches to generating the edits, but we’ll only take a look at the one I’ve implemented myself: using the internal CSharpEditAndContinueAnalyzer.

The CSharpEditAndContinueAnalyzer and its base class method AnalyzeDocumentAsync will generate a DocumentAnalysisResult with our edits along with some supplementary information about the changes. Were there errors? Were the changes substantial? Were there special areas of interest such as catch or finally blocks?

Since these classes are internal we’ll have to use Reflection to get at them. We’ll also need to keep a copy of the Solution around with which we used to generate our EmitBaseline. I’ve put all of the code together into a complete sample. The reflection based approach for CSharpEditAndContinueAnalyzer is demonstrated in the GetSemanticEdits method below.


static void FullWal()
{
    string sourceText_1 = @"
using System;
using System.Threading.Tasks;
class C
{
    public static void F() { Console.WriteLine(""Original Text""); }
    public static void Main() { F(); Console.ReadLine(); }
}";
    string sourceText_2 = @"
using System;
using System.Threading.Tasks;
class C
{
    public static void F() { Console.WriteLine(123456789); }
    public static void Main() { F(); Console.ReadLine(); }
}";
    string programName = "MyProgram.exe";
    string pdbName = "MyProgram.pdb";

    //Get solution
    Solution solution = createSolution(sourceText_1);

    //Get compilation
    var compilation = solution.Projects.Single().GetCompilationAsync().Result;

    //Emit .exe and .pdb to disk
    var emitResult = compilation.Emit(programName, pdbName);
    if (!emitResult.Success)
    {
        throw new InvalidOperationException("Errors in compilation: " + emitResult.Diagnostics.Count());
    }

    //Build the EmitBaseline
    var metadataModule = ModuleMetadata.CreateFromFile(programName);
    var fs = new FileStream(pdbName, FileMode.Open);
    var emitBaseline = EmitBaseline.CreateInitialBaseline(metadataModule, SymReaderFactory.CreateReader(fs).GetEncMethodDebugInfo);

    //Take the solution, change it and compile it
    var document = solution.Projects.Single().Documents.Single();
    var updatedDocument = document.WithText(SourceText.From(sourceText_2, System.Text.Encoding.UTF8));
    var newCompilation = updatedDocument.Project.GetCompilationAsync().Result;

    //Get semantic edits with Reflection + CSharpEditAndContinueAnalyzer
    IEnumerable<SemanticEdit> semanticEdits = GetSemanticEdits(solution, updatedDocument);

    //Emit the metadata/IL deltas
    var metadataStream = new MemoryStream();
    var ilStream = new MemoryStream();
    var newPdbStream = new MemoryStream();
    var updatedMethods = new List<System.Reflection.Metadata.MethodDefinitionHandle>();
    var newEmitResult = newCompilation.EmitDifference(emitBaseline, semanticEdits, metadataStream, ilStream, newPdbStream, updatedMethods);
}

private static IEnumerable<SemanticEdit> GetSemanticEdits(Solution originalSolution, Document updatedDocument, CancellationToken token = default(CancellationToken))
{
    //Load our CSharpAnalyzer and ActiveStatementSpan types via reflection
    Type csharpEditAndContinueAnalyzerType = Type.GetType("Microsoft.CodeAnalysis.CSharp.EditAndContinue.CSharpEditAndContinueAnalyzer, Microsoft.CodeAnalysis.CSharp.Features");
    Type activeStatementSpanType = Type.GetType("Microsoft.CodeAnalysis.EditAndContinue.ActiveStatementSpan, Microsoft.CodeAnalysis.Features");
    dynamic csharpEditAndContinueAnalyzer = Activator.CreateInstance(csharpEditAndContinueAnalyzerType, nonPublic: true);

    var bindingFlags = BindingFlags.Instance | BindingFlags.Static | BindingFlags.Public;
    Type[] targetParams = new Type[] { };

    //Create an empty ImmutableArray<ActiveStatementSpan> because we're not currently running the code
    var immutableArray_Create_T = typeof(ImmutableArray).GetMethod("Create", bindingFlags, binder: null, types: targetParams, modifiers: null);
    var immutableArray_Create_ActiveStatementSpan = immutableArray_Create_T.MakeGenericMethod(activeStatementSpanType);
    var immutableArray_ActiveStatementSpan = immutableArray_Create_ActiveStatementSpan.Invoke(null, new object[] { });

    var method = (MethodInfo)csharpEditAndContinueAnalyzer.GetType().GetMethod("AnalyzeDocumentAsync");
    var myParams = new object[] { originalSolution, immutableArray_ActiveStatementSpan, updatedDocument, token };
    object task = method.Invoke(csharpEditAndContinueAnalyzer, myParams);
    var documentAnalysisResults = task.GetType().GetProperty("Result").GetValue(task);

    //Get the semantic edits from DocumentAnalysisResults
    var edits = (IEnumerable<SemanticEdit>)documentAnalysisResults.GetType().GetField("SemanticEdits", bindingFlags).GetValue(documentAnalysisResults);
    return edits;
}

private static Solution createSolution(string text)
{
    var tree = CSharpSyntaxTree.ParseText(text);
    var mscorlib = MetadataReference.CreateFromFile(typeof(object).Assembly.Location);
    var adHockWorkspace = new AdhocWorkspace();

    var options = new CSharpCompilationOptions(OutputKind.ConsoleApplication, platform: Platform.X86);
    var project = adHockWorkspace.AddProject(ProjectInfo.Create(ProjectId.CreateNewId(), VersionStamp.Default, "MyProject", "MyProject", "C#", metadataReferences: new List<MetadataReference>() { mscorlib }, compilationOptions: options));
    adHockWorkspace.AddDocument(project.Id, "MyDocument.cs", SourceText.From(text, System.Text.UTF8Encoding.UTF8));

    return adHockWorkspace.CurrentSolution;
}

We can see that this is quite a bit of work just to build the edits. In the above sample we made a number of simplifying assumptions. We assumed there were no errors in the compilation, that there were no illegal edits and no active statements. It’s important to cover all cases if you plan to consume this API properly.

Our next step will be to apply these deltas to a running process using APIs exposed by the CLR.

Edit and Continue Part 1 – Introduction

When discussing the Emit API in my last post, I mentioned that Roslyn gives users the ability to emit deltas between compilations. As far as I know this API is only used by Visual Studio’s Edit and Continue (EnC) feature. When you edit a running program the compiler is smart enough to only emit the changes you’ve made to the previous compilation. The CLR is then smart enough to load these changes and preserve the state of the running program.

I’ve created a (large) sample on how to use Roslyn and the CLR to modify a running process that is available on GitHub. Over the next week we’ll take a look at what it takes to use both Roslyn and the CLR to achieve this.

Part 1: Introduction
Part 2: EnC and Roslyn
Part 3: EnC and The CLR

I’ve had my eye on the Compilation.EmitDifference() API for almost a year now. I work on a Visual Studio extension called Alive that shows developers exactly what their source code does the moment they write it. This means that every time a user edits their code the extension re-compiles and re-emits the binary for their updated source code.

Re-emitting the compiled binary was a large bottleneck for us and created consistent GC pressure. When you emit a compilation you’re essentially dumping a big byte[] to memory. Worse still, if this byte[] contains over 85,000 elements then it goes straight to the large object heap. In our case these arrays weren’t long lived; the moment our users type we have to recompile and the previous binary becomes useless. Compilation.EmitDifference() allowed us to avoid emitting this giant array for every compilation and greatly reduce our extension’s memory footprint.

We can look at two approaches to consuming this API by comparing EnC and Alive. The primary difference between the two approaches is the preservation of state. EnC pauses execution of your program, lets you change it and resumes execution while retaining the previous program state. Alive has no need to preserve state between executions. It runs a given method and then waits for further instructions.

This difference means that EnC calculates the deltas between each compilation it creates, preserving state. Alive calculates deltas between the initial base compilation and the current state of the code.

How EnC builds deltas across compilations

How Alive builds deltas across compilations

The above deltas are simplified for the sake of explanation. In reality they exist as pairs of IL/Metadata deltas. Deltas also aren’t generated at the statement level, when you edit a method the CLR actually replaces the entire method with your new code.

There are also restrictions on what constitutes a valid edit. For detailed rules I’ll defer to Mike Stall’s post on valid edits but it’s possibly outdated. (One valid edit he doesn’t mention is the addition of new top-level types to a program) Programs that use these APIs should have fallback plans for invalid edits. Visual Studio’s EnC simply displays an error saying that it cannot continue while invalid edits are present. Alive falls back to its old approach and re-emits the compilation in its entirety.

In part two we’ll take a look at what it takes to get Roslyn to generate deltas between two compilations.

LRN Quick Tip: How to Test out C# 7 Features with Roslyn

As of November, people outside of the Roslyn team have been able to build and dogfood changes they make to the compiler and language services. Now that the various feature branches have caught up, we can start playing around with some of the proposed features for C#.

If you’d just like to learn about the features, I’ve put up a few videos on binary literals, digit separators and local functions.

I’ve also prepared a video on How to Test out C# 7 Features with Roslyn

The current branches available on GitHub are:

features/Annotated Types
features/Nullable Reference Types
features/constVar
features/local-functions
features/multi-Var
features/openGenericNameInNameof
features/patterns
features/privateprotected
features/ref-returns
features/tuples

The /future branch is where all these features end up once they’re close to complete and ready to be reviewed for more feedback. Today (February 9, 2015) it’s home to binary literals, digit separators and local functions.

Today we’re going to look at the steps necessary to get the /future branch to build and let us test out the new features.

Cloning and Building Roslyn

The first steps are identical to those found on Roslyn’s “Building Debugging and Testing on Windows” guideline.

  1. Clone https://github.com/dotnet/roslyn
  2. Check out the /features branch
  3. Run the “Developer Command Prompt for VS2015” from your start menu.
  4. Navigate to the directory of your Git clone.
  5. Run Restore.cmd in the command prompt to restore NuGet packages. (Note: This sometimes takes up to 30 minutes to complete and may appear to be frozen when it’s not)
  6. Build on the command line before opening in Visual Studio. Run msbuild /v:m /m Roslyn.sln
  7. Open Roslyn.sln

Enabling C# 7 Features in Visual Studio

  1. Navigate to CSharpParseOptions.cs and find IsFeatureEnabled()
  2. Force it to return true to enable all available features (a rough sketch of this change follows the list below)
  3. In the Solution Explorer, set the VisualStudioSetup project as the startup project and press F5 to run.
  4. A new instance of Visual Studio will open with the C# 7 features available for use within VS.
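
For steps 1 and 2, the change amounts to short-circuiting the feature check. Here’s a rough sketch; the exact signature of IsFeatureEnabled has shifted between Roslyn revisions, so treat the parameter type as an assumption:

// In CSharpParseOptions.cs -- force every experimental feature to report itself
// as enabled. This is for local experimentation only; don't ship it.
internal bool IsFeatureEnabled(MessageID feature)
{
    return true;
}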

Note: Although there will be no error squiggles in the editors, you won’t be able to perform full-builds until you deploy your changes to the out-of-process compiler.

Enabling C# 7 Features in Out-of-process compiler

To enable full builds within your experimental Visual Studio:

  1. Make the above changes.
  2. Deploy them to the CompilerExtension project.

There you have it, you can test out local functions, binary literals and digit separators. You can also use a similar approach to try out some of the other feature branches.

Learn Roslyn Now: Part 16 The Emit API

Up until now, we’ve mostly looked at how we can use Roslyn to analyze and manipulate source code. Now we’ll take a look at finishing the compilation process by emitting it to disk or to memory. To start, we’ll just try emitting a simple compilation to disk and checking whether or not it succeeded.


var tree = CSharpSyntaxTree.ParseText(@"
using System;
public class C
{
    public static void Main()
    {
        Console.WriteLine(""Hello World!"");
        Console.ReadLine();
    }
}");
var mscorlib = MetadataReference.CreateFromFile(typeof(object).Assembly.Location);
var compilation = CSharpCompilation.Create("MyCompilation",
    syntaxTrees: new[] { tree }, references: new[] { mscorlib });

//Emitting to file is available through an extension method in the Microsoft.CodeAnalysis namespace
var emitResult = compilation.Emit("output.exe", "output.pdb");

//If our compilation failed, we can discover exactly why.
if (!emitResult.Success)
{
    foreach (var diagnostic in emitResult.Diagnostics)
    {
        Console.WriteLine(diagnostic.ToString());
    }
}

After running this code we can see that our executable and .pdb have been emitted to bin/Debug/. We can double click output.exe and see that our program runs as expected. Keep in mind that the .pdb file is optional. I’ve only chosen to emit it here to show off the API. Writing the .pdb file to disk can take a fairly long time and it often pays to omit this argument unless you really need it.

Sometimes we might not want to emit to disk. We might just want to compile the code, emit it to memory and then execute it from memory. Keep in mind that for most cases where we’d want to do this, the scripting API probably makes more sense to use. Still, it pays to know our options.


var tree = CSharpSyntaxTree.ParseText(@"
using System;
public class MyClass
{
    public static void Main()
    {
        Console.WriteLine(""Hello World!"");
        Console.ReadLine();
    }
}");
var mscorlib = MetadataReference.CreateFromFile(typeof(object).Assembly.Location);
var compilation = CSharpCompilation.Create("MyCompilation",
    syntaxTrees: new[] { tree }, references: new[] { mscorlib });

//Emit to stream
var ms = new MemoryStream();
var emitResult = compilation.Emit(ms);

//Load into currently running assembly. Normally we'd probably
//want to do this in an AppDomain
var ourAssembly = Assembly.Load(ms.ToArray());
var type = ourAssembly.GetType("MyClass");

//Invokes our main method and writes "Hello World" 🙂
type.InvokeMember("Main", BindingFlags.Default | BindingFlags.InvokeMethod, null, null, null);

Finally, what if we want to influence how our code is compiled? We might want to allow unsafe code, mark warnings as errors or delay sign the assembly. All of these options can be customized by passing a CSharpCompilationOptions object to CSharpCompilation.Create(). We’ll take a look at how we can interact with a few of these properties below.


var tree = CSharpSyntaxTree.ParseText(@"
using System;
public class MyClass
{
    public static void Main()
    {
        Console.WriteLine(""Hello World!"");
        Console.ReadLine();
    }
}");

//We first have to choose what kind of output we're creating: DLL, .exe etc.
var options = new CSharpCompilationOptions(OutputKind.ConsoleApplication);
options = options.WithAllowUnsafe(true); //Allow unsafe code
options = options.WithOptimizationLevel(OptimizationLevel.Release); //Set optimization level
options = options.WithPlatform(Platform.X64); //Set platform

var mscorlib = MetadataReference.CreateFromFile(typeof(object).Assembly.Location);
var compilation = CSharpCompilation.Create("MyCompilation",
    syntaxTrees: new[] { tree },
    references: new[] { mscorlib },
    options: options); //Pass options to compilation

In total there are about twenty-five different options available for customization. Basically any option you have within the Visual Studio’s project property page should be available here.

Advanced options

There are a few optional parameters available in Compilation.Emit() that are worth discussing. Some of them I’m familiar with, but others I’ve never used.

  • xmlDocPath – Auto generates XML documentation based on the documentation comments present on your classes, methods, properties etc.
  • manifestResources – Allows you to manually embed resources such as strings and images within the emitted assembly. Batteries are not included with this API and it requires some heavy lifting if you want to embed .resx resources within your assembly. We’ll explore this overload in a future blog post, but a small sketch follows after this list.
  • win32ResourcesPath – Path of the file from which the compilation’s Win32 resources will be read (in RES format). Unfortunately I haven’t used this API yet and I’m not at all familiar with Win32 Resources.
  • There is also the option to EmitDifference between two compilations. I’m not familiar with this API, and I’m not familiar with how you can apply these deltas to existing assemblies on disk or in memory. I hope to learn more about this API in the coming months.
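To give a rough idea of the first two items, here’s a minimal sketch using the stream-based Emit overload; the file and resource names are purely illustrative placeholders:

//A minimal sketch: emit the assembly along with XML documentation and one
//manually embedded resource. File and resource names are illustrative only.
using (var peStream = new FileStream("output.exe", FileMode.Create))
using (var xmlStream = new FileStream("output.xml", FileMode.Create))
{
    var resource = new ResourceDescription(
        "MyResources.Sample",
        () => new MemoryStream(Encoding.UTF8.GetBytes("Hello from a resource")),
        isPublic: true);

    var result = compilation.Emit(
        peStream,
        xmlDocumentationStream: xmlStream,
        manifestResources: new[] { resource });
}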

That just about wraps up the Emit API. If you have any questions, feel free to ask them in the comments below.

Learn Roslyn Now: Part 15 The SymbolVisitor

I had a question the other day that I ended up taking directly to the Roslyn issues: How do I get a list of all of the types available to a compilation? Schabse Laks (@Schabse) and David Glick (@daveaglick) introduced me to a cool class I hadn’t encountered before: The SymbolVisitor.

In previous posts we touched on the CSharpSyntaxWalker and the CSharpSyntaxRewriter. The SymbolVisitor is the analogue of the SyntaxVisitor, but it applies at the symbol level. Unfortunately, unlike the CSharpSyntaxWalker and CSharpSyntaxRewriter, when using the SymbolVisitor we must construct the scaffolding code to visit all the symbols ourselves.

To simply list all the types available to a compilation we can use the following.


public class NamedTypeVisitor : SymbolVisitor
{
    public override void VisitNamespace(INamespaceSymbol symbol)
    {
        Console.WriteLine(symbol);
        foreach (var childSymbol in symbol.GetMembers())
        {
            //We must implement the visitor pattern ourselves and
            //accept the child symbols in order to visit their children
            childSymbol.Accept(this);
        }
    }

    public override void VisitNamedType(INamedTypeSymbol symbol)
    {
        Console.WriteLine(symbol);
        foreach (var childSymbol in symbol.GetTypeMembers())
        {
            //Once again we must accept the children in order to visit
            //all of their children
            childSymbol.Accept(this);
        }
    }
}

//Now we need to use our visitor
var tree = CSharpSyntaxTree.ParseText(@"
class MyClass
{
    class Nested
    {
    }
    void M()
    {
    }
}");
var mscorlib = MetadataReference.CreateFromFile(typeof(object).Assembly.Location);
var compilation = CSharpCompilation.Create("MyCompilation",
    syntaxTrees: new[] { tree }, references: new[] { mscorlib });

var visitor = new NamedTypeVisitor();
visitor.Visit(compilation.GlobalNamespace);

In order to visit all the methods available to a given compilation we can use the following:


public class MethodSymbolVisitor : SymbolVisitor
{
    //NOTE: We have to visit the namespace's children even though
    //we don't care about them. 😦
    public override void VisitNamespace(INamespaceSymbol symbol)
    {
        foreach (var child in symbol.GetMembers())
        {
            child.Accept(this);
        }
    }

    //NOTE: We have to visit the named type's children even though
    //we don't care about them. 😦
    public override void VisitNamedType(INamedTypeSymbol symbol)
    {
        foreach (var child in symbol.GetMembers())
        {
            child.Accept(this);
        }
    }

    public override void VisitMethod(IMethodSymbol symbol)
    {
        Console.WriteLine(symbol);
    }
}

It’s important to be aware of how you must structure your code in order to visit all the symbols you’re interested in. By now you may have noticed that using this API directly makes me a little sad. If I’m interested in visiting method symbols, I don’t want to have to write code that visits namespaces and types.

Hopefully at some point we’ll get a SymbolWalker class that we can use to separate our implementation from the traversal code. I’ve opened an issue on Roslyn requesting this feature. (It seems like it’s going to be challenging to implement and would require working with both syntax and symbols.)

Finding All Named Type Symbols

Finally, you might be wondering how I answered my original question: How do we get a list of all of the types available to a compilation? My implementation is below:


public class CustomSymbolFinder
{
    public List<INamedTypeSymbol> GetAllSymbols(Compilation compilation)
    {
        var visitor = new FindAllSymbolsVisitor();
        visitor.Visit(compilation.GlobalNamespace);
        return visitor.AllTypeSymbols;
    }

    private class FindAllSymbolsVisitor : SymbolVisitor
    {
        public List<INamedTypeSymbol> AllTypeSymbols { get; } = new List<INamedTypeSymbol>();

        public override void VisitNamespace(INamespaceSymbol symbol)
        {
            //Visit child namespaces and types in parallel. Note that List<T>.Add
            //is not thread-safe, so a concurrent collection (or a lock) would be
            //safer if you adopt this approach.
            Parallel.ForEach(symbol.GetMembers(), s => s.Accept(this));
        }

        public override void VisitNamedType(INamedTypeSymbol symbol)
        {
            AllTypeSymbols.Add(symbol);
            foreach (var childSymbol in symbol.GetTypeMembers())
            {
                base.Visit(childSymbol);
            }
        }
    }
}

I should note that after implementing this solution, I came to the conclusion that it was too slow for our purposes. We got a major performance boost by only visiting symbols within namespaces defined in source, but it was still about an order of magnitude slower than simply searching for types via the SymbolFinder class.
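The optimized version didn’t make it into this post, but the idea looks roughly like the sketch below (a simplified, hypothetical version, not the exact code we shipped): skip any child namespace that has no declaring location in source before visiting it.

//A simplified sketch (hypothetical class name), not the exact implementation used.
private class SourceTypesVisitor : SymbolVisitor
{
    public List<INamedTypeSymbol> AllTypeSymbols { get; } = new List<INamedTypeSymbol>();

    public override void VisitNamespace(INamespaceSymbol symbol)
    {
        foreach (var member in symbol.GetMembers())
        {
            //Skip namespaces that only come from metadata references;
            //we only care about namespaces declared (at least partly) in source.
            var ns = member as INamespaceSymbol;
            if (ns != null && !ns.Locations.Any(l => l.IsInSource))
                continue;
            member.Accept(this);
        }
    }

    public override void VisitNamedType(INamedTypeSymbol symbol)
    {
        AllTypeSymbols.Add(symbol);
        foreach (var childSymbol in symbol.GetTypeMembers())
        {
            childSymbol.Accept(this);
        }
    }
}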

Still, the SymbolVisitor class is probably appropriate for one-off uses during compilation or for visiting a subset of available symbols. At the very least, it’s worth being aware of.

Learn Roslyn Now: Part 14 Intro to the Scripting API

The Scripting API is finally here! After being removed from Roslyn’s 1.0 release, it’s now available (for C#) in pre-release form on NuGet. To install it into your project, just run:

Install-Package Microsoft.CodeAnalysis.Scripting -Pre

Note: You need to target .NET 4.6 or you’ll get the following exception when running your scripts:

Could not load file or assembly 'System.Runtime, Version=4.0.20.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a' or one of its dependencies. The system cannot find the file specified.

Note: Today (October 15, 2015) the Scripting APIs depend on the 1.1.0-beta1 release, so you’ll have to update your Microsoft.CodeAnalysis references to match if you want to use all of Roslyn with the scripting stuff.
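In the Package Manager Console that looks something like the following (match the version to whatever the Scripting package currently depends on):

Install-Package Microsoft.CodeAnalysis -Version 1.1.0-beta1 -Pre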

There are a few different ways to use the Scripting API.

EvaluateAsync

CSharpScript.EvaluateAsync is probably the simplest way to get started evaluating expressions. Simply pass any expression that would return a single result to this method and it will be evaluated for you.


var result = await CSharpScript.EvaluateAsync("5 + 5");
Console.WriteLine(result); // 10
result = await CSharpScript.EvaluateAsync(@"""sample""");
Console.WriteLine(result); // sample
result = await CSharpScript.EvaluateAsync(@"""sample"" + "" string""");
Console.WriteLine(result); // sample string
result = await CSharpScript.EvaluateAsync("int x = 5; int y = 5; x"); //Note the last x is not contained in a proper statement
Console.WriteLine(result); // 5

RunAsync

Not every script returns a single value. For more complex scripts we may want to keep track of state or inspect different variables. CSharpScript.RunAsync creates and returns a ScriptState object that allows us to do exactly this. Take a look:


var state = CSharpScript.RunAsync(@"int x = 5; int y = 3; int z = x + y;").Result;
ScriptVariable x = state.Variables["x"];
ScriptVariable y = state.Variables["y"];
Console.Write($"{x.Name} : {x.Value} : {x.Type} "); // x : 5
Console.Write($"{y.Name} : {y.Value} : {y.Type} "); // y : 3


We can also maintain the state of our script and continue applying changes to it with ScriptState.ContinueWithAsync():


var state = CSharpScript.RunAsync(@"int x = 5; int y = 3; int z = x + y;").Result;
state = state.ContinueWithAsync("x++; y = 1;").Result;
state = state.ContinueWithAsync("x = x + y;").Result;
ScriptVariable x = state.Variables["x"];
ScriptVariable y = state.Variables["y"];
Console.Write($"{x.Name} : {x.Value} : {x.Type} "); // x : 7
Console.Write($"{y.Name} : {y.Value} : {y.Type} "); // y : 1


ScriptOptions

We can start to get into more interesting code by adding references to DLLs that we’d like to use. We use ScriptOptions to provide our script with the proper MetadataReferences.


ScriptOptions scriptOptions = ScriptOptions.Default;

//Add references to mscorlib and System.Core
var mscorlib = typeof(System.Object).Assembly;
var systemCore = typeof(System.Linq.Enumerable).Assembly;
scriptOptions = scriptOptions.AddReferences(mscorlib, systemCore);

//Add namespaces
scriptOptions = scriptOptions.AddNamespaces("System");
scriptOptions = scriptOptions.AddNamespaces("System.Linq");
scriptOptions = scriptOptions.AddNamespaces("System.Collections.Generic");

var state = await CSharpScript.RunAsync(@"var x = new List<int>() { 1, 2, 3, 4, 5 };", scriptOptions);
state = await state.ContinueWithAsync("var y = x.Take(3).ToList();");
var y = state.Variables["y"];
var yList = (List<int>)y.Value;
foreach (var val in yList)
{
    Console.Write(val + " "); // Prints 1 2 3
}

The Scripting API is surprisingly broad. The Microsoft.CodeAnalysis.Scripting namespace is full of public types that I’m not at all familiar with, and there’s a lot left to learn. I’m excited to see what people will build with this and how they might incorporate scripting into their applications.

Kasey Uhlenhuth from the Roslyn team has compiled a list of code snippets to help get you off the ground with the Scripting API. Check them out on GitHub!

If you’ve got some cool plans for the Scripting API, let me know in the comments below!