LTFN 10: CIFAR-10

Part of the series Learn TensorFlow Now

Over the last nine posts, we built a reasonably effective digit classifier. Now we’re ready to enter the big leagues and try out our VGGNet on a more challenging image recognition task. CIFAR-10 (Canadian Institute For Advanced Research) is a collection of 60,000 cropped images of planes, cars, birds, cats, deer, dogs, frogs, horses, ships, and trucks.

  • 50,000 images in the training set
  • 10,000 images in the test set
  • Size: 32×32 (1024 pixels)
  • 3 Channels (RGB)
  • 10 output classes
Sample images from CIFAR-10

CIFAR-10 is a natural next step due to its similarities to the MNIST dataset. For starters, we have the same number of training images, testing images and output classes. CIFAR-10’s images are of size 32x32, which is convenient because we were padding MNIST’s images to achieve that same size. These similarities make it easy to use our previous VGGNet architecture to classify these images.

Despite the similarities, there are some differences that make CIFAR-10 a more challenging image recognition problem. For starters, our images are RGB and therefore have 3 channels. Detecting lines might not be so easy when they can be drawn in any color. Another challenge is that our images are now 2-D depictions of 3-D objects. In the above image, the center two images represent the “truck” class, but are shown at different angles. This means our network has to learn enough about “trucks” to recognize them at angles it has never seen before.

Loading CIFAR-10

The CIFAR-10 dataset is hosted at: https://www.cs.toronto.edu/~kriz/cifar.html

In order to make it easier to work with, I’ve prepared a small script that downloads, shuffles and caches the dataset locally. You can find it on GitHub here.

After saving this file locally, we can use it to prepare our datasets:
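The original post embeds the code here. Below is a minimal sketch of what it does; the module and function names (cifar_data_loader, load_data) are assumptions, so adjust them to match the helper script you downloaded:

import cifar_data_loader  # assumption: the name of the helper script described above

(train_images, train_labels), (test_images, test_labels) = cifar_data_loader.load_data()

print(train_images.shape)
print(train_labels.shape)
print(test_images.shape)
print(test_labels.shape)
print(train_images[0].shape)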

Running this locally produces the following output:

Attempting to download: https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
0%....5%....10%....15%....20%....25%....30%....35%....40%....45%....50%....55%....60%....65%....70%....75%....80%....85%....90%....95%....100%
Download Complete!
(50000, 32, 32, 3)
(50000,)
(10000, 32, 32, 3)
(10000,)
(32, 32, 3)

The above output shows that we’ve downloaded the dataset and created a training set of size 50,000 and a test set of size 10,000. Note: Unlike MNIST, these labels are not 1-hot encoded (otherwise they’d be of size 50,000x10 and 10,000x10 respectively). We have to account for this difference in shape when we build VGGNet for this dataset.

Let’s start by adjusting input and labels to fit the CIFAR-10 dataset:
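A sketch of the adjusted placeholders, assuming the tensors from earlier posts in the series are named input and labels (note that the labels are plain integer class indices rather than 1-hot vectors):

input = tf.placeholder(tf.float32, shape=(None, 32, 32, 3))  # 32x32 RGB images: 3 channels instead of 1
labels = tf.placeholder(tf.int32, shape=(None,))              # integer class labels, not 1-hot vectors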

Next we have to adjust the first layer of our network. Recall from the post on convolutions that each convolutional filter must match the depth of the layer against which it is convolved. Previously we had defined our convolutional filter to be of shape [3, 3, 1, 64]. That is, 64 3x3 convolutional filters, each with a depth of 1, matching the depth of our grayscale input images. Now that we’re using RGB images, we must define it to be of shape [3, 3, 3, 64]:
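Assuming the first layer’s weights are named as in the earlier posts, the change might look like:

layer1_weights = tf.Variable(tf.truncated_normal([3, 3, 3, 64], stddev=0.1))  # [height, width, input_depth, num_filters]
layer1_bias = tf.Variable(tf.zeros([64]))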

Another change we must make is the calculation of cost. Previously we were using tf.nn.softmax_cross_entropy_with_logits() which is suitable only when our labels are 1-hot encoded. When we represent the labels as single integers, we can instead use tf.nn.sparse_softmax_cross_entropy_with_logits(). It is otherwise identical to our original softmax cross entropy function.
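A sketch of the adjusted cost, assuming the network’s final pre-softmax output is named logits:

cost = tf.reduce_mean(
    tf.nn.sparse_softmax_cross_entropy_with_logits(labels=labels, logits=logits))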

Finally, we must also modify our calculation of correct_prediction (used to calculate accuracy) to account for the change in label shape. We no longer have to take the tf.argmax of our labels because they’re already represented as a single number:
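A sketch of the adjusted accuracy calculation, assuming the softmax output of the network is named predictions:

correct_prediction = tf.equal(tf.argmax(predictions, 1, output_type=tf.int32), labels)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32)) * 100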

Note: We have to specify output_type=tf.int32 because tf.argmax() returns tf.int64 by default.

With that, we’ve got everything we need to test our VGGNet on CIFAR-10. The complete code is presented at the end of this post.

After running our network for 10,000 steps, we’re greeted with the following output:

Cost: 470.996
Accuracy: 9.00000035763 %
Cost: 2.00049
Accuracy: 25.0 %
...
Cost: 0.553867
Accuracy: 82.9999983311 %
Cost: 0.393799
Accuracy: 87.0000004768 %
Test Cost: 0.895597087741
Test accuracy: 70.9400003552 %

Our final test accuracy appears to be approximately 71%, which isn’t too great. On one hand this is disappointing as it means our VGGNet architecture (or the method in which we’re training it) doesn’t generalize very well. On the other hand, CIFAR-10 presents us with new opportunities to try out new neural network components and architectures. In the next few posts we’ll explore some of these approaches to build a neural network that can handle the more complex CIFAR-10 dataset.

If you look carefully at the previous results you may have noticed something interesting. For the first time, our test accuracy (71%) is much lower than our training accuracy (~82-87%). This is a problem we’ll discuss in future posts on bias and variance in deep learning.

Complete Code

2017: A Retrospective

2017 saw a lot of change for me. I left Microsoft, returned to Toronto and shifted my focus from developer tools to machine learning. Following in patio11’s footsteps, I wanted to take a moment to reflect on the year and clarify my thoughts in written form.

Microsoft

July 2016 – July 2017

In July 2016, my company Code Connect joined Microsoft with the stated goal of integrating our Alive extension into Visual Studio 2017. Unfortunately our visas were delayed and we weren’t able to begin work until early September. It didn’t make sense to rush Alive into Visual Studio 2017 and risk introducing stability problems so we held off for a few months.

In the meantime, we conducted a number of experiments to try to quantify demand for Alive so we could compare it to other potential features. When the experiments concluded, the data didn’t demonstrate an immediate need for a product like Alive. Alive was put on ice and we worked on features that were immediately pressing to Visual Studio’s success. At the time this meant focusing on accessibility, a top-down directive from Satya Nadella himself.

After Alive was sunset, I did some reflection on where I wanted to take my career and what I wanted to focus on. For a long time I’ve worried that the knowledge I’ve accumulated writing Visual Studio extensions is not immediately applicable outside of Visual Studio. Despite working on developer tools for almost five years, I would be on near-equal footing with a junior developer when it comes to developing extensions for VS Code or Eclipse. Much of my expertise comes in the form of random bits of trivia about Visual Studio. The problems I was faced with were challenging, but not very interesting.

The more I complained, the more I realized something had to change. In July 2017 I left Microsoft and applied for a position at a company for which I’d previously been an intern. However I didn’t have the C++ experience required for the roles they were staffing so we concluded it probably wouldn’t be a great fit.

While at Microsoft I began to learn about machine learning in my free time. In 2015 I’d watched in awe from the sidelines as AlphaGo crushed its human counterparts and I wanted in. I decided that now was the time for me to focus exclusively on machine learning and deep learning in particular.

Would I do it again?

Yup. Alive’s best chance for the long-term was a home at Microsoft. We had a modest number of paying customers but required an order of magnitude more in order to grow and continue full-time work on Alive. We had grand dreams of bringing Alive to other languages and we wouldn’t have been able to do so without hiring more developers. Microsoft came to us at the perfect time and gave Alive one last shot at success. I’m sad it didn’t work out, but I’m eternally grateful to everyone involved in getting us to Microsoft and to the Visual Studio editor team for being my home for the year.

Machine Learning

August 2017 – Ongoing

The hardest thing about starting something brand new is figuring out where to start. I settled on a mix of linear algebra, Coursera, Kaggle and open-source work.

DeepLearning.ai

In September Andrew Ng launched a new deep learning specialization (deeplearning.ai) made up of five courses:

  • Neural Networks and Deep Learning
  • Improving Deep Neural Networks: Hyperparameter Tuning, Regularization and Optimization
  • Structuring Machine Learning Projects
  • Convolutional Neural Networks
  • Sequence Models

I’ve completed the first four and am waiting for the final course on Sequence Models to launch in the coming month. These courses paired nicely with Andrej Karpathy’s CS231n videos. This specialization will be my go-to answer for the question “I want to learn about neural networks, where do I start?”

Kaggle

I wanted to apply the lessons I learned in Andrew’s videos to my own neural networks. I set out to compete in the introductory Kaggle competition “MNIST Digit Recognizer”. As I learned more and more about neural networks I would apply these lessons to my network and watch the score improve. Being told “batch normalization will improve your results” is one thing, but watching your score tick higher is something else altogether. As of this writing my best submission puts me in the top 25% of submissions with 99.171% accuracy.

My top submission.

 

TensorFlow

I set a personal goal to contribute at least one pull request to TensorFlow so as to better understand the tool I was using. Coming from a .NET Desktop background, there was a bit of a learning curve when it came to tools like bazel and Docker. However, like most things in software development these tools just require a bit of time and focused energy to understand.

I’ve had mixed success with my contributions. My first pull request was a correction to TensorFlow’s implementation of the Inception network. The reviewers agreed that the initial model was incorrect, but were hesitant to change it due to backward compatibility concerns.

My second pull request improved support for various image operations in TensorFlow. In short, it made it easier to augment multiple images at once (e.g. randomly flipping images left-to-right). Unfortunately, I introduced some performance regressions and my changes had to be reverted. 😢

My third pull request is a re-implementation of the last, while avoiding the performance regressions. It remains open, but I’m confident that after some work it will be accepted.

On the whole, I’m pleased with the progress I made with TensorFlow. The API surface is massive and I have a lot to learn, but I’m making real, measurable progress. I’ll continue to contribute back code where appropriate.

ICLR 2018 Reproducibility Challenge

At the end of each course, Andrew Ng took time to interview famous names in machine learning such as Geoffrey Hinton and Ian Goodfellow. A common piece of advice they shared with newcomers was to “reproduce papers”. Around the same time, I stumbled upon the ICLR 2018 Reproducibility Challenge, where students are challenged to reproduce the results of papers submitted to the ICLR conference.

I signed up and chose the paper “Super-Convergence: Very Fast Training of Residual Networks Using Large Learning Rates”. This paper proposed a method for training certain neural networks an order of magnitude faster than previous methods allowed. Their approach involved varying the learning rate linearly between (what are typically considered) large values throughout training.

This was the hardest portion of my work thus far and forced me to delve into the details of TensorFlow. In December I made my report available in the comments of their paper’s submission. The TensorFlow portion of my work is available on GitHub at: http://github.com/JoshVarty/ReproducingSuperconvergence

Blogging

My only regret during 2017 was that I published zero blog posts. As such, this was the first year that traffic to my blog decreased.

2017 saw a modest decline in blog traffic

 

I often tell others to start blogging, and this year my actions didn’t match my words. I attribute my poor track-record to one-part laziness and one-part lack of confidence. It’s surprisingly difficult to work up the courage to write about a subject when you’re brand new to it.

Goals for 2018

  • Follow Jeff Atwood’s advice for bloggers and stick to a schedule for blogging. I want to buckle down and write one blog post a week in 2018
  • Read Ian Goodfellow’s Deep Learning Book
  • Contribute to TensorFlow
  • Compete in a more challenging Kaggle competition
  • Work on HackerRank problems to strengthen my interview skills
  • Get a job related to ML/AI (preferably some kind of research role)

 

 

EnC Part 3 – The CLR

In the last post, we looked at using Roslyn to generate deltas between two compilations. Today we’ll take a look at how we can apply these deltas to a running process.

The CLR API

If you dig through Microsoft’s .NET Reference Source, you’ll occasionally come across extern methods like FastAllocateString() decorated with a special attribute: [MethodImplAttribute(MethodImplOptions.InternalCall)]. These are entry points to the CLR that can be called from managed code. Calling into the CLR can be done for a number of reasons. In the case of FastAllocateString it’s to implement certain functionality in native code for performance (in this case without even the overhead of P/Invoke). Other entry points are exposed to trigger CLR behavior like garbage collection or to apply deltas to a running process.
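The declaration looks roughly like this (paraphrased from the Reference Source):

// String.cs in the .NET Reference Source (paraphrased): an FCall entry point into the CLR.
[MethodImplAttribute(MethodImplOptions.InternalCall)]
internal static extern string FastAllocateString(int length);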

When I started this project I wasn’t even aware the CLR had an API. Fortunately Microsoft has recently released internal documentation that explains much of the CLR’s behavior, including these APIs. “Mscorlib and Calling Into the Runtime” documents the differences between FCall, QCall and P/Invoke as entry points into the CLR from managed code.

Managing the many methods, classes and interfaces is a huge pain and too much work to do manually when starting out. Luckily Microsoft has released a managed wrapper that makes a lot of this stuff easier to work with. The Managed Debug Sample (mdbg) has everything we’ll need to attach to a process and apply changes to it.

The sample has a few extra projects. For our purposes we’ll need:

  • corapi – The managed API we’ll interact with directly
  • raw – Set of interfaces and COMImports over the ICorDebug API
  • NativeDebugWrappers – Functionality for low level Windows debugging

Game Plan

At a high level, our approach is going to be the following:

  1. Create an instance of CorDebugger, a debugger we can use to create other processes and attach to them.
  2. Start a remote process
  3. Intercept loading of modules and mark them for Edit and Continue
  4. Apply deltas

CorDebugger

Creating an instance of the debugger is fairly involved. We first have to get an instance of the CLR host based on the version of the runtime we’re interested in (in our case anything after v4.0 will work). Working with the managed API is still awkward; certain types are created based on GUIDs that seem to be undocumented outside of sample code. Nonetheless, the following code creates an instance of a managed debugger we can use.

In the following we get a list of runtimes available from the currently running process. I can’t offer insight into whether this is “good” or “bad” but it’s something to be aware of.

Starting the process

Once we’ve got a hold of our debugger, we can use it to start a process. While working on this I learned that we (in the .NET world) have been shielded from some of the peculiarities of creating a process on Windows. These peculiarities start to bleed through when creating processes with our custom debugger.

For example, if we want to send the argument 123456 to our new process, it turns out we have to send the process’ filename as the first argument as well. So a call to ICorDebug::CreateProcess(string applicationName, string commandLine) ends up looking something like CreateProcess(@"C:\TestApp.exe", @"C:\TestApp.exe 123456"), with the filename repeated at the start of the command line.

For more on this Mike Stall has a post on Conventions for passing the arguments to a process.

We also have to manually pass process flags when creating our process. These flags dictate various properties for our new process (Should a new window be created? Should we debug child processes of this process? etc.). Below we start a process, assuming that the application is in the current directory.
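The original post embeds the code here. The sketch below shows the general shape; the exact CreateProcess overload exposed by the corapi wrapper (and whether it accepts the raw Win32 creation flags directly) should be verified against the mdbg sample source, so treat the parameter list as an assumption.

// Hypothetical sketch: the corapi CreateProcess overload and flag handling may differ.
// debugger is the CorDebugger instance created earlier.
const int CREATE_NEW_CONSOLE = 0x00000010;  // Win32 process-creation flag

string app = Path.Combine(Environment.CurrentDirectory, "TestApp.exe");

// Note: the application name is repeated as the first "argument" of the command line.
CorProcess process = debugger.CreateProcess(
    app,                            // applicationName
    app + " 123456",                // commandLine (filename first, then our actual arguments)
    Environment.CurrentDirectory,   // working directory
    CREATE_NEW_CONSOLE);            // creation flags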

Mark Modules for Edit and Continue

By default the CLR doesn’t expect that EnC will be enabled. In order to enable it, we’ll have to manually set JIT flags on each module we’re interested in. CorDebug exposes an event that signals when a module has been loaded, so we’ll use this to control the flags.

A sample event handler for module loading might look like:
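The original post embeds a gist here; the following is a hedged reconstruction in which the event and property names follow the mdbg corapi wrapper but should be double-checked against the sample source.

// Hypothetical sketch: member names are assumptions based on the mdbg corapi wrapper.
process.OnModuleLoad += (object sender, CorModuleEventArgs e) =>
{
    // Only mark the module we intend to edit for EnC; setting the flag on
    // NGen-ed ("Zap") modules throws an exception.
    if (e.Module.Name.EndsWith("TestApp.exe", StringComparison.OrdinalIgnoreCase))
    {
        e.Module.JITCompilerFlags = CorDebugJITCompilerFlags.CORDEBUG_JIT_ENABLE_ENC;
    }

    e.Continue = true;  // let the debuggee keep running
};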

Notice in the above that we’re only setting the flag for the module we’re interested in. If we try to set the JIT flags for all modules, we’ll run into exceptions when working with NGen-ed modules. The exception is a little cryptic and complains about “Zap Modules”, but this turns out to just be the internal name for NGen-ed modules.

Applying the Deltas

Finally. After three blog posts we’ve arrived at the point: Actually manipulating the running process.

In truth, we don’t apply our changes directly to the process, but to an individual module within it. So our first task is to find the individual module we want to change. We can search through all AppDomains, assemblies and modules to find the module with the correct name.

Once we find the module, we need to request metadata about it. This turns out to be a weird implementation detail: the CLR assumes you can’t possibly want to apply changes unless you’ve requested this info previously. We put this all together into the following:
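The original post embeds the code here. A hedged sketch follows; metadataDelta and ilDelta are the byte arrays produced by Compilation.EmitDifference() in the previous post, and the corapi member names (AppDomains, Assemblies, Modules, GetMetaDataInterface, ApplyChanges) should be verified against the mdbg sample.

// Hypothetical sketch: find the target module, touch its metadata, then apply the deltas.
CorModule targetModule = null;

foreach (CorAppDomain appDomain in process.AppDomains)
    foreach (CorAssembly assembly in appDomain.Assemblies)
        foreach (CorModule module in assembly.Modules)
            if (module.Name.EndsWith("TestApp.exe", StringComparison.OrdinalIgnoreCase))
                targetModule = module;

// The CLR expects metadata to have been requested for a module before deltas
// can be applied to it, so request the import interface first.
targetModule.GetMetaDataInterface(typeof(IMetadataImport));

// metadataDelta and ilDelta come from Compilation.EmitDifference().
targetModule.ApplyChanges(metadataDelta, ilDelta);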

Remapping

I should at least touch on one more aspect of EnC I’ve glossed over thus far: remapping. If you are changing a method that has currently active statements, you will be given an opportunity to remap the current “Instruction Pointer” based on line number. It’s up to you to decide on which line execution should resume. The CorDebugger exposes OnFunctionRemapOpportunity and OnFunctionRemapComplete as events that allow you to guide remapping.

Here’s a sample remapping event handler:

We’ve now got all the pieces necessary to manipulate a running process and a good base to build off of. Complete code for today’s blog post can be found here on GitHub. Leave any questions in the comments and I’ll do my best to answer them or direct you to someone who can at Microsoft.

Edit and Continue Part 2 – Roslyn

Our first task is to coerce Roslyn to emit metadata and IL deltas between two compilations. I say coerce because we’ll have to do quite a bit of work to get things working. The Compilation.EmitDifference() API is marked as public, but I’m fairly sure it’s yet to be actually used by the public. Getting everything to work requires reflection and manual copying of Roslyn code that doesn’t ship via NuGet.

The first order of business is to figure out what it takes to call Compilation.EmitDifference() in the first place. What parameters are we expected to provide? The signature:
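From memory of the Roslyn source at the time, it looks roughly like this (treat the exact parameter names as approximate):

// Roughly the shape of the API; the streams receive the emitted deltas.
public EmitDifferenceResult EmitDifference(
    EmitBaseline baseline,
    IEnumerable<SemanticEdit> edits,
    Stream metadataStream,
    Stream ilStream,
    Stream pdbStream,
    ICollection<MethodDefinitionHandle> updatedMethods,
    CancellationToken cancellationToken = default(CancellationToken))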

So based on the above, the two input arguments that we need to worry about are EmitBaseline and IEnumerable<SemanticEdit>. We’ll approach these one at a time.

EmitBaseline

An EmitBaseline represents a module created from a previous compilation. Modules live inside of assemblies and for our purposes it’s safe to assume that every module relates one-to-one with an assembly. (In reality multi-module assemblies can exist, but neither Visual Studio nor MSBuild support their creation). For more see this StackOverflow question.

We’ll look at the EmitBaseline as representing an assembly created from a previous compilation. We want to create a baseline to represent the initial compiled assembly before any changes are made to it. Roslyn can compare this baseline to new compilations we create.

A baseline can be created via EmitBaseline.CreateInitialBaseline():
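A sketch of the call; constructing the two arguments is where the real work lies:

// Sketch: both arguments still need to be constructed (see below).
EmitBaseline baseline = EmitBaseline.CreateInitialBaseline(
    moduleMetadata,             // ModuleMetadata for the original assembly
    debugInformationProvider);  // Func<MethodDefinitionHandle, EditAndContinueMethodDebugInformation>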

Now we’ve got two more problems: ModuleMetadata and a function that maps between MethodDefinitionHandle and EditAndContinueMethodDebugInformation.

ModuleMetadata simply represents summary information about our module/assembly. Thankfully we can create it easily by passing our initial assembly to either ModuleMetadata.CreateFromFile (for assemblies on disk) or ModuleMetadata.CreateFromStream (for assemblies in memory).

Func<MethodDefinitionHandle, EditAndContinueMethodDebugInformation> proves much harder to work with. This function maps between methods and various debug information including a method’s local variable slots, lambdas and closures. This information can be generated by reading .pdb symbol files. Unfortunately there’s no public API for generating this function. What’s worse is that we’ll have to use test APIs that don’t even ship via NuGet so even Reflection is out of the question.

Instead we’ll have to piece together bits of code from Roslyn’s test utilities. Ultimately this requires that we copy code from the following files:

We’ll also need to include two NuGet packages:

It’s a bit of a pain that we need to bring so much of Roslyn with us just for the sake of one file. It’s sort of like working with a ball of yarn; you pull on one string and the whole thing comes with it.

The SymReaderFactory coupled with the DiaSymReader packages can interpret debug information from Microsoft’s PDB format. Once we’ve copied these files to our project we can use the SymReaderFactory to create a debug information provider by feeding the PDB stream to SymReaderFactory.CreateReader().
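Putting the pieces together might look something like the snippet below. The helper that turns a method handle into EditAndContinueMethodDebugInformation (called GetEncMethodDebugInfo here) comes from the copied test code; its exact name may differ in your copy, so treat it as an assumption.

// Sketch, assuming the copied Roslyn test utilities are available in the project.
using (var peStream = File.OpenRead("TestApp.exe"))
using (var pdbStream = File.OpenRead("TestApp.pdb"))
{
    ModuleMetadata moduleMetadata = ModuleMetadata.CreateFromStream(peStream);
    var symReader = SymReaderFactory.CreateReader(pdbStream);

    EmitBaseline baseline = EmitBaseline.CreateInitialBaseline(
        moduleMetadata,
        handle => symReader.GetEncMethodDebugInfo(handle));  // assumption: helper from the copied test code
}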

IEnumerable<SemanticEdit>

SemanticEdits describe the differences between compilations at the symbol level. For example, modifying a method will introduce a SemanticEdit for the corresponding IMethodSymbol, marking it as updated. Roslyn will end up converting these SemanticEdits into proper IL and metadata deltas.

It turns out SemanticEdit is a public class. The problem is that they’re difficult to generate properly. We have to diff Documents across different versions of a Solution which means we have to take into account changes in syntax, trivia and semantics. We also have to detect invalid changes which aren’t (to my knowledge) officially or completely documented anywhere. In this Roslyn issue, I propose three potential approaches to generating the edits, but we’ll only take a look at the one I’ve implemented myself: using the internal CSharpEditAndContinueAnalyzer.

The CSharpEditAndContinueAnalyzer and its base class method AnalyzeDocumentAsync will generate a DocumentAnalysisResult with our edits along with some supplementary information about the changes. Were there errors? Were the changes substantial? Were there special areas of interest such as catch or finally blocks?

Since these classes are internal, we’ll have to use Reflection to get at them. We’ll also need to keep a copy of the Solution we used to generate our EmitBaseline. I’ve put all of the code together into a complete sample. The reflection-based approach for CSharpEditAndContinueAnalyzer is demonstrated in the GetSemanticEdits method below.

We can see that this is quite a bit of work just to build the edits. In the above sample we made a number of simplifying assumptions. We assumed there were no errors in the compilation, that there were no illegal edits and no active statements. It’s important to cover all cases if you plan to consume this API properly.

Our next step will be to apply these deltas to a running process using APIs exposed by the CLR.

Edit and Continue Part 1 – Introduction

When discussing the Emit API in my last post, I mentioned that Roslyn gives users the ability to emit deltas between compilations. As far as I know this API is only used by Visual Studio’s Edit and Continue (EnC) feature. When you edit a running program the compiler is smart enough to only emit the changes you’ve made to the previous compilation. The CLR is then smart enough to load these changes and preserve the state of the running program.

I’ve created a (large) sample on how to use Roslyn and the CLR to modify a running process that is available on GitHub. Over the next week we’ll take a look at what it takes to use both Roslyn and the CLR to achieve this.

Part 1: Introduction
Part 2: EnC and Roslyn
Part 3: EnC and The CLR

I’ve had my eye on the Compilation.EmitDifference() API for almost a year now. I work on a Visual Studio extension called Alive that shows developers exactly what their source code does the moment they write it. This means that every time a user edits their code the extension re-compiles and re-emits the binary for their updated source code.

Re-emitting the compiled binary was a large bottleneck for us and created consistent GC pressure. When you emit a compilation you’re essentially dumping a big byte[] to memory. Worse still, if this byte[] contains over 85,000 elements then it goes straight to the large object heap. In our case these arrays weren’t long lived; the moment our users type we have to recompile and the previous binary becomes useless. Compilation.EmitDifference() allowed us to avoid emitting this giant array for every compilation and greatly reduce our extension’s memory footprint.

We can look at two approaches to consuming this API by comparing EnC and Alive. The primary difference between the two approaches is the preservation of state. EnC pauses execution of your program, lets you change it and resumes execution while retaining the previous program state. Alive has no need to preserve state between executions. It runs a given method and then waits for further instructions.

This difference means that EnC calculates the deltas between each compilation it creates, preserving state. Alive calculates deltas between the initial base compilation and the current state of the code.

How EnC builds deltas across compilations


How Alive builds deltas across compilations


The above deltas are simplified for the sake of explanation. In reality they exist as pairs of IL/metadata deltas. Deltas also aren’t generated at the statement level; when you edit a method, the CLR actually replaces the entire method with your new code.

There are also restrictions on what constitutes a valid edit. For detailed rules I’ll defer to Mike Stall’s post on valid edits, though it’s possibly outdated. (One valid edit he doesn’t mention is the addition of new top-level types to a program.) Programs that use these APIs should have fallback plans for invalid edits. Visual Studio’s EnC simply displays an error saying that it cannot continue while invalid edits are present. Alive falls back to its old approach and re-emits the compilation in its entirety.

In part two we’ll take a look at what it takes to get Roslyn to generate deltas between two compilations.

LRN Quick Tip: How to Test out C# 7 Features with Roslyn

As of November, people outside of the Roslyn team have been able to build and dogfood changes they make to the compiler and language services. Now that the various feature branches have caught up, we can start playing around with some of the proposed features for C#.

If you’d just like to learn about the features, I’ve put up a few videos on binary literals, digit separators and local functions.

I’ve also prepared a video on How to Test out C# 7 Features with Roslyn

The current branches available on GitHub are:

features/Annotated Types
features/Nullable Reference Types
features/constVar
features/local-functions
features/multi-Var
features/openGenericNameInNameof
features/patterns
features/privateprotected
features/ref-returns
features/tuples

The /future branch is where all these features end up once they’re close to complete and ready to be reviewed for more feedback. Today (February 9, 2016) it’s home to binary literals, digit separators and local functions.

Today we’re going to look at the steps necessary to get the /future branch to build and let us test out the new features.

Cloning and Building Roslyn

The first steps are identical to those found on Roslyn’s “Building Debugging and Testing on Windows” guideline.

  1. Clone https://github.com/dotnet/roslyn
  2. Check out the /future branch
  3. Run the “Developer Command Prompt for VS2015” from your start menu.
  4. Navigate to the directory of your Git clone.
  5. Run Restore.cmd in the command prompt to restore NuGet packages. (Note: This sometimes takes up to 30 minutes to complete and may appear to be frozen when it’s not)
  6. Build on the command line before opening in Visual Studio. Run msbuild /v:m /m Roslyn.sln
  7. Open Roslyn.sln

Enabling C# 7 Features in Visual Studio

  1. Navigate to CSharpParseOptions.cs and find IsFeatureEnabled()
  2. Force it to return true to enable all available features (a sketch of this change follows this list)
  3. In the Solution Explorer, set the VisualStudioSetup project as the startup project and press F5 to run.
  4. A new instance of Visual Studio will open with the C# 7 features available for use within VS.
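The change in step 2 is essentially just short-circuiting the method. A sketch (the real method’s body switches on the requested feature, so treat this as illustrative):

// In CSharpParseOptions.cs: force every experimental feature on for local testing.
internal bool IsFeatureEnabled(MessageID feature)
{
    return true;
}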

Note: Although there will be no error squiggles in the editors, you won’t be able to perform full-builds until you deploy your changes to the out-of-process compiler.

Enabling C# 7 Features in the Out-of-Process Compiler

To enable full builds within your experimental Visual Studio:

  1. Make the above changes.
  2. Deploy them to the CompilerExtension project.

There you have it, you can test out local functions, binary literals and digit separators. You can also use a similar approach to try out some of the other feature branches.

Learn Roslyn Now: Part 16 The Emit API

Up until now, we’ve mostly looked at how we can use Roslyn to analyze and manipulate source code. Now we’ll take a look at finishing the compilation process by emitting it to disk or to memory. To start, we’ll just try emitting a simple compilation to disk and checking whether or not it succeeded.
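The original post embeds a gist here; a minimal sketch of what it covers (the sample program and assembly name are placeholders):

// Sketch: compile a small program and emit the .exe and .pdb to disk.
// Requires: using System; using Microsoft.CodeAnalysis; using Microsoft.CodeAnalysis.CSharp; using Microsoft.CodeAnalysis.Emit;
var tree = CSharpSyntaxTree.ParseText(@"
class Program
{
    static void Main() { System.Console.WriteLine(""Hello Emit!""); }
}");

var compilation = CSharpCompilation.Create("output",
    syntaxTrees: new[] { tree },
    references: new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) },
    options: new CSharpCompilationOptions(OutputKind.ConsoleApplication));

EmitResult result = compilation.Emit("output.exe", "output.pdb");
Console.WriteLine(result.Success);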

After running this code we can see that our executable and .pdb have been emitted to Debug/bin/. We can double click output.exe and see that our program runs as expected. Keep in mind that the .pdb file is optional. I’ve only chosen to emit it here to show off the API. Writing the .pdb file to disk can take a fairly long time and it often pays to omit this argument unless you really need it.

Sometimes we might not want to emit to disk. We might just want to compile the code, emit it to memory and then execute it from memory. Keep in mind that for most cases where we’d want to do this, the scripting API probably makes more sense to use. Still, it pays to know our options.
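A sketch of emitting to a MemoryStream and invoking the entry point via reflection (using the compilation from the previous snippet):

// Sketch: emit to memory and run the compiled program without touching disk.
// Requires: using System.IO; using System.Reflection;
using (var ms = new MemoryStream())
{
    EmitResult result = compilation.Emit(ms);

    if (result.Success)
    {
        var assembly = Assembly.Load(ms.ToArray());
        assembly.EntryPoint.Invoke(null, null);  // our sample Main() takes no arguments
    }
}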

Finally, what if we want to influence how our code is compiled? We might want to allow unsafe code, mark warnings as errors or delay-sign the assembly. All of these options can be customized by passing a CSharpCompilationOptions object to CSharpCompilation.Create(). We’ll take a look at how we can interact with a few of these properties below.
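For example (a sketch; the key-file path is a placeholder):

// Sketch: customize a few common compilation options.
var options = new CSharpCompilationOptions(OutputKind.ConsoleApplication)
    .WithAllowUnsafe(true)                               // allow unsafe code
    .WithGeneralDiagnosticOption(ReportDiagnostic.Error) // treat warnings as errors
    .WithOptimizationLevel(OptimizationLevel.Release)
    .WithDelaySign(true)                                 // delay-sign; also requires a key file
    .WithCryptoKeyFile(@"C:\keys\MyKey.snk");

var compilation = CSharpCompilation.Create("output",
    syntaxTrees: new[] { tree },
    references: new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) },
    options: options);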

In total there are about twenty-five different options available for customization. Basically any option you have within Visual Studio’s project property page should be available here.

Advanced options

There are a few optional parameters available in Compilation.Emit() that are worth discussing. Some of them I’m familiar with, but others I’ve never used.

  • xmlDocPath – Auto generates XML documentation based on the documentation comments present on your classes, methods, properties etc.
  • manifestResources – Allows you to manually embed resources such as strings and images within the emitted assembly. Batteries are not included with this API and it requires some heavy lifting if you want to embed .resx resources within your assembly. We’ll explore this overload in a future blog post.
  • win32ResourcesPath – Path of the file from which the compilation’s Win32 resources will be read (in RES format). Unfortunately I haven’t used this API yet and I’m not at all familiar with Win32 Resources.
  • There is also the option to EmitDifference between two compilations. I’m not familiar with this API, and I’m not familiar with how you can apply these deltas to existing assemblies on disk or in memory. I hope to learn more about this API in the coming months.

That just about wraps up the Emit API. If you have any questions, feel free to ask them in the comments below.