Preamble
The first half of this post objectively evaluates the advantages and disadvantages of several options for creating dynamic methods in .NET; the second half introduces the new features of the Natasha V9 release.
Solutions for creating dynamic methods in .NET
Different options for creating dynamic methods
Several options for creating dynamic methods are shown below. Each example takes value as input and produces (value / 0.3) as output:
Emit version
```csharp
DynamicMethod dynamicMethod = new DynamicMethod("FloorDivMethod", typeof(double), new Type[] { typeof(double) }, typeof(Program).Module);
ILGenerator ilGenerator = dynamicMethod.GetILGenerator();
ilGenerator.Emit(OpCodes.Ldarg_0);
ilGenerator.Emit(OpCodes.Ldc_R8, 0.3);
ilGenerator.Emit(OpCodes.Div);
ilGenerator.Emit(OpCodes.Call, typeof(Math).GetMethod("Floor", new Type[] { typeof(double) }));
ilGenerator.Emit(OpCodes.Ret);
Func<double, double> floorDivMethod = (Func<double, double>)dynamicMethod.CreateDelegate(typeof(Func<double, double>));
```
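All of the variants in this section produce a delegate of the same shape, so they can be invoked the same way. A quick sanity check, assuming the Emit version above:

```csharp
// floor(1.0 / 0.3) == floor(3.33...) == 3
Console.WriteLine(floorDivMethod(1.0)); // prints 3
```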
Expression Tree Version
```csharp
ParameterExpression valueParameter = Expression.Parameter(typeof(double), "value");
Expression divisionExpression = Expression.Divide(valueParameter, Expression.Constant(0.3));
Expression floorExpression = Expression.Call(typeof(Math), "Floor", null, divisionExpression);
Expression<Func<double, double>> expression = Expression.Lambda<Func<double, double>>(floorExpression, valueParameter);
Func<double, double> floorDivMethod = expression.Compile();
```
Natasha version
```csharp
AssemblyCSharpBuilder builder = new();
var func = builder
    .UseRandomLoadContext()
    .UseSimpleMode()
    .ConfigLoadContext(ctx => ctx
        .AddReferenceAndUsingCode(typeof(Math))
        .AddReferenceAndUsingCode(typeof(double)))
    .Add("public static class A{ public static double Invoke(double value){ return (value/0.3); }}")
    .GetAssembly()
    .GetDelegateFromShortName<Func<double, double>>("A", "Invoke");
```
Natasha Method Template Wrapper
An extension library packaged on top of the original Natasha, released after Natasha v9.0.
- Lightweight build style:

```csharp
var simpleFunc = "return (arg1/0.3);"
    .WithMetadata(typeof(Math))
    .WithMetadata(typeof(Console)) // If an "object undefined" error is reported, add the missing metadata here as well.
    .ToFunc<double, double>();
```
- Smart build style:

```csharp
var smartFunc = "return (arg1/0.3);".ToFunc<double, double>();
```
Comparison and Analysis of Scenarios
It can be seen that none of these dynamic construction approaches can break free from the typeof(Math) metadata; some even need additional metadata obtained through reflection.
Metadata is an essential ingredient of dynamic method creation, and since every approach relies on it, a comparison is worthwhile:
Option | Coding complexity | Using management | Memory footprint | Unloading | Build speed | Execution performance | Breakpoint debugging | Learning cost
---|---|---|---|---|---|---|---|---
Emit | Complex | Not needed | Low | Unloadable on .NET 9 | Fast | High | Supported on .NET 9 | High
Expression | Moderate | Not needed | Low | Unloadable on .NET 9 | Fast | High | Supported on .NET 9 | Medium
Natasha | Moderate | Needed | Medium | Unloadable | Slow on first build | High | Supported | Medium
Natasha Method | Simpler | Needed | Medium | Unloadable | Slow on first build | High | Supported | Low
Choosing an option
First of all, from a scenario point of view, the Emit / Expression solutions play a very important role in the .NET ecosystem and form the core technology stack of a number of high-performance libraries. Roslyn's dynamic compilation, by contrast, goes through the complete compilation pipeline from source to assembly, and Natasha is built on Roslyn. Although I am the author of Natasha, for small jobs I would recommend expression trees; for more complex dynamic business logic, dynamic frameworks, and dynamic libraries, Natasha is the more comfortable choice. For example, Natasha is not a rules engine, but you can use it to build a rules engine that fits your needs. Likewise, Natasha is not an object mapping library, but you can use it to build an object mapper that matches your own habits; and if you feel the ORMs on the market are not to your taste, you can of course use Natasha to build an ORM you like.
Coding style and Using management
In terms of coding process, Emit is the most complicated. In terms of mindset, Emit is "stack programming": operands are pushed first and operated on afterwards, which is quite different from everyday C# with its keywords and much closer to low-level instructions. You will not see if/switch/while; instead you work with Labels and jump instructions, and you cannot enjoy the conveniences the compiler normally provides. Some operations even have to be lowered by hand; for example, "str1" == "str2" actually has to be replaced with a call to the string equality method (Equals / op_Equality).
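As a small illustration of that last point, here is a minimal sketch of emitting the string comparison by hand; the method and type names are placeholders:

```csharp
using System;
using System.Reflection.Emit;

class EmitStringEqualsDemo
{
    static void Main()
    {
        // Build bool StringEquals(string, string) dynamically.
        var dm = new DynamicMethod("StringEquals", typeof(bool),
            new[] { typeof(string), typeof(string) }, typeof(EmitStringEqualsDemo).Module);
        var il = dm.GetILGenerator();
        il.Emit(OpCodes.Ldarg_0);   // push the first string
        il.Emit(OpCodes.Ldarg_1);   // push the second string
        // There is no "==" at the IL level for strings; call the operator method explicitly.
        il.Emit(OpCodes.Call,
            typeof(string).GetMethod("op_Equality", new[] { typeof(string), typeof(string) })!);
        il.Emit(OpCodes.Ret);

        var eq = (Func<string, string, bool>)dm.CreateDelegate(typeof(Func<string, string, bool>));
        Console.WriteLine(eq("str1", "str2")); // False
    }
}
```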
Expression trees are a bit more comfortable than Emit, but they are still not a natural C# mindset: you are still translating code into another representation. If you enjoy that conversion process and get a sense of accomplishment out of it, then it is for you. I think most people choose the dynamic approach because they have no choice, and either way, not many developers genuinely enjoy it.
Compared to the first two, Natasha requires you to pay attention to "domain" operations and Using references. With domains it is easier to isolate assemblies and unload them: if the assemblies your dynamic code creates will always be useful, just use the default domain; for dynamic features that need to be unloaded or updated, choose a non-default domain.
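As a rough illustration of that choice, here is a minimal sketch based on the builder APIs shown elsewhere in this post (UseDefaultLoadContext / UseRandomLoadContext); the scripts and the reference set are placeholders, and the chaining of UseDefaultLoadContext is assumed to mirror UseRandomLoadContext:

```csharp
// Assemblies that should live for the whole process: compile into the default load context.
var permanentBuilder = new AssemblyCSharpBuilder();
permanentBuilder
    .UseDefaultLoadContext()
    .UseSimpleMode()
    .ConfigLoadContext(ctx => ctx.AddReferenceAndUsingCode(typeof(object)))
    .Add("public static class AlwaysUseful { public static int One() => 1; }")
    .GetAssembly();

// Dynamic features that may need to be unloaded or replaced later: use a non-default context.
var replaceableBuilder = new AssemblyCSharpBuilder();
replaceableBuilder
    .UseRandomLoadContext()
    .UseSimpleMode()
    .ConfigLoadContext(ctx => ctx.AddReferenceAndUsingCode(typeof(object)))
    .Add("public static class HotSwappable { public static int Version() => 1; }")
    .GetAssembly();
```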
In addition to unloading, another aspect of the dynamic build process is Using management. Since using directives are an essential part of a C# script, building dynamic functionality as C# scripts means dealing with code such as `using System;`. The biggest problem you will run into with Using is ambiguous references (CS0104).
Suppose there is a `namespace MyFile{ public static class File{} }` and implicit usings are enabled in the project via `<ImplicitUsings>enable</ImplicitUsings>`. After referencing it you will see errors, ostensibly because of a conflict with the `MyFile` namespace. The actual reason is that both namespaces contain File-related metadata, and semantic analysis cannot infer which File the subsequent code intends to use. This can happen in complex programming environments and is not always discovered and corrected in time.
Once this happens, you need to exclude the offending using from the compilation; Natasha's compilation unit provides an API for excluding specified usings (see "More flexible metadata management" below).
Let's move on to the fourth option, the dynamic method build based on the Natasha wrapper, which is very simple:
- The lightweight build style compiles the metadata referenced on demand into a delegate.
- The smart build style compiles directly into a delegate with full metadata and using coverage; it presupposes that the metadata and Using collections have been preheated (see Natasha's preheating methods for details).
Memory footprint
The first two solutions take up very little system memory. After all, they are straightforward one-step approaches with far fewer analysis and conversion caches; you could say they correspond to just one link in the full Roslyn compilation chain.
Unload support
Hereafter Natasha's "domain" refers to AssemblyLoadContext (ALC).
Limiting the discussion to the four coding cases above, I have not seen any articles on directly unloading dynamic methods created with Emit or expression trees.
However, the PersistedAssemblyBuilder introduced in .NET 9 makes it possible to compile with Emit and stream the output into an ALC:
```csharp
PersistedAssemblyBuilder ab = ....;        // builder created and populated elsewhere
using var stream = new MemoryStream();
ab.Save(stream);                           // persist the emitted assembly into the stream
stream.Seek(0, SeekOrigin.Begin);          // rewind before loading
NatashaDomain domain = new("MyDomain");
var newAssembly = domain.LoadFromStream(stream);
```
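For completeness, here is a minimal sketch of how such a builder might be created, based on the .NET 9 PersistedAssemblyBuilder API; the assembly, type, and method names are placeholders:

```csharp
using System.Reflection;
using System.Reflection.Emit;

// Build a tiny persisted assembly containing A.Invoke(double) => value / 0.3.
PersistedAssemblyBuilder ab = new PersistedAssemblyBuilder(
    new AssemblyName("MyDynamicAssembly"), typeof(object).Assembly);
ModuleBuilder module = ab.DefineDynamicModule("MyModule");
TypeBuilder type = module.DefineType("A", TypeAttributes.Public | TypeAttributes.Class);
MethodBuilder method = type.DefineMethod("Invoke",
    MethodAttributes.Public | MethodAttributes.Static, typeof(double), new[] { typeof(double) });
ILGenerator il = method.GetILGenerator();
il.Emit(OpCodes.Ldarg_0);
il.Emit(OpCodes.Ldc_R8, 0.3);
il.Emit(OpCodes.Div);
il.Emit(OpCodes.Ret);
type.CreateType();
// 'ab' can now be saved into a stream as shown above.
```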
.NET 9 now supports saving assemblies, but the unload feature of ALC is difficult to work with; the official ALC unload operation is almost powerless. In theory, as long as a type is still in use it cannot be unloaded, and there are many such cases: a static generic type, a global event, or metadata compiled into some method that is never cleaned up and offers no cleanup API all become zombie types for the program. There is no official way to force the release of data that is still in use. If my program has 60 dependencies, I would need to find the authors of those dependencies, maybe more than 60, and ask them one by one: how can your library clean up the ALC-created metadata it holds? Then I would attach some debugging evidence and tell them that a lot of unreleased metadata related to their XXX was found in the Gen 2 GC. It is tedious and even a bit ridiculous, but yes, that is how it is. That is why I have been thinking about and experimenting with whole-domain proxies to block out-of-domain references while building HotExector.
That said, if it is your own packaged framework, this unload problem becomes much easier, because you know what needs to be cleaned up and which fields need to be emptied.
Build speed
Similar to the memory footprint note, a full compilation will certainly take longer than a single link of it, but Roslyn caches and warms up some data internally, so after the first compilation subsequent compilations are very fast.
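A small way to observe this warm-up effect, reusing the simple-mode builder pattern shown earlier (the Natasha using directives and the scripts are placeholders):

```csharp
using System;
using System.Diagnostics;
// plus the Natasha using directives that expose AssemblyCSharpBuilder (omitted here)

var sw = Stopwatch.StartNew();
Compile("public static class A { public static int V() => 1; }");
Console.WriteLine($"first compilation:  {sw.ElapsedMilliseconds} ms");

sw.Restart();
Compile("public static class B { public static int V() => 2; }");
Console.WriteLine($"second compilation: {sw.ElapsedMilliseconds} ms"); // expected to be much faster

static void Compile(string script)
{
    var builder = new AssemblyCSharpBuilder();
    builder.UseRandomLoadContext()
           .UseSimpleMode()
           .ConfigLoadContext(ctx => ctx.AddReferenceAndUsingCode(typeof(object)))
           .Add(script)
           .GetAssembly();
}
```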
Execution performance
If the logic of hand-written Emit code matches the logic Roslyn produces for an internally optimized script, execution performance is theoretically equal. For example, in numeric lookup logic made of multiple if or switch branches, Roslyn may optimize the lookup, for instance replacing the original branch-by-branch logic with a binary search. If you cannot come up with such optimizations yourself, the performance of hand-written Emit code can only rely on later JIT optimization to catch up; and taking the JIT's later optimizations into account, all of the options can reach near-optimal results, which is why they are all rated "high". Developers should understand the difference: compared with hand-written Emit, the IL that Roslyn compiles is generally better structured and higher performing.
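To make the switch example concrete, here is a plain C# lookup of the kind meant above; when Roslyn compiles it (for instance inside a Natasha script), the compiler is free to lower the switch into a jump table or a binary search over the case values, something you would have to hand-craft in Emit:

```csharp
using System;

class SwitchLoweringDemo
{
    // A numeric lookup written as an ordinary C# switch expression.
    static string Lookup(int code) => code switch
    {
        100 => "continue",
        200 => "ok",
        301 => "moved",
        404 => "not found",
        500 => "server error",
        _   => "unknown"
    };

    static void Main() => Console.WriteLine(Lookup(404)); // not found
}
```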
Breakpoint debugging
Natasha has supported script breakpoint debugging since V8; V9 improves compatibility of PDB output across different platforms, and the Natasha compilation framework supports .netstd2.0.
As mentioned above, .NET 9's PersistedAssemblyBuilder can use its GenerateMetadata method to generate the metadata stream, from which a PortablePdbBuilder debugging-data instance is created; that is then serialized into a Blob (BlobBuilder) and finally written to the PDB stream. With the PDB file in place, breakpoint debugging becomes feasible.
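A compressed sketch of that chain, assuming the .NET 9 GenerateMetadata overload that also returns a PDB MetadataBuilder; the entry-point handle and file name are placeholders:

```csharp
using System.IO;
using System.Reflection.Emit;
using System.Reflection.Metadata;
using System.Reflection.Metadata.Ecma335;

// 'ab' is a populated PersistedAssemblyBuilder, as in the earlier snippets.
MetadataBuilder metadata = ab.GenerateMetadata(
    out BlobBuilder ilStream, out BlobBuilder fieldData, out MetadataBuilder pdbMetadata);

// Wrap the PDB metadata in a PortablePdbBuilder and serialize it into a blob.
var portablePdbBuilder = new PortablePdbBuilder(
    pdbMetadata, metadata.GetRowCounts(), entryPoint: default);
var pdbBlob = new BlobBuilder();
portablePdbBuilder.Serialize(pdbBlob);

// Write the blob out as the .pdb file that makes breakpoint debugging possible.
using var pdbStream = new FileStream("MyDynamicAssembly.pdb", FileMode.Create, FileAccess.Write);
pdbBlob.WriteContentTo(pdbStream);
```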
Natasha and the advantages of dynamic method templates
Nested ("matryoshka") compilation
With Natasha it is possible to do nested compilation — using dynamic compilation inside dynamic compilation — which is complex logic. As a simple example, suppose the requirement is to generate a dynamic type A, a dynamic delegate B is generated in A's static initialization method, and part of B's logic is based on data received from the dynamic types C and D. In the expression tree and Emit mindset this means juggling data and types in transit, and the compilation process will need member metadata of A that has not been compiled yet, which is a very roundabout affair. In this case I recommend Natasha, simply on grounds of learning cost and time: when normal thinking lets you write a script in 5 minutes, why spend 20 minutes or even an hour finding a workaround, designing caches, and customizing strict runtime coding rules?
Private members
Many developers have the habit of reading source code, and in high-performance scenarios will even modify and customize it. Some developers are 200% confident that the private instances and methods they grab will not be interfered with by the program. This is where problems appear: whether you re-customize the source code or call methods on exposed instances, you will run into access-permission issues. To save space, here is an example:
It is known that in MVC frameworks that support hot reloading there are two service instances, IControllerPropertyActivator / IModelMetadataProvider, which provide a private ClearCache method for clearing the interface's metadata cache. However, because of access restrictions on the IControllerPropertyActivator interface, the IDE reports an error when you write the code: the IControllerPropertyActivator interface type can only be obtained by first getting an instance of it and then getting the type through reflection, which is a contradiction in terms. And if I do not know which type implements the interface, or the implementing type is itself private, how do I get an instance of it in the first place?
Before V9, Natasha required you to do your own customization to enable private operations; let's look at how it works after the V9 update:
```csharp
// Turn on the private compilation switch
();
// Rewrite the script, registering the private targets by assembly/namespace strings
(("",""));
// or by a type plus a string
((typeof(IModelMetadataProvider),""));
// or by private instances
((instance1, instance2, ...));
```
With the options above turned on, Natasha will rewrite the syntax tree to support private operations, and the final script needs no special handling. Here is an example of a script that uses Natasha to bypass the access check (serviceProvider below is assumed to be the application's IServiceProvider):

```csharp
// Written directly in an IDE, this code would be rejected because of access-permission issues.
var modelMetadataProvider = serviceProvider.GetService<IModelMetadataProvider>();
var controllerActivatorProvider = serviceProvider.GetService<IControllerPropertyActivator>();
((DefaultModelMetadataProvider)modelMetadataProvider).ClearCache();
((DefaultControllerPropertyActivator)controllerActivatorProvider).ClearCache();
```
Security-related
Some people say that using scripts leads to security problems. I think this statement is too one-sided; human factors should not be blamed on a library, and Natasha will not open backdoors or vulnerabilities in an application by itself. Any uploaded text or image needs rigorous review, and the same goes for uploaded scripts: even if a script contains no illegal network-request code, code that hogs resources or touches data security still needs to be strictly checked. For services that must support large numbers of dynamic scripts, the service should strictly limit the available metadata and standardize the granularity of functional scripts.
Natasha V9 New Version Changes
Project home page: /dotnetcore/Natasha
Chained Initialization API
To make initialization easier to understand, the new version adds a set of chained APIs. This group of APIs makes it easier to control Natasha's initialization behavior.
```csharp
NatashaManagement
    .GetInitializer()
    .WithMemoryUsing()      // Without this, Natasha will not scan memory to collect using code.
    .WithMemoryReference()
    .Preheating<NatashaDomainCreator>();
```
Note: When using Smart Mode, warming up Using and Reference is necessary unless you can manage these well.
More flexible metadata management
- Since v9, simple mode (the self-managed metadata mode) supports adding metadata and using code separately, instead of only adding references and usings together.
- The compilation unit allows adding collections of excluded using code (passing namespace strings such as "MyNamespace"), which prevents the specified usings from being added to the syntax tree.
Getting exceptions externally
- Enhanced error reporting: when a compilation exception occurs, an error-level exception is now thrown first instead of a warning.
- Added a "GetException" API to retrieve exception errors outside of Natasha's compilation cycle.
Repeated compilation
The V9 release has a number of enhancements in the area of recompilation to further increase reusability and flexibility.
1. Repeated file output
- WithForceCleanOutput: when used, turns on the force-clean-files switch to avoid IO errors during repeated compilation.
- WithoutForceCleanOutput is the default. In this case, when a duplicate output file is encountered, Natasha automatically renames the output to a new file name.
2. Repeated compilation options
- WithPreCompilationOptions: when enabled, reuses the previously generated compilation options (or generates new ones if there are none); this corresponds to the CSharpCompilationOptions parameter. If you need to switch debug/release, unsafe/nullable, etc. for the second compilation, disable this option.
- WithoutPreCompilationOptions is the default; it does not lock the CompilationOptions and ensures they are up to date for every compilation.
3. Repeated references
- WithPreCompilationReferences: when turned on, reuses the previous set of metadata references.
- WithoutPreCompilationReferences is the default.
- The comments on the reference-related APIs have been improved in the new version to make their behavior easier to understand.
4. Private scripting support
You need to add the following file:
```csharp
// This attribute is conventionally declared in the System.Runtime.CompilerServices namespace.
namespace System.Runtime.CompilerServices
{
    [AttributeUsage(AttributeTargets.Assembly, AllowMultiple = true)]
    public class IgnoresAccessChecksToAttribute : Attribute
    {
        public IgnoresAccessChecksToAttribute(string assemblyName)
        {
            AssemblyName = assemblyName;
        }
        public string AssemblyName { get; }
    }
}
```
```csharp
// Add private annotations (tags) to the current script.
// privateObjects can be private instances, private types, or namespace strings.
classScript = (privateObjects);

builder
    .WithPrivateAccess() // the compilation unit turns on private metadata support
    .Add(classScript);
```
5. Compilation optimization levels
Note: before using dynamic debugging, please turn off [Address-Level Debugging] under Tools - Options - Debugging.
Natasha v9 refines the compilation optimization levels:
```csharp
// Normal Debug mode
WithDebugCompile(item => ()/ForStandard()/ForAssembly())
// Enhanced Debug mode
WithDebugPlusCompile(item => ()/ForStandard()/ForAssembly())
// Normal Release mode
WithReleaseCompile()
// Enhanced Release mode
WithReleasePlusCompile()
```
Theoretically, the enhanced ("Plus") modes can be understood as "dig to the bottom and show everything" modes. Although the normal modes are sufficient, the enhanced modes can output debugging information at a finer granularity, including some implicit conversions.
Note: in my experiments I did not see more detailed debugging results; experienced readers are welcome to tell me which code produces a more fine-grained result.
6. Other APIs
- A large number of API comments have been rewritten in Chinese in the new version, making them more down-to-earth and easier to understand. Since the compilation unit (AssemblyCSharpBuilder) saves the user's configuration in a more stateful way, brief notes about reuse have also been added to the API comments.
- The familiar UseDefaultDomain() is obsolete; UseDefaultLoadContext() better matches the intent of the API. The Domain family is no longer the leading API of the compilation unit and is replaced by the LoadContext family as of the V9 release.
- Added the CompileWithoutAssembly API, which lets developers skip loading the compiled assembly into the domain after compilation; the program will not actually load the compiled assembly.
Wrapping up
I thought I was getting the tip of the iceberg with Roslyn, but I didn't realize it was just an ice floe.