"...When writing CPU-intensive code, it sometimes makes sense to use volatile fields, as long as you only rely on the ECMA C# specification guarantees and not on architecture-specific implementation details..."
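As a minimal sketch of what that means in practice (my own illustration, not from the quoted text): a volatile field gives you the visibility and ordering guarantees of the ECMA specification without taking a lock, which is often enough for a simple cross-thread stop flag.

```csharp
using System;
using System.Threading;

class Worker
{
    // volatile: per the ECMA spec, reads have acquire semantics and writes have
    // release semantics, so the loop below is guaranteed to eventually observe
    // the store made by Stop() - the read cannot be hoisted out of the loop.
    volatile bool stopRequested;

    public void Run()
    {
        while (!stopRequested)
        {
            // ... CPU-intensive work ...
        }
    }

    public void Stop() => stopRequested = true;
}

class Program
{
    static void Main()
    {
        var w = new Worker();
        var t = new Thread (w.Run);
        t.Start();
        Thread.Sleep (100);
        w.Stop();
        t.Join();      // returns, because the volatile read sees the flag
        Console.WriteLine ("stopped");
    }
}
```

Relying on anything stronger than this (for example, on x86's strong memory model making non-volatile reads "work anyway") is exactly the architecture-specific trap the quote warns against.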
Here's how fluent extension methods might be defined (the generic type parameters were stripped by the HTML extraction; they are restored below):

using System;
using System.Threading;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

public static class Extensions
{
    public static TransformBlock<TInput, TOutput> AddTransform<TInput, TOutput> (
        this ISourceBlock<TInput> source,
        Func<TInput, TOutput> transform,
        ExecutionDataflowBlockOptions options = null)
    {
        var transformBlock = new TransformBlock<TInput, TOutput> (transform, options ?? new ExecutionDataflowBlockOptions());
        source.LinkTo (transformBlock);
        source.Completion.ContinueWith (_ => transformBlock.Complete());
        return transformBlock;
    }

    public static TransformBlock<TInput, TOutput> AddTransform<TInput, TOutput> (
        this ISourceBlock<TInput> source,
        Func<TInput, TOutput> transform,
        int maxParallelism,
        int boundedCapacity = -1,
        CancellationToken cancelToken = default (CancellationToken),
        TaskScheduler scheduler = null)
    {
        return AddTransform (source, transform,
            GetExecutionOptions (maxParallelism, boundedCapacity, cancelToken, scheduler));
    }

    // Async overloads: the transform returns a Task<TOutput>.
    public static TransformBlock<TInput, TOutput> AddTransform<TInput, TOutput> (
        this ISourceBlock<TInput> source,
        Func<TInput, Task<TOutput>> transform,
        ExecutionDataflowBlockOptions options = null)
    {
        var transformBlock = new TransformBlock<TInput, TOutput> (transform, options ?? new ExecutionDataflowBlockOptions());
        source.LinkTo (transformBlock);
        source.Completion.ContinueWith (_ => transformBlock.Complete());
        return transformBlock;
    }

    public static TransformBlock<TInput, TOutput> AddTransform<TInput, TOutput> (
        this ISourceBlock<TInput> source,
        Func<TInput, Task<TOutput>> transform,
        int maxParallelism,
        int boundedCapacity = -1,
        CancellationToken cancelToken = default (CancellationToken),
        TaskScheduler scheduler = null)
    {
        return AddTransform (source, transform,
            GetExecutionOptions (maxParallelism, boundedCapacity, cancelToken, scheduler));
    }

    public static ActionBlock<TInput> AddAction<TInput> (
        this ISourceBlock<TInput> source,
        Action<TInput> action,
        ExecutionDataflowBlockOptions options = null)
    {
        var actionBlock = new ActionBlock<TInput> (action, options ?? new ExecutionDataflowBlockOptions());
        source.LinkTo (actionBlock);
        source.Completion.ContinueWith (_ => actionBlock.Complete());
        return actionBlock;
    }

    public static ActionBlock<TInput> AddAction<TInput> (
        this ISourceBlock<TInput> source,
        Action<TInput> action,
        int maxParallelism,
        int boundedCapacity = -1,
        CancellationToken cancelToken = default (CancellationToken),
        TaskScheduler scheduler = null)
    {
        return AddAction (source, action,
            GetExecutionOptions (maxParallelism, boundedCapacity, cancelToken, scheduler));
    }

    public static BufferBlock<T> AddBuffer<T> (
        this ISourceBlock<T> source,
        DataflowBlockOptions options = null)
    {
        var bufferBlock = new BufferBlock<T> (options ?? new DataflowBlockOptions());
        source.LinkTo (bufferBlock);
        source.Completion.ContinueWith (_ => bufferBlock.Complete());
        return bufferBlock;
    }

    public static BufferBlock<T> AddBuffer<T> (this ISourceBlock<T> source, int boundedCapacity = -1)
    {
        return AddBuffer (source, new DataflowBlockOptions { BoundedCapacity = boundedCapacity });
    }

    public static ExecutionDataflowBlockOptions GetExecutionOptions (
        int maxParallelism = 1,
        int boundedCapacity = -1,
        CancellationToken cancelToken = default (CancellationToken),
        TaskScheduler scheduler = null)
    {
        var options = new ExecutionDataflowBlockOptions
        {
            BoundedCapacity = boundedCapacity,
            MaxDegreeOfParallelism = maxParallelism,
            CancellationToken = cancelToken
        };
        if (scheduler != null)
            options.TaskScheduler = scheduler;
        return options;
    }
}
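As a rough usage sketch (my own example, assuming the extension methods above are in scope and the System.Threading.Tasks.Dataflow package is referenced), a pipeline can then be wired up fluently from a source block:

```csharp
using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Dataflow;

class Program
{
    static async Task Main()
    {
        var source = new BufferBlock<int>();

        // Each AddX call links the previous block to the new one and
        // chains completion, so Complete() on the source drains the chain.
        var sink = source
            .AddTransform (n => n * n, maxParallelism: 4)
            .AddAction (n => Console.WriteLine (n));

        for (int i = 1; i <= 5; i++)
            source.Post (i);
        source.Complete();

        await sink.Completion;   // returns once the whole chain has drained
    }
}
```

Note that the `ContinueWith (_ => block.Complete())` pattern in the extensions propagates completion but not faults; if you need exceptions to flow down the chain, linking with `new DataflowLinkOptions { PropagateCompletion = true }` is the built-in alternative.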
public static Action ActionBarrier( this Action action, long remainingCallsAllowed = 1 )
{
    var context = new ContextCallOnlyXTimes( remainingCallsAllowed );
    return () =>
    {
        // Interlocked.Decrement makes the countdown thread-safe: at most
        // remainingCallsAllowed callers ever see a non-negative value.
        if ( Interlocked.Decrement( ref context.CallsAllowed ) >= 0 )
        {
            action();
        }
    };
}

public class ContextCallOnlyXTimes
{
    public ContextCallOnlyXTimes( long times )
    {
        if ( times <= 0 ) { times = 0; }
        this.CallsAllowed = times;
    }

    public long CallsAllowed;
}

Example:

private static void ActionBarrierExample()
{
    var foo = new Action( Foo );
    var fooWithBarrier = foo.ActionBarrier( remainingCallsAllowed: 1 );
    fooWithBarrier();   // invokes Foo
    fooWithBarrier();   // no-op
    fooWithBarrier();   // no-op

    // Four threads race, but Bar runs at most twice.
    var barWithBarrier = ThreadingExtensions.ActionBarrier( action: Bar, remainingCallsAllowed: 2 );
    var bob1 = new Thread( () => barWithBarrier() );
    var bob2 = new Thread( () => barWithBarrier() );
    var bob3 = new Thread( () => barWithBarrier() );
    var bob4 = new Thread( () => barWithBarrier() );
    bob1.Start();
    bob2.Start();
    bob3.Start();
    bob4.Start();
    bob1.Join();
    bob2.Join();
    bob3.Join();
    bob4.Join();

    Console.WriteLine( "enter return" );
    Console.ReadLine();
}
Why would round-robin allocation cause things to slow down for memory spills to tempdb with a large number of files? A couple of possibilities:
- Round-robin allocation is per filegroup, and tempdb can only have one filegroup. With 16, 32, or more files in tempdb, and very large allocations happening from just a few threads, the extra synchronization and work needed for round-robin allocation starts to add up and become noticeable: examining the allocation weighting for each file, deciding whether to allocate from it or decrement the weighting, and quite frequently recalculating the weightings for all files (every 8,192 allocations). That workload is very different from lots of threads doing lots of small allocations, and also very different from allocating in a single-file filegroup, which is (obviously) optimized to skip round-robin entirely.
- Your tempdb data files are not the same size and so the auto-grow is only growing a single file (the algorithm is unfortunately broken), leading to skewed usage and an I/O hotspot.
- On systems with a relatively small buffer pool but *lots* of tempdb data, having too many files can lead to essentially random I/O patterns when the buffer pool needs to free up space through the lazywriter (tempdb checkpoints don’t flush data pages). If the I/O subsystem can’t handle that load spread across multiple files, it will start to slow down.
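To make the first point concrete, here is a deliberately simplified, hypothetical sketch of a weighted round-robin allocator. It is *not* SQL Server's actual code; all names and numbers are invented except the 8,192-allocation recalculation interval mentioned above. The point it illustrates is that every allocation funnels through shared state (the cursor and the weightings), so the synchronization cost grows with the number of files and with allocation volume.

```csharp
using System;
using System.Threading;

// Toy model of per-file weighted round-robin allocation (illustrative only).
class RoundRobinAllocator
{
    readonly int[]  weights;     // remaining "allocation credits" per file
    readonly long[] freeSpace;   // toy proxy for free space per file
    readonly object gate = new object();
    int next;                    // round-robin cursor, shared by all threads
    int allocationsSinceRecalc;

    public RoundRobinAllocator (int fileCount)
    {
        weights   = new int[fileCount];
        freeSpace = new long[fileCount];
        for (int i = 0; i < fileCount; i++) freeSpace[i] = 1_000_000;
        Recalculate();
    }

    public int Allocate (long size)
    {
        lock (gate)   // every allocation serializes here - the cost that grows with file count
        {
            for (int attempts = 0; attempts < weights.Length; attempts++)
            {
                int file = next;
                next = (next + 1) % weights.Length;
                if (weights[file] > 0)
                {
                    weights[file]--;                  // consume one credit
                    freeSpace[file] -= size;
                    if (++allocationsSinceRecalc >= 8192)
                        Recalculate();                // periodic rebalancing across *all* files
                    return file;
                }
            }
            Recalculate();                            // all credits spent: rebalance, retry
            return Allocate (size);
        }
    }

    void Recalculate()
    {
        // Give files with more free space proportionally more credits.
        for (int i = 0; i < weights.Length; i++)
            weights[i] = (int) Math.Max (1, freeSpace[i] / 100_000);
        allocationsSinceRecalc = 0;
    }
}
```

With a few threads doing very large allocations, each one holds this kind of shared path longer and the per-file bookkeeping scales with file count, which is why "more files" can make spills slower rather than faster.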
CONCLUSION
Technically there is one more spill class: the spool spill. Since spools are *meant* to spill, the presence of a spool spill is usually less of a concern.

The purpose of this article is to show that there is ample documentation available on MSDN regarding these spill events. The tempdb spills are easily detectable and reasonably well explained in the product documentation. The presence of spills may indicate potential performance problems, since a spill involves disk reads and writes and is many times slower than the corresponding in-memory-only operation. Spills also add load to tempdb and may cause contention.