Thursday, 20 March 2014

Parallel Computing in .NET Framework 4.0 - Part 4

Pros and Cons of Parallel Programming

The common problems faced when developing parallel code are the same as those seen when using multiple threads. Some of these problems are lessened by the parallelism classes of the .NET framework but they are not removed completely, so are worthy of mention.
·    Synchronisation: A class of problem that you will encounter with parallel programming. When you start a number of tasks simultaneously, at some point in the future those tasks must "join up", perhaps to combine their results. Synchronisation controls this process to ensure that the result of a task is not used until that task has completed.
Sometimes we use synchronisation to prevent parallel tasks from interfering with each other. If you have code that is not thread-safe, it may be necessary to prevent two processes from accessing that code simultaneously.
In reality, some problems have no lock-free solutions, so locking mechanisms are used that stop code or data from being accessed until the thread holding the lock releases it. This can severely affect performance, especially when a large number of processors all need to access the locked code.
·   Race Conditions: A race condition occurs when parallel tasks are dependent on shared data, generally when the synchronisation around that data is not implemented correctly. One process may perform operations using shared data that temporarily leave a value in an inconsistent state. If the other process uses the inconsistent data, unpredictable behavior can occur. Worse, errors may only occur rarely and be difficult to predict or recreate.

·   Blocking: Locking is used to avoid synchronisation problems. A lock can be requested by one task to prevent a section of code from being entered or a shared state variable from being accessed by another task. This is a technique that can be used to synchronise threads and prevent race conditions. When a process requests a lock that has already been granted to another thread, the first process stops executing and waits for the lock to be released. The stopped thread is said to have been blocked. Usually the blocked thread will eventually obtain the lock and continue working as normal. However, if there is excessive blocking, some processors may become idle as they are starved of work. This impacts performance.
·   Deadlocking: Deadlocking is an extreme state of blocking involving two or more processes. In the simplest situation you may have two tasks that are each blocked by the other. As each task is blocked and will not continue until the other has released its lock, the deadlock cannot be broken and the two tasks will potentially be blocked forever.
But you can write better code and avoid the above pitfalls by approaching parallel programming with extra care and understanding. Now I will discuss some of the benefits we can gain from parallel programming:
·   The new parallel programming functionality in the .NET framework provides several benefits that make it the preferred choice over standard multi-threading. When manually creating threads, you may create too many, leading to excessive task-switching operations that affect performance. You may also create too few, leaving processors idle. These are some of the key problems that the new classes aim to address.
·   Both the TPL and PLINQ provide automatic data decomposition. Although you can control decomposition, usually the standard behavior is sufficient. This behavior is intelligent. For example, after decomposition and allocation of work, the activity of each processor is continually considered. If it turns out that the work assigned to one processor is more time-consuming than that of another, a work-stealing algorithm is used to transfer work from the busy processor to the under-utilized one.
·    It is important to understand that the new libraries provide potential parallelism. With standard multi-threading, when you launch a new thread it immediately starts its work. This might not be the most efficient way of utilizing the available processors. The parallelism libraries may launch new threads if processor cores are available. If they are not, tasks may be postponed until a core becomes free or until the result of the operation is actually needed.

·   Finally, the new libraries allow you to not worry about the number of available cores and the number that might be available on future computers. All of the available cores will be utilized as required. If the code is executed on a single-processor machine, it will be mostly executed sequentially. A little overhead is introduced by the parallelism libraries so parallel code running on a single core machine will run more slowly than purely sequential code. However, this impact is minor when compared with the benefits gained.

Parallel Computing in .NET Framework 4.0 - Part 3

PLINQ (Parallel - LINQ)

LINQ is Microsoft's new baby (new technology), introduced with the .NET Framework 3.5. LINQ stands for Language-Integrated Query and is pronounced "link". LINQ is designed to fill the gap between .NET languages and query languages such as SQL, with syntax specifically designed for query operations. With the introduction of LINQ into .NET, query becomes a first-class concept in .NET, whether for object, XML, or data queries.

Microsoft has given developers a native syntax, in the form of LINQ for C# and VB.NET, for accessing data from almost any repository. The repository could be an in-memory object collection, a database (still MS SQL Server only), or XML files.
The parallel LINQ approach is declarative rather than imperative, in the same way as the sequential version of LINQ. This approach to parallelism is of a higher level than that provided by the TPL. It allows the use of the standard query operators, which you should be familiar with, whilst automatically assigning work to be carried out simultaneously by the available processors.
For complete details, visit the MSDN link:
http://msdn.microsoft.com/en-us/library/dd460688(v=vs.100).aspx
The nature of many queries means that the work can easily be divided to take advantage of a parallel approach. Most queries perform the same group of actions for each item in a collection. If those actions are independent, with no side effects caused by the order in which items are processed, you can often achieve a large performance increase by dividing the work between several processor cores. To support these scenarios, the .NET Framework version 4.0 introduced Parallel LINQ (PLINQ).
PLINQ provides the same standard query operators and query expression syntax as LINQ. The key difference is that the source data can be broken into several groups using data decomposition.
LINQ works with sequences that implement the IEnumerable<T> interface. To signify that we wish to use PLINQ, we must ensure that the source sequence supports parallelism. To do so, we can use the static AsParallel method of the ParallelEnumerable class. This is an extension method of IEnumerable<T>, so can be applied to any sequence that supports LINQ operations. It returns an object of the type ParallelQuery<T>.
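As a minimal sketch (the class and variable names here are illustrative, not from the original post), a sequential LINQ query becomes parallel simply by inserting AsParallel into the chain:

```csharp
using System;
using System.Linq;

class PlinqDemo
{
    static void Main()
    {
        // AsParallel converts the IEnumerable<int> into a ParallelQuery<int>,
        // so the Where and Select operators may run across several cores.
        var squares = Enumerable.Range(1, 1000)
                                .AsParallel()
                                .Where(n => n % 2 == 0)
                                .Select(n => n * n)
                                .ToList();

        // The result set is deterministic even though the order of
        // processing is not; sort (or use AsOrdered) before relying on order.
        squares.Sort();
        Console.WriteLine(squares.Count);   // 500 even numbers in 1..1000
        Console.WriteLine(squares[0]);      // smallest square: 2 * 2 = 4
    }
}
```

Because iterations may complete in any order, order-sensitive queries should either sort the results afterwards, as above, or apply the AsOrdered operator.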
Handling Exceptions with PLINQ: When you execute a sequential query using LINQ, any one of the data items processed may lead to an exception. When an exception is thrown, the query stops executing immediately. With PLINQ, it is possible for the processing of multiple items to be executing concurrently. If one of these throws an exception, all of the other threads will stop but only after any scheduled operations have completed. 
This may mean that there is a delay between the exceptional event and the PLINQ query halting if the query operations are slow. It also means that any of the other parallel operations may also throw an exception.
To deal with the possibility of a query causing multiple exceptions, all exceptions from a PLINQ query are combined in a single AggregateException, which is thrown when all of the threads of execution stop. As when using parallel loops and tasks, you can capture this and examine its InnerExceptions property to find all of the thrown exceptions.
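A short sketch of this pattern (the failing condition and class name are invented for illustration):

```csharp
using System;
using System.Linq;

class PlinqExceptionDemo
{
    static void Main()
    {
        try
        {
            // Several items may throw concurrently; PLINQ gathers every
            // exception before rethrowing them as one AggregateException.
            var results = Enumerable.Range(0, 100)
                                    .AsParallel()
                                    .Select(n =>
                                    {
                                        if (n % 10 == 0)
                                            throw new InvalidOperationException("Bad item " + n);
                                        return n;
                                    })
                                    .ToList();
        }
        catch (AggregateException ae)
        {
            // InnerExceptions holds one entry per failure observed before
            // the query shut down; the exact count varies from run to run.
            Console.WriteLine("Caught " + ae.InnerExceptions.Count + " exception(s)");
        }
    }
}
```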

Parallel Computing in .NET Framework 4.0 - Part 2

Parallel Computing in .NET Framework 4.0


Computers in the near future are expected to have significantly more cores. To take advantage of the hardware, you can parallelize your code to distribute work across multiple processors. In the past, parallelization required low-level manipulation of threads and locks.
Visual Studio 2010 and the .NET Framework 4 enhance support for parallel programming by providing a new runtime, new class library types, and new diagnostic tools. These features simplify parallel development so that you can write efficient, fine-grained, and scalable parallel code in a natural idiom without having to work directly with threads or the thread pool.
At a higher level, the .NET Framework 4.0 provides two major libraries for parallel programming. These are the Task Parallel Library (TPL) and the parallel version of Language-Integrated Query (PLINQ).
Note: Starting with the .NET Framework 4, the TPL is the preferred way to write multithreaded and parallel code. However, not all code is suitable for parallelization; for example, if a loop performs only a small amount of work on each iteration, or it doesn't run for many iterations, then the overhead of parallelization can cause the code to run more slowly.

To know more about Parallel Computing, please visit:
http://msdn.microsoft.com/en-us/library/dd460693(v=vs.100).aspx

Task Parallel Library [TPL]

The Task Parallel Library (TPL) is a set of public types and APIs in the System.Threading and System.Threading.Tasks namespaces in the .NET Framework version 4. The purpose of the TPL is to make developers more productive by simplifying the process of adding parallelism and concurrency to applications. The TPL scales the degree of concurrency dynamically to most efficiently use all the processors that are available.
In addition, the TPL handles the partitioning of the work, the scheduling of threads on the ThreadPool, cancellation support, state management, and other low-level details. By using TPL, you can maximize the performance of your code while focusing on the work that your program is designed to accomplish.
Starting with the .NET Framework 4, the TPL is the preferred way to write multithreaded and parallel code. The Task Parallel Library provides parallelism based upon both data and task decomposition. Data parallelism is simplified with new versions of for loop and foreach loop that automatically decompose the data and separate the iterations onto all available processor cores.
Task parallelism is provided by new classes that allow tasks to be defined using lambda expressions. You can create tasks and let the .NET framework determine when they will execute and which of the available processors will perform the work.

Data Parallelism (Task Parallel Library)

Data parallelism is usually applied to large data processing tasks. It refers to operations that are performed concurrently (that is, in parallel) on the elements of a source collection. Data parallelism is supported by several overloads of the For and ForEach methods in the System.Threading.Tasks.Parallel class.
In data parallel operations, the source collection is partitioned so that multiple threads can operate on different segments concurrently. The System.Threading.Tasks.Parallel class provides method-based parallel implementations of for and foreach loops (For and For Each in Visual Basic).
You write the loop logic for a Parallel.For or Parallel.ForEach loop much as you would write a sequential loop. You do not have to create threads or queue work items. In basic loops, you do not have to take locks. The TPL handles all the low-level work for you.
Note: Many examples use lambda expressions to define delegates in TPL. If you are not familiar with lambda expressions in C# or Visual Basic, visit http://msdn.microsoft.com/en-us/library/dd460699(v=vs.100).aspx for Lambda Expressions in PLINQ / TPL.
Lambda Expressions (C#): A lambda expression is an anonymous function that can contain expressions and statements, and can be used to create delegates or expression tree types. All lambda expressions use the lambda operator =>, which is read as "goes to". The left side of the lambda operator specifies the input parameters (if any) and the right side holds the expression or statement block. The lambda expression x => x * x is read "x goes to x times x."
The => operator has the same precedence as assignment (=) and is right-associative. Lambdas are used in method-based LINQ queries as arguments to standard query operator methods such as Where.
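A minimal sketch of both uses described above (names are illustrative):

```csharp
using System;
using System.Linq;

class LambdaDemo
{
    static void Main()
    {
        // x => x * x : "x goes to x times x", stored in a Func delegate.
        Func<int, int> square = x => x * x;
        Console.WriteLine(square(5));                       // 25

        // The same lambda shape used as an argument to a query operator.
        int[] data = { 1, 2, 3, 4, 5 };
        Console.WriteLine(data.Where(x => x > 2).Count());  // 3
    }
}
```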
1. Parallel.For: Here we will consider the parallel for loop. This provides some of the functionality of the basic for loop, allowing you to create a loop with a fixed number of iterations. If multiple cores are available, the iterations can be decomposed into groups that are executed in parallel.
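A minimal sketch of such a loop (class and array names are illustrative); each iteration writes only to its own slot, so no locking is required:

```csharp
using System;
using System.Threading.Tasks;

class ParallelForDemo
{
    static void Main()
    {
        var squares = new int[10];

        // Iterations 0..9 are decomposed into groups that may run on
        // different cores; each iteration touches only its own element.
        Parallel.For(0, 10, i => squares[i] = i * i);

        Console.WriteLine(string.Join(",", squares));
        // 0,1,4,9,16,25,36,49,64,81 -- the results land in order even
        // though the iterations themselves may not have run in order.
    }
}
```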
Potential Pitfalls in the Parallel.For loop: Having worked through the examples above, we can say that Parallel.For is very easy to implement and work with, but as developers we need to watch for some potential, and usually unanticipated, difficulties. Various issues can be encountered. Some cause immediately noticeable bugs in your code. Some cause subtle bugs that occur only rarely and are difficult to find. Others simply lower the performance of parallel loops.
·  Shared State: Parallel loops are ideal when the individual iterations are independent. When the iterations share mutable state, synchronization is necessary to ensure that errors are not introduced by parallel processes using inconsistent values. This usually requires the introduction of locking mechanisms that slow the performance of the software or changes to algorithms to remove shared state.
·  Dependent Iterations: With sequential loops you can assume that all earlier iterations will be completed before the current execution. With parallel loops, as seen in the first example, the order is usually changed. This means that you should not have code within a parallel loop that depends upon another iteration's result.
·  Excessive Parallelism: In the general case, parallelism increases the performance of loops. However, in many cases it is possible to overuse parallelism, which can actually decrease performance, for example when each iteration performs too little work to outweigh the overhead of scheduling it.
·  Calls to Thread-Safe Methods: If the methods that you call from within a loop are thread-safe, you should not generate synchronization problems through their use. However, if the methods use locking to achieve thread-safety, you may spoil the performance of your software as multiple cores become blocked when your parallel loop executes.
·  Myth of Parallelism: A common myth is that executing a loop in parallel will always increase performance. On single-core processors, parallel loops generally execute sequentially. Even on multiprocessor systems it is possible for a loop to run in series. If your code requires that a later iteration completes before an earlier one can continue, the loop can deadlock.
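The Shared State pitfall above can be sketched as follows (names are illustrative): a shared total must be protected by a lock, which is exactly the locking cost the bullet describes.

```csharp
using System;
using System.Threading.Tasks;

class SharedStateDemo
{
    static void Main()
    {
        int total = 0;
        object gate = new object();

        // total is shared mutable state: without the lock, concurrent
        // read-modify-write operations could lose updates (a race
        // condition), and the final total would vary from run to run.
        Parallel.For(0, 1000, i =>
        {
            lock (gate) { total += i; }
        });

        Console.WriteLine(total);   // always 499500 (0 + 1 + ... + 999)
    }
}
```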
2. Parallel.ForEach: The parallel ForEach loop provides a parallel version of the standard, sequential foreach loop. Each iteration processes a single item from a collection. However, the parallel nature of the loop means that multiple iterations may be executing at the same time on different processors or processor cores. This opens up the possibility of synchronization problems so the loop is ideally suited to processes where each iteration is independent of the others.
A ForEach loop works like a For loop. The source collection is partitioned and the work is scheduled on multiple threads based on the system environment. The more processors on the system, the faster the parallel method runs. For some source collections a sequential loop may be faster, depending on the size of the source and the kind of work being performed.
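A minimal sketch (the word list and class name are illustrative) using the overload whose delegate also receives the item's index:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

class ParallelForEachDemo
{
    static void Main()
    {
        var words = new List<string> { "alpha", "beta", "gamma", "delta" };
        var lengths = new int[4];

        // Each iteration processes a single item; iterations are
        // independent, so several may execute at once on different cores.
        Parallel.ForEach(words, (word, state, index) =>
        {
            lengths[index] = word.Length;
        });

        Console.WriteLine(string.Join(",", lengths));   // 5,4,5,5
    }
}
```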

We have many more points to study; I have given you a basic overview of data parallelism using the For and ForEach loops. Below is a list of some useful topics that you can study yourself, creating some examples as you go:
Termination of Parallel Loops
  • Parallel Loop State
  • ParallelLoopState.Break
  • LowestBreakIteration
  • ParallelLoopState.Stop
Synchronization in Parallel Loops
  • Aggregation in Sequential Loops
  • Aggregation in Parallel Loops
  • Synchronization Using Locking
  • Local Loop State in For Loops
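One of the topics listed above, local loop state in For loops, can be sketched briefly (names are illustrative): each worker thread keeps a private subtotal, so the loop body needs no locking, and the subtotals are combined only once per thread at the end.

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class LocalStateDemo
{
    static void Main()
    {
        int total = 0;

        Parallel.For(0, 1001,
            () => 0,                                   // localInit: per-thread subtotal
            (i, state, subtotal) => subtotal + i,      // body: lock-free accumulation
            subtotal => Interlocked.Add(ref total, subtotal)); // localFinally: combine

        Console.WriteLine(total);   // 500500 = 0 + 1 + ... + 1000
    }
}
```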

Task Parallelism (Task Parallel Library)

The Task Parallel Library (TPL), as its name implies, is based on the concept of the task. The term task parallelism refers to one or more independent tasks running concurrently. A task represents an asynchronous operation, and in some ways it resembles the creation of a new thread or ThreadPool work item, but at a higher level of abstraction. Tasks provide two primary benefits:
·    More efficient and more scalable use of system resources: Behind the scenes, tasks are queued to the ThreadPool, which has been enhanced with algorithms (like hill-climbing) that determine and adjust to the number of threads that maximizes throughput. This makes tasks relatively lightweight, and you can create many of them to enable fine-grained parallelism. To complement this, widely-known work-stealing algorithms are employed to provide load-balancing.
·   More programmatic control than is possible with a thread or work item: Tasks and the framework built around them provide a rich set of APIs that support waiting, cancellation, continuations, robust exception handling, detailed status, custom scheduling, and more.
Note: For both of these reasons, in the .NET Framework 4, tasks are the preferred API for writing multi-threaded, asynchronous, and parallel code.
Why do we need Task Parallelism? - Some algorithms do not lend themselves to data decomposition because they are not repeating the same action. However, they may be candidates for task decomposition, where an algorithm is broken into sections that can be executed independently. Each section is considered to be a separate task that may be executed on its own processor core, with several tasks running concurrently. This type of decomposition is usually more difficult to implement and sometimes requires that an algorithm be changed substantially, or replaced entirely, to minimize the elements that must be executed sequentially and to limit shared mutable values.
1. Creating and Running Tasks Implicitly [Parallel.Invoke]: The Parallel.Invoke method provides a convenient way to run any number of arbitrary statements concurrently. Just pass in an Action delegate for each item of work. The easiest way to create these delegates is to use lambda expressions.
The Parallel.Invoke method provides a simple way in which a number of tasks may be created and executed in parallel. As with other methods in the Task Parallel Library, Parallel.Invoke provides potential parallelism. If no benefit can be gained by creating multiple threads of execution, the tasks will run sequentially.
To use Parallel.Invoke, the tasks to be executed are provided as delegates. The method uses a parameter array for the delegates to allow any number of tasks to be created. The tasks are usually defined using lambda expressions but anonymous methods and simple delegates may be used instead. Once invoked, all of the tasks are executed before processing continues with the command following the Parallel.Invoke statement. The order of execution of the individual delegates is not guaranteed so you should not rely on the results of one operation being available for one that appears later in the parameter array.
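A minimal sketch (the three actions and the class name are illustrative) showing that Invoke returns only after every delegate has run:

```csharp
using System;
using System.Threading.Tasks;

class InvokeDemo
{
    static void Main()
    {
        int a = 0, b = 0, c = 0;

        // Three independent actions; they may run concurrently and in
        // any order, but Invoke returns only once all have finished.
        Parallel.Invoke(
            () => a = 1,
            () => b = 2,
            () => c = 3);

        Console.WriteLine(a + b + c);   // 6
    }
}
```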
Exception Handling with Parallel.Invoke: In the case of Parallel.Invoke, it is guaranteed that every task will be executed. Each task will either exit normally or throw an exception. All of the thrown exceptions are gathered together and held until all of the tasks have stopped, at which point an AggregateException containing all of the exceptions is thrown. The individual errors can be found within the InnerExceptions property.
2.  Creating and Running Tasks Explicitly: If we need more control over parallel tasks, we can use the Task class. This allows us to explicitly generate parallel tasks. The code needed for explicit task creation is slightly more complex than that for Parallel.Invoke but the benefits outweigh this disadvantage.
A task is represented by the System.Threading.Tasks.Task class. A task that returns a value is represented by the System.Threading.Tasks.Task<TResult> class, which inherits from Task. The task object handles the infrastructure details, and provides methods and properties that are accessible from the calling thread throughout the lifetime of the task. For example, you can access the Status property of a task at any time to determine whether it has started running, ran to completion, was canceled, or has thrown an exception. The status is represented by a TaskStatus enumeration.
When you create a task, you give it a user delegate that encapsulates the code the task will execute. The delegate can be expressed as a named delegate, an anonymous method, or a lambda expression. Lambda expressions can contain a call to a named method.
The Task class provides a wrapper for an Action delegate. The delegate describes the code that you wish to execute and the wrapper provides parallelism. A simple way to create a task is to use the constructor that has a single parameter, which accepts the delegate that you wish to execute. Tasks do not execute immediately after being created. To start a task, call its Start method.
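The create-then-start pattern just described can be sketched as follows (class name and message are illustrative):

```csharp
using System;
using System.Threading.Tasks;

class TaskDemo
{
    static void Main()
    {
        // The constructor only wraps the delegate; nothing runs yet.
        var task = new Task(() => Console.WriteLine("Task body executing"));

        task.Start();   // hand the task to the scheduler
        task.Wait();    // block until it finishes

        Console.WriteLine(task.Status);   // RanToCompletion
    }
}
```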
3.  Many More….: Working with the Task class is a complex and large topic. I have given some examples and explanation above, but I cannot cover all the related topics in this session; we would need a separate session to cover all the features and functionality. Here is a list of topics that you can take as your own task: study them and create some examples so that you gain a better understanding: [MSDN Link: http://msdn.microsoft.com/en-us/library/dd537609(v=vs.100).aspx]
  • Waiting on Tasks: The System.Threading.Tasks.Task and System.Threading.Tasks.Task<TResult> types provide several overloads of the Task.Wait and Task<TResult>.Wait methods that enable you to wait for a task to complete. In addition, overloads of the static Task.WaitAll and Task.WaitAny methods let you wait for any or all of an array of tasks to complete.
You can also have a look at exception handling, for when an exception is thrown during the execution of a task, and at adding timeouts for long-running tasks.
  • Task Results: The Task<TResult> generic class inherits much of its functionality from its non-generic counterpart. Tasks are created using a delegate, often a lambda expression, started using the Start method, and executed in parallel where it is efficient to do so. The delegate's return value can be accessed by reading the task's Result property.
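A minimal sketch (names illustrative): reading Result blocks the calling thread until the value is available.

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class TaskResultDemo
{
    static void Main()
    {
        // The delegate returns an int, so the task is a Task<int>.
        Task<int> sum = Task<int>.Factory.StartNew(
            () => Enumerable.Range(1, 100).Sum());

        // Result waits for the task to finish, then yields the value.
        Console.WriteLine(sum.Result);   // 5050
    }
}
```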
  • Continuation Tasks: When you are writing software that has tasks that execute in parallel, it is common to have some parallel tasks that depend upon the results of others. These tasks should not be started until the earlier tasks, known as antecedents, have completed. Before the introduction of the Task Parallel Library (TPL), this type of interdependent thread execution would be controlled using callbacks.
The Task.ContinueWith method and Task<TResult>.ContinueWith method let you specify a task to be started when the antecedent task completes. The continuation task's delegate is passed a reference to the antecedent, so that it can examine its status. In addition, a user-defined value can be passed from the antecedent to its continuation in the Result property, so that the output of the antecedent can serve as input for the continuation.
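A minimal sketch of an antecedent and its continuation (the values and class name are illustrative):

```csharp
using System;
using System.Threading.Tasks;

class ContinuationDemo
{
    static void Main()
    {
        // The antecedent computes a value...
        Task<int> antecedent = Task<int>.Factory.StartNew(() => 21);

        // ...and the continuation starts only after it completes,
        // receiving the completed antecedent as its parameter.
        Task<int> continuation = antecedent.ContinueWith(
            t => t.Result * 2);

        Console.WriteLine(continuation.Result);   // 42
    }
}
```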
You can also check some topics related to Using Task Results in Continuations, Exception Handling with Continuation Tasks, Creating Continuations with Multiple Antecedents, and Multiple Continuations of a Single Antecedent.
  • Nested Tasks: When user code that is running in a task creates a new task and does not specify the AttachedToParent option, the new task is not synchronized with the outer task in any special way. Such a task is called a detached nested task.
Tasks may be nested in this manner many levels deep. The inner tasks are known as child tasks, of which there are two types: the detached child task, also known as a nested task, and the attached child task, generally known simply as a child task.
When you create nested tasks there is no link between a nested task and its parent. Nested tasks are completely independent, reporting a separate status and throwing their own exceptions.
  • Child Tasks: When user code that is running in a task creates a task with the AttachedToParent option, the new task is known as a child task of the originating task, which is known as the parent task. You can use the AttachedToParent option to express structured task parallelism, because the parent task implicitly waits for all child tasks to complete.
  • Canceling Tasks: The Task class supports cooperative cancellation and is fully integrated with the System.Threading.CancellationTokenSource class and the System.Threading.CancellationToken class, which are new in the .NET Framework version 4. Many of the constructors in the System.Threading.Tasks.Task class take a CancellationToken as an input parameter. Many of the StartNew overloads also take a CancellationToken.
You can create the token, and issue the cancellation request at some later time, by using the CancellationTokenSource class. Pass the token to the Task as an argument, and also reference the same token in your user delegate, which does the work of responding to a cancellation request.
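A minimal sketch of cooperative cancellation (the worker's busy loop is invented for illustration); note the same token is passed to StartNew and checked inside the delegate:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class CancelDemo
{
    static void Main()
    {
        var cts = new CancellationTokenSource();
        CancellationToken token = cts.Token;

        Task worker = Task.Factory.StartNew(() =>
        {
            while (true)
            {
                // Cooperative cancellation: the delegate itself checks
                // the token and throws OperationCanceledException.
                token.ThrowIfCancellationRequested();
                Thread.Sleep(10);
            }
        }, token);

        cts.Cancel();   // issue the cancellation request

        try { worker.Wait(); }
        catch (AggregateException)
        {
            Console.WriteLine(worker.Status);   // Canceled
        }
    }
}
```

Because the token passed to StartNew matches the one that triggered the exception, the task ends in the Canceled state rather than Faulted.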
4. What is Task ID? - Every task receives an integer ID that uniquely identifies it in an application domain and that is accessible by using the Id property. The ID is useful for viewing task information in the Visual Studio debugger Parallel Stacks and Parallel Tasks windows. The ID is lazily created, which means that it isn't created until it is requested; therefore a task may have a different ID each time the program is run.

Parallel Computing in .NET Framework 4.0 - Part 1

The Concept of Parallel Computing

After the release of the .NET Framework 4.0, many developers around the world are talking about parallel computing. Before we start talking about parallel computing, we need to understand why we need it. Many personal computers and workstations have two or four cores (that is, CPUs) that enable multiple threads to be executed simultaneously. Computers in the near future are expected to have significantly more cores. To take advantage of the hardware of today and tomorrow, you can parallelize your code to distribute work across multiple processors.

In the early years of personal computers, machines were built with a single central processing unit (CPU).
From the mid-1970s to the mid-1980s, CPU makers increased clock speeds to raise processor power, but the improvements were minor.
Between the early 1990s and the mid-2000s, the clock speed of the CPU in a personal computer increased from a mere 33 megahertz to around 3.5 gigahertz. This alone represents an increase in performance of over one hundred times. In addition, each new processor model introduced additional efficiency improvements and extra technology to make the speed improvement even greater.
Since 2005, the increase in CPU clock speed has stalled. One of the key reasons is that faster processors produce many times more heat than slower ones. Dissipating this heat to keep the processor operating within a safe temperature range is much more difficult. There are other reasons too, linked to the design of CPUs and the amount of additional power required for higher clock speeds.

The solution that the major CPU designers have selected is to move away from trying to increase clock speed and instead focus on adding more processor cores. Each core acts like a single processor that can do work. If you have two cores in your processor, it can process two independent tasks in parallel without the inefficiency of task-switching. As you increase the number of cores, you also increase the amount of code or data that can be processed in parallel, leading to an overall performance improvement without a change in clock speed.


CPU clock speed for a single CPU has been fairly static in the last couple of years – hovering around 3.4 GHz. Of course, we shouldn’t fall completely into the Megahertz myth, but one avenue of speed increase has been blocked.

Today it is difficult to find a new computer with a single-core processor. Desktop computers commonly have dual-core (2), quad-core (4), or six-core CPUs, with technology that presents eight or twelve virtual processors. Notebook computers usually include at least a dual-core processor and often four cores. Netbooks, which are designed for web browsing and are less powerful than notebooks, often include dual-core CPUs too. Even some mobile phones have more than one core. This trend is likely to continue, with companies such as Intel indicating that future CPUs may include a thousand cores.
How many cores does an Intel Core i7 have? - The Intel i7 has 4 physical cores, each of which is hyper-threaded. With Hyper-Threading, your OS sees two virtual cores for each physical core. This allows the workload of a particular task to be shared between the cores more efficiently, allowing it to run faster. [Just a basic overview]
What are we doing as developers? - Many developers, including me, were trained to think about programming in a sequential manner. If we continue to program in this way, our software will not take advantage of the improvements made available by parallel processing. A standard .NET program that does not create new threads will only use a single core. On current hardware this may mean that only a half or a quarter of the available processing power is available to us. In the future, programs like these may use only a tiny fraction of the processor. Similar software that fully utilizes parallel programming will perform better and will likely be favored by our users.
Before .NET 4.0, C# developers could obtain the improved performance of newer CPUs by creating multi-threaded software. Often this type of software creates only a few additional threads, to speed up a process or to allow the user interface to remain responsive whilst a background task completes.
With .NET 4.0, Microsoft introduced new tools that are designed to simplify the creation of parallel code. These remove some, but not all, of the complexities of multi-threading. They also allow the same code to run on different computers with varying numbers of cores, taking advantage of all of the available processors.
Moore's law: Moore's law is the observation that over the history of computing hardware, the number of transistors on integrated circuits doubles approximately every two years. The law is named after Intel co-founder Gordon E. Moore, who described the trend in his 1965 paper. [Source - WIKI]
Don’t confuse this with 32-bit and 64-bit computing - As the number of bits increases there are two important benefits: 1. more bits means that data can be processed in larger chunks; 2. more bits means our system can point to, or address, a larger number of locations in physical memory. 32-bit systems were once desirable because they could address (point to) 4 gigabytes (GB) of memory. Some modern applications require more than 4 GB of memory to complete their tasks, so 64-bit systems are now becoming more attractive because they can potentially address up to 4 billion times as many locations.

How Parallel Programming Works

Parallel programming is built on the principle of decomposition: the process of breaking a program, algorithm or data set into sections that can be processed independently. Many algorithms can be decomposed, but others are naturally sequential and do not support parallelism. You may have to replace an algorithm entirely to achieve a result that decomposes well; otherwise the sequential parts can cancel out the benefits of parallelism.
For example, if you have a routine that takes eight minutes to run and the algorithm supports easy decomposition, allocating 25% of the work to each of four processors could reduce the duration towards two minutes. However, if 90% of the algorithm must be handled sequentially, one core would spend over seven minutes on the sequential portion while the remaining cores sat idle; only the final 10% could be shared out, so the overall running time would barely improve.
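The limit described above is commonly expressed as Amdahl's law: with a sequential fraction s of the work and n processors, the best possible speedup is 1 / (s + (1 − s) / n). A small sketch makes the two scenarios concrete (the class and method names here are mine, chosen for illustration):

```csharp
using System;

class AmdahlDemo
{
    // Amdahl's law: the maximum speedup achievable when a fraction
    // `sequentialFraction` of the work cannot be parallelised and the
    // rest is spread over `processors` cores.
    public static double Speedup(double sequentialFraction, int processors)
    {
        return 1.0 / (sequentialFraction + (1.0 - sequentialFraction) / processors);
    }

    static void Main()
    {
        // Fully decomposable job on four cores: close to 4x faster.
        Console.WriteLine(Speedup(0.0, 4));
        // 90% sequential: four cores give only about a 1.08x speedup,
        // so the eight-minute job still takes roughly 7.4 minutes.
        Console.WriteLine(Speedup(0.9, 4));
    }
}
```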
There are two types of decomposition, data decomposition and task decomposition:
Data Decomposition: Data decomposition is usually applied to large data processing tasks. It is the process of splitting a large data set into several smaller units. Each of those smaller units can then be processed by a separate CPU or core in parallel. At the end of the process, the smaller units may be recombined into one larger set of results.

Task Decomposition: Task decomposition is generally more complicated than data decomposition and harder to achieve. Instead of looking for large data sets to break up, we look at the algorithms being used and try to split them into smaller tasks that can run in parallel. In some cases algorithms are built from units of code that are tightly dependent upon each other, making it impossible to segregate them into smaller tasks. Such algorithms must be replaced entirely to gain a parallel processing advantage.
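Where an algorithm does split into independent steps, .NET 4.0's Parallel.Invoke runs them as separate tasks and blocks until all have completed, i.e. until the tasks "join up". A small sketch, with class and method names of my own choosing, computing three independent statistics over the same data:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class TaskDecompositionDemo
{
    // Three independent steps of an algorithm run as separate tasks.
    // Parallel.Invoke does not return until every delegate has finished,
    // so min, max and mean are all safe to read afterwards.
    public static double[] Stats(double[] data)
    {
        double min = 0, max = 0, mean = 0;

        Parallel.Invoke(
            () => min = data.Min(),
            () => max = data.Max(),
            () => mean = data.Average());

        return new[] { min, max, mean };
    }

    static void Main()
    {
        double[] data = { 4.0, 8.0, 15.0, 16.0, 23.0, 42.0 };
        Console.WriteLine(string.Join(" ", Stats(data)));
    }
}
```

Each task writes to a different variable, so no synchronisation between the tasks themselves is needed; the join at the end of Parallel.Invoke is the only synchronisation point.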

Wednesday, 19 March 2014

Get an overview about the version history of the .NET Framework

·   .NET Framework 1.0: In July 2000, the .NET Framework was first announced publicly at the Professional Developers Conference (PDC) in Orlando, Florida. At PDC, Microsoft also demonstrated C#, and announced ASP+ (which was later renamed to ASP.NET) and Visual Studio.NET. It took more than a year and a half, but, in February 2002, .NET Framework 1.0 was released as part of a pair with Visual Studio.NET (the latter of which is often referred to as Visual Studio .NET 2002).

·   .NET Framework 1.1: A bit more than a year after Visual Studio.NET was released, a new version, Visual Studio .NET 2003, was shipped together with .NET Framework 1.1. Microsoft had a lot of work to do to stabilize the framework, and, of course, dozens of critical bugs were fixed. A few things (such as the security model) were also changed, and new features were added to the framework (such as built-in support for building mobile applications, IPv6 support, and built-in data access for ODBC and Oracle databases). Also, the CLR became more stable from version 1.0 to 1.1.

·   .NET Framework 2.0: The next major version, .NET Framework 2.0, was released in the middle of February 2006 with Visual Studio 2005, Microsoft SQL Server 2005, and BizTalk 2006.

·  .NET Framework 3.0: This major .NET version kept the CLR untouched and added infrastructure components to the framework - Windows Workflow Foundation (WF), Windows Communication Foundation (WCF), Windows Presentation Foundation (WPF), and CardSpace. Developers could download Visual Studio extensions to use these new .NET 3.0 technologies.

·   .NET Framework 3.5: Version 3.5 of the .NET Framework was released on November 19, 2007. As with .NET Framework 3.0, version 3.5 uses the Common Language Run-time (CLR) version 2.0. In addition, it installs .NET Framework 2.0 SP1 (which installs .NET Framework 2.0 SP2 with 3.5 SP1) and .NET Framework 3.0 SP1 (which installs .NET Framework 3.0 SP2 with 3.5 SP1). These changes do not affect applications written for version 2.0, however.

·    .NET Framework 4.0: Microsoft announced .NET Framework 4.0 on September 29, 2008. The public beta was released on May 20, 2009 and the final version was released on March 22, 2010.


·   .NET Framework 4.5: .NET Framework 4.5 was released in August 2012, with new features and improvements added to the common language runtime.