<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.expertiza.ncsu.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Dgfleisc</id>
	<title>Expertiza_Wiki - User contributions [en]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.expertiza.ncsu.edu/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Dgfleisc"/>
	<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=Special:Contributions/Dgfleisc"/>
	<updated>2026-05-11T12:44:10Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.41.0</generator>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch7_7d_df&amp;diff=56596</id>
		<title>CSC/ECE 517 Fall 2011/ch7 7d df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch7_7d_df&amp;diff=56596"/>
		<updated>2011-12-12T20:00:04Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Design Patterns=&lt;br /&gt;
==Introduction==&lt;br /&gt;
In software engineering, an anti-pattern is a pattern that at first appears to solve a problem but ends up hindering the solution in the long run.  While a pattern can be viewed as a good solution to a recurring problem, an anti-pattern is a commonly used bad solution [http://c2.com/cgi/wiki?AntiPattern].  The term anti-pattern was coined by Andrew Koenig in response to the book Design Patterns by the Gang of Four, but it did not become commonplace until the book AntiPatterns came out.  In it, the authors outlined several patterns that were fairly common in the workplace and that they considered anti-patterns.  An anti-pattern is not to be confused with a mere bad programming habit.  There are two main distinctions between an anti-pattern and bad programming: first, for a bad idea to be an anti-pattern, it must have some sort of structure and be repeatable; second, it must have a well-documented, correct alternative solution.  Anti-patterns are found not only in programming but in a number of other areas of the design process.  Because of this, anti-patterns fall into groups such as Organizational, Project Management, Software Design, and Programming.  Listed below are a select few of the many anti-patterns that exist [http://en.wikipedia.org/wiki/Anti-pattern].&lt;br /&gt;
&lt;br /&gt;
==Organizational==&lt;br /&gt;
Organizational anti-patterns deal with how the team working on the project is organized and managed.  A good example of this type of pattern is the Cash Cow.&lt;br /&gt;
===Cash Cow===&lt;br /&gt;
A Cash Cow is a product that makes up the majority of a company's profits.  The problem with Cash Cows is that, while they may be making a great deal of money for the company now, they may later fall out of favor due to newer or better technology.  The sheer popularity of the product can sometimes hinder the development of newer alternatives.  Take, for example, RIM, the maker of the BlackBerry line of smartphones.  A few years ago RIM's BlackBerry smartphones were considered cash cows, and RIM dominated the market.  Over the years, new players have come into the market with revolutionary ideas and have slowly eaten away at RIM's market share, while BlackBerry smartphones have changed little.  RIM's management was so blinded by the success of its products that it allowed others to come in and take that success away [http://en.wikipedia.org/wiki/Cash_cow].&lt;br /&gt;
&lt;br /&gt;
==Project Management==&lt;br /&gt;
Project Management anti-patterns deal with how the programmers, and the team in general, work together to complete the project at hand.  Examples include Over Engineering and Software Bloat.&lt;br /&gt;
&lt;br /&gt;
===Over Engineering===&lt;br /&gt;
Over Engineering is making a product more complex than is practical, which wastes time and resources.  For instance, a push lawnmower with a 100 HP engine would be over-engineered: a lawnmower with a tenth of that power would work just as well, without all the time and money spent increasing the horsepower [http://en.wikipedia.org/wiki/Overengineering].&lt;br /&gt;
&lt;br /&gt;
===Software Bloat===&lt;br /&gt;
Software Bloat occurs when, with every new release of a piece of software, more and more features are added that are not necessarily used by the consumer or that stray from the original purpose of the program.  These features consume more computer resources and can ultimately slow the program down, even for users who never touch them.  A good example of software bloat is Apple's iTunes.  iTunes was originally created simply to download and play music; now it does so much more that the added features detract from the original functions.  A good alternative to bloating a program is the use of plugins.  Plugins add extra functionality without forcing it on every user, so each user can pick and choose which extra features he or she wants [http://en.wikipedia.org/wiki/Software_bloat].&lt;br /&gt;
&lt;br /&gt;
==Software Design==&lt;br /&gt;
Software design anti-patterns deal with the overall design of a program: with how the objects in a program fit together rather than with the code itself.  Examples include BaseBean and Call Super.&lt;br /&gt;
===BaseBean===&lt;br /&gt;
A BaseBean is a utility class that concrete classes subclass in order to reuse its implementation.  Subclassing does not always make for good design: it couples the child class tightly to the superclass, so if the superclass changes, something in the subclass can break.  A class should not subclass another merely because they share similar code; instead, the classes should interact through delegation [http://en.wikipedia.org/wiki/BaseBean].&lt;br /&gt;
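The delegation alternative can be sketched as follows. This is a minimal, hypothetical example (the Stack/ArrayList pairing is illustrative, not from the original text): the class holds a reference to the reused object instead of inheriting from it.

```java
import java.util.ArrayList;

// Delegation instead of subclassing: Stack *has* a list, it is not one.
// The list is private, so changes to ArrayList's other methods cannot
// leak into Stack's public contract the way inherited methods would.
class Stack {
    private final ArrayList<Integer> items = new ArrayList<>();

    public void push(int value) { items.add(value); }

    public int pop() { return items.remove(items.size() - 1); }

    public boolean isEmpty() { return items.isEmpty(); }
}
```

Had Stack subclassed ArrayList instead, every ArrayList method (add, clear, set, ...) would be part of Stack's interface, and callers could corrupt the stack through them.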
&lt;br /&gt;
===Call Super===&lt;br /&gt;
Call Super is similar to BaseBean in that one class subclasses another.  The difference is that in Call Super, the superclass requires the subclass to override a method, and to call the superclass's version of it, in order for the class to function.  The fact that this call is required, yet cannot be enforced, is what makes it an anti-pattern.  The solution is the Template Method pattern, which separates the superclass method into two distinct methods: the first performs the fixed work and then delegates the part that varies to an abstract method.  That way the superclass separates the logic it controls from the method the subclass must override [http://en.wikipedia.org/wiki/Call_super].  Here is an example:&lt;br /&gt;
&lt;br /&gt;
        class Base {&lt;br /&gt;
            ...&lt;br /&gt;
            public void doSomething() {&lt;br /&gt;
                // perform initialization&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        class Derived extends Base {&lt;br /&gt;
            ...&lt;br /&gt;
            public void doSomething() {&lt;br /&gt;
                super.doSomething();  // required call -- the anti-pattern&lt;br /&gt;
                // add functionality&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
In the code snippet above, the subclass must override the doSomething method, and remember to call the superclass's version of it, to get the intended result.  The compiler will not warn you of this dependency.  Here is the corrected design:&lt;br /&gt;
&lt;br /&gt;
        abstract class Base {&lt;br /&gt;
            ...&lt;br /&gt;
            public void doSomething() {&lt;br /&gt;
                // perform initialization&lt;br /&gt;
                addFunctionality();&lt;br /&gt;
            }&lt;br /&gt;
            public abstract void addFunctionality();&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
        class Derived extends Base {&lt;br /&gt;
            ...&lt;br /&gt;
            public void addFunctionality() {&lt;br /&gt;
                // add functionality&lt;br /&gt;
            }&lt;br /&gt;
        }&lt;br /&gt;
&lt;br /&gt;
This way, the subclass never has to call the superclass's method, and the compiler now requires every concrete subclass to implement addFunctionality().&lt;br /&gt;
&lt;br /&gt;
==Programming==&lt;br /&gt;
Anti-patterns in this category deal with how the code itself is implemented rather than with how the pieces fit together.  Examples include Blind Faith, Law of Instrument, Cut and Paste, and God Object.&lt;br /&gt;
===Blind Faith===&lt;br /&gt;
Blind Faith occurs when a bug fix is released without ever being tested.  The programmer simply assumes the fix will work and does not bother to verify it, which can be detrimental if a bug remains in the code that was &amp;quot;fixed&amp;quot; [http://en.wikipedia.org/wiki/Blind_faith_(computer_science)].  A solution to this problem is Test-Driven Development, in which the test cases are written before the code.  This ensures that all code is tested and that Blind Faith cannot occur [http://en.wikipedia.org/wiki/Test-driven_development].&lt;br /&gt;
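A test-first sketch of shipping a bug fix might look like this (the class and the bug are hypothetical, invented for illustration): the regression test exists before the fix, so the fixed path can never ship untested.

```java
// Hypothetical bug report: a discount could push a price below zero.
// Test-first: the regression test below is written (and seen to fail)
// before the clamping fix is added.
class PriceCalculator {
    static int discounted(int price, int discount) {
        // The fix under test: clamp at zero instead of going negative.
        return Math.max(price - discount, 0);
    }
}

class PriceCalculatorTest {
    static void run() {
        // Regression test capturing the original bug.
        if (PriceCalculator.discounted(5, 10) != 0)
            throw new AssertionError("price must never go negative");
        // The ordinary case must keep working.
        if (PriceCalculator.discounted(10, 3) != 7)
            throw new AssertionError("plain discount broken");
    }
}
```

Running PriceCalculatorTest.run() before release replaces the blind assumption that the fix works with evidence that it does.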
&lt;br /&gt;
===Law of Instrument===&lt;br /&gt;
The Law of Instrument, or golden hammer, refers to the overuse of a familiar tool; the well-known phrase &amp;quot;if all you have is a hammer, everything looks like a nail&amp;quot; derives from it.  An example would be a programmer writing all of his programs in the Java programming language.  Java is a great tool, but it is not suited to every program.  The fix is the opposite: use the right tool for the job [http://en.wikipedia.org/wiki/Golden_hammer].&lt;br /&gt;
&lt;br /&gt;
===Cut and Paste===&lt;br /&gt;
The Cut and Paste anti-pattern occurs when the programmer copies code from one section to another and then alters it, duplicating the code.  Although copy and paste seems like a quick way to add functionality, the copies drift apart over time, and every bug must be found and fixed in each copy; this violates the DRY principle.  The refactored approach is a black-box reuse design: the shared logic is packaged behind an interface, users cannot change how it is implemented, and they interact with it only the way it was intended [http://sourcemaking.com/antipatterns/cut-and-paste-programming].&lt;br /&gt;
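The refactoring can be sketched like this (the rule and names are hypothetical): the logic that would otherwise be pasted into each caller lives in exactly one place, reused as a black box.

```java
// One shared copy of a rule that would otherwise be copy-pasted around.
// Callers reuse it without knowing how it is implemented, so a fix to
// the rule lands exactly once.
class Validation {
    static boolean isValidUsername(String name) {
        return name != null && !name.isEmpty() && name.length() <= 20;
    }
}
```

Any form or service that needs the check calls Validation.isValidUsername(...) instead of carrying its own diverging copy of the three conditions.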
&lt;br /&gt;
===God Object===&lt;br /&gt;
This anti-pattern arises when one class does too much.  A class or method should have only one responsibility or purpose; anything more violates the single responsibility principle.  The solution is to refactor the code: break it up along its individual responsibilities and create a new class for each [http://blog.decayingcode.com/post/anti-pattern-god-object.aspx].&lt;br /&gt;
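As a sketch (the classes are hypothetical), a report object that both computed and formatted its numbers could be split along those two responsibilities, so each class has exactly one reason to change:

```java
// Responsibility 1: computing the numbers.
class SalesTotals {
    static int total(int[] sales) {
        int sum = 0;
        for (int s : sales) sum += s;
        return sum;
    }
}

// Responsibility 2: presenting them.
class ReportFormatter {
    static String format(int total) {
        return "Total sales: " + total;
    }
}
```

A change to the output wording now touches only ReportFormatter, and a change to the arithmetic touches only SalesTotals.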
&lt;br /&gt;
==References==&lt;br /&gt;
[1]  http://c2.com/cgi/wiki?AntiPattern &amp;lt;br /&amp;gt;&lt;br /&gt;
[2]  http://en.wikipedia.org/wiki/Anti-pattern &amp;lt;br /&amp;gt;&lt;br /&gt;
[3]  http://en.wikipedia.org/wiki/Cash_cow &amp;lt;br /&amp;gt;&lt;br /&gt;
[4]  http://en.wikipedia.org/wiki/Overengineering &amp;lt;br /&amp;gt;&lt;br /&gt;
[5]  http://en.wikipedia.org/wiki/Software_bloat &amp;lt;br /&amp;gt;&lt;br /&gt;
[6]  http://en.wikipedia.org/wiki/BaseBean &amp;lt;br /&amp;gt;&lt;br /&gt;
[7]  http://en.wikipedia.org/wiki/Call_super &amp;lt;br /&amp;gt;&lt;br /&gt;
[8]  http://en.wikipedia.org/wiki/Blind_faith_(computer_science) &amp;lt;br /&amp;gt;&lt;br /&gt;
[9]  http://en.wikipedia.org/wiki/Test-driven_development &amp;lt;br /&amp;gt;&lt;br /&gt;
[10] http://en.wikipedia.org/wiki/Golden_hammer &amp;lt;br /&amp;gt;&lt;br /&gt;
[11] http://sourcemaking.com/antipatterns/cut-and-paste-programming &amp;lt;br /&amp;gt;&lt;br /&gt;
[12] http://blog.decayingcode.com/post/anti-pattern-god-object.aspx &amp;lt;br /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch7_7d_df&amp;diff=56595</id>
		<title>CSC/ECE 517 Fall 2011/ch7 7d df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch7_7d_df&amp;diff=56595"/>
		<updated>2011-12-12T19:39:57Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Design Patterns=&lt;br /&gt;
==Introduction==&lt;br /&gt;
In Software Engineering, an Anti-Pattern is a type of pattern that, at first, may appear to solve a problem but ends up hindering it in the long run.  While a pattern can be looked at as a solution to a problem, an anti-pattern is considered a bad solution to a problem[http://c2.com/cgi/wiki?AntiPattern].  The name anti-patern was coined by Andrew Koenig in response to the book, Design Patterns, by Gang of Four. However, the term did not become commonplace until after the book, AntiPatterns, came out.  In the book, the authors had outlined a few patterns that were fairly common in the workplace that they saw as an anti-pattern.  An anti-pattern is not to be confused with bad programming habits.  There are two main distinctions between an anti-pattern and bad programming.  First, for a bad idea to be an anti-pattern, it has to have some sort of structure and is reusable and second, the anti-pattern has to have a well documented, correct solution.  Anti-Patterns can be found not only in programming but a number of different areas in the design process.  Because of this, anti-patterns can fall into different groups such as Organizational, Project Management, Software Design, and Programming.    Listed below are a select few of the many Anti-Patterns that exist [http://en.wikipedia.org/wiki/Anti-pattern].&lt;br /&gt;
&lt;br /&gt;
==Organizational==&lt;br /&gt;
Organizational anti-patterns are those that  deal with how the team working on the project is organized and managed.  A good example of these types of patterns is the Cash Cow.&lt;br /&gt;
===Cash Cow===&lt;br /&gt;
A Cash Cow is a product that makes up the majority of a companies profits.  The problem with Cash Cows is they may be making a great deal of money for the money now, but may later fall out of popularity due to newer or better technology.  The sheer popularity of the product can sometimes hinder the development of newer alternatives.  Take for example the company RIM, the maker of the BlackBerry line of smartphones.  A couple years ago RIM's BlackBerry smartphones were considered cash cows.  RIM was dominating the market.  Over the years, new players have come into the market with revolutionary ideas and have been slowly eating away at RIMs marketshare.  During this same time, BlackBerry smartphones have changed little.  Their management had been blinded by the sheer success of their products that they allowed others to come in and take that success away from them [http://en.wikipedia.org/wiki/Cash_cow].&lt;br /&gt;
&lt;br /&gt;
==Project Management==&lt;br /&gt;
Project Management anti-patterns are anti-patterns that deal with how the programmers and the whole team in general work together to complete the project at hand.  Some examples of this are Over Engineering and Software Bloat.&lt;br /&gt;
&lt;br /&gt;
===Over Engineering===&lt;br /&gt;
Over Engineering is when a product is made more complex than is practical.  This results in a waste of time and resources.  For instance, a push lawnmower with 100 HP would be a product of over engineering.   A lawnmower with a tenth of that power would work just as well without all the time and money spent upping the Horse Power [http://en.wikipedia.org/wiki/Overengineering].&lt;br /&gt;
&lt;br /&gt;
===Software Bloat===&lt;br /&gt;
Software Bloating happens when after every new release of a piece of software, more and more features are added that are not necessarily used by the consumer or stray from the original purpose of the program.  These features use more computer resources and can ultimately slow down the program.  Also, not every user will use all of the features, causing in some cases unnecessary slowdown of the program.  A good example of Software Bloating is Apple's iTunes.  iTunes was originally created to simply download and play music.  Now, iTunes can do so much more to the point where the features are detracting from the original functions.  A good alternative to software bloating is the use of plugins.  Plugins add extra functionality to a program without forcing the user to have it.  This way a user can pick and choose which extra features he or she wants [http://en.wikipedia.org/wiki/Software_bloat].&lt;br /&gt;
&lt;br /&gt;
==Software Design==&lt;br /&gt;
Software design anti-patterns are anti-patterns that deal with the overall design of a program.  They deal more with how the object in a program fit together rather than the actual code itself.  Some examples of Software Design anti-patterns are BaseBean and Call Super.&lt;br /&gt;
===BaseBean===&lt;br /&gt;
A BaseBean is a class where concrete entities have subclassed it.  As you may know from class, subclassing does not always exhibit good program design.  Subclassing causes the child class to rely too heavily on the superclass.  If the super class were to change, it could break something in the subclass.  A class should not subclass another class just because there is similar code.  Rather, the classes should interact using delegation [http://en.wikipedia.org/wiki/BaseBean].&lt;br /&gt;
&lt;br /&gt;
===Call Super===&lt;br /&gt;
Call Super is similar to BaseBean in which one class subclasses another.  The different is, in Call Super, the superclass requires the subclass to override a method in order for it to function.  The fact that this is required makes this an anti-patern.  The solution to this problem is to use the Template Method Pattern.  The Template Method pattern separates the superclass method into two distinct methods.  The first method executes all of the needed code by the subclass and then delegates the part that needs to be subclassed into an abstract method.  That way the superclass is able to separate out the information that needs to be accessed by the subclass and the method that needs to be overridden [http://en.wikipedia.org/wiki/Call_super].&lt;br /&gt;
&lt;br /&gt;
==Programming==&lt;br /&gt;
Anti-patterns that fall under this category are ones that deal with how the code is implemented rather than how the code fits together.  Some examples of these patterns are Blind Faith, Law of Instrument, Cut and Paste, and God Object.&lt;br /&gt;
===Blind Faith===&lt;br /&gt;
Bild Faith occurs when a bug is fixed in a program but never tested before released.  The programmer just assumes the fix will work so does not bother to test it.  This can be detrimental if a bug did end up in the code that was &amp;quot;fixed&amp;quot; [http://en.wikipedia.org/wiki/Blind_faith_(computer_science)].  A solution to this problem is Test-Driven Development.  In Test-Driven Development, the test cases are written before the code.  This ensures that all code is tested and that Blind Faith can not occur [http://en.wikipedia.org/wiki/Test-driven_development].&lt;br /&gt;
&lt;br /&gt;
===Law of Instrument===&lt;br /&gt;
THe Law of Instrument or golden hammer refers to the overuse of a good tool.  The well known phrase &amp;quot;if all you have is a hammer, everything looks like a nail&amp;quot; is derived from this law.  A good example of this would be a programmer writing all of his programs in the Java programming language.  Java is a great tool but is not suited for writing all programs.  The fix for this would be the opposite: Use the right tool for the job [http://en.wikipedia.org/wiki/Golden_hammer].&lt;br /&gt;
&lt;br /&gt;
===Cut and Paste===&lt;br /&gt;
The Cut and Paste anti-pattern deals with the programmer cutting and pasting code from one section to another and then altering it.  The problem with this is duplication of code.  Although copy and paste seems like an easier and quicker way to add certain functionality to your program, it could end up making your code run slower.  This pattern goes against the DRY principle.  The refactored approach is to use a black box reuse design.  Black Box reuse does not allow the user to change how the code is implemented.  They can only interact with it the way it was intended [http://sourcemaking.com/antipatterns/cut-and-paste-programming].&lt;br /&gt;
&lt;br /&gt;
===God Object===&lt;br /&gt;
This anti-pattern is when one class does too much.  Classes and methods should only have one method or purpose.  If it has more than that, it goes against the single responsibility principal.  The solution to this is to refactor the code.  Try and break up the code into the individual responsibilities and create new classes for them[http://blog.decayingcode.com/post/anti-pattern-god-object.aspx].&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1]  http://c2.com/cgi/wiki?AntiPattern &amp;lt;br/&amp;gt;&lt;br /&gt;
[2]  http://en.wikipedia.org/wiki/Anti-pattern &amp;lt;br /&amp;gt;&lt;br /&gt;
[3]  http://en.wikipedia.org/wiki/Cash_cow &amp;lt;br /&amp;gt;&lt;br /&gt;
[4]  http://en.wikipedia.org/wiki/Overengineering &amp;lt;br /&amp;gt;&lt;br /&gt;
[5]  http://en.wikipedia.org/wiki/Software_bloat &amp;lt;br /&amp;gt;&lt;br /&gt;
[6]  http://en.wikipedia.org/wiki/BaseBean &amp;lt;br /&amp;gt;&lt;br /&gt;
[7]  http://en.wikipedia.org/wiki/Call_super &amp;lt;br /&amp;gt;&lt;br /&gt;
[8]  http://en.wikipedia.org/wiki/Blind_faith_(computer_science) &amp;lt;br /&amp;gt;&lt;br /&gt;
[9]  http://en.wikipedia.org/wiki/Test-driven_development &amp;lt;br /&amp;gt;&lt;br /&gt;
[10]  http://en.wikipedia.org/wiki/Golden_hammer &amp;lt;br/&amp;gt;&lt;br /&gt;
[11] http://sourcemaking.com/antipatterns/cut-and-paste-programming &amp;lt;br /&amp;gt;&lt;br /&gt;
[12] http://blog.decayingcode.com/post/anti-pattern-god-object.aspx &amp;lt;br /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch7_7d_df&amp;diff=56594</id>
		<title>CSC/ECE 517 Fall 2011/ch7 7d df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch7_7d_df&amp;diff=56594"/>
		<updated>2011-12-12T19:17:20Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=DesignPatters=&lt;br /&gt;
==Introduction==&lt;br /&gt;
In Software Engineering, an Anti-Pattern is a type of pattern that, at first, may appear to solve a problem but ends up hindering it in the long run.  While a pattern can be looked at as a solution to a problem, an anti-pattern is considered a bad solution to a problem[http://c2.com/cgi/wiki?AntiPattern].  The name anti-patern was coined by Andrew Koenig in response to the book, Design Patterns, by Gang of Four. However, the term did not become commonplace until after the book, AntiPatterns, came out.  In the book, the authors had outlined a few patterns that were fairly common in the workplace that they saw as an anti-pattern.  An anti-pattern is not to be confused with bad programming habits.  There are two main distinctions between an anti-pattern and bad programming.  First, for a bad idea to be an anti-pattern, it has to have some sort of structure and is reusable and second, the anti-pattern has to have a well documented, correct solution.  Anti-Patterns can be found not only in programming but a number of different areas in the design process.  Because of this, anti-patters can fall into different groups such as Organizational, Project Management, Software Design, and Programming.    Listed below are a select few of the many Anti-Patterns that exist [http://en.wikipedia.org/wiki/Anti-pattern].&lt;br /&gt;
&lt;br /&gt;
==Organizational==&lt;br /&gt;
Organizational anti-patters are those that  deal with how the team working on the project is organized and managed.  A good example of these types of patterns is the Cash Cow.&lt;br /&gt;
===Cash Cow===&lt;br /&gt;
A Cash Cow is a product that makes up the majority of a companies profits.  The problem with Cash Cows is they may be making a great deal of money for the money now, but may later fall out of popularity due to newer or better technology.  The sheer popularity of the product can sometimes hinder the development of newer alternatives.  Take for example the company RIM, the maker of the BlackBerry line of smartphones.  A couple years ago RIM's BlackBerry smartphones were considered cash cows.  RIM was dominating the market.  Over the years, new players have come into the market with revolutionary ideas and have been slowly eating away at RIMs marketshare.  During this same time, BlackBerry smartphones have changed little.  Their management had been blinded by the sheer success of their products that they allowed others to come in and take that success away from them [http://en.wikipedia.org/wiki/Cash_cow].&lt;br /&gt;
&lt;br /&gt;
==Project Management==&lt;br /&gt;
Project Management anti-patterns are anti-patterns that deal with how the programmers and the whole team in general work together to complete the project at hand.  Some examples of this are Over Engineering and Software Bloat.&lt;br /&gt;
&lt;br /&gt;
===Over Engineering===&lt;br /&gt;
Over Engineering is when a product is made more complex than is practical.  This results in a waste of time and resources.  For instance, a push lawnmower with 100 HP would be a product of over engineering.   A lawnmower with a tenth of that power would work just as well without all the time and money spent upping the Horse Power [http://en.wikipedia.org/wiki/Overengineering].&lt;br /&gt;
&lt;br /&gt;
===Software Bloat===&lt;br /&gt;
Software Bloating happens when after every new release of a piece of software, more and more features are added that are not necessarily used by the consumer or stray from the original purpose of the program.  These features use more computer resources and can ultimately slow down the program.  Also, not every user will use all of the features, causing in some cases unnecessary slowdown of the program.  A good example of Software Bloating is Apple's iTunes.  iTunes was originally created to simply download and play music.  Now, iTunes can do so much more to the point where the features are detracting from the original functions.  A good alternative to software bloating is the use of plugins.  Plugins add extra functionality to a program without forcing the user to have it.  This way a user can pick and choose which extra features he or she wants [http://en.wikipedia.org/wiki/Software_bloat].&lt;br /&gt;
&lt;br /&gt;
==Software Design==&lt;br /&gt;
Software design anti-patters are anti-patterns that deal with the overall design of a program.  They deal more with how the object in a program fit together rather than the actual code itself.  Some examples of Software Design anti-patters are BaseBean and Call Super.&lt;br /&gt;
===BaseBean===&lt;br /&gt;
A BaseBean is a class where concrete entities have subclassed it.  As you may know from class, subclassing does not always exhibit good program design.  Subclassing causes the child class to rely too heavily on the superclass.  If the super class were to change, it could break something in the subclass.  A class should not subclass another class just because there is similar code.  Rather, the classes should interact using delegation [http://en.wikipedia.org/wiki/BaseBean].&lt;br /&gt;
&lt;br /&gt;
===Call Super===&lt;br /&gt;
Call Super is similar to BaseBean in which one class subclasses another.  The different is, in Call Super, the superclass requires the subclass to override a method in order for it to function.  The fact that this is required makes this an anti-patern.  The solution to this problem is to use the Template Method Pattern.  The Template Method pattern separates the superclass method into two distinct methods.  The first method executes all of the needed code by the subclass and then delegates the part that needs to be subclassed into an abstract method.  That way the superclass is able to separate out the information that needs to be accessed by the subclass and the method that needs to be overridden [http://en.wikipedia.org/wiki/Call_super].&lt;br /&gt;
&lt;br /&gt;
==Programming==&lt;br /&gt;
Anti-patterns that fall under this category are ones that deal with how the code is implemented rather than how the code fits together.  Some examples of these patters are Blind Faith, Law of Instrument, Cut and Paste, and God Object.&lt;br /&gt;
===Blind Faith===&lt;br /&gt;
Bild Faith occurs when a bug is fixed in a program but never tested before released.  The programmer just assumes the fix will work so does not bother to test it.  This can be detrimental if a bug did end up in the code that was &amp;quot;fixed&amp;quot; [http://en.wikipedia.org/wiki/Blind_faith_(computer_science)].  A solution to this problem is Test-Driven Development.  In Test-Driven Development, the test cases are written before the code.  This ensures that all code is tested and that Blind Faith can not occur [http://en.wikipedia.org/wiki/Test-driven_development].&lt;br /&gt;
&lt;br /&gt;
===Law of Instrument===&lt;br /&gt;
THe Law of Instrument or golden hammer refers to the overuse of a good tool.  The well known phrase &amp;quot;if all you have is a hammer, everything looks like a nail&amp;quot; is derived from this law.  A good example of this would be a programmer writing all of his programs in the Java programming language.  Java is a great tool but is not suited for writing all programs.  The fix for this would be the opposite: Use the right tool for the job [http://en.wikipedia.org/wiki/Golden_hammer].&lt;br /&gt;
&lt;br /&gt;
===Cut and Paste===&lt;br /&gt;
The Cut and Paste anti-pattern deals with the programmer cutting and pasting code from one section to another and then altering it.  The problem with this is duplication of code.  Although copy and paste seems like an easier and quicker way to add certain functionality to your program, it could end up making your code run slower.  This pattern goes against the DRY principle.  The refactored approach is to use a black box reuse design.  Black Box reuse does not allow the user to change how the code is implemented.  They can only interact with it the way it was intended [http://sourcemaking.com/antipatterns/cut-and-paste-programming].&lt;br /&gt;
&lt;br /&gt;
===God Object===&lt;br /&gt;
This anti-pattern is when one class does too much.  Classes and methods should only have one method or purpose.  If it has more than that, it goes against the single responsibility principal.  The solution to this is to refactor the code.  Try and break up the code into the individual responsibilities and create new classes for them[http://blog.decayingcode.com/post/anti-pattern-god-object.aspx].&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1]  http://c2.com/cgi/wiki?AntiPattern &amp;lt;br/&amp;gt;&lt;br /&gt;
[2]  http://en.wikipedia.org/wiki/Anti-pattern &amp;lt;br /&amp;gt;&lt;br /&gt;
[3]  http://en.wikipedia.org/wiki/Cash_cow &amp;lt;br /&amp;gt;&lt;br /&gt;
[4]  http://en.wikipedia.org/wiki/Overengineering &amp;lt;br /&amp;gt;&lt;br /&gt;
[5]  http://en.wikipedia.org/wiki/Software_bloat &amp;lt;br /&amp;gt;&lt;br /&gt;
[6]  http://en.wikipedia.org/wiki/BaseBean &amp;lt;br /&amp;gt;&lt;br /&gt;
[7]  http://en.wikipedia.org/wiki/Call_super &amp;lt;br /&amp;gt;&lt;br /&gt;
[8]  http://en.wikipedia.org/wiki/Blind_faith_(computer_science) &amp;lt;br /&amp;gt;&lt;br /&gt;
[9]  http://en.wikipedia.org/wiki/Test-driven_development &amp;lt;br /&amp;gt;&lt;br /&gt;
[10]  http://en.wikipedia.org/wiki/Golden_hammer &amp;lt;br /&amp;gt;&lt;br /&gt;
[11] http://sourcemaking.com/antipatterns/cut-and-paste-programming &amp;lt;br /&amp;gt;&lt;br /&gt;
[12] http://blog.decayingcode.com/post/anti-pattern-god-object.aspx &amp;lt;br /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch7_7d_df&amp;diff=56593</id>
		<title>CSC/ECE 517 Fall 2011/ch7 7d df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch7_7d_df&amp;diff=56593"/>
		<updated>2011-12-12T19:15:40Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Design Patterns=&lt;br /&gt;
==Introduction==&lt;br /&gt;
In Software Engineering, an Anti-Pattern is a pattern that at first appears to solve a problem but ends up hindering the solution in the long run.  While a pattern can be seen as a good solution to a problem, an anti-pattern is a bad solution to a problem.  The name anti-pattern was coined by Andrew Koenig in response to the book Design Patterns by the Gang of Four; however, the term did not become commonplace until after the book AntiPatterns came out.  In that book, the authors outlined several patterns, fairly common in the workplace, that they saw as anti-patterns.  An anti-pattern is not to be confused with a bad programming habit.  There are two main distinctions between an anti-pattern and bad programming: first, for a bad idea to be an anti-pattern, it has to have some recognizable structure and be reusable; second, the anti-pattern has to have a well-documented, correct solution.  Anti-Patterns can be found not only in programming but in a number of different areas of the design process.  Because of this, anti-patterns fall into different groups such as Organizational, Project Management, Software Design, and Programming.  Listed below are a select few of the many Anti-Patterns that exist [http://en.wikipedia.org/wiki/Anti-pattern].&lt;br /&gt;
&lt;br /&gt;
==Organizational==&lt;br /&gt;
Organizational anti-patterns are those that deal with how the team working on the project is organized and managed.  A good example of this type of pattern is the Cash Cow.&lt;br /&gt;
===Cash Cow===&lt;br /&gt;
A Cash Cow is a product that makes up the majority of a company's profits.  The problem with Cash Cows is that they may make a great deal of money for the company now but later fall out of favor due to newer or better technology.  The sheer popularity of the product can sometimes hinder the development of newer alternatives.  Take, for example, the company RIM, maker of the BlackBerry line of smartphones.  A couple of years ago, RIM's BlackBerry smartphones were considered cash cows, and RIM dominated the market.  Over the years, new players have entered the market with revolutionary ideas and have slowly eaten away at RIM's market share.  During this same time, BlackBerry smartphones have changed little.  RIM's management was so blinded by the sheer success of its products that it allowed others to come in and take that success away [http://en.wikipedia.org/wiki/Cash_cow].&lt;br /&gt;
&lt;br /&gt;
==Project Management==&lt;br /&gt;
Project Management anti-patterns deal with how the programmers, and the team in general, work together to complete the project at hand.  Examples include Over Engineering and Software Bloat.&lt;br /&gt;
&lt;br /&gt;
===Over Engineering===&lt;br /&gt;
Over Engineering is when a product is made more complex than is practical, resulting in wasted time and resources.  For instance, a push lawnmower with 100 HP would be a product of over engineering: a lawnmower with a tenth of that power would work just as well, without all the time and money spent increasing the horsepower [http://en.wikipedia.org/wiki/Overengineering].&lt;br /&gt;
&lt;br /&gt;
===Software Bloat===&lt;br /&gt;
Software Bloat happens when, with every new release of a piece of software, more and more features are added that are not necessarily used by the consumer or that stray from the original purpose of the program.  These features use more computer resources and can ultimately slow the program down, even for users who never touch them.  A good example of Software Bloat is Apple's iTunes: iTunes was originally created simply to download and play music, but it can now do so much more that the extra features detract from the original functions.  A good alternative to bloat is plugins.  Plugins add extra functionality to a program without forcing every user to have it, so users can pick and choose which extra features they want [http://en.wikipedia.org/wiki/Software_bloat].&lt;br /&gt;
&lt;br /&gt;
==Software Design==&lt;br /&gt;
Software Design anti-patterns deal with the overall design of a program: how the objects in a program fit together rather than the actual code itself.  Some examples of Software Design anti-patterns are BaseBean and Call Super.&lt;br /&gt;
===BaseBean===&lt;br /&gt;
A BaseBean is a utility class that concrete entities subclass.  Subclassing does not always make for good design: it causes the child class to rely too heavily on the superclass, so if the superclass changes, it can break something in the subclass.  A class should not subclass another just because there is similar code; rather, the classes should interact using delegation [http://en.wikipedia.org/wiki/BaseBean].&lt;br /&gt;
&lt;br /&gt;
===Call Super===&lt;br /&gt;
Call Super is similar to BaseBean in that one class subclasses another.  The difference is that, in Call Super, the superclass requires the subclass to override a method in order for it to function.  The fact that the override is required makes this an anti-pattern.  The solution is to use the Template Method pattern, which separates the superclass method into two distinct methods: the first executes all of the common code and then delegates the part that varies to an abstract method.  That way, the superclass cleanly separates the code shared with the subclass from the method that needs to be overridden [http://en.wikipedia.org/wiki/Call_super].&lt;br /&gt;
&lt;br /&gt;
==Programming==&lt;br /&gt;
Anti-patterns in this category deal with how the code is implemented rather than how the pieces fit together.  Some examples of these patterns are Blind Faith, Law of Instrument, Cut and Paste, and God Object.&lt;br /&gt;
===Blind Faith===&lt;br /&gt;
Blind Faith occurs when a bug fix is released without ever being tested.  The programmer simply assumes the fix will work and does not bother to test it.  This can be detrimental if a bug ends up in the code that was &amp;quot;fixed&amp;quot; [http://en.wikipedia.org/wiki/Blind_faith_(computer_science)].  A solution to this problem is Test-Driven Development, in which the test cases are written before the code.  This ensures that all code is tested and that Blind Faith cannot occur [http://en.wikipedia.org/wiki/Test-driven_development].&lt;br /&gt;
&lt;br /&gt;
===Law of Instrument===&lt;br /&gt;
The Law of the Instrument, or Golden Hammer, refers to the overuse of a favorite tool.  The well-known phrase &amp;quot;if all you have is a hammer, everything looks like a nail&amp;quot; derives from this law.  A good example would be a programmer writing all of his programs in the Java programming language.  Java is a great tool, but it is not suited to every program.  The fix is the opposite: use the right tool for the job [http://en.wikipedia.org/wiki/Golden_hammer].&lt;br /&gt;
&lt;br /&gt;
===Cut and Paste===&lt;br /&gt;
The Cut and Paste anti-pattern occurs when a programmer copies code from one section to another and then alters it.  The problem is duplicated code: although copy and paste seems like a quicker way to add functionality, every later change or bug fix must be repeated in each copy.  This pattern violates the DRY (Don't Repeat Yourself) principle.  The refactored approach is black-box reuse, which does not allow the user to change how the shared code is implemented; callers can only interact with it the way it was intended [http://sourcemaking.com/antipatterns/cut-and-paste-programming].&lt;br /&gt;
&lt;br /&gt;
===God Object===&lt;br /&gt;
This anti-pattern occurs when one class does too much.  A class or method should have a single purpose; if it has more than one, it violates the single responsibility principle.  The solution is to refactor the code: break it up into its individual responsibilities and create a new class for each [http://blog.decayingcode.com/post/anti-pattern-god-object.aspx].&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1]  http://en.wikipedia.org/wiki/Anti-pattern &amp;lt;br /&amp;gt;&lt;br /&gt;
[2]  http://en.wikipedia.org/wiki/Cash_cow &amp;lt;br /&amp;gt;&lt;br /&gt;
[3]  http://en.wikipedia.org/wiki/Overengineering &amp;lt;br /&amp;gt;&lt;br /&gt;
[4]  http://en.wikipedia.org/wiki/Software_bloat &amp;lt;br /&amp;gt;&lt;br /&gt;
[5]  http://en.wikipedia.org/wiki/BaseBean &amp;lt;br /&amp;gt;&lt;br /&gt;
[6]  http://en.wikipedia.org/wiki/Call_super &amp;lt;br /&amp;gt;&lt;br /&gt;
[7]  http://en.wikipedia.org/wiki/Blind_faith_(computer_science) &amp;lt;br /&amp;gt;&lt;br /&gt;
[8]  http://en.wikipedia.org/wiki/Test-driven_development &amp;lt;br /&amp;gt;&lt;br /&gt;
[9]  http://en.wikipedia.org/wiki/Golden_hammer &amp;lt;br /&amp;gt;&lt;br /&gt;
[10] http://sourcemaking.com/antipatterns/cut-and-paste-programming &amp;lt;br /&amp;gt;&lt;br /&gt;
[11] http://blog.decayingcode.com/post/anti-pattern-god-object.aspx &amp;lt;br /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch7_7d_df&amp;diff=56473</id>
		<title>CSC/ECE 517 Fall 2011/ch7 7d df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch7_7d_df&amp;diff=56473"/>
		<updated>2011-12-02T06:08:15Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Design Patterns=&lt;br /&gt;
==Introduction==&lt;br /&gt;
In Software Engineering, an Anti-Pattern is a pattern that at first appears to solve a problem but ends up hindering the solution in the long run.  While a pattern can be seen as a good solution to a problem, an anti-pattern is a bad solution to a problem.  The name anti-pattern was coined by Andrew Koenig in response to the book Design Patterns by the Gang of Four; however, the term did not become commonplace until after the book AntiPatterns came out.  In that book, the authors outlined several patterns, fairly common in the workplace, that they saw as anti-patterns.  An anti-pattern is not to be confused with a bad programming habit.  There are two main distinctions between an anti-pattern and bad programming: first, for a bad idea to be an anti-pattern, it has to have some recognizable structure and be reusable; second, the anti-pattern has to have a well-documented, correct solution.  Anti-Patterns can be found not only in programming but in a number of different areas of the design process.  Because of this, anti-patterns fall into different groups such as Organizational, Project Management, Software Design, and Programming.  Listed below are a select few of the many Anti-Patterns that exist [http://en.wikipedia.org/wiki/Anti-pattern].&lt;br /&gt;
&lt;br /&gt;
===Cash Cow===&lt;br /&gt;
A Cash Cow is a product that makes up the majority of a company's profits.  The problem with Cash Cows is that they may make a great deal of money for the company now but later fall out of favor due to newer or better technology.  The sheer popularity of the product can sometimes hinder the development of newer alternatives.  Take, for example, the company RIM, maker of the BlackBerry line of smartphones.  A couple of years ago, RIM's BlackBerry smartphones were considered cash cows, and RIM dominated the market.  Over the years, new players have entered the market with revolutionary ideas and have slowly eaten away at RIM's market share.  During this same time, BlackBerry smartphones have changed little.  RIM's management was so blinded by the sheer success of its products that it allowed others to come in and take that success away [http://en.wikipedia.org/wiki/Cash_cow].&lt;br /&gt;
&lt;br /&gt;
===Over Engineering===&lt;br /&gt;
Over Engineering is when a product is made more complex than is practical, resulting in wasted time and resources.  For instance, a push lawnmower with 100 HP would be a product of over engineering: a lawnmower with a tenth of that power would work just as well, without all the time and money spent increasing the horsepower [http://en.wikipedia.org/wiki/Overengineering].&lt;br /&gt;
&lt;br /&gt;
===Software Bloat===&lt;br /&gt;
Software Bloat happens when, with every new release of a piece of software, more and more features are added that are not necessarily used by the consumer or that stray from the original purpose of the program.  These features use more computer resources and can ultimately slow the program down, even for users who never touch them.  A good example of Software Bloat is Apple's iTunes: iTunes was originally created simply to download and play music, but it can now do so much more that the extra features detract from the original functions.  A good alternative to bloat is plugins.  Plugins add extra functionality to a program without forcing every user to have it, so users can pick and choose which extra features they want [http://en.wikipedia.org/wiki/Software_bloat].&lt;br /&gt;
&lt;br /&gt;
===BaseBean===&lt;br /&gt;
A BaseBean is a utility class that concrete entities subclass.  Subclassing does not always make for good design: it causes the child class to rely too heavily on the superclass, so if the superclass changes, it can break something in the subclass.  A class should not subclass another just because there is similar code; rather, the classes should interact using delegation [http://en.wikipedia.org/wiki/BaseBean].&lt;br /&gt;
&lt;br /&gt;
===Call Super===&lt;br /&gt;
Call Super is similar to BaseBean in that one class subclasses another.  The difference is that, in Call Super, the superclass requires the subclass to override a method in order for it to function.  The fact that the override is required makes this an anti-pattern.  The solution is to use the Template Method pattern, which separates the superclass method into two distinct methods: the first executes all of the common code and then delegates the part that varies to an abstract method.  That way, the superclass cleanly separates the code shared with the subclass from the method that needs to be overridden [http://en.wikipedia.org/wiki/Call_super].&lt;br /&gt;
&lt;br /&gt;
===Blind Faith===&lt;br /&gt;
Blind Faith occurs when a bug fix is released without ever being tested.  The programmer simply assumes the fix will work and does not bother to test it.  This can be detrimental if a bug ends up in the code that was &amp;quot;fixed&amp;quot; [http://en.wikipedia.org/wiki/Blind_faith_(computer_science)].  A solution to this problem is Test-Driven Development, in which the test cases are written before the code.  This ensures that all code is tested and that Blind Faith cannot occur [http://en.wikipedia.org/wiki/Test-driven_development].&lt;br /&gt;
&lt;br /&gt;
===Law of Instrument===&lt;br /&gt;
The Law of the Instrument, or Golden Hammer, refers to the overuse of a favorite tool.  The well-known phrase &amp;quot;if all you have is a hammer, everything looks like a nail&amp;quot; derives from this law.  A good example would be a programmer writing all of his programs in the Java programming language.  Java is a great tool, but it is not suited to every program.  The fix is the opposite: use the right tool for the job [http://en.wikipedia.org/wiki/Golden_hammer].&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
[1]  http://en.wikipedia.org/wiki/Anti-pattern &amp;lt;br /&amp;gt;&lt;br /&gt;
[2]  http://en.wikipedia.org/wiki/Cash_cow &amp;lt;br /&amp;gt;&lt;br /&gt;
[3]  http://en.wikipedia.org/wiki/Overengineering &amp;lt;br /&amp;gt;&lt;br /&gt;
[4]  http://en.wikipedia.org/wiki/Software_bloat &amp;lt;br /&amp;gt;&lt;br /&gt;
[5]  http://en.wikipedia.org/wiki/BaseBean &amp;lt;br /&amp;gt;&lt;br /&gt;
[6]  http://en.wikipedia.org/wiki/Call_super &amp;lt;br /&amp;gt;&lt;br /&gt;
[7]  http://en.wikipedia.org/wiki/Blind_faith_(computer_science) &amp;lt;br /&amp;gt;&lt;br /&gt;
[8]  http://en.wikipedia.org/wiki/Test-driven_development &amp;lt;br /&amp;gt;&lt;br /&gt;
[9]  http://en.wikipedia.org/wiki/Golden_hammer &amp;lt;br /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch7_7d_df&amp;diff=56462</id>
		<title>CSC/ECE 517 Fall 2011/ch7 7d df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch7_7d_df&amp;diff=56462"/>
		<updated>2011-12-02T04:56:29Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Design Patterns=&lt;br /&gt;
==Introduction==&lt;br /&gt;
In Software Engineering, an Anti-Pattern is a pattern that at first appears to solve a problem but ends up hindering the solution in the long run.  While a pattern can be seen as a good solution to a problem, an anti-pattern is a bad solution to a problem.  The name anti-pattern was coined by Andrew Koenig in response to the book Design Patterns by the Gang of Four; however, the term did not become commonplace until after the book AntiPatterns came out.  In that book, the authors outlined several patterns, fairly common in the workplace, that they saw as anti-patterns.  An anti-pattern is not to be confused with a bad programming habit.  There are two main distinctions between an anti-pattern and bad programming: first, for a bad idea to be an anti-pattern, it has to have some recognizable structure and be reusable; second, the anti-pattern has to have a well-documented, correct solution.  Anti-Patterns can be found not only in programming but in a number of different areas of the design process.  Because of this, anti-patterns fall into different groups such as Organizational, Project Management, Software Design, and Programming.  Listed below are a select few of the many Anti-Patterns that exist.&lt;br /&gt;
&lt;br /&gt;
==Organizational Anti-Patterns==&lt;br /&gt;
===Cash Cow===&lt;br /&gt;
A Cash Cow is a product that makes up the majority of a company's profits.  The problem with Cash Cows is that they may make a great deal of money for the company now but later fall out of favor due to newer or better technology.  The sheer popularity of the product can sometimes hinder the development of newer alternatives.  Take, for example, the company RIM, maker of the BlackBerry line of smartphones.  A couple of years ago, RIM's BlackBerry smartphones were considered cash cows, and RIM dominated the market.  Over the years, new players have entered the market with revolutionary ideas and have slowly eaten away at RIM's market share.  During this same time, BlackBerry smartphones have changed little.  RIM's management was so blinded by the sheer success of its products that it allowed others to come in and take that success away.&lt;br /&gt;
&lt;br /&gt;
==Project Management==&lt;br /&gt;
===Over Engineering===&lt;br /&gt;
Over Engineering is when a product is made more complex than is practical, resulting in wasted time and resources.  For instance, a push lawnmower with 100 HP would be a product of over engineering: a lawnmower with a tenth of that power would work just as well, without all the time and money spent increasing the horsepower.&lt;br /&gt;
&lt;br /&gt;
==Software Design==&lt;br /&gt;
A Software Design Anti-Pattern is an Anti-Pattern that deals with the big picture of a program and how all the different classes work together.&lt;br /&gt;
===BaseBean===&lt;br /&gt;
A BaseBean is a utility class that concrete entities subclass.  Subclassing does not always make for good design: it causes the child class to rely too heavily on the superclass, so if the superclass changes, it can break something in the subclass.  A class should not subclass another just because there is similar code; rather, the classes should interact using delegation.&lt;br /&gt;
&lt;br /&gt;
===Call Super===&lt;br /&gt;
Call Super is similar to BaseBean in that one class subclasses another.  The difference is that, in Call Super, the superclass requires the subclass to override a method in order for it to function.  The fact that the override is required makes this an anti-pattern.  The solution is to use the Template Method pattern, which separates the superclass method into two distinct methods: the first executes all of the common code and then delegates the part that varies to an abstract method.  That way, the superclass cleanly separates the code shared with the subclass from the method that needs to be overridden.&lt;br /&gt;
&lt;br /&gt;
==Programming==&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch7_7d_df&amp;diff=56400</id>
		<title>CSC/ECE 517 Fall 2011/ch7 7d df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch7_7d_df&amp;diff=56400"/>
		<updated>2011-12-01T13:59:34Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Design Patterns=&lt;br /&gt;
==Introduction==&lt;br /&gt;
In Software Engineering, Anti-Patterns are a class of patterns that may appear to improve code at first but end up hindering it in the long run.  While a pattern can be looked at as a solution to a problem, an anti-pattern is considered a &amp;lt;i&amp;gt;bad&amp;lt;/i&amp;gt; solution to a problem.  The name anti-pattern was coined by Andrew Koenig in response to the book Design Patterns by the Gang of Four, but was not commonplace until the book AntiPatterns.  In that book, the authors wrote about many bad design decisions they had seen used over and over in the workplace.  An anti-pattern is not to be confused with bad programming habits.  There are two main distinctions between an anti-pattern and bad programming: first, for a bad idea to be an anti-pattern, it has to have some sort of structure and be reusable; second, the anti-pattern has to have a well-documented, correct solution.  Listed below are some of the most common anti-patterns.&lt;br /&gt;
&lt;br /&gt;
==BaseBean==&lt;br /&gt;
A BaseBean is a utility object that concrete entities subclass.  Subclassing does not always make for good design: it causes the subclass to rely too heavily on the superclass, so if the superclass changes, it can break something in the subclass.  A class should not subclass another just because there is similar code; rather, the classes should interact using delegation.&lt;br /&gt;
&lt;br /&gt;
==Call Super==&lt;br /&gt;
Call Super is similar to BaseBean in that one class subclasses another.  The difference is that, in Call Super, the superclass requires the subclass to override a method in order for it to function.  The fact that the override is required makes this an anti-pattern.  The solution is to use the Template Method pattern, which separates the superclass method into two distinct methods: the first executes all of the common code and then delegates the part that varies to an abstract method.  That way, the superclass cleanly separates the code shared with the subclass from the method that needs to be overridden.&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch7_7d_df&amp;diff=56383</id>
		<title>CSC/ECE 517 Fall 2011/ch7 7d df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch7_7d_df&amp;diff=56383"/>
		<updated>2011-12-01T04:41:41Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: Created page with &amp;quot;=DesignPatters= ==Introduction== In Software Engineering, Anti-Patters are a class of patterns that may appear to improve code at first but end up hindering it in the long run.  ...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Design Patterns=&lt;br /&gt;
==Introduction==&lt;br /&gt;
In Software Engineering, Anti-Patterns are a class of patterns that may appear to improve code at first but end up hindering it in the long run.  While a pattern can be looked at as a solution to a problem, an anti-pattern is considered a &amp;lt;i&amp;gt;bad&amp;lt;/i&amp;gt; solution to a problem.  The name anti-pattern was coined by Andrew Koenig in response to the book Design Patterns by the Gang of Four, but was not commonplace until the book AntiPatterns.  In that book, the authors wrote about many bad design decisions they had seen used over and over in the workplace.  An anti-pattern is not to be confused with bad programming habits.  There are two main distinctions between an anti-pattern and bad programming: first, for a bad idea to be an anti-pattern, it has to have some sort of structure and be reusable; second, the anti-pattern has to have a well-documented, correct solution.  Listed below are some of the most common anti-patterns.&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=51583</id>
		<title>CSC/ECE 517 Fall 2011/ch1 2a sd</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=51583"/>
		<updated>2011-09-30T20:48:57Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: /* Conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Currying=&lt;br /&gt;
Currying is a technique used in Computer Science and Mathematics to transform a function of n parameters into a function of one parameter that returns another function of the remaining n-1 parameters.  For example, we can define a function add(x, y) -&amp;gt; x + y.  This function takes two parameters, x and y, and returns x + y.  If we wanted to evaluate add(3, 4), here is how we would do it:&lt;br /&gt;
&lt;br /&gt;
    add(3, 4) ↦ x + y&lt;br /&gt;
    = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The function is evaluated all in one step.  The curried form of this function would be: add(x) -&amp;gt; (y -&amp;gt; x + y).  The add function takes in one parameter, x, and returns another function taking in y.  The second function then returns the result x + y.  If we wanted to evaluate add(3, 4) in curried form, the following steps should be taken:&lt;br /&gt;
&lt;br /&gt;
    add(3) ↦ (y ↦ 3 + y)&lt;br /&gt;
    (y ↦ 3 + y)(4)&lt;br /&gt;
    = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The curried function is evaluated in two steps instead of one.  Also, x is treated as a constant inside the second function.&lt;br /&gt;
&lt;br /&gt;
As the example shows, the curried and uncurried forms of the add function produce the same result.  Note also that the transformation is reversible: to turn a curried function back into an uncurried one, we just reverse the steps.&lt;br /&gt;
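The add example can be written directly in Ruby with lambdas (this snippet is an illustration added here, not part of the original derivation):

```ruby
# Uncurried: one function of two parameters, evaluated in one step.
add = ->(x, y) { x + y }

# Curried by hand: add_curried.(x) returns a second function of y.
add_curried = ->(x) { ->(y) { x + y } }

puts add.(3, 4)           # 7
puts add_curried.(3).(4)  # 7: add_curried.(3) is the function (y -> 3 + y)

# Ruby can also derive the curried form automatically (Proc#curry):
puts add.curry.(3).(4)    # 7
```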
&lt;br /&gt;
==Background==&lt;br /&gt;
Currying was first discovered by Moses Schönfinkel, a Russian logician, in 1924&amp;lt;ref&amp;gt;http://c2.com/cgi/wiki?CurryingSchonfinkelling&amp;lt;/ref&amp;gt; and later rediscovered by Haskell Curry, an American mathematician and logician.  Although Schönfinkel was the first to discover it, the process was ultimately named after Curry.  For this reason, some people thought a more appropriate name would be Schönfinkeling.&lt;br /&gt;
&lt;br /&gt;
=Currying in Math=&lt;br /&gt;
Currying is used throughout Mathematics.  Most notably, currying is used in [http://en.wikipedia.org/wiki/Lambda_calculus lambda calculus], which is built on the idea that every function takes exactly one parameter&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Lambda_calculus&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Currying vs. Partial Function==&lt;br /&gt;
There is a small distinction between currying and [http://en.wikipedia.org/wiki/Partial_function partial functions].  A partial function simplifies another function by fixing one or more of its arguments to constants.  Suppose we have a function f(x, y, z) -&amp;gt; x + y + z.  If we curry the function, we get f(x) -&amp;gt; (y -&amp;gt; (z -&amp;gt; x + y + z)).  To create a partial function, we can set x, y, and/or z to a constant value.  If we want x always to be 1, we remove x as a parameter and set it to 1 as follows: f(y, z) -&amp;gt; 1 + y + z.  One thing to point out is that curried functions take only one parameter at a time, while partial functions can take one or more&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Partial_function&amp;lt;/ref&amp;gt;.&lt;br /&gt;
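The distinction can be seen in a short Ruby sketch (variable names are invented here): the curried form is a chain of one-argument functions, while the partial form fixes x = 1 and still takes its two remaining arguments at once.

```ruby
f = ->(x, y, z) { x + y + z }

# Curried form: f(x) -> (y -> (z -> x + y + z)); one argument at a time.
curried = ->(x) { ->(y) { ->(z) { x + y + z } } }

# Partial form: fix x = 1; the result still takes two parameters together.
partial = ->(y, z) { f.(1, y, z) }

puts curried.(1).(2).(3)  # 6
puts partial.(2, 3)       # 6
```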
&lt;br /&gt;
=Currying in Programming=&lt;br /&gt;
Currying can be used in programming to simplify code and make it easier to read.  It reduces repetition by combining the common parts of different functions into one.  Many programming languages used today implement currying; it is most evident in functional programming languages, while languages such as Java and C++ have no built-in support for it.  Shown below are a couple of languages and their implementations of currying.&lt;br /&gt;
&lt;br /&gt;
==Ruby==&lt;br /&gt;
In Ruby 1.9, the curry method was added to Proc objects; previous versions of Ruby have no dedicated curry method.  Here is an example of using curry in Ruby&amp;lt;ref&amp;gt;http://www.khelll.com/blog/ruby/ruby-currying/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Suppose we have 3 different functions: &lt;br /&gt;
&lt;br /&gt;
1.  product_ints:  this function takes in two arguments, a and b, and determines the product of all integers between a and b.&lt;br /&gt;
&lt;br /&gt;
         product_ints = lambda do |a,b|&lt;br /&gt;
           s = 1 ; a.upto(b){|n| s *= n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
2.  product_of_squares: this function takes in two arguments, a and b, and returns the product of the square of the integers from a to b.&lt;br /&gt;
&lt;br /&gt;
         product_of_squares = lambda do |a,b|&lt;br /&gt;
           s = 1 ; a.upto(b){|n| s *= n**2 } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
3.  product_of_powers_of_2: this function takes in two arguments, a and b, and returns the product of the powers of two from a to b.&lt;br /&gt;
&lt;br /&gt;
         product_of_powers_of_2 = lambda do |a,b|&lt;br /&gt;
           s = 1 ; a.upto(b){|n| s *= 2**n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
We can note that each function follows the same pattern: all three compute the product of a sequence of values derived from the integers a to b.  We can pull this common logic out and write one main function that all three will use:&lt;br /&gt;
&lt;br /&gt;
         product = lambda do |f,a,b|&lt;br /&gt;
           s = 1 ; a.upto(b){|n| s *= f.(n) } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
This function takes in three arguments: f, a, and b.  The argument 'f' is the function we want to use on each element in the multiplication sequence.  The arguments 'a' and 'b' are used to define the lower and upper bound respectively.  Now we can create the curried form of the 'product' function.&lt;br /&gt;
&lt;br /&gt;
         currying = product.curry&lt;br /&gt;
 &lt;br /&gt;
With this curried function we can create the three functions defined above as partial applications.&lt;br /&gt;
&lt;br /&gt;
         product_ints = currying.(lambda{|x| x})&lt;br /&gt;
         product_of_squares = currying.(lambda{|x| x**2})&lt;br /&gt;
         product_of_powers_of_2 = currying.(lambda{|x| 2**x})&lt;br /&gt;
&lt;br /&gt;
==Haskell==&lt;br /&gt;
In Haskell, all functions are in curried form; in other words, every function takes exactly one argument.  This is mostly hidden from the user.  For instance, we can divide two numbers, 8 and 4, by calling 'div 8 4'.  The function does not take in two numbers and return the result, as one might think.  In Haskell, the 'div' function is declared as:&lt;br /&gt;
&lt;br /&gt;
         div :: Int -&amp;gt; Int -&amp;gt; Int&lt;br /&gt;
&lt;br /&gt;
The three Ints above represent the first argument, the second argument, and the result, respectively.  The division is computed in two steps.  First, 'div' consumes the first argument, 8, and returns a function of type 'Int -&amp;gt; Int'.  This new function is then applied to the second argument, 4, and returns the result, 2 &amp;lt;ref&amp;gt;http://www.haskell.org/haskellwiki/Currying&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==JavaScript==&lt;br /&gt;
In JavaScript, there is no built-in curry function.  However, we can create a curried function manually, using the fact that functions can return other functions.  For instance, if we have an uncurried add function as defined below,&lt;br /&gt;
&lt;br /&gt;
    var add = function(a, b) {&lt;br /&gt;
        return a + b;&lt;br /&gt;
    };&lt;br /&gt;
&lt;br /&gt;
we can curry the function by returning another function.  The curried form of the add function is shown below &amp;lt;ref&amp;gt;http://extralogical.net/articles/currying-javascript.html&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
    var curriedAdd = function(a) {&lt;br /&gt;
        return function(b) {&lt;br /&gt;
            return a + b;&lt;br /&gt;
        };&lt;br /&gt;
    };&lt;br /&gt;
&lt;br /&gt;
==ML==&lt;br /&gt;
ML stands for Metalanguage.  In ML, all anonymous functions are curried.  An anonymous function is a function that has no name and takes exactly one parameter.  Shown below is an example of a curried add function built from anonymous functions:&lt;br /&gt;
&lt;br /&gt;
    val add = (fn x =&amp;gt; (fn y =&amp;gt; x + y));&lt;br /&gt;
&lt;br /&gt;
We can also write this function in its uncurried form by making it take a single tuple of arguments instead &amp;lt;ref&amp;gt;http://www.svendtofte.com/code/curried_javascript/&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
    fun add (x, y) = x + y&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we discussed in this article, there are many advantages to using currying.&lt;br /&gt;
&lt;br /&gt;
Advantages&lt;br /&gt;
&lt;br /&gt;
1.  Makes code easier to read&lt;br /&gt;
2.  Allows for easy creation of partial functions&lt;br /&gt;
3.  A method can still be executed even if not all of its parameters are known&lt;br /&gt;
4.  Simplifies function calls&lt;br /&gt;
5.  Condenses code by eliminating duplicated statements&lt;br /&gt;
&lt;br /&gt;
Currying is used throughout Mathematics and Computer Science.  It is the idea underlying the programming language Haskell as well as lambda calculus.  Currying is a powerful tool that, used correctly, can greatly simplify a function.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=51582</id>
		<title>CSC/ECE 517 Fall 2011/ch1 2a sd</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=51582"/>
		<updated>2011-09-30T20:48:33Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: /* Conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Currying=&lt;br /&gt;
Currying is a technique used in Computer Science and Mathematics to transform a function with n parameters into a function with one parameter.  This new function maps its argument to another function of the remaining n-1 arguments.  For example, we can define a function as add(x, y) -&amp;gt; x + y.  This function takes in two parameters, x and y, and returns x + y.  If we wanted to evaluate add(3, 4), here is how we would do it:&lt;br /&gt;
&lt;br /&gt;
    add(3, 4) ↦ x + y&lt;br /&gt;
    = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The function is evaluated all in one step.  The curried form of this function would be: add(x) -&amp;gt; (y -&amp;gt; x + y).  The add function takes in one parameter, x, and returns another function taking in y.  The second function then returns the result x + y.  If we wanted to evaluate add(3, 4) in curried form, the following steps should be taken:&lt;br /&gt;
&lt;br /&gt;
    add(3) ↦ (y ↦ 3 + y)&lt;br /&gt;
    add(3)(4) = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The curried function is evaluated in two steps instead of one.  Also, x is treated like a constant in the second function.&lt;br /&gt;
&lt;br /&gt;
As shown in the given example, the curried and uncurried forms of the add function produce the same result.  Note also that the transformation works both ways: a function can be curried, and a curried function can be uncurried.  To transform a curried function into an uncurried one, we just reverse the steps.&lt;br /&gt;
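The round trip can be sketched in Ruby (an illustrative choice; names such as curried_add and uncurried_add are hypothetical):&lt;br /&gt;
&lt;br /&gt;
```ruby
# Uncurried form: both arguments arrive at once.
add = lambda { |x, y| x + y }

# Curried form: a chain of one-argument lambdas.
curried_add = lambda { |x| lambda { |y| x + y } }

# Uncurrying just reverses the transformation.
uncurried_add = lambda { |x, y| curried_add.call(x).call(y) }

add.call(3, 4)               # => 7, evaluated in one step
curried_add.call(3).call(4)  # => 7, evaluated in two steps
uncurried_add.call(3, 4)     # => 7
```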
&lt;br /&gt;
==Background==&lt;br /&gt;
Currying was first discovered by Moses Schönfinkel, a Russian logician, in 1924&amp;lt;ref&amp;gt;http://c2.com/cgi/wiki?CurryingSchonfinkelling&amp;lt;/ref&amp;gt; and later rediscovered by Haskell Curry, an American mathematician and logician.  Although Schönfinkel was the first to discover it, the process was ultimately named after Curry.  For this reason, some have argued that a more appropriate name would be Schönfinkeling.&lt;br /&gt;
&lt;br /&gt;
=Currying in Math=&lt;br /&gt;
Currying is used throughout Mathematics.  Most notably, currying appears in [http://en.wikipedia.org/wiki/Lambda_calculus lambda calculus], which is built on the idea that every function takes exactly one parameter&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Lambda_calculus&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Currying vs. Partial Functions==&lt;br /&gt;
There is a small distinction between currying and [http://en.wikipedia.org/wiki/Partial_function partial functions].  A partial function simplifies another function by fixing one or more of its arguments to a constant.  Suppose we have a function, f(x, y, z) -&amp;gt; x + y + z.  If we curry the function, we get f(x) -&amp;gt; (y -&amp;gt; (z -&amp;gt; x + y + z)).  To create a partial function, we can set x, y, and/or z to a constant value.  If we want x to always be 1, we remove x as a parameter and fix it at 1: f(y, z) -&amp;gt; 1 + y + z.  One thing to point out is that each function in a curried chain takes exactly one parameter, while a partial function can take one or more&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Partial_function&amp;lt;/ref&amp;gt;.&lt;br /&gt;
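The contrast can be made concrete with a short Ruby sketch (names such as curried_f and partial_f are hypothetical):&lt;br /&gt;
&lt;br /&gt;
```ruby
f = lambda { |x, y, z| x + y + z }

# Curried: one parameter per function in the chain.
curried_f = f.curry
curried_f.call(1).call(2).call(3)  # => 6

# Partial: x is fixed at 1, and two parameters remain.
partial_f = lambda { |y, z| f.call(1, y, z) }
partial_f.call(2, 3)               # => 6
```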
&lt;br /&gt;
=Currying in Programming=&lt;br /&gt;
Currying can be used in programming to simplify code and make it easier to read.  It reduces repetition by factoring the common parts of different functions into one.  Many programming languages in use today implement currying, and it is most evident in functional programming languages.  Other languages, such as Java and C++, have no built-in support for currying.  Shown below are a few languages and their implementations of currying.&lt;br /&gt;
&lt;br /&gt;
==Ruby==&lt;br /&gt;
In Ruby 1.9, the curry method was added to Proc objects.  Earlier versions of Ruby have no dedicated curry method.  Here is an example of using curry in Ruby &amp;lt;ref&amp;gt;http://www.khelll.com/blog/ruby/ruby-currying/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Suppose we have 3 different functions: &lt;br /&gt;
&lt;br /&gt;
1.  product_ints:  this function takes in two arguments, a and b, and computes the product of all integers from a to b.&lt;br /&gt;
&lt;br /&gt;
         product_ints = lambda do |a,b|&lt;br /&gt;
           s = 1 ; a.upto(b){|n| s *= n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
2.  product_of_squares: this function takes in two arguments, a and b, and returns the product of the square of the integers from a to b.&lt;br /&gt;
&lt;br /&gt;
         product_of_squares = lambda do |a,b|&lt;br /&gt;
           s = 1 ; a.upto(b){|n| s *= n**2 } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
3.  product_of_powers_of_2: this function takes in two arguments, a and b, and returns the product of the powers of two from a to b.&lt;br /&gt;
&lt;br /&gt;
         product_of_powers_of_2 = lambda do |a,b|&lt;br /&gt;
           s = 1 ; a.upto(b){|n| s *= 2**n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
We can note that each function follows the same pattern: all three compute the product of a sequence of values derived from the integers a to b.  We can pull this common logic out and write one main function that all three will use:&lt;br /&gt;
&lt;br /&gt;
         product = lambda do |f,a,b|&lt;br /&gt;
           s = 1 ; a.upto(b){|n| s *= f.(n) } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
This function takes in three arguments: f, a, and b.  The argument 'f' is the function we want to use on each element in the multiplication sequence.  The arguments 'a' and 'b' are used to define the lower and upper bound respectively.  Now we can create the curried form of the 'product' function.&lt;br /&gt;
&lt;br /&gt;
         currying = product.curry&lt;br /&gt;
 &lt;br /&gt;
With this curried function we can create the three functions defined above as partial applications.&lt;br /&gt;
&lt;br /&gt;
         product_ints = currying.(lambda{|x| x})&lt;br /&gt;
         product_of_squares = currying.(lambda{|x| x**2})&lt;br /&gt;
         product_of_powers_of_2 = currying.(lambda{|x| 2**x})&lt;br /&gt;
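The derived functions behave just like the originals.  The sketch below is self-contained, repeating the definitions from this section, and shows them in use:&lt;br /&gt;
&lt;br /&gt;
```ruby
# Generic product over f(n) for n in a..b.
product = lambda do |f, a, b|
  s = 1
  a.upto(b) { |n| s *= f.call(n) }
  s
end

currying = product.curry

product_ints           = currying.call(lambda { |x| x })
product_of_squares     = currying.call(lambda { |x| x**2 })
product_of_powers_of_2 = currying.call(lambda { |x| 2**x })

product_ints.call(1, 5)            # 1*2*3*4*5 => 120
product_of_squares.call(1, 3)      # 1*4*9     => 36
product_of_powers_of_2.call(1, 3)  # 2*4*8     => 64
```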
&lt;br /&gt;
==Haskell==&lt;br /&gt;
In Haskell, all functions are in curried form; in other words, every function takes exactly one argument.  This is mostly hidden from the user.  For instance, we can divide two numbers, 8 and 4, by calling 'div 8 4'.  The function does not take in two numbers and return the result, as one might think.  In Haskell, the 'div' function is declared as:&lt;br /&gt;
&lt;br /&gt;
         div :: Int -&amp;gt; Int -&amp;gt; Int&lt;br /&gt;
&lt;br /&gt;
The three Ints above represent the first argument, the second argument, and the result, respectively.  The division is computed in two steps.  First, 'div' consumes the first argument, 8, and returns a function of type 'Int -&amp;gt; Int'.  This new function is then applied to the second argument, 4, and returns the result, 2 &amp;lt;ref&amp;gt;http://www.haskell.org/haskellwiki/Currying&amp;lt;/ref&amp;gt;.&lt;br /&gt;
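This two-step application can be mimicked in Ruby with curry (curried_div is a hypothetical name for illustration, not Haskell's actual div):&lt;br /&gt;
&lt;br /&gt;
```ruby
# Integer division, curried so arguments are consumed one at a time.
curried_div = lambda { |a, b| a / b }.curry

step_one = curried_div.call(8)  # a function still awaiting the divisor
step_one.call(4)                # => 2
```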
&lt;br /&gt;
==JavaScript==&lt;br /&gt;
In JavaScript, there is no built-in curry function.  However, we can create a curried function manually, using the fact that functions can return other functions.  For instance, if we have an uncurried add function as defined below,&lt;br /&gt;
&lt;br /&gt;
    var add = function(a, b) {&lt;br /&gt;
        return a + b;&lt;br /&gt;
    };&lt;br /&gt;
&lt;br /&gt;
we can curry the function by returning another function.  The curried form of the add function is shown below &amp;lt;ref&amp;gt;http://extralogical.net/articles/currying-javascript.html&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
    var curriedAdd = function(a) {&lt;br /&gt;
        return function(b) {&lt;br /&gt;
            return a + b;&lt;br /&gt;
        };&lt;br /&gt;
    };&lt;br /&gt;
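The same return-a-function trick works in any language with first-class functions.  A Ruby version of this manual approach, written without relying on Proc#curry (names are hypothetical), might look like this:&lt;br /&gt;
&lt;br /&gt;
```ruby
# Manual currying: the outer lambda captures a, the inner one adds b.
curried_add = lambda do |a|
  lambda { |b| a + b }
end

add_three = curried_add.call(3)  # partial application: a is fixed at 3
add_three.call(4)                # => 7
```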
&lt;br /&gt;
==ML==&lt;br /&gt;
ML stands for Metalanguage.  In ML, all anonymous functions are curried.  An anonymous function is a function that has no name and takes exactly one parameter.  Shown below is an example of a curried add function built from anonymous functions:&lt;br /&gt;
&lt;br /&gt;
    val add = (fn x =&amp;gt; (fn y =&amp;gt; x + y));&lt;br /&gt;
&lt;br /&gt;
We can also write this function in its uncurried form by making it take a single tuple of arguments instead &amp;lt;ref&amp;gt;http://www.svendtofte.com/code/curried_javascript/&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
    fun add (x, y) = x + y&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we discussed in this article, there are many advantages to using currying.&lt;br /&gt;
&lt;br /&gt;
Advantages&lt;br /&gt;
&lt;br /&gt;
1.  Makes code easier to read&lt;br /&gt;
2.  Allows for easy creation of partial functions&lt;br /&gt;
3.  A method can still be executed even if not all of its parameters are known&lt;br /&gt;
4.  Simplifies function calls&lt;br /&gt;
5.  Condenses code by eliminating duplicated statements&lt;br /&gt;
&lt;br /&gt;
Currying is used throughout Mathematics and Computer Science.  It is the idea underlying the programming language Haskell as well as lambda calculus.  Currying is a powerful tool that, used correctly, can greatly simplify a function.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=51581</id>
		<title>CSC/ECE 517 Fall 2011/ch1 2a sd</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=51581"/>
		<updated>2011-09-30T20:48:16Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: /* Conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Currying=&lt;br /&gt;
Currying is a technique used in Computer Science and Mathematics to transform a function with n parameters into a function with one parameter.  This new function maps its argument to another function of the remaining n-1 arguments.  For example, we can define a function as add(x, y) -&amp;gt; x + y.  This function takes in two parameters, x and y, and returns x + y.  If we wanted to evaluate add(3, 4), here is how we would do it:&lt;br /&gt;
&lt;br /&gt;
    add(3, 4) ↦ x + y&lt;br /&gt;
    = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The function is evaluated all in one step.  The curried form of this function would be: add(x) -&amp;gt; (y -&amp;gt; x + y).  The add function takes in one parameter, x, and returns another function taking in y.  The second function then returns the result x + y.  If we wanted to evaluate add(3, 4) in curried form, the following steps should be taken:&lt;br /&gt;
&lt;br /&gt;
    add(3) ↦ (y ↦ 3 + y)&lt;br /&gt;
    add(3)(4) = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The curried function is evaluated in two steps instead of one.  Also, x is treated like a constant in the second function.&lt;br /&gt;
&lt;br /&gt;
As shown in the given example, the curried and uncurried forms of the add function produce the same result.  Note also that the transformation works both ways: a function can be curried, and a curried function can be uncurried.  To transform a curried function into an uncurried one, we just reverse the steps.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Currying was first discovered by Moses Schönfinkel, a Russian logician, in 1924&amp;lt;ref&amp;gt;http://c2.com/cgi/wiki?CurryingSchonfinkelling&amp;lt;/ref&amp;gt; and later rediscovered by Haskell Curry, an American mathematician and logician.  Although Schönfinkel was the first to discover it, the process was ultimately named after Curry.  For this reason, some have argued that a more appropriate name would be Schönfinkeling.&lt;br /&gt;
&lt;br /&gt;
=Currying in Math=&lt;br /&gt;
Currying is used throughout Mathematics.  Most notably, currying appears in [http://en.wikipedia.org/wiki/Lambda_calculus lambda calculus], which is built on the idea that every function takes exactly one parameter&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Lambda_calculus&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Currying vs. Partial Functions==&lt;br /&gt;
There is a small distinction between currying and [http://en.wikipedia.org/wiki/Partial_function partial functions].  A partial function simplifies another function by fixing one or more of its arguments to a constant.  Suppose we have a function, f(x, y, z) -&amp;gt; x + y + z.  If we curry the function, we get f(x) -&amp;gt; (y -&amp;gt; (z -&amp;gt; x + y + z)).  To create a partial function, we can set x, y, and/or z to a constant value.  If we want x to always be 1, we remove x as a parameter and fix it at 1: f(y, z) -&amp;gt; 1 + y + z.  One thing to point out is that each function in a curried chain takes exactly one parameter, while a partial function can take one or more&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Partial_function&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=Currying in Programming=&lt;br /&gt;
Currying can be used in programming to simplify code and make it easier to read.  It reduces repetition by factoring the common parts of different functions into one.  Many programming languages in use today implement currying, and it is most evident in functional programming languages.  Other languages, such as Java and C++, have no built-in support for currying.  Shown below are a few languages and their implementations of currying.&lt;br /&gt;
&lt;br /&gt;
==Ruby==&lt;br /&gt;
In Ruby 1.9, the curry method was added to Proc objects.  Earlier versions of Ruby have no dedicated curry method.  Here is an example of using curry in Ruby &amp;lt;ref&amp;gt;http://www.khelll.com/blog/ruby/ruby-currying/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Suppose we have 3 different functions: &lt;br /&gt;
&lt;br /&gt;
1.  product_ints:  this function takes in two arguments, a and b, and computes the product of all integers from a to b.&lt;br /&gt;
&lt;br /&gt;
         product_ints = lambda do |a,b|&lt;br /&gt;
           s = 1 ; a.upto(b){|n| s *= n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
2.  product_of_squares: this function takes in two arguments, a and b, and returns the product of the square of the integers from a to b.&lt;br /&gt;
&lt;br /&gt;
         product_of_squares = lambda do |a,b|&lt;br /&gt;
           s = 1 ; a.upto(b){|n| s *= n**2 } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
3.  product_of_powers_of_2: this function takes in two arguments, a and b, and returns the product of the powers of two from a to b.&lt;br /&gt;
&lt;br /&gt;
         product_of_powers_of_2 = lambda do |a,b|&lt;br /&gt;
           s = 1 ; a.upto(b){|n| s *= 2**n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
We can note that each function follows the same pattern: all three compute the product of a sequence of values derived from the integers a to b.  We can pull this common logic out and write one main function that all three will use:&lt;br /&gt;
&lt;br /&gt;
         product = lambda do |f,a,b|&lt;br /&gt;
           s = 1 ; a.upto(b){|n| s *= f.(n) } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
This function takes in three arguments: f, a, and b.  The argument 'f' is the function we want to use on each element in the multiplication sequence.  The arguments 'a' and 'b' are used to define the lower and upper bound respectively.  Now we can create the curried form of the 'product' function.&lt;br /&gt;
&lt;br /&gt;
         currying = product.curry&lt;br /&gt;
 &lt;br /&gt;
With this curried function we can create the three functions defined above as partial applications.&lt;br /&gt;
&lt;br /&gt;
         product_ints = currying.(lambda{|x| x})&lt;br /&gt;
         product_of_squares = currying.(lambda{|x| x**2})&lt;br /&gt;
         product_of_powers_of_2 = currying.(lambda{|x| 2**x})&lt;br /&gt;
&lt;br /&gt;
==Haskell==&lt;br /&gt;
In Haskell, all functions are in curried form; in other words, every function takes exactly one argument.  This is mostly hidden from the user.  For instance, we can divide two numbers, 8 and 4, by calling 'div 8 4'.  The function does not take in two numbers and return the result, as one might think.  In Haskell, the 'div' function is declared as:&lt;br /&gt;
&lt;br /&gt;
         div :: Int -&amp;gt; Int -&amp;gt; Int&lt;br /&gt;
&lt;br /&gt;
The three Ints above represent the first argument, the second argument, and the result, respectively.  The division is computed in two steps.  First, 'div' consumes the first argument, 8, and returns a function of type 'Int -&amp;gt; Int'.  This new function is then applied to the second argument, 4, and returns the result, 2 &amp;lt;ref&amp;gt;http://www.haskell.org/haskellwiki/Currying&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==JavaScript==&lt;br /&gt;
In JavaScript, there is no built-in curry function.  However, we can create a curried function manually, using the fact that functions can return other functions.  For instance, if we have an uncurried add function as defined below,&lt;br /&gt;
&lt;br /&gt;
    var add = function(a, b) {&lt;br /&gt;
        return a + b;&lt;br /&gt;
    };&lt;br /&gt;
&lt;br /&gt;
we can curry the function by returning another function.  The curried form of the add function is shown below &amp;lt;ref&amp;gt;http://extralogical.net/articles/currying-javascript.html&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
    var curriedAdd = function(a) {&lt;br /&gt;
        return function(b) {&lt;br /&gt;
            return a + b;&lt;br /&gt;
        };&lt;br /&gt;
    };&lt;br /&gt;
&lt;br /&gt;
==ML==&lt;br /&gt;
ML stands for Metalanguage.  In ML, all anonymous functions are curried.  An anonymous function is a function that has no name and takes exactly one parameter.  Shown below is an example of a curried add function built from anonymous functions:&lt;br /&gt;
&lt;br /&gt;
    val add = (fn x =&amp;gt; (fn y =&amp;gt; x + y));&lt;br /&gt;
&lt;br /&gt;
We can also write this function in its uncurried form by making it take a single tuple of arguments instead &amp;lt;ref&amp;gt;http://www.svendtofte.com/code/curried_javascript/&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
    fun add (x, y) = x + y&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we discussed in this article, there are many advantages to using currying.&lt;br /&gt;
&lt;br /&gt;
Advantages&lt;br /&gt;
&lt;br /&gt;
1.  Makes code easier to read&lt;br /&gt;
2.  Allows for easy creation of partial functions&lt;br /&gt;
3.  A method can still be executed even if not all of its parameters are known&lt;br /&gt;
4.  Simplifies function calls&lt;br /&gt;
5.  Condenses code by eliminating duplicated statements&lt;br /&gt;
&lt;br /&gt;
Currying is used throughout Mathematics and Computer Science.  It is the idea underlying the programming language Haskell as well as lambda calculus.  Currying is a powerful tool that, used correctly, can greatly simplify a function.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=51580</id>
		<title>CSC/ECE 517 Fall 2011/ch1 2a sd</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=51580"/>
		<updated>2011-09-30T20:47:57Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: /* Conclusion */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Currying=&lt;br /&gt;
Currying is a technique used in Computer Science and Mathematics to transform a function with n parameters into a function with one parameter.  This new function maps its argument to another function of the remaining n-1 arguments.  For example, we can define a function as add(x, y) -&amp;gt; x + y.  This function takes in two parameters, x and y, and returns x + y.  If we wanted to evaluate add(3, 4), here is how we would do it:&lt;br /&gt;
&lt;br /&gt;
    add(3, 4) ↦ x + y&lt;br /&gt;
    = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The function is evaluated all in one step.  The curried form of this function would be: add(x) -&amp;gt; (y -&amp;gt; x + y).  The add function takes in one parameter, x, and returns another function taking in y.  The second function then returns the result x + y.  If we wanted to evaluate add(3, 4) in curried form, the following steps should be taken:&lt;br /&gt;
&lt;br /&gt;
    add(3) ↦ (y ↦ 3 + y)&lt;br /&gt;
    add(3)(4) = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The curried function is evaluated in two steps instead of one.  Also, x is treated like a constant in the second function.&lt;br /&gt;
&lt;br /&gt;
As shown in the given example, the curried and uncurried forms of the add function produce the same result.  Note also that the transformation works both ways: a function can be curried, and a curried function can be uncurried.  To transform a curried function into an uncurried one, we just reverse the steps.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Currying was first discovered by Moses Schönfinkel, a Russian logician, in 1924&amp;lt;ref&amp;gt;http://c2.com/cgi/wiki?CurryingSchonfinkelling&amp;lt;/ref&amp;gt; and later rediscovered by Haskell Curry, an American mathematician and logician.  Although Schönfinkel was the first to discover it, the process was ultimately named after Curry.  For this reason, some have argued that a more appropriate name would be Schönfinkeling.&lt;br /&gt;
&lt;br /&gt;
=Currying in Math=&lt;br /&gt;
Currying is used throughout Mathematics.  Most notably, currying appears in [http://en.wikipedia.org/wiki/Lambda_calculus lambda calculus], which is built on the idea that every function takes exactly one parameter&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Lambda_calculus&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Currying vs. Partial Functions==&lt;br /&gt;
There is a small distinction between currying and [http://en.wikipedia.org/wiki/Partial_function partial functions].  A partial function simplifies another function by fixing one or more of its arguments to a constant.  Suppose we have a function, f(x, y, z) -&amp;gt; x + y + z.  If we curry the function, we get f(x) -&amp;gt; (y -&amp;gt; (z -&amp;gt; x + y + z)).  To create a partial function, we can set x, y, and/or z to a constant value.  If we want x to always be 1, we remove x as a parameter and fix it at 1: f(y, z) -&amp;gt; 1 + y + z.  One thing to point out is that each function in a curried chain takes exactly one parameter, while a partial function can take one or more&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Partial_function&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=Currying in Programming=&lt;br /&gt;
Currying can be used in programming to simplify code and make it easier to read.  It reduces repetition by factoring the common parts of different functions into one.  Many programming languages in use today implement currying, and it is most evident in functional programming languages.  Other languages, such as Java and C++, have no built-in support for currying.  Shown below are a few languages and their implementations of currying.&lt;br /&gt;
&lt;br /&gt;
==Ruby==&lt;br /&gt;
In Ruby 1.9, the curry method was added to Proc objects.  Earlier versions of Ruby have no dedicated curry method.  Here is an example of using curry in Ruby &amp;lt;ref&amp;gt;http://www.khelll.com/blog/ruby/ruby-currying/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Suppose we have 3 different functions: &lt;br /&gt;
&lt;br /&gt;
1.  product_ints:  this function takes in two arguments, a and b, and computes the product of all integers from a to b.&lt;br /&gt;
&lt;br /&gt;
         product_ints = lambda do |a,b|&lt;br /&gt;
           s = 1 ; a.upto(b){|n| s *= n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
2.  product_of_squares: this function takes in two arguments, a and b, and returns the product of the square of the integers from a to b.&lt;br /&gt;
&lt;br /&gt;
         product_of_squares = lambda do |a,b|&lt;br /&gt;
           s = 1 ; a.upto(b){|n| s *= n**2 } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
3.  product_of_powers_of_2: this function takes in two arguments, a and b, and returns the product of the powers of two from a to b.&lt;br /&gt;
&lt;br /&gt;
         product_of_powers_of_2 = lambda do |a,b|&lt;br /&gt;
           s = 1 ; a.upto(b){|n| s *= 2**n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
We can note that each function follows the same pattern: all three compute the product of a sequence of values derived from the integers a to b.  We can pull this common logic out and write one main function that all three will use:&lt;br /&gt;
&lt;br /&gt;
         product = lambda do |f,a,b|&lt;br /&gt;
           s = 1 ; a.upto(b){|n| s *= f.(n) } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
This function takes in three arguments: f, a, and b.  The argument 'f' is the function we want to use on each element in the multiplication sequence.  The arguments 'a' and 'b' are used to define the lower and upper bound respectively.  Now we can create the curried form of the 'product' function.&lt;br /&gt;
&lt;br /&gt;
         currying = product.curry&lt;br /&gt;
 &lt;br /&gt;
With this curried function we can create the three functions defined above as partial applications.&lt;br /&gt;
&lt;br /&gt;
         product_ints = currying.(lambda{|x| x})&lt;br /&gt;
         product_of_squares = currying.(lambda{|x| x**2})&lt;br /&gt;
         product_of_powers_of_2 = currying.(lambda{|x| 2**x})&lt;br /&gt;
&lt;br /&gt;
==Haskell==&lt;br /&gt;
In Haskell, all functions are in curried form; in other words, every function takes exactly one argument.  This is mostly hidden from the user.  For instance, we can divide two numbers, 8 and 4, by calling 'div 8 4'.  The function does not take in two numbers and return the result, as one might think.  In Haskell, the 'div' function is declared as:&lt;br /&gt;
&lt;br /&gt;
         div :: Int -&amp;gt; Int -&amp;gt; Int&lt;br /&gt;
&lt;br /&gt;
The three Ints above represent the first argument, the second argument, and the result, respectively.  The division is computed in two steps.  First, 'div' consumes the first argument, 8, and returns a function of type 'Int -&amp;gt; Int'.  This new function is then applied to the second argument, 4, and returns the result, 2 &amp;lt;ref&amp;gt;http://www.haskell.org/haskellwiki/Currying&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
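The two-step application can be mirrored in Ruby (our own sketch, not Haskell itself; Ruby's integer division stands in for Haskell's div):&lt;br /&gt;

```ruby
# Mirroring Haskell's curried 'div' in Ruby (illustrative sketch).
div = lambda { |a, b| a / b }.curry

step_one = div.(8)       # applying the first argument yields a new function
result   = step_one.(4)  # applying the second argument yields the quotient
puts result              # 2
```

&lt;br /&gt;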
==JavaScript==&lt;br /&gt;
In JavaScript, there is no built-in currying function.  However, we can create a curried function manually, using the fact that functions can return other functions.  For instance, if we have an uncurried add function as defined below,&lt;br /&gt;
&lt;br /&gt;
    var add = function(a, b) {&lt;br /&gt;
        return a + b;&lt;br /&gt;
    };&lt;br /&gt;
&lt;br /&gt;
we can curry the function by returning another function.  The curried form of the add function is shown below &amp;lt;ref&amp;gt;http://extralogical.net/articles/currying-javascript.html&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
    var curriedAdd = function(a) {&lt;br /&gt;
        return function(b) {&lt;br /&gt;
            return a + b;&lt;br /&gt;
        };&lt;br /&gt;
    };&lt;br /&gt;
&lt;br /&gt;
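The same technique carries over to Ruby without Proc#curry: a lambda that returns another lambda (our own sketch mirroring the JavaScript curriedAdd above):&lt;br /&gt;

```ruby
# Manual currying in Ruby: a lambda returning another lambda.
curried_add = lambda do |a|
  lambda { |b| a + b }
end

add_five = curried_add.(5)  # partially apply: a is fixed at 5
puts add_five.(10)          # 15
puts curried_add.(3).(4)    # 7
```

&lt;br /&gt;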
==ML==&lt;br /&gt;
ML stands for Metalanguage.  In ML, all anonymous functions are curried.  An anonymous function is a function that has no name and takes exactly one parameter.  Shown below is an example of an anonymous function:&lt;br /&gt;
&lt;br /&gt;
    val add = (fn x =&amp;gt; (fn y =&amp;gt; x + y));&lt;br /&gt;
&lt;br /&gt;
We can also represent this function in its uncurried form by taking both arguments as a tuple instead of using nested anonymous functions &amp;lt;ref&amp;gt;http://www.svendtofte.com/code/curried_javascript/&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
    fun add (x, y) = x + y&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
As we discussed in this article, there are many advantages to using currying.&lt;br /&gt;
&lt;br /&gt;
Advantages&lt;br /&gt;
&lt;br /&gt;
1.  Makes code easier to read&lt;br /&gt;
2.  Allows for easy creation of partial functions&lt;br /&gt;
3.  A function can be partially applied even when not all of its arguments are known&lt;br /&gt;
4.  Simplifies function calls&lt;br /&gt;
5.  Condenses code by eliminating duplicated statements&lt;br /&gt;
&lt;br /&gt;
Currying is used throughout Mathematics and Computer Science.  It is the underlying idea behind the programming language Haskell as well as the lambda calculus.  Currying is a very powerful tool that, used correctly, can greatly simplify a function.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=51579</id>
		<title>CSC/ECE 517 Fall 2011/ch1 2a sd</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=51579"/>
		<updated>2011-09-30T20:29:27Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: /* Ruby */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Currying=&lt;br /&gt;
Currying is a technique used in Computer Science and Mathematics to transform a function with n parameters into a function with one parameter.  This new function returns another function that takes the remaining n-1 arguments.  For example, we can define a function as add(x, y) -&amp;gt; x + y.  This function takes in two parameters, x and y, and returns x + y.  If we wanted to evaluate add(3, 4), here is how we would do it:&lt;br /&gt;
&lt;br /&gt;
    add(3, 4) ↦ x + y&lt;br /&gt;
    = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The function is evaluated all in one step.  The curried form of this function would be: add(x) -&amp;gt; (y -&amp;gt; x + y).  The add function takes in one parameter, x, and returns another function taking in y.  The second function then returns the result x + y.  If we wanted to evaluate add(3, 4) in curried form, the following steps should be taken:&lt;br /&gt;
&lt;br /&gt;
    add(3) ↦ (y ↦ 3 + y)&lt;br /&gt;
    (y ↦ 3 + y)(4) ↦ 3 + 4&lt;br /&gt;
    = 7&lt;br /&gt;
&lt;br /&gt;
The curried function is evaluated in two steps instead of one.  Also, x is treated as a constant in the second function.&lt;br /&gt;
&lt;br /&gt;
As shown in the given example, the curried and uncurried forms of the add function produce the same result.  Note also that the transformation works both ways: just as a function can be curried, a curried function can be uncurried.  To transform a curried function into an uncurried one, we simply reverse the steps.&lt;br /&gt;
&lt;br /&gt;
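The two evaluation orders can be checked concretely in Ruby (an illustrative sketch; the names are our own):&lt;br /&gt;

```ruby
# Uncurried add: evaluated in one step.
add = lambda { |x, y| x + y }
puts add.(3, 4)  # 7

# Curried add: evaluated in two steps; x is treated as a constant
# inside the inner lambda.
curried_add = lambda { |x| lambda { |y| x + y } }
add_three = curried_add.(3)  # first step: returns (y -> 3 + y)
puts add_three.(4)           # second step: 3 + 4 = 7
```

&lt;br /&gt;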
==Background==&lt;br /&gt;
Currying was first discovered by Moses Schönfinkel, a Russian logician, in 1924&amp;lt;ref&amp;gt;http://c2.com/cgi/wiki?CurryingSchonfinkelling&amp;lt;/ref&amp;gt; and later re-discovered by Haskell Curry, an American mathematician and logician.  Although Schönfinkel discovered it first, the technique was ultimately named after Curry.  For this reason, some have suggested that a more appropriate name would be Schönfinkeling.&lt;br /&gt;
&lt;br /&gt;
=Currying in Math=&lt;br /&gt;
Currying is used throughout Mathematics.  Most notably, currying appears in the [http://en.wikipedia.org/wiki/Lambda_calculus lambda calculus], which is built on the idea that every function takes exactly one parameter&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Lambda_calculus&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Currying vs. Partial Functions==&lt;br /&gt;
There is a small distinction between currying and [http://en.wikipedia.org/wiki/Partial_function partial functions].  A partial function simplifies another function by fixing one or more of its arguments to constants.  Suppose we have a function f(x, y, z) -&amp;gt; x + y + z.  If we curry the function, we get f(x) -&amp;gt; (y -&amp;gt; (z -&amp;gt; x + y + z)).  To create a partial function, we set x, y, and/or z to a constant value.  If we want x to always be 1, we remove x as a parameter and fix it at 1: f(y, z) -&amp;gt; 1 + y + z.  One thing to point out is that each function in a curried chain takes exactly one parameter, while a partial function can take one or more&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Partial_function&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
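The distinction can be sketched in Ruby (our own illustration; the names are hypothetical):&lt;br /&gt;

```ruby
# f(x, y, z) = x + y + z
f = lambda { |x, y, z| x + y + z }

# Curried: a chain of functions, each taking exactly one parameter.
curried_f = f.curry
puts curried_f.(1).(2).(3)  # 6

# Partial: x is fixed at 1, leaving a single function of the
# remaining two parameters.
partial_f = lambda { |y, z| f.(1, y, z) }
puts partial_f.(2, 3)  # 6
```

&lt;br /&gt;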
=Currying in Programming=&lt;br /&gt;
Currying can be used in programming to simplify code and make it easier to read.  It reduces repetition by factoring the common parts of different functions into one.  Many programming languages in use today implement currying; it is most evident in functional programming languages.  Other languages, such as Java and C++, do not have built-in support for currying.  Shown below are a few languages and their implementations of currying.&lt;br /&gt;
&lt;br /&gt;
==Ruby==&lt;br /&gt;
In Ruby 1.9, the curry method was added to Proc objects.  Previous versions of Ruby do not have a dedicated curry method.  Here is an example of using the curry method in Ruby &amp;lt;ref&amp;gt;http://www.khelll.com/blog/ruby/ruby-currying/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Suppose we have 3 different functions: &lt;br /&gt;
&lt;br /&gt;
1.  product_ints:  this function takes in two arguments, a and b, and determines the product of all integers between a and b.&lt;br /&gt;
&lt;br /&gt;
         product_ints = lambda do |a,b|&lt;br /&gt;
           s = 1 ; a.upto(b){|n| s *= n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
2.  product_of_squares: this function takes in two arguments, a and b, and returns the product of the square of the integers from a to b.&lt;br /&gt;
&lt;br /&gt;
         product_of_squares = lambda do |a,b|&lt;br /&gt;
           s = 1 ; a.upto(b){|n| s *= n**2 } ;s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
3.  product_of_powers_of_2: this function takes in two arguments, a and b, and returns the product of the powers of two from a to b.&lt;br /&gt;
&lt;br /&gt;
         product_of_powers_of_2 = lambda do |a,b|&lt;br /&gt;
           s = 1 ; a.upto(b){|n| s *= 2**n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
We can note that each function follows a similar pattern.  They all use the product of a sequence of numbers from a to b.  We can pull this functionality out and write one main function that all three functions will use:&lt;br /&gt;
&lt;br /&gt;
         product = lambda do |f,a,b|&lt;br /&gt;
           s = 1 ; a.upto(b){|n| s *= f.(n) } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
This function takes in three arguments: f, a, and b.  The argument 'f' is the function we want to use on each element in the multiplication sequence.  The arguments 'a' and 'b' are used to define the lower and upper bound respectively.  Now we can create the curried form of the 'product' function.&lt;br /&gt;
&lt;br /&gt;
         currying = product.curry&lt;br /&gt;
 &lt;br /&gt;
With this curried function we can create the three partial functions defined above.&lt;br /&gt;
&lt;br /&gt;
         product_ints = currying.(lambda{|x| x})&lt;br /&gt;
         product_of_squares = currying.(lambda{|x| x**2})&lt;br /&gt;
         product_of_powers_of_2 = currying.(lambda{|x| 2**x})&lt;br /&gt;
&lt;br /&gt;
==Haskell==&lt;br /&gt;
In Haskell, all functions are in curried form.  In other words, every function takes just one argument.  This notion is mostly hidden from the user.  For instance, we can divide two numbers, 8 and 4, by calling 'div 8 4'.  The function does not take in two numbers and return the result, as one might think.  In Haskell, the 'div' function has the type:&lt;br /&gt;
&lt;br /&gt;
         div :: Int -&amp;gt; Int -&amp;gt; Int&lt;br /&gt;
&lt;br /&gt;
The three 'Int' types above denote the first argument, the second argument, and the result respectively.  The division is computed in two steps.  First, 'div' is applied to the first argument, 8, and returns a function of type 'Int -&amp;gt; Int'.  This new function is then applied to the second argument, 4, and returns the result, 2 &amp;lt;ref&amp;gt;http://www.haskell.org/haskellwiki/Currying&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==JavaScript==&lt;br /&gt;
In JavaScript, there is no built-in currying function.  However, we can create a curried function manually, using the fact that functions can return other functions.  For instance, if we have an uncurried add function as defined below,&lt;br /&gt;
&lt;br /&gt;
    var add = function(a, b) {&lt;br /&gt;
        return a + b;&lt;br /&gt;
    };&lt;br /&gt;
&lt;br /&gt;
we can curry the function by returning another function.  The curried form of the add function is shown below &amp;lt;ref&amp;gt;http://extralogical.net/articles/currying-javascript.html&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
    var curriedAdd = function(a) {&lt;br /&gt;
        return function(b) {&lt;br /&gt;
            return a + b;&lt;br /&gt;
        };&lt;br /&gt;
    };&lt;br /&gt;
&lt;br /&gt;
==ML==&lt;br /&gt;
ML stands for Metalanguage.  In ML, all anonymous functions are curried.  An anonymous function is a function that has no name and takes exactly one parameter.  Shown below is an example of an anonymous function:&lt;br /&gt;
&lt;br /&gt;
    val add = (fn x =&amp;gt; (fn y =&amp;gt; x + y));&lt;br /&gt;
&lt;br /&gt;
We can also represent this function in its uncurried form by taking both arguments as a tuple instead of using nested anonymous functions &amp;lt;ref&amp;gt;http://www.svendtofte.com/code/curried_javascript/&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
    fun add (x, y) = x + y&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
In conclusion, currying is used throughout Mathematics and Computer Science.  It is the underlying idea behind the programming language Haskell as well as the lambda calculus.  Currying is a very powerful tool that, used correctly, can greatly simplify a function.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=51566</id>
		<title>CSC/ECE 517 Fall 2011/ch1 2a sd</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=51566"/>
		<updated>2011-09-30T19:02:03Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Currying=&lt;br /&gt;
Currying is a technique used in Computer Science and Mathematics to transform a function with n parameters into a function with one parameter.  This new function returns another function that takes the remaining n-1 arguments.  For example, we can define a function as add(x, y) -&amp;gt; x + y.  This function takes in two parameters, x and y, and returns x + y.  If we wanted to evaluate add(3, 4), here is how we would do it:&lt;br /&gt;
&lt;br /&gt;
    add(3, 4) ↦ x + y&lt;br /&gt;
    = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The function is evaluated all in one step.  The curried form of this function would be: add(x) -&amp;gt; (y -&amp;gt; x + y).  The add function takes in one parameter, x, and returns another function taking in y.  The second function then returns the result x + y.  If we wanted to evaluate add(3, 4) in curried form, the following steps should be taken:&lt;br /&gt;
&lt;br /&gt;
    add(3) ↦ (y ↦ 3 + y)&lt;br /&gt;
    (y ↦ 3 + y)(4) ↦ 3 + 4&lt;br /&gt;
    = 7&lt;br /&gt;
&lt;br /&gt;
The curried function is evaluated in two steps instead of one.  Also, x is treated as a constant in the second function.&lt;br /&gt;
&lt;br /&gt;
As shown in the given example, the curried and uncurried forms of the add function produce the same result.  Note also that the transformation works both ways: just as a function can be curried, a curried function can be uncurried.  To transform a curried function into an uncurried one, we simply reverse the steps.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Currying was first discovered by Moses Schönfinkel, a Russian logician, in 1924&amp;lt;ref&amp;gt;http://c2.com/cgi/wiki?CurryingSchonfinkelling&amp;lt;/ref&amp;gt; and later re-discovered by Haskell Curry, an American mathematician and logician.  Although Schönfinkel discovered it first, the technique was ultimately named after Curry.  For this reason, some have suggested that a more appropriate name would be Schönfinkeling.&lt;br /&gt;
&lt;br /&gt;
=Currying in Math=&lt;br /&gt;
Currying is used throughout Mathematics.  Most notably, currying appears in the [http://en.wikipedia.org/wiki/Lambda_calculus lambda calculus], which is built on the idea that every function takes exactly one parameter&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Lambda_calculus&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Currying vs. Partial Functions==&lt;br /&gt;
There is a small distinction between currying and [http://en.wikipedia.org/wiki/Partial_function partial functions].  A partial function simplifies another function by fixing one or more of its arguments to constants.  Suppose we have a function f(x, y, z) -&amp;gt; x + y + z.  If we curry the function, we get f(x) -&amp;gt; (y -&amp;gt; (z -&amp;gt; x + y + z)).  To create a partial function, we set x, y, and/or z to a constant value.  If we want x to always be 1, we remove x as a parameter and fix it at 1: f(y, z) -&amp;gt; 1 + y + z.  One thing to point out is that each function in a curried chain takes exactly one parameter, while a partial function can take one or more&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Partial_function&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=Currying in Programming=&lt;br /&gt;
Currying can be used in programming to simplify code and make it easier to read.  It reduces repetition by factoring the common parts of different functions into one.  Many programming languages in use today implement currying; it is most evident in functional programming languages.  Other languages, such as Java and C++, do not have built-in support for currying.  Shown below are a few languages and their implementations of currying.&lt;br /&gt;
&lt;br /&gt;
==Ruby==&lt;br /&gt;
In Ruby 1.9, the curry method was added to Proc objects.  Previous versions of Ruby do not have a dedicated curry method.  Here is an example of using the curry method in Ruby &amp;lt;ref&amp;gt;http://www.khelll.com/blog/ruby/ruby-currying/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Suppose we have 3 different functions: &lt;br /&gt;
&lt;br /&gt;
1.  sum_ints:  this function takes in two arguments, a and b, and determines the sum of all integers between a and b.&lt;br /&gt;
&lt;br /&gt;
         sum_ints = lambda do |a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
2.  sum_of_squares: this function takes in two arguments, a and b, and returns the sum of the square of the integers from a to b.&lt;br /&gt;
&lt;br /&gt;
         sum_of_squares = lambda do |a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += n**2 } ;s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
3.  sum_of_powers_of_2: this function takes in two arguments, a and b, and returns the sum of the powers of two from a to b.&lt;br /&gt;
&lt;br /&gt;
         sum_of_powers_of_2 = lambda do |a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += 2**n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
We can note that each function follows a similar pattern.  They all use the summation function to sum a group of numbers from a to b.  We can pull this functionality out and write one main function that all three functions will use:&lt;br /&gt;
&lt;br /&gt;
         sum = lambda do |f,a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += f.(n) } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
This function takes in three arguments: f, a, and b.  The argument 'f' is the function we want to use on each element in the summation.  The arguments 'a' and 'b' are used to define the lower and upper bound respectively.  Now we can create the curried form of the 'sum' function.&lt;br /&gt;
&lt;br /&gt;
         currying = sum.curry&lt;br /&gt;
 &lt;br /&gt;
With this curried function we can create the three partial functions defined above.&lt;br /&gt;
&lt;br /&gt;
         sum_ints = currying.(lambda{|x| x})&lt;br /&gt;
         sum_of_squares = currying.(lambda{|x| x**2})&lt;br /&gt;
         sum_of_powers_of_2 = currying.(lambda{|x| 2**x})&lt;br /&gt;
&lt;br /&gt;
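For example, the curried sum functions can then be called with their remaining two arguments.  The following is a self-contained usage sketch of our own (assuming Ruby 1.9+, where Proc#curry is available):&lt;br /&gt;

```ruby
# Usage sketch for the curried 'sum' partial functions (assumes Ruby 1.9+).
sum = lambda do |f, a, b|
  s = 0; a.upto(b) { |n| s += f.(n) }; s
end

currying = sum.curry

sum_ints           = currying.(lambda { |x| x })
sum_of_squares     = currying.(lambda { |x| x**2 })
sum_of_powers_of_2 = currying.(lambda { |x| 2**x })

puts sum_ints.(1, 5)            # 1+2+3+4+5 = 15
puts sum_of_squares.(1, 3)      # 1+4+9 = 14
puts sum_of_powers_of_2.(1, 3)  # 2+4+8 = 14
```

&lt;br /&gt;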
==Haskell==&lt;br /&gt;
In Haskell, all functions are in curried form.  In other words, every function takes just one argument.  This notion is mostly hidden from the user.  For instance, we can divide two numbers, 8 and 4, by calling 'div 8 4'.  The function does not take in two numbers and return the result, as one might think.  In Haskell, the 'div' function has the type:&lt;br /&gt;
&lt;br /&gt;
         div :: Int -&amp;gt; Int -&amp;gt; Int&lt;br /&gt;
&lt;br /&gt;
The three 'Int' types above denote the first argument, the second argument, and the result respectively.  The division is computed in two steps.  First, 'div' is applied to the first argument, 8, and returns a function of type 'Int -&amp;gt; Int'.  This new function is then applied to the second argument, 4, and returns the result, 2 &amp;lt;ref&amp;gt;http://www.haskell.org/haskellwiki/Currying&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==JavaScript==&lt;br /&gt;
In JavaScript, there is no built-in currying function.  However, we can create a curried function manually, using the fact that functions can return other functions.  For instance, if we have an uncurried add function as defined below,&lt;br /&gt;
&lt;br /&gt;
    var add = function(a, b) {&lt;br /&gt;
        return a + b;&lt;br /&gt;
    };&lt;br /&gt;
&lt;br /&gt;
we can curry the function by returning another function.  The curried form of the add function is shown below &amp;lt;ref&amp;gt;http://extralogical.net/articles/currying-javascript.html&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
    var curriedAdd = function(a) {&lt;br /&gt;
        return function(b) {&lt;br /&gt;
            return a + b;&lt;br /&gt;
        };&lt;br /&gt;
    };&lt;br /&gt;
&lt;br /&gt;
==ML==&lt;br /&gt;
ML stands for Metalanguage.  In ML, all anonymous functions are curried.  An anonymous function is a function that has no name and takes exactly one parameter.  Shown below is an example of an anonymous function:&lt;br /&gt;
&lt;br /&gt;
    val add = (fn x =&amp;gt; (fn y =&amp;gt; x + y));&lt;br /&gt;
&lt;br /&gt;
We can also represent this function in its uncurried form by taking both arguments as a tuple instead of using nested anonymous functions &amp;lt;ref&amp;gt;http://www.svendtofte.com/code/curried_javascript/&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
    fun add (x, y) = x + y&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
In conclusion, currying is used throughout Mathematics and Computer Science.  It is the underlying idea behind the programming language Haskell as well as the lambda calculus.  Currying is a very powerful tool that, used correctly, can greatly simplify a function.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=51565</id>
		<title>CSC/ECE 517 Fall 2011/ch1 2a sd</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=51565"/>
		<updated>2011-09-30T19:01:51Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Currying=&lt;br /&gt;
Currying is a technique used in Computer Science and Mathematics to transform a function with n parameters into a function with one parameter.  This new function returns another function that takes the remaining n-1 arguments.  For example, we can define a function as add(x, y) -&amp;gt; x + y.  This function takes in two parameters, x and y, and returns x + y.  If we wanted to evaluate add(3, 4), here is how we would do it:&lt;br /&gt;
&lt;br /&gt;
    add(3, 4) ↦ x + y&lt;br /&gt;
    = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The function is evaluated all in one step.  The curried form of this function would be: add(x) -&amp;gt; (y -&amp;gt; x + y).  The add function takes in one parameter, x, and returns another function taking in y.  The second function then returns the result x + y.  If we wanted to evaluate add(3, 4) in curried form, the following steps should be taken:&lt;br /&gt;
&lt;br /&gt;
    add(3) ↦ (y ↦ 3 + y)&lt;br /&gt;
    (y ↦ 3 + y)(4) ↦ 3 + 4&lt;br /&gt;
    = 7&lt;br /&gt;
&lt;br /&gt;
The curried function is evaluated in two steps instead of one.  Also, x is treated as a constant in the second function.&lt;br /&gt;
&lt;br /&gt;
As shown in the given example, the curried and uncurried forms of the add function produce the same result.  Note also that the transformation works both ways: just as a function can be curried, a curried function can be uncurried.  To transform a curried function into an uncurried one, we simply reverse the steps.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Currying was first discovered by Moses Schönfinkel, a Russian logician, in 1924&amp;lt;ref&amp;gt;http://c2.com/cgi/wiki?CurryingSchonfinkelling&amp;lt;/ref&amp;gt; and later re-discovered by Haskell Curry, an American mathematician and logician.  Although Schönfinkel discovered it first, the technique was ultimately named after Curry.  For this reason, some have suggested that a more appropriate name would be Schönfinkeling.&lt;br /&gt;
&lt;br /&gt;
=Currying in Math=&lt;br /&gt;
Currying is used throughout Mathematics.  Most notably, currying appears in the [http://en.wikipedia.org/wiki/Lambda_calculus lambda calculus], which is built on the idea that every function takes exactly one parameter&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Lambda_calculus&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Currying vs. Partial Functions==&lt;br /&gt;
There is a small distinction between currying and [http://en.wikipedia.org/wiki/Partial_function partial functions].  A partial function simplifies another function by fixing one or more of its arguments to constants.  Suppose we have a function f(x, y, z) -&amp;gt; x + y + z.  If we curry the function, we get f(x) -&amp;gt; (y -&amp;gt; (z -&amp;gt; x + y + z)).  To create a partial function, we set x, y, and/or z to a constant value.  If we want x to always be 1, we remove x as a parameter and fix it at 1: f(y, z) -&amp;gt; 1 + y + z.  One thing to point out is that each function in a curried chain takes exactly one parameter, while a partial function can take one or more&amp;lt;ref&amp;gt;http://en.wikipedia.org/wiki/Partial_function&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
=Currying in Programming=&lt;br /&gt;
Currying can be used in programming to simplify code and make it easier to read.  It reduces repetition by factoring the common parts of different functions into one.  Many programming languages in use today implement currying; it is most evident in functional programming languages.  Other languages, such as Java and C++, do not have built-in support for currying.  Shown below are a few languages and their implementations of currying.&lt;br /&gt;
&lt;br /&gt;
==Ruby==&lt;br /&gt;
In Ruby 1.9, the curry method was added to Proc objects.  Previous versions of Ruby do not have a dedicated curry method.  Here is an example of using the curry method in Ruby &amp;lt;ref&amp;gt;http://www.khelll.com/blog/ruby/ruby-currying/&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
Suppose we have 3 different functions: &lt;br /&gt;
&lt;br /&gt;
1.  sum_ints:  this function takes in two arguments, a and b, and determines the sum of all integers between a and b.&lt;br /&gt;
&lt;br /&gt;
         sum_ints = lambda do |a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
2.  sum_of_squares: this function takes in two arguments, a and b, and returns the sum of the square of the integers from a to b.&lt;br /&gt;
&lt;br /&gt;
         sum_of_squares = lambda do |a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += n**2 } ;s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
3.  sum_of_powers_of_2: this function takes in two arguments, a and b, and returns the sum of the powers of two from a to b.&lt;br /&gt;
&lt;br /&gt;
         sum_of_powers_of_2 = lambda do |a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += 2**n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
We can note that each function follows a similar pattern.  They all use the summation function to sum a group of numbers from a to b.  We can pull this functionality out and write one main function that all three functions will use:&lt;br /&gt;
&lt;br /&gt;
         sum = lambda do |f,a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += f.(n) } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
This function takes in three arguments: f, a, and b.  The argument 'f' is the function we want to use on each element in the summation.  The arguments 'a' and 'b' are used to define the lower and upper bound respectively.  Now we can create the curried form of the 'sum' function.&lt;br /&gt;
&lt;br /&gt;
         currying = sum.curry&lt;br /&gt;
 &lt;br /&gt;
With this curried function we can create the three partial functions defined above.&lt;br /&gt;
&lt;br /&gt;
         sum_ints = currying.(lambda{|x| x})&lt;br /&gt;
         sum_of_squares = currying.(lambda{|x| x**2})&lt;br /&gt;
         sum_of_powers_of_2 = currying.(lambda{|x| 2**x})&lt;br /&gt;
&lt;br /&gt;
==Haskell==&lt;br /&gt;
In Haskell, all functions are in curried form.  In other words, every function takes just one argument.  This notion is mostly hidden from the user.  For instance, we can divide two numbers, 8 and 4, by calling 'div 8 4'.  The function does not take in two numbers and return the result, as one might think.  In Haskell, the 'div' function has the type:&lt;br /&gt;
&lt;br /&gt;
         div :: Int -&amp;gt; Int -&amp;gt; Int&lt;br /&gt;
&lt;br /&gt;
The three 'Int' types above denote the first argument, the second argument, and the result respectively.  The division is computed in two steps.  First, 'div' is applied to the first argument, 8, and returns a function of type 'Int -&amp;gt; Int'.  This new function is then applied to the second argument, 4, and returns the result, 2 &amp;lt;ref&amp;gt;http://www.haskell.org/haskellwiki/Currying&amp;lt;/ref&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==JavaScript==&lt;br /&gt;
In JavaScript, there is no built-in currying function.  However, we can create a curried function manually, using the fact that functions can return other functions.  For instance, if we have an uncurried add function as defined below,&lt;br /&gt;
&lt;br /&gt;
    var add = function(a, b) {&lt;br /&gt;
        return a + b;&lt;br /&gt;
    };&lt;br /&gt;
&lt;br /&gt;
we can curry the function by returning another function.  The curried form of the add function is shown below &amp;lt;ref&amp;gt;http://extralogical.net/articles/currying-javascript.html&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
    var curriedAdd = function(a) {&lt;br /&gt;
        return function(b) {&lt;br /&gt;
            return a + b;&lt;br /&gt;
        };&lt;br /&gt;
    };&lt;br /&gt;
&lt;br /&gt;
==ML==&lt;br /&gt;
ML stands for Metalanguage.  In ML, all anonymous functions are curried.  An anonymous function is a function that has no name and takes exactly one parameter.  Shown below is an example of an anonymous function:&lt;br /&gt;
&lt;br /&gt;
    val add = (fn x =&amp;gt; (fn y =&amp;gt; x + y));&lt;br /&gt;
&lt;br /&gt;
We can also represent this function in its uncurried form by taking both arguments as a tuple instead of using nested anonymous functions &amp;lt;ref&amp;gt;http://www.svendtofte.com/code/curried_javascript/&amp;lt;/ref&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
    fun add (x, y) = x + y&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
In conclusion, currying is used throughout Mathematics and Computer Science.  It is the underlying idea behind the programming language Haskell as well as the lambda calculus.  Currying is a very powerful tool that, used correctly, can greatly simplify a function.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;references /&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=51561</id>
		<title>CSC/ECE 517 Fall 2011/ch1 2a sd</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=51561"/>
		<updated>2011-09-30T18:41:41Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Currying=&lt;br /&gt;
Currying is a technique used in Computer Science and Mathematics to transform a function with n parameters into a function with one parameter.  This new function returns another function that takes the remaining n-1 arguments.  For example, we can define a function as add(x, y) -&amp;gt; x + y.  This function takes in two parameters, x and y, and returns x + y.  If we wanted to evaluate add(3, 4), here is how we would do it:&lt;br /&gt;
&lt;br /&gt;
    add(3, 4) ↦ x + y&lt;br /&gt;
    = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The function is evaluated all in one step.  The curried form of this function would be: add(x) -&amp;gt; (y -&amp;gt; x + y).  The add function takes in one parameter, x, and returns another function taking in y.  The second function then returns the result x + y.  If we wanted to evaluate add(3, 4) in curried form, the following steps should be taken:&lt;br /&gt;
&lt;br /&gt;
    add(3) ↦ (y ↦ 3 + y)&lt;br /&gt;
    (y ↦ 3 + y)(4)&lt;br /&gt;
    = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The curried function is evaluated in two steps instead of one.  Also, x is treated as a constant inside the second function.&lt;br /&gt;
&lt;br /&gt;
As shown in the example, the curried and uncurried forms of the add function produce the same result.  Note also that the transformation works both ways: a function can be curried, and a curried function can be uncurried.  To transform a curried function into an uncurried one, we simply reverse the steps.&lt;br /&gt;
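&lt;br /&gt;
The add example above can be sketched directly in Ruby (an illustrative sketch; the variable names are our own):&lt;br /&gt;

```ruby
# Uncurried form: both arguments are supplied and evaluated in one step.
add = lambda { |x, y| x + y }

# Curried form: a function of x that returns a function of y.
curried_add = lambda { |x| lambda { |y| x + y } }

add.call(3, 4)              # => 7, in one step
curried_add.call(3).call(4) # => 7, binding x = 3 first, then y = 4
```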
&lt;br /&gt;
==Background==&lt;br /&gt;
Currying was first discovered by Moses Schönfinkel, a Russian logician, in 1924[http://c2.com/cgi/wiki?CurryingSchonfinkelling] and later rediscovered by Haskell Curry, an American mathematician and logician.  Although Schönfinkel was the first to discover it, the process was ultimately named after Curry.  For this reason, some people have argued that a more appropriate name would be Schönfinkeling.&lt;br /&gt;
&lt;br /&gt;
=Currying in Math=&lt;br /&gt;
Currying is used throughout mathematics.  Most notably, it appears in [http://en.wikipedia.org/wiki/Lambda_calculus lambda calculus], which is built on the idea that every function takes exactly one parameter[http://en.wikipedia.org/wiki/Lambda_calculus].&lt;br /&gt;
&lt;br /&gt;
==Currying vs. Partial Functions==&lt;br /&gt;
There is a small distinction between currying and [http://en.wikipedia.org/wiki/Partial_function partial functions].  A partial function simplifies another function by fixing one or more of its arguments to constants; this technique is also known as partial application.  Suppose we have a function f(x, y, z) -&amp;gt; x + y + z.  If we were to curry the function, we would get f(x) -&amp;gt; (y -&amp;gt; (z -&amp;gt; x + y + z)).  To create a partial function, we can set x, y, and/or z to a constant value.  If we wanted x to always be 1, we remove x as a parameter and set it to 1 as follows: f(y, z) -&amp;gt; 1 + y + z.  One thing to point out is that curried functions take only one parameter at a time, while partial functions can take one or more[http://en.wikipedia.org/wiki/Partial_function].&lt;br /&gt;
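&lt;br /&gt;
The distinction can be sketched in Ruby, which supports both styles (a hedged illustration; the names are our own):&lt;br /&gt;

```ruby
f = lambda { |x, y, z| x + y + z }

# Currying: transform f into a chain of one-parameter functions.
curried_f = f.curry
curried_f[1][2][3]   # => 6, one argument at a time

# Partial application: fix x = 1, leaving a function of y and z.
partial_f = lambda { |y, z| f.call(1, y, z) }
partial_f.call(2, 3) # => 6, remaining arguments at once
```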
&lt;br /&gt;
=Currying in Programming=&lt;br /&gt;
Currying can be used in programming to simplify code and make it easier to read.  It reduces repetition by factoring the common parts of different functions into one.  Many programming languages in use today support currying, and it is most evident in functional programming languages.  Other languages, such as Java and C++, have no built-in support for it.  Shown below are a few languages and their implementations of currying.&lt;br /&gt;
&lt;br /&gt;
==Ruby==&lt;br /&gt;
In Ruby 1.9, the curry method was added to Proc objects.  Previous versions of Ruby do not have a dedicated curry method.  Here is an example of using the curry method in Ruby [http://www.khelll.com/blog/ruby/ruby-currying/].&lt;br /&gt;
&lt;br /&gt;
Suppose we have 3 different functions: &lt;br /&gt;
&lt;br /&gt;
1.  sum_ints:  this function takes in two arguments, a and b, and determines the sum of all integers between a and b.&lt;br /&gt;
&lt;br /&gt;
         sum_ints = lambda do |a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
2.  sum_of_squares: this function takes in two arguments, a and b, and returns the sum of the square of the integers from a to b.&lt;br /&gt;
&lt;br /&gt;
         sum_of_squares = lambda do |a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += n**2 } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
3.  sum_of_powers_of_2: this function takes in two arguments, a and b, and returns the sum of the powers of two from a to b.&lt;br /&gt;
&lt;br /&gt;
         sum_of_powers_of_2 = lambda do |a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += 2**n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
Note that each function follows a similar pattern: each sums a series of values computed from the numbers a to b.  We can pull this functionality out and write one general function that all three will use:&lt;br /&gt;
&lt;br /&gt;
         sum = lambda do |f,a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += f.(n) } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
This function takes three arguments: f, a, and b.  The argument 'f' is the function applied to each element in the summation, while 'a' and 'b' define the lower and upper bounds respectively.  Now we can create the curried form of the 'sum' function.&lt;br /&gt;
&lt;br /&gt;
         currying = sum.curry&lt;br /&gt;
 &lt;br /&gt;
With this curried function we can recreate the three functions defined above as partial applications.&lt;br /&gt;
&lt;br /&gt;
         sum_ints = currying.(lambda{|x| x})&lt;br /&gt;
         sum_of_squares = currying.(lambda{|x| x**2})&lt;br /&gt;
         sum_of_powers_of_2 = currying.(lambda{|x| 2**x})&lt;br /&gt;
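&lt;br /&gt;
As a quick check (our own usage sketch), each derived lambda still expects the bounds a and b; a curried Proc accepts the remaining arguments together:&lt;br /&gt;

```ruby
# The general summation lambda and its curried form, as above.
sum = lambda do |f, a, b|
  s = 0
  a.upto(b) { |n| s += f.call(n) }
  s
end
currying = sum.curry

sum_ints           = currying.call(lambda { |x| x })
sum_of_squares     = currying.call(lambda { |x| x**2 })
sum_of_powers_of_2 = currying.call(lambda { |x| 2**x })

sum_ints.call(1, 4)           # 1 + 2 + 3 + 4 = 10
sum_of_squares.call(1, 3)     # 1 + 4 + 9 = 14
sum_of_powers_of_2.call(0, 3) # 1 + 2 + 4 + 8 = 15
```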
&lt;br /&gt;
==Haskell==&lt;br /&gt;
In Haskell, all functions are in curried form; in other words, every function takes exactly one argument.  This is mostly hidden from the user.  For instance, we can divide two numbers, 8 and 4, by calling 'div 8 4'.  The function does not take in two numbers and return the result, as one might think.  In Haskell, the 'div' function is defined as:&lt;br /&gt;
&lt;br /&gt;
         div :: Int -&amp;gt; Int -&amp;gt; Int&lt;br /&gt;
&lt;br /&gt;
The two argument types and the result type appear in that order.  The division is computed in two steps.  First, 'div' is applied to the first argument, 8, returning a function of type 'Int -&amp;gt; Int'.  This new function is then applied to the second argument, 4, returning the result, 2 [http://www.haskell.org/haskellwiki/Currying].&lt;br /&gt;
&lt;br /&gt;
==JavaScript==&lt;br /&gt;
In JavaScript, there is no built-in curry function.  However, we can create a curried function manually, using the fact that functions can return other functions.  For instance, if we have an uncurried add function as defined below,&lt;br /&gt;
&lt;br /&gt;
    var add = function(a, b) {&lt;br /&gt;
        return a + b;&lt;br /&gt;
    };&lt;br /&gt;
&lt;br /&gt;
we can curry the function by returning another function.  The curried form of the add function is shown below [http://extralogical.net/articles/currying-javascript.html]:&lt;br /&gt;
&lt;br /&gt;
    var curriedAdd = function(a) {&lt;br /&gt;
        return function(b) {&lt;br /&gt;
            return a + b;&lt;br /&gt;
        };&lt;br /&gt;
    };&lt;br /&gt;
&lt;br /&gt;
==ML==&lt;br /&gt;
ML stands for metalanguage.  In ML, all anonymous functions are curried.  An anonymous function is a function that has no name and can take only one parameter.  Shown below is an example of a curried anonymous function:&lt;br /&gt;
&lt;br /&gt;
    val add = (fn x =&amp;gt; (fn y =&amp;gt; x + y));&lt;br /&gt;
&lt;br /&gt;
We can write the same curried function more concisely using the 'fun' syntax, which is shorthand for the nested anonymous functions above [http://www.svendtofte.com/code/curried_javascript/]:&lt;br /&gt;
&lt;br /&gt;
    fun add x y = x + y&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
In conclusion, currying is used throughout mathematics and computer science.  It is the underlying idea behind the programming language Haskell as well as lambda calculus.  Currying is a powerful tool that, used correctly, can greatly simplify a function.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] http://c2.com/cgi/wiki?CurryingSchonfinkelling&lt;br /&gt;
&lt;br /&gt;
[2] http://en.wikipedia.org/wiki/Lambda_calculus&lt;br /&gt;
&lt;br /&gt;
[3] http://en.wikipedia.org/wiki/Partial_function&lt;br /&gt;
&lt;br /&gt;
[4] http://www.khelll.com/blog/ruby/ruby-currying/&lt;br /&gt;
&lt;br /&gt;
[5] http://www.haskell.org/haskellwiki/Currying&lt;br /&gt;
&lt;br /&gt;
[6] http://www.svendtofte.com/code/curried_javascript/&lt;br /&gt;
&lt;br /&gt;
[7] http://extralogical.net/articles/currying-javascript.html&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=50057</id>
		<title>CSC/ECE 517 Fall 2011/ch1 2a sd</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=50057"/>
		<updated>2011-09-21T21:27:23Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Currying=&lt;br /&gt;
Currying is a technique used in computer science and mathematics to transform a function of n parameters into a function of one parameter.  The resulting function takes the first argument and returns another function of the remaining n-1 arguments.  For example, we can define a function as add(x, y) -&amp;gt; x + y.  This function takes two parameters, x and y, and returns x + y.  If we wanted to evaluate add(3, 4), here is how we would do it:&lt;br /&gt;
&lt;br /&gt;
    add(3, 4) ↦ 3 + 4&lt;br /&gt;
    = 7&lt;br /&gt;
&lt;br /&gt;
The function is evaluated all in one step.  The curried form of this function would be: add(x) -&amp;gt; (y -&amp;gt; x + y).  The add function takes in one parameter, x, and returns another function taking in y.  The second function then returns the result x + y.  If we wanted to evaluate add(3, 4) in curried form, the following steps should be taken:&lt;br /&gt;
&lt;br /&gt;
    add(3) ↦ (y ↦ 3 + y)&lt;br /&gt;
    (y ↦ 3 + y)(4)&lt;br /&gt;
    = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The curried function is evaluated in two steps instead of one.  Also, x is treated as a constant inside the second function.&lt;br /&gt;
&lt;br /&gt;
As shown in the example, the curried and uncurried forms of the add function produce the same result.  Note also that the transformation works both ways: a function can be curried, and a curried function can be uncurried.  To transform a curried function into an uncurried one, we simply reverse the steps.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Currying was first discovered by Moses Schönfinkel, a Russian logician, in 1924[http://c2.com/cgi/wiki?CurryingSchonfinkelling] and later rediscovered by Haskell Curry, an American mathematician and logician.  Although Schönfinkel was the first to discover it, the process was ultimately named after Curry.  For this reason, some people have argued that a more appropriate name would be Schönfinkeling.&lt;br /&gt;
&lt;br /&gt;
=Currying in Math=&lt;br /&gt;
Currying is used throughout mathematics.  Most notably, it appears in [http://en.wikipedia.org/wiki/Lambda_calculus lambda calculus], which is built on the idea that every function takes exactly one parameter[http://en.wikipedia.org/wiki/Lambda_calculus].&lt;br /&gt;
&lt;br /&gt;
==Currying vs. Partial Functions==&lt;br /&gt;
There is a small distinction between currying and [http://en.wikipedia.org/wiki/Partial_function partial functions].  A partial function simplifies another function by fixing one or more of its arguments to constants; this technique is also known as partial application.  Suppose we have a function f(x, y, z) -&amp;gt; x + y + z.  If we were to curry the function, we would get f(x) -&amp;gt; (y -&amp;gt; (z -&amp;gt; x + y + z)).  To create a partial function, we can set x, y, and/or z to a constant value.  If we wanted x to always be 1, we remove x as a parameter and set it to 1 as follows: f(y, z) -&amp;gt; 1 + y + z.  One thing to point out is that curried functions take only one parameter at a time, while partial functions can take one or more[http://en.wikipedia.org/wiki/Partial_function].&lt;br /&gt;
&lt;br /&gt;
=Currying in Programming=&lt;br /&gt;
Currying can be used in programming to simplify code and make it easier to read.  It reduces repetition by factoring the common parts of different functions into one.  Many programming languages in use today support currying, and it is most evident in functional programming languages.  Other languages, such as Java and C++, have no built-in support for it.  Shown below are a few languages and their implementations of currying.&lt;br /&gt;
&lt;br /&gt;
==Ruby==&lt;br /&gt;
In Ruby 1.9, the curry method was added to Proc objects.  Previous versions of Ruby do not have a dedicated curry method.  Here is an example of using the curry method in Ruby [http://www.khelll.com/blog/ruby/ruby-currying/].&lt;br /&gt;
&lt;br /&gt;
Suppose we have 3 different functions: &lt;br /&gt;
&lt;br /&gt;
1.  sum_ints:  this function takes in two arguments, a and b, and determines the sum of all integers between a and b.&lt;br /&gt;
&lt;br /&gt;
         sum_ints = lambda do |a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
2.  sum_of_squares: this function takes in two arguments, a and b, and returns the sum of the square of the integers from a to b.&lt;br /&gt;
&lt;br /&gt;
         sum_of_squares = lambda do |a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += n**2 } ;s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
3.  sum_of_powers_of_2: this function takes in two arguments, a and b, and returns the sum of the powers of two from a to b.&lt;br /&gt;
&lt;br /&gt;
         sum_of_powers_of_2 = lambda do |a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += 2**n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
Note that each function follows the same pattern: each one sums a sequence of values derived from the integers a to b.  We can pull this common functionality out and write one main function that all three functions will use:&lt;br /&gt;
&lt;br /&gt;
         sum = lambda do |f,a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += f.(n) } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
This function takes in three arguments: f, a, and b.  The argument 'f' is the function we want to use on each element in the summation.  The arguments 'a' and 'b' are used to define the lower and upper bound respectively.  Now we can create the curried form of the 'sum' function.&lt;br /&gt;
&lt;br /&gt;
         currying = sum.curry&lt;br /&gt;
 &lt;br /&gt;
With this curried function, we can recreate the three functions defined above as partial applications.&lt;br /&gt;
&lt;br /&gt;
         sum_ints = currying.(lambda{|x| x})&lt;br /&gt;
         sum_of_squares = currying.(lambda{|x| x**2})&lt;br /&gt;
         sum_of_powers_of_2 = currying.(lambda{|x| 2**x})&lt;br /&gt;
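Putting the pieces together, a quick check (a sketch; the bounds are chosen arbitrarily) confirms that the curried versions behave like the originals:

```ruby
# Shared summation: apply f to each integer from a to b and sum the results.
sum = lambda do |f, a, b|
  s = 0
  a.upto(b) { |n| s += f.(n) }
  s
end

currying = sum.curry
sum_ints           = currying.(lambda { |x| x })
sum_of_squares     = currying.(lambda { |x| x**2 })
sum_of_powers_of_2 = currying.(lambda { |x| 2**x })

puts sum_ints.(1, 4)            # 1+2+3+4 => 10
puts sum_of_squares.(1, 3)      # 1+4+9   => 14
puts sum_of_powers_of_2.(0, 3)  # 1+2+4+8 => 15
```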
&lt;br /&gt;
==Haskell==&lt;br /&gt;
In Haskell, all functions are in curried form; in other words, every function takes exactly one argument.  This is mostly hidden from the user.  For instance, we can divide two numbers, 8 and 4, by calling 'div 8 4'.  The function does not take in two numbers and return the result, as one might think.  In Haskell, the 'div' function (specialized to Int here) is declared as:&lt;br /&gt;
&lt;br /&gt;
         div :: Int -&amp;gt; Int -&amp;gt; Int&lt;br /&gt;
&lt;br /&gt;
The three Ints above represent the type of the first argument, the second argument, and the result, respectively.  The division is computed in two steps.  First, 'div' is applied to the first argument, 8, and returns a function of type 'Int -&amp;gt; Int'.  This new function is then applied to the second argument, 4, and returns the result, 2 [http://www.haskell.org/haskellwiki/Currying].&lt;br /&gt;
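The same two-step evaluation can be mimicked in Ruby (a sketch of the idea, not how Haskell itself executes):

```ruby
# Curried integer division: div.(8) returns a one-argument function,
# analogous to Haskell's partially applied (div 8).
div = lambda { |a, b| a / b }.curry

step1 = div.(8)     # a function awaiting the divisor
puts step1.(4)      # => 2
```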
&lt;br /&gt;
==JavaScript==&lt;br /&gt;
In JavaScript, there is no built-in currying function.  However, we can create a curried function manually, using the fact that functions can return other functions.  For instance, if we have an uncurried add function as defined below,&lt;br /&gt;
&lt;br /&gt;
    var add = function(a, b) {&lt;br /&gt;
        return a + b;&lt;br /&gt;
    };&lt;br /&gt;
&lt;br /&gt;
we can curry the function by returning another function.  The curried form of the add function is shown below [http://extralogical.net/articles/currying-javascript.html]:&lt;br /&gt;
&lt;br /&gt;
    var curriedAdd = function(a) {&lt;br /&gt;
        return function(b) {&lt;br /&gt;
            return a + b;&lt;br /&gt;
        };&lt;br /&gt;
    };&lt;br /&gt;
&lt;br /&gt;
==ML==&lt;br /&gt;
ML stands for Metalanguage.  In ML, all anonymous functions are curried.  An anonymous function is a function that has no name and takes exactly one parameter.  Shown below is an example of an anonymous curried function:&lt;br /&gt;
&lt;br /&gt;
    val add = (fn x =&amp;gt; (fn y =&amp;gt; x + y));&lt;br /&gt;
&lt;br /&gt;
We can also write the same curried function more concisely, without anonymous functions [http://www.svendtofte.com/code/curried_javascript/]:&lt;br /&gt;
&lt;br /&gt;
    fun add x y = x + y&lt;br /&gt;
&lt;br /&gt;
=Conclusion=&lt;br /&gt;
In conclusion, currying is used throughout Mathematics and Computer Science.  It is the underlying idea behind lambda calculus as well as the programming language Haskell.  Currying is a very powerful tool that, if used correctly, can greatly simplify a function.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] http://c2.com/cgi/wiki?CurryingSchonfinkelling&lt;br /&gt;
&lt;br /&gt;
[2] http://en.wikipedia.org/wiki/Currying&lt;br /&gt;
&lt;br /&gt;
[3] http://en.wikipedia.org/wiki/Partial_function&lt;br /&gt;
&lt;br /&gt;
[4] http://www.khelll.com/blog/ruby/ruby-currying/&lt;br /&gt;
&lt;br /&gt;
[5] http://www.haskell.org/haskellwiki/Currying&lt;br /&gt;
&lt;br /&gt;
[6] http://en.wikipedia.org/wiki/Lambda_calculus&lt;br /&gt;
&lt;br /&gt;
[7] http://extralogical.net/articles/currying-javascript.html&lt;br /&gt;
&lt;br /&gt;
[8] http://www.svendtofte.com/code/curried_javascript/&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=50041</id>
		<title>CSC/ECE 517 Fall 2011/ch1 2a sd</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=50041"/>
		<updated>2011-09-21T19:55:11Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Currying=&lt;br /&gt;
Currying is a technique used in Computer Science and Mathematics to transform a function with n parameters into a function with one parameter.  This new function would return a function with n-1 parameters.  A function can either be thought of as having n arguments or a function with 1 argument that maps to another function with n-1 arguments.  For example, we can define a function as add(x, y) -&amp;gt; x + y.  This function takes in two parameters, x and y, and returns x + y.  If we wanted to evaluate add(3, 4) here is how we would do it:&lt;br /&gt;
&lt;br /&gt;
    add(3, 4) ↦ x + y&lt;br /&gt;
    = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The function is evaluated all in one step.  The curried form of this function would be: add(x) -&amp;gt; (y -&amp;gt; x + y).  The add function takes in one parameter, x, and returns another function taking in y.  The second function then returns the result x + y.  If we wanted to evaluate add(3, 4) in curried form, the following steps should be taken:&lt;br /&gt;
&lt;br /&gt;
    add(3) ↦ (y ↦ 3 + y)&lt;br /&gt;
    (y ↦ 3 + y)(4)&lt;br /&gt;
    = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The curried function is evaluated in two steps instead of one.  Also, x is treated like a constant in the second function.&lt;br /&gt;
&lt;br /&gt;
As shown in the given example, the curried and uncurried form of the add function produce the same result.  Another thing to note, a function can be curried but also a curried function can be uncurried.  To transform a curried function into an uncurried one, we just need to reverse the steps.&lt;br /&gt;
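This equivalence can be checked in Ruby (assuming Ruby 1.9 or later for Proc#curry):

```ruby
add = lambda { |x, y| x + y }   # uncurried form
curried_add = add.curry         # curried form: add(x) -> (y -> x + y)

puts add.(3, 4)                 # => 7
puts curried_add.(3).(4)        # => 7  (two steps: first 3, then 4)
```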
&lt;br /&gt;
A function can be defined as f(x, y, z) -&amp;gt; q, where it takes in three arguments, x, y, and z, and returns q.  To express this in curried form, the function is split into multiple functions of one argument each: f(x) -&amp;gt; (y -&amp;gt; (z -&amp;gt; q)).  The function f takes x as an argument and returns another function, which takes y as an argument.  That function returns yet another function, which takes z as an argument and returns q.&lt;br /&gt;
&lt;br /&gt;
Currying can be used in programming to simplify code and make it easier to read.  It reduces code repetition by combining the common parts of different functions into one.&lt;br /&gt;
&lt;br /&gt;
In contrast, there is also uncurrying: the technique of transforming a one-argument function that returns a function back into a function with many arguments.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Currying was first discovered by Moses Schönfinkel, a Russian logician, in 1924[http://c2.com/cgi/wiki?CurryingSchonfinkelling] and later re-discovered by Haskell Curry, an American mathematician and logician.  Although Schönfinkel was the first to discover it, the process was ultimately named after Curry.  For this reason, some people have suggested that a more appropriate name would be Schönfinkeling.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Currying in Math=&lt;br /&gt;
Currying is used throughout Mathematics.  Most notably, it is used in lambda calculus, where every function takes exactly one argument, so functions of several arguments are expressed through currying.&lt;br /&gt;
&lt;br /&gt;
==Currying vs. Partial Functions==&lt;br /&gt;
There is a small distinction between currying and partial functions.  A partial function simplifies another function by fixing one or more of its arguments to constant values.  In contrast, a curried function takes in one parameter and returns another function that accepts the next parameter.&lt;br /&gt;
&lt;br /&gt;
=Currying in Programming=&lt;br /&gt;
Currying is used in several programming languages.&lt;br /&gt;
==Ruby==&lt;br /&gt;
In Ruby 1.9, the curry method was added to Proc objects; previous versions of Ruby do not have a dedicated curry method.  Here is an example of using the curry method in Ruby [http://www.khelll.com/blog/ruby/ruby-currying/].&lt;br /&gt;
&lt;br /&gt;
Suppose we have 3 different functions: &lt;br /&gt;
&lt;br /&gt;
1.  sum_ints:  this function takes in two arguments, a and b, and determines the sum of all integers between a and b.&lt;br /&gt;
&lt;br /&gt;
         sum_ints = lambda do |a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
2.  sum_of_squares: this function takes in two arguments, a and b, and returns the sum of the square of the integers from a to b.&lt;br /&gt;
&lt;br /&gt;
         sum_of_squares = lambda do |a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += n**2 } ;s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
3.  sum_of_powers_of_2: this function takes in two arguments, a and b, and returns the sum of the powers of two from a to b.&lt;br /&gt;
&lt;br /&gt;
         sum_of_powers_of_2 = lambda do |a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += 2**n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
We can note that each function follows a similar pattern.  They all use the summation function to sum a group of numbers from a to b.  We can pull this functionality out and write one main function that all three functions will use:&lt;br /&gt;
&lt;br /&gt;
         sum = lambda do |f,a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += f.(n) } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
This function takes in three arguments: f, a, and b.  The argument 'f' is the function we want to use on each element in the summation.  The arguments 'a' and 'b' are used to define the lower and upper bound respectively.  Now we can create the curried form of the 'sum' function.&lt;br /&gt;
&lt;br /&gt;
         currying = sum.curry&lt;br /&gt;
 &lt;br /&gt;
With this currying function we can create the three partial functions defined above.&lt;br /&gt;
         sum_ints = currying.(lambda{|x| x})&lt;br /&gt;
         sum_of_squares = currying.(lambda{|x| x**2})&lt;br /&gt;
         sum_of_powers_of_2 = currying.(lambda{|x| 2**x})&lt;br /&gt;
&lt;br /&gt;
==Haskell==&lt;br /&gt;
In Haskell, all functions are in curried form; in other words, every function takes exactly one argument.  This is mostly hidden from the user.  For instance, we can divide two numbers, 8 and 4, by calling 'div 8 4'.  The function does not take in two numbers and return the result, as one might think.  In Haskell, the 'div' function (specialized to Int here) is declared as:&lt;br /&gt;
&lt;br /&gt;
         div :: Int -&amp;gt; Int -&amp;gt; Int&lt;br /&gt;
&lt;br /&gt;
The three Ints above represent the type of the first argument, the second argument, and the result, respectively.  The division is computed in two steps.  First, 'div' is applied to the first argument, 8, and returns a function of type 'Int -&amp;gt; Int'.  This new function is then applied to the second argument, 4, and returns the result, 2 [http://www.haskell.org/haskellwiki/Currying].&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] http://c2.com/cgi/wiki?CurryingSchonfinkelling&lt;br /&gt;
&lt;br /&gt;
[2] http://www.khelll.com/blog/ruby/ruby-currying/&lt;br /&gt;
&lt;br /&gt;
[3] http://www.haskell.org/haskellwiki/Currying&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=50040</id>
		<title>CSC/ECE 517 Fall 2011/ch1 2a sd</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=50040"/>
		<updated>2011-09-21T19:50:29Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Currying=&lt;br /&gt;
Currying is a technique used in Computer Science and Mathematics to transform a function with n parameters into a function with one parameter.  This new function would return a function with n-1 parameters.  A function can either be thought of as having n arguments or a function with 1 argument that maps to another function with n-1 arguments.  For example, we can define a function as add(x, y) -&amp;gt; x + y.  This function takes in two parameters, x and y, and returns x + y.  If we wanted to evaluate add(3, 4) here is how we would do it:&lt;br /&gt;
&lt;br /&gt;
    add(3, 4) ↦ x + y&lt;br /&gt;
    = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The function is evaluated all in one step.  The curried form of this function would be: add(x) -&amp;gt; (y -&amp;gt; x + y).  The add function takes in one parameter, x, and returns another function taking in y.  The second function then returns the result x + y.  If we wanted to evaluate add(3, 4) in curried form, the following steps should be taken:&lt;br /&gt;
&lt;br /&gt;
    add(3) ↦ (y ↦ 3 + y)&lt;br /&gt;
    (y ↦ 3 + y)(4)&lt;br /&gt;
    = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The curried function is evaluated in two steps instead of one.  Also, x is treated like a constant in the second function.&lt;br /&gt;
&lt;br /&gt;
As shown in the given example, the curried and uncurried form of the add function produce the same result.  Another thing to note, a function can be curried but also a curried function can be uncurried.  To transform a curried function into an uncurried one, we just need to reverse the steps.&lt;br /&gt;
&lt;br /&gt;
A function can be defined as f(x, y, z) -&amp;gt; q, where it takes in three arguments, x, y, and z, and returns q.  To express this in curried form, the function is split into multiple functions of one argument each: f(x) -&amp;gt; (y -&amp;gt; (z -&amp;gt; q)).  The function f takes x as an argument and returns another function, which takes y as an argument.  That function returns yet another function, which takes z as an argument and returns q.&lt;br /&gt;
&lt;br /&gt;
Currying can be used in programming to simplify code and make it easier to read.  It reduces code repetition by combining the common parts of different functions into one.&lt;br /&gt;
&lt;br /&gt;
In contrast, there is also uncurrying: the technique of transforming a one-argument function that returns a function back into a function with many arguments.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Currying was first discovered by Moses Schönfinkel, a Russian logician, in 1924[http://c2.com/cgi/wiki?CurryingSchonfinkelling] and later re-discovered by Haskell Curry, an American mathematician and logician.  Although Schönfinkel was the first to discover it, the process was ultimately named after Curry.  For this reason, some people have suggested that a more appropriate name would be Schönfinkeling.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Currying in Math=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Currying vs. Partial Functions==&lt;br /&gt;
There is a small distinction between currying and partial functions.  A partial function simplifies another function by fixing one or more of its arguments to constant values.  In contrast, a curried function takes in one parameter and returns another function that accepts the next parameter.&lt;br /&gt;
&lt;br /&gt;
=Currying in Programming=&lt;br /&gt;
Currying is used in several programming languages.&lt;br /&gt;
==Ruby==&lt;br /&gt;
In Ruby 1.9, the curry method was added to Proc objects; previous versions of Ruby do not have a dedicated curry method.  Here is an example of using the curry method in Ruby [http://www.khelll.com/blog/ruby/ruby-currying/].&lt;br /&gt;
&lt;br /&gt;
Suppose we have 3 different functions: &lt;br /&gt;
&lt;br /&gt;
1.  sum_ints:  this function takes in two arguments, a and b, and determines the sum of all integers between a and b.&lt;br /&gt;
&lt;br /&gt;
         sum_ints = lambda do |a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
2.  sum_of_squares: this function takes in two arguments, a and b, and returns the sum of the square of the integers from a to b.&lt;br /&gt;
&lt;br /&gt;
         sum_of_squares = lambda do |a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += n**2 } ;s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
3.  sum_of_powers_of_2: this function takes in two arguments, a and b, and returns the sum of the powers of two from a to b.&lt;br /&gt;
&lt;br /&gt;
         sum_of_powers_of_2 = lambda do |a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += 2**n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
We can note that each function follows a similar pattern.  They all use the summation function to sum a group of numbers from a to b.  We can pull this functionality out and write one main function that all three functions will use:&lt;br /&gt;
&lt;br /&gt;
         sum = lambda do |f,a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += f.(n) } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
This function takes in three arguments: f, a, and b.  The argument 'f' is the function we want to use on each element in the summation.  The arguments 'a' and 'b' are used to define the lower and upper bound respectively.  Now we can create the curried form of the 'sum' function.&lt;br /&gt;
&lt;br /&gt;
         currying = sum.curry&lt;br /&gt;
 &lt;br /&gt;
With this currying function we can create the three partial functions defined above.&lt;br /&gt;
         sum_ints = currying.(lambda{|x| x})&lt;br /&gt;
         sum_of_squares = currying.(lambda{|x| x**2})&lt;br /&gt;
         sum_of_powers_of_2 = currying.(lambda{|x| 2**x})&lt;br /&gt;
&lt;br /&gt;
==Haskell==&lt;br /&gt;
In Haskell, all functions are in curried form; in other words, every function takes exactly one argument.  This is mostly hidden from the user.  For instance, we can divide two numbers, 8 and 4, by calling 'div 8 4'.  The function does not take in two numbers and return the result, as one might think.  In Haskell, the 'div' function (specialized to Int here) is declared as:&lt;br /&gt;
&lt;br /&gt;
         div :: Int -&amp;gt; Int -&amp;gt; Int&lt;br /&gt;
&lt;br /&gt;
The three Ints above represent the type of the first argument, the second argument, and the result, respectively.  The division is computed in two steps.  First, 'div' is applied to the first argument, 8, and returns a function of type 'Int -&amp;gt; Int'.  This new function is then applied to the second argument, 4, and returns the result, 2 [http://www.haskell.org/haskellwiki/Currying].&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] http://c2.com/cgi/wiki?CurryingSchonfinkelling&lt;br /&gt;
&lt;br /&gt;
[2] http://www.khelll.com/blog/ruby/ruby-currying/&lt;br /&gt;
&lt;br /&gt;
[3] http://www.haskell.org/haskellwiki/Currying&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=50039</id>
		<title>CSC/ECE 517 Fall 2011/ch1 2a sd</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=50039"/>
		<updated>2011-09-21T19:50:05Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Currying=&lt;br /&gt;
Currying is a technique used in Computer Science and Mathematics to transform a function with n parameters into a function with one parameter.  This new function would return a function with n-1 parameters.  A function can either be thought of as having n arguments or a function with 1 argument that maps to another function with n-1 arguments.  For example, we can define a function as add(x, y) -&amp;gt; x + y.  This function takes in two parameters, x and y, and returns x + y.  If we wanted to evaluate add(3, 4) here is how we would do it:&lt;br /&gt;
&lt;br /&gt;
    add(3, 4) ↦ x + y&lt;br /&gt;
    = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The function is evaluated all in one step.  The curried form of this function would be: add(x) -&amp;gt; (y -&amp;gt; x + y).  The add function takes in one parameter, x, and returns another function taking in y.  The second function then returns the result x + y.  If we wanted to evaluate add(3, 4) in curried form, the following steps should be taken:&lt;br /&gt;
&lt;br /&gt;
    add(3) ↦ (y ↦ 3 + y)&lt;br /&gt;
    (y ↦ 3 + y)(4)&lt;br /&gt;
    = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The curried function is evaluated in two steps instead of one.  Also, x is treated like a constant in the second function.&lt;br /&gt;
&lt;br /&gt;
As shown in the given example, the curried and uncurried form of the add function produce the same result.  Another thing to note, a function can be curried but also a curried function can be uncurried.  To transform a curried function into an uncurried one, we just need to reverse the steps.&lt;br /&gt;
&lt;br /&gt;
A function can be defined as f(x, y, z) -&amp;gt; q, where it takes in three arguments, x, y, and z, and returns q.  To express this in curried form, the function is split into multiple functions of one argument each: f(x) -&amp;gt; (y -&amp;gt; (z -&amp;gt; q)).  The function f takes x as an argument and returns another function, which takes y as an argument.  That function returns yet another function, which takes z as an argument and returns q.&lt;br /&gt;
&lt;br /&gt;
Currying can be used in programming to simplify code and make it easier to read.  It reduces code repetition by combining the common parts of different functions into one.&lt;br /&gt;
&lt;br /&gt;
In contrast, there is also uncurrying: the technique of transforming a one-argument function that returns a function back into a function with many arguments.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Currying was first discovered by Moses Schönfinkel, a Russian logician, in 1924[http://c2.com/cgi/wiki?CurryingSchonfinkelling] and later re-discovered by Haskell Curry, an American mathematician and logician.  Although Schönfinkel was the first to discover it, the process was ultimately named after Curry.  For this reason, some people have suggested that a more appropriate name would be Schönfinkeling.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Currying in Math=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Currying vs. Partial Functions==&lt;br /&gt;
There is a small distinction between currying and partial functions.  A partial function simplifies another function by fixing one or more of its arguments to constant values.  In contrast, a curried function takes in one parameter and returns another function that accepts the next parameter.&lt;br /&gt;
&lt;br /&gt;
=Currying in Programming=&lt;br /&gt;
Currying is used in several programming languages.&lt;br /&gt;
==Ruby==&lt;br /&gt;
In Ruby 1.9, the curry method was added to Proc objects; previous versions of Ruby do not have a dedicated curry method.  Here is an example of using the curry method in Ruby [http://www.khelll.com/blog/ruby/ruby-currying/].&lt;br /&gt;
&lt;br /&gt;
Suppose we have 3 different functions: &lt;br /&gt;
&lt;br /&gt;
1.  sum_ints:  this function takes in two arguments, a and b, and determines the sum of all integers between a and b.&lt;br /&gt;
&lt;br /&gt;
         sum_ints = lambda do |a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
2.  sum_of_squares: this function takes in two arguments, a and b, and returns the sum of the square of the integers from a to b.&lt;br /&gt;
&lt;br /&gt;
         sum_of_squares = lambda do |a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += n**2 } ;s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
3.  sum_of_powers_of_2: this function takes in two arguments, a and b, and returns the sum of the powers of two from a to b.&lt;br /&gt;
&lt;br /&gt;
         sum_of_powers_of_2 = lambda do |a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += 2**n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
We can note that each function follows a similar pattern.  They all use the summation function to sum a group of numbers from a to b.  We can pull this functionality out and write one main function that all three functions will use:&lt;br /&gt;
&lt;br /&gt;
         sum = lambda do |f,a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += f.(n) } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
This function takes in three arguments: f, a, and b.  The argument 'f' is the function we want to use on each element in the summation.  The arguments 'a' and 'b' are used to define the lower and upper bound respectively.  Now we can create the curried form of the 'sum' function.&lt;br /&gt;
&lt;br /&gt;
         currying = sum.curry&lt;br /&gt;
 &lt;br /&gt;
With this currying function we can create the three partial functions defined above.&lt;br /&gt;
         sum_ints = currying.(lambda{|x| x})&lt;br /&gt;
         sum_of_squares = currying.(lambda{|x| x**2})&lt;br /&gt;
         sum_of_powers_of_2 = currying.(lambda{|x| 2**x})&lt;br /&gt;
&lt;br /&gt;
==Haskell==&lt;br /&gt;
In Haskell, all functions are in curried form; in other words, every function takes exactly one argument.  This is mostly hidden from the user.  For instance, we can divide two numbers, 8 and 4, by calling 'div 8 4'.  The function does not take in two numbers and return the result, as one might think.  In Haskell, the 'div' function (specialized to Int here) is declared as:&lt;br /&gt;
&lt;br /&gt;
         div :: Int -&amp;gt; Int -&amp;gt; Int&lt;br /&gt;
&lt;br /&gt;
The three Ints above represent the type of the first argument, the second argument, and the result, respectively.  The division is computed in two steps.  First, 'div' is applied to the first argument, 8, and returns a function of type 'Int -&amp;gt; Int'.  This new function is then applied to the second argument, 4, and returns the result, 2 [http://www.haskell.org/haskellwiki/Currying].&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] http://c2.com/cgi/wiki?CurryingSchonfinkelling&lt;br /&gt;
&lt;br /&gt;
[2] http://www.khelll.com/blog/ruby/ruby-currying/&lt;br /&gt;
&lt;br /&gt;
[3] http://www.haskell.org/haskellwiki/Currying&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=50038</id>
		<title>CSC/ECE 517 Fall 2011/ch1 2a sd</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=50038"/>
		<updated>2011-09-21T19:49:35Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Currying=&lt;br /&gt;
Currying is a technique used in Computer Science and Mathematics to transform a function with n parameters into a function with one parameter.  This new function would return a function with n-1 parameters.  A function can either be thought of as having n arguments or a function with 1 argument that maps to another function with n-1 arguments.  For example, we can define a function as add(x, y) -&amp;gt; x + y.  This function takes in two parameters, x and y, and returns x + y.  If we wanted to evaluate add(3, 4) here is how we would do it:&lt;br /&gt;
&lt;br /&gt;
    add(3, 4) ↦ x + y&lt;br /&gt;
    = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The function is evaluated all in one step.  The curried form of this function would be: add(x) -&amp;gt; (y -&amp;gt; x + y).  The add function takes in one parameter, x, and returns another function taking in y.  The second function then returns the result x + y.  If we wanted to evaluate add(3, 4) in curried form, the following steps should be taken:&lt;br /&gt;
&lt;br /&gt;
    add(3) ↦ (y ↦ 3 + y)&lt;br /&gt;
    (y ↦ 3 + y)(4)&lt;br /&gt;
    = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The curried function is evaluated in two steps instead of one.  Also, x is treated like a constant in the second function.&lt;br /&gt;
&lt;br /&gt;
As shown in the given example, the curried and uncurried form of the add function produce the same result.  Another thing to note, a function can be curried but also a curried function can be uncurried.  To transform a curried function into an uncurried one, we just need to reverse the steps.&lt;br /&gt;
&lt;br /&gt;
A function can be defined as f(x, y, z) -&amp;gt; q, where it takes in three arguments, x, y, and z, and returns q.  To express this in curried form, the function is split into multiple functions of one argument each: f(x) -&amp;gt; (y -&amp;gt; (z -&amp;gt; q)).  The function f takes x as an argument and returns another function, which takes y as an argument.  That function returns yet another function, which takes z as an argument and returns q.&lt;br /&gt;
&lt;br /&gt;
Currying can be used in programming to simplify code and make it easier to read.  It reduces code repetition by combining the common parts of different functions into one.&lt;br /&gt;
&lt;br /&gt;
In contrast, there is also uncurrying: the technique of transforming a one-argument function that returns a function back into a function with many arguments.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Currying was first discovered by Moses Schönfinkel, a Russian logician, in 1924[http://c2.com/cgi/wiki?CurryingSchonfinkelling] and later re-discovered by Haskell Curry, an American mathematician and logician.  Although Schönfinkel was the first to discover it, the process was ultimately named after Curry.  For this reason, some people have suggested that a more appropriate name would be Schönfinkeling.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Currying in Math=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Currying vs. Partial Functions==&lt;br /&gt;
There is a small distinction between currying and partial functions.  A partial function simplifies another function by fixing one or more of its arguments to constant values.  In contrast, a curried function takes in one parameter and returns another function that accepts the next parameter.&lt;br /&gt;
&lt;br /&gt;
=Currying in Programming=&lt;br /&gt;
Currying is used in several programming languages.&lt;br /&gt;
==Ruby==&lt;br /&gt;
In Ruby 1.9, the curry method was added to Proc objects.  Previous versions of Ruby do not have a dedicated curry method.  Here is an example of using the curry method in Ruby [http://www.khelll.com/blog/ruby/ruby-currying/].&lt;br /&gt;
&lt;br /&gt;
Suppose we have 3 different functions: &lt;br /&gt;
&lt;br /&gt;
1.  sum_ints:  this function takes in two arguments, a and b, and determines the sum of all integers between a and b.&lt;br /&gt;
&lt;br /&gt;
             sum_ints = lambda do |a,b|&lt;br /&gt;
               s = 0 ; a.upto(b){|n| s += n } ; s &lt;br /&gt;
             end&lt;br /&gt;
&lt;br /&gt;
2.  sum_of_squares: this function takes in two arguments, a and b, and returns the sum of the squares of the integers from a to b.&lt;br /&gt;
&lt;br /&gt;
         sum_of_squares = lambda do |a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += n**2 } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
3.  sum_of_powers_of_2: this function takes in two arguments, a and b, and returns the sum of the powers of two from a to b.&lt;br /&gt;
&lt;br /&gt;
         sum_of_powers_of_2 = lambda do |a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += 2**n } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
We can note that each function follows a similar pattern: each sums a series of values from a to b.  We can pull this common functionality out and write one main function that all three functions will use:&lt;br /&gt;
&lt;br /&gt;
         sum = lambda do |f,a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += f.(n) } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
This function takes in three arguments: f, a, and b.  The argument 'f' is the function we want to use on each element in the summation.  The arguments 'a' and 'b' are used to define the lower and upper bound respectively.  Now we can create the curried form of the 'sum' function.&lt;br /&gt;
&lt;br /&gt;
         currying = sum.curry&lt;br /&gt;
 &lt;br /&gt;
With this curried function we can create the three partial functions defined above.&lt;br /&gt;
         sum_ints = currying.(lambda{|x| x})&lt;br /&gt;
         sum_of_squares = currying.(lambda{|x| x**2})&lt;br /&gt;
         sum_of_powers_of_2 = currying.(lambda{|x| 2**x})&lt;br /&gt;
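To round out the example, here is a self-contained sketch of the curried 'sum' and the three partial functions actually being invoked.  The expected results are ordinary summations (for example, 1+2+3+4 = 10).&lt;br /&gt;

```ruby
# Self-contained recap of the article's example, then invoking the results.
sum = lambda do |f, a, b|
  s = 0; a.upto(b) { |n| s += f.(n) }; s
end
currying = sum.curry

sum_ints           = currying.(lambda { |x| x })
sum_of_squares     = currying.(lambda { |x| x**2 })
sum_of_powers_of_2 = currying.(lambda { |x| 2**x })

puts sum_ints.(1, 4)            # => 10  (1+2+3+4)
puts sum_of_squares.(1, 3)      # => 14  (1+4+9)
puts sum_of_powers_of_2.(1, 3)  # => 14  (2+4+8)
```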
&lt;br /&gt;
==Haskell==&lt;br /&gt;
In Haskell, all functions are in curried form.  In other words, every function takes just one argument.  This detail is largely hidden from the user.  For instance, we can divide two numbers, 8 and 4, by calling 'div 8 4'.  The function does not take in two numbers and return the result, as one might think.  In Haskell, the 'div' function is defined as:&lt;br /&gt;
&lt;br /&gt;
                   div :: Int -&amp;gt; Int -&amp;gt; Int&lt;br /&gt;
&lt;br /&gt;
The three 'Int's above represent the first argument, the second argument, and the result, respectively.  The division is computed in two steps.  First, 'div' is applied to the first argument, 8, and returns a function of type 'Int -&amp;gt; Int'.  This new function is then applied to the second argument, 4, and returns the result, 2 [http://www.haskell.org/haskellwiki/Currying].&lt;br /&gt;
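For comparison, the same two-step application of 'div' can be mimicked in Ruby with a curried proc.  This is only a sketch of the idea; Haskell performs this currying implicitly for every function.&lt;br /&gt;

```ruby
# Mimicking Haskell's curried `div` in Ruby: applying the first argument
# yields an intermediate one-argument function.
div = lambda { |a, b| a / b }.curry

partial = div.(8)    # an intermediate function, like Haskell's "Int -> Int"
puts partial.(4)     # => 2
puts div.(8).(4)     # => 2
```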
&lt;br /&gt;
=References=&lt;br /&gt;
[1] http://c2.com/cgi/wiki?CurryingSchonfinkelling&lt;br /&gt;
&lt;br /&gt;
[2] http://www.khelll.com/blog/ruby/ruby-currying/&lt;br /&gt;
&lt;br /&gt;
[3] http://www.haskell.org/haskellwiki/Currying&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=50037</id>
		<title>CSC/ECE 517 Fall 2011/ch1 2a sd</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=50037"/>
		<updated>2011-09-21T19:48:50Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Currying=&lt;br /&gt;
Currying is a technique used in Computer Science and Mathematics to transform a function with n parameters into a function with one parameter.  This new function returns a function with n-1 parameters.  In other words, a function of n arguments can equivalently be thought of as a function with 1 argument that maps to another function with the remaining n-1 arguments.  For example, we can define a function as add(x, y) -&amp;gt; x + y.  This function takes in two parameters, x and y, and returns x + y.  If we wanted to evaluate add(3, 4), here is how we would do it:&lt;br /&gt;
&lt;br /&gt;
    add(3, 4) ↦ x + y, with x = 3 and y = 4&lt;br /&gt;
    = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The function is evaluated all in one step.  The curried form of this function would be: add(x) -&amp;gt; (y -&amp;gt; x + y).  The add function takes in one parameter, x, and returns another function taking in y.  The second function then returns the result x + y.  If we wanted to evaluate add(3, 4) in curried form, the following steps should be taken:&lt;br /&gt;
&lt;br /&gt;
    add(3) ↦ (y ↦ 3 + y)&lt;br /&gt;
    (y ↦ 3 + y)(4)&lt;br /&gt;
    = 3 + 4 = 7&lt;br /&gt;
&lt;br /&gt;
The curried function is evaluated in two steps instead of one.  Also, x is treated like a constant in the second function.&lt;br /&gt;
&lt;br /&gt;
As shown in the given example, the curried and uncurried forms of the add function produce the same result.  Note also that just as a function can be curried, a curried function can be uncurried; to transform a curried function into an uncurried one, we just reverse the steps.&lt;br /&gt;
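The curried add function can be written directly in Ruby 1.9 using Proc#curry, mirroring the two-step evaluation above.&lt;br /&gt;

```ruby
# The add example in curried form, using Ruby's Proc#curry.
add = lambda { |x, y| x + y }
curried_add = add.curry

add_three = curried_add.(3)   # x is fixed, "treated like a constant"
puts add_three.(4)            # => 7
puts curried_add.(3).(4)      # => 7
```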
&lt;br /&gt;
A function can be defined as f(x, y, z) -&amp;gt; q, where it takes in three arguments x, y, and z and returns q.  To express this in curried form, the function would need to be split into multiple functions, each with only one argument.  The new function can be written in the form f(x) -&amp;gt; (y -&amp;gt; (z -&amp;gt; q)).  The function f takes in x as an argument and returns another function which takes in y as an argument.  That function then returns another function which takes in z as an argument and returns q.&lt;br /&gt;
In short, a function f(x, y, z) -&amp;gt; q can be expressed in its curried form as f(x) -&amp;gt; (y -&amp;gt; (z -&amp;gt; q)).&lt;br /&gt;
&lt;br /&gt;
Currying can be used in programming to simplify code and make it easier to read.  It reduces code repetition by combining the common parts of different functions into one.  &lt;br /&gt;
&lt;br /&gt;
In contrast, there is also uncurrying: the technique of transforming a curried, one-argument function back into a function with many arguments.&lt;br /&gt;
&lt;br /&gt;
The underlying notion is that a function of n arguments can be thought of as a function of 1 argument that maps to a function of n−1 arguments.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Currying was first discovered by Moses Schönfinkel, a Russian logician, in 1924[http://c2.com/cgi/wiki?CurryingSchonfinkelling] and later re-discovered by Haskell Curry, an American mathematician and logician.  Although Schönfinkel was the first to discover it, the process was ultimately named after Curry.  For this reason, some people have suggested that a more appropriate name would be Schönfinkeling.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=Currying in Math=&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Currying vs. Partial Functions==&lt;br /&gt;
There is a small distinction between currying and partial functions.  A partial function simplifies another function by making one or more of its arguments constant.  In contrast, a curried function takes in one parameter and returns another function that accepts the next parameter, continuing until all parameters have been supplied.&lt;br /&gt;
&lt;br /&gt;
=Currying in Programming=&lt;br /&gt;
Currying is used in several programming languages.&lt;br /&gt;
==Ruby==&lt;br /&gt;
In Ruby 1.9, the curry method was added to Proc objects.  Previous versions of Ruby do not have a dedicated curry method.  Here is an example of using the curry method in Ruby [http://www.khelll.com/blog/ruby/ruby-currying/].&lt;br /&gt;
&lt;br /&gt;
Suppose we have 3 different functions: &lt;br /&gt;
&lt;br /&gt;
1.  sum_ints:  this function takes in two arguments, a and b, and determines the sum of all integers between a and b.&lt;br /&gt;
&lt;br /&gt;
             sum_ints = lambda do |a,b|&lt;br /&gt;
               s = 0 ; a.upto(b){|n| s += n } ; s &lt;br /&gt;
             end&lt;br /&gt;
&lt;br /&gt;
2.  sum_of_squares: this function takes in two arguments, a and b, and returns the sum of the squares of the integers from a to b.&lt;br /&gt;
&lt;br /&gt;
             sum_of_squares = lambda do |a,b|&lt;br /&gt;
               s = 0 ; a.upto(b){|n| s += n**2 } ; s &lt;br /&gt;
             end&lt;br /&gt;
&lt;br /&gt;
3.  sum_of_powers_of_2: this function takes in two arguments, a and b, and returns the sum of the powers of two from a to b.&lt;br /&gt;
&lt;br /&gt;
             sum_of_powers_of_2 = lambda do |a,b|&lt;br /&gt;
               s = 0 ; a.upto(b){|n| s += 2**n } ; s &lt;br /&gt;
             end&lt;br /&gt;
&lt;br /&gt;
We can note that each function follows a similar pattern: each sums a series of values from a to b.  We can pull this common functionality out and write one main function that all three functions will use:&lt;br /&gt;
&lt;br /&gt;
         sum = lambda do |f,a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += f.(n) } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
This function takes in three arguments: f, a, and b.  The argument 'f' is the function we want to use on each element in the summation.  The arguments 'a' and 'b' are used to define the lower and upper bound respectively.  Now we can create the curried form of the 'sum' function.&lt;br /&gt;
&lt;br /&gt;
         currying = sum.curry&lt;br /&gt;
 &lt;br /&gt;
With this curried function we can create the three partial functions defined above.&lt;br /&gt;
         sum_ints = currying.(lambda{|x| x})&lt;br /&gt;
         sum_of_squares = currying.(lambda{|x| x**2})&lt;br /&gt;
         sum_of_powers_of_2 = currying.(lambda{|x| 2**x})&lt;br /&gt;
&lt;br /&gt;
==Haskell==&lt;br /&gt;
In Haskell, all functions are in curried form.  In other words, every function takes just one argument.  This detail is largely hidden from the user.  For instance, we can divide two numbers, 8 and 4, by calling 'div 8 4'.  The function does not take in two numbers and return the result, as one might think.  In Haskell, the 'div' function is defined as:&lt;br /&gt;
&lt;br /&gt;
                   div :: Int -&amp;gt; Int -&amp;gt; Int&lt;br /&gt;
&lt;br /&gt;
The three 'Int's above represent the first argument, the second argument, and the result, respectively.  The division is computed in two steps.  First, 'div' is applied to the first argument, 8, and returns a function of type 'Int -&amp;gt; Int'.  This new function is then applied to the second argument, 4, and returns the result, 2 [http://www.haskell.org/haskellwiki/Currying].&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] http://c2.com/cgi/wiki?CurryingSchonfinkelling&lt;br /&gt;
&lt;br /&gt;
[2] http://www.khelll.com/blog/ruby/ruby-currying/&lt;br /&gt;
&lt;br /&gt;
[3] http://www.haskell.org/haskellwiki/Currying&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=48861</id>
		<title>CSC/ECE 517 Fall 2011/ch1 2a sd</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=48861"/>
		<updated>2011-09-16T22:39:04Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
Currying is a technique used in Computer Science and Mathematics to transform a function with multiple parameters into multiple functions, each with one parameter.  In other words, a function of n arguments can equivalently be thought of as a function with 1 argument that maps to another function with the remaining n-1 arguments.&lt;br /&gt;
&lt;br /&gt;
Currying can be used in programming to simplify code and make it easier to read.  It reduces code repetition by combining the common parts of different functions into one.  &lt;br /&gt;
&lt;br /&gt;
In contrast, there is also uncurrying: the technique of transforming a curried, one-argument function back into a function with many arguments.&lt;br /&gt;
&lt;br /&gt;
The underlying notion is that a function of n arguments can be thought of as a function of 1 argument that maps to a function of n−1 arguments.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Currying was first discovered by Moses Schönfinkel, a Russian logician, in 1924[http://c2.com/cgi/wiki?CurryingSchonfinkelling] and later re-discovered by Haskell Curry, an American mathematician and logician.  Although Schönfinkel was the first to discover it, the process was ultimately named after Curry.  For this reason, some people have suggested that a more appropriate name would be Schönfinkeling.&lt;br /&gt;
&lt;br /&gt;
==Why is it Important?==&lt;br /&gt;
&lt;br /&gt;
=Mathematical Definition=&lt;br /&gt;
A function can be defined as f(x, y, z) -&amp;gt; q, where it takes in three arguments x, y, and z and returns q.  To express this in curried form, the function would need to be split into multiple functions, each with only one argument.  The new function can be written in the form f(x) -&amp;gt; (y -&amp;gt; (z -&amp;gt; q)).  The function f takes in x as an argument and returns another function which takes in y as an argument.  That function then returns another function which takes in z as an argument and returns q.&lt;br /&gt;
In short, a function f(x, y, z) -&amp;gt; q can be expressed in its curried form as f(x) -&amp;gt; (y -&amp;gt; (z -&amp;gt; q)).&lt;br /&gt;
==Currying vs. Partial Functions==&lt;br /&gt;
There is a small distinction between currying and partial functions.  A partial function simplifies another function by making one or more of its arguments constant.  In contrast, a curried function takes in one parameter and returns another function that accepts the next parameter, continuing until all parameters have been supplied.&lt;br /&gt;
&lt;br /&gt;
=Currying in Programming=&lt;br /&gt;
Currying is used in several programming languages.&lt;br /&gt;
==Ruby==&lt;br /&gt;
In Ruby 1.9, the curry method was added to Proc objects.  Previous versions of Ruby do not have a dedicated curry method.  Here is an example of using the curry method in Ruby [http://www.khelll.com/blog/ruby/ruby-currying/].&lt;br /&gt;
&lt;br /&gt;
Suppose we have 3 different functions: &lt;br /&gt;
&lt;br /&gt;
1.  sum_ints:  this function takes in two arguments, a and b, and determines the sum of all integers between a and b.&lt;br /&gt;
&lt;br /&gt;
             sum_ints = lambda do |a,b|&lt;br /&gt;
               s = 0 ; a.upto(b){|n| s += n } ; s &lt;br /&gt;
             end&lt;br /&gt;
&lt;br /&gt;
2.  sum_of_squares: this function takes in two arguments, a and b, and returns the sum of the squares of the integers from a to b.&lt;br /&gt;
&lt;br /&gt;
             sum_of_squares = lambda do |a,b|&lt;br /&gt;
               s = 0 ; a.upto(b){|n| s += n**2 } ; s &lt;br /&gt;
             end&lt;br /&gt;
&lt;br /&gt;
3.  sum_of_powers_of_2: this function takes in two arguments, a and b, and returns the sum of the powers of two from a to b.&lt;br /&gt;
&lt;br /&gt;
             sum_of_powers_of_2 = lambda do |a,b|&lt;br /&gt;
               s = 0 ; a.upto(b){|n| s += 2**n } ; s &lt;br /&gt;
             end&lt;br /&gt;
&lt;br /&gt;
We can note that each function follows a similar pattern: each sums a series of values from a to b.  We can pull this common functionality out and write one main function that all three functions will use:&lt;br /&gt;
&lt;br /&gt;
         sum = lambda do |f,a,b|&lt;br /&gt;
           s = 0 ; a.upto(b){|n| s += f.(n) } ; s &lt;br /&gt;
         end&lt;br /&gt;
&lt;br /&gt;
This function takes in three arguments: f, a, and b.  The argument 'f' is the function we want to use on each element in the summation.  The arguments 'a' and 'b' are used to define the lower and upper bound respectively.  Now we can create the curried form of the 'sum' function.&lt;br /&gt;
&lt;br /&gt;
         currying = sum.curry&lt;br /&gt;
 &lt;br /&gt;
With this curried function we can create the three partial functions defined above.&lt;br /&gt;
         sum_ints = currying.(lambda{|x| x})&lt;br /&gt;
         sum_of_squares = currying.(lambda{|x| x**2})&lt;br /&gt;
         sum_of_powers_of_2 = currying.(lambda{|x| 2**x})&lt;br /&gt;
&lt;br /&gt;
==Haskell==&lt;br /&gt;
In Haskell, all functions use currying.  In other words, every function takes just one argument.  This detail is largely hidden from the user.  For instance, we can divide two numbers, 8 and 4, by calling 'div 8 4'.  The function does not take in two numbers and return the result, as one might think.  In Haskell, the 'div' function is defined as:&lt;br /&gt;
&lt;br /&gt;
                   div :: Int -&amp;gt; Int -&amp;gt; Int&lt;br /&gt;
&lt;br /&gt;
The three 'Int's above represent the first argument, the second argument, and the result, respectively.  The division is computed in two steps.  First, 'div' is applied to the first argument, 8, and returns a function of type 'Int -&amp;gt; Int'.  This new function is then applied to the second argument, 4, and returns the result, 2 [http://www.haskell.org/haskellwiki/Currying].&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] http://c2.com/cgi/wiki?CurryingSchonfinkelling&lt;br /&gt;
&lt;br /&gt;
[2] http://www.khelll.com/blog/ruby/ruby-currying/&lt;br /&gt;
&lt;br /&gt;
[3] http://www.haskell.org/haskellwiki/Currying&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=48849</id>
		<title>CSC/ECE 517 Fall 2011/ch1 2a sd</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_517_Fall_2011/ch1_2a_sd&amp;diff=48849"/>
		<updated>2011-09-16T20:56:44Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: Created page with &amp;quot;=Introduction= Currying is a technique used in Computer Science and Mathematics to transform a function with multiple parameters into multiple functions with one parameter.  A fu...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Introduction=&lt;br /&gt;
Currying is a technique used in Computer Science and Mathematics to transform a function with multiple parameters into multiple functions, each with one parameter.  In other words, a function of n arguments can equivalently be thought of as a function with 1 argument that maps to another function with the remaining n-1 arguments.  &lt;br /&gt;
&lt;br /&gt;
In contrast, there is also uncurrying: the technique of transforming a curried, one-argument function back into a function with many arguments.&lt;br /&gt;
&lt;br /&gt;
The underlying notion is that a function of n arguments can be thought of as a function of 1 argument that maps to a function of n−1 arguments.&lt;br /&gt;
&lt;br /&gt;
==Background==&lt;br /&gt;
Currying was first discovered by Moses Schönfinkel, a Russian logician, in 1924[http://c2.com/cgi/wiki?CurryingSchonfinkelling] and later re-discovered by Haskell Curry, an American mathematician and logician.  Although Schönfinkel was the first to discover it, the process was ultimately named after Curry.  For this reason, some people have suggested that a more appropriate name would be Schönfinkeling.&lt;br /&gt;
&lt;br /&gt;
==Why is it Important?==&lt;br /&gt;
&lt;br /&gt;
=Mathematical Definition=&lt;br /&gt;
A function can be defined as f(x, y, z) -&amp;gt; q, where it takes in three arguments x, y, and z and returns q.  To express this in curried form, the function would need to be split into multiple functions, each with only one argument.  The new function can be written in the form f(x) -&amp;gt; (y -&amp;gt; (z -&amp;gt; q)).  The function f takes in x as an argument and returns another function which takes in y as an argument.  That function then returns another function which takes in z as an argument and returns q.&lt;br /&gt;
In short, a function f(x, y, z) -&amp;gt; q can be expressed in its curried form as f(x) -&amp;gt; (y -&amp;gt; (z -&amp;gt; q)).&lt;br /&gt;
==Currying vs. Partial Functions==&lt;br /&gt;
There is a small distinction between currying and partial functions.  A partial function simplifies another function by making one or more of its arguments constant.  In contrast, a curried function takes in one parameter and returns another function that accepts the next parameter, continuing until all parameters have been supplied.&lt;br /&gt;
&lt;br /&gt;
=Currying in Programming=&lt;br /&gt;
Currying is used in several programming languages.&lt;br /&gt;
==Ruby==&lt;br /&gt;
&lt;br /&gt;
==Haskell==&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
[1] http://c2.com/cgi/wiki?CurryingSchonfinkelling&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45363</id>
		<title>CSC/ECE 506 Spring 2011/ch11 DJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45363"/>
		<updated>2011-04-25T23:54:48Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Supplemental to Chapter 11: SCI Cache Coherence''' ==&lt;br /&gt;
&lt;br /&gt;
This is intended to be a supplement to Chapter 11 of [[#References | Solihin ]], which deals with Distributed Shared Memory (DSM) in multiprocessors.  In the book, the basic approaches to DSM are discussed, and a directory-based cache coherence protocol is introduced.  This protocol is in contrast to the bus-based cache coherence protocols that were introduced earlier.  This supplement focuses on a specific directory-based cache coherence protocol called the Scalable Coherent Interface.&lt;br /&gt;
&lt;br /&gt;
== Introduction: What is the Scalable Coherent Interface? ==&lt;br /&gt;
&lt;br /&gt;
The Scalable Coherent Interface (SCI) is an ANSI/IEEE protocol that provides fast point-to-point connections for high-performance multiprocessor systems [[#References | (&amp;quot;IEEE Standards Association&amp;quot;) ]].  It works with both shared memory and message passing and was approved in 1992.  In the 1980s, as processors grew faster and faster, they began to outpace the speed of the bus.  SCI was created to provide a non-bus-based interconnection protocol.  It was intended to be scalable, meaning it can work with few or many processors.&lt;br /&gt;
&lt;br /&gt;
== SCI Cache Coherence ==&lt;br /&gt;
SCI utilizes a directory-based cache coherence protocol similar to what is described in Chapter 11 of [[#References | Solihin (2008)]]. Instead of cache coherence being handled by caches snooping for transactions on a bus, a directory manages the coherence of each individual cache by creating a doubly-linked list of the caches that are sharing a particular block of memory.  The directory maintains a state for the block in memory and a pointer to the first cache on the shared list.  The first cache in the list is the cache of the processor that most recently accessed the block.  When the &amp;quot;head&amp;quot; of the list modifies the block of memory, it has to notify main memory that it now has the only clean copy of the data.  This head cache must also notify the next cache in the shared list so that that cache can invalidate its copy.  The information then propagates from there.&lt;br /&gt;
&lt;br /&gt;
The structure for SCI is fairly complicated and can lead to a number of race conditions.&lt;br /&gt;
&lt;br /&gt;
== Coherence Race Conditions ==&lt;br /&gt;
There are many different instances in the SCI cache coherence protocol where race conditions are handled in such a way that they are a non-issue.  For instance, in SCI only the head node in the linked list of sharers is able to write to the block.  All sharers, including the head, can read.  Suppose we have two nodes, N1 and N2, both sharing a cached block.  N1 is the head of the linked list while N2 is the tail.  If N2 wants to write a value to the cached block, the node must do it in a series of steps.  First, N2 detaches itself from the linked list by updating N1's NEXT pointer.  Since N2 is no longer a sharer, the node must invalidate its cached copy to prevent a race condition (N1 writes to the block while N2 holds an out-of-date version of the block).  Then N2 has to re-request the block from the directory with intent to write data.  The directory then processes N2's request, and N2 swaps out the current head's pointer (N1) with its own, making it the new head.  N2 then uses the pointer to N1 to invalidate all sharers.  In this case N1 is the only sharer [[#References | (Gustavson, and Li 56) ]].  This allows the SCI protocol to keep write exclusivity.  Write exclusivity means only one write can be performed to a given block at a time.  Without write exclusivity, each node sharing the block would be able to write to the block, allowing multiple writes to happen simultaneously.  If each cache wrote simultaneously, every cache sharing the block would get invalidated, including the head.  Also, the block in memory might not possess the correct value.  In order for this to be implemented, there need to be special states for the head, middle, and tail of the linked list.  There also need to be states that determine the read/write capabilities of the nodes.  Table 1 shows how this scenario would look using a simplistic version of SCI discussed later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;2&amp;quot;&lt;br /&gt;
!colspan=&amp;quot;7&amp;quot;| Table 1: Maintaining Write Exclusivity&lt;br /&gt;
|----&lt;br /&gt;
!Action&lt;br /&gt;
!State N1&lt;br /&gt;
!State N2&lt;br /&gt;
!State Directory&lt;br /&gt;
!Head Node&lt;br /&gt;
!Comments&lt;br /&gt;
|----&lt;br /&gt;
| - &lt;br /&gt;
|Sh&lt;br /&gt;
|St&lt;br /&gt;
|S&lt;br /&gt;
|N1&lt;br /&gt;
|Initial State&lt;br /&gt;
|----&lt;br /&gt;
|W2&lt;br /&gt;
|Sh&lt;br /&gt;
|I&lt;br /&gt;
|S&lt;br /&gt;
|N1&lt;br /&gt;
|Node 2 wants to write&lt;br /&gt;
|----&lt;br /&gt;
|&lt;br /&gt;
|I&lt;br /&gt;
|M&lt;br /&gt;
|EM&lt;br /&gt;
|N2&lt;br /&gt;
|N2 becomes new head invalidating N1&lt;br /&gt;
|----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another scenario described in 11.4.1 of the Solihin textbook is when a cache is in state M and the directory is in state EM for a given block.  If the cache holding the block flushes the value and the value does not make it to the directory before a read request is made, then there can be a potential race condition.  The race condition is handled by the use of pending states.  When the cache flushes a block, the line transitions from state M to a pending state.  The pending state transitions into the invalid state only when the directory has acknowledged receiving the flush.  This allows the cache to stall any read request made to the block until the block enters the steady state of I [[#References | (Solihin)]].&lt;br /&gt;
&lt;br /&gt;
== SCI States ==&lt;br /&gt;
&lt;br /&gt;
=== Directory States ===&lt;br /&gt;
In SCI, the directory and each cache maintain a series of states. The directory can be in one of three states:&lt;br /&gt;
&lt;br /&gt;
# HOME or UNCACHED (U) – The only clean copy of the memory block is in main memory&lt;br /&gt;
# FRESH or SHARED (S) – The memory block is shared, but the copy in main memory is updated&lt;br /&gt;
# GONE or EXCLUSIVE/MODIFIED (EM) – The memory block in main memory is not updated, but a cache has the updated memory block&lt;br /&gt;
&lt;br /&gt;
If the directory has the block of memory in the GONE or EM state, then the directory must forward any incoming read/write requests for the block to the last cache that modified the block.  That cache is responsible for updating main memory and the cache that requested the data. This is why the directory must maintain a pointer to the first cache in the cache list. &lt;br /&gt;
&lt;br /&gt;
=== Cache States : Complicated ===&lt;br /&gt;
For a cache, there are 29 stable states described in the SCI standard, along with many pending states.  Each state consists of two parts.  The first part tells where the cache is in the linked list. There are four possibilities [[#References | (Cutler (1999))]]:&lt;br /&gt;
&lt;br /&gt;
# ONLY – if the cache is the only cache holding the memory block&lt;br /&gt;
# HEAD – if the cache is the most recent reader/writer of the block and so is the first in the list&lt;br /&gt;
# TAIL – if the cache is the least recent reader/writer of the block and so is the last in the list&lt;br /&gt;
# MID – if the cache is not the first or last in the list&lt;br /&gt;
&lt;br /&gt;
In addition to these four location designations for the states, there are other states to describe the actual state of the memory on the cache.  Some of these include:&lt;br /&gt;
&lt;br /&gt;
# CLEAN – if the data in the memory block has not been modified, is the same as main memory, but can be modified.&lt;br /&gt;
# DIRTY – if the data in the memory block has been modified, is different from main memory, and can be written to again if needed.&lt;br /&gt;
# FRESH – if the data in the memory block has not been modified, is the same as main memory, but cannot be modified without notifying main memory.&lt;br /&gt;
# COPY – if the data in the memory has not been modified, is the same as main memory, but cannot be modified, only read.&lt;br /&gt;
&lt;br /&gt;
Each designation for the location can match with a designation for the state which creates additional states like:&lt;br /&gt;
&lt;br /&gt;
* ONLY_DIRTY - if the cache is the only cache holding the memory block, and the cache has modified the memory block so it is different from main memory&lt;br /&gt;
* MID_FRESH - if the cache is neither the first nor last cache in the list, and the data it has is the same as what is in main memory.&lt;br /&gt;
&lt;br /&gt;
As implied in the [[#Coherence Race Conditions | Race Conditions ]] section, some states are impossible, like: MID_DIRTY or TAIL_CLEAN.&lt;br /&gt;
&lt;br /&gt;
=== Cache States : Simplified ===&lt;br /&gt;
A simpler way to envision the SCI coherence protocol states is to limit the system to just two processors and expand the [http://en.wikipedia.org/wiki/MESI_protocol MESI protocol] states.  This results in a more compact state diagram, which still gives a general sense of how the protocol causes transitions.  It also illustrates how the directory states influence the cache states. In this case, we maintain the three states in the directory (U, S, EM) and the caches have the following states:&lt;br /&gt;
&lt;br /&gt;
# Invalid (I) – if the block in the cache's memory is invalid.&lt;br /&gt;
# Modified (M) – if the block in the cache's memory has been modified and no longer matches the block in main memory.&lt;br /&gt;
# Exclusive (E) – if the cache has the only copy of the memory block and it matches the block in main memory.&lt;br /&gt;
# Shared-Head (Sh) – if the block in the cache's memory is shared with the other processor's cache, but this processor was the last to read the memory.&lt;br /&gt;
# Shared-Tail (St) – if the block in the cache's memory is shared with the other processor's cache and the other processor was the last to read the memory.&lt;br /&gt;
&lt;br /&gt;
To further simplify this scenario, only the following operations are possible:&lt;br /&gt;
&lt;br /&gt;
# Read(O/S) – a read of the data by either the processor attached to the cache (S for Self) or the other processor (O).&lt;br /&gt;
# Write(O/S) – a write of the memory block by either the processor attached to the cache (S) or the other processor (O).&lt;br /&gt;
 &lt;br /&gt;
As mentioned in the [[#Coherence Race Conditions | Race Conditions ]] section, for a processor to perform a write, it must be at the head of the list (the Sh state in this scenario).&lt;br /&gt;
&lt;br /&gt;
The following state diagram illustrates the transitions:&lt;br /&gt;
&lt;br /&gt;
[[Image:SCIStatesC.png]]&lt;br /&gt;
&lt;br /&gt;
Figure 1: Simplified cache state diagram from SCI&lt;br /&gt;
&lt;br /&gt;
Initially, the cache is in the &amp;quot;I&amp;quot; state and the directory is in the &amp;quot;U&amp;quot; state.  If processor 1 (P1) makes a read, P1's cache moves from state I to E and the directory moves from U to EM.  The directory records that P1 is the head of the sharing list.  At this point, processor 2 (P2) is in state I.  If P2 makes a read while the directory is in state EM, the directory notifies P1 that P2 is making a read; P1 transitions from E to St and sends the data to P2.  P2 transitions from I to Sh and creates a pointer to P1 to note that P1 is the next cache on the sharing list. The directory transitions from EM to S and updates its pointer from P1 to P2 to note that P2 is now the head of the sharing list.&lt;br /&gt;
&lt;br /&gt;
As you can see from the diagram, a Write(S) can only occur when a processor is in the &amp;quot;Sh&amp;quot; state.  In the actual SCI protocol, a write can occur when the cache is in the HEAD state combined with CLEAN/FRESH/DIRTY or when the cache is in the ONLY state.  In this simplified scenario, these states are combined into one, the Sh state.&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
The SCI protocol is a directory-based cache coherence protocol for Distributed Shared Memory, similar to the one described in [[#References | Solihin ]].  It is an extensive and complicated strategy that maintains coherence as the system scales to many processors.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* Culler, David E. and Jaswinder Pal Singh, ''Parallel Computer Architecture,'' Morgan Kaufmann Publishers, Inc, 1999.&lt;br /&gt;
* Gustavson, David, and Qiang Li. &amp;quot;The Scalable Coherent Interface (SCI).&amp;quot; IEEE Communications Magazine 1996: 56. Web. 25 Apr 2011. &amp;lt;http://www.cs.utexas.edu/users/dburger/teaching/cs395t-s08/papers/9_sci.pdf&amp;gt;.&lt;br /&gt;
* &amp;quot;1596-1992 - IEEE Standard for Scalable Coherent Interface (SCI).&amp;quot; IEEE Standards Association. N.p., n.d. Web. &amp;lt;http://standards.ieee.org/findstds/standard/1596-1992.html&amp;gt;.&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45185</id>
		<title>CSC/ECE 506 Spring 2011/ch11 DJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45185"/>
		<updated>2011-04-19T01:40:15Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Supplemental to Chapter 11: SCI Cache Coherence''' ==&lt;br /&gt;
&lt;br /&gt;
This is intended to be a supplement to Chapter 11 of [[#References | Solihin ]], which deals with Distributed Shared Memory (DSM) in Multiprocessors.  In the book, the basic approaches to DSM are discussed, and a directory based cache coherence protocol is introduced.  This protocol is in contrast to the bus-based cache coherence protocols that were introduced earlier.  This supplemental focuses on a specific directory based cache coherence protocol called the Scalable Coherent Interface.&lt;br /&gt;
&lt;br /&gt;
== Introduction: What is the Scalable Coherent Interface? ==&lt;br /&gt;
&lt;br /&gt;
The Scalable Coherent Interface (SCI) is an ANSI/IEEE protocol that provides fast point-to-point connections for high-performance multiprocessor systems [http://standards.ieee.org/findstds/standard/1596-1992.html].  It works with both shared memory and message passing and was approved in 1992. In the 1980s, as processors grew faster and faster, they began to outpace the speed of the bus.  SCI was created to provide a non-bus-based interconnection protocol.  It was intended to be scalable, meaning it works with few or many processors.&lt;br /&gt;
&lt;br /&gt;
== SCI Cache Coherence ==&lt;br /&gt;
SCI utilizes a directory-based cache coherence protocol similar to what is described in Chapter 11 of [[#References | Solihin (2008)]]. Instead of cache coherence being handled by caches snooping for transactions on a bus, a directory manages the coherence of each individual cache by creating a doubly-linked list of the caches sharing a particular block of memory.  The directory maintains a state for the block in memory and a pointer to the first cache on the sharing list.  The first cache in the list is the cache of the processor that most recently accessed the block.  When the &amp;quot;head&amp;quot; of the list modifies the block, it must notify main memory that it now holds the only clean copy of the data.  The head cache must also notify the next cache in the sharing list so that that cache can invalidate its copy.  The invalidation then propagates down the list.&lt;br /&gt;
&lt;br /&gt;
The structure for SCI is fairly complicated and can lead to a number of race conditions.&lt;br /&gt;
&lt;br /&gt;
== Coherence Race Conditions ==&lt;br /&gt;
There are many instances in the SCI cache coherence protocol where race conditions are handled in such a way that they become a non-issue.  For instance, in SCI only the head node in the linked list of sharers is able to write to the block.  All sharers, including the head, can read.  Suppose we have two nodes, N1 and N2, sharing a cached block.  N1 is the head of the linked list while N2 is the tail.  If N2 wants to write a value to the cached block, it must do so in a series of steps.  First, N2 detaches itself from the linked list by updating N1's NEXT pointer.  Since N2 is no longer a sharer, it must invalidate its cached copy to prevent a race condition (N1 writing to the block while N2 holds an out-of-date version).  Then N2 re-requests the block from the directory with intent to write.  The directory processes N2's request, and N2 swaps out the current head's pointer (N1) with its own, making itself the new head.  N2 then uses the pointer to N1 to invalidate all sharers; in this case N1 is the only sharer [http://www.cs.utexas.edu/users/dburger/teaching/cs395t-s08/papers/9_sci.pdf].  This allows the SCI protocol to maintain write exclusivity.  Write exclusivity means only one write can be performed to a given block at a time.  Without write exclusivity, every node sharing the block would be able to write to it, allowing multiple writes to happen simultaneously.  If each cache wrote simultaneously, every cache sharing the block would be invalidated, including the head, and the block in main memory might not hold the correct value.  To implement this, there must be special states for the head, middle, and tail of the linked list, as well as states that determine the read/write capabilities of the nodes.  Table 1 shows how this scenario would look using the simplified version of SCI discussed later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;2&amp;quot;&lt;br /&gt;
!colspan=&amp;quot;7&amp;quot;| Table 1: Maintaining Write Exclusivity&lt;br /&gt;
|----&lt;br /&gt;
!Action&lt;br /&gt;
!State N1&lt;br /&gt;
!State N2&lt;br /&gt;
!State Directory&lt;br /&gt;
!Head Node&lt;br /&gt;
!Comments&lt;br /&gt;
|----&lt;br /&gt;
| - &lt;br /&gt;
|Sh&lt;br /&gt;
|St&lt;br /&gt;
|S&lt;br /&gt;
|N1&lt;br /&gt;
|Initial State&lt;br /&gt;
|----&lt;br /&gt;
|W2&lt;br /&gt;
|Sh&lt;br /&gt;
|I&lt;br /&gt;
|S&lt;br /&gt;
|N1&lt;br /&gt;
|Node 2 wants to write&lt;br /&gt;
|----&lt;br /&gt;
|&lt;br /&gt;
|I&lt;br /&gt;
|M&lt;br /&gt;
|EM&lt;br /&gt;
|N2&lt;br /&gt;
|N2 becomes new head invalidating N1&lt;br /&gt;
|----&lt;br /&gt;
|}&lt;br /&gt;
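The write sequence in Table 1 can be sketched as a small script. This is a hypothetical illustration, not code from the SCI standard; it uses the simplified two-processor states (Sh, St, I, M for caches; S, EM for the directory) that are defined later on this page.

```python
# Hypothetical sketch of Table 1's write-exclusivity sequence using the
# simplified states: Sh/St = shared head/tail, I = invalid, M = modified;
# directory: S = shared, EM = exclusive/modified.

def n2_write(state):
    """Walk N2's write through the steps that preserve write exclusivity."""
    assert state == {"N1": "Sh", "N2": "St", "dir": "S", "head": "N1"}

    # Step 1: N2 detaches from the sharing list and invalidates its copy,
    # so it never reads a stale block while N1 is still writable.
    state["N2"] = "I"

    # Step 2: N2 re-requests the block from the directory with intent to
    # write; the directory makes N2 the new head.
    state["head"] = "N2"

    # Step 3: N2 invalidates the remaining sharer (N1) and performs its
    # write, leaving the directory in EM.
    state["N1"] = "I"
    state["N2"] = "M"
    state["dir"] = "EM"
    return state

final = n2_write({"N1": "Sh", "N2": "St", "dir": "S", "head": "N1"})
print(final)  # {'N1': 'I', 'N2': 'M', 'dir': 'EM', 'head': 'N2'}
```

At no point in the sequence are two caches simultaneously writable, which is the invariant the table illustrates.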
&lt;br /&gt;
&lt;br /&gt;
Another scenario, described in Section 11.4.1 of the Solihin textbook, arises when a cache is in state M and the directory is in state EM for a given block.  If the cache holding the block flushes the value and the value does not reach the directory before a read request is made, a race condition can occur.  This race condition is handled by the use of pending states.  When the cache flushes a block, the line transitions from state M to a pending state, which transitions into the invalid state only once the directory has acknowledged receiving the flush.  This allows the cache to stall any read request for the block until the block reaches the steady state I [[#References | (Solihin)]].&lt;br /&gt;
&lt;br /&gt;
== SCI States ==&lt;br /&gt;
&lt;br /&gt;
=== Directory States ===&lt;br /&gt;
In SCI, the directory and each cache maintain a series of states. The directory can be in one of three states:&lt;br /&gt;
&lt;br /&gt;
# HOME or UNCACHED (U) – The only clean copy of the memory block is in main memory&lt;br /&gt;
# FRESH or SHARED (S) – The memory block is shared, but the copy in main memory is updated&lt;br /&gt;
# GONE or EXCLUSIVE/MODIFIED (EM) – The memory block in main memory is not updated, but a cache has the updated memory block&lt;br /&gt;
&lt;br /&gt;
If the directory has the block of memory in the GONE or EM state, then the directory must forward any incoming read/write requests for the block to the last cache that modified it.  That cache is responsible for updating main memory and supplying the data to the requesting cache. This is why the directory must maintain a pointer to the first cache in the cache list.&lt;br /&gt;
&lt;br /&gt;
=== Cache States : Complicated ===&lt;br /&gt;
For a cache, the SCI standard describes 29 stable states plus many pending states.  Each state consists of two parts.  The first part tells where the cache is in the linked list. There are four possibilities [[#References | (Culler (1999))]]:&lt;br /&gt;
&lt;br /&gt;
# ONLY – if the cache is the only cache to have cached the memory block&lt;br /&gt;
# HEAD – if the cache is the most recent reader/writer of the block and so is the first in the list&lt;br /&gt;
# TAIL – if the cache is the least recent reader/writer of the block and so is the last in the list&lt;br /&gt;
# MID – if the cache is not the first or last in the list&lt;br /&gt;
&lt;br /&gt;
In addition to these four location designations, the second part of each state describes the condition of the cached data.  Some of these include:&lt;br /&gt;
&lt;br /&gt;
# CLEAN – if the data in the memory block has not been modified, is the same as main memory, but can be modified.&lt;br /&gt;
# DIRTY – if the data in the memory block has been modified, is different from main memory, and can be written to again if needed.&lt;br /&gt;
# FRESH – if the data in the memory block has not been modified, is the same as main memory, but cannot be modified without notifying main memory.&lt;br /&gt;
# COPY – if the data in the memory has not been modified, is the same as main memory, but cannot be modified, only read.&lt;br /&gt;
&lt;br /&gt;
Each location designation can pair with a data-state designation, producing combined states such as:&lt;br /&gt;
&lt;br /&gt;
* ONLY_DIRTY - if the cache is the only cache to hold the memory block, and it has modified the block so that it differs from main memory&lt;br /&gt;
* MID_FRESH - if the cache is neither the first nor last cache in the list, and the data it has is the same as what is in main memory.&lt;br /&gt;
&lt;br /&gt;
As implied in the [[#Coherence Race Conditions | Race Conditions ]] section, some states are impossible, such as MID_DIRTY or TAIL_CLEAN.&lt;br /&gt;
&lt;br /&gt;
=== Cache States : Simplified ===&lt;br /&gt;
A simpler way to envision the SCI coherence protocol states is to limit the system to just two processors and expand the [http://en.wikipedia.org/wiki/MESI_protocol MESI protocol] states.  This results in a more compact state diagram that still conveys the general sense of how the protocol causes transitions.  It also illustrates how the directory states influence the cache states. In this case, we maintain the three states in the directory (U, S, EM) and the caches have the following states:&lt;br /&gt;
&lt;br /&gt;
# Invalid (I) – if the block in the cache's memory is invalid.&lt;br /&gt;
# Modified (M) – if the block in the cache's memory has been modified and no longer matches the block in main memory.&lt;br /&gt;
# Exclusive (E) – if the cache has the only copy of the memory block and it matches the block in main memory.&lt;br /&gt;
# Shared-Head (Sh) – if the block in the cache's memory is shared with the other processor's cache, but this processor was the last to read the memory.&lt;br /&gt;
# Shared-Tail (St) – if the block in the cache's memory is shared with the other processor's cache and the other processor was the last to read the memory.&lt;br /&gt;
&lt;br /&gt;
To further simplify this scenario, only the following operations are possible:&lt;br /&gt;
&lt;br /&gt;
# Read(O/S) – a read of the data by either the processor attached to the cache (S for Self) or the other processor (O).&lt;br /&gt;
# Write(O/S) – a write of the memory block by either the processor attached to the cache (S) or the other processor (O).&lt;br /&gt;
 &lt;br /&gt;
As mentioned in the [[#Coherence Race Conditions | Race Conditions ]] section, for a processor to perform a write, it must be at the head of the list (the Sh state in this scenario).&lt;br /&gt;
&lt;br /&gt;
The following state diagram illustrates the transitions:&lt;br /&gt;
&lt;br /&gt;
[[Image:SCIStatesC.png]]&lt;br /&gt;
&lt;br /&gt;
Figure 1: Simplified cache state diagram from SCI&lt;br /&gt;
&lt;br /&gt;
Initially, the cache is in the &amp;quot;I&amp;quot; state and the directory is in the &amp;quot;U&amp;quot; state.  If processor 1 (P1) makes a read, P1's cache moves from state I to E and the directory moves from U to EM.  The directory records that P1 is the head of the sharing list.  At this point, processor 2 (P2) is in state I.  If P2 makes a read while the directory is in state EM, the directory notifies P1 that P2 is making a read; P1 transitions from E to St and sends the data to P2.  P2 transitions from I to Sh and creates a pointer to P1 to note that P1 is the next cache on the sharing list. The directory transitions from EM to S and updates its pointer from P1 to P2 to note that P2 is now the head of the sharing list.&lt;br /&gt;
&lt;br /&gt;
As you can see from the diagram, a Write(S) can only occur when a processor is in the &amp;quot;Sh&amp;quot; state.  In the actual SCI protocol, a write can occur when the cache is in the HEAD state combined with CLEAN/FRESH/DIRTY or when the cache is in the ONLY state.  In this simplified scenario, these states are combined into one, the Sh state.&lt;br /&gt;
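The read sequence narrated above can be encoded as a small transition table. This is an illustrative sketch covering only those few transitions, not the full diagram in Figure 1; the encoding as (cache state, directory state, operation) tuples is an assumption made for the example.

```python
# Hypothetical lookup table for a few transitions of the simplified
# two-processor protocol (cache states I/E/M/Sh/St, directory U/S/EM).
# Key: (cache state, directory state seen by the request, operation).

TRANSITIONS = {
    # P1 reads while nothing is cached: I -> E, directory U -> EM.
    ("I", "U", "Read(S)"): ("E", "EM"),
    # Another processor reads P1's exclusive copy: E -> St, dir EM -> S.
    ("E", "EM", "Read(O)"): ("St", "S"),
    # P2's own read of a block another cache holds exclusively: I -> Sh.
    ("I", "EM", "Read(S)"): ("Sh", "S"),
}

def step(cache, directory, op):
    """Return (next cache state, next directory state) for one request."""
    return TRANSITIONS[(cache, directory, op)]

# P1 reads first, then P2 reads the same block while the directory is EM.
p1, d = step("I", "U", "Read(S)")   # P1 becomes E, directory EM
p2, d = step("I", d, "Read(S)")     # P2 becomes Sh, directory S
p1, _ = step(p1, "EM", "Read(O)")   # P1 demoted to St by P2's read
print(p1, p2, d)  # St Sh S
```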
&lt;br /&gt;
== Summary ==&lt;br /&gt;
The SCI protocol is a directory-based cache coherence protocol for Distributed Shared Memory, similar to the one described in [[#References | Solihin ]].  It is an extensive and complicated strategy that maintains coherence as the system scales to many processors.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* Culler, David E. and Jaswinder Pal Singh, ''Parallel Computer Architecture,'' Morgan Kaufmann Publishers, Inc, 1999.&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45160</id>
		<title>CSC/ECE 506 Spring 2011/ch11 DJ</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch11_DJ&amp;diff=45160"/>
		<updated>2011-04-19T00:52:21Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== '''Supplemental to Chapter 11: SCI Cache Coherence''' ==&lt;br /&gt;
&lt;br /&gt;
This is intended to be a supplement to Chapter 11 of [[#References | Solihin ]], which deals with Distributed Shared Memory (DSM) in Multiprocessors.  In the book, the basic approaches to DSM are discussed, and a directory based cache coherence protocol is introduced.  This protocol is in contrast to the bus-based cache coherence protocols that were introduced earlier.  This supplemental focuses on a specific directory based cache coherence protocol called the Scalable Coherent Interface.&lt;br /&gt;
&lt;br /&gt;
== Introduction: What is the Scalable Coherent Interface? ==&lt;br /&gt;
&lt;br /&gt;
The Scalable Coherent Interface (SCI) is an ANSI/IEEE protocol that provides fast point-to-point connections for high-performance multiprocessor systems [http://standards.ieee.org/findstds/standard/1596-1992.html].  It works with both shared memory and message passing and was approved in 1992. In the 1980s, as processors grew faster and faster, they began to outpace the speed of the bus.  SCI was created to provide a non-bus-based interconnection protocol.  It was intended to be scalable, meaning it works with few or many processors.&lt;br /&gt;
&lt;br /&gt;
== SCI Cache Coherence ==&lt;br /&gt;
SCI utilizes a directory-based cache coherence protocol similar to what is described in Chapter 11 of [[#References | Solihin (2008)]]. Instead of cache coherence being handled by caches snooping for transactions on a bus, a directory manages the coherence of each individual cache by creating a doubly-linked list of the caches sharing a particular block of memory.  The directory maintains a state for the block in memory and a pointer to the first cache on the sharing list.  The first cache in the list is the cache of the processor that most recently accessed the block.  When the &amp;quot;head&amp;quot; of the list modifies the block, it must notify main memory that it now holds the only clean copy of the data.  The head cache must also notify the next cache in the sharing list so that that cache can invalidate its copy.  The invalidation then propagates down the list.&lt;br /&gt;
&lt;br /&gt;
The structure for SCI is fairly complicated and can lead to a number of race conditions.&lt;br /&gt;
&lt;br /&gt;
== Coherence Race Conditions ==&lt;br /&gt;
There are many instances in the SCI cache coherence protocol where race conditions are handled in such a way that they become a non-issue.  For instance, in SCI only the head node in the linked list of sharers is able to write to the block.  All sharers, including the head, can read.  Suppose we have two nodes, N1 and N2, sharing a cached block.  N1 is the head of the linked list while N2 is the tail.  If N2 wants to write a value to the cached block, it must do so in a series of steps.  First, N2 detaches itself from the linked list by updating N1's NEXT pointer.  Since N2 is no longer a sharer, it must invalidate its cached copy to prevent a race condition (N1 writing to the block while N2 holds an out-of-date version).  Then N2 re-requests the block from the directory with intent to write.  The directory processes N2's request, and N2 swaps out the current head's pointer (N1) with its own, making itself the new head.  N2 then uses the pointer to N1 to invalidate all sharers; in this case N1 is the only sharer [http://www.cs.utexas.edu/users/dburger/teaching/cs395t-s08/papers/9_sci.pdf].  This allows the SCI protocol to maintain write exclusivity.  Write exclusivity means only one write can be performed to a given block at a time.  Without write exclusivity, every node sharing the block would be able to write to it, allowing multiple writes to happen simultaneously.  If each cache wrote simultaneously, every cache sharing the block would be invalidated, including the head, and the block in main memory might not hold the correct value.  To implement this, there must be special states for the head, middle, and tail of the linked list, as well as states that determine the read/write capabilities of the nodes.  Table 1 shows how this scenario would look using the simplified version of SCI discussed later.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
{| border=&amp;quot;1&amp;quot; cellspacing=&amp;quot;0&amp;quot; cellpadding=&amp;quot;2&amp;quot;&lt;br /&gt;
!colspan=&amp;quot;7&amp;quot;| Table 1: Maintaining Write Exclusivity&lt;br /&gt;
|----&lt;br /&gt;
!Action&lt;br /&gt;
!State N1&lt;br /&gt;
!State N2&lt;br /&gt;
!State Directory&lt;br /&gt;
!Head Node&lt;br /&gt;
!Comments&lt;br /&gt;
|----&lt;br /&gt;
| - &lt;br /&gt;
|Sh&lt;br /&gt;
|St&lt;br /&gt;
|S&lt;br /&gt;
|N1&lt;br /&gt;
|Initial State&lt;br /&gt;
|----&lt;br /&gt;
|W2&lt;br /&gt;
|Sh&lt;br /&gt;
|I&lt;br /&gt;
|S&lt;br /&gt;
|N1&lt;br /&gt;
|Node 2 wants to write&lt;br /&gt;
|----&lt;br /&gt;
|&lt;br /&gt;
|I&lt;br /&gt;
|M&lt;br /&gt;
|EM&lt;br /&gt;
|N2&lt;br /&gt;
|N2 becomes new head invalidating N1&lt;br /&gt;
|----&lt;br /&gt;
|}&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Another scenario, described in Section 11.4.1 of the Solihin textbook, arises when a cache is in state M and the directory is in state EM for a given block.  If the cache holding the block flushes the value and the value does not reach the directory before a read request is made, a race condition can occur.  This race condition is handled by the use of pending states.  When the cache flushes a block, the line transitions from state M to a pending state, which transitions into the invalid state only once the directory has acknowledged receiving the flush.  This allows the cache to stall any read request for the block until the block reaches the steady state I [[#References | (Solihin)]].&lt;br /&gt;
&lt;br /&gt;
== SCI States ==&lt;br /&gt;
&lt;br /&gt;
=== Directory States ===&lt;br /&gt;
In SCI, the directory and each cache maintain a series of states. The directory can be in one of three states:&lt;br /&gt;
&lt;br /&gt;
# HOME or UNCACHED (U) – The only clean copy of the memory block is in main memory&lt;br /&gt;
# FRESH or SHARED (S) – The memory block is shared, but the copy in main memory is updated&lt;br /&gt;
# GONE or EXCLUSIVE/MODIFIED (EM) – The memory block in main memory is not updated, but a cache has the updated memory block&lt;br /&gt;
&lt;br /&gt;
If the directory has the block of memory in the GONE or EM state, then the directory must forward any incoming read/write requests for the block to the last cache that modified it.  That cache is responsible for updating main memory and supplying the data to the requesting cache. This is why the directory must maintain a pointer to the first cache in the cache list.&lt;br /&gt;
&lt;br /&gt;
=== Cache States : Complicated ===&lt;br /&gt;
For a cache, the SCI standard describes 29 stable states plus many pending states.  Each state consists of two parts.  The first part tells where the cache is in the linked list. There are four possibilities [[#References | (Culler (1999))]]:&lt;br /&gt;
&lt;br /&gt;
# ONLY – if the cache is the only cache to have cached the memory block&lt;br /&gt;
# HEAD – if the cache is the most recent reader/writer of the block and so is the first in the list&lt;br /&gt;
# TAIL – if the cache is the least recent reader/writer of the block and so is the last in the list&lt;br /&gt;
# MID – if the cache is not the first or last in the list&lt;br /&gt;
&lt;br /&gt;
In addition to these four location designations, the second part of each state describes the condition of the cached data.  Some of these include:&lt;br /&gt;
&lt;br /&gt;
# CLEAN – if the data in the memory block has not been modified, is the same as main memory, but can be modified.&lt;br /&gt;
# DIRTY – if the data in the memory block has been modified, is different from main memory, and can be written to again if needed.&lt;br /&gt;
# FRESH – if the data in the memory block has not been modified, is the same as main memory, but cannot be modified without notifying main memory.&lt;br /&gt;
# COPY – if the data in the memory has not been modified, is the same as main memory, but cannot be modified, only read.&lt;br /&gt;
&lt;br /&gt;
Each location designation can pair with a data-state designation, producing combined states such as:&lt;br /&gt;
&lt;br /&gt;
* ONLY_DIRTY - if the cache is the only cache to hold the memory block, and it has modified the block so that it differs from main memory&lt;br /&gt;
* MID_FRESH - if the cache is neither the first nor last cache in the list, and the data it has is the same as what is in main memory.&lt;br /&gt;
&lt;br /&gt;
As implied in the [[#Coherence Race Conditions | Race Conditions ]] section, some states are impossible, such as MID_DIRTY or TAIL_CLEAN.&lt;br /&gt;
&lt;br /&gt;
=== Cache States : Simplified ===&lt;br /&gt;
A simpler way to envision the SCI coherence protocol states is to limit the system to just two processors and expand the [http://en.wikipedia.org/wiki/MESI_protocol MESI protocol] states.  This results in a more compact state diagram that still conveys the general sense of how the protocol causes transitions.  It also illustrates how the directory states influence the cache states. In this case, we maintain the three states in the directory (U, S, EM) and the caches have the following states:&lt;br /&gt;
&lt;br /&gt;
# Invalid (I) – if the block in the cache's memory is invalid.&lt;br /&gt;
# Modified (M) – if the block in the cache's memory has been modified and no longer matches the block in main memory.&lt;br /&gt;
# Exclusive (E) – if the cache has the only copy of the memory block and it matches the block in main memory.&lt;br /&gt;
# Shared-Head (Sh) – if the block in the cache's memory is shared with the other processor's cache, but this processor was the last to read the memory.&lt;br /&gt;
# Shared-Tail (St) – if the block in the cache's memory is shared with the other processor's cache and the other processor was the last to read the memory.&lt;br /&gt;
&lt;br /&gt;
To further simplify this scenario, only the following operations are possible:&lt;br /&gt;
&lt;br /&gt;
# Read(O/S) – a read of the data by either the processor attached to the cache (S for Self) or the other processor (O).&lt;br /&gt;
# Write(O/S) – a write of the memory block by either the processor attached to the cache (S) or the other processor (O).&lt;br /&gt;
 &lt;br /&gt;
As mentioned in the [[#Coherence Race Conditions | Race Conditions ]] section, for a processor to perform a write, it must be at the head of the list (the Sh state in this scenario).&lt;br /&gt;
&lt;br /&gt;
The following state diagram illustrates the transitions:&lt;br /&gt;
&lt;br /&gt;
[[Image:SCIStatesC.png]]&lt;br /&gt;
&lt;br /&gt;
Figure 1: Simplified cache state diagram from SCI&lt;br /&gt;
&lt;br /&gt;
Initially, the cache is in the &amp;quot;I&amp;quot; state and the directory is in the &amp;quot;U&amp;quot; state.  If processor 1 (P1) makes a read, P1's cache moves from state I to E and the directory moves from U to EM.  The directory records that P1 is the head of the sharing list.  At this point, processor 2 (P2) is in state I.  If P2 makes a read while the directory is in state EM, the directory notifies P1 that P2 is making a read; P1 transitions from E to St and sends the data to P2.  P2 transitions from I to Sh and creates a pointer to P1 to note that P1 is the next cache on the sharing list. The directory transitions from EM to S and updates its pointer from P1 to P2 to note that P2 is now the head of the sharing list.&lt;br /&gt;
&lt;br /&gt;
As you can see from the diagram, a Write(S) can only occur when a processor is in the &amp;quot;Sh&amp;quot; state.  In the actual SCI protocol, a write can occur when the cache is in the HEAD state combined with CLEAN/FRESH/DIRTY or when the cache is in the ONLY state.  In this simplified scenario, these states are combined into one, the Sh state.&lt;br /&gt;
&lt;br /&gt;
== Summary ==&lt;br /&gt;
The SCI protocol is a directory-based cache coherence protocol for Distributed Shared Memory, similar to the one described in [[#References | Solihin ]].  It is an extensive and complicated strategy that maintains coherence as the system scales to many processors.&lt;br /&gt;
&lt;br /&gt;
== References ==&lt;br /&gt;
* Yan Solihin, ''Fundamentals of Parallel Computer Architecture: Multichip and Multicore Systems,'' Solihin Books, 2008.&lt;br /&gt;
* Culler, David E. and Jaswinder Pal Singh, ''Parallel Computer Architecture,'' Morgan Kaufmann Publishers, Inc, 1999.&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44566</id>
		<title>CSC/ECE 506 Spring 2011/ch6b df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44566"/>
		<updated>2011-03-21T00:29:40Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Translation Lookaside Buffer and Cache Addressing=&lt;br /&gt;
[[Image:Virtual_Memory_address_space.gif|thumbnail|400px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;1body&amp;quot;&amp;gt;[[#1foot|[1]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
Programs access memory and caches using virtual addresses.  Virtual addressing either &amp;quot;increases&amp;quot; the effective size of the memory or lets multiple processes use the same memory/cache.  If programs did not use virtual addresses, two programs running at the same time would each need their own section of the cache.  For example, if the computer has a 512MB L1 cache and two programs are running at the same time, each program could only use 256MB of the cache.  One problem with programs accessing memory through virtual addresses is that caches and memory must be accessed with physical addresses.  The Operating System usually translates these virtual addresses into physical addresses.  Because the OS can be relatively slow at this, a cache is created that stores the most recently translated addresses.  This cache is called the Translation Lookaside Buffer, or TLB.  Virtual addresses are broken up into two parts: a virtual page number (VPN) and a virtual page offset (VPO).  Physical addresses are broken up into a physical page number (PPN) and a physical page offset (PPO).  The page offset determines where a given value lies within a page, while the page number determines which page the data is in.  These pages are managed by the OS through page tables.  There are three different ways to use virtual and physical addresses to access data: physically addressed, virtually addressed, and physically tagged but virtually addressed&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
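The VPN/VPO split can be shown concretely with a short sketch. The 4 KiB page size (and hence the 12-bit offset) is an assumption for illustration; the text does not fix a page size.

```python
# Illustrative split of a virtual address into VPN and VPO, assuming a
# hypothetical 4 KiB page size (12 offset bits) purely for the example.

PAGE_OFFSET_BITS = 12           # 4 KiB page -> 12-bit page offset
PAGE_SIZE = 1 << PAGE_OFFSET_BITS

def split_virtual_address(va):
    vpn = va >> PAGE_OFFSET_BITS        # virtual page number (upper bits)
    vpo = va & (PAGE_SIZE - 1)          # virtual page offset (lower bits)
    return vpn, vpo

vpn, vpo = split_virtual_address(0x12345678)
print(hex(vpn), hex(vpo))  # 0x12345 0x678
```

Translation replaces only the VPN with a PPN; the offset bits pass through unchanged, which is what the schemes below exploit.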
&lt;br /&gt;
===Translation Lookaside Buffer===&lt;br /&gt;
The TLB is a small cache that holds the most recently used virtual-to-physical address translations.  When a virtual address needs to be accessed, the processor looks it up in the TLB to find the corresponding physical page address.  If the virtual address is found, there is a TLB hit and the physical address is obtained.  If there is a TLB miss (the virtual address is not found), the OS is called to handle the miss: it maps the virtual address to the correct physical address and loads the translation into the TLB.  The use of a TLB reduces the time needed to map a virtual address to a physical address by avoiding a trip through the OS on every access&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
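A minimal sketch of this hit/miss logic follows. The 4 KiB page size, the dictionary standing in for the hardware TLB, and the `os_walk_page_table` helper (a placeholder for the slow OS page-table walk) are all assumptions made for illustration.

```python
# Sketch of TLB lookup: hit -> fast translation; miss -> OS page-table
# walk, then the translation is cached in the TLB for next time.

PAGE_OFFSET_BITS = 12                       # assumed 4 KiB pages

def os_walk_page_table(vpn, page_table):
    return page_table[vpn]                  # placeholder for the slow OS path

def translate(va, tlb, page_table):
    vpn = va >> PAGE_OFFSET_BITS
    vpo = va & ((1 << PAGE_OFFSET_BITS) - 1)
    if vpn in tlb:                          # TLB hit
        ppn = tlb[vpn]
    else:                                   # TLB miss: ask the OS, cache result
        ppn = os_walk_page_table(vpn, page_table)
        tlb[vpn] = ppn
    return (ppn << PAGE_OFFSET_BITS) | vpo  # physical address = PPN | PPO

tlb = {}
page_table = {0x12345: 0xABCDE}
pa = translate(0x12345678, tlb, page_table)   # miss: fills the TLB
pa2 = translate(0x12345111, tlb, page_table)  # hit: same page, no OS call
print(hex(pa), hex(pa2))  # 0xabcde678 0xabcde111
```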
&lt;br /&gt;
===Physically Addressed===&lt;br /&gt;
Physically Addressed refers to using the whole physical address (PPN and PPO) to access memory.  To access memory, the computer must first translate the virtual address into a physical address using the TLB.  After the computer has the physical address, it can index through the cache and pull out the block it needs.  The computer then checks the tag to see if it is a hit or miss.  This is not ideal because the time it takes to access the TLB is added onto the time it takes to access the cache&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Virtually Addressed===&lt;br /&gt;
Virtually Addressed refers to using the whole virtual address to access memory.  The advantage of this approach is that the cache and the TLB can be accessed at the same time.  The problem is that each process has its own virtual page table, so if the processor switches from one process to another, the cache still holds the previous process's data.  The new process cannot tell the difference between its own memory pages and those of the previous process.  To solve this problem, the cache and TLB have to be flushed (re-initialized to empty) before a new process can run.  If processes switch often, the latency of these flushes will slow down the programs&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Tagged but Virtually Addressed===&lt;br /&gt;
[[Image:VirtualtoPhysicalCacheAddress.jpg|thumbnail|600px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
This is the case where the VPO and the PPO are identical, so only the virtual page number needs to be translated.  With this model, the TLB translates only the VPN, not the whole virtual address.  The VPO is used to access the cache while the VPN is sent to the TLB to be translated into the PPN.  This method allows the TLB and the cache to be accessed at the same time, without the process-switching problem of purely virtual addressing, since the PPO is known from the beginning and the cache index and byte offset bits are contained within the PPO.  However, there is one limitation: because the bits that specify the set and byte offset must fit within the page offset, the page size must be at least the number of sets multiplied by the block size.  This places a limit on the size of the cache, given by the following formula&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MaxCacheSize = PageSize X Associativity&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, consider a cache with the following parameters: the page size is 4KB and the cache is 2-way set associative with a block size of 32 bytes.  A page size of 4KB means 12 bits are needed for the page offset.  Also, 5 bits are needed for the block offset.  This leaves 7 bits to specify the cache set, which results in 2^7 or 128 different sets.  With an associativity of 2, the maximum cache size is 8KB.  This turns out to be a minor issue, since larger caches take longer to access anyway&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
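The arithmetic in this example can be checked mechanically.  The sketch below simply re-derives the numbers from the stated parameters (4KB pages, 2-way set associative, 32-byte blocks); nothing here is new information.

```python
# Re-deriving the worked example above from its parameters.
import math

page_size = 4 * 1024      # 4KB page
block_size = 32           # bytes per cache block
associativity = 2         # 2-way set associative

page_offset_bits = int(math.log2(page_size))     # 12 bits
block_offset_bits = int(math.log2(block_size))   # 5 bits
set_index_bits = page_offset_bits - block_offset_bits  # 7 bits remain
num_sets = 2 ** set_index_bits                   # 2^7 = 128 sets

max_cache_size = page_size * associativity       # the formula above
# Cross-check against cache size = sets x associativity x block size
assert max_cache_size == num_sets * associativity * block_size
print(max_cache_size)                            # 8192 bytes, i.e. 8KB
```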
From the above equation, and knowing that cache size = (number of sets) X (associativity) X (block size), we can derive a formula relating associativity to cache size and page size:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Associativity ≥ (cache size) / (page size)&lt;br /&gt;
&lt;br /&gt;
With a fixed page size, the only way to increase cache size is by increasing associativity.  For example, the VAX processor has a page size of 4KB.  If the cache is 16KB, the associativity would have to be at least 4&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;2body&amp;quot;&amp;gt;[[#2foot|[2]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
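Applying the bound to a VAX-style configuration (4KB pages, 16KB cache) is straightforward arithmetic; the helper name below is made up for illustration.

```python
# Minimum associativity for a physically tagged, virtually indexed
# cache: associativity must be at least cache_size / page_size.
def min_associativity(cache_size, page_size):
    return max(1, cache_size // page_size)

print(min_associativity(16 * 1024, 4 * 1024))   # 4
```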
&lt;br /&gt;
===Cache Coherency Problem===&lt;br /&gt;
When a computer has multiple cores or processors, each with its own cache and TLB, a TLB coherence problem arises: when one processor changes a TLB entry, other TLBs may be left holding stale copies of that data.  This problem cannot simply be handled by having one processor invalidate the other TLBs' entries when it makes a change, because on most systems a processor does not have the authority to invalidate entries in another processor's TLB.  One way to deal effectively with the TLB coherence problem is the shootdown algorithm.  When a processor realizes that the changes it is making might cause inconsistencies in other TLBs, it invokes the shootdown algorithm, which forcefully interrupts the affected processors so they can perform consistency actions such as invalidating an entry or flushing the buffer&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;5body&amp;quot;&amp;gt;[[#5foot|[5]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.  The interruption is considered &amp;quot;shooting&amp;quot; entries out of the TLB, and the entire process is called a shootdown.  Inconsistent entries are guaranteed never to be accessed by any TLB again.  Intel processors provide an instruction called INVLPG, which invalidates a single page table entry in a TLB&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;6body&amp;quot;&amp;gt;[[#6foot|[6]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.  Because this instruction affects only the local TLB, it would be used within a shootdown algorithm to invalidate entries across multiple TLBs.&lt;br /&gt;
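A very rough sketch of the shootdown idea, with each processor's TLB modeled as a dict.  The structure is hypothetical and glosses over the actual inter-processor interrupt mechanics; it only shows the end state the algorithm must guarantee.

```python
# Hypothetical model: one translation cache (VPN to PPN) per processor.
tlbs = {
    0: {1: 3, 2: 9},   # processor 0 caches translations for VPN 1 and 2
    1: {1: 3},         # processor 1 caches VPN 1
    2: {2: 9},         # processor 2 caches VPN 2
}

def shootdown(vpn):
    # The initiating processor interrupts the others, and each one
    # drops ("shoots down") its now-stale entry for this VPN, much
    # like each processor executing INVLPG locally.
    for tlb in tlbs.values():
        tlb.pop(vpn, None)

shootdown(1)   # after remapping VPN 1, no TLB may keep the old entry
print(all(1 not in tlb for tlb in tlbs.values()))   # True
```

Unrelated entries (VPN 2 here) survive the shootdown, which is why single-entry invalidation is preferred over flushing whole TLBs.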
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;span id=&amp;quot;1foot&amp;quot;&amp;gt;[[#1body|1.]]&amp;lt;/span&amp;gt;http://www.ibm.com/developerworks/linux/library/l-kernel-memory-access/index.html?ca=dgr-lnxw100LXUserSpacedth-LX  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;2foot&amp;quot;&amp;gt;[[#2body|2.]]&amp;lt;/span&amp;gt; http://people.engr.ncsu.edu/efg/521/f02/common/lectures/notes/lec9.html  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;3foot&amp;quot;&amp;gt;[[#3body|3.]]&amp;lt;/span&amp;gt; Fundamentals of Parallel Computer Architecture by Prof. Yan Solihin  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;4foot&amp;quot;&amp;gt;[[#4body|4.]]&amp;lt;/span&amp;gt; Computer Design &amp;amp; Technology- Lectures slides by Prof. Eric Rotenberg  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;5foot&amp;quot;&amp;gt;[[#5body|5.]]&amp;lt;/span&amp;gt; &amp;quot;Translation Lookaside Buffer Consistency: A Software Approach&amp;quot;. David L. Black, Richard F. Rashid, David B. Golub, Charles R. Hill, and Robert V. Baron. Carnegie Mellon University  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;6foot&amp;quot;&amp;gt;[[#6body|6.]]&amp;lt;/span&amp;gt; http://faydoc.tripod.com/cpu/invlpg.htm  &amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44303</id>
		<title>CSC/ECE 506 Spring 2011/ch6b df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44303"/>
		<updated>2011-03-01T04:55:16Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: /* Translation Lookaside Buffer and Cache Addressing */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Translation Lookaside Buffer and Cache Addressing=&lt;br /&gt;
[[Image:Virtual_Memory_address_space.gif|thumbnail|400px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;1body&amp;quot;&amp;gt;[[#1foot|[1]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
Programs use virtual addresses to access memory and caches, either to &amp;quot;increase&amp;quot; the apparent size of memory or to allow multiple processes to share the same memory and cache.  If programs did not use virtual addresses, two programs running at the same time would each need their own section of the cache.  For example, if the computer had a 512MB L1 cache and two programs were running at the same time, each program could only use 256MB of the cache.  One problem with programs accessing memory through virtual addresses is that caches and memory must be accessed with physical addresses.  The Operating System is usually responsible for translating these virtual addresses into physical addresses.  Because the OS can be relatively slow at this, a cache is created that stores the most recently used translations.  This cache is called the Translation Lookaside Buffer, or TLB.  Virtual addresses are broken up into two parts: a virtual page number (VPN) and a virtual page offset (VPO).  Physical addresses are broken up into a physical page number (PPN) and a physical page offset (PPO).  The page number determines which page the data is in, while the page offset determines the location of the data within that page.  These pages are managed by the OS through page tables.  There are three different ways to use virtual and physical addresses to access data: physically addressed, virtually addressed, and physically tagged but virtually addressed&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Translation Lookaside Buffer===&lt;br /&gt;
The TLB is a small cache that holds the most recently used virtual-to-physical address translations.  When a virtual address needs to be translated, the processor looks it up in the TLB to find the corresponding physical page number.  If the virtual address is found, there is a TLB hit and the physical address is obtained.  If there is a TLB miss (the virtual address is not found), the OS is called to handle the miss: it maps the virtual address to the correct physical address and loads the translation into the TLB.  The use of a TLB reduces the time needed to map a virtual address to a physical address by avoiding a trip through the OS on every access&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Addressed===&lt;br /&gt;
Physically Addressed refers to using the whole physical address (PPN and PPO) to access memory.  To access memory, the computer must first translate the virtual address into a physical address using the TLB.  After the computer has the physical address, it can index through the cache and pull out the block it needs.  The computer then checks the tag to see if it is a hit or miss.  This is not ideal because the time it takes to access the TLB is added onto the time it takes to access the cache&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Virtually Addressed===&lt;br /&gt;
Virtually Addressed refers to using the whole virtual address to access memory.  The advantage of this approach is that the cache and the TLB can be accessed at the same time.  The problem is that each process has its own virtual page table, so if the processor switches from one process to another, the cache still holds the previous process's data.  The new process cannot tell the difference between its own memory pages and those of the previous process.  To solve this problem, the cache and TLB have to be flushed (re-initialized to empty) before a new process can run.  If processes switch often, the latency of these flushes will slow down the programs&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Tagged but Virtually Addressed===&lt;br /&gt;
[[Image:VirtualtoPhysicalCacheAddress.jpg|thumbnail|600px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
This is the case where the VPO and the PPO are identical, so only the virtual page number needs to be translated.  With this model, the TLB translates only the VPN, not the whole virtual address.  The VPO is used to access the cache while the VPN is sent to the TLB to be translated into the PPN.  This method allows the TLB and the cache to be accessed at the same time, without the process-switching problem of purely virtual addressing, since the PPO is known from the beginning and the cache index and byte offset bits are contained within the PPO.  However, there is one limitation: because the bits that specify the set and byte offset must fit within the page offset, the page size must be at least the number of sets multiplied by the block size.  This places a limit on the size of the cache, given by the following formula&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MaxCacheSize = PageSize X Associativity&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, consider a cache with the following parameters: the page size is 4KB and the cache is 2-way set associative with a block size of 32 bytes.  A page size of 4KB means 12 bits are needed for the page offset.  Also, 5 bits are needed for the block offset.  This leaves 7 bits to specify the cache set, which results in 2^7 or 128 different sets.  With an associativity of 2, the maximum cache size is 8KB.  This is not a major issue, since performance constraints keep caches from growing too large in the first place&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
From the above equation, and knowing that cache size = (number of sets) X (associativity) X (block size), we can derive a formula relating associativity to cache size and page size:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Associativity ≥ (cache size) / (page size)&lt;br /&gt;
&lt;br /&gt;
With a fixed page size, the only way to increase cache size is by increasing associativity.  For example, the VAX processor has a page size of 4KB.  If the cache is 16KB, the associativity would have to be at least 4&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;2body&amp;quot;&amp;gt;[[#2foot|[2]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Cache Coherency Problem===&lt;br /&gt;
When a computer has multiple cores or processors, each with its own cache and TLB, a TLB coherence problem arises: when one processor changes a TLB entry, other TLBs may be left holding stale copies of that data.  This problem cannot simply be handled by having one processor invalidate the other TLBs' entries when it makes a change, because on most systems a processor does not have the authority to invalidate entries in another processor's TLB.  One way to deal effectively with the TLB coherence problem is the shootdown algorithm.  When a processor realizes that the changes it is making might cause inconsistencies in other TLBs, it invokes the shootdown algorithm, which forcefully interrupts the affected processors so they can perform consistency actions such as invalidating an entry or flushing the buffer&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;5body&amp;quot;&amp;gt;[[#5foot|[5]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.  The interruption is considered &amp;quot;shooting&amp;quot; entries out of the TLB, and the entire process is called a shootdown.  Inconsistent entries are guaranteed never to be accessed by any TLB again.  Intel processors provide an instruction called INVLPG, which invalidates a single page table entry in a TLB&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;6body&amp;quot;&amp;gt;[[#6foot|[6]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.  Because this instruction affects only the local TLB, it would be used within a shootdown algorithm to invalidate entries across multiple TLBs.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;span id=&amp;quot;1foot&amp;quot;&amp;gt;[[#1body|1.]]&amp;lt;/span&amp;gt;http://www.ibm.com/developerworks/linux/library/l-kernel-memory-access/index.html?ca=dgr-lnxw100LXUserSpacedth-LX  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;2foot&amp;quot;&amp;gt;[[#2body|2.]]&amp;lt;/span&amp;gt; http://people.engr.ncsu.edu/efg/521/f02/common/lectures/notes/lec9.html  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;3foot&amp;quot;&amp;gt;[[#3body|3.]]&amp;lt;/span&amp;gt; Fundamentals of Parallel Computer Architecture by Prof. Yan Solihin  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;4foot&amp;quot;&amp;gt;[[#4body|4.]]&amp;lt;/span&amp;gt; Computer Design &amp;amp; Technology- Lectures slides by Prof. Eric Rotenberg  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;5foot&amp;quot;&amp;gt;[[#5body|5.]]&amp;lt;/span&amp;gt; &amp;quot;Translation Lookaside Buffer Consistency: A Software Approach&amp;quot;. David L. Black, Richard F. Rashid, David B. Golub, Charles R. Hill, and Robert V. Baron. Carnegie Mellon University  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;6foot&amp;quot;&amp;gt;[[#6body|6.]]&amp;lt;/span&amp;gt; http://faydoc.tripod.com/cpu/invlpg.htm  &amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44299</id>
		<title>CSC/ECE 506 Spring 2011/ch6b df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44299"/>
		<updated>2011-03-01T04:44:31Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Translation Lookaside Buffer and Cache Addressing=&lt;br /&gt;
[[Image:Virtual_Memory_address_space.gif|thumbnail|400px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;1body&amp;quot;&amp;gt;[[#1foot|[1]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
Programs use virtual addresses to access memory and caches.  If programs did not use virtual addresses, two programs running at the same time would each need their own section of the cache.  For example, if the computer had a 512MB L1 cache and two programs were running at the same time, each program could only use 256MB of the cache.  Virtual addresses allow each program to use the whole cache.  One problem with programs accessing memory through virtual addresses is that caches and memory must be accessed with physical addresses.  The Operating System is usually responsible for translating these virtual addresses into physical addresses.  Because the OS can be relatively slow at this, a cache is created that stores the most recently used translations.  This cache is called the Translation Lookaside Buffer, or TLB; it is discussed in more detail below.  Virtual addresses are broken up into two parts: a virtual page number (VPN) and a virtual page offset (VPO).  Physical addresses are broken up into a physical page number (PPN) and a physical page offset (PPO).  The page number determines which page the data is in, while the page offset determines the location of the data within that page.  These pages are managed by the OS through page tables.  There are three different ways to use virtual and physical addresses to access data: physically addressed, virtually addressed, and physically tagged but virtually addressed&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Translation Lookaside Buffer===&lt;br /&gt;
The TLB is a small cache that holds the most recently used virtual-to-physical address translations.  When a virtual address needs to be translated, the processor looks it up in the TLB to find the corresponding physical page number.  If the virtual address is found, there is a TLB hit and the physical address is returned.  If there is a TLB miss (the virtual address is not found), the OS is called to handle the miss: it maps the virtual address to the correct physical address and loads the translation into the TLB.  The use of a TLB reduces the time needed to map a virtual address to a physical address by avoiding a trip through the OS on every access&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Addressed===&lt;br /&gt;
Physically Addressed refers to using the whole physical address to access memory.  To access memory, the computer must first translate the virtual address into a physical address using the TLB.  After the computer has the physical address, it can index through the cache and pull out the block it needs.  The computer then checks the tag to see if it is a hit or miss.  This is not ideal because the time it takes to access the TLB is added onto the time it takes to access the cache&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Virtually Addressed===&lt;br /&gt;
Virtually Addressed refers to using the whole virtual address to access memory.  The advantage of this approach is that the cache and the TLB can be accessed at the same time.  The problem is that each process has its own virtual page table, so if the processor switches from one process to another, the cache still holds the previous process's data.  The new process cannot tell the difference between its own memory pages and those of the previous process.  To solve this problem, the cache and TLB have to be flushed (re-initialized to empty) before a new process can run.  If processes switch often, the latency of these flushes will slow down the programs&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Tagged but Virtually Addressed===&lt;br /&gt;
[[Image:VirtualtoPhysicalCacheAddress.jpg|thumbnail|600px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
This is the case where the VPO and the PPO are identical, so only the virtual page number needs to be translated.  With this model, the TLB translates only the VPN, not the whole virtual address.  The VPO is used to access the cache while the VPN is sent to the TLB to be translated into the PPN.  This method allows the TLB and the cache to be accessed at the same time, without the process-switching problem of purely virtual addressing, since the PPO is known from the beginning and the cache index and byte offset bits are contained within the PPO.  However, there is one limitation: because the bits that specify the set and byte offset must fit within the page offset, the page size must be at least the number of sets multiplied by the block size.  This places a limit on the size of the cache, given by the following formula&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MaxCacheSize = PageSize X Associativity&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, consider a cache with the following parameters: the page size is 4KB and the cache is 2-way set associative with a block size of 32 bytes.  A page size of 4KB means 12 bits are needed for the page offset.  Also, 5 bits are needed for the block offset.  This leaves 7 bits to specify the cache set, which results in 2^7 or 128 different sets.  With an associativity of 2, the maximum cache size is 8KB.  This is not a major issue, since performance constraints keep caches from growing too large in the first place&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
From the above equation, and knowing that cache size = (number of sets) X (associativity) X (block size), we can derive a formula relating associativity to cache size and page size:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Associativity ≥ (cache size) / (page size)&lt;br /&gt;
&lt;br /&gt;
With a fixed page size, the only way to increase cache size is by increasing associativity.  For example, the VAX processor has a page size of 4KB.  If the cache is 16KB, the associativity would have to be at least 4&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;2body&amp;quot;&amp;gt;[[#2foot|[2]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Cache Coherency Problem===&lt;br /&gt;
When a computer has multiple cores or processors, each with its own cache and TLB, a TLB coherence problem arises: when one processor changes a TLB entry, other TLBs may be left holding stale copies of that data.  This problem cannot simply be handled by having one processor invalidate the other TLBs' entries when it makes a change, because on most systems a processor does not have the authority to invalidate entries in another processor's TLB.  One way to deal effectively with the TLB coherence problem is the shootdown algorithm.  When a processor realizes that the changes it is making might cause inconsistencies in other TLBs, it invokes the shootdown algorithm, which forcefully interrupts the affected processors so they can perform consistency actions such as invalidating an entry or flushing the buffer&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;5body&amp;quot;&amp;gt;[[#5foot|[5]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.  The interruption is considered &amp;quot;shooting&amp;quot; entries out of the TLB, and the entire process is called a shootdown.  Inconsistent entries are guaranteed never to be accessed by any TLB again.  Intel processors provide an instruction called INVLPG, which invalidates a single page table entry in a TLB&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;6body&amp;quot;&amp;gt;[[#6foot|[6]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.  Because this instruction affects only the local TLB, it would be used within a shootdown algorithm to invalidate entries across multiple TLBs.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;span id=&amp;quot;1foot&amp;quot;&amp;gt;[[#1body|1.]]&amp;lt;/span&amp;gt;http://www.ibm.com/developerworks/linux/library/l-kernel-memory-access/index.html?ca=dgr-lnxw100LXUserSpacedth-LX  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;2foot&amp;quot;&amp;gt;[[#2body|2.]]&amp;lt;/span&amp;gt; http://people.engr.ncsu.edu/efg/521/f02/common/lectures/notes/lec9.html  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;3foot&amp;quot;&amp;gt;[[#3body|3.]]&amp;lt;/span&amp;gt; Fundamentals of Parallel Computer Architecture by Prof. Yan Solihin  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;4foot&amp;quot;&amp;gt;[[#4body|4.]]&amp;lt;/span&amp;gt; Computer Design &amp;amp; Technology- Lectures slides by Prof. Eric Rotenberg  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;5foot&amp;quot;&amp;gt;[[#5body|5.]]&amp;lt;/span&amp;gt; &amp;quot;Translation Lookaside Buffer Consistency: A Software Approach&amp;quot;. David L. Black, Richard F. Rashid, David B. Golub, Charles R. Hill, and Robert V. Baron. Carnegie Mellon University  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;6foot&amp;quot;&amp;gt;[[#6body|6.]]&amp;lt;/span&amp;gt; http://faydoc.tripod.com/cpu/invlpg.htm  &amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44298</id>
		<title>CSC/ECE 506 Spring 2011/ch6b df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44298"/>
		<updated>2011-03-01T04:43:30Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Translation Lookaside Buffer and Cache Addressing=&lt;br /&gt;
[[Image:Virtual_Memory_address_space.gif|thumbnail|400px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;1body&amp;quot;&amp;gt;[[#1foot|[1]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
Programs use virtual addresses to access memory and caches.  If programs did not use virtual addresses, two programs running at the same time would each need their own section of the cache.  For example, if the computer had a 512MB L1 cache and two programs were running at the same time, each program could only use 256MB of the cache.  Virtual addresses allow each program to use the whole cache.  One problem with programs accessing memory through virtual addresses is that caches and memory must be accessed with physical addresses.  The Operating System is usually responsible for translating these virtual addresses into physical addresses.  Because the OS can be relatively slow at this, a cache is created that stores the most recently used translations.  This cache is called the Translation Lookaside Buffer, or TLB; it is discussed in more detail below.  Virtual addresses are broken up into two parts: a virtual page number (VPN) and a virtual page offset (VPO).  Physical addresses are broken up into a physical page number (PPN) and a physical page offset (PPO).  The page number determines which page the data is in, while the page offset determines the location of the data within that page.  These pages are managed by the OS through page tables.  There are three different ways to use virtual and physical addresses to access data: physically addressed, virtually addressed, and physically tagged but virtually addressed&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Translation Lookaside Buffer===&lt;br /&gt;
The TLB is a small cache that holds the most recently used virtual-to-physical address translations.  When a virtual address needs to be translated, the processor looks it up in the TLB to find the corresponding physical page number.  If the virtual address is found, there is a TLB hit and the physical address is returned.  If there is a TLB miss (the virtual address is not found), the OS is called to handle the miss: it maps the virtual address to the correct physical address and loads the translation into the TLB.  The use of a TLB reduces the time needed to map a virtual address to a physical address by avoiding a trip through the OS on every access&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Addressed===&lt;br /&gt;
Physically addressed refers to using the whole physical address to access the cache.  To access memory, the computer must first translate the virtual address into a physical address using the TLB.  Once the computer has the physical address, it can index into the cache and pull out the block it needs.  The computer then checks the tag to see whether the access is a hit or a miss.  This is not ideal because the time it takes to access the TLB is added onto the time it takes to access the cache&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Virtually Addressed===&lt;br /&gt;
Virtually addressed refers to using the whole virtual address to access the cache.  The advantage is that the cache and the TLB can be accessed at the same time.  The problem is that each process has its own virtual page table, so if the processor switches from one process to another, the cache still holds the previous process's data.  The new process will not be able to tell the difference between its own memory pages and those of the previous process.  To solve this problem, the cache and TLB have to be flushed, or re-initialized to empty, before a new process can be worked on.  If processes switch often, the latency required for these flushes will slow down the programs&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Tagged but Virtually Addressed===&lt;br /&gt;
[[Image:VirtualtoPhysicalCacheAddress.jpg|thumbnail|600px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
This is the case where the VPO and the PPO are identical, so the only thing that needs to be translated is the virtual page number.  With this model, the TLB translates only the VPN, not the whole virtual address.  The VPO and VPN are split up: the VPO is used to access the cache while the VPN is sent to the TLB to be translated into the PPN.  This method allows the TLB and the cache to be accessed at the same time, without the problem of purely virtual addressing, since the PPO is known from the beginning and the cache index and byte offset bits are contained within the PPO.  However, there is one problem with this approach.  The bits used to specify the set and byte offset must fit within the page offset.  Because of this, the page size has to be at least the number of sets multiplied by the block size, which puts a limit on the size of the cache.  The maximum cache size is given by the following formula:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MaxCacheSize = PageSize X Associativity&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, suppose a cache has the following parameters: the page size is 4KB and the cache is 2-way set associative with a block size of 32 bytes.  A page size of 4KB means 12 bits are needed for the page offset.  Also, 5 bits are needed for the block offset.  This leaves 7 bits to specify the cache set, which gives 2^7 or 128 different sets.  With the associativity being 2, the maximum cache size is 128 X 2 X 32 bytes = 8KB.  This is not a serious limitation, because first-level caches cannot be very large anyway due to performance constraints&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
From the above equation, and knowing that cache size = (number of sets) X (associativity) X (block size), we can derive a formula relating associativity to cache size and page size:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Associativity ≥ (cache size) / (page size)&lt;br /&gt;
&lt;br /&gt;
With a fixed page size, the only way to increase cache size is to increase associativity.  For example, the VAX processor has a page size of 512 bytes.  If the cache is 16KB, then the associativity would have to be at least 32&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;2body&amp;quot;&amp;gt;[[#2foot|[2]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
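The arithmetic in the worked example can be checked directly; the parameters (4KB page, 2-way, 32-byte blocks) come from the text, and the bit counts are derived from them:

```python
# Parameters from the worked example in the text.
page_size = 4 * 1024   # 4 KB page  -> 12 page-offset bits
block_size = 32        # 32 B block -> 5 block-offset bits
associativity = 2      # 2-way set associative

offset_bits = 12
block_bits = 5
num_sets = 2 ** (offset_bits - block_bits)           # 2^7 = 128 sets
cache_size = num_sets * associativity * block_size   # 128 * 2 * 32 = 8 KB

# MaxCacheSize = PageSize X Associativity
assert cache_size == page_size * associativity
# Associativity must be at least (cache size) / (page size)
assert associativity >= cache_size / page_size
```

Both formulas from the text hold for these numbers: the 8KB maximum equals the 4KB page size times the associativity of 2.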
&lt;br /&gt;
===Cache Coherency Problem===&lt;br /&gt;
When a computer has multiple cores or processors, each with its own caches and TLBs, TLB coherence becomes a problem.  Different TLBs on different processors may hold incorrect data: when one processor changes an entry, the other TLBs may still hold stale copies of the old translation.  The problem cannot be solved simply by having the processor that makes the change invalidate the corresponding entries in all the other TLBs; on most systems, a processor does not have the authority to invalidate entries in another processor's TLB.  One way to deal effectively with the TLB coherency problem is the shootdown algorithm.  When a processor realizes that a change it is making might cause inconsistencies among the TLBs, it invokes the shootdown algorithm.  The shootdown algorithm forcibly interrupts certain processors so that they perform TLB consistency actions, for example invalidating an entry or flushing the buffer&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;5body&amp;quot;&amp;gt;[[#5foot|[5]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.  The interruption is thought of as &amp;quot;shooting&amp;quot; entries out of the TLB, and the entire process is called a shootdown.  Inconsistent entries are guaranteed never to be accessed by any TLB again.  Intel processors provide an instruction called INVLPG, which invalidates a single page table entry in the TLB&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;6body&amp;quot;&amp;gt;[[#6foot|[6]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.  Because INVLPG affects only the local TLB, a shootdown algorithm would use it on each processor to invalidate the entry in multiple TLBs.&lt;br /&gt;
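A toy sketch of the shootdown idea (the data structures here are hypothetical and stand in for hardware; this is not the real Mach algorithm from reference [5]):

```python
# Toy model of a TLB shootdown.  Each core has its own TLB; the initiating
# core "interrupts" every core so each one drops its possibly stale entry,
# the way a per-entry INVLPG would be issued on each processor.
NUM_CORES = 4
tlbs = [dict() for _ in range(NUM_CORES)]   # per-core VPN -> PPN caches

def shootdown(vpn):
    """Force every core to invalidate its translation for one page."""
    for tlb in tlbs:
        tlb.pop(vpn, None)   # the interrupted core flushes the entry locally

# All cores have cached a translation for VPN 5; core 0 then remaps the
# page, so the now-stale entries must be shot out of every TLB.
for tlb in tlbs:
    tlb[5] = 42
shootdown(5)
```

After the shootdown, no TLB can return the stale translation; the next access on any core takes a TLB miss and reloads the current mapping.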
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;span id=&amp;quot;1foot&amp;quot;&amp;gt;[[#1body|1.]]&amp;lt;/span&amp;gt; http://www.ibm.com/developerworks/linux/library/l-kernel-memory-access/index.html?ca=dgr-lnxw100LXUserSpacedth-LX  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;2foot&amp;quot;&amp;gt;[[#2body|2.]]&amp;lt;/span&amp;gt; http://people.engr.ncsu.edu/efg/521/f02/common/lectures/notes/lec9.html  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;3foot&amp;quot;&amp;gt;[[#3body|3.]]&amp;lt;/span&amp;gt; Fundamentals of Parallel Computer Architecture by Prof. Yan Solihin  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;4foot&amp;quot;&amp;gt;[[#4body|4.]]&amp;lt;/span&amp;gt; Computer Design &amp;amp; Technology lecture slides by Prof. Eric Rotenberg  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;5foot&amp;quot;&amp;gt;[[#5body|5.]]&amp;lt;/span&amp;gt; &amp;quot;Translation Lookaside Buffer Consistency: A Software Approach&amp;quot;. David L. Black, Richard F. Rashid, David B. Golub, Charles R. Hill, and Robert V. Baron. Carnegie Mellon University  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;6foot&amp;quot;&amp;gt;[[#6body|6.]]&amp;lt;/span&amp;gt; http://faydoc.tripod.com/cpu/invlpg.htm  &amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44297</id>
		<title>CSC/ECE 506 Spring 2011/ch6b df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44297"/>
		<updated>2011-03-01T04:43:03Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: /* Cache Coherency Problem */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Translation Lookaside Buffer and Cache Addressing=&lt;br /&gt;
[[Image:Virtual_Memory_address_space.gif|thumbnail|400px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;1body&amp;quot;&amp;gt;[[#1foot|[1]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
Programs use virtual addresses to access memory and caches.  If programs did not use virtual addresses, two programs running at the same time would each need their own section of the cache.  For example, if the computer has a 512MB L1 cache and two programs are running at the same time, each program could only use 256MB of the cache.  Virtual addresses allow each program to use the whole cache.  One problem with programs accessing memory through virtual addresses is that caches and memory must be accessed with physical addresses.  The operating system is usually responsible for translating virtual addresses into physical addresses.  Because the OS can be relatively slow at this, a cache is used to store the most recently used translations.  This cache is called the Translation Lookaside Buffer, or TLB; I will discuss the TLB in more detail later.  Virtual addresses are broken up into two parts: a virtual page number (VPN) and a virtual page offset (VPO).  Physical addresses are broken up into a physical page number (PPN) and a physical page offset (PPO).  The page number identifies which page the data is in, while the page offset locates a given byte within that page.  Pages are managed by the OS through page tables.  There are three different ways to use virtual and physical addresses to access data: physically addressed, virtually addressed, and physically tagged but virtually addressed&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Translation Lookaside Buffer===&lt;br /&gt;
The TLB is a small cache that holds the most recently used virtual-to-physical address translations.  When a virtual address needs to be accessed, the processor looks the address up in the TLB to find its corresponding physical page address.  If the virtual address is found, there is a TLB hit and the physical address is output.  If there is a TLB miss (the virtual address is not found), the OS is called to handle the miss: it maps the virtual address to the correct physical address and loads the translation into the TLB.  The TLB reduces the time needed to map a virtual address to a physical address because the OS does not have to be invoked on every access&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Addressed===&lt;br /&gt;
Physically addressed refers to using the whole physical address to access the cache.  To access memory, the computer must first translate the virtual address into a physical address using the TLB.  Once the computer has the physical address, it can index into the cache and pull out the block it needs.  The computer then checks the tag to see whether the access is a hit or a miss.  This is not ideal because the time it takes to access the TLB is added onto the time it takes to access the cache&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Virtually Addressed===&lt;br /&gt;
Virtually addressed refers to using the whole virtual address to access the cache.  The advantage is that the cache and the TLB can be accessed at the same time.  The problem is that each process has its own virtual page table, so if the processor switches from one process to another, the cache still holds the previous process's data.  The new process will not be able to tell the difference between its own memory pages and those of the previous process.  To solve this problem, the cache and TLB have to be flushed, or re-initialized to empty, before a new process can be worked on.  If processes switch often, the latency required for these flushes will slow down the programs&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Tagged but Virtually Addressed===&lt;br /&gt;
[[Image:VirtualtoPhysicalCacheAddress.jpg|thumbnail|600px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
This is the case where the VPO and the PPO are identical, so the only thing that needs to be translated is the virtual page number.  With this model, the TLB translates only the VPN, not the whole virtual address.  The VPO and VPN are split up: the VPO is used to access the cache while the VPN is sent to the TLB to be translated into the PPN.  This method allows the TLB and the cache to be accessed at the same time, without the problem of purely virtual addressing, since the PPO is known from the beginning and the cache index and byte offset bits are contained within the PPO.  However, there is one problem with this approach.  The bits used to specify the set and byte offset must fit within the page offset.  Because of this, the page size has to be at least the number of sets multiplied by the block size, which puts a limit on the size of the cache.  The maximum cache size is given by the following formula:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MaxCacheSize = PageSize X Associativity&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, suppose a cache has the following parameters: the page size is 4KB and the cache is 2-way set associative with a block size of 32 bytes.  A page size of 4KB means 12 bits are needed for the page offset.  Also, 5 bits are needed for the block offset.  This leaves 7 bits to specify the cache set, which gives 2^7 or 128 different sets.  With the associativity being 2, the maximum cache size is 128 X 2 X 32 bytes = 8KB.  This is not a serious limitation, because first-level caches cannot be very large anyway due to performance constraints&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
From the above equation, and knowing that cache size = (number of sets) X (associativity) X (block size), we can derive a formula relating associativity to cache size and page size:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Associativity ≥ (cache size) / (page size)&lt;br /&gt;
&lt;br /&gt;
With a fixed page size, the only way to increase cache size is to increase associativity.  For example, the VAX processor has a page size of 512 bytes.  If the cache is 16KB, then the associativity would have to be at least 32&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;2body&amp;quot;&amp;gt;[[#2foot|[2]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Cache Coherency Problem===&lt;br /&gt;
When a computer has multiple cores or processors, each with its own caches and TLBs, TLB coherence becomes a problem.  Different TLBs on different processors may hold incorrect data: when one processor changes an entry, the other TLBs may still hold stale copies of the old translation.  The problem cannot be solved simply by having the processor that makes the change invalidate the corresponding entries in all the other TLBs; on most systems, a processor does not have the authority to invalidate entries in another processor's TLB.  One way to deal effectively with the TLB coherency problem is the shootdown algorithm.  When a processor realizes that a change it is making might cause inconsistencies among the TLBs, it invokes the shootdown algorithm.  The shootdown algorithm forcibly interrupts certain processors so that they perform TLB consistency actions, for example invalidating an entry or flushing the buffer&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;5body&amp;quot;&amp;gt;[[#5foot|[5]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.  The interruption is thought of as &amp;quot;shooting&amp;quot; entries out of the TLB, and the entire process is called a shootdown.  Inconsistent entries are guaranteed never to be accessed by any TLB again.  Intel processors provide an instruction called INVLPG, which invalidates a single page table entry in the TLB&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;6body&amp;quot;&amp;gt;[[#6foot|[6]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.  Because INVLPG affects only the local TLB, a shootdown algorithm would use it on each processor to invalidate the entry in multiple TLBs.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;span id=&amp;quot;1foot&amp;quot;&amp;gt;[[#1body|1.]]&amp;lt;/span&amp;gt; http://www.ibm.com/developerworks/linux/library/l-kernel-memory-access/index.html?ca=dgr-lnxw100LXUserSpacedth-LX  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;2foot&amp;quot;&amp;gt;[[#2body|2.]]&amp;lt;/span&amp;gt; http://people.engr.ncsu.edu/efg/521/f02/common/lectures/notes/lec9.html  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;3foot&amp;quot;&amp;gt;[[#3body|3.]]&amp;lt;/span&amp;gt; Fundamentals of Parallel Computer Architecture by Prof. Yan Solihin  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;4foot&amp;quot;&amp;gt;[[#4body|4.]]&amp;lt;/span&amp;gt; Computer Design &amp;amp; Technology lecture slides by Prof. Eric Rotenberg  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;5foot&amp;quot;&amp;gt;[[#5body|5.]]&amp;lt;/span&amp;gt; &amp;quot;Translation Lookaside Buffer Consistency: A Software Approach&amp;quot;. David L. Black, Richard F. Rashid, David B. Golub, Charles R. Hill, and Robert V. Baron. Carnegie Mellon University  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;6foot&amp;quot;&amp;gt;[[#6body|6.]]&amp;lt;/span&amp;gt; http://faydoc.tripod.com/cpu/invlpg.htm  &amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44292</id>
		<title>CSC/ECE 506 Spring 2011/ch6b df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44292"/>
		<updated>2011-03-01T04:38:57Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: /* Cache Coherency Problem */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Translation Lookaside Buffer and Cache Addressing=&lt;br /&gt;
[[Image:Virtual_Memory_address_space.gif|thumbnail|400px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;1body&amp;quot;&amp;gt;[[#1foot|[1]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
Programs use virtual addresses to access memory and caches.  If programs did not use virtual addresses, two programs running at the same time would each need their own section of the cache.  For example, if the computer has a 512MB L1 cache and two programs are running at the same time, each program could only use 256MB of the cache.  Virtual addresses allow each program to use the whole cache.  One problem with programs accessing memory through virtual addresses is that caches and memory must be accessed with physical addresses.  The operating system is usually responsible for translating virtual addresses into physical addresses.  Because the OS can be relatively slow at this, a cache is used to store the most recently used translations.  This cache is called the Translation Lookaside Buffer, or TLB; I will discuss the TLB in more detail later.  Virtual addresses are broken up into two parts: a virtual page number (VPN) and a virtual page offset (VPO).  Physical addresses are broken up into a physical page number (PPN) and a physical page offset (PPO).  The page number identifies which page the data is in, while the page offset locates a given byte within that page.  Pages are managed by the OS through page tables.  There are three different ways to use virtual and physical addresses to access data: physically addressed, virtually addressed, and physically tagged but virtually addressed&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Translation Lookaside Buffer===&lt;br /&gt;
The TLB is a small cache that holds the most recently used virtual-to-physical address translations.  When a virtual address needs to be accessed, the processor looks the address up in the TLB to find its corresponding physical page address.  If the virtual address is found, there is a TLB hit and the physical address is output.  If there is a TLB miss (the virtual address is not found), the OS is called to handle the miss: it maps the virtual address to the correct physical address and loads the translation into the TLB.  The TLB reduces the time needed to map a virtual address to a physical address because the OS does not have to be invoked on every access&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Addressed===&lt;br /&gt;
Physically addressed refers to using the whole physical address to access the cache.  To access memory, the computer must first translate the virtual address into a physical address using the TLB.  Once the computer has the physical address, it can index into the cache and pull out the block it needs.  The computer then checks the tag to see whether the access is a hit or a miss.  This is not ideal because the time it takes to access the TLB is added onto the time it takes to access the cache&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Virtually Addressed===&lt;br /&gt;
Virtually addressed refers to using the whole virtual address to access the cache.  The advantage is that the cache and the TLB can be accessed at the same time.  The problem is that each process has its own virtual page table, so if the processor switches from one process to another, the cache still holds the previous process's data.  The new process will not be able to tell the difference between its own memory pages and those of the previous process.  To solve this problem, the cache and TLB have to be flushed, or re-initialized to empty, before a new process can be worked on.  If processes switch often, the latency required for these flushes will slow down the programs&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Tagged but Virtually Addressed===&lt;br /&gt;
[[Image:VirtualtoPhysicalCacheAddress.jpg|thumbnail|600px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
This is the case where the VPO and the PPO are identical, so the only thing that needs to be translated is the virtual page number.  With this model, the TLB translates only the VPN, not the whole virtual address.  The VPO and VPN are split up: the VPO is used to access the cache while the VPN is sent to the TLB to be translated into the PPN.  This method allows the TLB and the cache to be accessed at the same time, without the problem of purely virtual addressing, since the PPO is known from the beginning and the cache index and byte offset bits are contained within the PPO.  However, there is one problem with this approach.  The bits used to specify the set and byte offset must fit within the page offset.  Because of this, the page size has to be at least the number of sets multiplied by the block size, which puts a limit on the size of the cache.  The maximum cache size is given by the following formula:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MaxCacheSize = PageSize X Associativity&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, suppose a cache has the following parameters: the page size is 4KB and the cache is 2-way set associative with a block size of 32 bytes.  A page size of 4KB means 12 bits are needed for the page offset.  Also, 5 bits are needed for the block offset.  This leaves 7 bits to specify the cache set, which gives 2^7 or 128 different sets.  With the associativity being 2, the maximum cache size is 128 X 2 X 32 bytes = 8KB.  This is not a serious limitation, because first-level caches cannot be very large anyway due to performance constraints&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
From the above equation, and knowing that cache size = (number of sets) X (associativity) X (block size), we can derive a formula relating associativity to cache size and page size:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Associativity ≥ (cache size) / (page size)&lt;br /&gt;
&lt;br /&gt;
With a fixed page size, the only way to increase cache size is to increase associativity.  For example, the VAX processor has a page size of 512 bytes.  If the cache is 16KB, then the associativity would have to be at least 32&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;2body&amp;quot;&amp;gt;[[#2foot|[2]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Cache Coherency Problem===&lt;br /&gt;
When a computer has multiple cores or processors, each with its own caches and TLBs, both cache coherence and TLB coherence become problems.  Cache coherence will be discussed at a later date; here I would like to talk about the TLB coherence problem.  One problem with TLB coherence is that different TLBs on different processors may hold incorrect data.  The problem cannot be solved simply by having the processor that makes a change invalidate the corresponding entries in all the other TLBs; on most systems, a processor does not have the authority to invalidate entries in another processor's TLB.  One way to deal effectively with the TLB coherency problem is the shootdown algorithm.  When a processor realizes that a change it is making might cause inconsistencies among the TLBs, it invokes the shootdown algorithm.  The shootdown algorithm forcibly interrupts certain processors so that they perform TLB consistency actions, for example invalidating an entry or flushing the buffer.  The interruption is thought of as &amp;quot;shooting&amp;quot; entries out of the TLB, and the entire process is called a shootdown.  Inconsistent entries are guaranteed never to be accessed by any TLB again.  Intel processors provide an instruction called INVLPG, which invalidates a single page table entry in the TLB&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;6body&amp;quot;&amp;gt;[[#6foot|[6]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.  Because INVLPG affects only the local TLB, a shootdown algorithm would use it on each processor to invalidate the entry in multiple TLBs.&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;span id=&amp;quot;1foot&amp;quot;&amp;gt;[[#1body|1.]]&amp;lt;/span&amp;gt; http://www.ibm.com/developerworks/linux/library/l-kernel-memory-access/index.html?ca=dgr-lnxw100LXUserSpacedth-LX  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;2foot&amp;quot;&amp;gt;[[#2body|2.]]&amp;lt;/span&amp;gt; http://people.engr.ncsu.edu/efg/521/f02/common/lectures/notes/lec9.html  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;3foot&amp;quot;&amp;gt;[[#3body|3.]]&amp;lt;/span&amp;gt; Fundamentals of Parallel Computer Architecture by Prof. Yan Solihin  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;4foot&amp;quot;&amp;gt;[[#4body|4.]]&amp;lt;/span&amp;gt; Computer Design &amp;amp; Technology lecture slides by Prof. Eric Rotenberg  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;5foot&amp;quot;&amp;gt;[[#5body|5.]]&amp;lt;/span&amp;gt; &amp;quot;Translation Lookaside Buffer Consistency: A Software Approach&amp;quot;. David L. Black, Richard F. Rashid, David B. Golub, Charles R. Hill, and Robert V. Baron. Carnegie Mellon University  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;6foot&amp;quot;&amp;gt;[[#6body|6.]]&amp;lt;/span&amp;gt; http://faydoc.tripod.com/cpu/invlpg.htm  &amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44291</id>
		<title>CSC/ECE 506 Spring 2011/ch6b df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44291"/>
		<updated>2011-03-01T04:38:17Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Translation Lookaside Buffer and Cache Addressing=&lt;br /&gt;
[[Image:Virtual_Memory_address_space.gif|thumbnail|400px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;1body&amp;quot;&amp;gt;[[#1foot|[1]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
Programs use virtual addresses to access memory and caches.  If programs did not use virtual addresses, two programs running at the same time would each need their own section of the cache.  For example, if the computer has a 512MB L1 cache and two programs are running at the same time, each program could only use 256MB of the cache.  Virtual addresses allow each program to use the whole cache.  One problem with programs accessing memory through virtual addresses is that caches and memory must be accessed with physical addresses.  The operating system is usually responsible for translating virtual addresses into physical addresses.  Because the OS can be relatively slow at this, a cache is used to store the most recently used translations.  This cache is called the Translation Lookaside Buffer, or TLB; I will discuss the TLB in more detail later.  Virtual addresses are broken up into two parts: a virtual page number (VPN) and a virtual page offset (VPO).  Physical addresses are broken up into a physical page number (PPN) and a physical page offset (PPO).  The page number identifies which page the data is in, while the page offset locates a given byte within that page.  Pages are managed by the OS through page tables.  There are three different ways to use virtual and physical addresses to access data: physically addressed, virtually addressed, and physically tagged but virtually addressed&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Translation Lookaside Buffer===&lt;br /&gt;
The TLB is a small cache that holds the most recently used virtual-to-physical address translations.  When a virtual address needs to be accessed, the processor looks the address up in the TLB to find its corresponding physical page address.  If the virtual address is found, there is a TLB hit and the physical address is output.  If there is a TLB miss (the virtual address is not found), the OS is called to handle the miss: it maps the virtual address to the correct physical address and loads the translation into the TLB.  The TLB reduces the time needed to map a virtual address to a physical address because the OS does not have to be invoked on every access&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Addressed===&lt;br /&gt;
Physically addressed refers to using the whole physical address to access the cache.  To access memory, the computer must first translate the virtual address into a physical address using the TLB.  Once the computer has the physical address, it can index into the cache and pull out the block it needs.  The computer then checks the tag to see whether the access is a hit or a miss.  This is not ideal because the time it takes to access the TLB is added onto the time it takes to access the cache&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Virtually Addressed===&lt;br /&gt;
Virtually addressed refers to using the whole virtual address to access the cache.  The advantage is that the cache and the TLB can be accessed at the same time.  The problem is that each process has its own virtual page table, so if the processor switches from one process to another, the cache still holds the previous process's data.  The new process cannot tell the difference between its own memory pages and those of the previous process.  To solve this problem, the cache and TLB have to be flushed (re-initialized to empty) before a new process can run.  If processes switch often, the latency required for these flushes slows down the programs&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Tagged but Virtually Addressed===&lt;br /&gt;
[[Image:VirtualtoPhysicalCacheAddress.jpg|thumbnail|600px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
In this scheme the VPO and the PPO are identical, so the only thing that needs to be translated is the page number.  The TLB translates only the VPN, not the whole virtual address.  The VPO and VPN are split up: the VPO is used to index the cache while the VPN is sent to the TLB to be translated into the PPN.  This method allows the TLB and the cache to be accessed at the same time, without the process-switching problem of purely virtual addressing, since the PPO is known from the beginning and the cache index and byte offset bits are contained within it.  However, there is one limitation with this approach.  Because the bits used to specify the set and byte offset must fit within the page offset, the size of a page has to be at least the number of sets multiplied by the block size.  This places a limit on the size of the cache.  The maximum cache size is given by the following formula:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MaxCacheSize = PageSize X Associativity&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, suppose the page size is 4KB and the cache is 2-way set associative with a block size of 32 bytes.  A page size of 4KB means 12 bits are needed for the page offset, and a 32-byte block means 5 bits are needed for the block offset.  This leaves 7 bits to specify the cache set, which gives 2^7 = 128 sets.  With an associativity of 2, the maximum cache size is 128 sets X 2 ways X 32 bytes = 8KB.  This issue is not a major one, because caches cannot be made very large in the first place due to performance constraints&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
From the above equation, and knowing that cache size = (number of sets) X (associativity) X (block size), we can derive a formula relating associativity to cache size and page size:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Associativity ≥ (cache size) / (page size)&lt;br /&gt;
&lt;br /&gt;
With a fixed page size, the only way to increase cache size is to increase associativity.  For example, the VAX processor has a page size of 512 bytes, so if the cache is 16KB the associativity would have to be at least 32&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;2body&amp;quot;&amp;gt;[[#2foot|[2]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
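The two formulas can be checked directly.  The function names below are illustrative; the numbers are the worked example from the text (4KB page, 2-way cache).

```python
# MaxCacheSize = PageSize X Associativity
def max_cache_size(page_size, associativity):
    return page_size * associativity

# Associativity >= (cache size) / (page size), rounded up to a whole number of ways
def min_associativity(cache_size, page_size):
    return (cache_size + page_size - 1) // page_size  # ceiling division

print(max_cache_size(4096, 2))        # 8192 bytes = 8KB, matching the worked example
print(min_associativity(8192, 4096))  # 2
```

Note that `min_associativity` is just the inequality rearranged: for a cache larger than `max_cache_size`, the extra capacity must come entirely from additional ways.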
&lt;br /&gt;
===Cache Coherency Problem===&lt;br /&gt;
When a computer has multiple cores or processors, each with its own cache and TLB, both cache coherence and TLB coherence become problems.  Cache coherence is discussed elsewhere; here we focus on the TLB coherence problem: the TLBs of different processors may hold stale translations.  The problem cannot simply be solved by having one processor invalidate the other TLBs' entries whenever it changes a mapping, because on most systems a processor does not have the authority to invalidate entries in another processor's TLB.  One way to deal effectively with the TLB coherency problem is the shootdown algorithm.  When a processor realizes that the changes it is making might cause inconsistencies in other TLBs, it invokes the shootdown algorithm, which forcefully interrupts the affected processors so that they perform TLB consistency actions, for example invalidating an entry or flushing the buffer.  The interruption is considered &amp;quot;shooting&amp;quot; entries out of the TLB, and the entire process is called a shootdown.  All inconsistent entries are guaranteed never to be accessed by any TLB again.  Intel processors provide an instruction called INVLPG, which invalidates a single page table entry in a TLB&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;6body&amp;quot;&amp;gt;[[#6foot|[6]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.  Because this instruction affects only one TLB, it would be used within a shootdown algorithm to invalidate entries across multiple TLBs.&lt;br /&gt;
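The shootdown idea can be sketched as a toy simulation.  This is a deliberately simplified model: the "interrupt" is just a function call, each TLB is a dictionary, and all names and mappings are made up; a real shootdown involves inter-processor interrupts and acknowledgements.

```python
# Toy TLB shootdown: each processor has its own TLB, and only the handler
# running "on" a processor may invalidate that processor's entries
# (cf. INVLPG, which invalidates a single entry in the local TLB).
tlbs = [{0x402: 0x1F3}, {0x402: 0x1F3}, {}]   # one TLB per processor (hypothetical)

def shootdown_interrupt(tlb, vpn):
    # Interrupt handler run on the target processor: invalidate the stale entry.
    tlb.pop(vpn, None)

def shootdown(initiator, vpn):
    # The initiator interrupts every other processor so each removes
    # the now-stale translation from its own TLB.
    for i, tlb in enumerate(tlbs):
        if i != initiator:
            shootdown_interrupt(tlb, vpn)

shootdown(0, 0x402)
print(tlbs)   # only processor 0 still holds the old entry
```

After the shootdown, the stale translation can no longer be returned by any other processor's TLB, which is the guarantee the paragraph describes.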
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;span id=&amp;quot;1foot&amp;quot;&amp;gt;[[#1body|1.]]&amp;lt;/span&amp;gt;http://www.ibm.com/developerworks/linux/library/l-kernel-memory-access/index.html?ca=dgr-lnxw100LXUserSpacedth-LX  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;2foot&amp;quot;&amp;gt;[[#2body|2.]]&amp;lt;/span&amp;gt; http://people.engr.ncsu.edu/efg/521/f02/common/lectures/notes/lec9.html  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;3foot&amp;quot;&amp;gt;[[#3body|3.]]&amp;lt;/span&amp;gt; Fundamentals of Parallel Computer Architecture by Prof. Yan Solihin  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;4foot&amp;quot;&amp;gt;[[#4body|4.]]&amp;lt;/span&amp;gt; Computer Design &amp;amp; Technology - Lecture slides by Prof. Eric Rotenberg  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;5foot&amp;quot;&amp;gt;[[#5body|5.]]&amp;lt;/span&amp;gt; &amp;quot;Translation Lookaside Buffer Consistency: A Software Approach&amp;quot;. David L. Black, Richard F. Rashid, David B. Golub, Charles R. Hill, and Robert V. Baron. Carnegie Mellon University  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;6foot&amp;quot;&amp;gt;[[#6body|6.]]&amp;lt;/span&amp;gt; http://faydoc.tripod.com/cpu/invlpg.htm  &amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44283</id>
		<title>CSC/ECE 506 Spring 2011/ch6b df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44283"/>
		<updated>2011-03-01T04:13:23Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Translation Lookaside Buffer and Cache Addressing=&lt;br /&gt;
[[Image:Virtual_Memory_address_space.gif|thumbnail|400px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;1body&amp;quot;&amp;gt;[[#1foot|[1]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
Programs use virtual addresses to access memory and caches.  If programs did not use virtual addresses, two programs running at the same time would each need their own section of the cache.  For example, if the computer had a 512MB cache and two programs were running at the same time, each program could only use 256MB of it.  Virtual addresses allow each program to use the whole cache.  One problem with programs accessing memory through virtual addresses is that caches and memory must be accessed with physical addresses.  The Operating System is usually responsible for translating virtual addresses into physical addresses.  Because the OS can be relatively slow at doing this, a cache is used to store the most recently used translations.  This cache is called the Translation Lookaside Buffer, or TLB, and is discussed in more detail below.  Virtual addresses are broken up into two parts: a virtual page number (VPN) and a virtual page offset (VPO).  Physical addresses are broken up into a physical page number (PPN) and a physical page offset (PPO).  The page number determines which page the data is in, while the page offset gives the location of the data within that page.  These pages are managed by the OS through page tables.  There are three different ways to use virtual and physical addresses to access data: physically addressed, virtually addressed, and physically tagged but virtually addressed&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Translation Lookaside Buffer===&lt;br /&gt;
The TLB is a small cache that holds the most recently used virtual-to-physical address translations.  When a virtual address needs to be accessed, the processor looks it up in the TLB to find the corresponding physical page number.  If the virtual address is found, there is a TLB hit and the physical address is returned.  If there is a TLB miss (the virtual address is not found), the OS is called to handle the miss: it maps the virtual address to the correct physical address and loads the translation into the TLB.  The TLB reduces the time needed to map a virtual address to a physical address because the OS does not have to be invoked on every access&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Addressed===&lt;br /&gt;
Physically addressed refers to using the whole physical address to access the cache.  To access memory, the computer must first translate the virtual address into a physical address using the TLB.  Once the computer has the physical address, it can index into the cache, pull out the block it needs, and check the tag to see whether the access is a hit or a miss.  This is not ideal because the time it takes to access the TLB is added to the time it takes to access the cache&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Virtually Addressed===&lt;br /&gt;
Virtually addressed refers to using the whole virtual address to access the cache.  The advantage is that the cache and the TLB can be accessed at the same time.  The problem is that each process has its own virtual page table, so if the processor switches from one process to another, the cache still holds the previous process's data.  The new process cannot tell the difference between its own memory pages and those of the previous process.  To solve this problem, the cache and TLB have to be flushed (re-initialized to empty) before a new process can run.  If processes switch often, the latency required for these flushes slows down the programs&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Tagged but Virtually Addressed===&lt;br /&gt;
[[Image:VirtualtoPhysicalCacheAddress.jpg|thumbnail|600px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
In this scheme the VPO and the PPO are identical, so the only thing that needs to be translated is the page number.  The TLB translates only the VPN, not the whole virtual address.  The VPO and VPN are split up: the VPO is used to index the cache while the VPN is sent to the TLB to be translated into the PPN.  This method allows the TLB and the cache to be accessed at the same time, without the process-switching problem of purely virtual addressing, since the PPO is known from the beginning and the cache index and byte offset bits are contained within it.  However, there is one limitation with this approach.  Because the bits used to specify the set and byte offset must fit within the page offset, the size of a page has to be at least the number of sets multiplied by the block size.  This places a limit on the size of the cache.  The maximum cache size is given by the following formula:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MaxCacheSize = PageSize X Associativity&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, suppose the page size is 4KB and the cache is 2-way set associative with a block size of 32 bytes.  A page size of 4KB means 12 bits are needed for the page offset, and a 32-byte block means 5 bits are needed for the block offset.  This leaves 7 bits to specify the cache set, which gives 2^7 = 128 sets.  With an associativity of 2, the maximum cache size is 128 sets X 2 ways X 32 bytes = 8KB.  This issue is not a major one, because caches cannot be made very large in the first place due to performance constraints&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
From the above equation, and knowing that cache size = (number of sets) X (associativity) X (block size), we can derive a formula relating associativity to cache size and page size:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Associativity ≥ (cache size) / (page size)&lt;br /&gt;
&lt;br /&gt;
With a fixed page size, the only way to increase cache size is to increase associativity.  For example, the VAX processor has a page size of 512 bytes, so if the cache is 16KB the associativity would have to be at least 32&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;2body&amp;quot;&amp;gt;[[#2foot|[2]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Cache Coherency Problem===&lt;br /&gt;
When a computer has multiple cores or processors, each with its own cache and TLB, both cache coherence and TLB coherence become problems.  Cache coherence is discussed elsewhere; here we focus on the TLB coherence problem: the TLBs of different processors may hold stale translations.  The problem cannot simply be solved by having one processor invalidate the other TLBs' entries whenever it changes a mapping, because on most systems a processor does not have the authority to invalidate entries in another processor's TLB.  One way to deal effectively with the TLB coherency problem is the shootdown algorithm.  When a processor realizes that the changes it is making might cause inconsistencies in other TLBs, it invokes the shootdown algorithm, which forcefully interrupts the affected processors so that they perform TLB consistency actions, for example invalidating an entry or flushing the buffer.  The interruption is considered &amp;quot;shooting&amp;quot; entries out of the TLB, and the entire process is called a shootdown.  All inconsistent entries are guaranteed never to be accessed by any TLB again.&lt;br /&gt;
Intel processors provide an instruction called INVLPG, which invalidates a single page table entry in a TLB&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;6body&amp;quot;&amp;gt;[[#6foot|[6]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;span id=&amp;quot;1foot&amp;quot;&amp;gt;[[#1body|1.]]&amp;lt;/span&amp;gt;http://www.ibm.com/developerworks/linux/library/l-kernel-memory-access/index.html?ca=dgr-lnxw100LXUserSpacedth-LX  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;2foot&amp;quot;&amp;gt;[[#2body|2.]]&amp;lt;/span&amp;gt; http://people.engr.ncsu.edu/efg/521/f02/common/lectures/notes/lec9.html  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;3foot&amp;quot;&amp;gt;[[#3body|3.]]&amp;lt;/span&amp;gt; Fundamentals of Parallel Computer Architecture by Prof. Yan Solihin  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;4foot&amp;quot;&amp;gt;[[#4body|4.]]&amp;lt;/span&amp;gt; Computer Design &amp;amp; Technology - Lecture slides by Prof. Eric Rotenberg  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;5foot&amp;quot;&amp;gt;[[#5body|5.]]&amp;lt;/span&amp;gt; &amp;quot;Translation Lookaside Buffer Consistency: A Software Approach&amp;quot;. David L. Black, Richard F. Rashid, David B. Golub, Charles R. Hill, and Robert V. Baron. Carnegie Mellon University  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;6foot&amp;quot;&amp;gt;[[#6body|6.]]&amp;lt;/span&amp;gt; http://faydoc.tripod.com/cpu/invlpg.htm  &amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44278</id>
		<title>CSC/ECE 506 Spring 2011/ch6b df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44278"/>
		<updated>2011-03-01T04:00:42Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Translation Lookaside Buffer and Cache Addressing=&lt;br /&gt;
[[Image:Virtual_Memory_address_space.gif|thumbnail|400px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;1body&amp;quot;&amp;gt;[[#1foot|[1]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
Programs use virtual addresses to access memory and caches.  If programs did not use virtual addresses, two programs running at the same time would each need their own section of the cache.  For example, if the computer had a 512MB cache and two programs were running at the same time, each program could only use 256MB of it.  Virtual addresses allow each program to use the whole cache.  One problem with programs accessing memory through virtual addresses is that caches and memory must be accessed with physical addresses.  The Operating System is usually responsible for translating virtual addresses into physical addresses.  Because the OS can be relatively slow at doing this, a cache is used to store the most recently used translations.  This cache is called the Translation Lookaside Buffer, or TLB, and is discussed in more detail below.  Virtual addresses are broken up into two parts: a virtual page number (VPN) and a virtual page offset (VPO).  Physical addresses are broken up into a physical page number (PPN) and a physical page offset (PPO).  The page number determines which page the data is in, while the page offset gives the location of the data within that page.  These pages are managed by the OS through page tables.  There are three different ways to use virtual and physical addresses to access data: physically addressed, virtually addressed, and physically tagged but virtually addressed&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Translation Lookaside Buffer===&lt;br /&gt;
The TLB is a small cache that holds the most recently used virtual-to-physical address translations.  When a virtual address needs to be accessed, the processor looks it up in the TLB to find the corresponding physical page number.  If the virtual address is found, there is a TLB hit and the physical address is returned.  If there is a TLB miss (the virtual address is not found), the OS is called to handle the miss: it maps the virtual address to the correct physical address and loads the translation into the TLB.  The TLB reduces the time needed to map a virtual address to a physical address because the OS does not have to be invoked on every access&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Addressed===&lt;br /&gt;
Physically addressed refers to using the whole physical address to access the cache.  To access memory, the computer must first translate the virtual address into a physical address using the TLB.  Once the computer has the physical address, it can index into the cache, pull out the block it needs, and check the tag to see whether the access is a hit or a miss.  This is not ideal because the time it takes to access the TLB is added to the time it takes to access the cache&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Virtually Addressed===&lt;br /&gt;
Virtually addressed refers to using the whole virtual address to access the cache.  The advantage is that the cache and the TLB can be accessed at the same time.  The problem is that each process has its own virtual page table, so if the processor switches from one process to another, the cache still holds the previous process's data.  The new process cannot tell the difference between its own memory pages and those of the previous process.  To solve this problem, the cache and TLB have to be flushed (re-initialized to empty) before a new process can run.  If processes switch often, the latency required for these flushes slows down the programs&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Tagged but Virtually Addressed===&lt;br /&gt;
[[Image:VirtualtoPhysicalCacheAddress.jpg|thumbnail|600px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
In this scheme the VPO and the PPO are identical, so the only thing that needs to be translated is the page number.  The TLB translates only the VPN, not the whole virtual address.  The VPO and VPN are split up: the VPO is used to index the cache while the VPN is sent to the TLB to be translated into the PPN.  This method allows the TLB and the cache to be accessed at the same time, without the process-switching problem of purely virtual addressing, since the PPO is known from the beginning and the cache index and byte offset bits are contained within it.  However, there is one limitation with this approach.  Because the bits used to specify the set and byte offset must fit within the page offset, the size of a page has to be at least the number of sets multiplied by the block size.  This places a limit on the size of the cache.  The maximum cache size is given by the following formula:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MaxCacheSize = PageSize X Associativity&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, suppose the page size is 4KB and the cache is 2-way set associative with a block size of 32 bytes.  A page size of 4KB means 12 bits are needed for the page offset, and a 32-byte block means 5 bits are needed for the block offset.  This leaves 7 bits to specify the cache set, which gives 2^7 = 128 sets.  With an associativity of 2, the maximum cache size is 128 sets X 2 ways X 32 bytes = 8KB.  This issue is not a major one, because caches cannot be made very large in the first place due to performance constraints&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
From the above equation, and knowing that cache size = (number of sets) X (associativity) X (block size), we can derive a formula relating associativity to cache size and page size:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Associativity ≥ (cache size) / (page size)&lt;br /&gt;
&lt;br /&gt;
With a fixed page size, the only way to increase cache size is to increase associativity.  For example, the VAX processor has a page size of 512 bytes, so if the cache is 16KB the associativity would have to be at least 32&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;2body&amp;quot;&amp;gt;[[#2foot|[2]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Cache Coherency Problem===&lt;br /&gt;
When a computer has multiple cores or processors, each with its own cache and TLB, both cache coherence and TLB coherence become problems.  Cache coherence is discussed elsewhere; here we focus on the TLB coherence problem: the TLBs of different processors may hold stale translations.  The problem cannot simply be solved by having one processor invalidate the other TLBs' entries whenever it changes a mapping, because on most systems a processor does not have the authority to invalidate entries in another processor's TLB.  One way to deal effectively with the TLB coherency problem is the shootdown algorithm.  When a processor realizes that the changes it is making might cause inconsistencies in other TLBs, it invokes the shootdown algorithm, which forcefully interrupts the affected processors so that they perform TLB consistency actions, for example invalidating an entry or flushing the buffer.  The interruption is considered &amp;quot;shooting&amp;quot; entries out of the TLB, and the entire process is called a shootdown.  All inconsistent entries are guaranteed never to be accessed by any TLB again.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;span id=&amp;quot;1foot&amp;quot;&amp;gt;[[#1body|1.]]&amp;lt;/span&amp;gt;http://www.ibm.com/developerworks/linux/library/l-kernel-memory-access/index.html?ca=dgr-lnxw100LXUserSpacedth-LX  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;2foot&amp;quot;&amp;gt;[[#2body|2.]]&amp;lt;/span&amp;gt; http://people.engr.ncsu.edu/efg/521/f02/common/lectures/notes/lec9.html  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;3foot&amp;quot;&amp;gt;[[#3body|3.]]&amp;lt;/span&amp;gt; Fundamentals of Parallel Computer Architecture by Prof. Yan Solihin  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;4foot&amp;quot;&amp;gt;[[#4body|4.]]&amp;lt;/span&amp;gt; Computer Design &amp;amp; Technology - Lecture slides by Prof. Eric Rotenberg  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;5foot&amp;quot;&amp;gt;[[#5body|5.]]&amp;lt;/span&amp;gt; &amp;quot;Translation Lookaside Buffer Consistency: A Software Approach&amp;quot;. David L. Black, Richard F. Rashid, David B. Golub, Charles R. Hill, and Robert V. Baron. Carnegie Mellon University  &amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44277</id>
		<title>CSC/ECE 506 Spring 2011/ch6b df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44277"/>
		<updated>2011-03-01T03:57:59Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Translation Lookaside Buffer and Cache Addressing=&lt;br /&gt;
[[Image:Virtual_Memory_address_space.gif|thumbnail|400px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;1body&amp;quot;&amp;gt;[[#1foot|[1]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
Programs use virtual addresses to access memory and caches.  If programs did not use virtual addresses, two programs running at the same time would each need their own section of the cache.  For example, if the computer had a 512MB cache and two programs were running at the same time, each program could only use 256MB of it.  Virtual addresses allow each program to use the whole cache.  One problem with programs accessing memory through virtual addresses is that caches and memory must be accessed with physical addresses.  The Operating System is usually responsible for translating virtual addresses into physical addresses.  Because the OS can be relatively slow at doing this, a cache is used to store the most recently used translations.  This cache is called the Translation Lookaside Buffer, or TLB, and is discussed in more detail below.  Virtual addresses are broken up into two parts: a virtual page number (VPN) and a virtual page offset (VPO).  Physical addresses are broken up into a physical page number (PPN) and a physical page offset (PPO).  The page number determines which page the data is in, while the page offset gives the location of the data within that page.  These pages are managed by the OS through page tables.  There are three different ways to use virtual and physical addresses to access data: physically addressed, virtually addressed, and physically tagged but virtually addressed&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Translation Lookaside Buffer===&lt;br /&gt;
The TLB is a small cache that holds the most recently used virtual-to-physical address translations.  When a virtual address needs to be accessed, the processor looks it up in the TLB to find the corresponding physical page number.  If the virtual address is found, there is a TLB hit and the physical address is returned.  If there is a TLB miss (the virtual address is not found), the OS is called to handle the miss: it maps the virtual address to the correct physical address and loads the translation into the TLB.  The TLB reduces the time needed to map a virtual address to a physical address because the OS does not have to be invoked on every access&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Addressed===&lt;br /&gt;
Physically addressed refers to using the whole physical address to access the cache.  To access memory, the computer must first translate the virtual address into a physical address using the TLB.  Once the computer has the physical address, it can index into the cache, pull out the block it needs, and check the tag to see whether the access is a hit or a miss.  This is not ideal because the time it takes to access the TLB is added to the time it takes to access the cache&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Virtually Addressed===&lt;br /&gt;
Virtually addressed refers to using the whole virtual address to access the cache.  The advantage is that the cache and the TLB can be accessed at the same time.  The problem is that each process has its own virtual page table, so if the processor switches from one process to another, the cache still holds the previous process's data.  The new process cannot tell the difference between its own memory pages and those of the previous process.  To solve this problem, the cache and TLB have to be flushed (re-initialized to empty) before a new process can run.  If processes switch often, the latency required for these flushes slows down the programs&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Tagged but Virtually Addressed===&lt;br /&gt;
[[Image:VirtualtoPhysicalCacheAddress.jpg|thumbnail|600px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
In this scheme the VPO and the PPO are identical, so the only thing that needs to be translated is the page number.  The TLB translates only the VPN, not the whole virtual address.  The VPO and VPN are split up: the VPO is used to index the cache while the VPN is sent to the TLB to be translated into the PPN.  This method allows the TLB and the cache to be accessed at the same time, without the process-switching problem of purely virtual addressing, since the PPO is known from the beginning and the cache index and byte offset bits are contained within it.  However, there is one limitation with this approach.  Because the bits used to specify the set and byte offset must fit within the page offset, the size of a page has to be at least the number of sets multiplied by the block size.  This places a limit on the size of the cache.  The maximum cache size is given by the following formula:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MaxCacheSize = PageSize X Associativity&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, suppose the page size is 4KB and the cache is 2-way set associative with a block size of 32 bytes.  A page size of 4KB means 12 bits are needed for the page offset, and 5 bits are needed for the block offset.  This leaves 7 bits to specify the cache set, which gives 2^7 = 128 sets.  With an associativity of 2, the maximum cache size is 2 x 4KB = 8KB.  In practice this limit is rarely a serious one, because caches cannot be made too large in the first place due to performance constraints&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
From the above equation, and knowing that CacheSize = (number of sets) X (associativity) X (block size), we can derive a formula relating associativity to cache size and page size:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Associativity ≥ (cache size) / (page size)&lt;br /&gt;
&lt;br /&gt;
With a fixed page size, the only way to increase the cache size is to increase the associativity.  For example, the VAX processor has a page size of 512 bytes, so a 16KB cache would need an associativity of at least 32.&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;2body&amp;quot;&amp;gt;[[#2foot|[2]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;&lt;br /&gt;
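The size limit and the associativity bound above can be checked with a short sketch (illustrative only, not from the original chapter; the function names are invented for this example):

```python
# Illustrative sketch (names invented for this example): the cache-size
# limit for a virtually indexed, physically tagged cache.

def max_cache_size(page_size, associativity):
    # The set-index and block-offset bits must fit inside the page
    # offset, so each way of the cache can cover at most one page.
    return page_size * associativity

def min_associativity(cache_size, page_size):
    # Rearranged form: Associativity must be at least CacheSize / PageSize.
    return max(1, cache_size // page_size)

# Worked example from the text: 4KB pages, 2-way set associative.
assert max_cache_size(4 * 1024, 2) == 8 * 1024        # 8KB maximum

# A 16KB cache with 512-byte pages needs at least 32 ways.
assert min_associativity(16 * 1024, 512) == 32
```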
&lt;br /&gt;
===Cache Coherency Problem===&lt;br /&gt;
When a computer has multiple cores or processors, each with its own cache and TLB, both cache coherence and TLB coherence become problems.  Cache coherence will be discussed later; here we focus on TLB coherence.  The problem is that TLBs on different processors may hold stale translations.  It cannot be solved simply by having one processor invalidate the corresponding entries in every other TLB when it changes a mapping, because on most systems a processor has no authority to modify another processor's TLB.  One effective way to deal with TLB coherence is the shootdown algorithm.  When a processor realizes that a change it is making might leave other TLBs inconsistent, it invokes the shootdown algorithm, which forcibly interrupts the affected processors so that each performs the necessary consistency action, for example invalidating an entry or flushing the buffer.  The interrupt is said to &amp;quot;shoot&amp;quot; entries out of the TLB, and the whole process is called a shootdown.  All inconsistent entries are guaranteed never to be used again.&lt;br /&gt;
&lt;br /&gt;
====The Shootdown algorithm====&lt;br /&gt;
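As a rough illustration of the idea (a hypothetical sketch, not the real OS-level algorithm, which also has to synchronize with in-flight translations and wait for acknowledgements):

```python
# Hypothetical sketch of the shootdown idea: the initiating processor
# cannot write into a remote TLB, so it interrupts each peer and the
# peer invalidates ("shoots down") its own stale entry.

class Processor:
    def __init__(self, name):
        self.name = name
        self.tlb = {}                     # VPN -> PPN entries

    def on_shootdown_interrupt(self, vpn):
        # Interrupt handler: drop the stale translation, if cached.
        self.tlb.pop(vpn, None)

def shootdown(initiator, peers, vpn, new_ppn):
    initiator.tlb.pop(vpn, None)          # invalidate locally first
    for p in peers:                       # "interrupt" every other processor
        p.on_shootdown_interrupt(vpn)
    # Only once every peer has handled the interrupt is it safe to
    # install the new mapping; the stale entry can never be used again.
    initiator.tlb[vpn] = new_ppn

p0, p1 = Processor("p0"), Processor("p1")
p0.tlb[5] = 40                            # both start with the old mapping
p1.tlb[5] = 40
shootdown(p0, [p1], vpn=5, new_ppn=41)
assert p1.tlb.get(5) is None              # stale entry shot down
assert p0.tlb[5] == 41                    # initiator has the new mapping
```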
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;span id=&amp;quot;1foot&amp;quot;&amp;gt;[[#1body|1.]]&amp;lt;/span&amp;gt; http://www.ibm.com/developerworks/linux/library/l-kernel-memory-access/index.html?ca=dgr-lnxw100LXUserSpacedth-LX  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;2foot&amp;quot;&amp;gt;[[#2body|2.]]&amp;lt;/span&amp;gt; http://people.engr.ncsu.edu/efg/521/f02/common/lectures/notes/lec9.html  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;3foot&amp;quot;&amp;gt;[[#3body|3.]]&amp;lt;/span&amp;gt; Fundamentals of Parallel Computer Architecture by Prof. Yan Solihin  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;4foot&amp;quot;&amp;gt;[[#4body|4.]]&amp;lt;/span&amp;gt; Computer Design &amp;amp; Technology - Lecture slides by Prof. Eric Rotenberg  &amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44261</id>
		<title>CSC/ECE 506 Spring 2011/ch6b df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44261"/>
		<updated>2011-03-01T03:35:30Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Translation Lookaside Buffer and Cache Addressing=&lt;br /&gt;
[[Image:Virtual_Memory_address_space.gif|thumbnail|400px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;1body&amp;quot;&amp;gt;[[#1foot|[1]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
Programs use virtual addresses to access memory and the caches.  Without virtual addresses, two programs running at the same time would each need their own section of the cache; for example, with a 512KB cache and two running programs, each program could only use 256KB of it.  Virtual addresses allow each program to use the whole cache.  One complication is that caches and main memory must ultimately be accessed with physical addresses, so virtual addresses have to be translated.  The operating system can perform this translation, but doing so is relatively slow, so a cache is used to store the most recently translated addresses.  This cache is called the Translation Lookaside Buffer, or TLB, and is discussed in more detail below.  A virtual address is broken up into two parts: a virtual page number (VPN) and a virtual page offset (VPO).  A physical address is likewise broken up into a physical page number (PPN) and a physical page offset (PPO).  The page number identifies which page the data is in, while the page offset locates the data within that page; pages are managed by the OS through page tables.  There are three different ways to use virtual and physical addresses to access data: physically addressed, virtually addressed, and physically tagged but virtually addressed&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Translation Lookaside Buffer===&lt;br /&gt;
The TLB is a small cache that holds the most recently used virtual-to-physical address translations.  When a virtual address needs to be translated, the processor looks it up in the TLB to find the corresponding physical page number.  If the translation is found, there is a TLB hit and the physical address is produced immediately.  If the translation is not found, there is a TLB miss, and the OS is called to handle it: the OS maps the virtual address to the correct physical address and loads the translation into the TLB.  The TLB reduces the time needed to map a virtual address to a physical address by avoiding a trip through the OS on every access&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
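As a toy model of the lookup just described (hypothetical names; a real TLB is a small hardware structure, not a Python dictionary):

```python
# Toy model of the TLB lookup (hypothetical names; a real TLB is a
# small hardware structure, not a Python dictionary).

PAGE_SIZE = 4096                          # 4KB pages, as in the article

page_table = {5: 42}                      # VPN -> PPN (normally kept by the OS)
tlb = {}                                  # small cache of recent translations

def translate(vaddr):
    vpn, vpo = divmod(vaddr, PAGE_SIZE)   # split into page number and offset
    if vpn not in tlb:                    # TLB miss: consult the page table
        tlb[vpn] = page_table[vpn]        # ...and cache the translation
    ppn = tlb[vpn]                        # TLB hit on later accesses
    return ppn * PAGE_SIZE + vpo          # PPN plus the untranslated offset

assert translate(5 * PAGE_SIZE + 7) == 42 * PAGE_SIZE + 7
```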
&lt;br /&gt;
===Physically Addressed===&lt;br /&gt;
Physically addressed refers to using the whole physical address to access the cache.  The computer must first translate the virtual address into a physical address using the TLB.  Once it has the physical address, it can index into the cache, pull out the block it needs, and check the tag to see whether the access is a hit or a miss.  This is not ideal because the TLB access time is added on top of the cache access time&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Virtually Addressed===&lt;br /&gt;
Virtually addressed refers to using the whole virtual address to access the cache.  The advantage is that the cache and the TLB can be accessed at the same time.  The problem is that each process has its own virtual address space and page table, so when the processor switches from one process to another, the cache still holds the previous process's data, and the new process cannot tell its own pages apart from those left behind.  To solve this, the cache and TLB have to be flushed (re-initialized to empty) before a new process can run.  If processes switch often, the latency of these flushes slows the program down&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Tagged but Virtually Addressed===&lt;br /&gt;
[[Image:VirtualtoPhysicalCacheAddress.jpg|thumbnail|600px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
In this scheme the VPO and the PPO are identical, so only the page number needs to be translated.  The TLB translates just the VPN, not the whole virtual address.  The virtual address is split up: the VPO is used to index the cache while, in parallel, the VPN is sent to the TLB to be translated into the PPN.  This allows the TLB and the cache to be accessed at the same time, avoiding the problem of a purely virtually addressed cache, because the PPO is known from the start and the cache index and byte-offset bits are contained within it.  There is one limitation, however.  The bits that specify the set and the byte offset must fit inside the page offset, so the page size has to be at least the number of sets multiplied by the block size.  This places a limit on the size of the cache, given by the following formula:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MaxCacheSize = PageSize X Associativity&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, suppose the page size is 4KB and the cache is 2-way set associative with a block size of 32 bytes.  A page size of 4KB means 12 bits are needed for the page offset, and 5 bits are needed for the block offset.  This leaves 7 bits to specify the cache set, which gives 2^7 = 128 sets.  With an associativity of 2, the maximum cache size is 2 x 4KB = 8KB.  In practice this limit is rarely a serious one, because caches cannot be made too large in the first place due to performance constraints&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
From the above equation, and knowing that CacheSize = (number of sets) X (associativity) X (block size), we can derive a formula relating associativity to cache size and page size:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Associativity ≥ (cache size) / (page size)&lt;br /&gt;
&lt;br /&gt;
With a fixed page size, the only way to increase the cache size is to increase the associativity.  For example, the VAX processor has a page size of 512 bytes, so a 16KB cache would need an associativity of at least 32.&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;2body&amp;quot;&amp;gt;[[#2foot|[2]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Cache Coherency Problem===&lt;br /&gt;
When a computer has multiple cores or processors, each with its own cache and TLB, both cache coherence and TLB coherence become problems.  Cache coherence will be discussed later; here we focus on the TLB coherence problem.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;span id=&amp;quot;1foot&amp;quot;&amp;gt;[[#1body|1.]]&amp;lt;/span&amp;gt; http://www.ibm.com/developerworks/linux/library/l-kernel-memory-access/index.html?ca=dgr-lnxw100LXUserSpacedth-LX  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;2foot&amp;quot;&amp;gt;[[#2body|2.]]&amp;lt;/span&amp;gt; http://people.engr.ncsu.edu/efg/521/f02/common/lectures/notes/lec9.html  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;3foot&amp;quot;&amp;gt;[[#3body|3.]]&amp;lt;/span&amp;gt; Fundamentals of Parallel Computer Architecture by Prof. Yan Solihin  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;4foot&amp;quot;&amp;gt;[[#4body|4.]]&amp;lt;/span&amp;gt; Computer Design &amp;amp; Technology - Lecture slides by Prof. Eric Rotenberg  &amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44260</id>
		<title>CSC/ECE 506 Spring 2011/ch6b df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44260"/>
		<updated>2011-03-01T03:35:13Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Translation Lookaside Buffer and Cache Addressing=&lt;br /&gt;
[[Image:Virtual_Memory_address_space.gif|thumbnail|400px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;1body&amp;quot;&amp;gt;[[#1foot|[1]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
Programs use virtual addresses to access memory and the caches.  Without virtual addresses, two programs running at the same time would each need their own section of the cache; for example, with a 512KB cache and two running programs, each program could only use 256KB of it.  Virtual addresses allow each program to use the whole cache.  One complication is that caches and main memory must ultimately be accessed with physical addresses, so virtual addresses have to be translated.  The operating system can perform this translation, but doing so is relatively slow, so a cache is used to store the most recently translated addresses.  This cache is called the Translation Lookaside Buffer, or TLB, and is discussed in more detail below.  A virtual address is broken up into two parts: a virtual page number (VPN) and a virtual page offset (VPO).  A physical address is likewise broken up into a physical page number (PPN) and a physical page offset (PPO).  The page number identifies which page the data is in, while the page offset locates the data within that page; pages are managed by the OS through page tables.  There are three different ways to use virtual and physical addresses to access data: physically addressed, virtually addressed, and physically tagged but virtually addressed&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Translation Lookaside Buffer===&lt;br /&gt;
The TLB is a small cache that holds the most recently used virtual-to-physical address translations.  When a virtual address needs to be translated, the processor looks it up in the TLB to find the corresponding physical page number.  If the translation is found, there is a TLB hit and the physical address is produced immediately.  If the translation is not found, there is a TLB miss, and the OS is called to handle it: the OS maps the virtual address to the correct physical address and loads the translation into the TLB.  The TLB reduces the time needed to map a virtual address to a physical address by avoiding a trip through the OS on every access&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Addressed===&lt;br /&gt;
Physically addressed refers to using the whole physical address to access the cache.  The computer must first translate the virtual address into a physical address using the TLB.  Once it has the physical address, it can index into the cache, pull out the block it needs, and check the tag to see whether the access is a hit or a miss.  This is not ideal because the TLB access time is added on top of the cache access time&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Virtually Addressed===&lt;br /&gt;
Virtually addressed refers to using the whole virtual address to access the cache.  The advantage is that the cache and the TLB can be accessed at the same time.  The problem is that each process has its own virtual address space and page table, so when the processor switches from one process to another, the cache still holds the previous process's data, and the new process cannot tell its own pages apart from those left behind.  To solve this, the cache and TLB have to be flushed (re-initialized to empty) before a new process can run.  If processes switch often, the latency of these flushes slows the program down&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Tagged but Virtually Addressed===&lt;br /&gt;
[[Image:VirtualtoPhysicalCacheAddress.jpg|thumbnail|600px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
In this scheme the VPO and the PPO are identical, so only the page number needs to be translated.  The TLB translates just the VPN, not the whole virtual address.  The virtual address is split up: the VPO is used to index the cache while, in parallel, the VPN is sent to the TLB to be translated into the PPN.  This allows the TLB and the cache to be accessed at the same time, avoiding the problem of a purely virtually addressed cache, because the PPO is known from the start and the cache index and byte-offset bits are contained within it.  There is one limitation, however.  The bits that specify the set and the byte offset must fit inside the page offset, so the page size has to be at least the number of sets multiplied by the block size.  This places a limit on the size of the cache, given by the following formula:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MaxCacheSize = PageSize X Associativity&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, suppose the page size is 4KB and the cache is 2-way set associative with a block size of 32 bytes.  A page size of 4KB means 12 bits are needed for the page offset, and 5 bits are needed for the block offset.  This leaves 7 bits to specify the cache set, which gives 2^7 = 128 sets.  With an associativity of 2, the maximum cache size is 2 x 4KB = 8KB.  In practice this limit is rarely a serious one, because caches cannot be made too large in the first place due to performance constraints&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
From the above equation, and knowing that CacheSize = (number of sets) X (associativity) X (block size), we can derive a formula relating associativity to cache size and page size:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Associativity ≥ (cache size) / (page size)&lt;br /&gt;
&lt;br /&gt;
With a fixed page size, the only way to increase the cache size is to increase the associativity.  For example, the VAX processor has a page size of 512 bytes, so a 16KB cache would need an associativity of at least 32.&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;2body&amp;quot;&amp;gt;[[#2foot|[2]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
===Cache Coherency Problem===&lt;br /&gt;
When a computer has multiple cores or processors, each with its own cache and TLB, both cache coherence and TLB coherence become problems.  Cache coherence will be discussed later; here we focus on TLB coherence.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;span id=&amp;quot;1foot&amp;quot;&amp;gt;[[#1body|1.]]&amp;lt;/span&amp;gt; http://www.ibm.com/developerworks/linux/library/l-kernel-memory-access/index.html?ca=dgr-lnxw100LXUserSpacedth-LX  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;2foot&amp;quot;&amp;gt;[[#2body|2.]]&amp;lt;/span&amp;gt; http://people.engr.ncsu.edu/efg/521/f02/common/lectures/notes/lec9.html  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;3foot&amp;quot;&amp;gt;[[#3body|3.]]&amp;lt;/span&amp;gt; Fundamentals of Parallel Computer Architecture by Prof. Yan Solihin  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;4foot&amp;quot;&amp;gt;[[#4body|4.]]&amp;lt;/span&amp;gt; Computer Design &amp;amp; Technology - Lecture slides by Prof. Eric Rotenberg  &amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44251</id>
		<title>CSC/ECE 506 Spring 2011/ch6b df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44251"/>
		<updated>2011-03-01T03:19:38Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Translation Lookaside Buffer and Cache Addressing=&lt;br /&gt;
[[Image:Virtual_Memory_address_space.gif|thumbnail|400px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;1body&amp;quot;&amp;gt;[[#1foot|[1]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
Programs use virtual addresses to access memory and the caches.  Without virtual addresses, two programs running at the same time would each need their own section of the cache; for example, with a 512KB cache and two running programs, each program could only use 256KB of it.  Virtual addresses allow each program to use the whole cache.  One complication is that caches and main memory must ultimately be accessed with physical addresses, so virtual addresses have to be translated.  The operating system can perform this translation, but doing so is relatively slow, so a cache is used to store the most recently translated addresses.  This cache is called the Translation Lookaside Buffer, or TLB, and is discussed in more detail below.  A virtual address is broken up into two parts: a virtual page number (VPN) and a virtual page offset (VPO).  A physical address is likewise broken up into a physical page number (PPN) and a physical page offset (PPO).  The page number identifies which page the data is in, while the page offset locates the data within that page; pages are managed by the OS through page tables.  There are three different ways to use virtual and physical addresses to access data: physically addressed, virtually addressed, and physically tagged but virtually addressed&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Translation Lookaside Buffer===&lt;br /&gt;
The TLB is a small cache that holds the most recently used virtual-to-physical address translations.  When a virtual address needs to be translated, the processor looks it up in the TLB to find the corresponding physical page number.  If the translation is found, there is a TLB hit and the physical address is produced immediately.  If the translation is not found, there is a TLB miss, and the OS is called to handle it: the OS maps the virtual address to the correct physical address and loads the translation into the TLB.  The TLB reduces the time needed to map a virtual address to a physical address by avoiding a trip through the OS on every access&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Addressed===&lt;br /&gt;
Physically addressed refers to using the whole physical address to access the cache.  The computer must first translate the virtual address into a physical address using the TLB.  Once it has the physical address, it can index into the cache, pull out the block it needs, and check the tag to see whether the access is a hit or a miss.  This is not ideal because the TLB access time is added on top of the cache access time&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Virtually Addressed===&lt;br /&gt;
Virtually addressed refers to using the whole virtual address to access the cache.  The advantage is that the cache and the TLB can be accessed at the same time.  The problem is that each process has its own virtual address space and page table, so when the processor switches from one process to another, the cache still holds the previous process's data, and the new process cannot tell its own pages apart from those left behind.  To solve this, the cache and TLB have to be flushed (re-initialized to empty) before a new process can run.  If processes switch often, the latency of these flushes slows the program down&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Tagged but Virtually Addressed===&lt;br /&gt;
[[Image:VirtualtoPhysicalCacheAddress.jpg|thumbnail|600px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
In this scheme the VPO and the PPO are identical, so only the page number needs to be translated.  The TLB translates just the VPN, not the whole virtual address.  The virtual address is split up: the VPO is used to index the cache while, in parallel, the VPN is sent to the TLB to be translated into the PPN.  This allows the TLB and the cache to be accessed at the same time, avoiding the problem of a purely virtually addressed cache, because the PPO is known from the start and the cache index and byte-offset bits are contained within it.  There is one limitation, however.  The bits that specify the set and the byte offset must fit inside the page offset, so the page size has to be at least the number of sets multiplied by the block size.  This places a limit on the size of the cache, given by the following formula:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MaxCacheSize = PageSize X Associativity&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, suppose the page size is 4KB and the cache is 2-way set associative with a block size of 32 bytes.  A page size of 4KB means 12 bits are needed for the page offset, and 5 bits are needed for the block offset.  This leaves 7 bits to specify the cache set, which gives 2^7 = 128 sets.  With an associativity of 2, the maximum cache size is 2 x 4KB = 8KB.  In practice this limit is rarely a serious one, because caches cannot be made too large in the first place due to performance constraints&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
From the above equation, and knowing that CacheSize = (number of sets) X (associativity) X (block size), we can derive a formula relating associativity to cache size and page size:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Associativity ≥ (cache size) / (page size)&lt;br /&gt;
&lt;br /&gt;
With a fixed page size, the only way to increase the cache size is to increase the associativity.  For example, the VAX processor has a page size of 512 bytes, so a 16KB cache would need an associativity of at least 32.&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;2body&amp;quot;&amp;gt;[[#2foot|[2]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;span id=&amp;quot;1foot&amp;quot;&amp;gt;[[#1body|1.]]&amp;lt;/span&amp;gt; http://www.ibm.com/developerworks/linux/library/l-kernel-memory-access/index.html?ca=dgr-lnxw100LXUserSpacedth-LX  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;2foot&amp;quot;&amp;gt;[[#2body|2.]]&amp;lt;/span&amp;gt; http://people.engr.ncsu.edu/efg/521/f02/common/lectures/notes/lec9.html  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;3foot&amp;quot;&amp;gt;[[#3body|3.]]&amp;lt;/span&amp;gt; Fundamentals of Parallel Computer Architecture by Prof. Yan Solihin  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;4foot&amp;quot;&amp;gt;[[#4body|4.]]&amp;lt;/span&amp;gt; Computer Design &amp;amp; Technology - Lecture slides by Prof. Eric Rotenberg  &amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=File:VirtualtoPhysicalCacheAddress.jpg&amp;diff=44249</id>
		<title>File:VirtualtoPhysicalCacheAddress.jpg</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=File:VirtualtoPhysicalCacheAddress.jpg&amp;diff=44249"/>
		<updated>2011-03-01T03:16:41Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44244</id>
		<title>CSC/ECE 506 Spring 2011/ch6b df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44244"/>
		<updated>2011-03-01T03:08:29Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Translation Lookaside Buffer and Cache Addressing=&lt;br /&gt;
[[Image:Virtual_Memory_address_space.gif|thumbnail|400px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;1body&amp;quot;&amp;gt;[[#1foot|[1]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
Programs use virtual addresses to access memory and the caches.  Without virtual addresses, two programs running at the same time would each need their own section of the cache; for example, with a 512KB cache and two running programs, each program could only use 256KB of it.  Virtual addresses allow each program to use the whole cache.  One complication is that caches and main memory must ultimately be accessed with physical addresses, so virtual addresses have to be translated.  The operating system can perform this translation, but doing so is relatively slow, so a cache is used to store the most recently translated addresses.  This cache is called the Translation Lookaside Buffer, or TLB, and is discussed in more detail below.  A virtual address is broken up into two parts: a virtual page number (VPN) and a virtual page offset (VPO).  A physical address is likewise broken up into a physical page number (PPN) and a physical page offset (PPO).  The page number identifies which page the data is in, while the page offset locates the data within that page; pages are managed by the OS through page tables.  There are three different ways to use virtual and physical addresses to access data: physically addressed, virtually addressed, and physically tagged but virtually addressed&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Translation Lookaside Buffer===&lt;br /&gt;
The TLB is a small cache that holds the most recently used virtual-to-physical address translations.  When a virtual address needs to be translated, the processor looks it up in the TLB to find the corresponding physical page number.  If the translation is found, there is a TLB hit and the physical address is returned.  If there is a TLB miss (the translation is not found), the OS is called to handle the miss.  The OS maps the virtual address to the correct physical address and loads the translation into the TLB.  The use of a TLB reduces the time needed to map a virtual address to a physical address by avoiding a trip through the OS on every access&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
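The hit/miss behavior described above can be sketched as a toy model (hypothetical class and names, not a real OS interface; a plain Python dict stands in for the OS page-table walk):&lt;br /&gt;

```python
class TLB:
    """Toy TLB: caches the most recent VPN -> PPN translations."""

    def __init__(self, capacity=64):
        self.capacity = capacity
        self.entries = {}  # VPN -> PPN; dicts preserve insertion order

    def translate(self, vpn, page_table):
        if vpn in self.entries:
            return self.entries[vpn]  # TLB hit: no page-table walk needed
        # TLB miss: the OS (here, a dict lookup) supplies the mapping, which
        # is loaded into the TLB, evicting the oldest entry if it is full.
        ppn = page_table[vpn]
        if len(self.entries) >= self.capacity:
            self.entries.pop(next(iter(self.entries)))
        self.entries[vpn] = ppn
        return ppn

tlb = TLB(capacity=2)
page_table = {0x12345: 0x00ABC, 0x12346: 0x00DEF}
tlb.translate(0x12345, page_table)  # miss: loaded from the page table
tlb.translate(0x12345, page_table)  # hit: answered by the TLB
```
&lt;br /&gt;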
&lt;br /&gt;
===Physically Addressed===&lt;br /&gt;
Physically Addressed refers to using the whole physical address to access memory.  To access memory, the computer must first translate the virtual address into a physical address using the TLB.  After the computer has the physical address, it can index through the cache and pull out the block it needs.  The computer then checks the tag to see if it is a hit or miss.  This is not ideal because the time it takes to access the TLB is added onto the time it takes to access the cache&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Virtually Addressed===&lt;br /&gt;
Virtually Addressed refers to using the whole virtual address to access memory.  The advantage is that the cache and the TLB can be accessed at the same time.  The problem is that each process has its own virtual page table, so if the processor switches from one process to another, the cache still holds the previous process' data.  The new process cannot tell the difference between its memory pages and the memory pages of the previous process.  To solve this problem, the cache and TLB have to be flushed, or re-initialized to empty, before a new process can be worked on.  If processes switch often, the latency required to flush will slow the programs down&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Tagged but Virtually Addressed===&lt;br /&gt;
In this scheme, the VPO and the PPO are identical, so the only thing that needs to be translated is the virtual page number.  With this model, the TLB translates only the VPN, not the whole virtual address.  The VPO and VPN are split up: the VPO is used to access the cache while the VPN is sent to the TLB to be translated into the PPN.  This method allows us to access the TLB and the cache at the same time, without the problem described for purely virtual addressing, since the PPO is known from the beginning and the cache index and byte offset bits are contained within the PPO.  However, there is one problem with this approach.  The bits used to specify the set and byte offset must fit within the page offset, so the size of a page has to be at least the number of sets multiplied by the block size.  This places a limit on the size of the cache.  The maximum cache size is given by the following formula:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MaxCacheSize = PageSize X Associativity&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, consider a cache with the following parameters: the page size is 4KB and the cache is 2-way set associative with a block size of 32 bytes.  A page size of 4KB means 12 bits are needed for the page offset.  Also, 5 bits are needed for the block offset.  This leaves 7 bits to specify the cache set, which results in 2^7 or 128 different sets.  With the associativity being 2, the maximum cache size is 8KB.  This limit is not a serious one, because caches cannot be made very large in the first place due to performance constraints&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
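The bit arithmetic in this example can be checked directly (an illustrative sketch; the variable names are not from the text):&lt;br /&gt;

```python
# Verifying the worked example: 4 KB pages, 2-way set associative, 32 B blocks.
page_size = 4 * 1024       # bytes
block_size = 32            # bytes
associativity = 2

page_offset_bits = page_size.bit_length() - 1    # 12
block_offset_bits = block_size.bit_length() - 1  # 5
set_index_bits = page_offset_bits - block_offset_bits  # 7
num_sets = 1 << set_index_bits                   # 128 sets

max_cache_size = page_size * associativity       # 8 KB
# Consistent with: cache size = (number of sets) x (associativity) x (block size)
assert max_cache_size == num_sets * associativity * block_size
```
&lt;br /&gt;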
From the above equation, and knowing that cache size = (number of sets) X (associativity) X (block size), we can derive a formula relating associativity to cache size and page size:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Associativity ≥ (cache size) / (page size)&lt;br /&gt;
&lt;br /&gt;
With a fixed page size, the only way to increase the cache size is to increase the associativity.  For example, the VAX processor has a page size of 512 bytes; if the cache is 16KB, then the associativity would have to be at least 32.&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;2body&amp;quot;&amp;gt;[[#2foot|[2]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;&lt;br /&gt;
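This relation can be wrapped in a small helper (illustrative only; the function name is made up), using ceiling division in case the cache size is not an exact multiple of the page size:&lt;br /&gt;

```python
def min_associativity(cache_size, page_size):
    # Associativity >= cache size / page size; round up for non-multiples.
    return -(-cache_size // page_size)

min_associativity(8 * 1024, 4 * 1024)   # 2  (the 8KB cache example above)
min_associativity(16 * 1024, 512)       # 32 (a 16KB cache with 512-byte pages)
```
&lt;br /&gt;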
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;span id=&amp;quot;1foot&amp;quot;&amp;gt;[[#1body|1.]]&amp;lt;/span&amp;gt; http://www.ibm.com/developerworks/linux/library/l-kernel-memory-access/index.html?ca=dgr-lnxw100LXUserSpacedth-LX  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;2foot&amp;quot;&amp;gt;[[#2body|2.]]&amp;lt;/span&amp;gt; http://people.engr.ncsu.edu/efg/521/f02/common/lectures/notes/lec9.html  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;3foot&amp;quot;&amp;gt;[[#3body|3.]]&amp;lt;/span&amp;gt; Fundamentals of Parallel Computer Architecture by Prof. Yan Solihin  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;4foot&amp;quot;&amp;gt;[[#4body|4.]]&amp;lt;/span&amp;gt; Computer Design &amp;amp; Technology - Lecture slides by Prof. Eric Rotenberg  &amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44241</id>
		<title>CSC/ECE 506 Spring 2011/ch6b df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44241"/>
		<updated>2011-03-01T03:06:53Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Translation Lookaside Buffer and Cache Addressing=&lt;br /&gt;
[[Image:Virtual_Memory_address_space.gif|thumbnail|400px|Virtual Memory Addressing&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;1body&amp;quot;&amp;gt;[[#1foot|[1]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;]]&lt;br /&gt;
Programs use virtual addresses to access memory and the caches.  If programs did not use virtual addresses, two programs running at the same time would each need their own section of the cache.  For example, if the computer has a 512MB L1 cache and two programs are running at the same time, each program could only use 256MB of the cache.  Virtual addresses allow each program to use the whole cache.  One problem with programs accessing memory through virtual addresses is that caches and memory must be accessed with physical addresses.  The Operating System usually translates these virtual addresses into physical addresses.  Because the OS can be relatively slow at this, a cache is used to store the most recently used translations.  This cache is called the Translation Lookaside Buffer, or TLB, and is discussed in more detail below.  Virtual addresses are broken up into two parts: a virtual page number (VPN) and a virtual page offset (VPO).  Physical addresses are broken up into a physical page number (PPN) and a physical page offset (PPO).  The page number determines which page the data is in, while the page offset gives the location of the data within that page.  These pages are managed by the OS through page tables.  There are three different ways to use the virtual and physical addresses to access data: physically addressed, virtually addressed, and physically tagged but virtually addressed&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Translation Lookaside Buffer===&lt;br /&gt;
The TLB is a small cache that holds the most recently used virtual-to-physical address translations.  When a virtual address needs to be translated, the processor looks it up in the TLB to find the corresponding physical page number.  If the translation is found, there is a TLB hit and the physical address is returned.  If there is a TLB miss (the translation is not found), the OS is called to handle the miss.  The OS maps the virtual address to the correct physical address and loads the translation into the TLB.  The use of a TLB reduces the time needed to map a virtual address to a physical address by avoiding a trip through the OS on every access&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;4body&amp;quot;&amp;gt;[[#4foot|[4]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Addressed===&lt;br /&gt;
Physically Addressed refers to using the whole physical address to access memory.  To access memory, the computer must first translate the virtual address into a physical address using the TLB.  After the computer has the physical address, it can index through the cache and pull out the block it needs.  The computer then checks the tag to see if it is a hit or miss.  This is not ideal because the time it takes to access the TLB is added onto the time it takes to access the cache&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Virtually Addressed===&lt;br /&gt;
Virtually Addressed refers to using the whole virtual address to access memory.  The advantage is that the cache and the TLB can be accessed at the same time.  The problem is that each process has its own virtual page table, so if the processor switches from one process to another, the cache still holds the previous process' data.  The new process cannot tell the difference between its memory pages and the memory pages of the previous process.  To solve this problem, the cache and TLB have to be flushed, or re-initialized to empty, before a new process can be worked on.  If processes switch often, the latency required to flush will slow the programs down&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
===Physically Tagged but Virtually Addressed===&lt;br /&gt;
In this scheme, the VPO and the PPO are identical, so the only thing that needs to be translated is the virtual page number.  With this model, the TLB translates only the VPN, not the whole virtual address.  The VPO and VPN are split up: the VPO is used to access the cache while the VPN is sent to the TLB to be translated into the PPN.  This method allows us to access the TLB and the cache at the same time, without the problem described for purely virtual addressing, since the PPO is known from the beginning and the cache index and byte offset bits are contained within the PPO.  However, there is one problem with this approach.  The bits used to specify the set and byte offset must fit within the page offset, so the size of a page has to be at least the number of sets multiplied by the block size.  This places a limit on the size of the cache.  The maximum cache size is given by the following formula:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MaxCacheSize = PageSize X Associativity&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, consider a cache with the following parameters: the page size is 4KB and the cache is 2-way set associative with a block size of 32 bytes.  A page size of 4KB means 12 bits are needed for the page offset.  Also, 5 bits are needed for the block offset.  This leaves 7 bits to specify the cache set, which results in 2^7 or 128 different sets.  With the associativity being 2, the maximum cache size is 8KB.  This limit is not a serious one, because caches cannot be made very large in the first place due to performance constraints&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;3body&amp;quot;&amp;gt;[[#3foot|[3]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;.&lt;br /&gt;
From the above equation, and knowing that cache size = (number of sets) X (associativity) X (block size), we can derive a formula relating associativity to cache size and page size:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Associativity ≥ (cache size) / (page size)&lt;br /&gt;
&lt;br /&gt;
With a fixed page size, the only way to increase the cache size is to increase the associativity.  For example, the VAX processor has a page size of 512 bytes; if the cache is 16KB, then the associativity would have to be at least 32.&amp;lt;sup&amp;gt;&amp;lt;span id=&amp;quot;2body&amp;quot;&amp;gt;[[#2foot|[2]]]&amp;lt;/span&amp;gt;&amp;lt;/sup&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=References=&lt;br /&gt;
&amp;lt;span id=&amp;quot;1foot&amp;quot;&amp;gt;[[#1body|1.]]&amp;lt;/span&amp;gt; http://www.ibm.com/developerworks/linux/library/l-kernel-memory-access/index.html?ca=dgr-lnxw100LXUserSpacedth-LX  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;2foot&amp;quot;&amp;gt;[[#2body|2.]]&amp;lt;/span&amp;gt; http://people.engr.ncsu.edu/efg/521/f02/common/lectures/notes/lec9.html  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;3foot&amp;quot;&amp;gt;[[#3body|3.]]&amp;lt;/span&amp;gt; Fundamentals of Parallel Computer Architecture by Prof. Yan Solihin  &amp;lt;br&amp;gt;&lt;br /&gt;
&amp;lt;span id=&amp;quot;4foot&amp;quot;&amp;gt;[[#4body|4.]]&amp;lt;/span&amp;gt; Computer Design &amp;amp; Technology - Lecture slides by Prof. Eric Rotenberg  &amp;lt;br&amp;gt;&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44218</id>
		<title>CSC/ECE 506 Spring 2011/ch6b df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44218"/>
		<updated>2011-03-01T02:25:05Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Translation Lookaside Buffer and Cache Addressing=&lt;br /&gt;
[[Image:Virtual_Memory_address_space.gif|thumbnail|400px|Virtual Memory Addressing]]&lt;br /&gt;
Programs use virtual addresses to access memory and the caches.  If programs did not use virtual addresses, two programs running at the same time would each need their own section of the cache.  For example, if the computer has a 512MB L1 cache and two programs are running at the same time, each program could only use 256MB of the cache.  Virtual addresses allow each program to use the whole cache.  One problem with programs accessing memory through virtual addresses is that caches and memory must be accessed with physical addresses.  The Operating System usually translates these virtual addresses into physical addresses.  Because the OS can be relatively slow at this, a cache is used to store the most recently used translations.  This cache is called the Translation Lookaside Buffer, or TLB, and is discussed in more detail below.  Virtual addresses are broken up into two parts: a virtual page number (VPN) and a virtual page offset (VPO).  Physical addresses are broken up into a physical page number (PPN) and a physical page offset (PPO).  The page number determines which page the data is in, while the page offset gives the location of the data within that page.  These pages are managed by the OS through page tables.  There are three different ways to use the virtual and physical addresses to access data: physically addressed, virtually addressed, and physically tagged but virtually addressed.&lt;br /&gt;
&lt;br /&gt;
===Translation Lookaside Buffer===&lt;br /&gt;
The TLB is a small cache that holds the most recently used virtual-to-physical address translations.  When a virtual address needs to be translated, the processor looks it up in the TLB to find the corresponding physical page number.  If the translation is found, there is a TLB hit and the physical address is returned.  If there is a TLB miss (the translation is not found), the OS is called to handle the miss.  The OS maps the virtual address to the correct physical address and loads the translation into the TLB.  The use of a TLB reduces the time needed to map a virtual address to a physical address by avoiding a trip through the OS on every access.&lt;br /&gt;
&lt;br /&gt;
===Physically Addressed===&lt;br /&gt;
Physically Addressed refers to using the whole physical address to access memory.  To access memory, the computer must first translate the virtual address into a physical address using the TLB.  After the computer has the physical address, it can index through the cache and pull out the block it needs.  The computer then checks the tag to see if it is a hit or miss.  This is not ideal because the time it takes to access the TLB is added onto the time it takes to access the cache.&lt;br /&gt;
&lt;br /&gt;
===Virtually Addressed===&lt;br /&gt;
Virtually Addressed refers to using the whole virtual address to access memory.  The advantage is that the cache and the TLB can be accessed at the same time.  The problem is that each process has its own virtual page table, so if the processor switches from one process to another, the cache still holds the previous process' data.  The new process cannot tell the difference between its memory pages and the memory pages of the previous process.  To solve this problem, the cache and TLB have to be flushed, or re-initialized to empty, before a new process can be worked on.  If processes switch often, the latency required to flush will slow the programs down.&lt;br /&gt;
&lt;br /&gt;
===Physically Tagged but Virtually Addressed===&lt;br /&gt;
In this scheme, the VPO and the PPO are identical, so the only thing that needs to be translated is the virtual page number.  With this model, the TLB translates only the VPN, not the whole virtual address.  The VPO and VPN are split up: the VPO is used to access the cache while the VPN is sent to the TLB to be translated into the PPN.  This method allows us to access the TLB and the cache at the same time, without the problem described for purely virtual addressing, since the PPO is known from the beginning and the cache index and byte offset bits are contained within the PPO.  However, there is one problem with this approach.  The bits used to specify the set and byte offset must fit within the page offset, so the size of a page has to be at least the number of sets multiplied by the block size.  This places a limit on the size of the cache.  The maximum cache size is given by the following formula:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MaxCacheSize = PageSize X Associativity&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, consider a cache with the following parameters: the page size is 4KB and the cache is 2-way set associative with a block size of 32 bytes.  A page size of 4KB means 12 bits are needed for the page offset.  Also, 5 bits are needed for the block offset.  This leaves 7 bits to specify the cache set, which results in 2^7 or 128 different sets.  With the associativity being 2, the maximum cache size is 8KB.  This limit is not a serious one, because caches cannot be made very large in the first place due to performance constraints.&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44206</id>
		<title>CSC/ECE 506 Spring 2011/ch6b df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44206"/>
		<updated>2011-03-01T01:49:44Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Translation Lookaside Buffer and Cache Addressing=&lt;br /&gt;
[[Image:Virtual_Memory_address_space.gif|thumbnail|400px|Virtual Memory Addressing]]&lt;br /&gt;
Programs use virtual addresses to access memory and the caches.  If programs did not use virtual addresses, two programs running at the same time would each need their own section of the cache.  For example, if the computer has a 512MB L1 cache and two programs are running at the same time, each program could only use 256MB of the cache.  Virtual addresses allow each program to use the whole cache.  One problem with programs accessing memory through virtual addresses is that caches and memory must be accessed with physical addresses.  The Operating System usually translates these virtual addresses into physical addresses.  Because the OS can be relatively slow at this, a cache is used to store the most recently used translations.  This cache is called the Translation Lookaside Buffer or TLB.  Virtual addresses are broken up into two parts: a virtual tag and a virtual page offset, or VPO.  Physical addresses are broken up into a physical tag and a physical page offset, or PPO.  The PPO gives the location of the data within a memory page.  These pages are managed by the OS through page tables.  There are three different ways to use the virtual and physical addresses to access data: physically addressed, virtually addressed, and physically tagged but virtually addressed.&lt;br /&gt;
&lt;br /&gt;
===Physically Addressed===&lt;br /&gt;
Physically Addressed refers to using the whole physical address to access memory.  To access memory, the computer must first translate the virtual address into a physical address using the TLB.  After the computer has the physical address, it can index through the cache and pull out the block it needs.  The computer then checks the tag to see if it is a hit or miss.  This is not ideal because the time it takes to access the TLB is added onto the time it takes to access the cache.&lt;br /&gt;
&lt;br /&gt;
===Virtually Addressed===&lt;br /&gt;
Virtually Addressed refers to using the whole virtual address to access memory.  The advantage is that the cache and the TLB can be accessed at the same time.  The problem is that each process has its own virtual page table, so if the processor switches from one process to another, the cache still holds the previous process' data.  The new process cannot tell the difference between its memory pages and the memory pages of the previous process.  To solve this problem, the cache and TLB have to be flushed before a new process can be worked on.  If processes switch often, the latency required to flush may not be acceptable.&lt;br /&gt;
&lt;br /&gt;
===Physically Tagged but Virtually Addressed===&lt;br /&gt;
In this scheme, the VPO and the PPO are identical, so the only thing that needs to be translated is the virtual tag into the physical tag.  With this model, the TLB and memory can be accessed simultaneously since the PPO is known from the beginning.  The VPO and virtual tag are split up: the VPO is used to access memory while the virtual tag is sent to the TLB to be translated into the physical tag.  This method allows us to access the TLB and memory at the same time without the problem described for purely virtual addressing.  There is one problem with this approach.  The bits used to specify the set and byte offset must fit within the page offset, so the size of a page has to be at least the number of sets multiplied by the block size.  This places a limit on the size of the cache.  The maximum cache size is given by the following formula:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MaxCacheSize = PageSize X Associativity&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, consider a cache with the following parameters: the page size is 4KB and the cache is 2-way set associative with a block size of 32 bytes.  A page size of 4KB means 12 bits are needed for the page offset.  Also, 5 bits are needed for the block offset.  This leaves 7 bits to specify the cache set, which results in 2^7 or 128 different sets.  With the associativity being 2, the maximum cache size is 8KB.  This limit is not a serious one, because caches cannot be made very large in the first place due to performance constraints.&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44204</id>
		<title>CSC/ECE 506 Spring 2011/ch6b df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44204"/>
		<updated>2011-03-01T01:48:09Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Translation Lookaside Buffer and Cache Addressing=&lt;br /&gt;
[[Image:Virtual_Memory_address_space.gif|thumbnail|400px|Virtual Memory Addressing]]&lt;br /&gt;
Programs use virtual addresses to access memory and the caches.  If programs did not use virtual addresses, two programs running at the same time would each need their own section of the cache.  For example, if the computer has a 512MB L1 cache and two programs are running at the same time, each program could only use 256MB of the cache.  Virtual addresses allow each program to use the whole cache.  One problem with programs accessing memory through virtual addresses is that caches and memory must be accessed with physical addresses.  The Operating System usually translates these virtual addresses into physical addresses.  Because the OS can be relatively slow at this, a cache is used to store the most recently used translations.  This cache is called the Translation Lookaside Buffer or TLB.  Virtual addresses are broken up into two parts: a virtual tag and a virtual page offset, or VPO.  Physical addresses are broken up into a physical tag and a physical page offset, or PPO.  The PPO gives the location of the data within a memory page.  These pages are managed by the OS through page tables.  There are three different ways to use the virtual and physical addresses to access data: physically addressed, virtually addressed, and physically tagged but virtually addressed.&lt;br /&gt;
&lt;br /&gt;
===Physically Addressed===&lt;br /&gt;
Physically Addressed refers to using the whole physical address to access memory.  To access memory, the computer must first translate the virtual address into a physical address using the TLB.  After the computer has the physical address, it can index through the cache and pull out the block it needs.  The computer then checks the tag to see if it is a hit or miss.  This is not ideal because the time it takes to access the TLB is added onto the time it takes to access the cache.&lt;br /&gt;
&lt;br /&gt;
===Virtually Addressed===&lt;br /&gt;
Virtually Addressed refers to using the whole virtual address to access memory.  The advantage is that the cache and the TLB can be accessed at the same time.  The problem is that each process has its own virtual page table, so if the processor switches from one process to another, the cache still holds the previous process' data.  The new process cannot tell the difference between its memory pages and the memory pages of the previous process.  To solve this problem, the cache and TLB have to be flushed before a new process can be worked on.  If processes switch often, the latency required to flush may not be acceptable.&lt;br /&gt;
&lt;br /&gt;
===Physically Tagged but Virtually Addressed===&lt;br /&gt;
In this scheme, the VPO and the PPO are identical, so the only thing that needs to be translated is the virtual tag into the physical tag.  With this model, the TLB and memory can be accessed simultaneously since the PPO is known from the beginning.  The VPO and virtual tag are split up: the VPO is used to access memory while the virtual tag is sent to the TLB to be translated into the physical tag.  This method allows us to access the TLB and memory at the same time without the problem described for purely virtual addressing.  There is one problem with this approach.  The bits used to specify the set and byte offset must fit within the page offset, so the size of a page has to be at least the number of sets multiplied by the block size.  This places a limit on the size of the cache.  The maximum cache size is given by the following formula:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MaxCacheSize = PageSize X Associativity&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For example, consider a cache with the following parameters: the page size is 4KB and the cache is 2-way set associative with a block size of 32 bytes.  A page size of 4KB means 12 bits are needed for the page offset.  Also, 5 bits are needed for the block offset.  This leaves 7 bits to specify the cache set, which results in 2^7 or 128 different sets.  With the associativity being 2, the maximum cache size is 8KB.  This limit is not a serious one, because caches cannot be made very large in the first place due to performance constraints.&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44202</id>
		<title>CSC/ECE 506 Spring 2011/ch6b df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44202"/>
		<updated>2011-03-01T01:35:59Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Translation Lookaside Buffer and Cache Addressing=&lt;br /&gt;
Programs use virtual addresses to access memory and the caches.  If programs did not use virtual addresses, two programs running at the same time would each need their own section of the cache.  For example, if the computer has a 512MB L1 cache and two programs are running at the same time, each program could only use 256MB of the cache.  Virtual addresses allow each program to use the whole cache.  One problem with programs accessing memory through virtual addresses is that caches and memory must be accessed with physical addresses.  The Operating System usually translates these virtual addresses into physical addresses.  Because the OS can be relatively slow at this, a cache is used to store the most recently used translations.  This cache is called the Translation Lookaside Buffer or TLB.  Virtual addresses are broken up into two parts: a virtual tag and a virtual page offset, or VPO.  Physical addresses are broken up into a physical tag and a physical page offset, or PPO.  The PPO gives the location of the data within a memory page.  These pages are managed by the OS through page tables.  There are three different ways to use the virtual and physical addresses to access data: physically addressed, virtually addressed, and physically tagged but virtually addressed.&lt;br /&gt;
&lt;br /&gt;
[[Image:Virtual_Memory_address_space.gif]]&lt;br /&gt;
&lt;br /&gt;
===Physically Addressed===&lt;br /&gt;
Physically Addressed refers to using the whole physical address to access memory.  To access memory, the computer must first translate the virtual address into a physical address using the TLB.  After the computer has the physical address, it can index through the cache and pull out the block it needs.  The computer then checks the tag to see if it is a hit or miss.  This is not ideal because the time it takes to access the TLB is added onto the time it takes to access the cache.&lt;br /&gt;
&lt;br /&gt;
===Virtually Addressed===&lt;br /&gt;
Virtually addressed refers to using the whole virtual address to access the cache.  The advantage is that the cache and the TLB can be accessed at the same time.  The problem is that each process has its own virtual page table, so if the processor switches from one process to another, the cache still holds the previous process' data.  The new process cannot tell the difference between its own memory pages and those of the previous process.  To solve this problem, the cache and the TLB have to be flushed before a new process can run.  If processes switch often, the latency required for these flushes may be significant.&lt;br /&gt;
&lt;br /&gt;
===Physically Tagged but Virtually Addressed===&lt;br /&gt;
This is the scheme in which the VPO and the PPO are identical, so the only thing that needs to be translated is the virtual tag into the physical tag.  With this model, the TLB and the cache can be accessed simultaneously, since the PPO is known from the beginning.  The VPO and the virtual tag are split up: the VPO is used to index the cache while the virtual tag is sent to the TLB to be translated into the physical tag.  This method allows the TLB and the cache to be accessed at the same time, without the problem that arises when the virtual address is used by itself.  There is one limitation with this approach.  The bits used to specify the set and the byte offset must fit within the page offset.  Because of this, the page size has to be at least the number of sets multiplied by the block size, which places a limit on the size of the cache.  The maximum cache size is given by the following formula:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
MaxCacheSize = PageSize X Associativity&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
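As a quick check, the formula can be evaluated for concrete parameters (the same ones used in the worked example below: 4KB pages, a 2-way set associative cache, and 32-byte blocks):&lt;br /&gt;

```python
# Evaluating MaxCacheSize = PageSize x Associativity for concrete
# parameters: 4 KB pages, 2-way set associative, 32-byte blocks.

def max_cache_size(page_size_bytes, associativity):
    return page_size_bytes * associativity

page_size = 4 * 1024   # 12-bit page offset
block_size = 32        # 5-bit block offset
associativity = 2

# The set index and block offset must both fit in the page offset,
# so the number of sets is bounded by page_size / block_size.
sets = page_size // block_size                      # 128 sets (7 set-index bits)
size = max_cache_size(page_size, associativity)     # 8192 bytes = 8 KB
```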
For example, suppose you have a cache with the following parameters: the page size is 4KB and the cache is 2-way set associative with a block size of 32 bytes.  A page size of 4KB means 12 bits are needed for the page offset.  Also, 5 bits are needed for the block offset.  This leaves 7 bits to specify the cache set, which results in 2^7, or 128, different sets.  With an associativity of 2, the maximum cache size is 8KB.  This issue is not a major one, because caches cannot be too big in the first place due to performance constraints.&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
	<entry>
		<id>https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44192</id>
		<title>CSC/ECE 506 Spring 2011/ch6b df</title>
		<link rel="alternate" type="text/html" href="https://wiki.expertiza.ncsu.edu/index.php?title=CSC/ECE_506_Spring_2011/ch6b_df&amp;diff=44192"/>
		<updated>2011-03-01T01:22:10Z</updated>

		<summary type="html">&lt;p&gt;Dgfleisc: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;=Translation Lookaside Buffer and Cache Addressing=&lt;br /&gt;
Programs use virtual addresses to access memory and caches.  If programs did not use virtual addresses, two programs running at the same time would each need their own section of the cache.  For example, if the computer had a 512MB L1 cache and two programs were running at the same time, each program could only use 256MB of the cache.  Virtual addresses allow each program to use the whole cache.  One problem with programs accessing memory through virtual addresses is that caches and memory must be accessed with physical addresses.  The Operating System translates these virtual addresses into physical addresses.  Because the OS can be relatively slow at doing this, a cache that stores the most recently used translations is added.  This cache is called the Translation Lookaside Buffer, or TLB.  Virtual addresses are broken up into two parts: a virtual tag and a virtual page offset, or VPO.  Physical addresses are broken up into a physical tag and a physical page offset, or PPO.  The PPO determines which memory page a given value is in.  These pages are managed by the OS through page tables.  There are three different ways to use virtual and physical addresses to access data: physically addressed, virtually addressed, and physically tagged but virtually addressed.&lt;br /&gt;
[[Image:Virtual_Memory_address_space.gif]]&lt;br /&gt;
===Physically Addressed===&lt;br /&gt;
Physically addressed refers to using the whole physical address to access the cache.  To access memory, the computer must first translate the virtual address into a physical address using the TLB.  Once the computer has the physical address, it can index into the cache and pull out the block it needs.  The computer then checks the tag to see whether the access is a hit or a miss.  This is not ideal because the time it takes to access the TLB is added to the time it takes to access the cache.&lt;br /&gt;
&lt;br /&gt;
===Virtually Addressed===&lt;br /&gt;
Virtually addressed refers to using the whole virtual address to access the cache.  The advantage is that the cache and the TLB can be accessed at the same time.  The problem is that each process has its own virtual page table, so if the processor switches from one process to another, the cache still holds the previous process' data.  The new process cannot tell the difference between its own memory pages and those of the previous process.  To solve this problem, the cache and the TLB have to be flushed before a new process can run.  If processes switch often, the latency required for these flushes may be significant.&lt;br /&gt;
&lt;br /&gt;
===Physically Tagged but Virtually Addressed===&lt;br /&gt;
This is the scheme in which the VPO and the PPO are identical, so the only thing that needs to be translated is the virtual tag into the physical tag.  With this model, the TLB and the cache can be accessed simultaneously, since the PPO is known from the beginning.  The VPO and the virtual tag are split up: the VPO is used to index the cache while the virtual tag is sent to the TLB to be translated into the physical tag.  This method allows the TLB and the cache to be accessed at the same time, without the problem that arises when the virtual address is used by itself.  There is one limitation with this approach.  The bits used to specify the set and the byte offset must fit within the page offset.  Because of this, the page size has to be at least the number of sets multiplied by the block size, which places a limit on the size of the cache.  The maximum cache size is given by the following formula:&lt;br /&gt;
&lt;br /&gt;
MaxCacheSize = PageSize X Associativity&lt;br /&gt;
&lt;br /&gt;
For example, suppose you have a cache with the following parameters: the page size is 4KB and the cache is 2-way set associative with a block size of 32 bytes.  A page size of 4KB means 12 bits are needed for the page offset.  Also, 5 bits are needed for the block offset.  This leaves 7 bits to specify the cache set, which results in 2^7, or 128, different sets.  With an associativity of 2, the maximum cache size is 8KB.  This issue is not a major one, because caches cannot be too big in the first place due to performance constraints.&lt;/div&gt;</summary>
		<author><name>Dgfleisc</name></author>
	</entry>
</feed>