Thursday, May 25, 2006

Review - A Work In Progress

Conceptual Summary - In Reverse Order
Here are some concepts to focus on in reviewing.

Exception Handling:
Dynamic vs. static binding of exception handlers. How is dynamic binding more flexible? How is static binding better?
Scoping and propagation.
How does Java improve on C++'s exception handling? How does the finally clause help? How about the OO nature of Java's exceptions? The handling of built-in exceptions not explicitly raised by the user?
How can a language without specific features for exception handling handle errors? What are the drawbacks of this approach?
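As a concrete reference point (this is just an illustrative sketch, not an example from the book; the class and file names are made up), here is a minimal Java program showing a handler bound to a try block and a finally clause that runs whether or not an exception is raised. Compare with C++, which has no finally clause and does not require exceptions to belong to any class hierarchy.

// FinallyDemo.java - minimal sketch of Java exception handling (illustrative names)
import java.io.FileReader;
import java.io.IOException;

public class FinallyDemo {
    static int firstChar(String path) {
        FileReader in = null;
        try {
            in = new FileReader(path);      // may throw FileNotFoundException, a kind of IOException
            return in.read();               // may throw IOException
        } catch (IOException e) {           // the handler is attached to this try block
            System.err.println("I/O problem: " + e.getMessage());
            return -1;
        } finally {                         // runs on normal exit, after a handler, or before propagation
            try { if (in != null) in.close(); } catch (IOException ignored) {}
        }
    }

    public static void main(String[] args) {
        System.out.println(firstChar("no-such-file.txt"));   // prints the error message, then -1
    }
}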

Concurrency:

What are cooperative and competitive synchronization?
Why do we need them?
How does a binary semaphore help with competitive synchronization?
How does a regular (counting) semaphore help with competitive synchronization?
What are the inner workings of wait and release for semaphores?
What is a monitor?
What is message passing? Rendezvous?
How can we use message passing to implement a binary semaphore? A monitor?
What mechanism is used for concurrency in Ada 83? Ada 95? Java?
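To make the semaphore questions concrete, here is a small Java sketch (the class name is mine; java.util.concurrent.Semaphore stands in for the textbook's wait/release operations). A binary semaphore guards a shared counter, and a synchronized method is shown as the monitor-style alternative. acquire() corresponds to wait (block if no permit is available) and release() to release (return the permit, waking a waiter if any).

// SemaphoreCounter.java - binary semaphore guarding a shared counter (illustrative sketch)
import java.util.concurrent.Semaphore;

public class SemaphoreCounter {
    private static int total = 0;
    private static final Semaphore mutex = new Semaphore(1);   // binary semaphore: one permit

    static void increment() throws InterruptedException {
        mutex.acquire();        // "wait": take the permit or block until it is free
        try {
            total = total + 1;  // critical section: only one thread at a time
        } finally {
            mutex.release();    // "release": give the permit back, waking a waiter if any
        }
    }

    // Monitor-style alternative: the language does the locking for us.
    static synchronized void incrementMonitor() { total = total + 1; }

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            try { for (int i = 0; i < 100000; i++) increment(); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        };
        Thread a = new Thread(work), b = new Thread(work);
        a.start(); b.start(); a.join(); b.join();
        System.out.println(total);   // 200000 every time: the semaphore serializes the updates
    }
}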

Interactive Fiction:
Why is an object-oriented language well suited for programming interactive fiction? Why is a natural-language-based programming language well suited?

Implementing Subprograms:
FORTRAN 77 (no recursion) vs. ALGOL and its successors.

FORTRAN 77:
How does it handle activation records, access to nonlocals (variables that are neither local nor global), and transfer of control? Why does this scheme present a problem in terms of recursion?

ALGOL:
With stack-allocated activation records. How does this allow for recursion?
What does an activation record look like?
What is the difference between the dynamic and the static link? Why is a static link chain O(n)? What is the idea behind a display, and how can it minimize lookup cost?
Where scoping is static, how does a pair (depth, offset) work to identify a nonlocal? Why can this not work for dynamic scoping, and what would we need to store instead?
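Purely as an illustration (the Frame class and lookup method below are invented for this sketch, not part of any real runtime), here is a Java model of resolving a (depth, offset) pair by walking the static-link chain. This is why a reference n scopes out costs n link hops unless a display gives direct access.

// StaticChain.java - illustrative model of nonlocal lookup via static links
public class StaticChain {
    static class Frame {
        int[] locals;        // the local-variable slots of one activation record
        Frame staticLink;    // activation record of the lexically enclosing scope
        Frame(int size, Frame staticLink) { this.locals = new int[size]; this.staticLink = staticLink; }
    }

    // Resolve (depth, offset): follow 'depth' static links, then index by 'offset'.
    static int lookup(Frame current, int depth, int offset) {
        Frame f = current;
        for (int i = 0; i < depth; i++) f = f.staticLink;   // O(depth) hops; a display would make this O(1)
        return f.locals[offset];
    }

    public static void main(String[] args) {
        Frame outer = new Frame(2, null);        // e.g. an activation of procedure P
        outer.locals[1] = 42;                    // some variable x in P
        Frame inner = new Frame(1, outer);       // Q nested inside P, called from P
        System.out.println(lookup(inner, 1, 1)); // Q references x as (depth 1, offset 1) -> 42
    }
}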

LISP:

How would you write a LISP program to return N + 3?
To get the length of a list?
What is car? cdr? caadar? If I gave you a list, could you tell me caadar of that list?

Guarded Commands:
The if-fi and do-od constructs: how do they work? How are guarded commands useful for ensuring program correctness?

How are guarded commands useful in concurrency for achieving nondeterminism?
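Java has no guarded-command syntax, so the following is only a rough simulation (class name invented): collect the guards whose conditions are true, abort if none are open, and otherwise choose one of the open guards at random. That last step is the nondeterminism the if-fi construct provides.

// GuardedIf.java - rough Java simulation of Dijkstra's if-fi with nondeterministic choice
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class GuardedIf {
    public static void main(String[] args) {
        int x = 5, y = 5;
        // Guarded selection: max := x if x >= y  []  max := y if y >= x
        List<Runnable> open = new ArrayList<>();
        if (x >= y) open.add(() -> System.out.println("max = x = " + x));
        if (y >= x) open.add(() -> System.out.println("max = y = " + y));
        if (open.isEmpty()) {
            throw new RuntimeException("abort: no guard is true");  // if-fi aborts when no guard holds
        }
        // When several guards are true (here x == y), any one may be chosen.
        open.get(new Random().nextInt(open.size())).run();
    }
}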

Subprograms:

In, out, and in-out parameter modes.
What are pass by value, pass by result, pass by value-result,
and pass by reference?
Positional vs. keyword parameters. Default parameters.
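For a concrete contrast (a sketch only): Java passes everything by value, including object references, so an in-out effect has to be simulated. The one-element-array trick below is just one common way to fake it.

// ParamModes.java - Java passes everything by value, so out/in-out must be simulated
public class ParamModes {
    static void byValue(int n)            { n = n + 1; }           // caller's variable is unchanged
    static void simulatedInOut(int[] box) { box[0] = box[0] + 1; } // the reference is copied, but both copies
                                                                   // name the same array, so the update is visible

    public static void main(String[] args) {
        int a = 10;
        byValue(a);
        System.out.println(a);        // 10 - pass by value

        int[] b = { 10 };             // one-element array used as an in-out "cell" (illustrative trick)
        simulatedInOut(b);
        System.out.println(b[0]);     // 11 - the observable effect of in-out / pass by reference
    }
}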

Iterative Statements:
The difference between pretest and posttest loops.
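In Java terms (just an illustration, not from the lecture), while is a pretest loop and do-while is a posttest loop:

// PretestPosttest.java - a pretest loop may run zero times; a posttest loop always runs at least once
public class PretestPosttest {
    public static void main(String[] args) {
        int n = 0;
        while (n > 0) {               // pretest: condition checked before the body
            System.out.println("never printed");
        }
        do {                          // posttest: body runs once, then the condition is checked
            System.out.println("printed exactly once");
        } while (n > 0);
    }
}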

Control Structures:
Nothing

PROLOG:
Facts, goals, and rules.
The Terach example. How would you write a rule for ancestor?

Data types:
Arrays: static, fixed stack-dynamic, stack-dynamic, heap-dynamic.

Functional Programming Languages:
What is a higher order function? What is mapcar?
In Haskell, what is an infinite list? Lazy evaluation.
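mapcar is a Lisp function, but if a rough modern-Java analogue helps (this sketch assumes Java streams, which are my addition, not anything from the lecture): map is the higher-order part, and Stream.iterate gives a lazy, effectively infinite sequence from which limit takes a finite prefix.

// HigherOrder.java - rough Java-stream analogues of mapcar and a lazy infinite list
import java.util.List;
import java.util.stream.Stream;

public class HigherOrder {
    public static void main(String[] args) {
        // map takes a function as an argument - the higher-order part of mapcar
        List<Integer> plusThree = Stream.of(1, 2, 3).map(n -> n + 3).toList();   // toList() needs JDK 16+
        System.out.println(plusThree);                 // [4, 5, 6]

        // A lazy "infinite list" of evens: nothing is computed until limit() asks for elements
        List<Integer> firstFiveEvens = Stream.iterate(0, n -> n + 2).limit(5).toList();
        System.out.println(firstFiveEvens);            // [0, 2, 4, 6, 8]
    }
}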

Tuesday, May 23, 2006

More slides on functional languages

http://www.cs.ndsu.nodak.edu/~slator/html/CS372/sebesta-pdf/14.pdf

Slides for Functional Programming Langs

http://www.csee.umbc.edu/331/fall00/nicholas/lectures/fininfp.ppt

Thursday, May 18, 2006

Exception Handling

http://www.cs.ndsu.nodak.edu/~slator/html/CS372/sebesta-pdf/13.pdf

Monday, May 15, 2006

Read this White Paper

by Graham Nelson on Inform and programming in English
http://www.inform-fiction.org/I7Downloads/Documents/WhitePaper.pdf

Read this on Java synchronization

http://www.ccs.neu.edu/home/kenb/g712/res/tutorial.html

Tuesday, May 09, 2006

GNAT for Mac

Check it out here:

http://www.macada.org/

Instructions on Setting Up Ada for DOS

OK, I got a simple compilation working.
Go to this ftp site (open in Firefox or IE)
ftp://ftp.cs.kuleuven.ac.be/pub/Ada-Belgium/mirrors/gnu-ada/3.15p/winnt/

and download the gnat-3.15p-nt.exe file. Running this file will install the Ada compiler.
We do not really need an IDE (integrated development environment); we can start with just the command line.
It should install to c:\gnat.
Inside c:\gnat\bin there are a bunch of executables, but all are runnable from the single program gnat,
so type gnat to see the options.
Copy ..\examples\hello.adb into that directory, so that you need not worry about other directories.
hello.adb is the source program.

Then:
gnat compile hello.adb
(this will create a file hello.ali)
gnat bind -x hello.ali
(then we need to link)
gnat link hello.ali
(and finally, run the program)
hello

The output will be:
Hello World. Welcome to GNAT

Similar steps apply for GNAT on other platforms.

Thursday, May 04, 2006

Further Ada Assignment

Work through the examples on this web site to familiarize yourself with concurrency in Ada.

http://linux.disco.unimib.it/~ferretti/lp/ada_task.html

Also read pages 292-300 in the book, which cover much related information.

Assignment

Download an Ada compiler, install it, and write a hello world program.
We are going to write a concurrent program in Ada.

Lecture Notes for Concurrency

Available as a pdf here.

Definition of Race hazard (condition)

at wikipedia:

A race hazard (or race condition) is a flaw in a system or process where the output exhibits unexpected critical dependence on the relative timing of events. The term originates with the idea of two signals racing each other to influence the output first.

Race hazards occur in poorly-designed electronics systems, especially logic circuits, but they may also arise in computer software.

...

Computing

Race hazards may arise in software, especially when communicating between separate processes or threads of execution.

Here is a simple example:

Let us assume that two threads T1 and T2 each want to increment the value of a global integer by one.

If the two threads run simultaneously without locking or synchronization, the outcome of the operation could be wrong:

  1. Integer i = 0;
  2. T1 reads the value of i into a register : 0
  3. T2 reads the value of i into a register : 0
  4. T1 increments the value of i : 0 + 1 = 1
  5. T2 increments the value of i : 0 + 1 = 1

The final value of i is 1 instead of the expected result of 2.
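Here is the same lost-update race as a runnable Java sketch (the class name and iteration count are mine, for illustration only). Without synchronization the final count usually comes out below the expected total, because increments interleave exactly as in the numbered steps above.

// LostUpdate.java - the unsynchronized increment race from the example above
public class LostUpdate {
    static int i = 0;                       // the shared global integer

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int k = 0; k < 100000; k++) {
                i = i + 1;                  // read, add, write back - not atomic
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println(i);              // expected 200000, but usually less: updates are lost
    }
}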


For another example, consider the following two tasks, in pseudocode:

global integer A = 0;

task Received()
{
    A = A + 1;
    print "RX";
}

task Timeout() // Print only the even numbers
{
    if (A is divisible by 2)
    {
        print A;
    }
}

task Received is activated whenever an interrupt is received from the serial controller, and increments the value of A.

task Timeout occurs every second. If A is divisible by 2, it prints A. Output would look something like:

0
0
0
RX
RX
2
RX
RX
4
4

Now consider this chain of events, which might occur next:

  1. timeout occurs, activating task Timeout
  2. task Timeout evaluates A and finds it is divisible by 2, so elects to execute the "print A" next.
  3. data is received on the serial port, causing an interrupt and a switch to task Received
  4. task Received runs to completion, incrementing A and printing "RX"
  5. control returns to task Timeout
  6. task Timeout executes print A, using the current value of A, which is 5.

Mutexes are used to address this problem in concurrent programming.
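Continuing the lost-update sketch from above (again with invented names, not taken from the article), protecting the read-modify-write with an explicit mutex, here java.util.concurrent.locks.ReentrantLock, removes the race:

// MutexFix.java - the lost-update example protected by an explicit mutex
import java.util.concurrent.locks.ReentrantLock;

public class MutexFix {
    static int i = 0;
    static final ReentrantLock mutex = new ReentrantLock();

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            for (int k = 0; k < 100000; k++) {
                mutex.lock();              // only one thread may hold the mutex at a time
                try {
                    i = i + 1;             // critical section
                } finally {
                    mutex.unlock();        // always release, even if the body throws
                }
            }
        };
        Thread t1 = new Thread(work), t2 = new Thread(work);
        t1.start(); t2.start();
        t1.join();  t2.join();
        System.out.println(i);             // always 200000
    }
}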

In filesystems, file locking provides a commonly used solution. A more cumbersome remedy involves reorganizing the system in such a way that one unique process (running a daemon or the like) has exclusive access to the file, and all other processes that need to access the data in that file do so only via interprocess communication with that one process (which of course requires synchronization at the process level).

In networking, consider a distributed chat network like IRC, where a user acquires channel-operator privileges in any channel he starts. If two users on different servers, on different ends of the same network, try to start the same-named channel at the same time, each user's respective server will grant channel-operator privileges to each user, since neither server will yet have received the other server's signal that it has allocated that channel. (Note that this problem has been largely solved by various IRC server implementations.)

In this case of a race hazard, the concept of the "shared resource" covers the state of the network (what channels exist, as well as what users started them and therefore have what privileges), which each server can freely change as long as it signals the other servers on the network about the changes so that they can update their conception of the state of the network. However, the latency across the network makes possible the kind of race condition described. In this case, heading off race conditions by imposing a form of control over access to the shared resource—say, appointing one server to control who holds what privileges—would mean turning the distributed network into a centralized one (at least for that one part of the network operation). Where users find such a solution unacceptable, a pragmatic solution can have the system 1) recognize when a race hazard has occurred; and 2) repair the ill effects.

A race condition exemplifies an anti-pattern.

A particularly poignant example of a race condition was one of the problems that plagued the Therac-25 (a life-critical system) accidents. Another example is the Energy Management System used by Ohio-based FirstEnergy Corp., which had a race condition in the alarm subsystem; when three sagging power lines were tripped simultaneously, the condition prevented alerts from being raised to the monitoring technicians. This software flaw eventually led to the North American Blackout of 2003.

Tuesday, May 02, 2006

Concurrency

Levels of concurrency:

program level
subprogram level
statement level
instruction level

First computers: a general processor plus separate I/O processors.
Early 1960s: multiple processors - concurrency at the program level.
Now: multiprocessors. Single Instruction Multiple Data (SIMD) - e.g., operations on vectors.
Multiple Instruction Multiple Data (MIMD): each general processor has its own data; can be a distributed system.

Physical vs. logical concurrency.
Thread of control.
Coroutines are quasi-concurrent, because there is only a single thread of control.

Physical or logical concurrency: we have, or can imagine having, different threads of control.

Task: a unit that can be in concurrent execution with other such units.
How a task differs from a subprogram:
it can be implicitly started rather than explicitly called;
when a task is invoked, the invoker need not wait for it to return (asynchronous);
on completion, control may not necessarily return to the caller.

Communication with other tasks:
shared nonlocal variables
message passing
parameters
If there is no such communication, the tasks are called disjoint.
Tasks are often used for simulations, so they need communication in order to coordinate.

Synchronization:
a mechanism controlling the order of task execution.
Cooperation vs. competition.
Cooperative: task A must wait for task B in order for A to continue sensibly.
Competitive: both require the same resource, which cannot be used simultaneously;
if A needs to access shared location x, which B is using, A must wait.

A famous cooperation example: the producer/consumer problem, from operating systems.
One unit produces items and places them in a buffer; the other consumes them, taking them from the buffer.
The consumer cannot take from the buffer if it is empty.
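A minimal Java sketch of this (the names are mine; the sketch uses a bounded ArrayBlockingQueue as the buffer, so take() blocks when the buffer is empty and put() blocks when it is full):

// ProducerConsumer.java - minimal cooperative-synchronization sketch with a bounded buffer
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumer {
    public static void main(String[] args) {
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(5);   // bounded buffer

        Thread producer = new Thread(() -> {
            try {
                for (int i = 1; i <= 10; i++) buffer.put(i);           // blocks if the buffer is full
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 1; i <= 10; i++)
                    System.out.println("consumed " + buffer.take());   // blocks if the buffer is empty
            } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        });

        producer.start();
        consumer.start();
    }
}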

A competition example:
int total = 3;
A: add 1 to total.
B: multiply total by 2.

Each task must fetch total, modify it, and write it back.
What can happen without synchronization?
Three values can result: 8 (the correct result, with A running to completion before B);
if A and B both fetch the value 3 before either writes back, whichever write lands last wins: if A writes back first, B's 6 overwrites it and total = 6; if B writes back first, A's 4 overwrites it and total = 4.

Mutual exclusion: each task must request the resource before using it and then release it.

Also, if there are more tasks than processors, we need a scheduler, which delays the execution of some tasks.
One approach is time slicing: let each process execute for a fixed amount of time. This is fine if all processes have the same priority,
but there are complications, e.g., processes waiting for I/O or delayed for synchronization.

Tasks can be in several states:
1. New - the task has been created but has not begun.
2. Runnable (ready) - ready to run but not currently running; either it has not yet been given processor time, or it was running, was blocked, and has since become ready again.
3. Running - currently executing.
4. Blocked - it was running but was interrupted by an event, most commonly an I/O request, since I/O is slow.
5. Dead - no longer active; its execution finished or it was killed.

There are different algorithms for choosing which ready task to run next when the currently running one is blocked, e.g., a priority queue.

An important characteristic: liveness - the program continues to execute, leading to its completion.
Deadlock can break liveness. How so?
A and B each need both X and Y to continue, but A holds X and B holds Y. Deadlock avoidance is a concern in both language and program design.
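Here is a minimal Java sketch of exactly that situation (names invented; the sleep just makes the interleaving reliable): thread A locks X and then wants Y, thread B locks Y and then wants X, and neither can proceed.

// DeadlockDemo.java - A holds X and wants Y; B holds Y and wants X
public class DeadlockDemo {
    static final Object X = new Object();
    static final Object Y = new Object();

    public static void main(String[] args) {
        new Thread(() -> {
            synchronized (X) {                         // A acquires X
                sleep(100);                            // give B time to grab Y
                synchronized (Y) { System.out.println("A got both"); }   // waits for Y forever
            }
        }).start();

        new Thread(() -> {
            synchronized (Y) {                         // B acquires Y
                sleep(100);
                synchronized (X) { System.out.println("B got both"); }   // waits for X forever
            }
        }).start();
        // Neither message ever prints: liveness is lost. One simple avoidance rule:
        // always acquire the locks in the same order.
    }

    static void sleep(long ms) { try { Thread.sleep(ms); } catch (InterruptedException e) {} }
}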

Language support for concurrency: PL/I, Ada 95, Java.



to be continued