Since scipy 0.19, scientific Python programmers have had access to scipy.LowLevelCallable, a facility that lets you use native compiled functions (written in C, Cython, or even numba) to speed up things like numerical integration and image filtering.
LowLevelCallable supports loading functions from a Cython module natively, but this only works if there is a Cython module to load the function from. If you're working in a Jupyter notebook, you might want to use the
%%cython magic, in which case there is no module in the first place.
However, with some trickery, we can persuade
%%cython magic code to hand over its functions to scipy.
Let's say we want to compute the Gaussian integral numerically.
$$f(a) = \int\limits_{-\infty}^{+\infty}\,e^{-ax^2}\,\mathrm{d}x = \sqrt{\frac{\pi}{a}}$$
This is of course a silly example, but it's easy to verify the results are correct seeing as we know the analytical solution.
A simple approach to integrating this as a pure Python function might look like this:
import numpy as np
from scipy.integrate import quad

def integrand(x, a):
    return np.exp(-a*x**2)

@np.vectorize
def gauss_py(a):
    y, abserr = quad(integrand, -np.inf, np.inf, (a,))
    return y
%%time
a = np.linspace(0.1, 10, 10000)
py_result = gauss_py(a)

CPU times: user 3.73 s, sys: 88.4 ms, total: 3.82 s
Wall time: 3.71 s
%matplotlib inline
from matplotlib import pyplot as plt

plt.plot(a, np.sqrt(np.pi/a), label='analytical')
plt.plot(a, py_result, label='numerical (py)')
plt.title(f'{len(a)} points')
plt.xlabel('a')
plt.ylabel('Gaussian integral')
plt.legend()
<matplotlib.legend.Legend at 0x7fe204f15710>
This works very nicely, but several seconds to compute a mere 10⁴ values of this simple integral doesn't bode well for more complex computations (suffice it to say I had a reason to learn how to use Cython for this purpose).
There are three ways to construct a
scipy.LowLevelCallable:
- from a function in a cython module
- from a ctypes or cffi function pointer
- from a PyCapsule – a Python C API facility used to safely pass pointers through Python code
The first option is out in a notebook. The second might be possible, but using a PyCapsule sounds like a safe bet. So let's do that! As Cython provides us with easy access to the CPython API, we can easily get access to the essential functions
PyCapsule_New and
PyCapsule_GetPointer.
The main objective is to create an integrand function with the C signature
double func(double x, void *user_data)
to pass to
quad(), with the
user_data pointer containing the parameter
a. With
quad(), there's a simpler way to pass in arguments, but for sake of demonstration I'll use the method that works with
dblquad() and
nquad() as well.
%load_ext cython
%%cython
from cpython.pycapsule cimport PyCapsule_New, PyCapsule_GetPointer
from cpython.mem cimport PyMem_Malloc, PyMem_Free
from libc.math cimport exp
import scipy

cdef double c_integrand(double x, void* user_data):
    """The integrand, written in Cython"""
    # Extract a.
    # Cython uses array access syntax for pointer dereferencing!
    cdef double a = (<double*>user_data)[0]
    return exp(-a*x**2)

#
# Now comes some classic C-style housekeeping
#

cdef object pack_a(double a):
    """Wrap 'a' in a PyCapsule for transport."""
    # Allocate memory where 'a' will be saved for the time being
    cdef double* a_ptr = <double*> PyMem_Malloc(sizeof(double))
    a_ptr[0] = a
    return PyCapsule_New(<void*>a_ptr, NULL, free_a)

cdef void free_a(capsule):
    """Free the memory our value is using up."""
    PyMem_Free(PyCapsule_GetPointer(capsule, NULL))

def get_low_level_callable(double a):
    # scipy.LowLevelCallable expects the function signature to
    # appear as the "name" of the capsule
    func_capsule = PyCapsule_New(<void*>c_integrand,
                                 "double (double, void *)", NULL)
    data_capsule = pack_a(a)
    return scipy.LowLevelCallable(func_capsule, data_capsule)
At this point, we should be able to use our
LowLevelCallable from Python code!
@np.vectorize
def gauss_c(a):
    c_integrand = get_low_level_callable(a)
    y, abserr = quad(c_integrand, -np.inf, np.inf)
    return y
%%time
a = np.linspace(0.1, 10, 10000)
c_result = gauss_c(a)

CPU times: user 154 ms, sys: 4.69 ms, total: 159 ms
Wall time: 159 ms
As you can see, even for such a simple function, using Cython like this results in a speed-up by more than an order of magnitude, and the results are, of course, the same:
%matplotlib inline
from matplotlib import pyplot as plt

plt.plot(a, np.sqrt(np.pi/a), label='analytical')
plt.plot(a, c_result, label='numerical (Cython)')
plt.title(f'{len(a)} points')
plt.xlabel('a')
plt.ylabel('Gaussian integral')
plt.legend()
<matplotlib.legend.Legend at 0x7fe260d48668>
[Source: https://tjol.eu/blog/lowlevelcallable-magic.html | CC-MAIN-2019-35 | en | refinedweb]
Well, today is the final day of 2007. As a famous blogger I know it is important for me to say a few words. For posterity and all.
There, I feel better now.
Happy New Year.
So, if you ever tried to figure out what a Java program that appears to be hung is doing, you are probably very familiar with the Java thread dump feature. Basically, you send a signal to the JVM, which responds by writing a stack trace of each thread in the JVM to the standard output device. In fact, a thread dump contains more useful information than just a stack trace; it also shows the state of each thread (i.e. runnable, waiting, etc.) and which Java monitors (synchronized locks) are owned and/or being waited on.
Here is a sample snippet of a thread dump:
Full thread dump Java HotSpot(TM) Client VM (1.5.0_07-b03 mixed mode):

"Timer-5" daemon prio=10 tid=0x092d5720 nid=0x73 in Object.wait() [0x9b52f000..0x9b52fd38]
    at java.lang.Object.wait(Native Method)
    - waiting on <0xa2a4b978> (a java.util.TaskQueue)
    at java.util.TimerThread.mainLoop(Timer.java:509)
    - locked <0xa2a4b978> (a java.util.TaskQueue)
    at java.util.TimerThread.run(Timer.java:462)

"Timer-4" prio=10 tid=0x0925d418 nid=0x72 in Object.wait() [0x9b4ed000..0x9b4edab8]
    at java.lang.Object.wait(Native Method)
    - waiting on <0xa2a49570> (a java.util.TaskQueue)
    at java.util.TimerThread.mainLoop(Timer.java:509)
    - locked <0xa2a49570> (a java.util.TaskQueue)
    at java.util.TimerThread.run(Timer.java:462)
As you can see a thread dump contains a lot of very useful information.
The method used to “request” a thread dump is to send a signal to the running JVM. In Unix this is the SIGQUIT signal which may be generated via either:
kill -3 <pid>
or
kill -QUIT <pid>
where <pid> is the process id of the JVM. You can also enter Ctrl-\ in the window where the JVM is running.
On Windows a thread dump is requested by sending a Ctrl-Break to the JVM process. This is pretty simple for foreground windows but requires a program (akin to Unix kill) to be used for JVMs running in the background (i.e. services).
The problem with requesting a thread dump is that it requires manual intervention, i.e. someone has to enter the kill command or press the Ctrl-Break keys to generate a thread dump. If you are having problems with your production site in the wee hours of the morning, your support staff probably won’t appreciate getting out of bed to capture a few dumps for you. In addition, a single thread dump is not as useful as a series of dumps taken over a period of time. With a single dump you only get a snapshot of what is happening. You might see a thread holding a monitor that is causing other threads to block, but you have no idea how long that condition has existed. The lock might have been released a millisecond after the dump was taken. If you have, say, 5 dumps taken over 20 minutes and the same thread is holding the monitor in all of them, then you know you’ve got a problem to investigate.
The solution I’m going to propose makes use of JNI to request a thread dump of the current JVM and capture that output to a file which may be time stamped. This allows dump output to be segregated from everything else the JVM is sending to STDOUT.
Before you invest any more time in this article, let me state that the solution I’m going to present here only partially works for Windows. It is possible to programmatically request a thread dump under Windows, but due to a limitation in Win32, the Microsoft C runtime, or both, the capture to a separate file does not work. Even though Win32 provides APIs for changing the file handles used for STDOUT/STDERR, changing them after a process has started executing does not seem to make any difference. If you do all your Java work on Windows, you’ve been warned – don’t read to the end and then send me a nasty email saying I wasted your time!
Ok, the first thing we need to do is create a Java class that will serve as an interface to our native routine that captures thread dumps:
package com.utils.threaddump;

public class ThreadDumpUtil
{
    public static int performThreadDump(final String fileName)
    {
        return(threadDumpJvm(fileName));
    }

    private static native int threadDumpJvm(final String fileName);

    static
    {
        System.loadLibrary("libthreaddump");
    }
}
This class loads a native library called libthreaddump when it is loaded and then exposes a static method to request a thread dump from Java code specifying the name of the file that should contain the captured dump.
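A hypothetical caller might look like this (the timestamp format and file location are my own choices for illustration, not the article's):

import java.text.SimpleDateFormat;
import java.util.Date;

import com.utils.threaddump.ThreadDumpUtil;

public class DumpNow
{
    public static void main(String[] args)
    {
        // Time-stamp the file name so repeated dumps don't overwrite each other
        String stamp = new SimpleDateFormat("yyyyMMdd-HHmmss").format(new Date());
        int rc = ThreadDumpUtil.performThreadDump("/tmp/threaddump-" + stamp + ".txt");
        System.out.println("performThreadDump returned " + rc);
    }
}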
Running this file through the javah tool generates a C header named com_utils_threaddump_ThreadDumpUtil.h which is used to help build our native routine.
The C code for the Unix variant follows:
#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>
#include <string.h>
#include <errno.h>
/* The include and function signature below were garbled in extraction;
   they are reconstructed from the javah-generated header named earlier. */
#include "com_utils_threaddump_ThreadDumpUtil.h"

#define FILE_STDOUT 1  /* assumed: the standard POSIX fd numbers */
#define FILE_STDERR 2

JNIEXPORT jint JNICALL
Java_com_utils_threaddump_ThreadDumpUtil_threadDumpJvm(JNIEnv* env, jclass cls, jstring fileName)
{
    /* get my process id */
    pid_t pid = getpid();

    /* open the file where we want the thread dump written */
    char* fName = (char*) (*env)->GetStringUTFChars(env, fileName, NULL);
    if (NULL == fName)
    {
        printf("threadDumpJvm: Out of memory converting filename");
        return((jint) -1L);
    }
    int fd = open(fName, O_WRONLY | O_CREAT,
                  S_IRUSR | S_IWUSR | S_IRGRP | S_IROTH);
    if (-1 == fd)
    {
        printf("threadDumpJvm: Open of file %s failed: %d[%s]\n",
               fName, errno, strerror(errno));
        (*env)->ReleaseStringUTFChars(env, fileName, fName);
        return((jint) -2L);
    }

    /* redirect stdout and stderr to our thread dump file */
    int fdOut = dup(FILE_STDOUT);
    int fdErr = dup(FILE_STDERR);
    dup2(fd, FILE_STDOUT);
    dup2(fd, FILE_STDERR);
    close(fd);
    (*env)->ReleaseStringUTFChars(env, fileName, fName);

    /* send signal requesting JVM to perform a thread dump */
    kill(pid, SIGQUIT);

    /* this is kind of hokey but we have to wait for the dump to
       complete - 10 secs should be ok */
    sleep(10);

    /* replace the original stdout and stderr file handles */
    dup2(fdOut, FILE_STDOUT);
    dup2(fdErr, FILE_STDERR);

    return((jint) 0L);
}
Following are the compile command lines I’ve used on a couple of Unix systems to build this dynamic library:
Mac OSX:
gcc -o liblibthreaddump.dylib -dynamiclib -I. -I$JAVA_HOME/include -L/usr/lib -lc libthreaddump_unix.c
Solaris:
gcc -o liblibthreaddump.so -G -I/$JAVA_HOME/include -I/$JAVA_HOME/include/solaris libthreaddump_unix.c -lc
Here is the C code for the Windows version of the native library:
#define WIN32_LEAN_AND_MEAN
#include <windows.h>
#include <stdio.h>
/* The include and function signature below were garbled in extraction;
   they are reconstructed from the javah-generated header named earlier. */
#include "com_utils_threaddump_ThreadDumpUtil.h"

JNIEXPORT jint JNICALL
Java_com_utils_threaddump_ThreadDumpUtil_threadDumpJvm(JNIEnv* env, jclass cls, jstring fileName)
{
    auto HANDLE fd;
    auto HANDLE fdOut;
    auto HANDLE fdErr;
    auto long retValue = 0L;
    auto char* errText = "";
    auto DWORD pid = GetCurrentProcessId();

    /* open the file where we want the thread dump written */
    char* fName = (char*) (*env)->GetStringUTFChars(env, fileName, NULL);
    if (NULL == fName)
    {
        printf("threadDumpJvm: Out of memory converting filename");
        return((jint) -1L);
    }
    fd = CreateFile((LPCTSTR) fName, GENERIC_WRITE, FILE_SHARE_WRITE, NULL,
                    CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (INVALID_HANDLE_VALUE == fd)
    {
        printf("threadDumpJvm: Open of file %s failed: %ld\n",
               fName, (long) GetLastError());
        (*env)->ReleaseStringUTFChars(env, fileName, fName);
        return((jint) -2L);
    }

    /* redirect stdout and stderr to our thread dump file */
    fdOut = GetStdHandle(STD_OUTPUT_HANDLE);
    fdErr = GetStdHandle(STD_ERROR_HANDLE);
    printf("fdOut=%ld fdErr=%ld\n",
           (long) GetStdHandle(STD_OUTPUT_HANDLE),
           (long) GetStdHandle(STD_ERROR_HANDLE));
    if (!SetStdHandle(STD_OUTPUT_HANDLE, fd))
        printf("SetStdHandle failed: %ld\n", (long) GetLastError());
    SetStdHandle(STD_ERROR_HANDLE, fd);
    printf("fdOut=%ld fdErr=%ld\n",
           (long) GetStdHandle(STD_OUTPUT_HANDLE),
           (long) GetStdHandle(STD_ERROR_HANDLE));

    if (0 == GenerateConsoleCtrlEvent(CTRL_BREAK_EVENT, 0)) // pid fails here????
    {
        retValue = (long) GetLastError();
        errText = "Generate CTRL-BREAK event failed";
    }
    else
    {
        /* this is kind of hokey but we have to wait for the dump to
           complete - 10 secs should be ok */
        Sleep(10000L);
    }
    printf("This is a test message\n");

    /* replace the original stdout and stderr file handles */
    SetStdHandle(STD_OUTPUT_HANDLE, fdOut);
    SetStdHandle(STD_ERROR_HANDLE, fdErr);
    CloseHandle(fd);
    (*env)->ReleaseStringUTFChars(env, fileName, fName);

    if (0L != retValue)
    {
        printf("threadDumpJvm: Error generating thread dump: %s\n", errText);
    }
    return((jint) retValue);
}
Remember – the file capture will not work here, it simply creates an empty file and the thread dump goes to the original STDOUT device.
Here is the command I used to create a Windows DLL using Microsoft Visual C++ 6.0:
cl -I. -I%JAVA_HOME%\include -I%JAVA_HOME%/include/win32 -LD libthreaddump_win32.c -Felibthreaddump.dll
That’s it. All the tools needed to request a thread dump any time you like. I used these tools with an ATG application cluster to research problems being reported by the ATG ServerMonitor component. The ATG ServerMonitor issues warning and error log messages for various reasons, like the JVM being low on memory or an application request thread executing for an extended period of time. In a future post I’ll discuss how I extended the ATG ServerMonitor to capture thread dumps under these conditions.
I just checked out the Amazon site for the best-selling Kindle edition computer books. Eight of the top ten are “for Dummies” books. WTF?
Ok, as Rod Serling used to say, “Submitted for your consideration”. So, consider the shocker site (visual instructions below):
And then check out our fearless leader Gettin’ Down!
Nuff said.
Of late I’ve been listening to some Colbie Caillat and I have to tell you, this girl has talent. Oh sure, she’s cute and all but close your eyes and listen to her sing. Not bad, eh? Colbie just released her first studio album, Coco, which in some parts of Texas means horse. That’s probably not why she picked that title but, it might be.
Give a listen and see what you think.
[Source: https://bwithers.wordpress.com/2007/12/ | CC-MAIN-2017-43 | en | refinedweb]
ionic start Ionic2GoogleMaps blank --v2
cd Ionic2GoogleMaps
Warning: since some of you have never worked with the Ionic CLI: from this point forward, every time I tell you to execute something, do it inside the project folder.
ionic platform add android
ionic platform add ios
3. Install Cordova Whitelist Plugin
cordova plugin add cordova-plugin-whitelist
<meta http-
Example
Ionic 2 Google Maps Example
Source Walkthrough
<script src=""></script>
<script src=""></script> <script src="cordova.js"></script> <script src="build/js/app.bundle.js"></script>
<script src=""></script>
Google Maps API warning: SensorNotRequired:
import {Page, Platform} from 'ionic/ionic';
constructor(platform: Platform) {
    this.platform = platform;
    this.initializeMap();
}
initializeMap() {
    this.platform.ready().then(() => {
        var minZoomLevel = 12;
        this.map = new google.maps.Map(document.getElementById('map_canvas'), {
            zoom: minZoomLevel,
            center: new google.maps.LatLng(38.50, -90.50),
            mapTypeId: google.maps.MapTypeId.ROADMAP
        });
    });
}
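To sanity-check the map object, a quick test is to drop a marker at the map centre once the map exists. This method is my own addition (not part of the tutorial) and assumes the initializeMap() code above has already run:

addCenterMarker() {
    // Assumes this.map was created in initializeMap()
    var marker = new google.maps.Marker({
        map: this.map,
        position: new google.maps.LatLng(38.50, -90.50),
        title: 'Map centre'
    });
}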
Continue to the next page
[Source: https://www.gajotres.net/ionic-2-integrating-google-maps/ | CC-MAIN-2017-43 | en | refinedweb]
beebs1 (posted July 10, 2007):

Hiya,

Does anyone know if MSVS 2005 will just omit any function calls to empty functions when it compiles a program? I know it's a bit of a strange question - the reason behind it is I'd like to take advantage of the OutputDebugString() function like so:

void DebugOut( const wchar_t *format, ... )
{
#ifdef _DEBUG
    // use va_start() etc. to build the string
    OutputDebugString( msgbuf );
#endif
}

I just want to check it won't cost me anything in a release build. Thanks for any help [smile]
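A common way to guarantee zero release-build cost, whatever the optimizer does with empty bodies, is to make the call itself disappear with a macro. A minimal sketch, my own illustration rather than anything from the thread:

#include <windows.h>
#include <cstdarg>
#include <cstdio>

void DebugOutImpl(const wchar_t* format, ...)
{
    wchar_t msgbuf[1024];
    va_list args;
    va_start(args, format);
    // _TRUNCATE: cut the message off rather than overflow the buffer
    _vsnwprintf_s(msgbuf, _TRUNCATE, format, args);
    va_end(args);
    OutputDebugStringW(msgbuf);
}

#ifdef _DEBUG
#define DEBUG_OUT(...) DebugOutImpl(__VA_ARGS__)
#else
#define DEBUG_OUT(...) ((void)0)  // the call site compiles to nothing in release
#endif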
[Source: https://www.gamedev.net/forums/topic/455126-empty-functions/ | CC-MAIN-2017-43 | en | refinedweb]
Euclid's Extended Algorithm
This is basically Euclid's algorithm for finding the greatest common divisor, modified to also return the numbers d, x, y, where d is the gcd of a and b and x, y satisfy the following equation: d = ax + by
def euclidExtended(a, b):
    if b == 0:
        return a, 1, 0
    dd, xx, yy = euclidExtended(b, a % b)
    # a // b (floor division) rather than int(a / b), which goes through
    # floating point in Python 3 and loses precision for large integers
    d, x, y = dd, yy, xx - (a // b) * yy
    return d, x, y
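A quick sanity check of the Bézout identity, using the function above:

d, x, y = euclidExtended(240, 46)
print(d, x, y)                    # 2 -9 47
assert d == 240 * x + 46 * y      # d = ax + by holds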
[Source: https://dzone.com/articles/euclids-extended-algorithm | CC-MAIN-2017-43 | en | refinedweb]
In Java, you can use the File class to check information about a file, such as its existence, modification date, and size.
The program below checks whether a certain file exists. Note that for the check to succeed, the path to the file must be correct.
import java.io.*;

public class checkFile {
    public static void main(String args[]) {
        File f = new File("C:\\Documents and Settings\\Rose\\My Documents\\NetBeansProjects\\SampleExercises\\src\\", "data.txt");
        if (f.exists()) {
            System.out.println(f.getName() + " exists.");
            System.out.println("The file is " + f.length() + " bytes long");
            if (f.canRead())
                System.out.println(" Ok to read.");
            if (f.canWrite())
                System.out.println(" ok to write.");
        } else {
            System.out.println("File does not exist.");
        }
    }
}
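As an aside, on Java 7 or later the java.nio.file API offers the same checks; a minimal sketch (my addition, not part of the original post):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class CheckFileNio {
    public static void main(String[] args) throws Exception {
        Path p = Paths.get("data.txt");
        if (Files.exists(p)) {
            System.out.println(p.getFileName() + " exists, " + Files.size(p)
                    + " bytes, readable: " + Files.isReadable(p));
        } else {
            System.out.println("File does not exist.");
        }
    }
}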
[Source: http://www.teeachertechiesays.com/2010/01/file-class-of-java.html | CC-MAIN-2017-43 | en | refinedweb]
I am new to applets and came across something unusual in one of the programs I went through. The applet HTML tag is embedded as a comment inside the applet's Java file (.java):
import java.awt.*;
import java.applet.*;
/* <applet code="MyApplet" width=100 height=50></applet> */
// why do we put this comment here, and how is it executed??

class MyApplet extends Applet
{
    public void paint(Graphics g)
    {
        g.drawString("A Simple Applet", 100, 100);
    }
}
/*<applet code="MyApplet" width=100 height=50></applet> */
How is the above comment getting executed? Aren't comments meant to be skipped?
But how does it override the usual mechanism of skipping the comment lines?
You're thinking of the compiler, not the applet viewer. They're different tools that work in different ways.
Does it parse comments too?
AFAIU the applet viewer only parses the Java source code (note, source code, not the binary) up to the class definition. By design it ignores any code that is not in a comment and inspects only the commented code for possible applet tags. This new functionality was introduced around Java 1.7 or 1.6 AFAIR.
I didn't get this point -'By design it ignores any code that is not in a comment'. Does this mean it only looks for comments and then further selects the comments with applet tag?
Sorry, now you repeat it back to me, it is unclear. "Does this mean it only looks for comments and then further selects the comments with applet tag?" Yep. And thanks, that's a much better way to express what I meant.
I'll close by adding some comments by @zakki that I think are too informative to be hidden in a comments section:
@zakki: I think appletviewer MyApplet.java parses MyApplet.java as HTML, and it ignores unsupported elements like import java.awt.*;. AppletViewer Tags
@zakki: It seems that appletviewer just processes the input as a token stream and it doesn't care about Java/HTML syntax. gist.github.com/zakki/b5176a7d37fa3938f0646d11d9bd01a5
[Source: https://codedump.io/share/Cnh392K2FaS1/1/java-applet-commented-applet-tag | CC-MAIN-2017-43 | en | refinedweb]
commit 46037fcb88973e1dff0489196126c58baf69e7f5
Author: Simon Schubert <corecode@dragonflybsd.org>
Date:   Sat Sep 12 16:41:25 2009 +0200

    atomic.h: use system namespace for function arguments

    We shouldn't use non-system arguments for function arguments in
    headers, since it might collide with other symbols. In this case it
    collided with exp(3) from math.h, both included by top(1) (with
    indirections).

    Reported-by: swildner@

Summary of changes:
 sys/cpu/amd64/include/atomic.h | 12 ++++++------
 sys/cpu/i386/include/atomic.h  | 18 +++++++++---------
 2 files changed, 15 insertions(+), 15 deletions(-)

--
DragonFly BSD source repository
[Source: https://www.dragonflybsd.org/mailarchive/commits/2009-09/msg00147.html | CC-MAIN-2017-43 | en | refinedweb]
Introduction
This is the chapter web page to support the content in Chapter 12 of the book: Exploring Raspberry Pi – Interfacing to the Real World with Embedded Linux. The summary introduction to the chapter is as follows:
This chapter describes how to configure the Raspberry Pi (RPi) to be a web server that uses various server-side scripting techniques to display sensor data. Next, custom C/C++ code is described that can push sensor data to the Internet and to platform as a service (PaaS) offerings, such as ThingSpeak and the IBM Bluemix IoT service (using MQTT). Finally, a client/server pair for high-speed Transmission Control Protocol (TCP) socket communication is described. The latter part of the chapter introduces some techniques for managing distributed RPi sensors, and physical networking topics: setting the RPi to have a static IP address; and using Power over Ethernet (PoE) with the RPi. By the end of this chapter, you should be able to build your own full-stack IoT devices.
After completing this chapter, you should hopefully be able to do the following:
- Install and configure a web server on the RPi and use it to display static HTML content.
- Enhance the web server to send dynamic web content that uses CGI scripts and PHP scripts to interface to RPi sensors.
- Write the code for a C/C++ client application that can communicate using either HTTP or HTTPS.
- Interface to platform as a service (PaaS) offerings, such as ThingSpeak and IBM Bluemix IoT, using HTTP and MQTT.
- Use the Linux cron scheduler to structure workflow on the RPi.
- Send e-mail messages directly from the RPi and utilize them as a trigger for web services such as IFTTT.
- Build a C++ client/server application that can communicate at a high speed and a low overhead between any two TCP devices.
- Manage remote RPi devices, using monitoring software and watchdog code, to ensure that deployed services are robust.
- Configure the RPi to use Wi-Fi adapters and static IP addresses, and wire the RPi to utilize Power over Ethernet (PoE).
Updates
Updates to the Paho Project
The paho project that is described on page 514/515 has moved from git.eclipse.org to github.com. You can follow the steps in the chapter by changing the git URI as follows:
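For reference, a clone command along these lines should work (the exact repository name is my assumption based on the current Eclipse Paho layout on GitHub, not text from the book):

git clone https://github.com/eclipse/paho.mqtt.c.git
cd paho.mqtt.c
make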
Digital Media Resources
Below are some high-resolution images of the circuits described in the book. They are reproduced in colour and can be printed at high resolution to facilitate you in building the circuits.
The Client/Server Models Described in the Chapter
Heatsinks and the RPi
The IBM Watson Bluemix IoT Example
Errata
None for the moment
Recommended Books on the Content in this Chapter
In Listing 12.6 I believe the line
SocketClient sc( “thingpeak.com”,80);
should be
SocketClient sc(“api.thingspeak.com”, 80);
Thanks, it must have been updated and that is very useful to know. Kind regards, Derek.
Hi Derek,
Love your book. Love it to bits. But I got stuck on page 502 trying to install OpenSSL.
pi@raspberrypi:~ $ sudo apt install openssl libssl-dev
Reading package lists… Done
Building dependency tree
Reading state information… Done
openssl is already the newest version.
The following NEW packages will be installed:
libssl-dev libssl-doc
0 upgraded, 2 newly installed, 0 to remove and 0 not upgraded.
Need to get 0 B/2,262 kB of archives.
After this operation, 6,480 kB of additional disk space will be used.
Selecting previously unselected package libssl-dev:armhf.
dpkg: unrecoverable fatal error, aborting:
unable to open files list file for package `libvorbisfile3:armhf’: Structure needs cleaning
E: Sub-process /usr/bin/dpkg returned an error code (2)
Steve
RPi3, default O/S, Linux newbie
Hello derek,
On your example for “IBM IoT MQTT C++” on page 520, the code exits with an rc state of 5.
I cannot figure out the issue with the same…
// The four header names below were stripped by the page when it rendered
// angle brackets; these are my reconstruction based on what the code uses.
#include <iostream>
#include <sstream>
#include <fstream>
#include <cstring>
#include "MQTTClient.h"
#define CPU_TEMP "/sys/class/thermal/thermal_zone0/temp"
using namespace std;
#define ADDRESS "tcp://(MY_URL).messaging.internetofthings.ibmcloud.com:1883"
#define CLIENTID "d:(MY_URL):RaspberryPi:erpi01"
#define AUTHMETHOD "use-token-auth"
#define AUTHTOKEN "(MY_TOKEN)"
#define TOPIC "iot-2/evt/status/fmt/json"
#define QOS 1
#define TIMEOUT 10000L
float getCPUTemperature() {
    int CPUTemp;
    fstream fs;
    fs.open(CPU_TEMP, fstream::in);
    fs >> CPUTemp;
    fs.close();
    return (((float)CPUTemp)/1000);
}

int main(int argc, char* argv[]) {
    MQTTClient client;
    MQTTClient_connectOptions opts = MQTTClient_connectOptions_initializer;
    MQTTClient_message pubmsg = MQTTClient_message_initializer;
    MQTTClient_deliveryToken token;
    MQTTClient_create(&client, ADDRESS, CLIENTID, MQTTCLIENT_PERSISTENCE_NONE, NULL);
    opts.keepAliveInterval = 20;
    opts.cleansession = 1;
    opts.username = AUTHMETHOD;
    opts.password = AUTHTOKEN;
    int rc;
    if ((rc = MQTTClient_connect(client, &opts)) != MQTTCLIENT_SUCCESS) {
        cout << "Failed to connect " << rc << endl;
        return -1;
    }
    stringstream message;
    message << "{\"d\":{\"Temp\":" << getCPUTemperature() << "}}";
    pubmsg.payload = (char*) message.str().c_str();
    pubmsg.payloadlen = message.str().length();
    pubmsg.qos = QOS;
    pubmsg.retained = 0;
    MQTTClient_publishMessage(client, TOPIC, &pubmsg, &token);
    cout << "Waiting for " << (int)(TIMEOUT/1000) << " seconds for pub of "
         << message.str() << "\non topic " << TOPIC << " for ClientID: " << CLIENTID << endl;
    rc = MQTTClient_waitForCompletion(client, token, TIMEOUT);
    cout << "Message with token " << (int)token << " delivered" << endl;
    MQTTClient_disconnect(client, 10000);
    MQTTClient_destroy(&client);
    return rc;
}
Do we have to do anything else after creating a device on IBM bluemix..??
Frankly they have changed the entire interface of bluemix.
On page 510, the setting “hostname=myaccountname@gmail.com” in /etc/ssmtp/ssmtp.config caused the error “ssmtp: Cannot open smtp.gmail.com:587”. Changing it to “hostname=raspberrypi” which is the hostname of my raspberry pi corrected the problem.
[Source: http://exploringrpi.com/chapter12/ | CC-MAIN-2017-43 | en | refinedweb]
>
> Is it this that is creating an NPE in TypeScope ??
>
I got an exception, traced it to TypeScope, made a minimal intervention
(null check) and the exception went away.
> I'm curious here because IIUC, hiding an exception there can have very bad
> side effects, for example make the compiler to accept classes with no
> import type.
>
The exception I got wasn't caught anywhere in the stack. It made it all the
way out. Can't imagine the compiler 'relying' on an uncaught exception...
Anyway, I don't care all that much, and I might be wrong, so feel free
(to ask me) to revert.
EdB
> Frédéric THOMAS
>
>
> ----------------------------------------
> > From: erik@ixsoftware.nl
> > Date: Wed, 1 Jul 2015 18:48:24 +0200
> > Subject: Re: [4/5] git commit: [flex-falcon] [refs/heads/develop] - Fix
> uncaught exception
> > To: dev@flex.apache.org
> >
> > I made the USMapCoords a separate class, as the FlexJS emitter isn't set
> up
> > to emit two 'exportSymbols' statements per JS file, even though the file
> > might have two classes. Anyway, that didn't help much getting the release
> > version working, but I figured that any use case is a proper use case,
> so I
> > made the exception go away anyway ;-)
> >
> > EdB
> >
> >
> >
> > On Wed, Jul 1, 2015 at 6:17 PM, Frédéric THOMAS <webdoublefx@hotmail.com
> >
> > wrote:
> >
> >> I fixed the imports already, and I'm about to give a try to write a new
> >> compiler pass for it in replacement of the fix, it might be that in the
> >> current fix, I didn't catch all the cases but it was working fine for
> all
> >> the 3 existing externs we have, can you tell me what have you change
> before
> >> this exception has been raised ?
> >>
> >> Thanks,
> >> Frédéric THOMAS
> >>
> >>
> >> ----------------------------------------
> >>> From: erikdebruin@apache.org
> >>> To: commits@flex.apache.org
> >>> Date: Wed, 1 Jul 2015 15:46:37 +0000
> >>> Subject: [4/5] git commit: [flex-falcon] [refs/heads/develop] - Fix
> >> uncaught exception
> >>>
> >>> Fix uncaught exception
> >>>
> >>> Found this one while trying to compile a modified version of Fred's
> >> JQuery externs example. I know next to nothing about Falcon, so if more
> >> enlightened folks can trace this back to the root cause, that would be
> >> lovely :-)
> >>>
> >>> Signed-off-by: Erik de Bruin <erik@ixsoftware.nl>
> >>>
> >>>
> >>> Project:
> >>> Commit:
> >>
> >>> Tree:
> >>> Diff:
> >>>
> >>> Branch: refs/heads/develop
> >>> Commit: fe8d704616c8b5b9059703ebd4c9ec31d7b63747
> >>> Parents: 099263d
> >>> Author: Erik de Bruin <erik@ixsoftware.nl>
> >>> Authored: Wed Jul 1 17:43:19 2015 +0200
> >>> Committer: Erik de Bruin <erik@ixsoftware.nl>
> >>> Committed: Wed Jul 1 17:43:19 2015 +0200
> >>>
> >>> ----------------------------------------------------------------------
> >>> .../src/org/apache/flex/compiler/internal/scopes/TypeScope.java | 4
> ++++
> >>> 1 file changed, 4 insertions(+)
> >>> ----------------------------------------------------------------------
> >>>
> >>>
> >>>
> >>
>
> >>> ----------------------------------------------------------------------
> >>> diff --git
> >> a/compiler/src/org/apache/flex/compiler/internal/scopes/TypeScope.java
> >> b/compiler/src/org/apache/flex/compiler/internal/scopes/TypeScope.java
> >>> index 4e49e9f..8723fe2 100644
> >>> ---
> >> a/compiler/src/org/apache/flex/compiler/internal/scopes/TypeScope.java
> >>> +++
> >> b/compiler/src/org/apache/flex/compiler/internal/scopes/TypeScope.java
> >>> @@ -341,6 +341,10 @@ public class TypeScope extends ASScope
> >>> Collection<IDefinition> sDefs = new
> >> FilteredCollection<IDefinition>(STATIC_ONLY_PREDICATE, defs);
> >>> for (ITypeDefinition type : owningType.staticTypeIterable(project,
> >> false))
> >>> {
> >>> + if (type == null)
> >>> + {
> >>> + continue;
> >>> + }
> >>> ASScope typeScope = (ASScope)type.getContainedScope();
> >>> typeScope.getLocalProperty(project,
> >>> // Only lookup static properties in this scope - for any inherited
> >> scopes, we should lookup instance properties
> >>>
> >>
> >>
> >
> >
> >
> > --
> > Ix Multimedia Software
> >
> > Jan Luykenstraat 27
> > 3521 VB Utrecht
> >
> > T. 06-51952295
> > I.
>
>
--
Ix Multimedia Software
Jan Luykenstraat 27
3521 VB Utrecht
T. 06-51952295
I.
[Source: http://mail-archives.apache.org/mod_mbox/flex-dev/201507.mbox/%3CCAJs+wW0zX7AVEVcDioN4VVWQ=5gLC42FobtdJDTDsgZPpfGuhA@mail.gmail.com%3E | CC-MAIN-2017-43 | en | refinedweb]
The QgsAdvancedDigitizingDockWidget class is a dockable widget used to handle the CAD tools on top of a selection of map tools. More...
#include <qgsadvanceddigitizingdockwidget.h>
The QgsAdvancedDigitizingDockWidget class is a dockable widget used to handle the CAD tools on top of a selection of map tools.
It handles both the UI and the constraints. Constraints are applied by implementing filters called from QgsMapToolAdvancedDigitizing.
Definition at line 42 of file qgsadvanceddigitizingdockwidget.h.
Additional constraints which can be enabled.
Definition at line 65 of file qgsadvanceddigitizingdockwidget.h.
The CadCapacity enum defines the possible constraints to be set depending on the number of points in the CAD point list (the list of points currently digitized)
Definition at line 54 of file qgsadvanceddigitizingdockwidget.h.
Create an advanced digitizing dock widget.
Definition at line 38 of file qgsadvanceddigitizingdockwidget.cpp.
Additional constraints are used to place perpendicular/parallel segments to snapped segments on the canvas.
Definition at line 245 of file qgsadvanceddigitizingdockwidget.h.
Adds point to the CAD point list.
Definition at line 967 of file qgsadvanceddigitizingdockwidget.cpp.
align to segment for additional constraint.
If additional constraints are used, this will determine the angle to be locked depending on the snapped segment.
Definition at line 666 of file qgsadvanceddigitizingdockwidget.cpp.
apply the CAD constraints.
The will modify the position of the map event in map coordinates by applying the CAD constraints.
Definition at line 518 of file qgsadvanceddigitizingdockwidget.cpp.
determines if CAD tools are enabled or if map tools behave "normally"
Definition at line 239 of file qgsadvanceddigitizingdockwidget.h.
Filter key events to e.g.
toggle construction mode or adapt constraints
Definition at line 707 of file qgsadvanceddigitizingdockwidget.cpp.
Clear any cached previous clicks and helper lines.
Definition at line 738 of file qgsadvanceddigitizingdockwidget.cpp.
Removes all points from the CAD point list.
Definition at line 991 of file qgsadvanceddigitizingdockwidget.cpp.
Constraint on a common angle.
Definition at line 255 of file qgsadvanceddigitizingdockwidget.h.
Constraint on the angle.
Definition at line 247 of file qgsadvanceddigitizingdockwidget.h.
Constraint on the distance.
Definition at line 249 of file qgsadvanceddigitizingdockwidget.h.
Constraint on the X coordinate.
Definition at line 251 of file qgsadvanceddigitizingdockwidget.h.
Constraint on the Y coordinate.
Definition at line 253 of file qgsadvanceddigitizingdockwidget.h.
construction mode is used to draw intermediate points. These points won't be given any further (i.e. to the map tools)
Definition at line 242 of file qgsadvanceddigitizingdockwidget.h.
The last point.
Helper for the CAD point list. The CAD point list is the list of points currently digitized. It contains both "normal" points and intermediate points (construction mode).
Definition at line 1070 of file qgsadvanceddigitizingdockwidget.cpp.
Disable the widget.
Normally done automatically from QgsMapToolAdvancedDigitizing::deactivate().
Definition at line 947 of file qgsadvanceddigitizingdockwidget.cpp.
Definition at line 920 of file qgsadvanceddigitizingdockwidget.cpp.
return the action used to enable/disable the tools
Definition at line 315 of file qgsadvanceddigitizingdockwidget.h.
Disables the CAD tools when hiding the dock.
Definition at line 158 of file qgsadvanceddigitizingdockwidget.cpp.
Definition at line 744 of file qgsadvanceddigitizingdockwidget.cpp.
The penultimate point.
Helper for the CAD point list. The CAD point list is the list of points currently digitized. It contains both "normal" points and intermediate points (construction mode).
Definition at line 1090 of file qgsadvanceddigitizingdockwidget.cpp.
Sometimes a constraint may change the current point out of a mouse event.
This happens normally when a constraint is toggled.
The number of points in the CAD point helper list.
Definition at line 302 of file qgsadvanceddigitizingdockwidget.h.
Remove any previously emitted warnings (if any)
The previous point.
Helper for the CAD point list. The CAD point list is the list of points currently digitized. It contains both "normal" points and intermediate points (construction mode).
Definition at line 1080 of file qgsadvanceddigitizingdockwidget.cpp.
Push a warning.
unlock all constraints
Definition at line 261 of file qgsadvanceddigitizingdockwidget.cpp.
Configures list of current CAD points.
Some map tools may find it useful to override list of CAD points that is otherwise automatically populated when user clicks with left mouse button on map canvas.
Definition at line 773 of file qgsadvanceddigitizingdockwidget.cpp.
Snapped to a segment.
Definition at line 312 of file qgsadvanceddigitizingdockwidget.h.
Is it snapped to a vertex.
Definition at line 307 of file qgsadvanceddigitizingdockwidget.h.
Updates the canvas item that displays constraints on the map.
Definition at line 962 of file qgsadvanceddigitizingdockwidget.cpp.
[Source: http://www.qgis.org/api/classQgsAdvancedDigitizingDockWidget.html | CC-MAIN-2017-43 | en | refinedweb]
Welcome to Microsoft's Custom Speech Service. Custom Speech Service is a cloud-based service that provides users with the ability to customize speech models for Speech-to-Text transcription. To use the Custom Speech Service, refer to the Custom Speech Service Portal.
What is the Custom Speech Service
The Custom Speech Service enables you to create customized language models and acoustic models tailored to your application and your users. By uploading your specific speech and/or text data to the Custom Speech Service, you can create custom models that can be used in conjunction with Microsoft’s existing state-of-the-art speech models.
For example, if you’re adding voice interaction to a mobile phone, tablet or PC app, you can create a custom language model that can be combined with Microsoft’s acoustic model to create a speech-to-text endpoint designed especially for your app. If your application is designed for use in a particular environment or by a particular user population, you can also create and deploy a custom acoustic model with this service.
How do speech recognition systems work?
Speech recognition systems are composed of several components that work together. Two of the most important components are the acoustic model and the language model.
The acoustic model is a classifier that labels short fragments of audio as one of a number of phonemes, or sound units, in a given language. For example, the word “speech” is composed of four phonemes: “s p iy ch”. These classifications are made on the order of 100 times per second.
The language model is a probability distribution over sequences of words. The language model helps the system decide among sequences of words that sound similar, based on the likelihood of the word sequences themselves. For example, “recognize speech” and “wreck a nice beach” sound alike but the first hypothesis is far more likely to occur, and therefore will be assigned a higher score by the language model.
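To make that concrete, here is a toy bigram scorer (my own illustration, not part of the service; the probabilities are invented, and real systems work in log space over far larger models):

# P(sentence) ≈ product of P(word | previous word)
bigram_prob = {
    ("recognize", "speech"): 0.01,
    ("wreck", "a"): 0.005,
    ("a", "nice"): 0.01,
    ("nice", "beach"): 0.001,
}

def score(words, start_prob=0.001):
    p = start_prob  # fixed probability for the first word, for simplicity
    for prev, cur in zip(words, words[1:]):
        p *= bigram_prob.get((prev, cur), 1e-6)
    return p

print(score(["recognize", "speech"]))          # 1e-05
print(score(["wreck", "a", "nice", "beach"]))  # 5e-11, far less likely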
Both the acoustic and language models are statistical models learned from training data. As a result, they perform best when the speech they encounter when used in applications is similar to the data observed during training. The acoustic and language models in the Microsoft Speech-To-Text engine have been trained on an enormous collection of speech and text and provide state-of-the-art performance for the most common usage scenarios, such as interacting with Cortana on your smart phone, tablet or PC, searching the web by voice or dictating text messages to a friend.
Why use the Custom Speech Service
While the Microsoft Speech-To-Text engine is world-class, it is targeted toward the scenarios described above. However, if you expect voice queries to your application to contain particular vocabulary items, such as product names or jargon that rarely occur in typical speech, it is likely that you can obtain improved performance by customizing the language model.
For example, if you were building an app to search MSDN by voice, it’s likely that terms like “object-oriented” or “namespace” or “dot net” will appear more frequently than in typical voice applications. Customizing the language model will enable the system to learn this.
For more details about how to use the Custom Speech Service, refer to the Custom Speech Service Portal.
[Source: https://docs.microsoft.com/en-us/azure/cognitive-services/custom-speech-service/cognitive-services-custom-speech-home | CC-MAIN-2017-43 | en | refinedweb]
Details
Description
After upgrading from 2.5.2 to 2.5.10, I get a NullPointerException when calling getText("myKey") in a utility class like this:
public class ListGenerator extends ActionSupport {
    public final List<String> getValues() {
        final List<String> result = new ArrayList<>();
        result.add(getText("select.header"));
        result.add(getText("register.female"));
        result.add(getText("register.male"));
        return result;
    }
}
java.lang.NullPointerException at com.opensymphony.xwork2.ActionSupport.getLocale(ActionSupport.java:64)
....
ERROR org.apache.struts2.dispatcher.DefaultDispatcherErrorHandler - Exception occurred during processing request: java.lang.NullPointerException
Activity
When you create a new instance of the action by yourself, the DI won't work and the needed components won't be injected.
If you really want to get a localized message through Struts2, try LocalizedTextUtil; otherwise just load the resource bundle and use it.
Thanks! That's it...
The behaviour changed between 2.5.2 and 2.5.10. But this is not an issue and can be easily fixed in the project. Done!
Sorry for that, but I want to use DI as much as possible to get rid of those static methods flying around. You can always use the container to inject dependencies; in an action it can be done like this:
public String execute() throws Exception {
    ListGenerator generator = container.inject(ListGenerator.class);
    ....
}
The problem appears when calling the utility class from another class like this:
[Source: https://issues.apache.org/jira/browse/WW-4740 | CC-MAIN-2017-43 | en | refinedweb]
HOUSTON (ICIS)--US-listed Taminco on Tuesday confirmed plans for a joint-venture choline chloride facility in Louisiana.
Financial or capacity details were not disclosed.
Taminco said that the new plant would serve a growing North American market, which is underpinned by increasing demand for choline chloride as a clay stabiliser in oil and gas drilling and hydraulic fracturing applications.
Choline chloride is also used as an additive in animal feed.
[Source: https://www.icis.com/resources/news/2014/02/25/9756415/taminco-confirms-plans-for-choline-chloride-facility-in-louisiana/ | CC-MAIN-2017-43 | en | refinedweb]
Well, here's a write-up on runtime font embedding in ActionScript 3, put together with Adobe Flex Builder 2 and the Flex SDK 2 frameworks rather than the Flash IDE.
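To recap the technique the comments below discuss, here is a minimal sketch: compile the font into its own SWF (a class with an [Embed] static var, as in the comments), load that SWF at runtime, fetch the font class from the loaded application domain, and register it. The class and file names (_Arial, _Arial.swf) follow the comments; the rest of the wiring is my reconstruction, not the post's exact code.

package {
    import flash.display.Loader;
    import flash.display.Sprite;
    import flash.events.Event;
    import flash.net.URLRequest;
    import flash.text.Font;

    public class FontLoaderSketch extends Sprite {
        public function FontLoaderSketch() {
            // _Arial.swf is compiled separately from a class holding:
            //   [Embed(source='C:/WINDOWS/Fonts/ARIAL.TTF', fontName='_Arial')]
            //   public static var _Arial:Class;
            var loader:Loader = new Loader();
            loader.contentLoaderInfo.addEventListener(Event.COMPLETE, fontLoaded);
            loader.load(new URLRequest("_Arial.swf"));
        }

        private function fontLoaded(event:Event):void {
            // Fetch the font holder class out of the loaded SWF's domain...
            var FontLibrary:Class =
                event.target.applicationDomain.getDefinition("_Arial") as Class;
            // ...and register the embedded font so embedFonts TextFields can use it
            Font.registerFont(FontLibrary._Arial);
        }
    }
}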
Interesting, I am working on a similar app and will try this.
Can you point to similar hack(?!) for AS2…?
Cheers!
is one.
is another. I know there are others out there but these are the 2 that come to mind.
Scott Morgan
Thanks for the info, this will undoubtedly come in useful at some point…
@xmpcray – I did another writeup on dynamically loading fonts into AS2 here:
Great article Scott. Definitely submit the bug to the bugbase. I submitted an enhancement request the other day and it was quickly assigned to an engineer and properly classified. Doesn’t guarantee they’ll fix it but shows they’re definitely taking community-submitted items seriously.
Thanks Scott and Kelvin. Very useful info. Immense help
Hi,
here’s some more info about runtime font loading in AS3 and Flash CS3. You may find this useful if you’re compiling with the Flash IDE:
Hello Scott,
Good article, very handy stuff. I find myself with a problem when compiling your stuff in flex 3 beta for eclipse: The swf for the font class does not get created. What are you compiling this with and do i need to create a css and compile it as SWF or should building the project make one from the class automatically? i feel i’m missing something here.
Cheers
Hi RedB,
I have not tried this method yet in Flex 3. I used FlexBuilder 2. Make sure you set the font class as a default application and compile it seperatly from the FontLoader class.
Ah that did the trick, setup a separate project for font files and output to a font swfs dump. Works like a charm.
Thanks again for the help.
Hey there, I’m trying to compile the _Arial class. I set up a separate project and imported the path to the _Arial class. but I stop right there because I’m not sure what else I should do. I see you have a static prop, should I call it like any other static prop?
Hi, I want to learn flash. Do you guys know where I can take flash class in bay area?
Hey there,
Been using this technique since first reading it and it's been great. However, I recently noticed something I had overlooked before, because I wasn't reading the content of the text displayed in the embedded font (it's just testing stuff, right?). Anyway, the problem is that when generating the SWFs with Flex Builder 2.0.1, it seems to create graphic artifacts and deformations depending on the original font file. The following link shows an image of some characters being deformed after the embed. Those are for Arial, but after testing with multiple fonts it's pretty much an across-the-board thing for most font files I tried. Am I missing something or is this something you noticed too?
Link to example:
ooops wrong link. this is right:
Very helpful 411!…Thanks!
How would you implement a multiple word named font, for ex: Trebuchet MS?
How would you also implement that font’s bold version, for ex: Trebuchet MS Bold?
Thanks again!
Heh, tnx.
I searching for this for while.
Great article on runtime embedding fonts!
I’m creating an “online logo-maker” application.
I want to (a) have a “library” of bitmap & vector objects a user can drag onto the “work area”; and (b) have user-specific libraries. I’m planning on using PhP or ASP for the user to be able to upload bitmap & vector images to the server (they will be converted on the fly to WMF). I want the Flash (cs3) application to be able to reference that folder of images so the user can also drag those onto his “work area”. Is this a runtime possibility? I don’t even know where to get started so if you have ideas, please share!
Thanks!
I have run into an issue with embedding fonts that are “extra bold” or “black” rather than just simply “bold” or “italic”. When it goes to compile, mxmlc reports that it can’t find the font with the properties specified. I fixed it by modifying the TrueType subfamily name to “Normal” using a Perl script. It’s a bad hack but it works. Now I can load my font even if it doesn’t match a “standard” weight or style.
I am having trouble at this line
var FontLibrary:Class = event.target.applicationDomain.getDefinition("_Arial") as Class;
I can’t seem to get a swf file with the font embedded in it. could you post or send the flex / flash files?
Thanks so much
Adam
I’m having the exact same problem as Adam above, it falls over with:
ReferenceError: Error #1065: Variable _Arial is not defined.
at flash.system::ApplicationDomain/getDefinition()
at net.xerode.test::FontTest/fontLoaded()
I’m using CS3 to publish and not Flex, would this be an issue?
Hey,
I get “VerifyError: Error #1014: Class IFlexAsset could not be found” just as soon as the font file created by the above method finishes loading
Any ideas?
Scott,
I like where you’re going with this, but from a design stand-point, I have suggestion.
If you set up an interface to your _Arial class like IFontFactory, you can set it up like this:
interface IFontFactory{
function getRegularFontClass():Class;
function getBoldFontClass():Class;
function getItalicsFontClass():Class;
}
Then you can change your fontLoaded method to look more like this:
var FontLibrary:Class = IFontFactory(evt.target.content).getRegularFontClass();
Font.registerFont(FontLibrary);
This way, you can abstract the font registration process.
Then again…perhaps you could just make the Font.registerFont() call in the constructor of _Arial in the first place? I’ll have to play around with that
For those getting ‘Variable _Arial is not defined’ I had the same error because my _Arial.as file was in a package like com.mydomain.myproject.resources.fonts so when you get the definition it should be like so…
var FontLibrary:Class = e.target.applicationDomain.getDefinition("com.mydomain.myproject.resources.fonts._Arial") as Class;
Well,
Quite disappointing that you don’t respond to questions from people about the very subjects you so expertly write about.
Works like a charm, thanks!!!
I use FlashCS3 and Flex3 . It works only in the debug version of flashplayer
Scott,
Spent the last day or so trying to figure this out. I generated the font swf file using flex builder, but when i load that swf using flash cs3 i get “The SWF file … contains invalid data.” when it tries to register the font. Have you tried publishing using flash cs3 or just flex?
Cheers.
Sigh, nevermind, i figured it out. Thanks for this write up.
Hi,
Sweet stuff.
But i have ran into a problem when trying to make this dynamical, as im making a multilanguage site and japanees aint the favorit one for embaded fonts. So I’m sending in two variables, the url to the font swf and the font name.
public class FontLoader extends Sprite {
var _fontName:String;
public function FontLoader(fontUrl:String, fontName:String) {
trace(“FontLoader”);
_fontName = fontName;
loadFont(fontUrl);
}
… all thigns works fine down to here:..
private function fontLoaded(event:Event):void {
var FontLibrary:Class = event.target.applicationDomain.getDefinition(_fontName) as Class;
trace(typeof(FontLibrary)); // traces object
trace(typeof(FontLibrary._fontName)); // traces string , should trace object to work i guess.
Font.registerFont(FontLibrary._fontName); // creates the error: 1067: Implict coercion of a value of type String to an unrelated type Class.
drawText();
}
…
}
Btw, I’m kind of new to AS3, so any of you have a solution/suggestion on how to solve this?
I think i have solved it now.
I changed the code in flex 3, so that i registrer the font there and looks like its workin.
package {
import flash.display.Sprite;
import flash.text.Font;
Font.registerFont(_arial)
}
}
then in my FontLoader.as all i needed in the fontLoaded function is the drawText
private function fontLoaded(event:Event):void {
drawText();
}
So now i can start trying to load multiple fonts and font variants.
Anybody got any thoughts on how to embed a suitcase font using this method?
MikeyAction – extract the individual fonts from the suitcase, there are a bunch of freeware programs that do that.
MikeyAction – extract the individual fonts from the suitcase, there are a bunch of freeware programs that do that.
Guess what your not alone. I’m down to a single hair. No, i’m not kidding. Flex newbies be aware. Its the coaster you can’t get off. I’m just glad flex will never be installed on say a fly by wire system otherwise you may as well just slit your throat before you get on the plane. lol, NOT!
This is issue is also in flex builder 3,
thanks for solution.
Hey Scott
You are still helping me even after leaving Innovasium!
Thanks for the write up…
For the people asking questions on how to embed multiple fonts.. and btw with CS4, you can do this as well (using the Flex 3 Library Path).
So for multiple font embedding, you can simply do something like this in your class file
[Embed(source='C:/windows/Fonts/HelveticaNeueLTStd-Cn.otf', fontName='HelveticaNeueLTStdCn', mimeType='application/x-font-opentype', unicodeRange='U+0041-U+005A,U+00C0-U+00DE')]
public static var HelveticaNeueLTStdCn:Class;
[Embed(source='C:/windows/Fonts/HelveticaLTStd-Roman.otf', fontName='HelveticaLTStdRoman', mimeType='application/x-font-opentype', unicodeRange='U+0041-U+005A')]
public static var HelveticaLTStdRoman:Class;
And then register both fonts:
Font.registerFont(HelveticaNeueLTStdCn);
Font.registerFont(HelveticaLTStdRoman);
Thanks again Scott..
Regards,
Tehsin
Hello,
I am doin it in Flex3,and i am using it in an mxml.i mean loading the _Arial.swf in my application.I am getting the same error- Error #1065: Variable _Arial is not defined.
Also my _Arial class is in the src folder and know more packages.Plz help.
I implemented this as a Flex Component. Check it out.
Has this changed recently, I added the unicodeRange bit in to my font loader to try to decrease the size and it actually increased!!
I too was experiencing what #12 was with regards to the glyph errors. I filed a bug with Adobe about it and they suggested that I try compiling my app using a different font manager:
There is only one thing wrong with this method; they do not support embedding Post Script fonts from Flex/ActionScript/mxmlc… pretty much anything other than Flash CS3/4:
Otherwise it works great!
Thanks a lot, the need to clean the project bug exists in Flex builder 3 as well. I would have lost a lot of time learning that if I didn’t see your blog.
thanks,
very useful
Hi,
can u please the send me the sample in Action Script 2.0
b’coz I am using the AS2.0 only, I seen the “shared fonts manager” and download the (source)zip file, but the source in Action script1.0. Please Help me. Thanx in Advance.
I am having a problem with this. I can only get it to work when the static var name is the same name as the class name. For example if my class name is _Arial and I embed the Arial font and give the static var a name of _Arial it works great. But if I change the name of the static var to something else (and update my embed code’s fontName accordingly) I don’t get errors on the font register or load but I don’t see any text on screen. Has anyone had this issue? I have seen examples where the name of that var is different from the class and it works fine. Could anyone help???:S
After trying everything I still cant get a single textfield to display both bold and regular characters. Everything is embedded, all fonts are on the stage and wherever I could place them. All variations are embedded, but nothing works.
This was a massive help on a recent project you are a super star thanks for sharing.
Hi, isn’t it possible to use Flash to make the font swfs? I still want to use these fonts in flex, but creating the fonts swfs in flash will reduce the file size since the flex framework is not compiled. But maybe just compiling the framework seperatly has the same result…
Ok, so you want normal, bold and italic also…
Just add those fonts to your swf class like this:
[Embed(source='C:/WINDOWS/Fonts/ARIAL.TTF', fontName='_Arial')]
public static var _Arial:Class;
[Embed(source='C:/WINDOWS/Fonts/ARIALBD.TTF', fontWeight='bold', fontName='_Arial')]
public static var _ArialB:Class;
[Embed(source='C:/WINDOWS/Fonts/ARIALI.TTF', fontStyle='italic', fontName='_Arial')]
public static var _ArialI:Class;
[Embed(source='C:/WINDOWS/Fonts/ARIALBI.TTF', fontWeight='bold', fontStyle='italic', fontName='_Arial')]
public static var _ArialBI:Class;
As you can see, you have to set the proper fontstyle and fontweight values. Also keep the same fontName…
Now when loading the font make sure to register all 4 fonts in your class… I do this like this:
var FontLibrary:Class = event.target.applicationDomain.getDefinition('_Arial') as Class;
var fl:Object = ObjectUtil.copy(FontLibrary);
for (var i:Object in fl)
    Font.registerFont(FontLibrary[i]);
wondeerful post Scott. many thanks, exactly what I needed! :- )
very Nice Tutorial..
This is an awesome tutorial. Got it working in eclipse w/FDT. Very cool! Thanks for sharing.
Blake
hello,
thanks for sharing this.
I’ve managed to get the _Arial font class in my swf and use it. It works fine. But I have a problem: I can’t make my TextField display special chars (like È).
The weird thing is that in Flex I see this chars without any problem (by creating a TextField).
So I suppose that Flash loses them somehow.
Have somebody had the same problem or something close?
Thanks!
what about on a mac jack?
I’m having trouble with this technique. Specifically, it appears that it is not possible to control advanced anti-aliasing settings (via the TextRenderer class) when fonts are embedded in this way. I was wondering whether anyone else had run into this issue..?
Hi there,
I’am working on dynamic font embedding, and it works fine. But only if the font.swf is standing in the same folder als the app.swf.
My SWF files with the fonts are on a remote server, so instead of “Arial,swf” i point to “”
If i put Arial.swf in mij app-folder, YEAY, it works. If i point to a remote (or even a c:/blah/arial.swf) it wont work.
Anyone an idea?
Thanks Scott!
Very useful hack
This Is a great article, but, it is missing one CRUCIAL UPDATE…
If you are publishing to Flash Player 10+ and using the TextField class then you need to explicitly set the newly added ’embedAsCFF’ property to false, as if you don’t then this method will only work with the new TextLayoutFramework.
This confused me for hours and hours, so please add it to the article if possible.
[CODE]
[Embed(source='C:/WINDOWS/Fonts/ARIAL.TTF', fontName='_Arial', embedAsCFF='false', unicodeRange='U+0020-U+007E')]
[/CODE]
Hope that helps somebody, some day
Nice article.. Scott..
And MunkyChop your tip has helped me this day.
Thanks,
Ram
Hi, I came across a problem: everything works great offline, but when I publish online the text doesn’t appear after updating the font. I checked with
var fontsa:Array = Font.enumerateFonts();
for each (var font:Font in fontsa) {
    listatxt.text += " " + font.fontName;
}
and the fonts are embedded!??? Why does the swf work from my PC, but not when it’s online? :(
Thank you for any comment
Hubert W.
|
http://www.scottgmorgan.com/runtime-font-embedding-in-as3-there-is-no-need-to-embed-the-entire-fontset-anymore/
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Re: C/C++ Syntax Folding - Special treatment for functions/methods
- Around about 08/02/05 15:17, Jean-Sebastien Trottier typed ...
> What are your folding requirements now? Can you clearly state them or
> will you keep us guessing? ;^)
> Please also post code examples and explain how you want them folded...

Ah ... that might've helped, mightn't it :-) ... although I was (at
the time) more interested in why my config. generated fake parenthesis
errors, but here you go:
I don't want to make it /too/ complex at the outset, so this isn't
*quite* what I'm after, but it's a step:
a) in a namespace block, fold only on '^\s\+[{}]' indented blocks; the
extension for this would be to only do the first one block; the
extension for *that* (final) would be to only do it for the first
brace-pair inside 'class {}' itself inside 'namespace {}'.
So I was thinking along the lines of defining a 'namspace-block'
syntax region, then a 'class-block' region which is only allowed in a
namespace-block and can't contain other class-blocks, then finally a
brace-fold-block which (ditto) can only be inside class blocks and
cannot contain other brace-fold-blocks.
b) not in a namespace block, fold only on '^[{}]' (column-0 blocks).
Primarily, (a) is for our headers [and is v. tricky], (b) for our
cpp's [and is trivial & I have it working].
namespace CODEBASE
{
class jobby
{
public:
int stuff;
void do_it()
{
// some inline code
{
// some block that'll not be folded
}
}
};
}
// body
CODEBASE::jobby::do_it()
{
// main code
// lives here
{
// another inline block that'll not be folded
}
}
.. goes to:
namespace CODEBASE
{
class jobby
{
public:
int stuff;
void do_it()
+-- 6 lines: {-----------------------------
};
}
// body
CODEBASE::jobby::do_it()
{
// main code
// lives here
+-- 3 lines: {-----------------------------
}
This is what currently doesn't work right; one thing that's
obviously wrong is that it prematurely ends cFoldSimpleInner blocks, and
I think that's also corrupting the parenthesis checks.
syn region cFoldSimpleOuter start='^{' end='^}' transparent fold
syn region cFoldSimpleNamespace start='^namespace\>.*\n{' end='^}'
\ transparent contains=ALLBUT,cFoldSimpleOuter
syn region cFoldSimpleInner start='^\s\+{' end='^\s]+}'
\ transparent fold contains=ALLBUT,cFoldSimpleInner
\ containedin=cFoldSimpleNamespace
> For completeness, I would use:
> start='^namespace\>.*\n{'

Yes, I've tried a few variants of this. I'll try that one
specifically in a bit, though.

> You probably should specify end='^[ \t]\+}'

Oops ...

> By the way, you can easily replace '[ \t]' with a simple '\s', they mean
> the same thing... and tend to read better

You're right; too many regex syntaxes ... :)
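For reference, the three regions with both of those corrections folded in would look something like this (an untested consolidation of the suggestions above, not something posted in the original thread):

syn region cFoldSimpleOuter start='^{' end='^}' transparent fold
syn region cFoldSimpleNamespace start='^namespace\>.*\n{' end='^}'
    \ transparent contains=ALLBUT,cFoldSimpleOuter
syn region cFoldSimpleInner start='^\s\+{' end='^\s\+}'
    \ transparent fold contains=ALLBUT,cFoldSimpleInner
    \ containedin=cFoldSimpleNamespace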
--
[neil@fnx ~]# rm -f .signature
[neil@fnx ~]# ls -l .signature
ls: .signature: No such file or directory
[neil@fnx ~]# exit
|
https://groups.yahoo.com/neo/groups/vim/conversations/topics/56434?o=1&d=-1
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
WebSvcProject namespace
The WebSvcProject namespace includes methods that manage projects, project entities such as tasks, resources, and assignments, and impacts on projects for portfolio analyses.
The Project class in the PSI is deprecated. For all new development, use the Project CSOM. Project Server 2013 apps that use the Project PSI will continue to work, but Project Online apps will need to replace any Project-class PSI methods with their equivalent CSOM methods.
The WebSvcProject namespace is an arbitrary name for a reference to the Project.asmx web service (or the Project.svc service) of the Project Server Interface (PSI). Project methods can check out, check in, create, delete, read, or update projects in the draft or published tables of the Project database. Many of the methods use the Project Server Queuing Service. Methods can create, update, or delete entities within projects (tasks, resources, assignments, and so forth). Methods can get information about or update the project team or project site address.
Use Project methods to:
Get project status.
Get a list of projects in the Drafts database.
Get a list of all projects in a department.
Get all summary tasks.
Get tasks available for assignment to a specified resource.
Get all projects where a resource has assignments.
Create a project proposal from a task list in Microsoft SharePoint Server 2013.
Synchronize a project with a SharePoint list.
Read project impacts from portfolio analyses.
Manage the project team.
Find relationships between projects and a master project.
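To make the calling pattern concrete, here is a rough sketch of reaching these methods from managed code. Everything below is illustrative only: it assumes a web reference named WebSvcProject generated from Project.asmx on a server named ServerName, and the method and column names used (ReadProjectList, PROJ_UID, PROJ_NAME) should be verified against the PSI reference before relying on them.

using System;
using System.Net;

class ProjectListExample
{
    static void Main()
    {
        // Proxy generated by adding a web reference to Project.asmx;
        // "WebSvcProject" is an arbitrary reference name, as noted above.
        WebSvcProject.Project projectSvc = new WebSvcProject.Project();
        projectSvc.Url = "http://ServerName/pwa/_vti_bin/psi/Project.asmx";
        projectSvc.Credentials = CredentialCache.DefaultCredentials;

        // Read the list of projects and print each GUID and name.
        WebSvcProject.ProjectDataSet projects = projectSvc.ReadProjectList();
        foreach (WebSvcProject.ProjectDataSet.ProjectRow row in projects.Project.Rows)
        {
            Console.WriteLine("{0}  {1}", row.PROJ_UID, row.PROJ_NAME);
        }
    }
}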
Project methods typically use or return one of the following DataSet objects:
|
https://msdn.microsoft.com/en-us/library/office/websvcproject_di_pj14mref(v=office.15).aspx
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
Editor's Note 09.19.00
Hi Webmasters,
Today's issue is the second part of the series about drop-shipping on
the Internet. I've received a lot of great feedback so far on
yesterday's issue and I hope that you like this second part too.
Anyway, one of our editors here is writing NetworkNewz and wanted
to put a blurb in about his excellent newsletter in this issue. So,
here it is:
Serious Network professionals need to get into NetworkNewz. We're the
best source of information for all types of networks from Microsoft
to Unix to Linux to Mac OSX. NetworkNewz brings you a white paper
that's truly worth reading. (Hint: if this didn't make sense to you,
don't bother to subscribe.)
I hope that you enjoy it.
Best,
Pete
Drop-shipping on the 'Net: Pt. 2
Round Two
Ok, back to the search engines. I cleverly dodged all the people who
wanted to sell me a "complete turnkey web site package with products
ready to sell". I wanted to put my kids through college, not theirs.
I finally located someone who claimed that he was the owner of an
import company that drop shipped hundreds of great products. I paid
fifty bucks for a "membership", and got a no-name catalog in return,
with a "wholesale" price list. "Great!", I thought, "here we go!" I
searched, and found, many of his products for sale on the 'Net on
other sites. The products were identical, but guess what? Their
RETAIL prices were the same as my "WHOLESALE" prices. In other
words, I had ZERO profit margin. The guy I signed up with was just
another reseller like me, and he was now fifty dollars richer. I was
still nowhere.
Finally! (sort of)
With a grim sort of last-ditch determination, I dug, and I dug,
until I found the source of this no-name catalog I had paid fifty
bucks for. It was a manufacturer and importer in Texas, and they
were actually the source of the products! To make a long story
slightly longer, my partner Gary and I set up an account with
these people and began selling their products. A little later,
we found another company ("3,500 products you can sell on your own
web site"… sound familiar?) and spent weeks sifting through more
resellers who were posing as sources until we found that manufacturer
outside of LA.
We sold products from these two companies for about 6 months, and
actually did about $12,000 in sales, but we weren't happy. The
products were imported knock-offs. We wanted the shiny new name
brands that look so cool on your web site, and that everyone wants
to buy. After dealing with about the umpteenth customer who wanted
to return his patched leather made-in-Kuala-Lumpur backpack because
he wasn't happy with the quality, we had had enough. It was Name
Brand or bust.
750 Name Brand Products
After another exhaustive search of the 'Net, I found a company in
Arizona who offered 750 name brand products that they would drop
ship. Of course, there was a catch: you had to use their web hosting.
After much wrangling, they agreed to let us mirror our current site
to their server, and sell their products in both places. The stuff
was great! Everything from Panasonic to Shop-Vac! They sent us the
catalog, and we spent two weeks working on the site, replacing most
of the knock-off stuff with the Holy Grail of Name Brand Products.
Then they sent us the price list.
They had done it to me again. The "wholesale" prices we were supposed
to purchase this stuff at was in most cases HIGHER than other sites
were SELLING it for! Needless to say, my partner had by this time
decided I was pretty much an idiot, and I was royally peeved.
Right from the Horse's Mouth
When I was a kid, and my mother was angry with someone, she would go
right to the top. I remember her bulling her way through to the
President of a national bank when she was furious about having to
wait 6 days to cash a large check. He experienced The Wrath of Mom,
and she walked out of the bank with thirty thousand in cash that same
day. I decided to try it. Nothing left to lose, right?
I went to the Westclox web site, since that was one of the lines we
were supposed to sell through that company in Arizona. Turns out
Westclox is owned by General Time, Inc. in Atlanta, GA. So is Seth
Thomas Clocks. I found the number for General Time, and asked for the
Sales department. I explained my situation, and asked if they could
refer me to anyone who could drop ship their products for me. "Oh,
no problem," said Jason, the salesman, "we drop ship single units for
you right from the factory."
It took me a full ninety seconds to crank my jaw back up off the
floor.
After about ten minutes on the phone, I was well on my way to
establishing an account directly with General Time. Talk about
wholesale prices on Name Brand products! This was the real thing!
They sent me catalogs, price lists and all the info I needed to get
started right away. For FREE!
The Rest is History
So there it is, folks. That's all you have to do. CALL THE
MANUFACTURER. As of today, we have accounts with so many name brand
manufacturers and distributors that we can't get products on our site
fast enough to keep up. We sell everything from Coleman Camping
products to Panasonic TVs. Some of the brands we're not even bothering
with yet.
Look up the manufacturer's web site. The phone numbers can be hard to
find. They hide them, or don't post them at all. With one company, it
was hidden so well that I just called their Product Support number
and said, "Whoops, I must have the wrong extension. Can you transfer
me to Sales, please?"
Most of them won't drop ship directly from the factory. If they tell
you they can't, TALK to them. Ask for a short list of their largest
distributors, then call THEM. If THEY can't help you, ask them who
CAN. So far I've gotten every single brand name I've gone after, with
the exception of a good source for Sony, and I will crack them,
sooner or later.
You'll find that some companies are owned by others. START AT THE
TOP. It's ok if you get trickled down to a subsidiary. At least you
know that everybody else is buying there, too. If you have to deal
through a distributor, make sure they ARE a distributor, NOT a broker.
(Brokers charge higher prices, and rarely have a lot of stock).
Well, that's my story, and I'm stickin' to it! You CAN get name
brand product drop shipped for your site. You just have to know
who to talk to.
Chris Malta
MyDirectBuy.com
|
http://archive.webpronews.com/archives/091900.html
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
This is a Guest Post by Dhwaneet Bhatt. He’s one awesome programmer and technologist with an innate ability to master any technical topic very quickly. Here’s his twitter profile: for you to be able to connect with him.
If you’re searching for a tutorial to get you started with a Python app on Google App Engine, click here. And here’s his post, which will help you get started with creating a Java app on Google App Engine:
1. We will be using the Eclipse IDE to code a simple Java app for Google App Engine because it is very simple; the other option is building the application using an Ant script, which involves a lot of code writing, and I prefer to keep things simple.
2. Download Eclipse from the following URL:
The App Engine has support for all the versions but we prefer to use the latest version, so download Eclipse Indigo (3.7.x). On the web page, download from the first link that reads “Eclipse IDE for Java EE Developers”. That will download Eclipse 3.7.x.
3. Extract eclipse and start, you will be prompted to create a workspace, create it anywhere you like and proceed.
4. Once in Eclipse, we need to install the Google App Engine plugin. Go to Help -> Install New Software; once there, enter the following URL in the “Work With” text box: http://dl.google.com/eclipse/plugin/3.7
Note: 3.7 in the URL corresponds to the Eclipse version, if you are using some other version, change the last number according to that. Or you can visit this link that gives all the URLs:
Click on the “Add” button and this repository will be added to Eclipse. Next time you can directly use this repository name for downloading any Google related plugin. It will then download a list of plugins associated with this URL.
Select the options “Google Plugin for Eclipse 3.7” under Google Plugin for Eclipse and “Google App Engine Java SDK” under SDKs. These are the basic tools required for deploying a simple app to App Engine.
Click Next (2 times), and Accept the Terms of Agreement (after reading of course) and then it will take a couple of minutes for the plugins to get installed. Go grab a cup of coffee.
5. After it is installed, Eclipse will prompt for restart. Restart the Eclipse.
6. Click on New -> Other… and from the list select “Web Application Project” under “Google”, give a name for your project and a package structure, untick Use Google Web Toolkit, and click Finish.
7. Now comes the surprise, you don’t need to do anything now. Eclipse has already created all the files required for the Hello World App, but still, I will be taking them one by one so that you can understand the necessary steps for writing a custom app.
8. First comes the deployment descriptor (for people who are new to J2EE, I recommend reading about J2EE first); it is the file web.xml, located at war/WEB-INF/web.xml.
<?xml version="1.0" encoding="utf-8"?>
<web-app xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns="http://java.sun.com/xml/ns/javaee"
    xmlns:web="http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
    xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd"
    version="2.5">
<servlet>
<servlet-name>Dhwaneetbhattjavaapp</servlet-name>
<servlet-class>com.dhwaneetbhatt.helloworldjavaapp.DhwaneetbhattjavaappServlet</servlet-class>
</servlet>
<servlet-mapping>
<servlet-name>Dhwaneetbhattjavaapp</servlet-name>
<url-pattern>/dhwaneetbhattjavaapp</url-pattern>
</servlet-mapping>
<welcome-file-list>
<welcome-file>index.html</welcome-file>
</welcome-file-list>
</web-app>
The web.xml contains mapping for the default servlet created with the name DhwaneetbhattjavaappServlet, and its URL mapping. This is the URL which will allow the servlet to give a response. And index.html located under war/ is the first file to be loaded when the application is started.
9. The next file is a pretty simple HTML file that contains nothing but a simple link to call the servlet.
<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=utf-8">
<title>Hello App Engine</title>
</head>
<body>
<h3>Hello App Engine!</h3>
<a href="dhwaneetbhattjavaapp">Servlet</a>
</body>
</html>
10. The next important file is the servlet itself – which responds to the request from the html file. The doGet method means that the request coming from the html file is a GET request. You can respond to a POST request by implementing the doPost method and changing the type of request coming from the html form to POST (a sketch follows the servlet listing below).
package com.dhwaneetbhatt.helloworldjavaapp;
import java.io.IOException;
import javax.servlet.http.*;
@SuppressWarnings("serial")
public class DhwaneetbhattjavaappServlet extends HttpServlet {
public void doGet(HttpServletRequest req, HttpServletResponse resp)
throws IOException {
resp.setContentType("text/plain");
resp.getWriter().println("Hello, world");
}
}
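To illustrate the POST case mentioned in step 10, here is a hypothetical companion servlet (my own sketch, not part of the generated project; the “name” form field is made up for the example):

package com.dhwaneetbhatt.helloworldjavaapp;
import java.io.IOException;
import javax.servlet.http.*;
@SuppressWarnings("serial")
public class DhwaneetbhattjavaappPostServlet extends HttpServlet {
    public void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        // Read the (hypothetical) "name" form field and echo a greeting.
        String name = req.getParameter("name");
        resp.setContentType("text/plain");
        resp.getWriter().println("Hello, " + (name != null ? name : "world"));
    }
}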
11. The last file is the appengine-web.xml file, which is located in the WEB-INF/ directory. App Engine needs this one additional configuration file to figure out how to deploy and run the application. In this file's <application> tag, enter the name of the App Id that you had chosen while registering for the app previously. (step 3)
<?xml version="1.0" encoding="utf-8"?>
<appengine-web-app xmlns="http://appengine.google.com/ns/1.0">
<application>dhwaneetbhattjavaapp</application>
<version>1</version>
<system-properties>
<property name="java.util.logging.config.file" value="WEB-INF/logging.properties"/>
</system-properties>
</appengine-web-app>
12. That’s about all the files we’ve covered. There are a couple of other files – configuration files in META-INF, logging.properties file and a favicon file that are unimportant at the moment. As I said Keep it Simple.
13. Now we are ready for a test run. Right click on the project from the Project Explorer, Run As -> Web Application, and it will start the server. The default port is 8888. Open any web browser, go to http://localhost:8888, and your web application will run.
14. Now that we have tested the code on our machine, it is time to deploy it to Google App Engine. Right click on the project, Google -> Deploy to App Engine, login with your credentials, and on the next screen that comes up, click on Deploy.
15. Your app will be deployed on the App Engine. Open any browser, enter your app's URL (http://<your-app-id>.appspot.com), and you will have the “Hello, world” app ready for the world to access.
|
http://beyondrelational.com/modules/24/syndicated/404/Posts/14263/getting-started-with-creating-java-app-on-google-app-engine-guest-post-by-dhwaneet-bhatt.aspx
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
This article describes how to use the UrlHelper class in WebApi to create URL’s that can be used to invoke ApiController methods. This UrlHelper class uses the application’s route tables to compose the URL, avoiding the need to hard-code specific URL’s into the app.
In our example scenario, we’ll expose these concepts:
We need UrlHelper in this scenario because we want to expose a unique URL for each store, allowing clients to do basic create/read/update/delete operations on a single store. We won’t know each store’s address at compile time because they will be created dynamically.
The following sample code was built using Visual Studio 2010 and the latest bits of WebApi, built each night, which you can get here. Or if you prefer, you can use VS 2012 RC and get VS 2012 RC bits here.
We’ll start using File New Project | Web | Mvc4 | WebApi to create a new MVC 4 ASP.Net Web Api project, and they we’ll add to it.
In this app we need to expose a unique URL for each separate store. The easiest way to do this is to assign a unique ID to each store when it is created and to use that ID as part of the route information in the store's URL.
When we created the default WebApi project above, the template automatically set up this route for us:
public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        config.Routes.MapHttpRoute(
            name: "DefaultApi",
            routeTemplate: "api/{controller}/{id}",
            defaults: new { id = RouteParameter.Optional }
        );
    }
}
That route is exactly what we need, so we'll leave it unchanged. It effectively says the canonical route for this WebApi will be api/{controller}/{id}. An example of such a URL is /api/Store/5, which will invoke a method on a StoreController with the ID == 5.
Let’s create some very simple models in our project’s Models folder:
public class Store
{
    public string Id { get; set; }
    public string Url { get; set; }
    public List<string> Products { get; set; }
}
public interface IStoreRepository
{
    Store CreateStore();
    IEnumerable<Store> GetStores();
    Store GetStore(string id);
    bool DeleteStore(Store store);
    void AddProduct(Store store, string product);
    void RemoveProduct(Store store, string product);
}
Now let’s create the StoreController itself by using “Add New Item | Controller” in the Controllers folder. For now, we’ll just expose the basic methods to create a new store and to retrieve the list of all stores.
public class StoreController : ApiController
{
    public IStoreRepository Repository { get; set; }

    [HttpPost]
    public Store CreateStore()
    {
        Store store = Repository.CreateStore();
        store.Url = Url.Link("DefaultApi", new { controller = "Store", id = store.Id });
        return store;
    }

    [HttpGet]
    public IEnumerable<Store> GetStores()
    {
        return Repository.GetStores().ToArray();
    }
}
Already we can see how to use UrlHelper in the Url.Link() call above. Every ApiController instance is given a UrlHelper instance, available through the “Url” property. The StoreController uses it to create the unique URL for the new store. The meaning of that line of code is:
If you’re not already familiar with MVC routes, the route values may look a little strange to you. It is just a way to use C#’s anonymous types to compose a set of name/value pairs. The Link() method supports an overload that accepts an IDictionary<string, object> if you prefer. This is the normal pattern to pass values to the MVC routing layer and is not unique to WebApi.
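For illustration only, the same Link() call written against that dictionary overload would look roughly like this (my sketch; it assumes a using System.Collections.Generic directive):

// Equivalent to the anonymous-type call in CreateStore() above,
// using the IDictionary<string, object> overload of Link().
var routeValues = new Dictionary<string, object>
{
    { "controller", "Store" },
    { "id", store.Id }
};
store.Url = Url.Link("DefaultApi", routeValues);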
In this case, the call to the Link() method effectively says “Using the route called ‘DefaultApi’ create a URL that will invoke the StoreController and pass the store’s ID to it.”
If the store’s ID had been 99, the resulting URL would have been something like /api/Store/99.
Note: How the StoreController gets the IStoreRepository instance is not important to this sample. Refer to this blog for an example of how to use Dependency Injection to provide it at runtime. Not shown in this example is an implementation of IStoreRepository that simply keeps Store instances in memory.
At this point, we have all we need to run the app. Start it with F5, observe the normal MVC Home page, and then use Fiddler to issue a POST request to create a new store. This POST request will invoke the StoreController.CreateStore() method. The response is the new Store object represented in Json. The Url field shows the value returned from UrlHelper.Link().
So now we have a controller that can manufacture new Store objects and return a unique URL for each one. Let’s complete this example by extending StoreController to contain the methods to operate on those individual stores:
public class StoreController : ApiController
{
    // GET api/Store/id -- returns the Store from its Id
    [HttpGet]
    public Store GetStore(string id)
    {
        return EnsureStoreFromId(id);
    }

    // DELETE api/Store/id -- deletes the store identified by Id
    [HttpDelete]
    public void DeleteStore(string id)
    {
        Store store = EnsureStoreFromId(id);
        bool existed = Repository.DeleteStore(store);
        if (!existed)
        {
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }
    }

    // PUT api/Store/id?product=Xyz -- adds product Xyz to store identified by Id
    [HttpPut]
    public void AddProduct(string id, string product)
    {
        Store store = EnsureStoreFromId(id);
        Repository.AddProduct(store, product);
    }

    // DELETE api/Store/id?product=Xyz -- removes product Xyz from store identified by Id
    [HttpDelete]
    public void RemoveProduct(string id, string product)
    {
        Store store = EnsureStoreFromId(id);
        Repository.RemoveProduct(store, product);
    }

    private Store EnsureStoreFromId(string id)
    {
        Store store = Repository.GetStore(id);
        if (store == null)
        {
            throw new HttpResponseException(HttpStatusCode.NotFound);
        }

        return store;
    }
}
Each of these new StoreController methods accepts a string ID – which is the unique Store ID created by the repository. So when a request such as GET /api/Store/3 is sent, the existing route information tells WebApi to invoke StoreController.GetStore() and to model-bind the string ID parameter from the route information contained in the URL.
Let’s run the app again to create a new store and add a new product:
First, we issue a PUT to add a Bicycle to store #3:
And then we issue a GET to examine the contents of store #3:
MVC 4 exposes 2 kinds of controllers – the conventional MVC Controller and the new WebApi ApiController. The MVC controllers are normally used for the presentation level and usually have corresponding “views” to display their respective “models”. In contrast, the ApiController is used for the “web api” part of the application, and generally exposes actions to bind to the Http GET, PUT, POST, DELETE (etc) methods.
An MVC 4 app can have a mix of either of these controller types. In fact, it is not unusual for the MVC controllers to be aware of one or more web api’s and to want to render URL links to them.
In the example above, we showed an ApiController creating a URL to reach a specific action on an ApiController. To do this, it used a UrlHelper instance available directly from the executing ApiController.
MVC Controllers have a similar concept – the Controller instance contains a Url property that returns a UrlHelper instance. In this case, this UrlHelper is a different class than the WebApi one because it already existed before WebApi. But they are very similar in concept and functionality.
Let’s modify our app so that the MVC HomeController can render URL links to the StoreController:
public class HomeController : Controller
{
    public ActionResult Index()
    {
        return View(InMemoryStoreRepository.Instance);
    }

    public ActionResult CreateStore()
    {
        Store store = InMemoryStoreRepository.Instance.CreateStore();

        string storeLink = Url.HttpRouteUrl("DefaultApi", new { controller = "Store", id = store.Id });
        store.Url = new Uri(Request.Url, storeLink).AbsoluteUri.ToString();

        return this.RedirectToAction("Index", "Home");
    }
}
The highlighted lines above do exactly what we did before in the StoreController – create a new store and then create a URL that will direct a request to the StoreController to operate on that store. It looks slightly different because the MVC UrlHelper already existed before WebApi, and backwards compatibility was important.
The HttpRouteUrl() method was added to the existing MVC UrlHelper to deal with ApiController routes. The example above effectively says “I want a URL to an ApiController using the route named ‘DefaultApi’ to invoke the StoreController, with the given ID”.
As you saw in this example, the UrlHelper class exists to help compose URL’s using the application’s routing information. There are 2 versions of UrlHelper – the existing MVC version and the new WebApi version. Both are available from their respective controller instances, and both can be used to create URL’s.
|
http://blogs.msdn.com/b/roncain/archive/2012/07/17/using-the-asp-net-web-api-urlhelper.aspx
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
Using Oracle BPM Object Methods in Script Tasks (OBPM 12.1.3)
By Venugopal Mangipudi on Jul 03, 2014
With Oracle BPM 12.1.3 becoming GA, I wanted to try out some of the new features that got added to this release. There are 2 features that are very helpful to the BPM developers:
- Groovy Scripting support
- BPM Data Object Methods
These are very good additions to the product, as they provide a lot of flexibility to the developer (for developers who come from ALBPM or OBPM this is priceless...).
In this blog post, I wanted to explore the following:
- Create a BPM Data Object with attributes and Methods
- Use the Data Object Method from a Script task
For this simple scenario, I created a process that accepts 2 integers as input and returns the result of adding them.
The following are the High Level Steps to implement the process:
- Create the BPM process (Synchronous BPM Process Type).
- Implement the Start Event and define the interface which accepts 2 input arguments (input1Arg and input2Arg of Type Int)
- Implement the End Event and define the Interface which returns 1 output argument (outputArg of Type Int)
- Add a Script task to the Process CalculateTotal
- In the Business Catalog add a new Module MyModule
- Add a new BPM Data Object ArithmeticHelper to the module. Define ArithmeticHelper by adding the following:
- Attribute: input1 Type: Int
- Attribute: input2 Type: Int
- Method: calculateSum Parameters: none Return: Integer
- Implement the calculateSum method with the following Groovy script code:
def result=0;
result=input1+input2;
return result;
- In the BPM Process, create 2 Process Data Objects:
- v_arithmeticHelper Type: ArithmeticHelper
- v_output Type: Integer
- Map the Start input arguments to the attributes in the process data object v_arithmeticHelper
- Map the End output arguments to the process data object v_output
- Implement the Script task CalculateTotal. To implement Groovy scripting on a BPM Script task, we can navigate to the script editor by right-clicking the Script task and selecting the option Go To Script. In the script editor, add the following groovy script:
v_output = v_arithmeticHelper.calculateSum();
Once the Project is compiled and deployed, we can test the composite from EM. The result of the testing should show the Total for the 2 input integers that were entered.
You can find the complete BPM project here to run the sample on your environment.
Hope this blog helps in demonstrating Simple Scripting example in Oracle BPM 12.1.3 which can be used to implement your own requirements!
I didn't understand this.
Posted by Narayan on July 29, 2014 at 04:35 PM IST #
|
https://blogs.oracle.com/VenugopalMangipudi/entry/using_oracle_bpm_object_methods
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
Opened 7 years ago
Closed 6 years ago
Last modified 4 years ago
#10300 closed (fixed)
Custom File Storage Backend broken by recent SVN commit.
Description
I just did an svn update to the latest trunk (from r9700), and now my file upload backend storage isn't behaving correctly. A 6MB file is being uploaded to my backend storage as a 1 byte file. When I revert back to r9700, the code behaves as intended.
My backend is a slightly modified (added encryption) version of the S3 backend from
If any other info would be helpful in tracking this down, let me know and I'll post it.
Attachments (2)
Change History (12)
comment:1 Changed 7 years ago by mtredinnick
- Needs documentation unset
- Needs tests unset
- Patch needs improvement unset

Well, the next step is to work out which revision caused the problem. Since r9700 works and r9800+ doesn't work, use binary searching to find the version that breaks things. It will be at most seven checkouts and tests (and probably less, since not all those commits are to the same branch).
comment:2 Changed 7 years ago by erikcw
comment:3 Changed 7 years ago by kmtracey
- Triage Stage changed from Unreviewed to Accepted
I looked at this a little, though I cannot recreate since I do not have access to S3 (let alone the modified backend...the modifications may be relevant here). However, taking the S3 backend you point to and stubbing it out to not actually attempt to call S3 services but just log what it's called to do, I see a difference pre- and post- r9766 involving calls to the backend's size method. Specifically, prior to r9766 the S3Storage size method is not called when a file is uploaded (via admin). Running with r9766, however, it is called. Ultimately the call comes from the len(content) call in the FieldFile save method in django/db/models/fields/files.py:
def save(self, name, content, save=True):
    name = self.field.generate_filename(self.instance, name)
    self._name = self.storage.save(name, content)
    setattr(self.instance, self.field.name, self.name)

    # Update the filesize cache
    print 'Prior to updating filesize cache: content.__class__.__bases__ is: %s' % str(content.__class__.__bases__)
    self._size = len(content)

    # Save the object because it has changed, unless save is False
    if save:
        self.instance.save()
save.alters_data = True
Though this method itself did not change, that print (added by me for debug) shows that the bases for content has changed. Prior to r9766 it prints:

Prior to updating filesize cache: content.__class__.__bases__ is: (<class 'django.core.files.uploadedfile.UploadedFile'>,)

while post-r9766 it prints:

Prior to updating filesize cache: content.__class__.__bases__ is: (<class 'django.core.files.uploadedfile.InMemoryUploadedFile'>, <class 'django.db.models.fields.files.FieldFile'>)
The effect of this difference is that instead of django/core/files/base.py File's _get_size:
def _get_size(self):
    if not hasattr(self, '_size'):
        if hasattr(self.file, 'size'):
            self._size = self.file.size
        elif os.path.exists(self.file.name):
            self._size = os.path.getsize(self.file.name)
        else:
            raise AttributeError("Unable to determine the file's size.")
    return self._size

def _set_size(self, size):
    self._size = size

size = property(_get_size, _set_size)
being called to determine len(content) (with _size having already been set previously at a point in the code I didn't track down), django/db/models/fields/files.py FieldFile's _get_size:
def _get_size(self):
    self._require_file()
    return self.storage.size(self.name)

size = property(_get_size)
is called instead.
Now the _save method in the S3 storage backend you point to does not rely on len(content) when it saves the data, but if your modified version does then it likely has a problem. If I put some prints into _save:
print 'in _save: type(content) = %s, content.__class__.__bases__ is: %s' % (str(type(content)), str(content.__class__.__bases__))
print 'in _save: len(content): %d' % len(content)

then pre-r9766 I see:

in _save: type(content) = <class 'django.core.files.uploadedfile.InMemoryUploadedFile'>, content.__class__.__bases__ is: (<class 'django.core.files.uploadedfile.UploadedFile'>,)
in _save: len(content): 818

while post-r9766 I see:

in _save: type(content) = <class 'django.db.models.fields.files.InMemoryUploadedFile'>, content.__class__.__bases__ is: (<class 'django.core.files.uploadedfile.InMemoryUploadedFile'>, <class 'django.db.models.fields.files.FieldFile'>)
and my stubbed backend's size raises an exception because it is called to return the length of a file it hasn't saved yet.
So it seems there is a fairly non-obvious side-effect of r9766 involving len(content) for the content passed into the backend _save:
- Post-r9766 the backend's own size method will get called, which isn't likely to work since _save itself hasn't had a chance to save the file yet.
But I don't actually know for sure that this is the cause of the problem reported here, since I don't know that the modified backend relies on len(content) in its _save method. It does seem to be a problem we need to fix, though, since it seems valid for a backend _save to call len on the passed content, and it used to work...now it will find itself called to answer the len() question, which likely isn't going to work.
This is likely related to #10249...that one is reporting inability to determine a method resolution order while this one seems to be resulting from the fact that the method resolution order has changed.
Feedback from someone with a better understanding of r9766 would be helpful here.
Changed 7 years ago by erikcw
Modified S3Storage.py to include Encrypt content.
comment:4 follow-up: ↓ 6 Changed 7 years ago by erikcw
I've attached a copy of the Modified storage backend (S3Storage.py) in case it will help in tracking this issue down.
comment:5 Changed 6 years ago by kmtracey
- milestone set to 1.1
Changed 6 years ago by kmtracey
comment:6 in reply to: ↑ 4 Changed 6 years ago by kmtracey
I've attached a copy of the Modified storage backend (S3Storage.py) in case it will help in tracking this issue down.
Yes, I think it helps. Your S3Storage _put_file function looks like this:
def _put_file(self, name, content):
    if self.encrypt == True:
        # Create a key object
        k = ezPyCrypto.key()
        # Read in a public key
        fd = open(settings.CRYPTO_KEYS_PUB, "rb")
        pubkey = fd.read()
        fd.close()
        # import this public key
        k.importKey(pubkey)
        # Now encrypt some text against this public key
        content = k.encString(content)
    # ...snip remainder...
Looking at the ezPyCrypto code, that encString(content) call is going to result in it doing a len(content) to get the length of the data to encrypt. Given what I detailed above, that len call is going to result in the storage backend being called to report the length of something that hasn't been written to the backend yet.
I've attached a patch that may fix the issue. It changes the FieldFile's _get_size method so that the storage backend is called to supply the length only if _committed is true. If not, super is used to fall back to File's size property, which will be the size of the uploaded file. That may be all that's needed to fix this -- could you give it a try and let us know?
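In case it helps anyone reading along, the idea of the patch sketched from that description (my sketch, not necessarily the exact attached diff) would be something like:

def _get_size(self):
    self._require_file()
    if not self._committed:
        # Content hasn't been written to the storage backend yet;
        # fall back to File's size, i.e. the uploaded file's size.
        return super(FieldFile, self)._get_size()
    return self.storage.size(self.name)
size = property(_get_size)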
Review from someone more familiar with this code would be good. I'm still vaguely worried about the side-effects on method resolution order introduced by r9766, but that could be due to my relative unfamiliarity with the code here.
comment:7 Changed 6 years ago by erikcw
Thanks for the patch kmtracey! I applied it to my svn checkout and it seems to have fixed the problem. I'm going to keep an eye on it for a few days to make sure everything is behaving correctly.
Like you said, there could still be some other side-effects from r9766 -- hopefully the maintainer of this area of the branch will take a look and give the final sign-off for this fix.
I'll report if I notice any oddities...
comment:8 Changed 6 years ago by mitsuhiko
- Owner changed from nobody to mitsuhiko
comment:9 Changed 6 years ago by jacob
- Resolution set to fixed
- Status changed from new to closed
comment:10 Changed 4 years ago by jacob
- milestone 1.1 deleted
Milestone 1.1 deleted
|
https://code.djangoproject.com/ticket/10300
|
CC-MAIN-2015-35
|
en
|
refinedweb
|
This item is only available as the following downloads:
Northwestern University ( PDF )
Richard C. Alkire of Illinois ( PDF )
Book reviews ( PDF )
A grand sale: $12 for a dozen experiments in CRE ( PDF )
Two computer programs for equipment cost estimation and economic evaluation of chemical processes ( PDF )
New adsorption methods ( PDF )
The process design courses at Pennsylvania: Impact of process simulators ( PDF )
Introducing the regulatory process into the chemical engineering curriculum: A painless method ( PDF )
Modular instruction under restricted conditions ( PDF )
Setting the pressure at which to conduct a distillation ( PDF )
Full Text
CHEMICAL ENGINEERING EDUCATION
WINTER 1984
DEPARTMENTS
2 Department of Chemical Engineering
Northwestern University
John C. Slattery
6 The Educator
Richard C. Alkire of Illinois,
Illinois Colleagues
Laboratory
10 A Grand Sale: $12 for a Dozen Experiments
in CRE,
Zhang Guo-Tai, Hau Shau-Drang
20 New Adsorption Methods,
Phillip C. Wankat
Classroom
14 Two Computer Programs for Equipment
Cost Estimation and Economic Evalu-
ation of Chemical Processes,
Carlos J. Kuri, Armando B. Corripio
26 The Process Design Courses at
Pennsylvania: Impact of Process
Simulators, Warren D. Seider
34 Modular Instruction Under Restricted
Conditions, Tjipto Utomo, Kees Ruijter
30 Curriculum
Introducing the Regulatory Process into
the Chemical Engineering Curriculum:
A Painless Method,
Franklin G. King, Ramesh C. Chawla
38 Class and Home Problems
Setting the Pressure at Which to Conduct a
Distillation, Allen J. Barduhn
9 Positions Available
9,37 Book Reviews
18,48 Books Received
19 In Memoriam J. H. Erbar
19 Stirred Pots 1984
The Technological Institute, home of the Department of Chemical Engineering.
department
NORTHWESTERN UNIVERSITY
... THE NORTHWESTERN PHILOSOPHY
JOHN C. SLATTERY
Northwestern University
Evanston, IL 60201

IF YOU READ NO further than this first paragraph, I would like you to leave with the impression that we take pride in our teaching, that we strive to be on the forefront in our research, and that we are committed to meaningful contact with our students. Those priorities provide the guiding philosophy for the department.
This philosophy has been tested by time. This year marks the 40th anniversary of the first class graduating in chemical engineering from the Technological Institute. We awarded our first master's degree in 1945 and our first PhD in 1948. Our department now includes 18 faculty, 250 undergraduate and 100 graduate students, six visiting scholars, and three postdoctoral fellows. Since modern chemical engineering research is increasingly interdisciplinary in nature, a number of the faculty hold joint appointments with other departments: biomedical engineering, chemistry, materials science and engineering, mechanical and nuclear engineering, and neurobiology/physiology.

As we have become convinced of the synergism, we have chosen to emphasize several broad areas of research rather than 18 individual activities.
No scale for the comparison of chemical engineering graduate programs exists. It is clear that, while top departments all offer excellent faculty and facilities, there are differences in educational philosophy and in the research interests of their faculties. Our department strives to maintain a balanced commitment to teaching and research. The training of graduate and undergraduate students is taken seriously by all of us.

The prerequisites for admission to graduate work include a bachelor's degree in chemical engineering from a university or college of recognized standing. Graduates of a curriculum in science or in other fields of engineering whose programs have included sufficient courses in chemistry, mathematics, and physics will also be accepted for graduate work in chemical engineering. However, they may be required to take selected undergraduate courses without credit, in addition to the normal graduate program.

An individual plan of study is arranged for each student after consultation between the student, his or her adviser, and the graduate committee of the department. Every effort is made to design a program covering the fundamentals of modern chemical engineering science and technology while allowing for individual specialization in particular fields of interest.

For the MS degree, we require a minimum of nine (quarter) courses. Research and the preparation of an acceptable thesis may be an alternative to three extra courses.

For the PhD, a minimum of 18 (quarter) courses are required beyond the BS degree or nine (quarter) courses beyond the MS. Students are guided towards this program based on their classroom performances. The formal qualifying exam is oral and focused on the research topic proposed for the thesis. We have no language requirements.

As we have become convinced of the synergism, we have chosen to emphasize several broad areas of research rather than 18 individual activities. In the descriptions that follow, observe that we have encouraged for the same reason interactions between faculty which cross department boundaries. The single paragraphs devoted to individual faculty are meant to give an impression of their activities rather than to summarize their multifaceted research programs.
RESEARCH

Chemical Reaction Engineering. The largest single area is chemical reaction engineering: kinetics, catalysis, chemical reactor design, and combustion. There are five faculty active in this area: John B. Butt, Joshua S. Dranoff, Harold H. Kung (who has a joint appointment with chemistry), Chung K. Law (who has a joint appointment with mechanical and nuclear engineering), and Wolfgang M. H. Sachtler (who has a joint appointment with chemistry). This group features extraordinary interactions with faculty in materials science and engineering, chemistry, and physics through the Catalysis Research Center, which will soon have its own building adjacent to the Technological Institute.

John Butt's work in catalysis has been largely in the area of supported metal catalysts. His group's current research is devoted to the study of hydrogenolysis and hydrogenation reactions on supported Pt group metals and to synthesis reactions on supported iron alloys. Particular emphasis is given to the relationship between the morphology of the supported metal crystallites and their activity and selectivity properties. More generally, John Butt is concerned with the interrelation between catalyst deactivation and chemical process dynamics.

The work of Josh Dranoff and his students in photochemical reaction engineering has previously involved gas and liquid phase photochlorination reactions as well as solution photopolymerization.
A practice race in view of the campus.
Current emphasis is focused on the study of novel photoreactor designs in which the photoinitiation and subsequent thermal reaction steps common to many photoreactions of interest are carried out in spatially segregated zones.

Harold Kung is pursuing the reasons for high selectivities in oxide catalysis. Using modern surface science and catalyst characterization techniques, his group has prepared and characterized both model single crystal oxide catalysts that have high concentrations of a particular type of surface defect, such as anion or cation vacancies, as well as microcrystalline oxide catalysts smaller than 10 nm that possess unusually high selectivities.

A viable approach to enhance combustion efficiency and reduce pollutant formation is through lean combustion. Since lean mixtures are hard to ignite and easy to extinguish, the use of heterogeneous catalysts can significantly extend the lower flammability limits of these mixtures. Ed Law's group is working to identify the dominant catalytic mechanisms and to determine the associated overall kinetic constants for hydrocarbon/air mixtures flowing over different catalysts.

Wolfgang Sachtler and Harold Kung are studying stereospecific catalysts with the objective of understanding the relationship between the geometry of the active site and catalytic selectivity. On the basis of Wolfgang Sachtler's previous work, it has been proposed that many such reactions involve a dual site mechanism. Their research is aimed towards checking this model and evaluating the prospects of dual site hydrogenation catalysts in general.
Interfacial and Multiphase Transport Phenomena. We have three faculty working in the general area of interfacial and multiphase transport phenomena: S. George Bankoff (who has a joint appointment with mechanical and nuclear engineering), Gregory Ryskin, and me. All three of us find complementary interests in the activities of Stephen H. Davis (who has joint appointments in engineering science and applied mathematics and in mechanical and nuclear engineering). We are involved in such diverse activities as dynamic interfacial phenomena, coalescence, two-phase flows with heat transfer, flows in porous media, flows of suspensions, and structural models for the stress-deformation behavior of polymer solutions.

. . . we take pride in our teaching, . . . we strive to be on the forefront in our research, and . . . we are committed to meaningful contact with our students. Those priorities provide the guiding philosophy for the department.

George Bankoff has been directing a broad program of experimental and theoretical studies on two-phase flow and heat transfer. His particular motivation has been problems associated with nuclear reactor accidents. Rather than studying these complicated problems directly, he and his students have chosen to examine more fundamental problems that can shed light on particular aspects of the overall process.

Gregory Ryskin's current research interests focus on the numerical solution of fluid mechanics problems. He is considering both flows with free boundaries as well as the motions of polymer solutions, the stress-deformation behavior of which is determined by the local microstructure.

My students and I have directed our attention to a series of fundamental problems concerned with dynamic interfacial behavior and multiphase flows arising in the context of oil production. For example, we have been investigating the influence of the interfacial viscosities upon displacement and the stability of displacement of residual oil from old oil fields.
Polymer Science. Our three faculty whose primary interests are in the area of polymers have joint appointments with materials science and engineering: Stephen H. Carr, Buckley Crist Jr., and John M. Torkelson.

Plastic films that possess either permanent electrical polarizations or electrical conductivity are currently being used as the active elements in such devices as microphones, infrared detectors or batteries. Steve Carr and some of his students are seeking to understand the origins of the persistent polarization that can be established in some polymer solids. They are studying other polymers that are electronic conductors and act as organic metals.

Using model crystallizable hydrogenated polybutadiene (HPB), Buck Crist's group is making significant advances in understanding the important effects of molecular weight, molecular weight distribution, short chain branching and long chain branching on the structure and properties of semicrystalline polymers. These studies utilize light scattering, x-ray scattering and diffraction, small-angle neutron scattering, calorimetry and density measurements on HPB having extremely well-defined molecular microstructures.

The utility of photophysics in studying macromolecular diffusion-controlled reactions has been demonstrated by studies of intermolecular reactions between labelled polystyrene chains as well as by studies of intramolecular cyclization dynamics of a single polystyrene chain. By a combination of carefully selected fluorescence and phosphorescence studies, John Torkelson is investigating the Rouse dynamics of polymer chains.
Process Engineering. The area of computer-aided process planning, design, analysis, and control is the interest of Richard S. H. Mah and William F. Stevens.
The research of Dick Mah and his students is directed towards the development of comprehensive theories and techniques for operating processes. One focus of their research is their work on process data reconciliation and rectification, which has already led to new techniques of gross error detection and identification, a rigorous theory of observability and redundancy, and efficient variable and measurement classification algorithms. Another thrust is in the design and scheduling of batch chemical processes.

Process optimization and process control are beginning to depend significantly upon the utilization of equipment and procedures for "real-time" computing. Bill Stevens' current research activities emphasize the development of programs and procedures for the implementation of various "real-time" applications utilizing minicomputers and microcomputers.

Separation Processes. There is currently a developing interest in the department in separation processes.
Josh Dranoff has had a long-term interest in separations based on sorption in zeolites and similar adsorbents. Currently his students are investigating the kinetics of sorption of binary gas mixtures by zeolite adsorbents using a differential bed-chromatographic type apparatus.

Dick Mah's group has proposed and is investigating a new class of distillation schemes designed to enhance overall thermal efficiency. This is accomplished through heat exchange between the rectifying and stripping sections of a distillation apparatus in what is known as secondary reflux and vaporization (SRV) distillation.

George Thodos and his students are studying the removal of SO2 from flue gases using regenerable sorbents such as Nahcolite (NaHCO3), which may offer the possibility of closed loop systems for clean-up of power station stack gases. Separately, he is testing supercritical extraction as a separation tool.

Individual Activities. Naturally, not all of the research in the department is done in the context of group activities.

Thomas K. Goldstick (who has joint appointments with biomedical engineering and neurobiology/physiology) is well known for his long-term interests in biomedical engineering. His current research centers around the dynamics of oxygen transport in the retina of the eye.

Studies of vapor-liquid equilibria and critical state phenomena continue to occupy the interests of George Thodos and some of his students, while with another group he extends his investigation of solar energy collection and storage.

Chairman John Slattery in an informal discussion with students.
FUTURE DIRECTIONS

Chemical engineering is an evolving discipline, the one continuous thread being that we are all concerned with applications of chemistry, broadly interpreted. The emphasis given to particular areas of research shifts as the needs of society change, the current faculty matures in its outlook, and we add new faculty.

Looking to the future, we are anxious to expand our activities in the area of computer-aided process planning, design, analysis, and control, when we are presented with the right opportunity. Both the students and faculty agree that this will be a field of increasing importance to the profession.

We would also like to move into biochemical technology. This is not only an area of considerable promise, but also it is one in which, by our judgment, the primary impact of chemical engineers is still developing.

But as we continue to look in new directions, basic priorities will remain unchanged: our pride in our teaching, our eagerness to be on the forefront in our research, and our commitment to meaningful contact with our students.
educator

Richard C. Alkire
of Illinois

Prepared by his
ILLINOIS COLLEAGUES
University of Illinois
Urbana, IL 61801
TO HIS PROFESSIONAL colleagues, Dick Alkire is known as an electrochemical engineer, to his students as an outstanding teacher, and to others through many different perspectives, especially music.

Though he now lives in the heartland of Illinois, Dick grew up in Easton, Pennsylvania, where he graduated from Lafayette College in 1963. For two years of his time at Lafayette, he was tutored on the subject of electrochemical corrosion by Zbigniew Jastrzebski. Traveling to the other coast, Dick attended the University of California at Berkeley to continue the study of electrochemical engineering. Working under the direction of Charles Tobias and Edward Grens, he carried out graduate research on transport processes in porous electrodes. Just to keep things in balance, he enrolled in a piano performance class where he met his future bride, Betty. They left Berkeley in 1968 to spend a post-doctoral year in Göttingen, at the Max Planck Institut für physikalische Chemie, where he studied thermodynamics of solid-state galvanic cells under the late Carl Wagner. A year later, Dick brought his young family back to the United States and took up a faculty position at the University of Illinois. Promotion to associate professor came in 1975, and to full professor in 1977.

... it is Dick's philosophy that "you can't teach research creativity by telling everyone what to do." He gives students a great deal of independence in the pursuit of their thesis research ...

© Copyright ChE Division, ASEE 1984
During these early years, Dick was deeply in-
fluenced by his mentors Jastrzebski, Tobias, Grens
and Wagner. Under Tobias he had experienced the
excitement of a research group that moved
steadily into uncharted waters, and with Carl
Wagner had had long discussions on how to break
open new problems. As a consequence, he em-
barked on a program of electrochemical engineer-
ing research, at Illinois, which continues to this
day. At the time, however, the electrochemical
field was not common to chemical engineers and it
was John Quinn and Roger Schmitz who gave him
strong encouragement to maintain his direction.
With these perspectives, Dick worked resolute-
ly to broaden his capabilities and interests. He has
emphasized application of potential distribution
principles to complex electrochemical problems
which involve coupled mass transport, ohmic re-
sistance, and interfacial processes. Concentrating
at first on electrodeposition, his work for over a
decade in that area earned him, in 1983, the Re-
search Award of The Electrodeposition Division
of The Electrochemical Society. Also in the early
seventies, he returned to corrosion problems where
transport phenomena in the solution phase played
a critical but unexplored role. Based on his Ph.D.
dissertation on crevice corrosion, Dick's student,
David Siitari, was awarded the Young Author's
Prize for the best paper in the 1982 Journal of
the Electrochemical Society. During a sabbatical
leave at Cal Tech in 1976, Dick formulated a new
program in electro-organic synthesis, and intro-
duced rigorous chemical engineering concepts to
electro-organic reactor design and scale-up. These
programs subsequently led to interactions with
Robert Sani, at the University of Colorado, in
finite element calculations of electrode shape
evolution; with Mark Stadtherr, at Illinois, on
electrolytic process simulation and optimization;
and with Theodore Beck, of the Electrochemical
Technology Corp. in Seattle, on corrosion. In 1983,
Dick again used his sabbatical leave, this time at
the University of Washington in Seattle, to de-
velop a research program in plasma reactor de-
sign, where potential field and convective diffusion
phenomena play a critical role.
Working with Steve Perusich, Dick investigates trans-
port processes during corrosion. Here, they use focused
ultrasound to trigger breakdown of protective surface
films, and then study film repair in the presence of
fluid flow.
A consequence of these broad and continuing
interests is that Dick's research program is by now
very large. Last Fall his group included twenty
graduate students and a half dozen undergradu-
ate laboratory assistants. Of necessity, a group of
such size demands a meticulous management of
time and resources. Dick is quick to point out that
a major factor in this regard is the excellent repu-
tation with which Illinois attracts truly outstand-
ing graduate students. In addition, it is Dick's
philosophy that, "you can't teach research
creativity by telling everyone what to do." He
gives students a great deal of independence in the
pursuit of their thesis research, but demands
high standards of commitment, knowledge of the
literature, and developing intuitive prowess for
linking mathematics to the physical world. One of
Dick's former research students observes that "he
instills by personal example a deep commitment
for achieving a high level of innovation and techni-
cal excellence." Another former student notes,
"The advice Dick gave me in graduate school in
all areas, technical and non-technical, has helped
me immensely in my professional career."

Dick supervises four seminars a week for his graduate
students. Shown here (l. to r.) are Steve Lott, Mark
Greenlaw, Dick, Bob Schroeder, Demetre Economou,
and Kurt Hebert.
To a significant extent, Dick's early years in
music have shaped his character and attitudes.
Dick and his brother Ed grew up in a family music
business where performing came at a young age at
the encouragement of Dad, the pro, and Mom, the
supporter. Dad, Ed and Dick started playing pro-
fessionally when Dick was twelve; by the time
he was sixteen, they had performed throughout
the East and had cut numerous records. Mean-
while, back at the family studio, Dick taught
piano, guitar, bass, and vibes, helped run the
wholesale and retail businesses, and did much of
the art work for Dad's teaching publications. He
turned down a four-year organ scholarship to
attend Lafayette College to study chemical engi-
neering, but nevertheless performed on over three
hundred occasions in the college touring choir, in
a barbershop quartet, in a jazz group, and as a
solo pianist at weddings and receptions.
The time and energies invested in public per-
formance, in music teaching, and in business-
related affairs paid invaluable dividends for
Dick's management of his massive research effort
today. By the way, his brother is also a chemical
engineer with Air Products & Chemicals. Ed is
Manager of Technical Affairs for the Industrial
Gas Division and has responsibility for safety and
operating procedures, process engineering, quality
assurance, engineering standards, and environ-
mental compliance.
Dick takes it as given that competence
in research both requires and demands excellence
in the teaching classroom. With a repertoire of
a dozen lecture courses, he takes special pleasure
in teaching the subjects and in dealing with
students on a personal basis. Thinking back on
his own training, he recalls that "I have been
extremely fortunate to have had teachers who
took a personal interest in me and who inspired
me to standards which were beyond my awareness
at the time. Sometimes those feelings of in-
spiration came from only brief moments in their
presence when I felt that their entire energies
were directed toward giving me an appreciation
of the subject matter." As a result, no matter
how hectic the day, in the classroom or in his
office, Dick will always give a student his total
attention. His efforts were rewarded in 1982 with
the Teaching Excellence Award of the School of
Chemical Sciences at Illinois.
Active in professional pursuits, Dick is the
youngest Vice President in the history of the
82-year-old Electrochemical Society, and will succeed
to the presidency in 1985. He is also a divisional
editor of the Society's journal. In the AIChE, he
founded a group in 1974 for programming sym-
posia in electrochemical engineering, and has also
served as chairman of the Heat Transfer and
Energy Conversion Division of the Institute. To
quote one of Dick's colleagues, he applies "the
same enthusiasm, integrity, and competence to
Society affairs as he has to his own students and
research."
These experiences, along with extensive con-
sulting activity, serve as critical elements in the
continual upgrading of teaching and research.
With this activity, he has averaged an off-campus
seminar every two weeks during the past four
years. As one of his colleagues notes, "A hall-
mark of his work is his ability to translate results
of complex calculations into a form easily under-
standable to practical users in the field." Like the
family music business, Dick's life represents a
total commitment to advancing the electrochemical
engineering field so that others will be encouraged
to follow.
One activity has brought him a special sense
of satisfaction. Emeritus Professor Sherlock
Swann, Jr. had been at Illinois since 1927 and had,
for 45 of those years, meticulously compiled an ex-
haustively detailed bibliography of the electro-
organic synthesis literature, beginning with the
first known paper in 1801. Their friendship had
begun, understandably, with a mutual love of
music which found Dick spending evenings at
Sherlock's home listening to old 78-rpm recordings
of the masters. Through this musical bond of
shared trust, Sherlock slowly revealed his in-
credible bibliography. Dick eventually raised over
$90,000 to support a meticulous effort at indexing
and publishing the collection through The Electro-
chemical Society. The result was deeply satisfying
to Professor Swann, who passed away in 1983
after having seen an important part of his life's
work brought to fruition.

Dick's students often spring surprise parties to bid an
affectionate adieu to a graduating member of the gang.
Music continues to be the center of Dick's out-
side interests. It seemed ironic at the time that,
within a few months of deciding on a college
career in chemical engineering, his parents' music
business took an upswing and they presented him
with a Baldwin grand piano. During the years
since, his main hobby has been keeping up a sound
technique and broadening his knowledge of the
literature. A few years ago, Dick built a two-
manual harpsichord to gain access to four more
centuries of keyboard literature. His daughters,
now 14 and 16, play violin and cello and, in ad-
dition, are studying string quartets under Gabriel
Magyar, master cellist for 16 years with the
Hungarian String Quartet. Meanwhile Betty, the
Berkeley music major, continues the family tra-
dition by operating her own music studio.
In summary, Dick has made a significant
contribution by identifying electrochemical phe-
nomena where chemical engineering concepts
find welcome application. He has helped unify
diverse electrochemical subfields so that inter-
communication between them has been promoted.
Through his research students and his professional
activities, he has contributed significantly to the
broadening horizon of chemical engineering.
book reviews
MASS TRANSFER IN ENGINEERING
PRACTICE
By Aksel L. Lydersen
John Wiley & Sons, 1983, xiii + 321 pgs. $39.95
Reviewed by F. L. Rawling, Jr.
E.I. Du Pont de Nemours & Co., Inc.
This book is a companion volume to the
author's previous book "Fluid Flow and Heat
Transfer" (John Wiley & Sons, 1979). The aim
of the present volume is to present a short re-
fresher course in those areas of unit operations
specifically dealing with mass transfer. The book
consists of eight chapters: an introductory
chapter on the principles of diffusion and seven
chapters covering distillation, gas absorption and
desorption, liquid-liquid extraction and leaching,
humidification, drying of solids, adsorption and
ion exchange, and crystallization. The introduc-
tory chapter on the principles of diffusion pro-
vides a summary of the major equations together
with a short discussion of the various types of
diffusion, i.e. diffusion with bulk of mass in
motion, eddy diffusion, molecular diffusion in
liquids, etc. A short discussion of the two film
theory and the penetration theory is also pre-
sented. No attempt is made at providing a funda-
mental treatment of the subject of diffusion;
rather, reference is made to the literature. Several
problems, typical of those encountered in in-
dustry are worked out in detail. There are four
problems to be worked by the reader. The chapter
ends with a good bibliography, although half the
references are pre-1970.
Approximately two-thirds of the book is con-
cerned with staged operations, reflecting the in-
dustrial importance of this type of process. In
general, each chapter follows the same outline:
a short discussion of the theory involved together
with the relevant equations, a discussion of the
unit operation presenting the assumptions in-
volved and the major design equations, a very
general discussion on the various types of equip-
ment employed, a series of worked examples, a
set of problems to be worked by the reader, and a
bibliography.
The worked examples in each chapter make
this book worthwhile. They are well chosen to
illustrate industrial problems and are worked out
in detail, giving the assumptions and reasoning
involved in arriving at a solution. In a few in-
stances, a programmable calculator (Hewlett-
Packard) is used in the solution of a problem. The
calculator program is given.
I believe the book fulfills its goal, i.e. a re-
fresher course in mass transfer. The many refer-
ences adequately direct the user to the funda-
mental literature. Practicing engineers faced with
a problem in an area of mass transfer that they
have not been involved with for some time will
find this a good, succinct review. Students will
find the worked examples illuminating. In-
structors should find this book to be a useful
adjunct to their course.

POSITIONS AVAILABLE

Use CEE's reasonable rates to advertise. Minimum rate
% page $60; each additional column inch $25.

OKLAHOMA STATE UNIVERSITY

Chemical Engineering: Assistant, Associate, or Full
Professor Position. This is a tenure-track position and
will be approximately half-time teaching and half-time
research. We will help the successful candidate establish
research by providing initiation funds, co-investigation
opportunities with senior faculty and proposal
preparation-processing assistance from our Office of
Engineering Research. Candidates must possess an earned
Ph.D. from an accredited Department or School of Chemical
Engineering or have a Ph.D. in related areas and have
strongly related qualifications. We welcome applications
from candidates with competencies and interest in any
field of chemical engineering, but especially seek those
with strengths in design and computer applications. This
position is available as early as July 1984. Applications
will be received through March 16, 1984. Please send your
resume and list of three references to Professor Billy L.
Crynes, Head, School of Chemical Engineering, 423
Engineering North, Oklahoma State University, Stillwater,
OK 74078. Calls for additional information invited. OSU is
an equal opportunity/affirmative action employer.
laboratory
A GRAND SALE:
$12 For A Dozen Experiments In CRE
ZHANG GUO-TAI* AND
HAU SHAU-DRANG**
Oregon State University
Corvallis, OR 97331
WE HAVE NOTICED THAT undergraduate chemi-
cal engineering laboratories in the United
States commonly make use of experiments in unit
operations, instrumentation and control; but that
experiments in chemical reaction engineering
(CRE) are very rare. This is understandable be-
cause such experiments usually require an advanced
level of understanding, are rather complex to set
up, and are more involved to operate.
We would like to introduce a whole class of
experiments which require very simple and in-
expensive equipment and which illustrate one of
the basic problems of chemical reaction engineer-
ing: the development of a kinetic rate equation
from laboratory data. In essence, the student takes
laboratory data, guesses a kinetic equation, tests
its fit to the data and, if this is satisfactory, de-
termines the corresponding rate constants.
Basically, we use a hydraulic analog. We will
illustrate this with the simplest case: the fitting
of a first order decomposition, A → R.
Connect an ordinary glass capillary to a burette
as shown in Fig. 1. Fill the burette with water,
at time zero let the water flow out, and record
the change in volume as time progresses.
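The analog is first order because, for laminar flow, the
capillary throughput is proportional to the liquid head,
and in a uniform burette the head is proportional to the
volume remaining, so dV/dt = -kV. A quick Python check of
this reasoning (all dimensions below are invented; the
article prescribes no particular sizes):

import numpy as np

# Hagen-Poiseuille: Q = (pi r^4 rho g / (8 mu L)) * h, and
# h = V/A for a uniform burette, so dV/dt = -k V.
r, L = 0.6e-3, 0.30                  # capillary radius, length (m)
rho, mu, g = 1000.0, 1.0e-3, 9.81    # water, SI units
A = 1.0e-4                           # burette cross-section (m2)

k = np.pi * r**4 * rho * g / (8.0 * mu * L * A)
print(f"k = {k:.4f} 1/s, half-time = {np.log(2.0)/k:.0f} s")

t = np.linspace(0.0, 120.0, 7)
print(np.round(50.0 * np.exp(-k * t), 1))   # burette reading, cm3

With these made-up dimensions the half-time comes out near
the 30-40 sec the authors recommend for the first
experiment.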
We would like to introduce
a whole class of experiments which
require very simple and inexpensive equipment
and which illustrate one of the basic problems of
chemical reaction engineering.
*On leave from Shanghai Institute of Chemical Tech-
nology, China.
**On leave from Sichuan University, Chengdu, China.

Zhang Guo-tai was born in Shanghai, China, in 1943. He
graduated from the Shanghai Institute of Chemical
Technology in 1965 and is an instructor there. His teaching
and research activities center on the theory of chemical
kinetics and on reactor design. He was at Oregon State
University on a two year visiting faculty appointment,
sponsored by the Chinese government.

Hau Shau-Drang is an instructor in the chemistry department
at Sichuan University, China, where he is responsible for
the teaching of basic chemical engineering subjects to all
chemistry students, a normal feature of Chinese
universities. His duties included being superintendent of
the chemical factory, and later, the pharmaceutical
factory, owned and run by the chemistry department. Mr. Hau
held a courtesy faculty appointment at Oregon State
University where he studied chemical reaction engineering,
on a Chinese government grant.
FIGURE 1. Experimental set up to represent the first
order decomposition of reactant A, or A → R. (At the
start V = 50 cm3; the horizontal capillary should be
level with the zero volume reading on the burette.)
The student is told to view the experiment of
Fig. 1 as a batch reactor in which reactant A
disappears to form product R. The volume read-
ing on the burette in cm3 is to be considered as a
concentration of reactant in mole/m3. Thus the
experiment of Fig. 1 is to be treated as shown in
Fig. 2.

FIGURE 2. Reactor analog to the hydraulic experiment
of Fig. 1: a batch reactor for the reaction A → R,
with initial reactant concentration 50 mol/m3.
By following the reactant concentration
(actually the volume of water in the burette)
versus time the student is to determine the order
of reaction and the value of the rate constant. If
the experiment is set up properly, one will find
that the data fits first order kinetics.
The student sooner or later guesses first order
kinetics, integrates the rate equation to give
ln(C0/C) = kt, plots the logarithm of concentration
versus time, and from this evaluates the rate
constant. Thus he learns how to test kinetic
models. Of course, the length and diameter of
capillary will determine the value of the rate
constant.
The experiment is so simple and quick to do
one can incorporate a lesson in statistical analysis
with it. Ask the student to repeat the experiment
a number of times, find the rate constant, and also
the 95% confidence interval for the rate constant.
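A short Python script of the kind a student might write
for this analysis (a sketch only: the burette readings
below are synthetic, and the variable names are ours, not
the authors'):

import numpy as np

# Burette readings (cm3), treated as concentrations
# (mol/m3), for three repeat runs; times in seconds.
t = np.array([0.0, 15.0, 30.0, 45.0, 60.0, 90.0])
runs = [np.array([50.0, 36.8, 27.1, 19.9, 14.7, 8.0]),
        np.array([50.0, 37.5, 26.5, 20.3, 15.1, 7.7]),
        np.array([50.0, 36.1, 27.6, 19.4, 14.4, 8.3])]

# Fit ln(C0/C) = k t for each run (least squares, zero intercept).
ks = []
for C in runs:
    y = np.log(C[0] / C)
    ks.append(np.sum(t * y) / np.sum(t * t))
ks = np.array(ks)

# 95% confidence interval from the scatter between repeat
# runs; 4.303 is Student's t for 2 degrees of freedom.
kbar, s = ks.mean(), ks.std(ddof=1)
half = 4.303 * s / np.sqrt(len(ks))
print(f"k = {kbar:.4f} 1/s  (95% CI {kbar-half:.4f} to {kbar+half:.4f})")

A straight-line plot of ln(C0/C) against t, with the slope
read off graphically, is of course just as good for the
laboratory report.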
This is the simplest of a whole class of experi-
ments that can be done with burettes and capil-
laries. Fig. 3 shows some of the many other re-
action schemes that may be used.
There may be more than one way to test a
rate form and fit the rate constants. It challenges
the student's initiative and ingenuity to see how
he approaches his particular problem. For
example: for reactions in series, A → R → S, he can
try to follow the concentrations directly, he may
follow concentration ratios and fit them to charts
as shown in Levenspiel [1], or he may try to use
the conditions when the intermediate hits its
maximum value. He soon finds that some ap-
proaches are much more discriminating than
others.
SUGGESTIONS FOR SETTING UP THE EXPERIMENTS
WE HAVE TESTED ALL THE experimental vari-
ations of Fig. 3 in the laboratory ourselves,
and we have found that observation of the follow-
ing simple precautions will result in excellent
agreement of experiment with theory.
* Be sure that the capillary outlet is level with the zero
line of the burette. Check this by filling the burette
and letting water run out until equilibrium is achieved.
* Do not have any restriction between burette and
capillary comparable with that of the capillary itself.
* Verify that laminar flow exists in the capillary. For an
ordinary capillary and burette this condition is well
satisfied.
* Before starting an experiment pour water through the
burettes so as to wet them. Also see that water does
not flow back along the bottom of the capillary. Use a
rubber band or a Chinese teapot spout dripper at the
capillary outlet.
* If the capillary is not long enough then the run time
will be awkwardly short; if too long, a lot of time is
wasted. The following suggestions give convenient time
scales and capillary lengths.
Let T1 be the time required for the water in the first
burette to drop to half its initial height and let the
corresponding length of capillary be L1.
For the first experiment of Fig. 3 we find that
T1 = 30-40 sec is about right. For other experiments
the appropriate time scales and capillary lengths are
shown in Table 1.
TABLE 1
Recommended experimental conditions

Reaction Scheme      T1 (sec)   Capillary lengths
Shown in Fig. 3
Case 1                30-40     L1
Case 2                20-30     L2 = 2-6 L1
Case 3                30-40     L2 = 2-4 L1
Case 4                20-30     L2 = 4 L1, L3 = 2-4 L1
Case 5                30-40     L2 = 2-4 L1, L3 = 2 L1
Case 6                20-30     L2 = 4-5 L1, L3 = 2-4 L1
Case 7                20-30     L2 = 4 L1, L3 = 2-4 L1, L4 = 8-10 L1
Case 8a,b             30-40     L1
Case 9                20-30     L3 = 4 L1
Case 10               30-40     L2 = 4 L1
Case 11a,b            30-40     L2 = 3-4 L1
Case 12               30-40     L2 = 2-4 L1, L3 = 6-8 L1
FIGURE 3. Some reaction schemes. (Twelve cases of
networks of first order reactions; the scheme diagrams
are not legibly reproduced here.)
NOTES: In 8a, 9, 10, 11, use different diameter burettes
to obtain different rate constants for the forward and
reverse reactions. Be sure to take the volumetric burette
reading, not height.
8b. One can use just one burette if one locates the
capillary at a height above the zero reading on the
burette.
11b. One can use 2 burettes if one locates the second
capillary somewhat above the zero reading on the burettes.
By preparing a number of capillaries and by using
various combinations of burettes the laboratory
instructor can insure that no two laboratory groups
will have the same experiment to perform, even in the
giant classes which are now being processed.
Finally, a nice feature of this set of experi-
ments is that the student most likely will be led
to perform an integration of the performance
equation for the batch reactor before he can test
his guess with experiment.
CONCLUSION
WE HAVE SHOWN HOW A FEW burettes and
capillaries, properly connected, can be the
basis for a large number of simple experiments
to teach the principles of data fitting in chemical
reaction engineering. These experiments may be
simple but they are not trivial.
REFERENCES
1. O. Levenspiel, Chemical Reaction Engineering, 2nd
Ed., Figure 15, page 191, Wiley, 1972.
APPENDIX
1. Many of the kinetic models of Fig. 3 (cases 1
to 6) are special cases of the Denbigh reaction
scheme (case 7). The integrated form for this
kinetic model, after appropriate manipulation,
is found to be

CA/CA0 = exp(-K1 t)

CR/CA0 = [k1/(K2 - K1)] [exp(-K1 t) - exp(-K2 t)]

CT/CA0 = (k2/K1) [1 - exp(-K1 t)]

CS/CA0 = [k1 k3/(K2 - K1)] { [1 - exp(-K1 t)]/K1
         - [1 - exp(-K2 t)]/K2 }

CU/CA0 = [k1 k4/(K2 - K1)] { [1 - exp(-K1 t)]/K1
         - [1 - exp(-K2 t)]/K2 }

where

K1 = k1 + k2 and K2 = k3 + k4

The conditions when the intermediate is at its
maximum value are then

CR,max/CA0 = (k1/K1) (K1/K2)^[K2/(K2 - K1)]

and

t(for CR,max) = ln(K2/K1)/(K2 - K1)
These expressions may be useful for the instructor
as a check of the student's work.
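For instructors who would rather generate the check values
numerically, a short Python sketch of these expressions
(the rate constants are arbitrary illustrative values, not
taken from the article):

import numpy as np

# Denbigh scheme (case 7): A -> R (k1), A -> T (k2),
# R -> S (k3), R -> U (k4). Invented rate constants, 1/s.
k1, k2, k3, k4 = 0.04, 0.01, 0.02, 0.01
K1, K2 = k1 + k2, k3 + k4          # K1 != K2 assumed

def profiles(t):
    """Dimensionless concentrations C/CA0 at time t."""
    e1, e2 = np.exp(-K1 * t), np.exp(-K2 * t)
    CA = e1
    CR = k1 / (K2 - K1) * (e1 - e2)
    CT = k2 / K1 * (1.0 - e1)
    CS = k1 * k3 / (K2 - K1) * ((1.0 - e1) / K1 - (1.0 - e2) / K2)
    CU = k1 * k4 / (K2 - K1) * ((1.0 - e1) / K1 - (1.0 - e2) / K2)
    return CA, CR, CT, CS, CU

# Time and height of the maximum of the intermediate R:
t_max = np.log(K2 / K1) / (K2 - K1)
CR_max = (k1 / K1) * (K1 / K2) ** (K2 / (K2 - K1))
print(f"t_max = {t_max:.1f} s, CR_max/CA0 = {CR_max:.3f}")
print(f"profiles check: CR/CA0(t_max) = {profiles(t_max)[1]:.3f}")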
2. All the kinetic equations in Fig. 3 involve
systems of first order reactions and are con-
veniently solved, either by integration or by com-
puter simulation for those who know how to talk
to these machines. In a later paper we will con-
sider non-linear systems and reaction orders
different from unity.
ACKNOWLEDGMENTS
We would like to thank our advisor, Professor
Levenspiel, for suggesting that we develop this
series of experiments, and we would like to recog-
nize Professor Jodra of the University of Madrid
for indirectly bringing this type of experiment to
our attention.
classroom
TWO COMPUTER PROGRAMS
FOR EQUIPMENT COST ESTIMATION AND
ECONOMIC EVALUATION OF CHEMICAL PROCESSES
CARLOS J. KURI AND
ARMANDO B. CORRIPIO
Louisiana State University
Baton Rouge, LA 70803

Carlos J. Kuri is Chairman of the Board and Chief Executive
Officer of IMEX Corporation in Houston, Texas. He is a
native of El Salvador and holds a B.S. degree in chemical
engineering from the University of El Salvador and an M.S.
degree, also in chemical engineering, from LSU. He has
practiced engineering for the Salvadorean Institute for
Industrial Development and for Barnard and Burke Consulting
Engineers of Baton Rouge, Louisiana. He has also taught
engineering courses at the University of El Salvador and at
LSU. Mr. Kuri is married and the proud father of two
children.

Armando B. Corripio is professor of chemical engineering at
Louisiana State University. For the past fourteen years he
has taught courses and directed research in the areas of
computer process control, automatic control theory and
process simulation and optimization. He has been actively
involved in consulting for several industrial
organizations, has authored and coauthored over seventy
technical articles and presentations, and has presented
over seventy short courses and seminars for ISA, AIChE, and
other organizations. He is a member of ISA, AIChE, The
Society for Computer Simulation, and other professional and
honorary societies. He is also a registered professional
engineer, married, and proud father of four children. In
his spare time he plays tennis, swims and coaches a youth
soccer team.
IN RECENT YEARS SEVERAL cost estimation and
economic evaluation computer programs have
been developed, including those associated with
ASPEN [2, 3, 4], Monsanto's FLOWTRAN [10],
Purdue's PCOST [11], and others. However, the
fact that these programs are not readily available
to most colleges and universities motivated this
work: the development of a cost estimation and
economic evaluation computer program with the
latest information in the field, easy to use and by
all means suited to fulfill the requirements of a
senior process design course.
The algorithms used for the cost estimation
computer program were obtained from the
ASPEN Project, eleventh, thirteenth and
fourteenth quarterly progress reports [2, 3, 4].
These algorithms are based on cost data for 1979.
EQUIPMENT COSTING PROGRAM
The equipment costing program is modular in
design so that it is relatively easy to add equip-
ment classes as new costing models are developed.
It is also relatively easy to update the cost cor-
relations for existing equipment classes without
affecting other classes. A schematic diagram of
the program modular structure is given in Fig. 1.
A feature of the general design that is worth
mentioning is the procedure for handling input
data errors. When an error occurs in the specifica-
tions for a given equipment item, the calculated
cost of that item is the only one affected. In other
words, the program can recover and continue to
calculate the costs of the items that follow. This
procedure is designed so that the program can
detect as many input data errors as possible in a
single run, as opposed to detecting one error per
run.
FIGURE 1. Modular structure of equipment cost estima-
tion program.

Equipment Cost Correlations. The basic cor-
relation for the base cost of a piece of equipment
is usually of the form:
ln CB = a1 + a2 ln S + a3 (ln S)^2

where CB = the base equipment cost per unit
      S = the equipment size (or duty parameter) per unit
      a1, a2, a3 = the cost correlation coefficients.
The base cost is used as a basis to compute the
actual estimated cost, the installation materials
cost and the installation labor hours. It is usually
the cost of the equipment in carbon steel for a
common design type and pressure rating and thus
independent of the equipment design type, the
material of construction and the pressure rating.
The estimated equipment cost is then calculated
by the following formula
CE = CB fD fM fP

where CE = the estimated equipment cost
      fD = the design type cost factor (if applicable)
      fM = the material of construction cost factor
      fP = the pressure rating factor (if applicable)
If the equipment size is larger than the cor-
relation upper limit, the function is extrapolated
at constant cost per unit size
CB = CBmax (S/Smax)

where CBmax = the maximum cost, at the maximum size
      Smax = the maximum size for which the
             correlation is valid.
If the equipment size per unit is less than the
correlation lower limit, the cost per unit is set
equal to the minimum
CB = CBmin

where CBmin = the minimum cost for which the
              correlation is valid.
The equipment cost is adjusted to the specified
escalation index in order to correct for inflation.
The Chemical Engineering Fabricated Equip-
ment Index [5] is used for this purpose.
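In outline the costing chain is short. The Python sketch
below shows the logic only; the coefficients, factor
values, size limits, and index values are invented
placeholders, not the ASPEN correlations:

import math

def equipment_cost(S, a1, a2, a3, S_min, S_max, CB_min,
                   fD=1.0, fM=1.0, fP=1.0,
                   index=259.9, base_index=259.9):
    """CE = CB*fD*fM*fP, escalated to the specified index.
    Base cost: ln CB = a1 + a2 ln S + a3 (ln S)^2 on
    [S_min, S_max]; above S_max extrapolate at constant
    cost per unit size, below S_min hold the minimum."""
    if S > S_max:
        lnS = math.log(S_max)
        CB_max = math.exp(a1 + a2 * lnS + a3 * lnS ** 2)
        CB = CB_max * (S / S_max)     # constant cost per unit size
    elif S < S_min:
        CB = CB_min                   # minimum valid correlation cost
    else:
        lnS = math.log(S)
        CB = math.exp(a1 + a2 * lnS + a3 * lnS ** 2)
    CE = CB * fD * fM * fP            # design, material, pressure factors
    return CE * index / base_index    # inflation correction

# Hypothetical exchanger: 1,200 ft2 against a correlation
# valid only to 1,000 ft2, with material and pressure factors.
print(round(equipment_cost(1200.0, a1=6.0, a2=0.65, a3=-0.01,
                           S_min=100.0, S_max=1000.0, CB_min=3000.0,
                           fM=1.7, fP=1.1, index=300.0)))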
Input Data Specifications. A sample of the
input specifications for the equipment costing pro-
gram is shown in Table 1. The data on this table
illustrate the costing of seven different equipment
items, organized in card-image (80-column)
records. The first record of specifications for each
equipment item is easily recognized by the asterisk
(*) in column one. This code provides a key for the
detection by the program of missing records or
of records out of sequence.

TABLE 1. Sample of Input Data for Equipment Costing
Program. (Card-image records specifying the seven items
of Table 2 for 1979 costing: a cyclone, an industrial
fan, a heat exchanger, a horizontal vessel, a centrifugal
pump, a field-erected tank, and a tray tower, followed by
*SUM and *END records; the fixed-format fields are not
legibly reproduced here.)
Discussion of Cost Estimation Results. The
results of the equipment cost estimation program
for the items specified in Table 1 are compared in
Table 2 with costs for similar equipment items
that have been reported in the literature. Most of
the literature costs are from Peters and Timmer-
haus [7] which is a widely used text for process
design courses. All of the literature costs have
been escalated to a Chemical Engineering Fabri-
cated Equipment Index of 259.9 (1979) for com-
parison. The agreement between the program
costs and the literature costs is quite good in most
cases and within the accuracy of preliminary
study estimates. The largest discrepancies are in
the cyclone and tray tower costs. For each of these
cases the graphs in Peters and Timmerhaus had
to be extrapolated, which may account for the
discrepancy.

TABLE 2. Comparison of Equipment Cost Estimation Results

1. Cyclone: 8.5 m3/s (18,000 cfm); 6,900 N/m2 (28 in
   water); excluding blower and motor.
   Program cost, 1979: $3,380. Literature cost, 1979:
   $5,300. Peters and Timmerhaus [7], p. 599;
   extrapolation required.
2. Fan: 16.50 m3/s (35,000 cfm); 1,870 N/m2 (7.5 in
   water); 323 K (121 F); carbon steel; explosion-proof
   motor; belt drive coupling.
   Program: $8,360. Literature: $8,500. Richardson [9].
3. Heat exchanger: 1,000 ft2; 1.034E6 N/m2 (150 psi);
   stainless 316; U-tube.
   Program: $29,100. Literature: $33,000. Peters and
   Timmerhaus [7], p. 670.
4. Horizontal vessel: 1.034E6 N/m2 (150 psi); 9 ft
   diameter; 30 ft long.
   Program: $28,600. Literature: $25,500. Pikulik and
   Diaz [8].
5. Centrifugal pump: 6.9E5 N/m2 (100 psi); 62.4 lb/ft3;
   23.74 ft3/min (178 GPM); totally enclosed fan-cooled
   electric motor.
   Program: $1,830. Literature: $1,800 (AVS, American
   Volunteer Standard). Peters and Timmerhaus [7], p. 557.
6. Storage tank: 1,893 m3 (500,000 gal); carbon steel;
   field-erected.
   Program: $79,600. Literature: $71,000 (cone roof tank).
   Peters and Timmerhaus [7], p. 573.
7. Tray tower: 14 ft diameter; TTL = 130 ft; stainless
   304; 75 valve trays.
   Program: $859,000. Literature: $657,000. Peters and
   Timmerhaus [7], p. 768; extrapolation required.
ECONOMIC EVALUATION PROGRAM
An acceptable plant design must present a
process that is capable of operating under con-
ditions that will yield a profit. The purpose of the
economic evaluation computer program is to calcu-
late two profitability indices: the net present value
and the internal rate of return. These two indices
are based on discounted cash flow techniques,
taking into consideration the time value of money.
Net Present Value (NPV).

NPV = Σ (k=0 to n) NCFk / [(1 + i)(1 + RINF)]^k

where NCFk = the net cash flow for the kth year
      i = the effective annual rate of return
      RINF = the annual inflation rate
      n = the number of years of duration of the project.
Internal Rate of Return (IRR). This is the
rate of return that equates the present value of
the expected future cash flows or receipts to the
initial capital outlay. Normally a trial and error
procedure or root finding technique is required
to find the discount rate that forces the NPV to
zero.
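In a modern language the two indices take only a few
lines. A minimal Python sketch (the cash flow series is
arbitrary; the actual program, of course, predates
Python):

def npv(cash_flows, i, rinf=0.0):
    """NPV of net cash flows NCF_0..NCF_n, discounted by
    the rate of return i and the inflation rate RINF."""
    return sum(ncf / ((1.0 + i) * (1.0 + rinf)) ** k
               for k, ncf in enumerate(cash_flows))

def irr(cash_flows, lo=-0.99, hi=10.0, tol=1e-8):
    """Discount rate that forces NPV to zero, by bisection
    (assumes a sign change between lo and hi)."""
    f_lo = npv(cash_flows, lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f_lo * npv(cash_flows, mid) <= 0.0:
            hi = mid
        else:
            lo = mid
            f_lo = npv(cash_flows, lo)
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Arbitrary project: one outlay, then ten years of receipts.
flows = [-4_400_000] + [1_200_000] * 10
print(f"NPV at 20% = {npv(flows, 0.20):,.0f}")
print(f"IRR = {irr(flows):.2%}")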
To be more realistic in the calculation of these
two indices of profitability, the effect of inflation
is included. Failure to at least try to predict in-
flation rates and take them into account can
greatly distort project economics, especially at
the double-digit rates that have become common
throughout the world.
The procedure used in the program in com-
puting the indices of profitability is described in
the text by Bussey [1].
Economic Evaluation Program Results. Results
of the economic evaluation program for a sample
case are presented in Table 3. The problem is to
estimate the profitability of a solids processing
plant. Total purchased equipment cost is estimated
at $3,200,000, with an economic life of 10 years.
An interest rate of 10%, inflation rate of 8% and
tax rate of 48% are specified. Base net annual
sales are estimated at $22,634,000 with a fixed
annual operating cost of $3,200,000 and a variable
annual operating cost of $13,200,000 at 100%
production. The variable annual operating cost is
assumed to be proportional to the production rate.
Percentages of production for the ten years of
operation are as follows: 57, 66, 76, 87, 100, 100,
100, 100, 100, 100. Additional input data are the
percentage of total investment financed by debt,
70%, the life of the loan, 10 years, and the depreci-
able life, 10 years. Depreciation is computed by
the double-declining balance method with a salvage
value of $320,000. The program input data are
entered in free format.
The columns of the cash flow table (Table 3)
summarize the major components of the cash flow
for each year of operation. The numbers represent
annual amounts in inflated dollars.

TABLE 3. Sample of Output Results for Economic Evaluation
Program: Profitability. (For each year, 0 through 10: net
sales, operating cost, gross income, interest expense,
depreciation expense, net income after tax, Section 1231
cash flow, net cash flow, and present value increment.
Summary lines: net present value at 20.0 percent rate of
return, $3,889,614; discounted cash flow rate of return,
38.19 percent; inflation rate, 8.00 percent; tax rate,
48.00 percent. The year-by-year figures are not legibly
reproduced here.)
Case Studies. A series of results obtained with
the economic evaluation program are summarized
in Tables 4 and 5. The effect of inflation on the
internal rate of return (IRR) and on the net
present value (NPV) is illustrated in Table 4 for
various financing and tax situations. Cases 1
through 4 represent "after-tax return" with a tax
rate of 48%, while cases 5 through 8 represent
"before-tax return," that is, tax rate equal to
zero. The rest of the input data are the same as
for the sample problem described above.

TABLE 4. Effect of Inflation on the Rate of Return
and the Net Present Value

Case  Inflation Rate, %  % Debt  Tax Rate, %  IRR, %  NPV @ 20%, k$
 1           0             70        48        35.76      3,250
 2           8             70        48        38.19      3,890
 3           0              0        48        17.17     -1,580
 4           8              0        48        15.56     -2,400
 5           0             70         0        46.46      7,620
 6           8             70         0        51.17      9,180
 7           0              0         0        26.56      4,350
 8           8              0         0        26.32      4,130
Comparison of cases 1 and 2 and of cases 5 and
6 shows the effect of inflation on a heavily debt-
financed project. The increase in both the IRR and
NPV is due to the fact that the net income from
the project inflates while the loan payments re-
main constant. In other words, most of the in-
flation losses are passed on to the financing
organization. Comparison of cases 3 and 4 shows
that the effect of inflation on the IRR and NPV
reverses when the project is 100% equity financed.
This is due to taxes which increase with inflation
as depreciation remains constant. Notice the nega-
tive NPV for both of these cases. This is because
the actual IRR is less than the 20% rate of re-
turn used to calculate the NPV. The obvious ad-
vantage of debt financing in this problem is due
to the low interest rate on the loan (10%). Finally,
comparison of cases 7 and 8 shows that inflation
has no effect on the before-tax return when there
is no loan. This is because all of the remaining
cash flow items are assumed to inflate at the same
rate. Depreciation has no effect on the before-tax
returns.
The effect of the depreciation method on the
IRR and on the NPV is shown in Table 5. Both
double-declining balance and sum-of-the-years'
digits produce similar results and are superior to
the straight line method. This is because the de-
preciation allowance is accelerated in the early
years of the project reducing taxes and shifting
after-tax income to the early years where it counts
more. The double-declining balance method used
by the program switches automatically to straight-
line in the later years of the project as allowed by
the rules of the Internal Revenue Service.

TABLE 5. Effect of Method of Depreciation on the Rate
of Return and the Net Present Value
(Percent debt: 70%; Tax Rate: 48%; Inflation Rate: 8%)

Depreciation Method           IRR, %   NPV @ 20%, k$
Straight-line                  34.22       3,300
Double-declining balance       38.19       3,890
Sum-of-the-years' digits       38.30       3,943
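The switching rule just described is simple to state in
code. A minimal Python sketch of it (our own illustration
of the rule, not the authors' program; figures from the
sample problem):

def ddb_schedule(cost, salvage, life):
    """Double-declining balance depreciation, switching to
    straight-line over the remaining life whenever the
    straight-line allowance becomes the larger one."""
    book, sched = cost, []
    for year in range(life):
        ddb = 2.0 / life * book
        sl = (book - salvage) / (life - year)
        dep = min(max(ddb, sl), book - salvage)  # never below salvage
        sched.append(dep)
        book -= dep
    return sched

# $3,200,000 investment, $320,000 salvage, 10-year life:
for year, dep in enumerate(ddb_schedule(3.2e6, 3.2e5, 10), 1):
    print(year, round(dep))

With these figures the allowance follows double-declining
balance in the early years and becomes a constant
straight-line amount once the switch occurs.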
CONCLUSIONS
Two computer programs have been developed
which are suitable for use by students in process
design courses. The equipment cost estimation pro-
gram is flexible, easy to use and based on the latest
cost correlations available, those from project
ASPEN. The economic evaluation program frees
the student from the tedious trial-and-error calcu-
lations which are involved in the determination
of the internal rate of return. The program is
realistic as it accounts for depreciation, income
taxes and inflation.
ACKNOWLEDGMENT
The authors wish to express their appreciation
to the staff of project ASPEN, MIT Energy
Laboratory, for the cost correlations used in the
equipment cost estimation program, and to Banco
Central de Reserva de El Salvador for the support
of Mr. Kuri.
REFERENCES
1. Bussey, L. E., The Economic Analysis of Industrial
Projects, Prentice-Hall, Inc., Englewood Cliffs, NJ,
1978.
2. Evans, L. B., et al., "Computer-Aided Industrial Pro-
cess Design," The ASPEN Project, Eleventh Quarter-
ly Progress Report, Massachusetts Institute of Tech-
nology, Cambridge, MA, March 15, 1979.
3. Evans, L. B., et al., "Computer-Aided Industrial Pro-
cess Design," The ASPEN Project, Thirteenth
Quarterly Progress Report, Massachusetts Institute
of Technology, Cambridge, MA, September 15, 1979.
4. Evans, L. B., et al., "Computer-Aided Industrial Pro-
cess Design," The ASPEN Project, Fourteenth
Quarterly Progress Report, Massachusetts Institute
of Technology, Cambridge, MA, December 15, 1979.
5. Kohn, Philip M., "CE Cost Indexes Maintain 13-year
Ascent," Chemical Engineering, Vol. 85, No. 10, May
8, 1978, pp. 189-192.
6. Kuri, C. J., "Process Equipment Cost Estimation
and Economic Evaluation," M.S. Thesis, Department
of Chemical Engineering, Louisiana State University,
Baton Rouge, Louisiana, 1980.
7. Peters, M. S., and K. D. Timmerhaus, Plant Design
and Economics for Chemical Engineers, 3rd. ed.,
McGraw-Hill, New York, 1980.
8. Pikulik, A., and H. E. Diaz, "Cost Estimating for
Major Process Equipment," Chemical Engineering,
Vol. 84, No. 22, Oct. 10, 1977, pp. 106-122.
9. Richardson Engineering Services, Process Plant Con-
struction Estimating Standards, Vol. 4, Solana Beach,
CA, 1979-80.
10. Seader, J. D., W. D. Seider, and A. C. Pauls, FLOW-
TRAN Simulation-An Introduction, 2nd Ed., CACHE
Corporation, Cambridge, MA, 1977.
11. Soni, Y., M. K. Sood, and G. V. Reklaitis, PCOST
Costing Program, Purdue University, W. Lafayette,
Indiana, May 1979.
books received
"Flame-Retardant Polymeric Materials," edited by Mena-
chem Lewin, S. M. Atlas, and Eli M. Pearce; Plenum
Publishing Corp., New York, 10013; 238 pages, $35.00
(1982)
"Advances in Cryogenic Engineering," R. W. Fast, Editor;
Plenum Publishing Corp., New York 10013; 1224 pages,
$95.00 (1982)
"Surface Chemistry of Froth Flotation," Jan Leja; Plenum
Publishing Corp., New York 10013; 758 pages, $69.50
(1982)
"Flat and Corrugated Diaphragm Design Handbook," Mario
Di Giovanni; Marcel Dekker, Inc., New York 10016; 424
pages, $55.00 (1982)
in memoriam
J. H. ERBAR
John Harold Erbar, 51, professor of chemical
engineering at Oklahoma State University, died
September 17, 1983.
Born in El Reno, OK, Erbar earned all of his
academic degrees in chemical engineering at Okla-
homa State University. Following service in the
U.S. Army, he joined Standard Oil Company and
worked in several research positions. He joined
the OSU faculty as an assistant professor of
chemical engineering in 1962 and was named full
professor in 1969. He was named Teacher of the
Year in 1970-71 and again in 1982-83.
Dr. Erbar was recognized internationally as an
expert in computer applications in chemical engi-
neering and taught courses in chemical engineer-
ing design, thermodynamics, fluid flow, stagewise
operations and others. He was a member of Omega
Chi Epsilon, AIChE, ACS, ASEE, and various
Oklahoma and national societies for professional
engineers. He was a registered professional engi-
neer in Oklahoma.
He is survived by his widow, Ruth, and a
daughter and a son.
stirred pots
The Limerick Metric
Applied to
Thermodynamics
The subject of Thermodynamics,
'Tis true, is not for pedantics.
For, tho work must be done
And sweat be not shunned,
Insight requires more than mechanics.
O'er the four Laws stands Confusion,
As their numbering is all but illusion;
For the first is not first,
Tho the first is well vers'd,
And the last is not fourth-how amusin'!
The relations of Maxwell are infamous
For prompting ill-natured remarks most
boisterous.
Their exactness is trying,
Their permutations vying
With other companions more amorous.
The compressibility of liquids and gases
Is oft devious to lads and lasses.
Relating P, V, and T
Seems difficult to see
Without perturbing the masses.
Some students have little capacity
For understanding fugacity.
Their tendency to flee
Is paradoxical, to me,
And how will they develop tenacity?
The structure of phase diagrams abound
With complexities horribly profound.
Solid fluid, triple critical,
And others more mythical,
Its very dimensions can naught but astound.
T'was once a Chem Engineer grasping
For the concept of entropy dashing
To proverbial heights;
But try as he might,
There seemed little hope of his passing!
J. M. Haile
Clemson University
Clemson, SC 29631
laboratory
NEW ADSORPTION METHODS*
PHILLIP C. WANKAT
Purdue University
West Lafayette, IN 47907

*Presented at ChE Division ASEE Summer School,
August, 1982.

Phil Wankat received his BSChE from Purdue and his PhD
from Princeton. He is currently a professor of chemical
engineering at Purdue. He is interested in teaching and
counseling, has won several teaching awards at Purdue, and
is a part-time graduate student in Education. Phil's
research interests are in the area of separation processes
with particular emphasis on cyclic separations,
two-dimensional separations, preparative chromatography,
and high gradient magnetic separation.
ADSORPTION AND ION EXCHANGE systems are used
for a variety of separations and purifications
in industry. Many different operational techniques
have been proposed for these separation schemes.
In this review we will first develop a simple
method (suitable for undergraduate and graduate
students) for following the movement of a solute
in an adsorption or ion exchange system. Then this
solute movement will be used to study a variety of
operational methods. Much of this paper appeared
previously [23].

FIGURE 1. Porosities in packed bed.
SOLUTE MOVEMENT
Consider first a bed of porous particles. The
particles have an interparticle (between different
particles) porosity of a and an intraparticle
(within a given particle) porosity of E. The total
porosity of the bed for small molecules is a +
(1-a)e. This is illustrated in Fig. 1. In addition,
large solutes will not be able to penetrate all of the
intraparticle void space. The fraction of volume
of the particle which any species can penetrate
is Kd. For a non-adsorbed species, Kd can be de-
termined from
Kd = (Ve - Vo)/Vi    (1)

where Ve is the elution volume, Vo is the external
void volume between the particles, and Vi is the
internal void volume. When the molecules are
small and can penetrate the entire intraparticle
volume, Ve = Vi + Vo and Kd = 1.0. When the
molecules are large and can penetrate none of the
intraparticle volume, Ve = Vo and Kd = 0.
As solutes migrate through the bed they can
be in the mobile fluid in the external void volume,
in the stagnant fluid inside a particle, or sorbed
to the particle. The only solutes which are moving
towards the column exit are those in the mobile
fluid. Consider the movement of an incremental
mass of solute added to a segment of the bed shown
in Fig. 1. Within this segment this incremental
amount of solute must distribute to form a change
in fluid concentration, Ac, and a change in the
amount of solute adsorbed, Aq. The amount of this
increment of solute in the mobile fluid compared
to the total amount of solute increment in this
segment is
Amt. in mobile fluid / Total amt. in segment
  = Amt. in mobile fluid / Amt. in (mobile fluid
    + stationary fluid + sorbed)    (2)

which is

Amt. in mobile fluid / Total amt. in segment
  = (Δz Ac) α Δc / {(Δz Ac) α Δc
    + (Δz Ac)(1-α) ε Δc Kd
    + (Δz Ac)(1-α)(1-ε) ρs Δq Kd}    (3)
The solid density, ρs, is included in Eq. (3) to
make the units balance. Ac is the cross-sectional
area of the bed, and z is the axial distance.
If fluid has a constant interstitial velocity, v,
then the average velocity of the solute in the bed
(the solute wave velocity) is just v times (relative
amount of time the incremental amount of solute
is in the mobile phase). Assuming a random pro-
cess of adsorption, desorption and diffusion in and
out of the stagnant fluid, the solute wave velocity
becomes
u_solute = v × (amount of solute in mobile phase /
                total amount of solute in column)    (4)
or, after rearrangement

u_solute(T) = v / {1 + [(1-α)/α] ε Kd
              + [(1-α)/α] (1-ε) ρs (Δq/Δc) Kd}    (5)

Eq. (5) represents a crude, first order description
of movement of solute in the column. With a few
additional assumptions this equation can be used
to predict the separation in the system.

The most important assumption, and the assumption
least likely to be valid, is that the solid and
fluid are locally in equilibrium. Then Δq will be
related to Δc by the equilibrium adsorption
isotherm. This assumption allows us to ignore mass
transfer effects. The second assumption is that
dispersion and diffusion are negligible; thus, all
of the solute will travel at the same average
solute velocity. These assumptions greatly
oversimplify the physical situation, but they do
allow us to make simple predictions. As long as we
don't believe these predictions must be exactly
correct, the simple model which results can be
extremely helpful in understanding separation
techniques.

For undergraduate students we limit the theory to
simple linear equilibrium of the form

q = A(T) c    (6)

where q is the adsorbed solute concentration, A(T)
is the equilibrium constant which is a function of
temperature, and c is the solute concentration in
the fluid.

For common adsorbents the amount of material
adsorbed decreases as temperature is increased.
Thus A(T) is a monotonically decreasing function
of temperature. With linear equilibrium
Δq/Δc = A(T), and Eq. (5) becomes

u_solute(T) = v / {1 + [(1-α)/α] ε Kd
              + [(1-α)/α] Kd (1-ε) ρs A(T)}    (7)
For Eq. (7) the solute wave velocity is the same
as the average solute velocity. Eq. (7) allows us
to explore the behavior of solute in the column
for a variety of operating methods.
Several facts about the movement of solute can
be deduced from Eq. (5) or Eq. (7). The highest
possible solute velocity is v, the interstitial fluid
velocity. This will occur when the molecules are
very large and Kd = 0.0. For small molecules Kd =
1.0, and with porous packing these molecules
always move slower than the interstitial velocity
even when they are not adsorbed. If adsorption is
very strong the solute will move very slowly.
When the adsorption equilibrium is linear, Eq. (7)
shows that the solute velocity does not depend on
the solute concentration. This is important and
greatly simplifies the analysis for linear equilibria.
If the equilibrium is nonlinear, Aq/Ac will depend
on the fluid concentration and Eq. (5) shows that
the solute velocity will depend on concentration.
Nonlinear equilibrium will be considered later.
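Eq. (7) is easy to evaluate numerically. A minimal Python
sketch (parameter values invented for illustration; ρs A(T)
must be dimensionless, i.e. A(T) expressed as volume per
mass):

def u_solute(v, alpha, eps, Kd, rho_s, A_T):
    """Solute wave velocity, Eq. (7), linear equilibrium."""
    r = (1.0 - alpha) / alpha
    return v / (1.0 + r * eps * Kd
                + r * Kd * (1.0 - eps) * rho_s * A_T)

v, alpha, eps = 1.0, 0.4, 0.5   # interstitial velocity, porosities
print(u_solute(v, alpha, eps, Kd=1.0, rho_s=1.5, A_T=2.0))  # adsorbed
print(u_solute(v, alpha, eps, Kd=1.0, rho_s=1.5, A_T=0.0))  # small, unadsorbed
print(u_solute(v, alpha, eps, Kd=0.0, rho_s=1.5, A_T=0.0))  # excluded: u = v

The three cases reproduce the limits just discussed: an
excluded solute moves at v, an unadsorbed small solute
moves more slowly, and adsorption slows the wave further.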
A convenient graphical representation of the
solute movement is obtained on a plot of axial
distance, z, versus time. Since the average solute
molecule moves at a velocity of u_solute(T), this
movement is shown as a line with a slope u_solute.
This is illustrated for a simple chromatographic
separation in Fig. 2. Fig. 2A shows the feed pulse
while Fig. 2B shows the solute movement in the
column. The product concentrations predicted are
shown in Fig. 2C. Note that this simple model does
not predict dispersion or zone speeding, but does
predict when the peaks exit.
If desired, zone spreading can be included, but
will conceptually complicate the model. The ad-
vantage of this model is that it is simple and can be
used to understand a variety of methods of opera-
tion.
EFFECTS OF CHANGING THERMODYNAMIC
VARIABLES
Changes in temperature, pH, ionic strength or
solvent concentration are often used to help de-
sorb and elute the solute. Changes in these
variables will change the equilibrium constant, A,
in Eqs. (6) and (7). With temperature one can
either use a jacketed bed and change the tempera-
ture of the entire bed (called the direct mode) or
he can change the temperature of the inlet stream
and have a temperature wave propagate through
the column (called the travelling wave mode). For
most of the other elution methods the travelling
wave mode is used. Elution may be done either co-
currently or countercurrently.
The wave velocities for chemicals added to the
system can be obtained from Eq. (5) or (7). The
FIGURE 2. Solute movement model for isothermal
chromatography: A) Feed pulse; B) Trace of solute
movement in column; C) Product concentrations.
velocity at which temperature moves in the column
(the thermal wave velocity) can be obtained from
an energy balance. If we can ignore the heat of
adsorption and heat of mixing and assume the
column is adiabatic, then the energy in the mobile
fluid compared to the total energy in mobile fluid,
stationary fluid, solid and the column wall in a
segment of the column is

Energy in mobile phase / Total energy in column segment
  = (Δz Ac) α ρf Cpf (Tf - Tref) /
    {(Δz Ac) [α + (1-α)ε] ρf Cpf (Tf - Tref)
     + (Δz Ac)(1-α)(1-ε) ρs Cps (Ts - Tref)
     + (Δz W) Cw (Tw - Tref)}    (8)

In Eq. (8) W is the weight of column wall per
length and Tref is any convenient reference
temperature. The wall term is only important in
laboratory scale columns. The velocity of the
thermal wave in the column is just the ratio in
Eq. (8) multiplied times the fluid velocity. After
assuming local equilibrium so that Ts = Tf = T,
and simplifying, we have

u_thermal = v / {1 + [(1-α)/α] ε
            + [(1-α)(1-ε) ρs Cps + (W/Ac) Cw]/(α ρf Cpf)}    (9)

Note that with the simplifying assumptions made
here u_thermal is independent of temperature.
Comparison of Eqs. (9) and (7) shows they have a
similar form, but there is an additional term in
Eq. (9) to account for thermal storage in the
column wall, and effectively Kd = 1.0 for energy
changes.

Just as Eqs. (5) and (7) represented the movement
of the average solute molecule, Eq. (9) represents
the average rate of movement of the thermal wave.
A more exact analysis is needed to include
dispersion and heat transfer rate effects.
On a graph of axial distance z versus time t
the thermal wave will be a straight line with a
slope u_thermal. Figure 3 illustrates elution using
temperature for counter-current desorption. In
this case a single solute is adsorbed. The feed flow
is continued until just before solute breakthrough
occurs. Then counter-current flow of a hot fluid
is used to remove solute. Upon reversing the flow
the fluid first exits at the feed temperature and
the feed concentration. When the thermal wave
breaks through, the temperature and concentra-
tion both jump. Since the adsorption equilibrium
constant is lower at high temperature the solute
velocity can be significantly greater at the higher
temperature. In actual practice the outlet tempera-
ture and concentration waves will be S-shaped
curves because of the dispersion forces.
To complete the analysis of the traveling wave
mode we need to consider the change in solute
concentration when the adsorbent temperature is
changed. The effect of temperature changes on
solute concentration can be determined by a mass
balance on a differential section of column Δz
over which the temperature changes during a time
interval Δt. This balance for one solute is

α v Δt (c2 - c1) - [α + Kd ε (1-α)] (c2 - c1) Δz
    - (1-α)(1-ε) Kd ρs (q2 - q1) Δz = 0    (10)
where 1 refers to conditions before the tempera-
ture shift and 2 to after the shift. In Eq. (10) the
first term is the in-out term, the second term is
accumulation of solute in the fluid, and the third
term is accumulation of solute on the solid. To
ensure that all material in the differential section
undergoes a temperature change, the control
volume is selected so that Δt = Δz/u_thermal. The
mass balance then becomes

[α + ε (1-α) Kd - α v/u_thermal] (c2 - c1)
    + Kd (1-α)(1-ε) ρs (q2 - q1) = 0    (11)
If we assume that solid and fluid are locally in
equilibrium and that the equilibrium isotherm is
linear, then Eq. (11) reduces to
c(TH)/c(TC) = [1/u_solute(TC) - 1/u_thermal] /
              [1/u_solute(TH) - 1/u_thermal]    (12)

FIGURE 3. Solute movement model for adsorption
followed by counter-current elution with a hot fluid:
A) Inlet concentration and temperatures; B) Trace of
solute and temperature movement in bed; C) Product
concentrations and temperatures.

In the typical liquid system u_thermal > u_solute(TH)
> u_solute(TC), and c(TH) > c(TC). Thus the solute
is concentrated during elution. This concentration
is calculated from Eq. (12), and was plotted on
Fig. 3C. Note in Figs. 3A and 3C that the overall
mass balance will be satisfied. If the equilibrium
constant A does not change very much, u_solute(TH)
≈ u_solute(TC) and there will be little change in
concentration during elution. Since A is not
usually strongly dependent on temperature, large
temperature changes are required. An alternative
is to use
a different eluant which has a major effect on A.
Eqs. (11) and (12) are still valid but with
u_eluant replacing u_thermal.

In the direct mode the entire column is heated
or cooled simultaneously. In this case u_thermal is
essentially infinite and Eq. (12) simplifies to

c(TH)/c(TC) = u_solute(TH)/u_solute(TC)    (13)

For the usual adsorbent A(T) decreases (solute
desorbs) as temperature increases. Thus u_solute
increases and, as expected, Eq. (13) predicts that
the solute concentration increases as temperature
increases.
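These relations are easy to check numerically. A minimal
Python sketch of Eqs. (7), (9), (12), and (13) (all
parameter values invented, consistent units assumed):

def u_solute(v, alpha, eps, Kd, rho_s, A_T):
    """Eq. (7): solute wave velocity, linear equilibrium."""
    r = (1.0 - alpha) / alpha
    return v / (1.0 + r * eps * Kd
                + r * Kd * (1.0 - eps) * rho_s * A_T)

def u_thermal(v, alpha, eps, rho_s, cps, rho_f, cpf,
              W_Ac=0.0, cw=0.0):
    """Eq. (9): thermal wave velocity; wall term optional."""
    r = (1.0 - alpha) / alpha
    extra = ((1.0 - alpha) * (1.0 - eps) * rho_s * cps
             + W_Ac * cw) / (alpha * rho_f * cpf)
    return v / (1.0 + r * eps + extra)

v, alpha, eps, Kd, rho_s = 1.0, 0.4, 0.5, 1.0, 1.5
uC = u_solute(v, alpha, eps, Kd, rho_s, A_T=3.0)  # cold: strong adsorption
uH = u_solute(v, alpha, eps, Kd, rho_s, A_T=1.0)  # hot: weaker adsorption
uT = u_thermal(v, alpha, eps, rho_s, cps=1.0, rho_f=1.0, cpf=4.0)

ratio_12 = (1/uC - 1/uT) / (1/uH - 1/uT)  # traveling wave mode, Eq. (12)
ratio_13 = uH / uC                        # direct mode, Eq. (13)
print(f"u_thermal={uT:.3f} > u_solute(TH)={uH:.3f} > u_solute(TC)={uC:.3f}")
print(f"c(TH)/c(TC): Eq. (12) {ratio_12:.2f}, Eq. (13) {ratio_13:.2f}")

With these invented values the velocities come out in the
order assumed for a typical liquid system, and both modes
predict enrichment of the solute on elution.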
This completes the basic analysis procedure for
the solute movement model for linear isotherms.
As presented, this is not a rigorous mathematical
model but was based on simple physical ideas.
The equations developed here can be rigorously
derived from the governing partial differential
equations by making a group of assumptions called
the local equilibrium assumptions and then using
a mathematical method called the method of
characteristics [1, 22]. Although some of the as-
sumptions required for the rigorous development
have been mentioned in passing, it will be help-
ful to list them explicitly here.
1. Homogeneous packing (no channeling).
2. Radial gradients are negligible.
3. Neglect thermal and pressure diffusion.
4. No chemical reactions except for sorption.
5. Neglect kinetic and potential energy terms.
6. No radiant heat transfer.
7. No electrical or magnetic fields.
8. No changes of phase except sorption.
9. Parameters are constant except for A.
10. Constant fluid velocity.
11a. Column is adiabatic, or
11b. Column is at controlled constant temperature.
12. Heat of adsorption is negligible.
13. Solutes do not interact.
14. Thermal and mass dispersion and diffusion are
negligible.
15. Heat and mass transfer rates are very high so that
fluid and solid are locally in equilibrium.
16. Equilibrium is linear.
This is a formidable list of assumptions. If any
of these assumptions are invalid the predictions
can be way off. The most critical assumptions are
the last three. Assumptions 14 and 15 cause the
outlet concentrations and temperatures to show
sharp jumps instead of the experimentally ob-
served S-shaped curves. Alternate mathematical
models which are more realistic but much more
complex are reviewed by Sherwood et al [19]. As-
sumption 16 can also cause physically impossible
predictions, but fortunately this assumption of
linear equilibrium is easily relaxed (see the next
section).
As we have seen this model greatly over-
simplifies the actual fluid flow and heat and mass
transfer processes occurring in the column. Be-
cause of this the predicted separation is always
better than that obtained in practice. What is this
model good for? The model is simple and can thus
be used to analyze rather complex processes. The
model does predict when the peak maximum will
exit and thus is a good guide for setting operating
variables. Since this model predicts the best
possible separation, the model can be used to de-
termine if, at its best, a separation scheme is of
interest. Since the predictions made are qualita-
tively correct, as long as the model predictions are
interpreted in a qualitative or at best semi-
quantitative sense the model is very useful.
NONLINEAR SYSTEMS
If the equilibrium isotherm is nonlinear the
basic structure developed here is still applicable,
but we must use Eqs. (5) and (11) instead of (7)
and (12). The solute velocity now depends on
both temperature and concentration. Once a
specific isotherm is determined it can be substi-
tuted into Eq. (5). For example, if the Freundlich
isotherm
$$q = A(T)\,c^{\,k}, \qquad k < 1 \tag{14}$$
is used, then
$$\lim_{\Delta c \to 0}\frac{\Delta q}{\Delta c} = \frac{\partial q}{\partial c} = k\,A(T)\,c^{\,k-1} \tag{15}$$
and
$$u_{solute} = \frac{v}{1 + [(1-\alpha)/\alpha]\,\varepsilon K_d + [(1-\alpha)/\alpha]\,K_d(1-\varepsilon)\rho_s\,k\,A(T)\,c^{\,k-1}} \tag{16}$$
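Since k < 1, the derivative in Eq. (15) falls as concentration rises, so Eq. (16) makes concentrated material move faster; the wave discussion below builds on this. A minimal Python sketch of Eq. (16), in which all parameter values are illustrative assumptions:

def u_solute_freundlich(c, v=1.0, alpha=0.4, eps=0.5, Kd=1.0,
                        rho_s=1.5, k=0.5, A=2.0):
    # Concentration-dependent solute velocity, Eq. (16), for the
    # Freundlich isotherm q = A(T)*c**k.  All default parameter
    # values are assumed, illustrative numbers.
    r = (1.0 - alpha) / alpha
    dq_dc = k * A * c ** (k - 1.0)        # Eq. (15)
    return v / (1.0 + r * eps * Kd + r * (1.0 - eps) * rho_s * Kd * dq_dc)

for c in (0.01, 0.1, 1.0):
    print(f"c = {c:5.2f}  ->  u_solute = {u_solute_freundlich(c):.3f}")
# Dilute fluid moves slowly and concentrated fluid quickly: a
# concentrated feed therefore sharpens into a shock (Fig. 5), while
# displacement by dilute fluid spreads into a diffuse wave (Fig. 4).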
TIME
FIGURE 4. Diffuse waves: A) Inlet concentration; B)
Solute movement; C) Outlet concentration.
A diffuse wave occurs when a concentrated solution is displaced by a dilute solution. This is illustrated in Fig. 4, where the outlet wave concentrations are calculated. If we try the reverse (dilute solution displaced by a concentrated solution), then the limit in Eq. (15) does not exist since Δc has a finite value. A shock wave occurs. Another way of looking at this is that Eq. (16) predicts a lower slope
displaces a dilute solution (Fig. 5A), the theory
predicts that the solute lines overlap and two
different concentrations occur simultaneously
(Fig. 5B). This is physically impossible. To avoid
this problem a mass balance is done on a finite
section of the column of length Az. This balance
is the same as Eq. (10) except 1 refers to before
the shock and 2 after the shock. Now we select the
time interval Δt = Δz/u_shock so that the shock has
passed through the entire section. Solving for the
shock wave velocity, we obtain
$$u_{shock} = \frac{v}{1 + [(1-\alpha)/\alpha]\,\varepsilon K_d + [(1-\alpha)/\alpha]\,(1-\varepsilon)K_d\,\rho_s\,[(q_2-q_1)/(c_2-c_1)]} \tag{17}$$
This is shown in Fig. 5C and the outlet concentration is calculated in Fig. 5D.

FIGURE 5. Shock wave: A) Inlet concentration; B) Solute waves following Eq. 18; C) Shock wave; D) Outlet concentration.

Nonlinear isotherms can result in interesting interactions when various shock and diffuse waves intersect. The mathematical principles involved in switching from differential to finite elements are also a good pedagogical tool for teaching graduate students. For graduate students I rigorously derive all these results using the local equilibrium model and the method of characteristics [19, 22].

COUNTER-CURRENT OPERATION I. Continuous Flow
In most chemical engineering unit operations continuous counter-current flow is used since it is usually the most efficient way to operate. Counter-current movement of solids and fluid is difficult
Continued on page 44.
Classroom
THE PROCESS DESIGN COURSES AT PENNSYLVANIA:*
Impact Of Process Simulators
WARREN D. SEIDER
University of Pennsylvania
Philadelphia, PA 19104
FOR THE PAST 35 years, a two-semester process
design sequence has been taught at the Uni-
versity of Pennsylvania. This sequence is unique
in several aspects, most notably its diversity of de-
sign projects, involving seven faculty advisors and
seven consultants from local industry. The fall
lecture course, "Introduction to Process Design,"
covers the methodology of process design and pre-
pares our students for the spring project course,
"Plant Design Project," which is intended to pro-
vide a meaningful design experience. This article
is focused on the impact of process simulators in
recent years.
In 1967, we began to introduce computing
technology and modern design strategies, princi-
pally in the sophomore course on "Material and
Energy Balances," and have gradually integrated
process simulators into the design sequence. Our
objective was to strengthen the highly successful
sequence developed by Melvin C. Molstad, A.
Norman Hixson, other faculty, and our many in-
dustrial consultants. From 1967-1973, our efforts
to develop educational materials (principally soft-
ware) far outweighed the benefits to the students.
However, in 1974, the availability of Monsanto's
FLOWTRAN Simulator finally made this step
successful. Additional industrial simulators have
since become available and the benefits to our
students now far outweigh our past efforts.
*Based on a lecture at the ASEE Summer School for
ChE faculty, Santa Barbara, CA, August, 1982.
Initially, the steps to
satisfy a societal need are summarized
e.g., the conversion of coal to
liquid and gaseous fuels.
Warren D. Seider is Associate Professor of chemical engineering
at the University of Pennsylvania. He received his B.S. Degree from
the Polytechnic Institute of Brooklyn and Ph.D. Degree from the
University of Michigan. Warren's research concentrates on mathe-
matical modelling for process design and analysis. He and his
students have developed new methods for calculation of phase and
chemical equilibria, analysis of azeotropic distillation towers, analysis
of complex reaction systems, and integration of stiff systems. Warren
teaches courses with emphasis on process analysis and design. He has
co-authored Introduction to Chemical Engineering and Computer
Calculations (with A. L. Myers) and FLOWTRAN Simulation-An Intro-
duction (with J. D. Seader and A. C. Pauls). He helped to organize
CACHE and served as the first chairman from 1971-73. In 1979 he was
elected Chairman of the CAST Division of AIChE and in 1983 he was
elected a Director of AIChE.
Design courses are normally taught last in
the chemical engineering curriculum and, hence,
the details of the lecture course depend somewhat
on the prerequisite courses and class size. Hope-
fully, this discussion of our particular format will
provide useful ideas.
FALL LECTURE COURSE:
INTRODUCTION TO PROCESS DESIGN
The outline of topics and associated lecture
hours are listed in Table 1. Two books are re-
quired: Plant Design and Economics for Chemical
Engineers [7], and FLOWTRAN Simulation-An
TABLE 1
Outline of topics. Fall lecture course.
                                              Lecture Hours
Introduction                                        1
Process Synthesis                                   1
Analysis of Flowsheets: FLOWTRAN Simulation        12
Design of Heat Exchangers                           8
Cost Estimation                                     3
Time-Value of Money, Taxes, Depreciation            3
Profitability Analysis                              2
Optimization                                        3
Heat Integration                                    4
Synthesis of Separation Processes                   2
Selection of Design Projects (for Spring
  Project Course)                                   1
Exams                                               2
Introduction [9]. In addition, materials are taken
from seven other sources which are placed on re-
serve in our library [1, 2, 3, 4, 5, 6, 11].
The course expands upon the steps in the de-
velopment of a new chemical process as shown in
Fig. 1 (based upon a similar figure in Rudd and
Watson [8]). Initially, the steps to satisfy a societal
need are summarized; e.g., the conversion of coal
to liquid and gaseous fuels. Of course, the steps
are not always carried out in the sequence shown.
For example, a sensitivity analysis is often per-
formed prior to an economic analysis. In the inte-
grated plants of today, aspects of transient and
safety analysis must also be considered in the
synthesis of the process flowsheet.
Next, the steps in the synthesis of a vinyl
chloride process are illustrated with the intent of
exposing the steps that enter into the invention
of alternative flowsheets. Fig. 2 shows the evolu-
tion of one flowsheet, beginning with selection of
the reaction path (not shown), followed by the
distribution of chemicals (matching sources and
sinks and introducing recycle), selection of the
separation steps, the temperature and phase
change operations (not shown), and, finally, the
integration into chemical process equipment. This
is based upon Chapter 3, "Process Synthesis," in
Introduction to Chemical Engineering and Com-
puter Calculations [6]. It is noteworthy that steam
is used to vaporize dichloroethane and cooling
water to cool the reaction products in a quench
operation. We emphasize the desirability of heat
integration, when feasible, but because carbon
would deposit in the evaporator tubes, a rapid
low-temperature quench, with water as the cooling medium, is necessary.
Having introduced the concepts of flowsheet
synthesis, attention is turned to the analysis of
alternative flowsheets. In practice, of course, the
two go hand-in-hand. FLOWTRAN is used princi-
pally because our book [9] is written in a tutorial
fashion, as compared with the usual User Manuals.
FLOWTRAN has been available for students on
United Computing Systems, but its usage has been
limited by the relatively high cost of commercial
computers. Recently, however, like ChemShare
(DESIGN/2000) and Simulation Sciences (PRO-
CESS), Monsanto has made FLOWTRAN avail-
able for installation on university computers,
FIGURE 1. Steps in the development of a new chemical process: a societal need leads to alternative process concepts, the synthesis of process operations and equipment, and iterative modification of the process and its conditions.
greatly improving the effectiveness of process
simulators in the design course.
Principal emphasis is given to the subroutines
to model the process units such as vapor-liquid
separators, multi-staged towers, heat exchangers,
compressors, and reactors. There are several sub-
routines to solve the equations that model each
process unit. The models vary in specifications and
rigor and it is important that the design student
understand the underlying assumptions, but not
the solution algorithm. We review the assumptions
and make recommendations concerning usage of
the subroutines listed in Fig. 3. For example, the
PUMP routine disregards the capacity-head curve
and uses the viscosity of pure water. When designing distillation towers, DSTWU is recommended to calculate the minimum number
of trays, the minimum reflux ratio, and the theo-
retical number of trays (given the reflux ratio),
followed by DISTL which uses the Edmister as-
sumptions to simulate the tower and, in some cases,
FRAKB or AFRAC to solve the MESH equations
with fewer assumptions.
The synthesis of the simulation flowsheet is
also emphasized with consideration of novel ways
of using the subroutines to analyze a process
flowsheet. For example, consider the quench pro-
cess (Fig. 4a) in which hot gases are contacted
with a cold liquid stream. Given the recycle
fraction, and assuming that the vapor and liquid
products are at equilibrium with no entrainment,
most designers would develop the simulation flow-
sheet shown in Fig. 4b. However, iterative recycle
calculations are unnecessary because the vapor
and liquid products (S3, S5) are independent of
the recirculation rate. In a more efficient simu-
lation flowsheet, the IFLSH subroutine deter-
mines the flow rate and compositions of S3 and S5
(see Fig. 4c). Then, MULPY subroutines com-
pute the flow rates for S4 and S6. Most students
use the "brute-force" approach in Fig. 4b, re-
quiring about 5-10 iterations with Wegstein's
method, before we demonstrate that the iterative
recycle calculations can be avoided.
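Wegstein's method is a secant-based acceleration of the successive-substitution sweeps used on tear streams. The toy recycle loop below is a minimal Python sketch, not FLOWTRAN code: the scalar linear loop with an assumed gain of 0.7 stands in for one pass through the mixer, reactor, and separator blocks.

def g(x):
    # One sweep through the recycle loop of Fig. 4b, collapsed to a
    # scalar: a linear "flowsheet pass" with an assumed gain of 0.7.
    return 30.0 + 0.7 * x

def direct_substitution(x, tol=1e-6, itmax=200):
    for n in range(1, itmax + 1):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new, n
        x = x_new
    return x, itmax

def wegstein(x, tol=1e-6, itmax=200):
    x_prev, g_prev = x, g(x)
    x_cur = g_prev                    # start with one substitution step
    for n in range(2, itmax + 1):
        g_cur = g(x_cur)
        if abs(g_cur - x_cur) < tol:
            return g_cur, n
        s = (g_cur - g_prev) / (x_cur - x_prev)   # secant slope
        q = s / (s - 1.0)                         # Wegstein weight
        x_prev, g_prev = x_cur, g_cur
        x_cur = q * x_cur + (1.0 - q) * g_cur     # accelerated update
    return x_cur, itmax

print("direct substitution:", direct_substitution(0.0))
print("Wegstein:           ", wegstein(0.0))

On this linear loop the secant slope is exact, so Wegstein lands on the solution in a single accelerated step while direct substitution needs about fifty sweeps; on a real, nonlinear vector tear stream the advantage is smaller (the 5-10 iterations noted above), and the acyclic flowsheet of Fig. 4c avoids the loop entirely.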
These lessons are reinforced with a problem to
simulate the reactor section of the toluene hydro-
dealkylation process [9, p. 228]. Feed toluene is
mixed with recycle toluene and a recycle gas
stream. The reaction products at 1268°F are quenched and the recycle fraction is adjusted to reduce the product temperature to 1150°F. As
above, the iterative recycle calculations can be
avoided, although most students use the brute-force approach.
Raw Materials (possibly C2H4, Cl2)
Desired Product (C2H3Cl)
a) The process synthesis problem
b) Flow sheet showing distribution of chemicals for thermal cracking of dichloroethane from chlorination of ethylene (reaction path 3)
c) Flow sheet showing separation scheme for vinyl chloride process
d) Flow sheet showing task integration for the vinyl chloride process
FIGURE 2. Synthesis of a vinyl chloride process (Myers
and Seider [6]).
The FLOWTRAN subroutine EXCH1 imple-
ments the method of thermodynamic effectiveness
(computes terminal temperatures, given the area
and overall heat transfer coefficient), and the
EXCH3 subroutine implements the log-mean
temperature difference method (computes the
area, given terminal temperatures and overall
heat transfer coefficient). Since these methods
are not covered in our course on heat and mass
transfer, the methods are derived and problems
are worked using Chapter 11 of Principles of Heat
Transfer [5] as text material. Then, the students
design a heat exchanger (e.g., 1-4 parallel-
counterflow) with correlations for the heat trans-
fer coefficients and pressure drops on the shell and
tube sides and the methods presented by Kern [4]
and Peters and Timmerhaus [7]. This exposes the
student to more detailed analysis procedures than
are available in most process simulators. Such de-
tail is recommended only when the approximate
models introduce large errors and the cost of a
heat exchanger contributes significantly to the
economics of the process.
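The two subroutines are the two classic directions of the same design problem: sizing (area from specified terminal temperatures, via the log-mean Δt) and rating (terminal temperatures from a specified area, via the counterflow effectiveness-NTU relation). The Python fragment below is a minimal sketch of those underlying relations, not FLOWTRAN source; the duty, coefficient, and stream data are all assumed numbers.

import math

def lmtd(dt1, dt2):
    # Log-mean of the two terminal temperature differences.
    return (dt1 - dt2) / math.log(dt1 / dt2) if dt1 != dt2 else dt1

# --- Sizing (EXCH3-style): area from specified terminal temperatures.
Q = 2.0e6          # Btu/hr duty (assumed)
U = 100.0          # Btu/(hr ft^2 F) overall coefficient (assumed)
A = Q / (U * lmtd(40.0, 50.0))     # assumed terminal delta-t's, F
print(f"sizing: A = {A:.0f} ft^2")

# --- Rating (EXCH1-style): outlet temperatures from a given area,
# using counterflow effectiveness-NTU relations.
C_hot, C_cold = 20000.0, 40000.0   # Btu/(hr F) capacities (assumed)
T_hot_in, T_cold_in = 300.0, 70.0  # F (assumed)
Cmin, Cmax = min(C_hot, C_cold), max(C_hot, C_cold)
Cr = Cmin / Cmax
NTU = U * A / Cmin
eff = (1 - math.exp(-NTU * (1 - Cr))) / (1 - Cr * math.exp(-NTU * (1 - Cr)))
Q_rated = eff * Cmin * (T_hot_in - T_cold_in)
print(f"rating: T_hot_out = {T_hot_in - Q_rated / C_hot:.1f} F, "
      f"T_cold_out = {T_cold_in + Q_rated / C_cold:.1f} F")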
This leads to methods of cost estimation.
FLOWTRAN has subroutines for cost estimation,
but the assumptions of the cost models are not
stated or referenced. Since no basis is available
for justifying their results, well-established and
clearly stated methods are preferred. The factored
cost methods of Guthrie [3] have been used to
date, but the factors in his article need updating.
We are evaluating the PCOST program de-
veloped at Purdue University [10] and the data
FIGURE 3. FLOWTRAN subroutines (blocks).
Flash: IFLSH (isothermal flash); AFLSH (adiabatic flash); BFLSH (general purpose flash)
Stagewise separation: SEPR (split fraction specification); DSTWU (Winn-Underwood-Gilliland distillation); DISTL (Edmister distillation); FRAKB (tray-to-tray distillation, KB method); AFRAC (tray-to-tray distillation and absorption, matrix method)
Heat exchange: HEATR (heat requirement); EXCH1 (shell and tube, method of thermodynamic effectiveness); EXCH3 (shell and tube, log-mean temp. diff. method)
Reactor: REACT (fractional conversion specification); XTNT (extent of reaction specification)
Compression: PUMP (centrifugal pump); GCOMP (compressor or turbine)
Misc.: ADD (mixer); SPLIT (stream splitter); MULPY (stream multiplier)

FIGURE 4. QUENCH process: (a) process flowsheet (hot gas quenched with cold recycled liquid); (b) iterative simulation flowsheet; (c) acyclic simulation flowsheet.
books of Woods [12]. Both give cost curves and
factors based upon more recent cost data.
For production or operating costs, the recom-
mendations of Peters and Timmerhaus in Chap. 5
are used with concentration on the direct pro-
duction costs, such as for raw materials, operat-
ing labor, and utilities. The Chemical Marketing
Reporter provides the costs of chemicals bi-weekly
(often for several locations within the United
States).
Next, the concepts of profitability analysis are
introduced, following the sequence of Peters and
Timmerhaus in Chaps. 6-9. The concepts of simple
and compound interest are applied to give the
present and future values of an investment and
to define an annuity. Then, capitalized costs are
covered to provide a basis for evaluating the cost
of equipment having different service lives. For
Continued on page 41.
curriculum
INTRODUCING THE REGULATORY PROCESS
INTO THE CHEMICAL ENGINEERING CURRICULUM
A Painless Method
FRANKLIN G. KING AND
RAMESH C. CHAWLA
Howard University
Washington, D.C. 20059
ENGINEERING FACULTY CAN NO longer doubt
that government regulations have had a major
impact on engineering practice. Public policy de-
cisions have resulted from social concerns and
have mandated engineering solutions in many
areas, such as environmental pollution, proper dis-
posal of hazardous wastes, consumer product
safety, and the control of exposure to carcino-
genic and toxic materials. Because technological
invention has such a great impact on our society,
the scope of an engineer's responsibilities must
include a sensitivity for social concerns and the
participation in public policy issues involving
technology. Engineers are directly affected be-
cause our designs are often covered by govern-
ment regulations. Engineers must also provide the
data and technical judgment needed to formu-
late public policy and to write realistic and
technically feasible regulations to implement the
policies. If all these things are true, then an argu-
ment can be made that engineering students
should be exposed to the interaction of engineer-
ing and public policy as part of their professional
education.
There are at least three different ways that an
introduction to government regulations and the
Franklin G. King received his B.S. degree from Penn State, his
masters in education from Howard University, his masters in chemical
engineering from Kansas State and his D.Sc. from Stevens Institute of
Technology. He has been teaching for the last 18 years at Howard
University and at Lafayette College. His current interests include the
development of personalized instruction methods, biochemical engi-
neering and pharmacokinetic modelling of anticancer drugs. (L)
Ramesh C. Chawla is an Associate Professor of chemical engineer-
ing at Howard University. He did his undergraduate work at the
Indian Institute of Technology, Kanpur and obtained his M.S. and
Ph.D. degrees from Wayne State University. Dr. Chawla is a member
of many professional societies including AIChE, APCA, WPCF, Sigma-
Xi and SAE. He has received several teaching awards including Out-
standing Instructor Award from the Howard University AIChE Student
Chapter and Teetor Award from SAE. His research interests include
kinetics, air and water pollution, and mass transfer. (R)
regulatory process can be integrated into the
chemical engineering curriculum. The first ap-
proach would be to recommend an elective course
on the topic. Many universities have introduced
"Society and Technology" elective courses which
usually focus on the impact of technology as a
social phenomenon, rather than on the technical
aspects of public policy issues. These courses are
often not recommended by engineering faculty be-
cause they do not meet the requirements as a
technical or a social science elective. Generally
these courses are not taught by engineering
faculty. Thus, engineering students are deprived
of a role model and get the feeling that engineers
are not concerned with the regulatory process and
public policy issues. Also, since the courses are
elective courses, many students may not select
them voluntarily.
A second approach is the use of interdisciplin-
ary project-oriented courses. The course would be
team-taught by both engineering and social
science faculty and the students could be both
technical and non-technical. As an example, a
course could be devoted to a project of cleaning
up a river where the team would have to consider
both the technical and the social aspects of differ-
ent solutions. These courses require a consider-
able amount of faculty time to organize and
develop. They also require cooperation of different
departments operating on different budgets.
A third technique is to use engineering case
studies to introduce public policy considerations
directly into the engineering curriculum. The use
of case studies can overcome the local problem of
developing and sustaining projects with public
policy issues. If case materials were available and
if only part of a course involved issues concerning
public policy, then many faculty might be willing
to get involved. On the other hand, few engineer-
ing faculty would feel comfortable with or be
willing to teach an entire course dealing with
public policy. However, if faculty would give
students even a small peek at public policy issues
they should be able to foster an awareness of the
relevance of the social science/humanities com-
ponent in education. We, as engineering faculty,
might even grow a bit as we come to understand
the impact of policy issues on the practice of engi-
neering education.
We would like to describe a project which was
aimed at developing a number of case studies with
public policy considerations and how we have
introduced public policy into our undergraduate
curriculum at Howard University. We would also
invite you to get involved so that case studies can be
developed that can be used in chemical engineer-
ing curricula.
THE EPEPP PROJECT
The Educating Prospective Engineers for
Public Policy (EPEPP) project is administered by
ASEE with the University of Washington as the
academic sponsor and Professor Barry Hyman [2]
as the project director. The project has the
financial support from the National Science
Foundation, Sun Oil Company, General Motors,
and nine engineering societies. AIChE has become
a sponsor for the 1984 program.
The goal of the project is to provide future
engineers with the tools necessary to contribute
professionally to the resolution of technically in-
tensive public policy issues. The project is in re-
sponse to the needs of society to have a greater
technical input into the making of public policy
in engineering and technology areas and to the
needs of engineers to have a broadened awareness
and understanding of the meaning of public policy.
The project is geared to produce case studies on
topics concerning public policy issues which have
technical components. The objective of the pro-
ject will be accomplished by the direct experience
of a small number of students and faculty and
by the integration of the case studies into the
typical engineering curriculum.
The project has three major integrated com-
ponents: 1) Washington Internship for Students
of Engineering (WISE); 2) the development of
case studies on engineering and public policy on
topics based on the WISE program; and 3) a series
of regional ASEE faculty workshops to promote
the utilization of case studies.
The WISE Program provides an opportunity
for about 15 third-year engineering students to
spend 10 weeks during the summer studying the
relationship between engineering and public
policy. An objective of the WISE program is for
the students to gain an understanding of the ope-
ration of the federal government so that they can
appreciate the non-technical aspects of technology
related public policy problems. The students re-
ceive a stipend of $1750 to cover expenses. They
also receive 3 credits from University of Washing-
ton. Their goal for the summer is to complete a
written report on their project to provide the basis
of a case study. The students are selected com-
petitively by the sponsoring engineering societies.
Fortunately, five chemical engineers have been
selected even though AIChE was not a sponsor.
Table 1 lists past WISE participants by back-
ground.
In pursuing their specific projects, the students
spend about 5 hours a week in a classroom setting
discussing the dimensions of engineering and
public policy [4]. In pursuing their specific projects,
the students interact regularly with ASEE head-
quarters, the Washington office of their sponsor-
ing societies, government agencies, congressional
staff, corporate lobbyists and consumer advocates.
They also attend seminars by leading experts on
current issues of interest to the technical com-
munity and to society.
The classroom work and field work is co-
ordinated by a faculty-member-in-residence. The
faculty-member-in-residence meets regularly with
individual WISE students to monitor the progress
of their activities. The faculty-member-in-resi-
dence is selected on the basis of first-hand ex-
perience with public issues, record of teaching,
and familiarity with the case method of instruc-
tion. Professors Charles Overby (Ohio University),
Paul Craig (UC-Davis), and F. Karl Willenbrock
TABLE 2
1980 WISE Case Studies
* Regulation of trihalomethanes in drinking water
* Subsurface disposal of hazardous waste
* Problems with implementing an effective automobile
fuel economy program
* Management of high-level radioactive wastes
* Building energy performance standards
TABLE 1
WISE Participants*
Disciplines represented: C.E.; Ch.E. (Howard, SUNY-Buffalo); M.E.; I.E.; E.E.
Number of participants (1980, 1981, 1982): 5, 4, 5; 3, 2, 3; 2 (SAE, NSPE), 2 (ANS), 1 (ANS)
*One or more students were sponsored by the societies shown in parentheses: Society of Automotive Engineers (SAE), National Society of Professional Engineers (NSPE), and American Nuclear Society (ANS).
(SMU) were the faculty-members-in-residence for
the first three years of the program.
The second phase of the EPEPP project is to
convert the student papers into draft cases and
to coordinate the preparation for classroom testing
of the drafts [1]. In autumn 1980, in response to a
national questionnaire, about 75 faculty expressed
interest in using a case on a specific area and to
participate in workshops on the use of the cases.
On the basis of the questionnaire, five topics were
selected by the project director to be converted
into draft cases. The topics selected are listed
in Table 2. As part of the process of converting
the papers into draft cases, additional introductory
material was written describing the regulatory
process. Exerpts from the Federal Register and
transcripts of expert testimony on proposed rules
were included where appropriate. An Instructor's
Guide was also written for each case. The Guide
contains suggestions on how each case might be
used.
The third and final component of the project is
the validation of the cases and their integration
into engineering curricula. The validation and
integration of the cases are to be accomplished
by classroom testing and a series of workshops at
regional ASEE conferences. The workshops are to encourage and publicize the use of EPEPP cases on many campuses. Ten of the faculty that expressed an interest in the project were invited to participate in a case workshop which was held in conjunction with the 1981 ASEE Annual Conference. The field test faculty were selected to get a mix of disciplines and geographical areas. The distribution by disciplines is shown in Table 3.

TABLE 3
Faculty Participation
Interns' disciplines: Mechanical, Civil, Chemical, Electrical, Agricultural, Industrial, Aeronautical, Manufacturing, Energy Systems, Nuclear, Engineering & Public Policy, Engineering Science

USING ENGINEERING CASE STUDIES
An engineering case study is a written account of an engineering activity as it actually took place [3]. A case gives the sequence of events of a real experience, often from the viewpoint of one or more participants. Unlike a technical paper
which focuses on the validity of a solution, a case
considers how the results were obtained. A case
often shows how the participants interacted to ac-
complish the engineering task. A case is often
written in segments to allow class study or dis-
cussion at critical decision points. Cases, like de-
sign projects, have no single correct answer and
depend on many subdivisions of engineering. They
often raise questions of human behavior and
ethics as well as technical questions, and thus
permit many possible solutions.
Engineering cases can be used in many differ-
ent ways. A list of some of the more common
methods is given in Table 4. Using cases as read-
TABLE 4
Classroom Uses of Case Material
Reading assignments
Background to specific problems
Practice in formulating problems
Subjects for class discussion
Medium for relating engineering history and illustrat-
ing the engineering method
Motivation for laboratory work
Background and source for research or design projects
ing material is the simplest method, but it is
probably the least effective. One of the other
methods should probably be selected since students
will be more involved in the learning process.
EXPERIENCE AT HOWARD UNIVERSITY
We used the draft case study "Regulation of
Trihalomethanes (THM's) in Drinking Water" as
part of our first chemical engineering course
"Introduction to Engineering Design." All of our
students were concurrently taking their second
semester of chemistry and calculus. The THM
case was selected from the available cases be-
cause it had a strong chemistry component and
the topic had the most interest to chemical engi-
neers.
The students were all given a copy of the case
study as a reading assignment and were given a
short technical presentation on the formation
of THM's in water. Every attempt was made to
insure unbiased dissemination of information. The
students were next organized, voluntarily, into one
of six groups consisting of the EPA, Congress, in-
dustry, consumer groups, research centers and
universities, and judges. Each group was asked
to prepare themselves for a role-playing discus-
sion of the topic. Each group then met individual-
ly to formulate their strategy. They were en-
couraged to explore the topic in as great technical
detail as possible. The students were expected to
present and defend the positions of their group,
rather than express their personal views.
The discussion period consisted of a two-hour
session which began by having each group's
spokesperson summarize the group's role in the
regulation process. Two faculty members and a
senior student joined the judges group to moder-
ate and guide the discussion and to bring out
various aspects of the problem. The judges also
acted to evaluate individual student and group
performance. A lively debate followed, with each
group questioning the others. The judges were
successful in keeping the discussion going and
getting most of the students involved in the dis-
cussion. The student judges were unanimous in
siding with the consumer group, their evaluation
being more on an emotional basis than a factual
one. The faculty judges felt that the discussions
should have been more technical, but everyone
agreed that they had a better understanding of the
regulatory process and why engineers must be in-
volved.
We plan to continue using the case studies in
our freshman class and intend to introduce them
in the senior level design course. We expect to get
a better technical response from the seniors, but
both groups will gain a sensitivity for the social
responsibility of engineers. O
REFERENCES
1. Federow, H. L., and B. Hyman. "Developing Case
Histories from the WISE Program," Proceedings of
the 1981 ASEE Annual Conference, pp. 1011-1015.
2. Hyman, Barry (EPEPP Project Director). Program
in Social Management of Technology, FS-15. Uni-
versity of Washington, Seattle, Washington 98195.
3. Kardos, G., and C. O. Smith. "On Writing Engineer-
ing Cases." Proceedings of the 1979 ASEE Engineer-
ing Case Studies Conference.
4. Overby, C. M. "Engineering and Public Policy: Re-
flections by the 1980 (WISE) Faculty-Member-in-
Residence." Proceedings of the 1981 ASEE Annual
Conference, pp. 1004-1009.
classroom
MODULAR INSTRUCTION
UNDER RESTRICTED CONDITIONS
TJIPTO UTOMO
Bandung Institute of Technology
Bandung, Indonesia
KEES RUIJTER
Twente University of Technology
Enschede, The Netherlands
DUE TO THE ECONOMIC recession and cuts in edu-
cational budgets, discussions on the efficiency
of the education system (especially with regard
to faculty time) have only recently been started
in the Western world. For developing countries,
however, this is not only a well known problem
but only one of many problems. Besides having a
Tjipto Utomo graduated from high school in 1941 but had to
suspend his academic activities during the second World War and
the following struggle for independence. He began university studies
in 1950 and graduated from the chemical engineering department of
the Institute of Technology Bandung in 1957. He obtained his MChE
degree in 1959 from the University of Louisville and is presently a
professor at the Institute of Technology Bandung. (L)
Kees T. A. Ruijter is a graduate in chemistry from the University
of Amsterdam. During the years of 1979-1983 he worked in the
Dutch-Indonesian program on upgrading chemical engineering edu-
cation in Indonesia. Before that he was with the Twente University
of Technology (Holland) working on the development of chemical
engineering education and he has now returned there. His main
areas of interest are lab course improvement, efficiency of the learn-
ing-teaching process, and curriculum evaluation and development. (R)
small faculty, underpaid and overoccupied with
additional activities, universities often face such
conditions as inhomogeneous classes (in capability
as well as in motivation), low staff-student ratios,
rapidly increasing enrollments, and the need for
more graduates.
In a cooperative project between the Bandung
Institute of Technology (ITB) and technical uni-
versities in The Netherlands, the educational sys-
tem was improved gradually within these restric-
tive conditions [1]. One of the main features, a
modified modular instructional system, is the
subject of this article.
THE TRANSPORT PHENOMENA COURSE
The transport phenomena (T.P.) course is a
fourth semester course. However, because the first
year is a common basic science program for all
ITB students, it is their first real confrontation
with engineering concepts. For this reason and
because less than 30% of the students passed
yearly, the T.P. course was chosen as our pilot
course.
In the first phase of reconstruction the course
and its context were thoroughly evaluated. Some
measures were investigated in an experimental
set-up and implemented step-wise. The main find-
ings were
1. Only a few students perform at an acceptable level.
Many students know the principles and laws but
cannot apply them in any situation.
We decided to restrict the number of topics to be
discussed and to require the students to perform
at a higher level of competence.
2. The individual differences in student performance are
enormous (despite the common first year program).
To minimize this problem we developed a modu-
lar instruction scheme enabling the students to
study at their own pace.
3. The students (80%) are not able to read English
texts.
Also, because the lecture as a source of informa-
tion is inadequate for an inhomogeneous class,
we decided that all information (text, examples,
exercises and solutions) should be made avail-
able on paper.
4. Students do not solve the problems systematically
and they have difficulty in describing physical phe-
nomena in mathematical terms.
For this we adopted a methodology for solving
science problems and modified it for the T.P.
problems [2, 3].
5. Individual guidance of students working on prob-
lems is quite effective but cannot be applied to
exercises at home because of lack of tutors.
An instructional scheme was developed wherein
the presentation of theory and applications by
the teacher was followed directly by individual
exercises in class and continued at home.
6. The usual norm-referenced grading procedure ap-
pears to be inadequate to evaluate effectiveness of
learning and instruction and is demotivating for the
students.
In grading the module exams we applied the
criterion-referenced performance assessment. As
criterion we chose 60%, the minimum level of
mastery necessary to take the following module.
MODULAR INSTRUCTION
Modularization is a classical solution for the
problem of an inhomogeneous student population.
Students in such a group differ in capability, in-
telligence, and motivation, resulting in different
time requirements for study. In a modular scheme
the allowed time is made to correspond with the
students' required time. The course is divided into
modules, enabling students to choose more or less
individual paths [4]. However, we could not apply
all principles of modular instruction because
ITB students are not able to study on their own
(they have never studied in instructional schemes
other than the lecture).
Faculty time is very restricted and assistants,
proctors, or administrative staff are not available.
Therefore we limited the number of examina-
tions and the opportunity for remedial instruction
and developed a teacher-paced modular system.
The contents of the T.P. course are very suitable
for modularization because of the similarity
among the three sections: transfer of momentum,
heat, and mass. After the first part the other two
can be presented by analogy.
The contents were divided into 6 modules; 3
modules covering the basics of momentum, heat,
and mass transfer; 2 covering extensions and ap-
plications of these; and the 6th module (C-2, ex-
tension of mass transfer) is postponed to the unit
operations courses. The first module is a small
module, to encourage the students to start their
We developed a teacher-paced
modular system which allows the students
to study on a full or a 60% pace
(a 2 gear-system).
study immediately. The students can concentrate
on the macro-balance approach and by the time
they are acquainted with this concept and the new
instructional system, the micro-balance approach
is introduced (see Table 1).
TABLE 1
Contents of 5 Modules
MODULE
A-1
SUBJECT
INTRODUCTION
Transport phenomena
Laws of conservation
MOMENTUM TRANSPORT
Laminar flow
Sample problems
Abstract and exercise
Dimensional analysis
Exercise
A-2 Flow in pipes
Turbulent flow
Pressure drop in tube flow
Flow in conduits with varying cross section
Sample problems
Abstract and exercise
Flow around objects
Exercise
B-1 MICROBALANCES
Introduction
Equation of Continuity
Equation of Motion
Application of the equation
Abstract and exercise
HEAT TRANSPORT
Introduction
Equation of Energy
Application of the equation of energy
Abstract and exercise
B-2 Unsteady state conduction
Heat transfer by convection
Radiation
Abstract and exercise
C-1 MASS TRANSPORT
Diffusion
Mass transfer
Coefficient of mass transfer
The film theory
Concentration distribution
Unsteady state diffusion
Mass transfer by convection
Simultaneous heat and mass transfer
Abstract and exercise
The examination procedure forces the students
to concentrate on the basic modules. Only students
who pass the first module (A-1) are allowed to
proceed to the second module (A-2). The others
must repeat the A-1 exam, which is scheduled at
the same time as the regular module A-2 exam.
This procedure requires the least possible time
from the lecturer. For the same reason, no reme-
dial instruction for repeating students is organ-
ized. The printed text and work-out exercises
should enable them to prepare for the repetition.
As a consequence, as many students as possible
who are starting to study heat transfer have
passed module A-1 and have a proper compre-
hension of the essentials of momentum transfer.
The procedure for the second section is quite simi-
lar.
A calculation showed that the examination for
modular instruction should consume no more in-
structor time than examination by the former
method, if 60% of the group passes on the first
attempt.
Since 1981 all homework problems have been
directly related to classroom exercise. Feedback
on the exercises was provided in class and in the
lecture notes (answers), while many worked-out
problems were shown in show-windows. All prob-
lems were worked out in four phases: analysis,
plan, elaboration, evaluation. Only a precise result
in one phase allows the problem solver to proceed
to the next phase. The results at each phase enable
the instructor to provide adequate feedback and
allows the students to look for adequate in-
formation about their solving process. This prob-
lem solving scheme was also followed for examina-
tions.
RESULTS
The module test results of 1981 are shown in
Table 2. The student performances were accept-
able except for modules B-2 and C. Here a final
module effect, "we have got the ticket," seems
likely.
After the tests only 66% had to repeat two
modules or less. Most of them were able to pass
the respective parts of the final exam and thus the
whole course. This and the overall effect of the re-
construction is shown in Table 3.
By means of questionnaires, interviews, and
analysis of examination results, other informa-
tion was collected on
TIME SPENT Students did not feel the modular
TABLE 2
1981 Module Test Results

             Students with score above 60%             % Students
MODULE    First attempt  Second attempt  Final exam  passed (N=135)
A-1          95 (70%)          34             1            96
A-2          67 (70%)*         **            35            67
B-1          97 (72%)          26             1            92
B-2          55 (57%)          **             1            41
C            27 (20%)          **            77            77

*Percentage of number of participants in the examination. Here 67 passed out of 95 students allowed to participate.
**For these modules the final exam is the second attempt.
system forced them to spend more time on trans-
port phenomena.
PROBLEM SOLVING The plan phase was very
difficult for the students. The lecturer considers the systematic method of problem solving not only a means for learning but also a sound tool for instruction and explanation to the students.
ATTENDANCE AT LECTURES/INSTRUCTIONS
The number of students attending the lectures
during the semester was higher than before. The
decrease in attendance during modules B-2 and C
was partly caused by the interference of labora-
tory activities. This explains the lower scores on
these modules.
ACCEPTANCE Students and lecturer were very
positive about the reconstructed course and the
modular system.
CONCLUSIONS
The examination results of the reconstructed
courses and the general acceptance show that the
modular T.P. course is a substantially improved
course. The better performance of the students is
not a result of an increase in their efforts or the
TABLE 3
Distribution of Final Grades for the TP Course During and After 1980

GRADE        1980       1981       1982
A              0%        3.3%      21.0%
B            2.5%       30.1%      29.5%
C           18.5%       46.3%      27.6%
(A+B+C)    (21.0%)     (79.7%)    (78.1%)
D           25.2%       12.2%       5.7%
E           30.3%         0%        4.8%
F           23.5%        8.1%      11.4%

A/B/C = Passed; D = Passed conditionally; E/F = Failed.
activities of the teacher, but by a more efficient
use of the students' and lecturer's time; i.e. the
internal efficiency of the instructional process has
been improved.
The main feature of the new course is the
modular system. We developed a teacher-paced
modular system which allows the students to study
on a full or a 60% pace (a 2 gear-system). Re-
medial teaching was not applied. This system re-
sulted in a constant study load in transport phe-
nomena during the semester, and few students fell behind in an early phase as they had in the
past. We may conclude that it is worthwhile to
apply a modular scheme, even under very re-
stricted conditions of faculty time. O
ACKNOWLEDGMENT
This educational upgrading program was
sponsored by The Netherlands Ministry of De-
velopment Cooperation.
LITERATURE
1. Ruijter, K. and Tjipto Utomo, "The Improvement of
Higher Education in Indonesia: A Project Approach."
Higher Education, 12 (1983).
2. Mettes, C. T. C. W., et al., "Teaching and Learning
Problem Solving in Science," Part I; Journal of
Chemical Education, 57/12 (1980).
3. Idem, Part II: Journal of Chemical Education, 58/1
(1981).
4. Russel, J. D., "Modular Instruction," Burgess Co.,
1974.
Book reviews
THE HISTORY OF CHEMICAL ENGINEERING
AT CARNEGIE-MELLON UNIVERSITY
By Robert R. Rothfus
Carnegie-Mellon University,
Pittsburgh, PA 15213, 302 pages
Reviewed by
Robert B. Beckmann
University of Maryland
The author, Robert R. Rothfus has been asso-
ciated with the chemical engineering program at
Carnegie-Mellon, as a graduate student and fac-
ulty member, for over forty years, a period that covers over half of the chemical engineering program's total existence and almost the entire period of its existence as a separate department. The book
was obviously a labor of love to Professor Rothfus
as evidenced by its attention to statistical detail
and anecdotes as well as historical development.
The first part of the book outlines the histori-
cal development of the school beginning with An-
drew Carnegie's original offer to establish an
institution for technical education on 15 Novem-
ber 1900 and traces the development from the
"Carnegie Technical Schools" to the transition
(1912) to Carnegie Institute of Technology and
the final transition (1967) to Carnegie Mellon
University. Following the detailed development
to University status the book turns to the histori-
cal growth and development of the original School of Applied Science ... one of the four original Schools founded by the Carnegie gift ... to the current College of Engineering. The first diplomas
current College of Engineering. The first diplomas
in Chemical Engineering Practice were awarded
in 1908 along with the initial "Diplomas" in the
Civil, Electrical, Mechanical and Metallurgical
Practice fields. Included are statistical and organi-
zational details relating to the various depart-
ments, research laboratories, interdisciplinary
programs, the academic calendar, tuition and en-
rollments.
The development and growth of the Chemical
Engineering Department is chronicled in Chap-
ter 4, beginning with the original Chemical Prac-
tice program in 1905 and the transition to Chemi-
cal Engineering in 1910. The chapter divides the
history of the Department into quantum periods
depending upon who was the chief administrative
officer of the department during that period. The
problems, issues and accomplishments of each
period are well chronicled. The development is
carried through 1980.
Part Two of the book, which comprises over 40 percent of the total pages, is devoted to an ex-
haustive presentation of departmental statistics
from its inception through 1980. The various
chapters include such topics as enrollment and
degrees granted, the faculty over the years, the
changing undergraduate curriculum and gradu-
ate instruction, research activities and financial
support and anecdotal sections devoted to depart-
mental "personalities" and a recalling of the un-
usual, comical and tragicomical events over the
years. The Appendices, about a third of the book,
are devoted to a complete delineation of faculty,
staff and students (graduate and undergraduate)
by name and years of service, or graduation, who
have been a part of the Carnegie Story in chemi-
cal engineering.
Continued on page 48.
class and home problems
The object of this column is to enhance our readers' collection of interesting and novel problems in chemical engineering. Problems of this type should be submitted to Professor H. Scott Fogler, ChE department, University of Michigan, Ann Arbor, MI 48109.
SETTING THE PRESSURE
AT WHICH TO CONDUCT A DISTILLATION
ALLEN J. BARDUHN
Syracuse University
Syracuse, NY 13210
This memorandum was issued to the students
in chemical engineering stage operations, most
of whom are sophomores and first semester
juniors but none of whom have yet had any heat
transfer courses. Thus the elementary explana-
tion of heat exchangers may be unnecessary for
more experienced students.
The subject of the memorandum is usually not
covered at all in stage operation texts, or at most
only lightly covered. In most of the problems on
this subject for distillation the pressure is given
but there is no statement as to how it is de-
termined.
When the pressure is given, most students
(and a few professors) will have no idea whether
it is reasonable or even possible.
THE MINIMUM PRESSURE at which to conduct a
distillation is set by the condenser. The
temperature of condensation of the top product
must be high enough to condense it with the cool-
ing water available. We remove heat in the con-
denser with cooling water which rises in tempera-
ture as it removes heat, since its sensible heat rises
with temperature. Also the condensing tempera-
tures of the overhead product increase with in-
creasing pressure.
The maximum pressure for the separation is
set by the reboiler. The boiling temperature of
the bottom product must be low enough to be
boiled by condensing the steam available. We must
be able to add heat to the reboiler, by having the
heat source at a higher temperature than the
bubble point of the bottom product.
HEAT EXCHANGERS
Both the condenser and the reboiler are generic
heat exchangers and all heat exchangers require
that there be a temperature difference between
the source of heat and the sink, i.e. the source
must be hotter than the sink. The local tempera-
ture difference (At) between the two streams ex-
changing heat is often not a constant but it must
be everywhere greater than zero. If the At is zero
anywhere in the exchanger, the area required to
transfer the heat becomes uneconomically large.
It is useful to plot the temperature history in any
heat exchanger to see what the At (driving force)
is and how it varies throughout the device. For
example, the overhead vapors (from a distillation
column when heat is removed) will first begin to
condense at their dew point. Further cooling will
condense more until it is completely condensed at
the bubble point.
THE CONDENSER
If the dew point of the overhead product at the column pressure is 140°F and the bubble point is 120°F, and if the inlet cooling water is 70°F and a 30°F rise may be taken, the temperature pattern in the condenser when the overhead is condensed completely but not subcooled is represented in Fig. 1.
The Δt driving heat transfer is thus between 40°F and 50°F. This is quite adequate, but Δt's as low as 10°F or 20°F are not uncommon.
Now if the pressure of operation were lowered
then the dew and bubble point temperatures would
be less than those shown and they might ap-
proach the cooling water temperature, thus de-
creasing the At which drives heat transfer and
increasing the area required according to the de-
sign equation
A = ft2 Area req'd. = U At
U Atlmean
where Q = the rate at which heat is to be
transferred (Btu/hr.)
U = the overall heat transfer coefficient
Btu/[(hr) (ft2) (oF)]
Atmean = the mean driving force tempera-
ture difference in F.
1-
All during my undergraduate days (1936-1940) at the University
of Washington in Seattle, the country was in the depths of the great
depression and jobs were scarce. After obtaining my M.S. I was
lucky to get one job offer with an oil refinery in California which I
promptly accepted and started to work 24 November, 1941 just two
weeks before Pearl Harbor.
The refinery work was very good chemical engineering experience
for me. I worked there for over 7 years and remember wondering
when I first arrived, why all de-butanizers operated at 50 to 60 psig
and all depropanizers operated at 180 psig. When I later figured it
out it was simple but I seldom come across any professors who
have thought about it enough to have a well organized answer to my
question of "How is the Pressure Set for a Distillation Column?" I
have yet to see a thorough or even a sketchy treatment of this subject
in any text on distillation. So I thought it would be appropriate to
write this article for a class in Stage Operations and perhaps to
publish it in CEE. The elementary discussion of heat exchangers was
necessary because the students had not yet taken a course in heat
transfer. (Informal biographical sketch submitted by author.)
FIGURE 1. Temperature pattern in the condenser: the overhead condenses from its dew point (140°F) to its bubble point (120°F) while the cooling water rises from 70°F to 100°F; Δt = 40°F at one end and 50°F at the other.
The mean Δt is usually somewhere between the two terminal temperature differences (unless one of them is 0) and you will learn in Transport II how to calculate these mean Δt's.
The condenser thus sets the minimum pressure
at which the column must operate. To find this
minimum pressure, first find the bubble point
pressure of the D product at a temperature of, say, 10°F or 20°F higher than the inlet cooling water
temperature. For a binary distillation this can
be taken from the P-x-y diagram for that tempera-
ture, and for multicomponent mixtures one must
use the relations for vapor-liquid equilibrium in
multicomponent mixtures.
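As a concrete illustration of this procedure, the sketch below (Python, not part of the memorandum) estimates the bubble point pressure of a benzene-toluene distillate at 100°F, i.e. 30°F above 70°F cooling water, using Antoine vapor pressures and Raoult's law. The composition and the Antoine constants are standard textbook values, inserted here purely as assumptions.

def p_sat_mmHg(A, B, C, T_c):
    # Antoine equation: log10(P[mmHg]) = A - B/(T[C] + C).
    return 10.0 ** (A - B / (T_c + C))

# Textbook Antoine constants (assumed here for illustration).
benzene = (6.90565, 1211.033, 220.790)
toluene = (6.95464, 1344.800, 219.480)

x_benzene = 0.95            # distillate composition (assumed)
T = (100.0 - 32.0) / 1.8    # 100 F condensing temperature, in C

# Raoult's law bubble pressure of the liquid distillate:
P_bubble = (x_benzene * p_sat_mmHg(*benzene, T)
            + (1.0 - x_benzene) * p_sat_mmHg(*toluene, T))
print(f"bubble point pressure at 100 F = {P_bubble:.0f} mmHg "
      f"({P_bubble / 760:.2f} atm)")
# The column must run at or above this pressure for the available
# cooling water to condense the overhead product.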
THE REBOILER
As the column pressure is increased, the At in
the condenser will just get bigger and this is satis-
factory; but the pressure also affects the opera-
tion of the reboiler in the opposite fashion. The
material being boiled in the reboiler has the com-
position of the bottom product and the boiling
temperature is the bubble point of W, the bottom
product. It is fixed once its composition and the
pressure are known. The heat source is usually
condensing steam and the steam condensing
temperature must thus be at least 10°F or 20°F
above the bubble point of the bottoms product.
Since the latter temperature goes up with in-
creasing pressure, this sets a maximum pressure at
which the distillation may be conducted. The
temperature pattern in the reboiler is simpler
than that in the condenser since neither the steam
nor the bottom product changes temperature as
heat is transferred. If, for example, the bubble
point of W is 300°F at the column pressure, and we have steam available at 300 psig, the condensing temperature for this steam (steam tables) is about 422°F. This temperature pattern is illustrated in Fig. 2.
FIGURE 2. Temperature pattern in the reboiler: steam condenses at 422°F while the bottoms product boils at 300°F; Δt = 122°F throughout.
The bubble point of the bottom product (which
increases as P goes up) and the available steam
pressure (and its condensing temperature) thus
set a maximum pressure for the distillation. To
find this pressure, estimate the bubble point pres-
sure of the W product at a temperature of, say, 20°F (more or less) below the condensing steam
temperature.
We now have both a maximum and minimum
pressure at which the distillation can be carried
out. Some pressure between these two would be
used in the final design and its optimum value
will be determined by an economic balance. The
items to consider in such an economic balance
are:
1. The effect of P on the temperatures of condensation of the overhead and on the boiling temperature of the bottom product, which affect the mean Δt in the condenser and reboiler and thus their areas. Each of these exchangers may be made smaller at the expense of the other one by adjusting the pressure up or down between the limits imposed. The higher the pressure, the smaller the condenser and the larger the reboiler, and vice versa. Pressure affects these two exchangers in opposite directions because we are removing heat from one and adding heat to the other.
2. The effect of the column design pressure on its wall
thickness and thus cost.
3. The effect of pressure on the vapor-liquid equilibrium.
This effect is likely small for nearly ideal liquid solutions, since for these liquid mixtures the relative volatility is nearly independent of pressure. For hydrocarbon systems, for example, changing the absolute pressure by a factor of 2 changes the relative volatility by only about 5%. For non-ideal systems, however, pressure may have a more important effect, especially when there are azeotropes, since the composition of the azeotrope may change with pressure. It may even be possible to eliminate an azeotrope by suitably adjusting the pressure.
A COMMON PROBLEM
It is entirely possible (especially when there
is wide variation in the boiling points of the
bottom and top product) that the minimum pres-
sure set by the condenser is higher than the maxi-
mum pressure set by the reboiler. In this case
there are three possible solutions. One is to use
refrigeration to condense the top product, but
this may be expensive. A second solution is to use
a fired reboiler for the bottoms, i.e. send the liquid
from the bottom tray to a furnace which may heat
to temperatures much higher than condensing
steam. The liquid is thus partially vaporized and
sent to a flash drum to separate the vapor formed
from the remaining liquid. The vapor off this
drum is returned to the column below the bottom
plate, and the liquid becomes the W product.
A third possible solution to the problem is to
accept as a vapor that part of the overhead
product which can not be condensed with the
cooling medium, i.e. design a partial condenser.
The part that is condensed is partly returned as reflux and the rest is liquid D product. The D product thus consists of two streams: one a vapor and one a liquid. This is usually the case for the first crude oil fractionation. The flow sheet is illustrated in Fig. 3.
FIGURE 3. Flow sheet for a column with a partial condenser.
The vapor product contains methane and ethane, which can't be condensed easily. The vapor product is called "wet gas" not because it contains water (which it does), but because it contains some condensible hydrocarbons. It is sent to compressors and thence to an absorption column where the ethane and heavier are removed as ethane and L.P.G. (Liquefied Petroleum Gases), which consist mostly of C3's and C4's. The methane, containing very little ethane and heavier, is called dry gas and is about the same as natural gas. The ethane is usually cracked at high temperature to yield ethylene, which is the source of many of our petrochemicals.
The liquid water product comes from open
steam used to assist in the first crude oil fractiona-
tion instead of having a reboiler.
GENERAL FRACTIONATION NOTES
(a) The optimum reflux ratio is said by
Treybal to fall in the range of 1.2 to 1.5 times
the minimum reflux ratio. This rule was formu-
lated when heat was cheap, say $0.50 to $1.00 per
million Btu. With currently expensive heat, say $5.00 to $8.00 per million Btu, the optimum reflux ratio comes much nearer to the minimum and may lie in the range (1.05 to 1.2)(Rmin).
(b) In desert areas where water is scarce and expensive, air cooling is often used to condense the overhead vapors, but in this case the overall heat transfer coefficients are much lower than with water cooling and the optimum approach temperature differences for condensing may be much larger than the 10 to 20°F quoted above. Also the design air inlet temperatures may have to be 90 to 110°F or even 120°F in order to get a design which will work most of the time.
DESIGN COURSE
Continued from page 29.
projects with gross profit, tax and depreciation
schedules are described. Finally, cash flow dia-
grams are introduced for comparing investments
on the basis of simple rate of return, present
worth of cash flows, or discounted cash flows.
Given profitability measures, questions of op-
timality arise. The optimization problem is de-
fined in general terms to begin coverage of this
comprehensive subject. The objective is to intro-
duce optimization methods, suggesting the need
for further study. Single variable, unconstrained
methods known as sequential search methods
(e.g., the Golden-Section method) are covered
using the excellent descriptions in Chap. 10 of
Digital Computing and Numerical Methods [2]
with two example problems from Chap. 10 of
Peters and Timmerhaus (optimal insulation
thickness and optimal batch time). Then, multi-
variable, unconstrained methods are covered in-
cluding lattice search, repeated uni-directional
search, and optimal steepest descent [2].
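The golden-section method mentioned above is easy to make concrete. As a purely illustrative sketch (written in C# rather than the FORTRAN the course actually used; the cost function and bracket below are invented), a golden-section search for the minimum of a single-variable function might look like:

    using System;

    class GoldenSection
    {
        // Golden-section search: shrinks the bracket [a, b] by the golden
        // ratio each iteration, keeping the minimum of f inside the bracket.
        static double Minimize(Func<double, double> f, double a, double b, double tol = 1e-6)
        {
            double invPhi = (Math.Sqrt(5.0) - 1.0) / 2.0;   // ~0.618
            double x1 = b - invPhi * (b - a);
            double x2 = a + invPhi * (b - a);
            double f1 = f(x1), f2 = f(x2);

            while (b - a > tol)
            {
                if (f1 < f2)
                {   // minimum lies in [a, x2]; reuse x1 as the new right interior point
                    b = x2; x2 = x1; f2 = f1;
                    x1 = b - invPhi * (b - a); f1 = f(x1);
                }
                else
                {   // minimum lies in [x1, b]; reuse x2 as the new left interior point
                    a = x1; x1 = x2; f1 = f2;
                    x2 = a + invPhi * (b - a); f2 = f(x2);
                }
            }
            return (a + b) / 2.0;
        }

        static void Main()
        {
            // Example: minimize a simple convex cost curve with optimum at x = 3.
            double xOpt = Minimize(x => (x - 3.0) * (x - 3.0) + 1.0, 0.0, 10.0);
            Console.WriteLine($"optimum near x = {xOpt:F4}");
        }
    }

Each iteration shrinks the bracket by the factor 0.618..., so roughly 25 iterations reduce a unit-length bracket below 10^-5.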
Next, the students optimize the design of a
distillation tower with a condenser, reboiler, and
reflux pump. Throughout the course they have
solved problems involving these components, so
for this problem they are given the FORTRAN
function DISTIL which computes the rate of re-
turn on investment as a function of the product
purity, the reflux ratio, and the fractional re-
covery of the most volatile species in the distillate.
The use of DISTIL to (1) carry out material
balances, (2) count trays, (3) calculate the tower
diameter, heat exchanger areas, and pump horse-
power, and (4) calculate costs, cash flows and
discounted cash flow rate of return is reviewed.
Then, the students write a program to calculate
the maximum rate of return on investment. Inci-
dentally, DISTIL was written by Prof. D. Brutvan
[1] and has been modified slightly for use in our
course. Prof. Brutvan prepared an excellent
problem-statement, typical of a large company,
with design specifications, sources of physical
property data, cost data, and explanations of the
algorithm. This has also been modified for use in
our course.
After the introduction to process synthesis,
the course concentrates on analysis with the con-
figuration of the process flowsheet given. The de-
sign variables are adjusted to locate an optimal
design for a given configuration. However, in pro-
cess synthesis, the emphasis is placed upon finding
the best configuration. This approach is well-suited
to teach methods of increasing the thermodynamic
efficiency by heat integration. The monograph,
Availability (Exergy) Analysis [13] and the paper
"Heat Recovery Networks" [11] provide excellent
introductions to the analysis of thermodynamic
efficiency and the pinch method for minimizing
utilities. Synthesis of separation processes is also
covered, but briefly in just two hours. The key con-
siderations are introduced, time being unavailable
to solve a meaningful problem.
The course concludes with a final exam and
the course grade is based upon two mid-semester
exams and the homework. Approximately 15
problem sets are assigned, with two problems
using FLOWTRAN and one problem in which the
rate of return for a distillation tower (using the DISTIL function) is maximized.
SPRING COURSE: PLANT DESIGN PROJECT
Penn's strength in process design can be at-
tributed in part to the large concentration of
chemical industry along the Delaware River and
to our close interactions with several industrial
colleagues. In this section, organization of the
project course to benefit from this interaction is
examined, before considering the impact of pro-
cess simulators.
During the last two weeks of the fall lecture
course, the students select design projects suggest-
ed by our industrial colleagues and the chemical
engineering faculty. The projects must be timely,
of practical interest to the CPI, and be workable
in 15 weeks. Kinetic and thermophysical property
data should be available. Abstracts of possible de-
sign projects are prepared and the students select
a project or propose one of special interest to
themselves. No effort is made to restrict projects
to those well-suited for simulation.
In the spring, 1982, we had sixteen projects,
one for each group of three students, and in 1983
we had nineteen projects. Each group is advised
by one of seven members of our faculty, usually
supplemented by a visiting faculty member and a
research student in the area of computer-aided
design.
During the spring, as the designs proceed,
each group meets for one hour weekly (on Tues-
day afternoon) with its faculty advisor and one
of its four industrial "consultants." For the past
three years we have had seven outstanding con-
sultants. Dr. Arnold Kivnick of Pennwalt Corp.
has completed his twenty-fifth year as a con-
sultant to our students. Arnold has shared his
years of experience in helping our students and
young faculty develop their design skills. Other
members of our consultant team contribute simi-
larly, making it possible to expose our students to
a broad range of design projects.
The course concludes with a one-day technical meeting of oral presentations accompanied by written design reports.
FIGURE 5. Abstract of a typical design project

High purity isobutene (suggested by Len Fabiano, ARCO)

Isobutene will be recovered from a mixed C4 stream containing n-butane, i-butane, butene-1, butene-2, i-butene, and butadiene. A four-step sequence will be considered: (1) reaction with CH3OH to MTBE (methyl-tertiary-butyl-ether), (2) recovery of MTBE from the reaction products, (3) cracking of MTBE to methanol, isobutene, and by-products, and (4) recovery of isobutene, by-products and methanol.

This design will concentrate on (3) and (4). Kinetic data in the literature will be supplemented by ARCO.

Fattore, Massi Mauri, Oriani, Paret, "Crack MTBE for Isobutylene," Hydrocarbon Processing, 101, Aug., 1981.
TABLE 2. Possible Design Projects (1982-83), with suggesters

1. Cyclohexane oxidation to cyclohexanol (W. D. Seider)
2. Polymerizer solvent recovery (D. F. Kelley, DuPont)
3. High purity isobutene (L. A. Fabiano, ARCO)
4. Catalyst recovery plant (L. A. Fabiano, ARCO)
5. Triolefin process (L. A. Fabiano, ARCO)
6. Ethylene dimerization (L. A. Fabiano, ARCO)
7. Ethanol to gasoline (W. B. Retallick, Cons.)
8. Methane from coal with K2CO3 catalyst (W. D. Seider)
9. Liquid CO2 for extraction of pyrethrin from chrysanthemums (W. D. Seider)
10. Syngas to methanol (S. W. Churchill)
11. Separation of benzene, toluene, xylene (W. D. Seider)
12. Heat pump for ethane-ethylene split (W. D. Seider)
13. Optimization of solar heated ... (N. Lior)
14. Maleic anhydride from butane (W. D. Seider)
15. Fluidized-bed, coal combustion, electric power plant (N. Arai)
16. Supercritical fluid extraction (A. L. Myers)
17. Dimethylamine (P. J. O'Flynn)
18. Paramethylstyrene with zeolite catalyst (W. D. Seider)
19. Hydrogen production by radiation of CO2 and water-gas shift (S. W. Churchill)
From the oral and written
reports, the faculty selects the outstanding design
project for the Melvin C. Molstad Award. Each
member of the winning group receives a $100 prize
thanks to the generous endowment of Dr. Ken
Chan, Class of 1962. Notably, the last five reports
have also won the Zeisberg Award in competition
with other schools in our area.
A typical abstract of a design project is shown
in Fig. 5 and the titles for 1982-83 are in Table 2.
The problems are timely and their diversity shows
the broad interests of our faculty and industrial
consultants.
IMPACT OF SIMULATORS
Since 1974 we have had access to the FLOW-
TRAN program on United Computing Systems
(UCS), but its usage has been limited by the high
cost of UCS, a commercial computing system.
Initially modest funds were budgeted for FLOW-
TRAN, but with increasing class sizes and tight
budgets it became necessary to charge the students for use of FLOWTRAN. Consequently, FLOWTRAN was used by just a few groups, as a last resort. The maximum charge per
group was approximately $100.
In 1982, ChemShare Corp. provided DESIGN/
2000 as a load module for installation on our
UNIVAC/1100 at no cost to the University of
Pennsylvania. Subsequently, eight of the sixteen
design groups chose to use DESIGN/2000, averag-
ing $800 of computer charges per group.
DESIGN/2000 has a well-developed thermo-
physical property system, CHEMTRAN, with a
data bank containing constants for 900 chemicals
(as compared with 180 in the student version of
FLOWTRAN). Programs are available to calcu-
late constants such as the normal boiling point
temperature and critical properties, given the
molecular structure (the atom-bond interconnec-
tions). For nonideal solutions, programs are avail-
able to compute the interaction coefficients for the
UNIQUAC equation and, when equilibrium data
are unavailable, to estimate activity coefficients
using the UNIFAC group interaction coefficients.
Furthermore, CHEMTRAN provides the Soave-
Redlich-Kwong and Peng-Robinson equations for
calculations in the vapor-liquid critical region. In
addition to these advantages (compared with
FLOWTRAN), alternative programs are provided
for short-cut and rigorous analysis of multistaged
towers.
Similarly, the PROCESS system of Simulation
Sciences, Inc., provides features that are not in-
cluded in the student version of FLOWTRAN.
Some are equivalent to DESIGN/2000, some are
not in DESIGN/2000, while some of the DESIGN/
2000 features are not included. PROCESS has not
yet been installed on our computer, so that we are
less familiar with this system.
Several limitations remain and these are gradu-
ally being eliminated. However, currently FLOW-
TRAN, DESIGN/2000 and PROCESS do not
model processes with inorganic compounds and
ionic species. There are no programs to calculate
compositions in phase and chemical equilibrium
or to simulate CSTRs, PFTRs, and solids-handling
equipment. These features have been included in
the ASPEN system, but ASPEN is not yet avail-
able for routine student usage. As expected,
ChemShare and Simulation Sciences are adding
many of the same features.
The bottom line with respect to our design se-
quence is that industrial process simulators permit
more routine analysis of simple processes and
give more accurate analyses for complex processes; for example, extractive distillation towers.
These simulators enable more complete parametric
analysis and examination of process alternatives.
Normally, they are applicable for just parts of the
analysis; rarely for analysis of the entire flow-
sheet. They provide our students with experience
in the use of modern CAD tools.
In our research, the development of new CAD
methodologies is emphasized. In the senior design
course, some of these methodologies are intro-
duced using well-tested industrial simulators
which are gradually upgraded. Emphasis is
placed on completing the design. Student time is
not wasted working out difficulties with a proto-
type program.
When possible, process synthesis method-
ologies are emphasized. As yet, however, few pro-
jects have been found which are sufficiently open-
ended to permit analysis of many alternate con-
figurations in the fifteen week term. Good sugges-
tions are welcomed.
REFERENCES
1. Brutvan, D. R., "Economic Optimum Conditions for a
Staged Separation Process," Computers in Engineer-
ing Design Education, Vol. II, Univ. of Michigan
Project, Ann Arbor, 1966.
2. Carnahan, B. and J. O. Wilkes, Digital Computing and Numerical Methods, Wiley, 1973.
3. Guthrie, K. M., "Capital Cost Estimating," Chem.
Eng., March 24, 1969.
4. Kern, D. Q., Process Heat Transfer, McGraw-Hill,
1950.
5. Kreith, F., Principles of Heat Transfer, Third Ed.,
Int'l. Text Co., 1973.
6. Myers, A. L., and W. D. Seider, Introduction to
Chemical Engineering and Computer Calculations,
Prentice-Hall, 1976.
7. Peters, M. S., and K. D. Timmerhaus, Plant Design
and Economics for Chemical Engineers, Third Ed.,
McGraw-Hill, 1980.
8. Rudd, D. F., and C. C. Watson, Strategy of Process
Engineering, Wiley, 1968.
9. Seader, J. D., W. D. Seider, and A. C. Pauls, FLOW-
TRAN Simulation-An Introduction, Second Ed.,
CACHE, Ulrich's Bookstore, Ann Arbor, Michigan,
1977.
10. Soni, Y., M. K. Sood, and G. V. Reklaitis, PCOST
Costing Program, School of Chem. Eng., Purdue Uni-
versity, W. Lafayette, Indiana, May, 1979.
11. Linnhoff, B., and J. A. Turner, "Heat Recovery Net-
works," Chem. Eng., Nov. 2, 1981.
12. Woods, D., "Cost Data for the Process Industries,"
McMaster Univ. Bookstore, Hamilton, Ontario,
Canada (1974).
13. Sussman, M. V., Availability (Exergy) Analysis,
Milliken House, 1980.
NEW ADSORPTION METHODS
Continued from page 25.
to achieve, but has been extensively studied.
Counter-current schemes have included flow in
open columns [2, 10, 17]; the hypersorption pro-
cess where solids flow was controlled by the open-
ing and closing of holes in sieve trays [11], moving
belt schemes [12] and the recent magnetically
stabilized moving bed system developed by Exxon.
The idealized analysis of all these systems will
be similar.
A counter-current system is shown in Fig. 6.
The solids flow down the column while the fluid
flows up. The less strongly adsorbed solute A
moves up in zone 1 while strongly adsorbed solute
B moves down in this zone. Thus zone 1 purifies
solute A. Zone 2 removes solute A from B and
thus purifies solute B. In zone 3 solute B is de-
sorbed with desorbent D. Zone 4 serves to remove
solute A from the desorbent so that desorbent
can be recycled. The desorbent could be water or a
solvent.
The solute movement theory can be applied to this system. The solute wave velocities calculated from Eq. (5) or (7) were with respect to a stationary solid. The appropriate fluid velocity is then the interstitial fluid velocity relative to the solid. Thus

$$v = \frac{V_{super} + V_{solid}}{\alpha} \qquad (18)$$

where Vsuper is the superficial fluid velocity and Vsolid is the superficial solid velocity. Now usolute, calculated from Eq. (5) or (7), is the solute velocity with respect to the solid.

FIGURE 6. Counter-current separator.

The solute velocity which an observer will see is obtained by subtracting the solids velocity
$$u_{solute,cc} = u_{solute} - V_{solid} \qquad (19)$$

usolute,cc is positive when the solute flow is up the column and negative when it flows down.
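As a quick numerical illustration (values invented for the example): if usolute = 0.5 m/min relative to the solid and the solid moves down at Vsolid = 0.2 m/min, then

$$u_{solute,cc} = 0.5 - 0.2 = 0.3\ \mathrm{m/min}$$

and the solute still moves up the column; a more strongly adsorbed solute with usolute = 0.1 m/min would give -0.1 m/min and move down.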
In the counter-current column the solids
velocity is the same in all zones but the superficial
fluid velocity varies from zone to zone since feed
is added and products are withdrawn. If we set Vsuper,3 as the velocity in zone 3, then for relatively dilute systems

$$V_{super,2} = V_{super,3} - P_2/A_c$$
$$V_{super,1} = V_{super,2} + F/A_c \qquad (20)$$
$$V_{super,4} = V_{super,1} - P_1/A_c$$
Since Vsuper changes, usolute,cc will change from zone to zone. In addition, if the desorbent affects the adsorption of solute then the equilibrium constant A(T) will vary from zone to zone and usolute,cc will change. This latter effect is not necessary to make the counter-current column work.

FIGURE 7. Solute movement in continuous counter-current column.
To achieve the separation indicated in Fig. 6 we
want solute A to move upward in zones 1 and 2
and downward in zone 4. Thus
$$u_{A,cc,1} > u_{A,cc,2} > 0 > u_{A,cc,4} \qquad (21)$$

Solute B should move downwards in zones 1 and 2 and upwards in zone 3. Thus

$$u_{B,cc,3} > 0 > u_{B,cc,1} > u_{B,cc,2} \qquad (22)$$
Eqs. (21) and (22) are an important result since
they control the operation of the continuous
counter-current column. There is a range of values
for P1, P2 and D for a given feed flow rate which
will satisfy inequalities (21) and (22). In actual
practice it is desirable to choose the flow rates so
that all the inequalities are as large as possible.
The appropriate solute waves are shown in
Fig. 7. In the ideal case at steady state there will
be no solute A in zones 4, 2 or 3 and no solute B
in zones 1, 3 and 4. Because of dispersion and
finite mass transfer rates solute A will appear in
zones 4 and 2, and B will be in zones 1 and 3. The
size of the zones required depends on these dis-
persion and mass transfer rate effects. In ad-
dition, any axial solid or fluid mixing caused by
non-perfect flow will require a larger column. Ex-
treme mixing or channeling can destroy the de-
sired separation.
COUNTER-CURRENT OPERATION II. Pulsed Flow
An alternative to continuously moving solid
down the column is to move solid and entrained
fluid down in pulses. This is commonly used in
continuous ion exchange systems such as variants
of the Higgins system [10, 18, 20] and the Graver
Water Treatment System [14]. This system could
also be applied to the binary separator shown in
Fig. 6.
In the pulsed system the solid is stationary
except for short periods when it moves down. Thus
usolute is given directly by Eq. (5) or (7) with the
superficial fluid velocities given by Eq. (20).
When the solid and fluid are pulsed downward,
the solute waves are also shifted down. The solute
wave theory for the pulsed flow system is shown
in Fig. 8. The net movement of solute A is upward
in zones 1 and 2 and downward in zone 4. The net
movement of solute B is downward in zones 1 and
2 and up in zone 3. Fig. 8 is drawn for a plug
flow movement of solids and fluids during the
downward pulse. Feed would be introduced con-
tinuously and withdrawn continuously. Only one
feed period was shown to keep the figure simple.
If mixing occurs during the pulse less separation
will be obtained. This is a practical limit to sharp
fractionation of two solutes in a pulsed flow
counter-current system.
The average solids velocity is

$$V_{solids,avg} = l_p/t_p \qquad (23)$$

where lp is the length of a pulse movement in meters and tp is the time between pulses in
minutes. The average solute velocity over many
pulses is given by Eq. (19). The desired separa-
tion will be achieved when inequalities (21) and
(22) are satisfied.
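For a feel for the magnitudes (numbers invented for illustration): pulses of lp = 0.1 m every tp = 5 min give

$$V_{solids,avg} = 0.1/5 = 0.02\ \mathrm{m/min}$$

which is the solids velocity to use in Eq. (19) when checking inequalities (21) and (22).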
SIMULATED COUNTER-CURRENT SYSTEMS
An alternative to moving bed systems is to
simulate counter-current movement. This is done
with a series of packed bed sections by switching
the location of all feed and product withdrawal
ports. An observer located at a product withdrawal
port sees the solids move downwards every time the port location is shifted upwards. Thus the observer sees a process very similar to the pulsed counter-current system analyzed in Fig. 8.

FIGURE 8. Solute movement in pulsed counter-current system.
The first simulated counter-current system
was the Shanks system which has been applied
to leaching, adsorption and ion exchange [13, 21].
The Shanks system uses a series of columns with
plumbing arranged so that feed can be input and
product withdrawals removed from any column.
Thus the counter-current separator shown in Fig.
6 can be simulated. Modern adaptations of the
Shanks process have been done by Barker and his
co-workers for gas chromatography systems [3, 5]
and for gel permeation chromatography [4]. UOP
has extensively used a pilot plant scale system
which is a series of columns for scaling up their
commercial scale units [9, 16]. The commercial
UOP simulated counter-current process, Sorbex,
uses a single column with many packed sections
and has a rotating valve for distributing feed, de-
sorbent and products. The commercial units simu-
late the system shown in Fig. 6 [6, 7, 8, 9, 16]. The
UOP process was first commercialized as Molex
for separation of linear paraffins from branched-
chain and cyclic hydrocarbons using 5A molecular
sieves. Since then, processes for p-xylene purifica-
tion (Parex), Olefin separation (Olex), and
separation of fructose from glucose (Sarex) have
been commercialized. Pilot plant scale separations
for a variety of other problems have been demon-
strated [9, 16]. A large number of patents have
been granted on simulated moving bed systems.
The solute movement theory can be used to
analyze the simulated counter-current system in
two ways. First, if the observer fixes himself at
one of the outlet or inlet ports then he sees the
solid and entrained fluid transferred downwards
in pulses. This observer then sees solute movement
as shown in Fig. 8. The average solids velocity this
observer sees is given by Eq. (19), and the analysis
applied for the pulsed counter-current operation is
applicable.
In the second analysis the observer fixes him-
self on the ground and he sees the solid as station-
ary. With the fluid flowing up the column, he sees
all the inlet and outlet ports move up the column
at discrete times. When a port reaches the top of
the column, it recycles back to the bottom. In be-
tween the shifting of port locations, the adsorber
is a fixed bed system. Thus the solute wave velocity
can be determined from Eqs. (5) or (7). The fluid
velocities in each section will differ. The super-
ficial fluid velocities are given by Eq. (20) and the
interstitial velocity v equals Vsuper/α. The shifting
of ports does not shift the solute waves, but does
change the wave velocities since it changes which
zone the solute is in. This is illustrated in Fig. 9.
If the desorbent changes the equilibrium constants
this will also change the solute velocities.
Note in Fig. 9 that the movement of both
species is up, but the more strongly adsorbed
solute B moves down with respect to the port
locations. Feed would be introduced continuously
at the port marked A + B, but was illustrated for
only one time period. The zone numbers cor-
responding to Fig. 6 are shown on Fig. 9.

FIGURE 9. Solute movement in simulated counter-current system.

The average port velocity is

$$u_{port,avg} = l_{port}/t_{port} \qquad (24)$$

where lport is the packing height between ports and
tport is the time between switches of ports. The
conditions to achieve separation are then
$$u_{A,1} > u_{A,2} > u_{port,avg} > u_{A,4}$$
$$u_{B,3} > u_{port,avg} > u_{B,1} > u_{B,2} \qquad (25)$$
These conditions follow the same order as Eqs.
(21) and (22).
How close is a simulated counter-current
system to a truly counter-current separator? Al-
though the answer to this depends on the chemical
system and the column length, Liapis and Rippin
[15] found that the simulated system had an ad-
sorbent utilization from 79% to 98% that of the
truly counter-current system. With a single zone
system they found that from two to four sections
were sufficient and that two to four column
switches were required to reach a periodic con-
centration profile.
Comparison of simulated counter-current and
truly counter-current systems is of considerable
interest. Both systems at steady state can at best
do a complete binary separation. Partial sepa-
ration of additional components can be obtained
with side withdrawals. The simulated counter-
current system could also be extended to more
complex cycles where part of the bed is temporarily
operated as a batch chromatograph. The simulated
moving bed system is actually a fixed bed system.
Thus flooding (unintentional upwards entrain-
ment of solid) will not be a problem, but excessive
pressure drop may result for small particles or
viscous solutions. The fixed bed will have a lower
α and hence a higher capacity than truly counter-
current systems, but this will be offset by the
distribution zones between sections. The actual
movement of solids requires means for keeping the
bed stable, may result in excessive attrition, but
allows for easy solids replacement or external re-
activation. Both systems have mechanical difficul-
ties to overcome. In the simulated moving bed
these difficulties are the valving and timing while
in an actual moving bed they involve moving the solids without mixing. Currently, the simulated counter-current systems have been the preferred choice for large-scale adsorption installations.
ACKNOWLEDGMENT
Some of the research reported here was sup-
ported by NSF Grant CPE-8006903. This paper
is a modified version of Wankat [23]. The per-
mission of the Corn Refiner's Association to
reprint parts of that paper is gratefully
acknowledged.
NOMENCLATURE
A(T) -Equilibrium parameter, Eq. (6)
Ac -Cross sectional area of column
c -Solute concentration in fluid, kg/L
Cf -Heat capacity of fluid
Cs -Heat capacity of solid
Cw -Heat capacity of wall
F -Feed rate, L/min
k -Exponent in Freundlich isotherm, Eq. (14)
Kd -Fraction of interparticle volume species can penetrate, Eq. (1)
lp, lport -Length of travel of pulse, or packing height between ports, m
L -Column length, m
P1, P2 -Product flow rates, L/min
q -Amount of solute adsorbed, kg/kg adsorbent
t -Time, min
tp, tport -Time between pulses, or between switching port locations, min
T -Temperature, °C
Tf, Ts, Tw, Tref -Temperature of fluid, solid, wall and reference
TC, TH -Cold and hot temperatures
uA, uB, usolute -Solute wave velocity, m/min, Eq. (5) or (7)
ushock -Shock wave velocity, m/min, Eq. (17)
uthermal -Thermal wave velocity, m/min, Eq. (9)
v -Interstitial fluid velocity, m/min
Ve -Elution volume of non-adsorbed species, L
Vi -Internal void volume, L
Vo -External void volume, L
Vsolid -Solid velocity, m/min
Vsuper -Superficial fluid velocity, m/min
W -Weight of column wall per length, kg/m
z -Axial distance in column, m
α -Interparticle void fraction
ε -Intraparticle void fraction
Δ -Difference calculation
ρs -Solid density, kg/L
REFERENCES
1. Baker, B. and R. L. Pigford, "Cycling Zone Adsorp-
tion: Quantitative Theory and Experimental Results,"
Ind. Eng. Chem. Fundam., 10, 283 (1971).
2. Barker, P. E., "Continuous Chromatographic Re-
fining," in E. S. Perry and C. J. Van Oss (eds.), Pro-
gress in Separation and Purification, Vol. 4, p. 325,
Wiley, N.Y., 1971.
3. Barker, P. E. and R. E. Deeble, "Production Scale
Organic Mixture Separation Using a New Sequential
Chromatographic Machine," Anal. Chem., 45, 1121
(1973).
4. Barker, P. E., F. J. Ellison, and B. W. Hatt, "A New
Process for the Continuous Fractionation of Dextran,"
Ind. Eng. Chem. Proc. Des. Develop., 17, 302 (1978).
5. Barker, P. E., M. I. Howari, and G. A. Irlam,
"Further Developments in the Modelling of a Se-
quential Chromatographic Refiner Unit," Chromato-
graphia, 14, 192 (1981).
6. Broughton, D. B., "Molex: Case History of a Pro-
cess," Chem. Eng. Prog., 64, (8) 60 (1968).
7. Broughton, D. B. and D. B. Carson, "The Molex Pro-
cess," Petroleum Refiner, 38 (4), 130 (1959).
8. Broughton, D. B., R. W. Neuzil, J. M. Pharis, and
C. S. Breasley, "The Parex Process for Recovering
Paraxylene," Chem. Eng. Prog. 66, (9) 70 (1970).
9. de Rosset, A. J., R. W. Neuzil, and D. J. Korous,
"Liquid Column Chromatography as a Predictive Tool
for Continuous Counter-current Adsorption Separa-
tions," Ind. Eng. Chem. Proc. Des. Develop., 15, 261
(1976).
10. Gold, H., A. Todisco, A. A. Sonin, and R. F. Prob-
stein, "The Avco Continuous Moving Bed Ion Ex-
change Process and Large Scale Desalting," Desali-
nation, 17, 97 (Aug. 1975).
11. Hiester, N. K., T. Vermeulen, and G. Klein, "Ad-
sorption and Ion Exchange," Sect. 16 in R. H. Perry,
C. H. Chilton and S. D. Kirkpatrick (eds.), Chemical
Engineer's Handbook, 4th ed., McGraw-Hill, N.Y.,
1963.
12. Jamrack, W. D., Rare Metal Extraction by Chemical
Engineering Methods, Pergamon Press, N.Y., 1963.
13. King, C. J., Separation Processes, 2nd ed., McGraw-
Hill, N.Y., p. 172-195, 1980.
14. Levendusky, J. A., "Progress Report on the Con-
tinuous Ion Exchange Process," Water-1969, CEP
Symposium Ser., 65, #97, 113 (1969).
15. Liapis, A. I. and D. W. T. Rippin, "The Simulation
of Binary Adsorption in Continuous Counter-current
Operation and a Comparison with Other Operating
Modes," AIChE Journal, 25, 455 (1979).
16. Neuzil, R. W., D. H. Rosback, R. H. Jensen, J. R.
Teague, and A. J. de Rosset, "An Energy-Saving
Separation Scheme," Chemtech, 10, 498 (Aug. 1980).
17. Rendell, M., "The Real Future for Large-Scale
Chromatography," Process Engineering, p. 66 (April
1975).
18. Roland, L. D., "Ion Exchange-operational advantages
of continuous plants," Processing, 22, (1), 11 (1976).
19. Sherwood, T. K., R. L. Pigford, and C. R. Wilke,
Mass Transfer, Chapt. 10, McGraw-Hill, N.Y., 1975.
20. Smith, J. C., A. W. Michaelson, and J. T. Roberts,
"Ion Exchange Equipment," p. 19-18 to 19-26, in
R. H. Perry, C. H. Chilton and S. D. Kirkpatrick
(eds.), Chemical Engineer's Handbook, 4th ed.,
McGraw Hill, N.Y., 1963.
21. Treybal, R. E., Mass Transfer Operations, 2nd ed.,
McGraw-Hill, 1968.
22. Wankat, P. C., "Cyclic Separation Techniques," in
A. E. Rodrigues and D. Tondeur (eds.), Percolation
Processes, Theory and Applications, Sijthoff and
Noordhoff, Alphen aan den Rijn, Netherlands, p. 443-
516 (1981).
23. Wankat, P. C., "Operational Techniques for Ad-
sorption and Ion Exchange," Corn Refiner's Associ-
ation Conf., Lincolnshire, IL, June 1982.
BOOK REVIEW: Carnegie-Mellon
Continued from page 37.
Obviously the book is not intended for use in the usual academic sense, and its particular audience is the many people ... faculty, staff and students ... who have contributed to chemical engineering at Carnegie over the years. It can also serve as a guide to those considering similar undertakings at their own institution in pointing out the monumental effort involved. Admittedly, this reviewer is not wholly unbiased in consideration of this volume inasmuch as he has spent almost half of his academic career at Carnegie, but he can attest to a considerable portion of the accuracy of Professor Rothfus' many details. It's delightful reading!
books received
"Resource Recovery Economics," Stuart H. Russell;
Marcel Dekker Inc., New York 10016; 312 pages, $39.75
(1982)
"Specifying Air Pollution Control Equipment," edited by
Richard A. Young, Frank L. Cross, Jr.; Marcel Dekker Inc.,
New York 10016; 296 pages, $38.50 (1982)
"Introduction to High-Performance Liquid Chromatogra-
phy," R. J. Hamilton, P. A. Sewell; Chapman & Hall, 733
Third Ave., New York 10017; 183 pages, $29.95 (1982)
"Nuclear Waste Management Abstracts," Richard A.
Heckman, Camille Minichino; Plenum Publishing Corp.,
New York 10013; 103 pages, $45.00 (1982)
"Heat Transfer in Nuclear Reactor Safety," S. George
Bankoff, N. H. Afgan; Hemisphere Publishing Corp., New
York 10036; 964 pages, $95.00 (1982)
"Essentials of Nuclear Chemistry," H. J. Arnikar; John
Wiley & Sons, Somerset, NJ 08873; 335 pages, $17.95
(1982)
"Technology Transfer and Innovation," Louis N. Mogavero,
Robert S. Shane; Marcel Dekker Inc., New York, 10016;
168 pages, $22.50 (1982)
"Solar Heating and Cooling: Active and Passive Design,"
Second Edition, J. F. Kreider, F. Kreith; Hemisphere
Publishing Corp., Washington DC 20005; 479 pages,
$29.95 (1982)
"Liquids and Liquid Mixtures," Third Edition, J.S. Rowlin-
son, F. L. Swinton; Butterworths, Woburn, MA 01801; 328
pages, $69.95 (1982)
"Handbook of Multiphase Systems," edited by G. Hetsroni;
Hemisphere Publishing Corp., Washington, DC 20005;
$64.50 (1982)
"Liquid Filtration," Nicholas P. Cheremisinoff, David S.
Azbel; Butterworth Publishers, Woburn, MA 01801; 520
pages, $49.95 (1983)
ACKNOWLEDGMENTS
Departmental Sponsors: The following 141 departments contributed to the support of CHEMICAL ENGINEERING EDUCATION:
U. of Detroit
Drexel University
University of Florida
Florida Institute of Technology
Georgia Institute of Technology
University of Houston
Howard University
University of Illinois (Urbana)
Illinois Institute of Technology
Institute of Paper Chemistry
University of Iowa
Iowa State University
Johns Hopkins University
Kansas State University
University of Kansas
University of Kentucky
Lafayette College
Lamar University
Laval University
Lehigh University
Louisiana State University
Louisiana Tech. University
University of Louisville
University of Maine
Manhattan College
University of Maryland
University of Massachusetts
Massachusetts Institute of Technology
McMaster
Princeton University
Purdue University
University of Queensland
Queen's University
Rensselaer Polytechnic Institute
University of Rhode Island
Rice University
University of Rochester
Rose-Hulman Institute
Rutgers U.
University of South Alabama
University of South Carolina
University of Saskatchewan
South Dakota School of Mines
University of Southern California
Stanford University
Stevens Institute of Technology
University of Sydney
Syracuse University
Teesside Polytechnic Institute
Tennessee Technological University
University of Tennessee
Texas A&M University
University of Texas at Austin
Texas Technological University
University of Toledo.
Our name has been synonymous with
engineering education for over 150 years.
Here are thirteen more reasons why.
New
GUIDE TO CHEMICAL
ENGINEERING PROCESS
DESIGN AND ECONOMICS
Gael Ulrich, University of New Hampshire
Solutions Manual available
January 1984 approx. 464 pp.
New
FUNDAMENTALS OF
MOMENTUM, HEAT AND MASS
TRANSFER, 3rd Edition
James R. Welty, Charles E. Wicks, and Robert E. Wilson, Oregon State University
Solutions Manual available
January 1984 approx. 832 pp.
New
NUMERICAL METHODS AND
MODELLING FOR CHEMICAL
ENGINEERS
Mark E. Davis, Virginia Polytechnic Institute and State University
Solutions Manual available
February 1984 approx. 320 pp.
New
INTRODUCTION TO
MATERIAL AND ENERGY
BALANCES
Gintaras V. Reklaitis, Purdue University
Solutions Manual available
1983 683 pp.
New
NATURAL GAS PRODUCTION
ENGINEERING
Chi Ikoku, Pennsylvania State
University
Solutions Manual available
January 1984 approx. 464 pp.
ELEMENTARY PRINCIPLES OF
CHEMICAL PROCESSES
Richard M. Felder and Ronald W. Rousseau, North Carolina State University
Solutions Manual available
1978 571 pp.
CHEMICAL AND
ENGINEERING
THERMODYNAMICS
Stanley J. Sandler, University of Delaware
Solutions Manual available
1977 587 pp.
CHEMICAL REACTION
ENGINEERING, 2nd Edition
Octave Levenspiel, Oregon State
University
Solutions Manual available
1974 600 pp.
AN INTRODUCTION TO
CHEMICAL ENGINEERING,
KINETICS AND REACTOR
DESIGN
Charles G. Hill, Jr., University of Wisconsin
Solutions Manual available
1977 594 pp.
CHEMICAL REACTOR
ANALYSIS AND DESIGN
G. F. Froment, Rijksuniversiteit Ghent, Belgium, and Kenneth B. Bischoff, University of Delaware
1979 765 pp.
PRINCIPLES OF UNIT
OPERATIONS, 2nd Edition
Alan S. Foust, Emeritus, Lehigh University; Leonard A. Wenzel and Curtis W. Clump, both of Lehigh University; Louis Maus and L. Bruce Andersen, New Jersey Institute of Technology
Solutions Manual available
1980 784 pp.
TRANSPORT PHENOMENA
R. Byron Bird, Warren E. Stewart, and
Edwin N. Lightfoot, University of
Wisconsin
1960 780 pp.
EQUILIBRIUM-STAGE
SEPARATION OPERATIONS IN
CHEMICAL ENGINEERING
Ernest J. Henley, University of Houston, and J. D. Seader, University of Utah
1981 742 pp.
To be considered for complimentary
copies, please write to Dennis Sawicki,
Dept. 4-1315. Please include course
name, enrollment, and title of present
text.
JOHN WILEY & SONS, Inc.
605 Third Avenue, New York, NY 10158
In Canada: 22 Worcester Road, Rexdale, Ontario M9W 1L1
4-1315
ALJ - livejournal downloader
ALJ is a program for exporting a complete user's journal, including all comments, into plain files. Once a journal is downloaded, the program can check whether new entries are available, download them, and add them to the plain file.
RegExp Developer Tool
A utility intended for editing Perl regular expressions; it allows you to choose different compilation options for a regular expression. The script has a simple interface for displaying the results of the evaluated expressions.
acWEB - HTTP server for Win32
acWEB is an open-source replacement for MS IIS and other proprietary web servers for Windows. Unlike IIS, acWEB is not affected by viruses like CodeRed, Nimda, etc.
MedSurvey
A medical information system for Windows-based systems (95-XP/CE/PocketPC) based on Open Source platforms.
The Rsdn.Framework.Data namespace
Rsdn.Framework.Data is a namespace that provides a higher-level wrapper for ADO.NET with high-performance object-relational mapping.
YourShop is a MS BASED E-SHOP
YourShop is a free Internet shop based on MS technologies: Windows9x/2000/XP, IIS, Access97, VB, JS.
Nuclos
"Nuclos" is an object-oriented framework which incorporates basic functionality to build "business" applications.
Microsoft Azure makes it easy to build and deploy websites to production. But you’re not done when your application is live, you’re just getting started! You’ll need to handle changing requirements, database updates, scale, and more. Fortunately, Azure App Service has you covered, with plenty of features to help you keep your sites running smoothly.
Azure offers secure and flexible development, deployment and scaling options for any size web application. Leverage your existing tools to create and deploy applications without the hassle of managing infrastructure.
Provision a production web application yourself in minutes by easily deploying content created using your favorite development tool. You can deploy an existing site directly from source control with support for Git, GitHub, Bitbucket, CodePlex, TFS, and even DropBox. Deploy directly from your favorite IDE or from scripts using PowerShell in Windows or CLI tools running on any OS. Once deployed, keep your sites constantly up-to-date with support for continuous deployment.
Azure provides scalable, durable cloud storage, backup, and recovery solutions for any data, big or small. When deploying applications to a production environment, storage services such as Tables, Blobs and SQL Databases, help you scale your application in the cloud.
With SQL Databases, it is important to keep your productive database up-to-date when deploying new versions of your application. Thanks to Entity Framework Code First Migrations, the development and deployment of your data model has been simplified to update your environments in minutes. This hands-on lab will show you the different topics you could encounter when deploying your web app to production environments in Microsoft Azure.
All sample code and snippets are included in the Web Camps Training Kit.
For more in-depth coverage of this topic, see the Building Real-World Cloud Apps with Azure e-book.
Overview
Objectives
In this hands-on lab, you will learn how to:
- Enable Entity Framework Migrations with an existing model
- Update the object model and database accordingly using Entity Framework Migrations
- Deploy to Azure App Service using Git
- Rollback to a previous deployment using the Azure Management portal
- Use Azure Storage to scale a web app
- Configure auto-scaling for a web app using the Azure Management Portal
- Create and configure a load test project in Visual Studio
Prerequisites
The following is required to complete this hands-on lab:
- Visual Studio Express 2013 for Web or greater
- Azure SDK for .NET 2.2
- GIT Version Control System
- A Microsoft Azure subscription
- If you are a Visual Studio Professional, Test Professional, Premium or Ultimate with MSDN or MSDN Platforms subscriber, activate your MSDN benefit now to start developing and testing on Azure
- BizSpark members automatically receive the Azure benefit through their Visual Studio Ultimate with MSDN subscriptions
- Members of the Microsoft Partner Network Cloud Essentials program receive monthly Azure credits at no charge
Setup
In order to run the exercises in this hands-on lab, you will need to set up your environment first.
- Open Windows Explorer and browse to the lab's Source folder.
- Right-click on Setup.cmd and select Run as administrator to launch the setup process that will configure your environment and install the Visual Studio code snippets for this lab.
- If the User Account Control dialog is shown, confirm the action to proceed.
Note: Make sure you have checked all the dependencies for this lab before running the setup.
Using the Code Snippets
Throughout the lab document, you will be instructed to insert code blocks. For your convenience, most of this code is provided as Visual Studio Code Snippets, which you can access from within Visual Studio 2013 to avoid having to add it manually.
Note: Each exercise is accompanied by a starting solution located in the Begin folder of the exercise that allows you to follow each exercise independently of the others. Please be aware that the code snippets that are added during an exercise are missing from these starting solutions and may not work until you have completed the exercise. Inside the source code for an exercise, you will also find an End folder containing a Visual Studio solution with the code that results from completing the steps in the corresponding exercise. You can use these solutions as guidance if you need additional help as you work through this hands-on lab.
Exercises
This hands-on lab includes the following exercises:
- Using Entity Framework Migrations
- Deploying a Web App to Staging
- Performing Deployment Rollback in Production
- Scaling Using Azure Storage
- Using Autoscale for Web Apps (Optional for Visual Studio 2013 Ultimate edition)
Estimated time to complete this lab: 75 minutes
Note: When you first start Visual Studio, you must select one of the predefined settings collections. If you chose a different collection than the one assumed here, some menus and commands may differ slightly from the steps described in this lab.
Exercise 1: Using Entity Framework Migrations
When you are developing an application, your data model might change over time. These changes could affect the existing model in your database (if you are creating a new version) and it is important to keep your database up-to-date to prevent errors.
To simplify the tracking of these changes in your model, Entity Framework Code First Migrations automatically detect changes comparing your model with the database schema and generates specific code to update your database, creating new versions of your database.
This exercise shows you how to enable Migrations for your application and how you can easily detect and generate changes to update your databases.
Task 1 – Enabling Migrations
In this task, you will go through the steps of enabling Entity Framework Code First Migrations to the Geek Quiz database, changing the model and understanding how those changes are reflected in the database.
Open Visual Studio and open the GeekQuiz.sln solution file from Source\Ex1-UsingEntityFrameworkMigrations\Begin.
Build the solution in order to download and install the NuGet package dependencies. To do this, right-click the solution and click Build Solution or press Ctrl + Shift + B.
From the Tools menu in Visual Studio, select Library Package Manager, and then click Package Manager Console.
In the Package Manager Console, enter the following command and then press Enter. An initial migration based on the existing model will be created.
Enable-Migrations -ContextTypeName GeekQuiz.Models.TriviaContext
Enabling Migrations
Note: This command adds a Migrations folder to Geek Quiz project that contains a file called Configuration.cs. The Configuration class allows you to configure how Migrations behaves for your context.
With Migrations enabled, you need to update the Configuration class to populate the database with the initial data that Geek Quiz requires. Under Migrations, replace the Configuration.cs file with the one located in the Source\Assets folder of this lab.
Note: Since Migrations will call the Seed method with every database update, you need to be sure that records are not duplicated in the database. The AddOrUpdate method will help to prevent duplicate data.
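The Configuration.cs provided with the lab is what actually runs; purely as a sketch of the pattern it follows (the class shape is standard Entity Framework 6 Code First, but the sample question shown here is invented, not taken from the lab's asset file):

    using System.Data.Entity.Migrations;
    using GeekQuiz.Models;

    internal sealed class Configuration : DbMigrationsConfiguration<TriviaContext>
    {
        public Configuration()
        {
            AutomaticMigrationsEnabled = false;
        }

        protected override void Seed(TriviaContext context)
        {
            // AddOrUpdate matches on Title, so re-running Seed after each
            // Update-Database does not duplicate rows.
            context.TriviaQuestions.AddOrUpdate(
                q => q.Title,
                new TriviaQuestion { Title = "Which is the largest planet?" }); // illustrative question
        }
    }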
To add an initial migration, enter the following command and then press Enter.
Note: Make sure that there is no database named "GeekQuizProd" in your LocalDB instance.
Add-Migration InitialSchema
Adding base schema migration
Note: Add-Migration will scaffold the next migration based on changes you have made to your model since the last migration was created. In this case, as it is the first migration of the project, it will add the scripts to create all the tables defined in the TriviaContext class.
Execute the migration to update the database by running the following command. For this command you will specify the Verbose flag to view the SQL statements being applied to the target database.
Update-Database -Verbose
Creating initial database
Note: Update-Database will apply any pending migrations to the database. In this case, it will create the database using the connection string defined in your web.config file.
Go to View menu and open SQL Server Object Explorer.
Open in SQL Server Object Explorer
In the SQL Server Object Explorer window, open the GeekQuizProd database and expand the Tables node. As you can see, the Update-Database command generated all the tables defined in the TriviaContext class. Locate the dbo.TriviaQuestions table and open the columns node. In the next task, you will add a new column to this table and update the database using Migrations.
Trivia Questions Columns
Task 2 – Updating Database Schema Using Migrations
In this task, you will use Entity Framework Code First Migrations to detect a change in your model and generate the necessary code to update the database. You will update the TriviaQuestions entity by adding a new property to it. Then you will run commands to create a new Migration to include the new column in the table.
In Solution Explorer, double-click the TriviaQuestion.cs file located inside the Models folder.
Add a new property named Hint, as shown in the following code snippet.
public class TriviaQuestion
{
    public int Id { get; set; }

    [Required]
    public string Title { get; set; }

    public virtual List<TriviaOption> Options { get; set; }

    public string Hint { get; set; }
}
In the Package Manager Console, enter the following command and then press Enter. A new migration will be created reflecting the change in our model.
Add-Migration QuestionHint
Add-Migration
Note: A Migration file is composed of two methods, Up and Down.
- The Up method is used to specify what changes the current version of our application needs to apply to the database.
- The Down method is used to reverse the changes we have added to the Up method.
When the Database Migration updates the database, it will run all migrations in timestamp order, and only those that have not been applied since the last update (the _MigrationHistory table keeps track of which migrations have been applied). The Up method of all pending migrations will be called, making the changes we have specified to the database. If we decide to go back to a previous migration, the Down method will be called to undo the changes in reverse order.
In the Package Manager Console, enter the following command and then press Enter.
Update-Database -Verbose
The output of the Update-Database command generated an Alter Table SQL statement to add a new column to the TriviaQuestions table, as shown in the image below.
Add column SQL statement generated
In SQL Server Object Explorer, refresh the dbo.TriviaQuestions table and check that the new Hint column is displayed.
Showing the new Hint Column
Back in the TriviaQuestion.cs editor, add a StringLength constraint to the Hint property, as shown in the following code snippet.
public class TriviaQuestion
{
    public int Id { get; set; }

    [Required]
    public string Title { get; set; }

    public virtual List<TriviaOption> Options { get; set; }

    [StringLength(150)]
    public string Hint { get; set; }
}
In the Package Manager Console, enter the following command and then press Enter.
Add-Migration QuestionHintLength
In the Package Manager Console, enter the following command and then press Enter.
Update-Database -Verbose
The output of the Update-Database command generated an Alter Table SQL statement to update the hint column type of the TriviaQuestions table, as shown in the image below.
Alter column SQL statement generated
In SQL Server Object Explorer, refresh the dbo.TriviaQuestions table and check that the Hint column type is nvarchar(150).
Showing the new constraint
Exercise 2: Deploying a Web App to Staging
Web Apps in Azure App Service enables you to perform staged publishing. Staged publishing creates a staging site slot for each default production site and enables you to swap these slots with no down time. This is really useful to validate changes before releasing to the public, incrementally integrate site content, and roll back if changes are not working as expected.
In this exercise, you will deploy the Geek Quiz application to the staging environment of your web app using Git source control. To do this, you will create the web app and provision the required components at the management portal, configure a Git repository and push the application source code from your local computer to the staging slot. You will also update your production database with the Code First Migrations you created in the previous exercise. You will then execute the application in this test environment to verify its operation. Once you are satisfied that it is working according to your expectations, you will promote the application to production.
Note: To enable staged publishing, the web app must be in Standard mode. Note that additional charges will be incurred if you change your web app to Standard mode. For more information about pricing, see App Service Pricing.
Task 1 – Creating a Web App in Azure App Service
In this task, you will create a web app in Azure App Service from the management portal. You will also configure a SQL Database to persist the application data, and configure a local Git repository for source control.
Go to the Azure management portal and sign in using the Microsoft account associated with your subscription.
Click New in the command bar at the bottom of the page.
Creating a new web app
Click Compute, Website and then Custom Create.
Creating a new web app using Custom Create
In the New Website - Custom Create dialog box, provide an available URL (e.g. geek-quiz), select a location in the Region drop-down list, and select Create a new SQL database in the Database drop-down list. Finally, select the Publish from source control check box and click Next.
Customizing the new web app
Specify the following information for the database settings:
- In the Name text box, enter a database name (e.g. geekquiz_db)
- In the Server drop-down list, select New SQL database server. Alternatively, you can select an existing server.
- In the Database username and Database password boxes, enter the administrator username and password for the SQL database server. If you select a server you have already created, you will be prompted for the password.
Specifying the database settings
Click Next to continue.
Select Local Git repository for the source control to use and click Next.
Note: You may be prompted for the deployment credentials (a username and password).
Creating the Git repository
Wait until the new web app is created.
Note: By default, Azure provides domains at azurewebsites.net but also gives you the possibility to set custom domains using the Azure management portal. However, you can only manage custom domains if you are using certain Azure App Service modes.
Azure App Service is available in Free, Shared, Basic, Standard, and Premium editions. In Free and Shared mode, all web apps run in a multi-tenant environment and have quotas for CPU, Memory, and Network usage. The maximum number of free apps may vary with your plan. In Standard mode, you choose which apps run on dedicated virtual machines that correspond to the standard Azure compute resources. You can find the web app mode configuration in the Scale menu of your web app.
If you are using Shared or Standard mode, you will be able to manage custom domains for your web app by going to your app’s Configure menu and clicking Manage Domains under domain names.
Once the web app is created, click the link under the URL column to check that the new web app is running.
Browsing to the new web app
web app running
Task 2 – Creating the Production SQL Database
In this task, you will use the Entity Framework Code First Migrations to create the database targeting the Azure SQL Database instance you created in the previous task.
In the Management Portal, navigate to the web app you created in the previous task and go to its Dashboard.
In the Dashboard page, click View connection strings link under the quick glance section.
View connection strings
Copy the connection string value and close the dialog box.
Connection String in Azure Management Portal
Click SQL Databases to see the list of SQL databases in Azure
SQL Database menu
Locate the database you created in the previous task and click on the Server.
SQL Database server
In the Quick Start page of the server, click on Configure.
Configure menu
In the Allowed IP addresses section, click on Add to the allowed IP addresses link to enable your IP to connect to the SQL Database server.
Allowed IP addresses
Click Save at the bottom of the page to complete the step.
Switch back to Visual Studio.
In the Package Manager Console, execute the following command, replacing the [YOUR-CONNECTION-STRING] placeholder with the connection string you copied from Azure.
Update-Database -Verbose -ConnectionString "[YOUR-CONNECTION-STRING]" -ConnectionProviderName "System.Data.SqlClient"
Update database targeting Azure SQL Database
Task 3 – Deploying Geek Quiz to Staging Using Git
In this task, you will enable staged publishing in your web app. Then, you will use Git to publish the Geek Quiz application directly from your local computer to the staging environment of your web app.
Go back to the portal and click the name of the web app under the Name column to display the management pages.
Opening the web app management pages
Navigate to the Scale page. Under the general section, select Standard for the configuration and click Save in the command bar.
Note: To run all web apps in the current region and subscription in Standard mode, leave the Select All check box selected in the Choose Sites configuration. Otherwise, clear the Select All check box.
Upgrading the Web App to Standard mode
Click Yes to confirm the changes.
Confirming the change to Standard mode
Go to the Dashboard page and click Enable staged publishing under the quick glance section.
Enabling staged publishing
Click Yes to enable staged publishing.
Confirming staged publishing
In the list of web apps, expand the mark to the left of your web app name to display the staging site slot. It has the name of your web app followed by (staging). Click the staging site to go to the management page.
Navigating to the staging app
Notice that the management page looks like any other web app's management page. Navigate to the Deployments page and copy the Git URL value. You will use it later in this exercise.
Copying the Git URL value
Open a new Git Bash console and execute the following commands. Update the [YOUR-APPLICATION-PATH] placeholder with the path to the GeekQuiz solution, located in the Source\Ex1-DeployingWebSiteToStaging\Begin folder of this lab.
cd "[YOUR-APPLICATION-PATH]" git init git config --global user.email "{username@example.com}" git config --global user.name "{your-user-name}" git add . git commit -m "Initial commit"
Git initialization and first commit
Run the following command to push your web app to the remote Git repository. Replace the placeholder with the URL you obtained from the management portal. You will be prompted for your deployment password.
git remote add azure [GIT-CLONE-URL]
git push azure master
Pushing to Azure
Note: When you deploy content to the FTP host or GIT repository of a web app, you must authenticate using the deployment credentials that you created from the web app’s Quick Start or Dashboard management pages. If you do not know your deployment credentials you can easily reset them using the management portal. Open the web app Dashboard page and click the Reset your deployment credentials link. Provide a new password and click OK. Deployment credentials are valid for use with all web apps associated with your subscription.
In order to verify the web app was successfully pushed to Azure, go back to the management portal and click Websites.
Locate your web app and expand the entry to display the staging site slot. Click its Name to go to the management page.
Click Deployments to see the deployment history. Verify that there is an Active Deployment with your "Initial commit".
Active deployment
Finally, click Browse in the command bar to go to the web app.
Browse web app
If the application is successfully deployed, you will see the Geek Quiz login page.
Note: The address URL of the deployed application contains the name of your web app followed by -staging.
Application running in the staging environment
If you wish to explore the application, click Register to register a new user. Complete the account details by entering a user name, email address and password. Next, the application shows the first question of the quiz. Answer a few questions to make sure it is working as expected.
Application ready to be used
Task 4 – Promoting the Web App to Production
Now that you have verified that the web app is working correctly in the staging environment, you are ready to promote it to production. In this task, you will swap the staging site slot with the production site slot.
Go back to the management portal and select the staging site slot. Click Swap in the command bar.
Swap to production
Click Yes in the confirmation dialog box to proceed with the swap operation. Azure will immediately swap the content of the production site with the content of the staging site.
Note: Some settings from the staged version will automatically be copied to the production version (e.g. connection string overrides, handler mappings, etc.) but other settings will not change (e.g. DNS endpoints, SSL bindings, etc.).
Confirming swap operation
Once the swap is complete, select the production slot and click Browse in the command bar to open the production site. Notice the URL in the address bar.
Note: You might need to refresh your browser to clear cache. In Internet Explorer, you can do this by pressing CTRL+R.
In the Git Bash console, update the remote URL for the local Git repository to target the production slot. To do this, run the following command replacing the placeholders with your deployment username and the name of your web app.
Note: In the following exercises, you will push changes to the production site instead of staging just for the simplicity of the lab. In a real-world scenario, it is recommended to verify the changes in the staging environment before promoting to production.
git remote set-url azure https://<your-user>@<your-web-site>.scm.azurewebsites.net:443/<your-web-site>.git
Exercise 3: Performing Deployment Rollback in Production
There are scenarios where you do not have a staging slot to perform hot swap between staging and production, for example, if you are working with Free or Shared mode. In those scenarios, you should test your application in a testing environment –either locally or in a remote site– before deploying to production. However, it is possible that an issue not detected during the testing phase may arise in the production site. In this case, it is important to have a mechanism to easily switch to a previous and more stable version of the application as quickly as possible.
In Azure App Service, continuous deployment from source control makes this possible thanks to the redeploy action available in the management portal. Azure keeps track of the deployments associated with the commits pushed to the repository and provides an option to redeploy your application using any of your previous deployments, at any time.
In this exercise you will perform a change to the code in the Geek Quiz application that intentionally injects a bug. You will deploy the application to production to see the error, and then you will take advantage of the redeploy feature to go back to the previous state.
Task 1 – Updating the Geek Quiz Application
In this task, you will refactor a small piece of code of the TriviaController class to extract part of the logic that retrieves the selected quiz option from the database into a new method.
Switch to the Visual Studio instance with the GeekQuiz solution from the previous exercise.
In Solution Explorer, open the TriviaController.cs file inside the Controllers folder.
Locate the StoreAsync method and select the code highlighted in the following figure.
Selecting the code
Right-click the selected code, expand the Refactor menu and select Extract Method....
Selecting Extract Method
In the Extract Method dialog box, name the new method MatchesOption and click OK.
Specifying the name for the extracted method
The selected code is then extracted into the MatchesOption method. The resulting code is shown in the following snippet.
private async Task<bool> StoreAsync(TriviaAnswer answer)
{
    this.db.TriviaAnswers.Add(answer);
    await this.db.SaveChangesAsync();

    var selectedOption = await this.db.TriviaOptions.FirstOrDefaultAsync(o =>
        MatchesOption(answer, o));

    return selectedOption.IsCorrect;
}

private static bool MatchesOption(TriviaAnswer answer, TriviaOption o)
{
    return o.Id == answer.OptionId
        && o.QuestionId == answer.QuestionId;
}
Press CTRL + S to save the changes.
Task 2 – Redeploying the Geek Quiz Application
You will now push the changes you made in the previous task to the repository, which will trigger a new deployment to the production environment. Then, you will troubleshoot an issue using the F12 development tools provided by Internet Explorer, and perform a rollback to the previous deployment from the Azure management portal.

Open the Git Bash console and execute the following commands to commit and push the changes.

git add .
git commit -m "Refactored answer check"
git push azure master
Pushing refactored code to Azure
Open Internet Explorer and navigate to your web app (e.g. http://<your-web-site>.azurewebsites.net). Log in using the previously created credentials.
Press F12 to launch the development tools, select the Network tab and click the Play button to start recording.
Starting network recording
Select any option of the quiz. You will see that nothing happens.
In the F12 window, the entry corresponding to the POST HTTP request shows an HTTP 500 result.
HTTP 500 error
Select the Console tab. An error is logged with the details of the cause.
Logged error
Locate the details part of the error. Clearly, this error is caused by the code refactoring you committed in the previous steps.
Details: LINQ to Entities does not recognize the method 'Boolean MatchesOption ....
Do not close the browser.
In a new browser instance, navigate to the Azure management portal and sign in using the Microsoft account associated with your subscription.
Select Websites and click the web app you created in Exercise 2.
Navigate to the Deployments page. Notice that all the commits performed are listed in the deployment history.
List of existing deployments
Select the previous commit and click Redeploy on the command bar.
Redeploying the previous commit
When prompted to confirm, click Yes.
When the deployment completes, switch back to the browser instance with your web app and press CTRL + F5.
Click any of the options. The flip animation will now take place and the result (correct/incorrect) will be displayed.
(Optional) Switch to the Git Bash console and execute the following commands to revert to the previous commit.
Note: These commands create a new commit that undoes all changes in the Git repository that were made in the bad commit. Azure will then redeploy the application using the new commit.
git revert HEAD --no-edit
git push azure master
Exercise 4: Scaling Using Azure Storage
Blobs are the simplest way to store large amounts of unstructured text or binary data such as video, audio and images. Moving the static content of your application to Storage helps to scale your application by serving images or documents directly to the browser.
In this exercise, you will move the static content of your application to a Blob container. Then you will configure your application to add an ASP.NET URL rewrite rule in the Web.config to redirect your content to the Blob container.
Task 1 – Creating an Azure Storage Account
In this task you will learn how to create a new storage account using the management portal.
Navigate to the Azure management portal and sign in using the Microsoft account associated with your subscription.
Select New | Data Services | Storage | Quick Create to start creating a new storage account. Enter a unique name for the account and select a Region from the list. Click Create Storage Account to continue.
Creating a new storage account
In the Storage section, wait until the status of the new storage account changes to Online in order to continue with the following step.
Storage Account created
Click on the storage account name and then click the Dashboard link at the top of the page. The Dashboard page provides you with information about the status of the account and the service endpoints that can be used within your applications.
Displaying the Storage Account Dashboard
Click the Manage Access Keys button in the navigation bar.
Manage Access Keys button
In the Manage Access Keys dialog box, copy the Storage Account Name and Primary Access Key as you will need them in the following exercise. Then, close the dialog box.
Manage Access Key dialog box
Task 2 – Uploading an Asset to Azure Blob Storage
In this task, you will use the Server Explorer window from Visual Studio to connect to your storage account. You will then create a blob container and upload a file with the Geek Quiz logo to the container.
Switch to the Visual Studio instance with the GeekQuiz solution from the previous exercise.
From the menu bar, select View and then click Server Explorer.
In Server Explorer, right-click the Azure node and select Connect to Azure.... Sign in using the Microsoft account associated with your subscription.
Connect to Azure
Expand the Azure node, right-click Storage and select Attach External Storage....
In the Add New Storage Account dialog box, enter the Account name and Account key you obtained in the previous task and click OK.
Add New Storage Account dialog box
Your storage account should appear under the Storage node. Expand your storage account, right-click Blobs and select Create Blob Container....
Create Blob Container
In the Create Blob Container dialog box, enter a name for the blob container and click OK.
Create Blob Container dialog box
The new blob container should be added to the Blobs node. Change the access permissions in the container to make the container public. To do this, right-click the images container and select Properties.
Images container properties
In the Properties window, set the Public Read Access to Container.
Changing public read access property
When prompted if you are sure you want to change the public access property, click Yes.
Microsoft Visual Studio warning
In Server Explorer, right-click in the images blob container and select View Blob Container.
View Blob Container
The images container should open in a new window and a legend with no entries should be shown. Click the upload icon to upload a file to the blob container.
Images container with no entries
In the Upload Blob dialog box, navigate to the Assets folder of the lab. Select the logo-big.png file and click Open.
Wait until the file is uploaded. When the upload completes, the file should be listed in the images container. Right-click the file entry and select Copy URL.
Copy blob URL
Open Internet Explorer and paste the URL. The following image should be shown in the browser.
logo-big.png image from Azure Blob Storage
Task 3 – Updating the Solution to Consume Static Content from Azure Blob Storage
In this task, you will configure the GeekQuiz solution to consume the image uploaded to Azure Blob Storage (instead of the image located in the web app) by adding an ASP.NET URL rewrite rule in the web.config file.
In Visual Studio, open the Web.config file inside the GeekQuiz project and locate the <system.webServer> element.
Add the following code to add an URL rewrite rule, updating the placeholder with your storage account name.
(Code Snippet - WebSitesInProduction - Ex4 - UrlRewriteRule)
<system.webServer>
  <rewrite>
    <rules>
      <rule name="redirect-images" stopProcessing="true">
        <match url="img/(.*)"/>
        <action type="Redirect" url="http://[YOUR-STORAGE-ACCOUNT].blob.core.windows.net/images/{R:1}"></action>
      </rule>
    </rules>
  </rewrite>
  ...
</system.webServer>
Note: URL rewriting is the process of intercepting an incoming Web request and redirecting the request to a different resource. The URL rewriting rules tell the rewriting engine when a request needs to be redirected, and where it should be redirected. A rewriting rule is composed of two strings: the pattern to look for in the requested URL (usually, using regular expressions), and the string to replace the pattern with, if found. For more information, see URL Rewriting in ASP.NET.
Press CTRL + S to save the changes. "Added URL rewrite rule in web.config file" git push azure master
Deploying update to Azure
Task 4 – Verification
In this task you will use Internet Explorer to browse the Geek Quiz application and check that the URL rewrite rule for images works and you are redirected to the image hosted on Azure Blob Storage.
Open Internet Explorer and navigate to your web app (e.g. http://<your-web-site>.azurewebsites.net). Log in using the previously created credentials.
Showing the Geek Quiz web app with the image
Press F12 to launch the development tools, select the Network tab and start recording.
Starting network recording
Press CTRL + F5 to refresh the web page.
Once the page has finished loading, you should see an HTTP request for the /img/logo-big.png URL with an HTTP 301 result (redirect) and another request for the http://[YOUR-STORAGE-ACCOUNT].blob.core.windows.net/images/logo-big.png URL with an HTTP 200 result.
Verifying the URL redirect
Exercise 5: Using Autoscale for Web Apps
Note: This exercise is optional, since it requires support for Web Load & Performance Testing which is only available for Visual Studio 2013 Ultimate Edition. For more information on specific Visual Studio 2013 features, compare versions here.
Azure App Service Web Apps provides the Autoscale feature for web apps running in Standard Mode. Autoscale lets Azure automatically scale the instance count of your web app depending on the load. When Autoscale is enabled, Azure checks the CPU of your web app once every five minutes and adds instances as needed at that point in time. If the CPU usage is low, Azure will remove instances once every two hours to ensure that the performance of your web app is not degraded.
In this exercise you will go through the steps required to configure the Autoscale feature for the Geek Quiz web app. You will verify this feature by running a Visual Studio load test to generate enough CPU load on the application to trigger an instance upgrade.
Task 1 – Configuring Autoscale Based on the CPU Metric
In this task you will use the Azure management portal to enable the Autoscale feature for the web app you created in Exercise 2.
In the Azure management portal, select Websites and click the web app you created in Exercise 2.
Navigate to the Scale page. Under the capacity section, select CPU for the Scale by Metric configuration.
Note: When scaling by CPU, Azure dynamically adjusts the number of instances that the app uses if the CPU usage changes.
Selecting to scale by CPU
Change the Target CPU configuration to 20-40 percent.
Note: This range represents the average CPU usage for your web app. Azure will add or remove instances to keep your web app in this range. The minimum and maximum number of instances used for scaling is specified in the Instance Count configuration. Azure will never go above or beyond that limit.
The default Target CPU values are modified just for the purposes of this lab. By configuring the CPU range with small values, you are increasing the chances to trigger Autoscale when a moderate load is placed on the application.
Changing the Target CPU to be between 20 and 40 percent
Click Save in the command bar to save the changes.
Task 2 – Load Testing with Visual Studio
Now that Autoscale has been configured, you will create a Web Performance and Load Test Project in Visual Studio to generate some CPU load on your web app.
Open Visual Studio Ultimate 2013 and select File | New | Project... to start a new solution.
Creating a new project
In the New Project dialog box, select Web Performance and Load Test Project under the Visual C# | Test tab. Make sure .NET Framework 4.5 is selected, name the project WebAndLoadTestProject, choose a Location and click OK.
Creating a new Web and Load Test project
In the WebTest1.webtest window, right-click the WebTest1 node and click Add Request.
Adding a request to WebTest1
In the Properties window of the new request node, update the Url property to point to the URL of your web app (e.g. http://<your-web-site>.azurewebsites.net).
Changing the Url property
In the WebTest1.webtest window, right-click WebTest1 and click Add Loop....
Adding a loop to WebTest1
In the Add Conditional Rule and Items to Loop dialog box, select the For Loop rule and modify the following properties.
- Terminating value: 1000
- Context Parameter Name: Iterator
- Increment Value: 1
Selecting the For Loop rule and updating the properties
Under the Items in loop section, select the request you created previously to be the first and last item for the loop. Click OK to continue.
Selecting the first and last items for the loop
In Solution Explorer, right-click the WebAndLoadTestProject project, expand the Add menu and select Load Test....
Adding a Load Test to the WebAndLoadTestProject project
In the New Load Test Wizard dialog box, click Next.
New Load Test Wizard
In the Scenario page, select Do not use think times and click Next.
Selecting not to use think times
In the Load Pattern page, make sure that the Constant Load option is selected. Change the User Count setting to 250 users and click Next.
Changing the user count to 250
In the Test Mix Model page, select Based on sequential test order and click Next.
Selecting the test mix model
In the Test Mix page, click Add... to add a test to the mix.
Adding a test to the test mix
In the Add Tests dialog box, double-click WebTest1 to add the test to the Selected tests list. Click OK to continue.
Adding the WebTest1 test
Back in the Test Mix page, click Next.
Completing the Test Mix page
In the Network Mix page, click Next.
Clicking next in the Network Mix page
In the Browser Mix page, select Internet Explorer 10.0 as the browser type and click Next.
Selecting the browser type
In the Counter Sets page, click Next.
Clicking Next in the Counter Sets page
In the Run Settings page, set the Load test duration to 5 minutes and click Finish.
Setting the load test duration to 5 minutes
In Solution Explorer, double-click the Local.settings file to explore the test settings. By default, Visual Studio uses your local computer to run the tests.
Note: Alternatively, you can configure your test project to run the load tests in the cloud using Visual Studio Online (VSO). VSO provides a cloud-based load testing service that simulates a more realistic load, avoiding local environment constraints like CPU capacity, available memory and network bandwidth. For more information about using VSO to run load tests, see this article.
Task 3 – Autoscale Verification
You will now execute the load test you created in the previous task and see how your web app behaves under load.
In Solution Explorer, double-click LoadTest1.loadtest to open the load test.
Opening LoadTest1.loadtest
In the LoadTest1.loadtest window, click the first button in the toolbar to run the load test.
Running the load test
Wait until the load test completes.
Note: The load test simulates multiple users that send requests to the web app simultaneously. When the test is running, you can monitor the available counters to detect any errors, warnings or other information related to your load test run.
Load test running
Once the test completes, go back to the management portal and navigate to the Scale page of your web app. Under the capacity section, you should see in the graph that a new instance was automatically deployed.
New instance automatically deployed
Note: It may take several minutes for the changes to appear in the graph (press CTRL + F5 periodically to refresh the page). If you do not see any changes, you can try the following:
Summary
In this hands-on lab, you learned how to set up and deploy your application to production web apps in Azure. You started by detecting and updating your databases using Entity Framework Code First Migrations, then continued by deploying new versions of your site using Git and performing rollbacks to the latest stable version of your site. Additionally, you learned how to scale your app using Storage to move your static content to a Blob container.
This article was originally created on July 16, 2014
http://www.asp.net/aspnet/overview/developing-apps-with-windows-azure/maintainable-azure-websites-managing-change-and-scale
JBoss Developer: Message List (most recent forum messages, Jive Engage, 2012-05-01)

Re: Does anyone know how I can directly use classes under sun.awt namespace in a JBoss 7.1 servlet? (TZ Zhang, 2012-05-01)
Peter, thanks for the link. Unfortunately, it doesn't seem to work for me.

Does anyone know how I can directly use classes under sun.awt namespace in a JBoss 7.1 servlet? (TZ Zhang, 2012-05-01)
I posted this question in stackoverflow a few days ago and repost it here. […] Thanks in advance for any help.
https://community.jboss.org/en/feeds/messages?rssUsername=ezhangtina2
On Mon, Jan 15, 2001 at 01:04:28AM +1100, Chris Leishman wrote:
> Hopefully someone can set me straight on this one...
> - The STL, as distributed with g++, does NOT use namespaces as it should. It appears to be an older (or modified) version of the STL from that on the SGI website. Why is this different or old?
[SNIP various other standards conformance things]

A lot of these things come down to the C++ standard being fairly new and GCC 2.95.2 being fairly old. At the time few if any compilers would have had complete support for the standard, G++ being no exception. Current GCC (which is starting to move towards a 3.0 release) is much more standards conformant, particularly WRT the standard library.

--
Mark Brown mailto:broonie@tardis.ed.ac.uk (Trying to avoid grumpiness) EUFS
https://lists.debian.org/debian-devel/2001/01/msg01486.html
Mantra 9.0 rendering properties
Overview
These properties control rendering in mantra. They are in the mantra9.0 folder in the list of properties available under the For rendering tab of the Edit parameter interface window. Select a node, and in the parameter editor click the Gear menu and choose Edit rendering properties to add or remove properties on a render driver, camera, object, shader, or properties node.
Note that all command line options to mantra (except -H and -P) now have property equivalents, so you can add them to the driver node instead of specifying them on the command line.
Properties
Global Properties
0– No VEX profiling.
1– Performance analysis of VEX execution.
2– Performance analysis of VEX execution with NAN detection.
viewphoton– View dependent photon map generation.
irradiance– Irradiance cache creation (partially implemented).
metropolis– Physically Based Rendering using metropolis integration (partially implemented).
- 2D .rat file caching
- 3D point cloud caching
- 3D volume caching
diffuse– Only diffuse paths are traced.
specular– Diffuse & specular paths are traced.
caustic– Diffuse, specular and caustic paths are traced.
all– All paths are traced.
null– No depth of field.
box– Box shaped depth of field.
radial– Radial shaped depth of field.
Mantra is able to render an object in UV space rather than 3D space. Only one object may be rendered in UV space. This is the name of the object.
The name of the attribute used in UV un-wrapping.
When performing network rendering, this specifies the number of tiles queued for each remote render.
The shadingfactor setting is a global multiplier on all shading rates in the scene. Each object has a shadingrate property. The shading rate used for an object is determined by:

shadingrate = object:shadingrate * renderer:shadingfactor
Mantra allows a separate shadingrate when ray-tracing is performed (compared with scanline rendering). The rayshadingfactor is a global multiplier on all ray-tracing shading rates in the scene. Each object has a rayshadingrate property. The ray-tracing shading rate used for an object is determined by:

rayshadingrate = object:rayshadingrate * renderer:rayshadingfactor
VEX profiling provides a tool to do performance analysis of shader execution. Turning on VEX profiling will affect performance of shading adversely. Especially when NAN detection is turned on.
When NAN detection is turned on, each instruction executed in VEX will be checked for invalid arithmetic operations. This will check for division by 0, numeric overflow, invalid operations. Errors like this will typically result in white or black pixels in the resulting image.
Determines the verbose level of the renderer. Higher values will cause mantra to print out more information about the rendering process. Typically a level of 1 or 2 is sufficient.
The number of threads used for rendering.
Mantra has several different methods of rendering an image. The rendering engine determines which algorithm will be used to generate the image. Though this token has an integer value, it is also possible to set it through a string value.
The number of megabytes used for the geometry cache.
The number of megabytes used for texture caching. This includes all texture caching.
Output progress in a format which can be processed by Pixar's Alfred render queue.
import mantra
import sys

tile = mantra.property("tile:ncomplete")[0]
if tile == 1:
    print mantra.property("renderer:name")
    print mantra.property("renderer:version")
Turns on compression when network rendering. This cuts down on network bandwidth significantly. This should not usually be changed.
Deep shadow map optimization. This value should not usually be changed.
Set
renderer:threadcount to the number of CPUs of the rendering machine.
Perform hidden surface removal. When hidden surface removal is disabled, all surfaces in the camera’s frustum will be rendered, regardless of whether they are occluded. This can impact render time significantly.
Whether to enable motion blur sampling for scanline rendering.
Whether to enable motion blur sampling for ray-traced rendering.
Whether to enable depth of field sampling.
Whether to enable ray-tracing. By disabling ray-tracing no ray-traced shadows, reflections, refractions, etc. will be performed.
Whether to enable the irradiance and occlusion VEX functions. This also controls whether irradiance caching will be enabled.
Specifies the method of caching for primary rays in PBR mode. This may be set to either “icache” or “photon”.
Specifies the method of caching for secondary rays in PBR mode. This may be set to either “icache” or “photon”.
Specifies the method of caching for caustic rays in PBR mode. This may be set to either “icache” or “photon”.
The cache file will store direct illumination as well as indirect illumination.
The type of path tracing to perform in PBR mode.
The maximum specular bounces allowed in PBR mode.
The maximum diffuse bounces allowed in PBR mode.
The number of photon samples in PBR mode.
The ray-tracing bias used when the PBR rendering engines are used.
The tile size of buckets used to render the image.
The number of x and y samples per-pixel.
A floating point value which adds noise to sampling patterns.
Normally, sub-pixel samples are filtered using the pixel filter defined on an image plane. When image:subpixel is true,.
This property will “lock” the sampling patterns from frame-to-frame. This minimizes the buzzing caused by noise when rendering animations. The noise is still present, but is more consistent frame-to-frame.
After shading of a surface, if the Of variable is less than this threshold, mantra will consider that the surface doesn’t exist and samples will be ignored.
When compositing layers of transparent surfaces, when the cumulative opacity of the transparent layers is more than this threshold, the pixel will be considered opaque, allowing mantra to ignore objects which are occluded.
When performing shading, mantra places no limits on values which may be returned. However, when performing Physically Based Rendering, it’s possible to get very large values for some rays. These extrema will cause color spikes which cannot be smoothed out without sending huge numbers of rays. The color limit is used to clamp the value of the Cf variable to avoid these spikes.
When renderer:renderengine is “photon”, this determines the photon map file for diffuse photons (irradiance).

When renderer:renderengine is “photon”, this determines the photon map file for caustic photons (specular bounces).

The number of photons sent when renderer:renderengine is set to “photon”.
The number of photons used in pre-filtering the global map.
The number of photons used in pre-filtering for the caustic map.
The search radius when pre-filtering photons for the global map.
The search radius when pre-filtering photons for the caustic map.
The probability for discarding photons during pre-filtering of the global map.
The probability for discarding photons during pre-filtering of the caustic map.
A boolean to determine whether pre-filtering should be performed on the global photon map.
A boolean to determine whether pre-filtering should be performed on the caustic photon map.
When camera:projection is “orthographic”, this determines the width of the projection.
Mantra is able to render using a non-flat projection plane. When the curvature is non-zero, ray-tracing will be used for primary rays. The curvature may either be greater or less than zero to mimic a wide angle or fish-eye lens.
Bokeh determines the “quality” of depth of field.
Geometry Properties
Most of the geometry properties are set using the ray_detail statement, though it is possible to explicitly set the values.
When the -v option is specified on the ray_detail line, this determines the time scale for velocity based motion blur.
Object Properties
Unless specified, each object property may be attached to a primitive by binding a material object to the primitive using the shop_materialpath attribute.
0– No coving.
1– Only displaced surfaces and sub-division surfaces will be coved.
2– All primitives will be coved.
none– No illumination
direct– Direct illumination (from light sources) only
indirect– Both direct and indirect illumination
none– No illumination.
direct– Direct illumination (from light sources) only.
indirect– Both direct and indirect illumination.
none– No illumination
direct– Direct illumination (from light sources) only
indirect– Both direct and indirect illumination
point
box
gauss
bartlett
blackman
catrom
hanning
mitchell
r– Read only.
w– Write only.
rw– Read and write.
The space or comma separated list of categories to which this object belongs.
Currently not supported for per-primitive material assignment (material SOP).
If this option is turned off, then the instance will not be rendered. The object’s properties can still be queried from within VEX, but no geometry will be rendered. This is roughly equivalent to turning the object into a transform space object.
Perform backface removal on surfaces.
Render polygons as a subdivision surface. The creaseweight attribute is used to perform linear creasing. This attribute may appear on points, vertices or primitives.
Render only the points of the geometry. Two attributes control the point primitives if they exist.
Render metaballs as volumes as opposed to surfaces.
When geometry has shaders defined on a per-primitive basis, this parameter will override these shaders and use only the object’s shader. This is useful when performing matte shading on objects.
Not supported for per-primitive material assignment (material SOP).
The maximum refraction bounces.
The maximum reflection bounces.
As rays propagate through the scene, they contribute less and less to the final color of the surface. When the contribution becomes less than the ray weight, no further rays will be sent. This is similar to the renderer:opacitylimit.
These parameters determine the set of objects which will be visible in reflection rays.
These parameters determine the set of objects which will be visible in refraction rays.
These parameters determine the set of light sources which are used to illuminate the surface.
When biases are used in VEX shaders, the bias can either be performed along the ray direction or along the surface normal. If this parameter is turned on, biasing will be along the surface normal (in the correct direction).
When shading grid edges, this bias will move the edge points slightly away from the actual grid edge. This improves bad shadowing artifacts which might be seen where two primitives meet. This is only used when micro-polygon rendering.
The maximum bounds that the displacement shader will move geometry. This is defined in “camera” space. Note, that the absolute value is used to determine the bounds.
With extreme displacement, it’s possible to get micro-polygons which are stretched far out of shape. Turning re-dicing on will cause the geometry to be re-diced after the geometry has been displaced. This will result in micro-polygons which have a much more uniform size and will most likely provide higher quality images. This is more costly since the displacement shader may be run multiple times during the projection process.
When splitting primitives, this ensures that primitives are split into grids containing a power of 2 micro-polygons. This minimizes patch cracks.
When running displacement shaders when micro-polygon rendering, this option will determine whether the VEX variable P is actually moved or whether bump mapping will be performed.
When running displacement shaders when ray-tracing, this option will determine whether the VEX variable P is actually moved or whether bump mapping will be performed.
When ray-tracing, it’s more efficient to pre-dice all the geometry in the scene, rather than caching portions of the geometry and re-generating the geometry on the fly. This is especially true when global illumination is being computed (since there is less coherency among rays). The object:raypredice property will cause this object to generate all the micro-polygons before the render begins. Ray tracing can be significantly faster at the cost of potentially huge memory requirements.
Currently not supported for per-primitive material assignment (material SOP).
When micro-polygon rendering, motion can either be sampled in screen space or in 3D space. Turning on object:perspective will cause sampling to occur in perspective projected space. This will not match ray-traced motion blur, which is always done in 3D space.
When micro-polygon rendering, shading normally occurs at micro-polygon vertices at the beginning of the frame. This option causes the vertex colors to be Gouraud shaded to determine the color for a sample.
When trying to match a background plate exactly, it’s desirable to eliminate any filtering which might occur on the plate. The Gouraud interpolation will cause a softening of the map, and thus, this option should be turned off.
The shading quality for scanline rendering. A higher quality will generate smaller micro-polygons meaning more shading and sampling will occur, but the quality will be higher.
The shading quality when ray-tracing.
Whether crack-prevention will be performed.
See the help for the Coving parameter in the Geometry container object.
Increasing the motion factor of an object will dynamically adjust the shading quality based on the rate of motion. This can significantly speed up renderings of rapid moving objects. It also affects depth of field and may improve speed of scenes with deep depth of focus.
This property controls the tessellation levels for nearly flat primitives. By increasing the value, more primitives will be considered flat and will be sub-divided less.
When PBR samples the diffuse illumination for the surface, this determines how the illumination should be computed.
When PBR samples the glossy illumination for the surface, this determines how the illumination should be computed.
When PBR samples the specular illumination for the surface, this determines how the illumination should be computed.
The step size used when rendering volumes. The size is specified in camera space.
Enable the use of a separate step size when computing shadows for volumes.
When computing shadows, use this step size instead of the object:volumestepsize property.
Some volume primitives (Image3D, Houdini Geometry Volumes) can use a filter during evaluation of volume channels. This specifies the filter.
This specifies the filter width for the object:filter property. The filter width is specified in number of voxels.
Enable irradiance cache generation for this object.
The file to store the irradiance cache. If multiple objects specify the same file, the cache file will contain samples from all objects.
The read-write mode for the file.
The default number of samples used to compute irradiance when the shader doesn’t specify it. In most cases, the shader does specify the value.
The error used to determine whether a new irradiance sample needs to be computed. Smaller values will lead to larger, but more accurate irradiance cache files.
The minimum and maximum number of pixels between nearby irradiance cache samples.
When ray-tracing VEX functions are invoked, send out additional rays to perform anti-aliasing of ray-traced effects. This will typically generate higher quality ray-tracing. The sampling is determined by image:minraysamples and image:maxraysamples.
The variance threshold to send out additional anti-aliasing rays. When near-by samples are very similar, fewer anti-aliasing rays will be sent out. When near-by samples are different, more rays will be sent.
The minimum number of ray-tracing samples used in variance anti-aliasing.
The maximum number of ray-tracing samples used when variance anti-aliasing.
Enable or disable raytrace motion blur for micropolygon rendering and photon map generation. By default, raytrace motion blur is disabled. This setting has no effect on the ray tracing rendering engines.
Atmosphere Properties
The category membership list for the fog object.
A pattern of lights (by name) which are used to illuminate the fog object.
A pattern of lights (by category) which are used to illuminate the fog object.
Light Properties
point– No area shape
line– Line light (unit line along x-axis)
grid– Grid light (unit square in XY plane)
disk– Circle shaped light (radius 0.5 in XY plane)
sphere– Sphere shaped light (radius 0.5)
environment– Sphere shaped light (infinite radius)
line – Only the X size is used
grid – The X & Y size of the grid
disk – The X & Y radii of the circle
sphere – The average of the sizes is used as the radius
environment – Ignored
The category membership list for the light object
The pattern of object names which are considered for ray-traced shadows.
The pattern of object categories which are considered for ray-traced shadows.
When true, the light will not contribute to diffuse() calls.
When true, the light will not contribute to specular() calls.
When true, the light is only used on secondary GI bounces.
The shape of an area light.
The size of the area light. The sizes are interpreted slightly differently for each shape.
The number of illumination samples to be used for the light source.
For the environment light, whether the light source represents the full sphere or the upper hemisphere.

Only used for the environment light. This determines whether the light:areamap parameter will be used.
Only used for the environment light. This specifies an environment map which is used for illuminating the scene. The map may be an HDRI map.
See how to create an environment/reflection map.
If this is true, the map attached to the environment area light will be analyzed to determine the best sample locations for illumination points. This should be turned on for maps which have very sharp discontinuities in illumination levels. For maps which are fairly uniform in color, the toggle should be turned off. With the toggle turned off, the sphere geometry will be used (which provides less noise for smooth maps).
When sending photons from this light source, this is the category expression to determine which objects will receive photons.
Image output properties
These properties control the conversion of mantra pixels into the chosen output file format.
Generally these are only useful on render drivers and cameras. Properties such as vm_image_rat_makemipmaps could be applied to light sources and objects to control the generation of mipmaps in shadow and reflection/environment maps, but it’s probably not a good idea.

These Houdini properties will be included in the IFD as plane properties named similarly to the format options described in the output of the iconvert utility, for example TIFF.compression.
The name of the image creator. By default uses the current user’s log in name.
Houdini, TIFF, PNG formats
A text comment to include in the output file.
Houdini, OpenEXR, PNG formats
The name of the computer where this image was created.
Houdini format
Enable generation of mipmaps when creating RAT files.

JPEG quality, integer from 10 to 100.

Color space for Cineon format images. Possible values are "log" (unconverted) and "lin" (linear).

Filename of a Look Up Table file to use for display of Cineon images in MPlay.

White point for Cineon format images, integer from 0 to 1023.

White point for Cineon format images, from 0.001 to 4.0.

Compression type for EXR format images. Possible values are "none", "rle", "zips", "zip", "piz", "pix".

Storage method for EXR format images. Possible values are "scan" (scanline) and "tile".

Whether to pre-multiply PNG format images. Possible values are "premult" or "unpremult".

How MPlay renders the image. Possible values are "middle" (middle out), "top" (top down), or "bottom" (bottom up).

Display gamma for MPlay, from 0.0 to 4.0.
Filename of a Look Up Table file for MPlay.
Houdini-only properties
Replaces any shaders with the default matte shader. This property is not available in IFD, because it works by changing what is written to the IFD file.
IFD-only properties
These properties exist in IFD scene description files, but do not have equivalents in the Houdini property UI.
Implicit properties
These properties are only meaningful in IFD:
Computed properties
The following properties are computed in scripts during the mapping process. They do not have directly equivalent Houdini properties.
0– Both even & odd fields.
1– Odd field.
2– Even field.
filename (default = “”) – The filename to output the deep shadow information.

ofstorage (default = “real16”) – The storage format for Of. The value should be one of…
- real16 – 16 bit floating point values.
- real32 – 32 bit floating point values.
- real64 – 64 bit floating point values.

pzstorage (default = “real32”) – The storage format for Pz. The value should be one of…
- real16 – 16 bit floating point values.
- real32 – 32 bit floating point values.
- real64 – 64 bit floating point values.

ofsize (default = 3) – The number of components to store for opacity. This should be either 1 for monochrome (stored as the average value) or 3 for full RGB color.

compression (default = 4) – Compression value between 0 and 10. Used to limit the number of samples which are stored in a lossy compression mode.

zbias (default = 0.001) – Used in compression to “merge” samples which are closer than some threshold.

depth_mode (default = “nearest”) – Used in compression to determine whether to keep the nearest, the farthest or the midpoint of samples. The possible choices for depth_mode are…
- nearest – Choose the smallest Pz value.
- farthest – Choose the largest Pz value.
- midpoint – Choose the midpoint of Pz values.

depth_interp (default = “discrete”) – One of…
- discrete – Each depth sample represents a discrete surface.
- continuous – Each depth sample is part of a continuum (i.e. volume).
uniform [-s size] – Generates uniform divisions. The -s option can be used to scale the size of the micro-polygons. A larger scale will result in smaller micro-polygons.

raster – Measures geometry in screen space. This is roughly equivalent to the “nonraster -z 0” measurer, so is deprecated in favor of that approach.

nonraster [-z importance] – This measures geometry in 3D. The z-importance can be used to bias the z-component of the surface. A z-importance of 0 means output image resolution.
The video field to render.

The image:deepresolver property specifies the resolver and any arguments to the resolver.
Options:
Example:
shadow filename test.rat ofsize 1
The pixel aspect ratio of the output image.
The camera’s projection model. This may be one of
perspective,
orthographic,
polar, or
cylindrical.
Near and far clipping planes for the projection.
Ratio of the focal length to the aperture of the camera. It is used to determine the field of view of the camera.
The name associated with the geometry. This is the name which instance objects use to access the geometry.
Materials may be specified on a per-primitive basis. However, since materials refer to SHOP paths, it’s sometimes important to be able to resolve relative paths.
When true, the object will not be rendered by primary rays. Only secondary rays will hit the object.
The surface shader attached to the object.
The displacement shader attached to the object.
When primitives are rendered in mantra, they are split into smaller primitives if they are “too big” to be rendered. The primitives are measured to determine if they are too big using the measurer.
There are several different measurers available, each of which takes some optional arguments…
See object:measure above.
The VEX shader used to shade a fog object.
The shader used to compute the illumination of the light source.
The shader used to compute occlusion of light from the light source.
Used when computing NDC (Normalized Device Coordinates) from within VEX shaders.
Used when computing NDC (Normalized Device Coordinates) from within VEX shaders.
Used when computing NDC (Normalized Device Coordinates) from within VEX shaders.
Whether object shaders will use shaders defined by the photon context or by the surface context when shading.
Misc. properties
When Houdini starts a render, it creates a “port” which allows mplay or other applications to communicate information back. This is the port number that Houdini opened. This setting will be deprecated in the future and be passed as an image device option.
http://www.sidefx.com/docs/houdini9.5/props/mantra9_0
Start Lecture #1

I start at 0.

Dale, Joyce, and Weems, Object-Oriented Data Structures Using Java. It is available at the NYU Bookstore. The weighting will be approximately 30%*Labs + 30%*Midterm + 40%*Final.
The following is supposed to work; let's see.

mailx -s "the subject goes here" gottlieb@nyu.edu
words go here and go in the msg.
The next line causes a file to be placed in the msg (not attached, simply copied in)
~r filename
.
Good methods for obtaining help include
Your labs must be written in Java.
Incomplete
The rules for incompletes and grade changes are set by the school and not the department or individual faculty member.
The rules set by CAS can be found here.
The university-wide policy is described here.
In 101 our goal was to learn how to write correct programs in Java.
Step 1 in that goal…
One question will be how to measure the performance. To find the maximum, loop over N numbers as shown in the sketch below. Minimum is the same simple idea.
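The loop itself appeared in a figure that is not reproduced here; the following is a minimal Java sketch of the idea (the method name findMax is mine).

// Find the largest of N numbers by examining each one exactly once.
// Assumes a.length > 0.
public static int findMax(int[] a) {
    int max = a[0];                // best value seen so far
    for (int i = 1; i < a.length; i++)
        if (a[i] > max)            // a[i] beats the current champion
            max = a[i];
    return max;
}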
Both of these are essentially optimal: To find the max, we surely need to examine each number so the work done will be proportional to N and the program above is proportional to N.
Homework: Write a Java method that computes the 2nd largest. Hint: calculate both the largest and 2nd largest; return the latter.
This second method also works for any k.
Both these methods are correct (the most important criterion) and simple. The trouble is that neither method has optimal performance. For large N, say 10,000,000 and k near N/2, both methods take a long time; whereas a more complicated algorithm is very fast.
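To make the simple-but-slow idea concrete: one method that works for any k is to sort a copy of the array and index from the end. This sketch is mine, not from the notes; it illustrates the straightforward approach, not the faster algorithm alluded to above.

import java.util.Arrays;

public class KthLargest {
    // Return the kth largest element (k = 1 gives the maximum).
    // Sorting costs time proportional to N log N, which is fine for
    // small N but not optimal for huge arrays.
    public static int kthLargest(int[] a, int k) {
        int[] copy = Arrays.copyOf(a, a.length); // leave the caller's array alone
        Arrays.sort(copy);                       // ascending order
        return copy[copy.length - k];            // kth from the end
    }
}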
Another example is given in section 5.3 where we choose a circular array implementation for queues rather than a simpler array implementation because the former is faster.
In 101, we had objects of various kinds, but basically whenever we had a great deal of data, it was organized as an array (of something). An array is an example of a data structure. As the course title suggests, we will learn a number of other data structures this semester.
Sometimes these data structures will be more complicated than simpler ones you might think of. Their compensating advantage is that they have higher performance.
Also we will learn techniques useful for large programs. Although we will not write any large programs, such programs are crucially important in practice. For example, we will use Java generics and interfaces to help specify the behavior we need to implement.
Client-server systems (now often referred to as cloud computing) are of increasing importance. The idea is that you work on your client computer with assistance from other server systems. We will do some of this. In particular, we will use the computer i5.nyu.edu as a server.

We need to confirm that everyone has an account on i5.nyu.edu.
Homework: As mentioned previously, those of you with Windows clients, need to get putty and winSCP. They are definitely available on the web at no cost. (The last time I checked, winSCP available through ITS and putty was at).
I will do a demo of this software next class. Please do install it by then so you can verify that it works for you.
macOS users should see how to have their terminal run in plain (ascii) text mode.
Read.
Read.
Read.
We have covered this in 101, but these authors tie the attributes of object orientation to the goals of quality software just mentioned.
Read
This section is mostly a quick review of concepts we covered in 101. In particular, we covered.
Most of you learned Java from Liang's book, but any Java text should be fine. Another possibility is my online class notes from 101.
The book does not make clear that the rules given are for data fields and methods. Visibility modifiers can also be used on classes, but the usage and rules are different.
I have had difficulty deciding how much to present about visibility. The full story is complicated and more than we need, but I don't want to simplify it to the extent of borderline inaccuracy. I will state all the possibilities, but describe only those that I think we will use.
A .java file consists of 1 or more classes. These classes are called top level since they are not inside any other classes.

Inside a class we can define methods, (nested) classes, and fields. The latter are often called data fields and sometimes called variables. I prefer to reserve the name variables for those declared inside methods.
class TopLevel1 {
    public    int pubField;
    private   int pvtField;
    protected int proField;
              int defField;

    public    class PubNestedClass {}
    private   class PriNestedClass {}
    protected class ProNestedClass {}
              class DefNestedClass {}
}

public class TopLevel2 {
    public static void main (String[] arg) {}
    private static void pvtMethod() {}
    protected static void proMethod() {}
    static void defMethod() {}
}
Top-level classes have two possible visibilities: public and package-private (the default when no visibility keyword is used). Each .java file must contain exactly one top-level public class, which corresponds to the name of the file.
Methods, fields, and nested classes each have four possible visibilities: public, private, protected, and package-private (the default).
The two visibilities for top-level classes, plus the four visibilities for each of the three other possibilities give a total of 14 possible visibilities, all of which are illustrated on the right.
Question: What is the filename?
For now we will simplify the situation and write .java files as shown on the right.
// one package statement
// import statements
public class NameOfClass {
    // private fields and
    // public methods
}
An object is an instantiation of a class. Each object contains a copy of every instance (i.e., non-static) field in the class. Conceptually, each object also contains a copy of every instance method. (Some optimizations do occur to reduce the number of copies, but we will not be concerned with that.)
In contrast all the objects instantiated from the same class share class (i.e., static) fields and methods.
This is a huge deal in Java. It is needed to understand the behavior of objects and arrays. Unfortunately it is a common source of confusion.
Start Lecture #2
Remark: If your last name begins with A-K your grader is Arunav Borthakur; L-Z have Prasad Kapde. This determines to whom you email your lab assignments.
Remark: Chen has kindly created a wiki page on using WinSCP and Putty. See.
Lab: Lab 1 parts 1 and 2 are assigned and are due 13 September 2012. Explain in class what is required.
In all Java systems, you run a program by invoking the
Java Virtual Machine (aka JVM).
The primitive method to do this, which is the only one I use, is to
execute a program named java.
The argument to java is the name of the class file (without the .class extension) containing the main method. A class file is produced from a Java source file by the Java compiler javac. Each .java file is often called a compilation unit since each one is compiled separately.
If you are using an
Integrated Development Environment
(IDE), you probably click on some buttons somewhere, but the result
is the same: The Java compiler javac is given each .java
file as a compilation unit and produces class files; the JVM
program java is run on the class file containing the main
method.
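For concreteness, suppose a file Hello.java (a hypothetical name) contains a public class Hello with a main method. Then the whole compile-and-run cycle is just the two commands:

javac Hello.java
java Hello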
Inheritance was well covered in 101; here I add just a few comments.
public class D extends B { ... }
D objects are "bigger" than B objects: a D object contains every field of a B object plus any new fields that D declares.
In contrast, the set of D objects is "smaller" than the set of B objects: every D is a B, but not conversely.
Java (unlike C++) permits only what is called
single inheritance.
That is, although a parent can have several children, a child can
have just one parent (a biologist would call this asexual
reproduction).
Single inheritance is simpler than multiple inheritance.
For one thing, single inheritance implies that the classes form a
tree, i.e., a strict hierarchy.
For example, on the right we see a hierarchy of geometric concepts.
Note that a point
is a geometric object and a rhombus
is a quadrilateral.
Indeed, for every line in the diagram, the lower concept "is a(n)" upper concept.
This so-called ISA relation suggests that the lower concept should
be a class derived from the upper.
Indeed the handout shows classes with exactly these inheritances. The Java programs can be found here.
You might think that Java classes form a forest not a tree since you can write classes that are not derived from any other. One example would be ReferenceVsValueSemantics above. However, this belief is mistaken. Any class definition that does not contain the extends keyword actually extends the built-in class Object. Thus Java classes do indeed form a tree, with Object as the root.
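To make the last point concrete, the following two (hypothetical) declarations are equivalent; when the extends clause is omitted, the compiler supplies extends Object automatically.

class Foo { }                  // implicitly extends Object
class Foo extends Object { }   // the equivalent explicit form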
(I reordered the material in this section.)
Java permits grouping a number of classes together to form a package, which gives the following advantages.
package packageName;
The syntax of the package statement is trivial as shown on the right. The only detail is that this statement must be the first (nonblank, noncomment) line in the file.
import packageName.ClassName;
import packageName.*;
To access the contents of another package, you must either import the specific class, import the entire package (with *), or write fully qualified names.
import java.util.Scanner;
public class Test {
    public static void main(String[] arg) {
        int x;
        Scanner getInput = new Scanner(System.in);
        x = getInput.nextInt();
    }
}

public class Test {
    public static void main(String[] arg) {
        int x;
        java.util.Scanner getInput = new java.util.Scanner(System.in);
        x = getInput.nextInt();
    }
}
For example, recall from 101 the important Scanner class used to read free-form input. This class is found in the package java.util.
To read an int, one creates a Scanner object (I normally call mine getInput) and then invokes the nextInt() method in this object.
The two examples on the right illustrate the first and third procedures mentioned above for accessing a package's contents. (For the second procedure, simply change java.util.Scanner to java.util.*.)
Note that both the Scanner class and the nextInt() method have public visibility. How do I know this and where can you go to check up on me?
The key is to go to java.sun.com. This will send you to an Oracle site, but Java is a Sun product; Oracle simply bought Sun. You want Java SE-->documentation-->API.
Let's check and see if I was right that Scanner and nextInt() are public.
Homework: How many methods are included in the Scanner class? How many of them are public?
The name of a package constrains where its files are placed in the filesystem. First, assume a package called packageName consisting of just one file ClassName.java (this file defines the public class ClassName). Although not strictly necessary, we will place this .java file in a directory called packageName.
javac packageName/ClassName.java
To compile this file we go to the parent directory of packageName and type the command on the right. (I use the Unix notation, / for directories; Windows uses \.)
Unlike the situation above, most packages have multiple .java files.
javac packageName/*.java
If the package packageName contains many .java files, we place them all in a directory also called packageName and compile them by going to the parent of packageName and executing the command on the right.
Just as the .java files for a package named packageName go in a directory named packageName, the .java files for a package named first.second go in the subdirectory second of a directory named first.
javac first/second/*.java
To compile these .java files, go to the parent of first and type the command on the right.
javac first/second/third/*.java
Similarly a package named first.second.third would be compiled by the command on the right.
The simplest situation is to leave the generated .class files in the same directory as the corresponding .java files and execute the java command from the same directory that you executed the javac command, i.e., the parent directory.
I show several examples on the right.
java packageName/ClassName
java testPackage/Test
java bigPackage/M
java first/second/third/M
The examples above assume that there is just one package, we keep the .class files in the source directory, and we only include classes from the Java library. All of these restrictions can be dropped by using the CLASSPATH environment variable.
public class Test {
    public static void main (String[] args) {
        C c = new C();
        c.x = 1;
    }
}

public class C {
    public int x;
}
The basic question is: where do the compiler and JVM look for .java and .class files?
First consider the 2-file program on the right. You can actually compile this with a simple javac Test.java. How does the compiler find the definition of the class C?
The answer is that it looks in all .java files in the current directory. We use CLASSPATH if we want the compiler to look somewhere else.
Next consider the familiar line import java.util.Scanner;. How does the compiler (and the JVM) find the definition of the Scanner class? Clearly java.util is a big clue, but where do they look for java.util?
The answer is that they look in the
system jar file
(whatever that is).
We use CLASSPATH if we want it to look somewhere else in
addition.
Many of the Java files used in the book are available on the web (see page 28 of the text for instructions on downloading). I copied the directory tree to /a/dale-datastructures/bookFiles on my laptop.
In the subdirectory ch02/stringLogs we find all the .java files for the package ch02.stringLogs.
One of these files is named LLStringNode.java and contains the definition of the corresponding class. Hopefully the name signifies that this class defines a node for a linked list of strings.
import ch02.stringLogs.LLStringNode;
public class DemoCLASSPATH {
    public static void main (String[] args) {
        LLStringNode lLSN = new LLStringNode("Thank you CLASSPATH");
        System.out.println(lLSN.getInfo());
    }
}
I then went to directory java-progs not related to /a/dale-datastructures/bookFiles and wrote the simple program on the right.
A naive attempt to compile this with javac DemoCLASSPATH.java fails since it cannot find the class LLStringNode in the subdirectory ch02/stringLogs (indeed java-progs has no such directory).
export CLASSPATH=/a/dale-datastructures/bookFiles:.
However once CLASSPATH was set correctly, the compilation succeeded. On my (gnu-linux) system the needed command is shown on the right.
Homework: 28. (When just a number is given it refers to the problems at the end of the chapter in the textbook.)
Data structures describe how data is organized.
Some structures give the physical organization, i.e., how the data is stored on the system. For example, is the next item on a list stored in the next higher address, or does the current item give the location explicitly, or is there some other way?
Other structures give the logical organization of the data. For example is the next item always larger than the current item? For another example, as you go through the items from the first, to the next, to the next, ..., to the last, are you accessing them in the order in which they were entered in the data structure?
Well studied in 101.
We will study linked implementations a great deal this semester. For now, we just comment on the simple diagram to the right.
(The null reference ending the list is often drawn using the "ground" symbol from electrical engineering.)
Start Lecture #3
Remark: Chen will have an office hour Wednesdays 5-6pm in CIWW 412.
For these structures, we don't describe how the items are stored, but instead which items can be directly accessed.
The defining characteristic of a stack is that you can access or remove only the remaining item that was most recently inserted. We say it has last-in, first-out (LIFO) semantics. A good real-world example is a stack of dishes, e.g., at the Hayden dining room.
In
Java-speak we say a stack object has three public
methods: top() (which returns the most recently inserted
item remaining on the list), pop() (which removes the
most recently inserted remaining item), and push() (which
inserts its argument on the top of the list).
Many authors define pop() to both return the top element and remove it from the stack. That is, many authors define pop() to be what we would call top();pop().
The defining characteristic of a queue is that you can remove only the remaining item that was least recently inserted. We say it has first-in, first-out (FIFO) semantics. A good example is an orderly line at the bank.
In java-speak, we have enqueue() (at the rear) and dequeue() (from the front). (We might also have accessors front() and rear().)
Homework: Customers at a bank with many tellers form a single queue. Customers at a supermarket with many cashiers form many queues. Which is better and why?
Here the first element is the smallest, each succeeding element is larger (or equal to) its predecessor, and hence the last element is the largest. It is also possible for the elements to be in the reverse order with the largest first and the smallest last.
One natural implementation of a sorted list is an array with the elements in order; another is a linked list, again with the elements in order. We shall learn other structures for sorted lists, some of which provide searching performance much higher than either an array or simple linked list.
The structures we have discussed are one dimensional (except for higher-dimensional arrays). Each element except the first has a unique predecessor, each except the last has a unique successor, and every element can be reached by starting at one end and heading toward the other.
Trees, for example the one on the right, are different. There is no single successor; pictures of trees invariably are drawn in two dimensions, not just one.
Note that there is exactly one path between any two nodes.
Trees are a special case of graphs. In the latter, we drop the requirement of exactly one path between any two nodes. Some nodes may be disconnected from each other and others may have multiple paths between them.
A data structure is a specific method of organizing the data, either logically or physically.
A reference is a pointer to (or the address of) an object; references are typically drawn as arrows.
We covered this topic in section 1.3, but it is such a huge deal in Java that we will cover it again here (using the same example). Reference semantics determine the behavior of objects and arrays. Unfortunately these semantics are a common source of confusion.
Continuing with the last example, we note that c1 and c2 refer to the same object. They are sometimes called aliases. As the example illustrated, aliases can be confusing; it is good to avoid them if possible.
What about the top ellipse (i.e., object) in the previous diagram for which there is no longer a reference? Since it cannot be named, it is just wasted space that the programmer cannot reclaim. The technical term is that the object is garbage. However, there is no need for programmer action or worry; the JVM detects garbage and collects it automatically.
The technical terminology is that Java has a built-in garbage collector.
Thus Java, like most modern programming languages, supports the run-time allocation and deallocation of memory space; the former is via new, the latter is performed automatically. We say that Java has dynamic memory management.
// Same class C as above
public class Example {
    public static void main (String[] args) {
        C c1 = new C(1);
        C c2 = new C(2);
        C c3 = c1;
        for (int i=0; i<1000; i++) {
            c3 = c2;
            c3 = c1;
        }
    }
}
Execute the code on the right in class. Note that there are two objects (one with x==1 and one with x==2) and three references (the ones in c1, c2, and c3).
As execution proceeds the number of references to the object with x==1 changes from 1 to 2 to 1 to 2 ... . The same pattern occurs for the references to the other object.
Homework: Can you
write a program where the number of references to a fixed object
changes from 1 to 2 to 3 to 2 to 3 to 2 to 3 ... ?
Can you write a program where the number of references to a fixed object changes from 1 to 0 to 1 to 0 to 1 ... ?
In Java, when you write r1==r2, where the r's are variables referring to objects, the references are compared, not the contents of the objects. Thus r1==r2 evaluates to true if and only if r1 and r2 are aliases.
We have seen equals() methods in 101 (and will see them again this semester). Some equals() methods, such as the one in the String class, do compare the contents of the referred-to objects. Others, for example the one in the Object class, act like == and compare the references.
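A tiny (hypothetical) illustration of the difference:

String s1 = new String("abc");
String s2 = new String("abc");
System.out.println(s1 == s2);       // false: two distinct objects
System.out.println(s1.equals(s2));  // true: String's equals() compares contents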
First we need some, unfortunately not standardized, terminology. Assume in your algebra class the teacher defined a function f(x)=x+5 and then invoked it via f(12). What would she call x and what would she call 12? As mentioned above, usage differs.
I call x (the item in the callee's definition) a parameter and I call 12 (the item in the caller's invocation) an argument. I believe our text does as well.
Others refer to the item in the callee as the
formal parameter and the item in the caller as the
actual parameter.
Still others use
argument and
parameter
interchangeably.
After settling on terminology, we need to understand Java's parameter passing semantics. Java always uses call-by-value semantics. This means the parameter receives a copy of the argument's value; assignments to the parameter do not affect the caller's argument.
public class DemoCBV {
    public static void main (String[] args) {
        int x = 1;
        CBV cBV = new CBV(1);
        System.out.printf("x=%d cBV.x=%d\n", x, cBV.x);
        setXToTen(x, cBV);
        System.out.printf("x=%d cBV.x=%d\n", x, cBV.x);
    }
    public static void setXToTen(int x, CBV cBV) {
        x = 10;
        cBV.x = 10;
    }
}

public class CBV {
    public int x;
    public CBV (int x) { this.x = x; }
}
Be careful when the argument is a reference. Call-by-value still applies to the reference but don't mistakenly apply it to the object referred to. For example, be sure you understand the example on the right.
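Tracing the example (worth verifying when we run it in class), the output should be

x=1 cBV.x=1
x=1 cBV.x=10

The assignment x=10 in setXToTen() changes only the callee's copy of the int. The parameter cBV is also a copy, but it is a copy of the reference, so cBV.x=10 modifies the one shared object.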
Well covered in 101.
There is nothing new to say: an array of objects is an array of
objects.
You must remember that arrays are references and objects are
references, so there can be
two layers of arrows.
Also remember that you will need a new to create the
array and another new to create each
object.
The diagram on the right shows the result of executing
cBV = new CBV(7);
CBV[] arrCBV = new CBV[5];
arrCBV[0] = cBV;
arrCBV[2] = new CBV(4);
Note that only two of the five array slots have references to objects; the other three are null.
Lab: Lab 1 part 3 is assigned and is due in 7 days.
As discussed in 101, Java does not, strictly speaking, have 2-dimensional arrays (or 3D, etc). It has only 1D arrays. However each entry in an array can itself be an array.
On the right we see a two dimensional array M with two rows and three columns that can be produced by executing
int[][] M;
M = new int [2][3];
Note that M, as well as each M[i], is a reference. Each M[i] is a reference to an array of int's (a row); M itself is a reference to an array whose entries are those references.
2d-arrays such as M above in which all rows have the same length are often called matrices or rectangular arrays and are quite important.
Java however does support so-called
ragged arrays such as
R shown on the right.
That example can be produced by executing
int [][] R;
R = new int [2][];
R[0] = new int [3];
R[1] = new int [1];
Homework: Write a Java program that creates a 400 by 500 matrix with entry i,j equal to i*j.
The idea is to approximate the time required to execute an algorithm (or a program implementing the algorithm) independent of the computer language (or the compiler / computer). Most importantly we want to understand how this time grows as the problem size grows.
For example, suppose you had an unordered list of N names and wanted to search for a name that happens not to be on the list.
You would have to test each of the names. Without knowing more, it is not possible to give the exact time required. It could be 3N+4 seconds or 15N+300 milliseconds, or many other possibilities.
But it cannot be 22 seconds or even 22log(N) seconds. You just can't do it that fast (for large N).
If we look at a specific algorithm, say the obvious check one entry at a time, we can see the time is not 5N^2+2N+5 milliseconds. It isn't that slow (for large N).
Indeed that obvious algorithm takes AN+B seconds, we just don't know A and B.
We will crudely approximate the above analysis by saying the algorithm has (time) complexity (or takes time) O(N).
That last statement is really sloppy. We just did three analyses. First we found that the complexity was greater than 22log(N), and then we found that the complexity was less than 5N^2+2N+5, and finally we asserted the complexity was AN+B. The big-Oh notation strictly speaking covers only the second (upper-bound) analysis; but is often used, e.g., by the authors, to cover all three.
The rest of this optional section is from a previous incarnation of 102, using a text by Weiss.
We want to capture the concept of comparing function growth where we ignore additive and multiplicative constants. For example, we want to consider 4N^2-500N+1 and N^2 to be equivalent.
It is difficult to decide how much to emphasize algorithmic analysis, a rather mathematical subject. On the one hand
On the other hand
As a compromise, we will cover it, but only lightly (as does our text).
For the last few years we used a different text, with a more mathematical treatment. I have left that material at the end of these notes, but, naturally, it is not an official part of this semester's course.
As mentioned above we will be using the
big-O
(or
big-Oh) notation to indicate we are giving an
approximation valid for large N.
(I should mention that the O above is really the Greek Omicron.)
We will be claiming that, if N is large (enough), 9N^30 is insignificant when compared to 2^N.
Most calculators cannot handle really large numbers, which makes the above hard to demo.
import java.math.BigInteger;
public class DemoBigInteger {
    public static void main (String[] args) {
        System.out.println(
            new BigInteger("5").multiply(
                new BigInteger("2").pow(1000)));
    }
}
Fortunately the Java library has a class BigInteger that allows arbitrarily large integers, but it is a little awkward to use. To print the exact result for 5*2^1000 you would write a program something like the one shown on the right.
To spare you the anxiety of wondering what is the actual answer, here it is (I added the line breaks manually).
53575430359313366047421252453000090528070240585276680372187519418517552556246806 12465991894078479290637973364587765734125935726428461570217992288787349287401967 28388741211549271053730253118557093897709107652323749179097063369938377958277197 30385314572855982388432710838302149158263121934186028340346880
Even more fortunately, my text editor has a built in infinite precision calculator so we can try out various examples. Note that ^ is used for exponentiation, B for logarithm (to any base), and the operator is placed after the operands, not in between.
Do the above example using emacs calc.
The following functions are ordered from smaller to larger:
N^-3  N^-1  N^-1/2  N^0  N^1/3  N^1/2  N^1  N^3/2  N^2  N^3  ...  N^10  ...  N^100
Three questions that remain (among many others) are:
1. Is each function in the list significantly larger than its predecessor?
2. Where does log(N) fit in the list?
3. Where does 2^N fit in the list?
To answer the first question we will (somewhat informally) divide one function by the next. If the quotient tends to zero when N gets large, the second is significantly larger than the first. If the quotient tends to infinity when N gets large, the second is significantly smaller than the first.
Hence in the above list of powers of N, each entry is significantly larger than the preceding. Indeed, any change in exponent is significant.
The answer to the second question is that log(N) comes after N^0 and before N^P for every fixed positive P. We have not proved this.
The answer to the third question is that 2^N comes after N^P for every fixed P. (The 2 is not special. For every fixed Q>1, Q^N comes after every fixed N^P.) Again, we have not proved this.
Do some examples on the board.
Read. The idea is to consider two methods to add 1+2+3+4+...+N
These have different big-Oh values.
So a better algorithm has better (i.e., smaller) complexity.
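A minimal sketch of the two methods (my code, not the book's). The loop performs N additions, so it is O(N); the closed-form formula N(N+1)/2 takes a fixed number of operations, so it is O(1).

public class SumDemo {
    // O(N): add the numbers one at a time.
    static long sumLoop(long n) {
        long sum = 0;
        for (long i = 1; i <= n; i++)
            sum += i;
        return sum;
    }
    // O(1): use the closed-form formula.
    static long sumFormula(long n) {
        return n * (n + 1) / 2;
    }
    public static void main(String[] args) {
        System.out.println(sumLoop(100));     // 5050
        System.out.println(sumFormula(100));  // 5050
    }
}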
Imagine first that you want to calculate (0.3)^100. The obvious solution requires 99 operations.
In general, for a fixed b≠0, raising b to the Nth power using the natural algorithm requires N-1=O(N) operations.
I describe the algorithm for the special case of .3^100 but it works in general for b^N. The basic idea is to write 100 in binary and only use those powers. On the right is an illustration.
Power of 2:       1    2     4     8     16     32     64     128
Binary digit:     0    0     1     0     0      1      1      0
Repeated square:  .3   .3^2  .3^4  .3^8  .3^16  .3^32  .3^64

.3^100 = .3^4 × .3^32 × .3^64
Since there were just 4 large steps (the four rows above: the powers of 2, the binary digits of 100, the repeated squares, and the final product), each of which requires O(log(N)) operations, the entire algorithm also has complexity O(log(N)).
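The table above translates directly into code. Here is a minimal sketch (mine, not the book's) of the binary method for b^N with a nonnegative int N:

// Compute b^n with O(log(n)) multiplications.
static double power(double b, int n) {
    double result = 1.0;
    double square = b;        // successively b, b^2, b^4, b^8, ...
    while (n > 0) {
        if ((n & 1) == 1)     // the current binary digit of n is 1
            result *= square;
        square *= square;
        n >>= 1;
    }
    return result;
}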
Homework: The example above used N=100. The natural algorithm would do 99 multiplications, the better algorithm does 8. Assume N≤1,000,000. What is the maximum number of multiplications that the natural algorithm would perform? What is the maximum number of multiplications that the better algorithm would perform?
Start Lecture #4
Assume the phone book has N entries.
If you are lucky, the first entry succeeds, this takes O(1) time.
If you are unlucky, you need the last entry or, even worse, no entry works. This takes O(N) time.
Assuming the entry is there, on average it will be in the middle. This also takes O(N) time.
From a big-Oh viewpoint, the best case is much better than the average, but the worst case is the same as the average.
Lab: Lab 1 part 4 is assigned and is due in 7 days.
The above is kinda-sorta OK. When we do binary search for real we will get the details right. In particular we will worry about the case where the sought-for item is not present.
This algorithm is a big improvement. The best case is still O(1), but the worst and average cases are O(log(N)).
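Here is a preview sketch of binary search on a sorted int array (again, we will do this for real later and nail down the details):

static int binarySearch(int[] a, int target) {
    int lo = 0, hi = a.length - 1;
    while (lo <= hi) {
        int mid = (lo + hi) / 2;                  // middle of the remaining range
        if      (a[mid] == target) return mid;    // found it
        else if (a[mid] <  target) lo = mid + 1;  // discard the lower half
        else                       hi = mid - 1;  // discard the upper half
    }
    return -1;                                    // not present
}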
BUT binary search requires that the list is sorted. So either we need to learn how to sort (and figure out its complexity) or learn how to add items to a sorted list while keeping it sorted (and figure out how much this costs).
These are not trivial questions; we will devote real effort to their solutions.
Read
public class Quadrilateral extends GeometricObject {
    ...
    public Quadrilateral (Point p1, Point p2, Point p3, Point p4)
    public double area() {...}
}
Definition: An Abstract Data Type is a data type whose properties are specified but not their implementations. For example, we know from the abstract specification on the right that a Quadrilateral is a Geometric Object, that it is determined by 4 points, and that its area can be determined with no additional information.
The idea is that the implementor of a module does not reveal all the details to the user. This has advantages for both sides.
In 201 you will learn how integers are stored on modern computers (two's complement), how they were stored on the CDC 6600 (one's complement), and perhaps packed decimal (one of the ways they are stored on IBM mainframes). For many applications, however, the implementations are unimportant. All that is needed is that the values are mathematical integers.
For any of the implementations, 3*7=21, x+(y+z)=(x+y)+z (ignoring overflows), etc. That is, to write many integer-based programs we need just the logical properties of integers and not their implementation. Separating the properties from the implementation is called data abstraction.
Consider the BigDecimal class in the Java library that we mentioned previously (it includes unbounded integers). There are three perspectives or levels at which we can study this library.
Preconditions are requirements placed on the user of a method in order to ensure that the method behaves properly. A precondition for ordinary integer divide is that the divisor is not zero.
A postcondition is a requirement on the implementation. If the preconditions are satisfied, then upon method return the postcondition will be satisfied.
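A tiny (hypothetical) illustration of the style, written as comments on a method:

// Precondition:  divisor != 0
// Postcondition: the returned value is dividend/divisor, truncated toward zero
public static int quotient(int dividend, int divisor) {
    return dividend / divisor;
}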
This is Java's way of specifying an ADT. An interface is a class with two important restrictions: every field must be a constant (public static final), and every method must be abstract, i.e., given without a body.
The implied keywords public static final and public abstract can be (and typically are) omitted.
A method without a body is specified at the user level: it tells you what arguments you must supply and the type of the value returned. It does not tell you how the method works, any properties of the return value, or any side effects of the method (e.g., values printed).
Each interface used must be implemented by one or more real classes that have non-abstract versions of all the methods.
The next two sections give examples of interfaces and their implementing classes. You can also read the FigureGeometry interface in this section.
As a warm up for the book's Stringlog interface, I present an ADT and two implementations of the trivial operation of squaring an integer. I even threw in a package. All in all it is way overblown. The purpose of this section is to illustrate the Java concepts in an example where the actual substance is trivial and the concepts are essentially naked.
There are four .java files involved as shown on the right and listed below. The first three constitute the squaring package and are placed in a directory with the same name (standard for packages).
package squaring;
public interface SquaringInterface {
    int getSquare();
}

package squaring;
public class SimpleSquaring implements SquaringInterface {
    private int x;
    public SimpleSquaring(int x) { this.x = x; }
    public int getSquare() { return x*x; }
}

package squaring;
public class FancySquaring implements SquaringInterface {
    private int x;
    public FancySquaring(int x) { this.x = x; }
    public int getSquare() {
        int y = x + 55;
        return (x+y)*(x-y)+(y*y);
    }
}

import squaring.*;
public class DemoSquaring {
    public static void main(String[] args) {
        SquaringInterface x1 = new SimpleSquaring(5);
        SquaringInterface x2 = new FancySquaring(6);
        System.out.printf("(x1.x)*(x1.x)=%d (x2.x)(x2.x)=%d\n",
                          x1.getSquare(), x2.getSquare());
    }
}
The first file is the ADT. The only operation supported is to compute and return the square of the integer field of the class. Note that the (non-constant) data field is not present in the ADT and the (abstract) operation has no body.
The second file is the natural implementation, the square is computed by multiplying. Included is a standard constructor to create the object and set the field.
The third file computes the square in a silly way (recall that (x+y)*(x-y)=x*x-y*y). The key point is that from a client's view, the two implementations are equivalent (ignoring performance).
The fourth file is probably the most interesting. It is located in the parent directory. Both variables have declared type SquaringInterface. However, they have actual types SimpleSquaring and FancySquaring respectively.
A Java interface is not a real class so no object can have actual type SquaringInterface (only a real class can follow new). Since the getSquare() in each real class overrides (not overloads) the one in the interface, the actual type is used to choose which one to invoke. Hence x1.getSquare() invokes the getSquare() in SimpleSquaring; whereas x2.getSquare() invokes the getSquare() in FancySquaring.
UML is a standardized way of summarizing classes and interfaces. For example, the 3 classes/interfaces in the squaring package would be written as shown on the right.
The commands in this section are run from the parent directory of squaring. Note that the name of this parent is not important. What is important is that the name of the squaring directory must agree with the package name.
javac squaring/*.java
javac DemoSquaring.java
java DemoSquaring
It would be perfectly fine to execute the three lines on the right, first compiling the package and then compiling and running the demo.
However, the first line is not necessary. The import statement in DemoSquaring.java tells the compiler that all the classes in the squaring package are needed. As a result the compiler looks in subdirectory squaring and compiles all the classes.
Homework: 9, 10.
In contrast to the above, we now consider an example of a useful package.
The book's stringLogs package gives the specification and two implementations of an ADT for keeping logs of strings. A log has two main operations: you can insert a string into the log and ask if a given string is already present in the log.
Homework: If you haven't already done so, download the book's programs onto your computer. It can be found at.
Unzip the file and you will have a directory hierarchy starting at XXX/bookFiles, where XXX is the absolute name (i.e. starting with / or \) of the directory at which you executed the unpack. Then set CLASSPATH via

export CLASSPATH=XXX/bookFiles:.

(with XXX replaced by your actual directory).
As mentioned previously, much of the book's code can be downloaded onto your system. In my case I put it in the directory /a/dale-dataStructures/bookFiles. Once again the name of this directory is not important, but the names of all subdirectories and files within must be kept as is to agree with the usage in the programs.
import ch02.stringLogs.*;
public class DemoStringLogs { }
To prepare for the serious work to follow I went to a scratch directory and compiled the trial stub for a stringLog demo shown on the right. The compilation failed complaining that there is no ch02 since DemoStringLogs.java is not in the parent directory of ch02.
Since I want to keep my code separate from the book's, I do not want my demo in /a/dale-dataStructures/BookFiles. Instead I need to set CLASSPATH.
export CLASSPATH=/a/dale-dataStructures/bookFiles:.
The command on the right is appropriate for my gnu/linux system and (I believe) would be the same on MacOS. Windows users should see the book. This will be demoed in the recitation.
Now javac DemoStringLogs.java compiles as expected.
StringLog(String name);
Since a user might have several logs in their program, it is useful for each log to have a name. So we will want a constructor something like that on the right.
For some implementations there is a maximum number of entries possible in the log. (We will give two implementations, one with and one without this limit.)
StringLog(String name, int maxEntries);
If we have a limit, we might want the user to specify its value with a constructor like the one on the right.
We also want a toString() method that returns a "nicely formatted" version of the log.
Here is the code downloaded from the text web site. One goal of an interface is that a client who reads the interface can successfully use the package.
That is an ambitious goal; normally more documentation is needed.
How about the questions I raised in the previous section?
import ch02.stringLogs.*;
public class DemoStringLogs {
    public static void main (String[] args) {
        StringLogInterface demo1;
        demo1 = new ArrayStringLog ("Demo #1");
        demo1.insert("entry 1");
        demo1.insert("entry 1");   // ??
        System.out.println ("The contains() " +
                            "method is case " +
                            (demo1.contains("Entry 1") ? "in" : "") +
                            "sensitive.");
        System.out.println(demo1);
    }
}

The output is:

The contains() method is case insensitive.
Log: Demo #1

1. entry 1
2. entry 1
On the right we have added some actual code to the demo stub above. As is often the case we used test cases to find out what the implementation actually does. In particular, by compiling (javac) and running (java) DemoStringLogs we learn:
- Duplicate insertions are permitted (note the two identical insert()s marked ??).
- contains() is case insensitive; the demo determines this using the ?: "conditional expression" (borrowed from the C language).
- The "nicely formatted" output of the toString() method, shown below the code.
Homework: 15.
Homework: Recall that a stringlog can contain multiple copies of the same string. Describe another method that would be useful if multiple strings do exist.
Start Lecture #5
Now it is time to get to work and implement StringLog.
package ch02.stringLogs;
public class ArrayStringLog implements StringLogInterface {
    ...
}
In this section we use a simple array based technique in which a StringLog is basically an array of strings. We call this class ArrayStringLog to emphasize that it is array based and to distinguish it from the linked-list-based implementation later in the chapter.
private String[] log;
private int lastIndex = -1;
private String name;
Each StringLog object will contain the three instance fields on the right: log is the basic data structure, an array of strings with each array entry containing one log entry; name is the name of the log; and lastIndex is the last (i.e., largest) index into which an item has been placed.
The book uses protected (not private) visibility. For simplicity I will keep to the principle that fields are private and methods public until we need something else.
public ArrayStringLog(String name, int maxSize) {
    this.name = name;
    log = new String[maxSize];
}
public ArrayStringLog(String name) {
    this(name, 100);   // 100 arbitrary
}
We must have a constructor
Why? That is, why isn't the default constructor adequate?
Answer: Look at the fields: we need to set name and create the actual array.
Clearly the user must supply the name, the size of the array can either be user-supplied or set to a default. This gives rise to the two constructors on the right.
Although the constructor executes only two Java statements, its complexity is O(N) (not simply O(1)). This is because creating the array takes time proportional to its length: each of the maxSize entries is automatically initialized (to null, since the entries are references).
Both constructors have the same name, but their signatures (name plus parameter types) are different so Java can distinguish them.
public void insert (String s) {
    log[++lastIndex] = s;
}
This mutator executes just one simple statement so is O(1). Inserting an element into the log is trivial because a log imposes no order on its entries: we simply place the new string in the next unused slot.
public void clear() {
    for(int i=0; i<=lastIndex; i++)
        log[i] = null;
    lastIndex = -1;
}
To empty a log we just need to reset lastIndex.
Why?
If so, why the loop?
public boolean isFull() {
    return lastIndex == log.length-1;
}
public int size() {
    return lastIndex+1;
}
public String getName() {
    return name;
}
The isFull() accessor is trivial (O(1)), but easy to get wrong by forgetting the -1. Just remember that in this implementation the index stored is the last one used not the next one to use and all arrays start at index 0.
For the same reason size() is lastIndex+1. It is also clearly O(1).
The accessor getName() illustrates why accessor methods
are sometimes called getters.
So that users cannot alter fields
behind the implementor's back, the fields are normally
private and the implementor supplies a getter.
This practice essentially gives the user read-only access to the
fields.
public boolean contains (String str) {
    for (int i=0; i<=lastIndex; i++)
        if (str.equalsIgnoreCase(log[i]))
            return true;
    return false;
}
The contains() method loops through the log looking for the given string. Fortunately, the String class includes an equalsIgnoreCase() so case insensitive searching is no extra effort.
Since the program loops through all N entries it must take time that grows with N. Also each execution of equalsIgnoreCase() can take time that grows with the length of the entry. So the complexity is more complicated and not just O(1) or O(N).
public String toString() {
    String ans = "Log: " + name + "\n\n";
    for (int i=0; i<=lastIndex; i++)
        ans = ans + (i+1) + ". " + log[i] + "\n";
    return ans;
}
The toString() accessor is easy but tedious.
Its specification is to return a
nicely formatted version of
the log.
We need to produce the name of the log and each of its entries.
Different programmers would often produce different programs since
what is
nicely formatted for one, might not be nicely
formatted for another.
The code on the right (essentially straight from the book) is
one reasonable implementation.
With such a simple implementation, the bugs should be few and shallow (i.e., easy to find). Also we see that the methods are all fast except for contains() and toString(). All in all it looks pretty good, but ...
Conclusion: We need to learn about linked lists.
Homework: 20, 21.
Read.
As noted in section 1.5, arrays are accessed via an index, a value that is not stored with the array. For a linked representation, each entry (except the last) contains a reference or pointer to the next entry. The last entry contains a special reference, null.
Before we can construct a linked version of a stringlog, we need to define a node, one of the horizontal boxes (composed of two squares) on the right.
The first square, Data, is application dependent.
For a stringlog, it would be a string, but in general it could be
arbitrarily complex.
However, it need not be arbitrarily complex.
Why?
The Next entry characterizes a linked list; it is a pointer to another node. Note that I am not saying one Node contains another Node, rather that one node contains a pointer to (or a reference to) another node.
As an analogy consider a
treasure hunt game where you are
given an initial clue, a location (e.g., on top of the basement TV).
At this location, you find another clue, ... , until the last clue
points you at the final target.
It is not true that the first clue contains the second, just that the first clue points to the second.
public class LLStringNode {
    private String info;
    private LLStringNode link;
}
Despite what was just said, the Java code on the right makes it
look as if a node does contain another node.
This confusion is caused by our friend
reference semantics.
Since LLStringNode is a class (and not a
primitive type), a variable of that type, such as link
shown on the right, is a reference, i.e., a pointer
(link corresponds to the Next box above;
info corresponds to Data).
The previous paragraph is important. It is a key to understanding linked data structures. Be sure you understand it.
public LLStringNode(String info) {
    this.info = info;
    link = null;
}
The LLStringNode constructor simply initializes the two fields. We set link=null to indicate that this node does not have a successor.
Forgive me for harping so often on the same point, but please be sure you understand why the constructor above, when invoked via the Java statement
node = new LLStringNode("test");
gives rise to the situation depicted on the near right and not the situation depicted on the far right.
Recall from 101 that all references are the same size. My house in the suburbs is much bigger than the apartment I had in Washington Square Village, but their addresses are (about) the same size.
Now let's consider a slightly larger example, 3 nodes. We execute the Java statements
node1 = LLStringNode("test1"); node2 = LLStringNode("test2"); node3 = LLStringNode("test3"); node3.setLink(node2); node2.setLink(node1);
setLink() is the usual mutator that sets
the link field of a node.
Similarly we define setInfo(), getLink(), and getInfo().
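For completeness, here is one plausible version of these four methods (the book's code may differ in details):

public String getInfo()                { return info; }
public void setInfo(String info)       { this.info = info; }
public LLStringNode getLink()          { return link; }
public void setLink(LLStringNode link) { this.link = link; }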
Homework: 41, 42.
Remark: In 2012-12 fall, this was assigned next lecture.
From the diagram on the right, which we have seen twice before, the Java code for simple linked list operations is quite easy.
The usual terminology is that you
traverse a
list,
visiting each node in order of its occurrence on the
list.
We need to be given a reference to the start of the list (Head in
the diagram) and need an algorithm to know when we have reached the
end of the list (the link component is null).
As the traversal proceeds each node is
visited.
The visit program is unaware of the list; it processes the Data
components only (called info in the code).
public static void traverse(LLStringNode node) {
    while (node != null) {
        visit(node.getInfo());
        node = node.getLink();
    }
}
Note that this works fine if the argument is null, signifying a list of length zero (i.e., an empty list).
Note also that you can traverse() a list starting at any node.
The bottom row of the diagram shows the desired result of inserting a node after the first node. (The middle row shows deletion.) Two references are added and one (in red) is removed (actually replaced).
The key point is that to insert a node you need the red reference as well as a reference to the new node. In the picture the red reference is the Next field of a node. Another possibility is that it is an external reference to the first node (e.g. Head in the diagram). A third possibility is that the red reference is the null in the link field of the last node. Yet a fourth possibility is that the list is currently empty and the red reference is a null in Head.
However in all these cases the same two operations are needed (in this order): first set the new node's link field to the value of the red reference; then set the red reference to point to the new node.
We will see the Java code shortly.
Most lists support deletion. However our stringlog example does not, perhaps because logs normally do not (or perhaps because it is awkward for an array implementation).
The middle row in the diagram illustrates deletion for (singly) linked lists. Again there are several possible cases (but the list cannot be empty) and again there is one procedure that works for all cases.
You are given the red reference, which points to the node to be deleted and you set it to the reference contained in the next field of the node to which it was originally pointing.
(For safety, you might want to set the original next field to null).
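In Java the whole deletion procedure is two or three statements. A sketch, assuming the red reference is the link field of a node pred (a hypothetical name):

LLStringNode doomed = pred.getLink();  // the node to be deleted
pred.setLink(doomed.getLink());        // splice it out of the list
doomed.setLink(null);                  // optional, for safety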
General lists support traversal as well as arbitrary insertions and deletions. For stacks, queues, and stringlogs only a subset of the operations are supported as shown in the following table. The question marks will be explained in the next section.
package ch02.stringLogs;
public class LinkedStringLog implements StringLogInterface {
    ...
}
This new class LinkedStringLog implements the same ADT as did ArrayStringLog and hence its header line looks almost the same.
private LLStringNode log;
private String name;
We again have a name for each stringlog, but the first field looks
weird.
Why do we have a single node associated with each log?
It seems that all questions have the same answer:
reference semantics.
An LLStringNode is not a node but a reference or pointer to a node. Referring to the pictures in the previous section, the log field corresponds to Head in the picture and not to one of the horizontal boxes containing data and next components. If the declaration said LLStringNode log = new LLStringNode(...), then log would still not be a horizontal box but would point to one. As written, the declaration yields an uninitialized data field.
public LinkedStringLog(String name) {
    this.name = name;
    log = null;
}
An advantage of linked over array based implementations is that there is no need to specify (or have a default for) the maximum size. The linked structure grows as needed. This observation explains why there is no size parameter in the constructor.
As far as I can tell the log=null; statement could have been omitted if the initialization was specified in the declaration.
There are many names involved so a detailed picture may well prove helpful. Note that the rounded boxes (let's hope apple doesn't have a patent on this as well) are actual objects; whereas references are drawn as rectangular boxes. There are no primitive types in the picture.
Note also that all the references are the same size; but not all the objects. The names next to (not inside) an object give the type (i.e., class) of the object. To prevent clutter I did not write String next to the string objects, believing that the "" makes the type clear.
The first frame shows the result of executing
LinkedStringLog lsl = new LinkedStringLog("StrLog1");
Let's go over this first frame and see that we understand every detail in it.
The other two frames show the result after executing
lst.insert ("Node1"); lst.insert ("Node2");
We will discuss them in a few minutes. First, we will introduce insert() with our familiar, much less detailed, picture and present the Java commands that do the work.
Note that the size of a LinkedStringLog and of an LLStringNode look the same in the diagram because they both contain two fields, each of which is a reference (and all references are the same size). This is just a coincidence. Some objects contain more references than do others; some objects contain values of primitive types; and we have not shown the methods associated with types.
An interesting question arises here that often occurs.
A liberal interpretation of the word
log permits a more
efficient implementation than would be possible for a stricter
interpretation.
Specifically does the term log imply that new entries are inserted
at the end of the log?
I think of logs as structures that are append only, but that was never stated in the ADT. For an array-based implementation, insertion at the end is definitely easier, but for linked, insertion at the beginning is easier.
However, the idea of a beginning and an end really doesn't apply to the log. Instead, the beginning and end are defined by the linked implementation. As far as the log is concerned, we can call either side of the picture the beginning.
The bottom row of the picture on the right shows the result of inserting a new element after the first existing element. For stringlogs we want to insert at the beginning of the linked list, so the red reference is the value of Head in the picture or log in the Java code.
public void insert(String element) {
    LLStringNode node = new LLStringNode(element);
    node.setLink(log);
    log = node;
}
The code for insert() is very short, but does deserve
study.
The first point to make is that we are inserting at the beginning of
the linked list so we need a pointer to
the
node before the first node.
The stringlog itself serves as a surrogate for this
pre-first
node.
Let's study the code using the detailed picture in the
An Example section above.
public void clear() { log = null; }
The clear() method simply sets the log back to initial state, with the node pointer null.
Start Lecture #6
Remarks: I added a "uses" arrow to the diagram in section 2.B.
Redo insert using the new detailed diagram.
Homework: 41, 42.
Remark: This homework should have been assigned last lecture.
Lab: 2 assigned. It is due 2 October 2012 and includes an honors supplement.
public boolean isFull() { return false; }
With a linked implementation, no list can be full.
Recall that the user may not know the implementation chosen or may wish to try both linked- and array- based stringlogs. Hence we supply isFull() even though it may never be used and will always return false.
public String getName() { return name; }
getName() is identical for both the linked- and array-based implementations.
public int size() {
    int count = 0;
    LLStringNode node = log;
    while (node != null) {
        count++;
        node = node.getLink();
    }
    return count;
}
The size() method is harder for this linked representation than for our array-based implementation since we do not keep an internal record of the size and must traverse the log as shown on the right.
An alternate implementation would be to maintain a count field and have insert() increment count by 1 each time.
One might think that the alternate implementation is better for the following reason. Currently insert() is O(1) and size() is O(N). For the alternative implementation both are O(1). However, it is not that simple, the number of items that size() must count is the number of inserts() that have been done, so there could be N insert()'s for one size().
A proper analysis is more subtle.
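For reference, a sketch of the alternate implementation (mine, not the book's):

private int count = 0;     // new field, maintained by insert()

public void insert(String element) {
    LLStringNode node = new LLStringNode(element);
    node.setLink(log);
    log = node;
    count++;               // the only extra work
}

public int size() {
    return count;          // now O(1)
}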
public String toString() {
    String ans = "Log: " + name + "\n\n";
    int count = 0;
    for (LLStringNode node=log; node!=null; node=node.getLink())
        ans = ans + (++count) + ". " + node.getInfo() + "\n";
    return ans;
}
The essence of toString() is the same as for the array-based implementation. However, the loop control is different. The controlling variable is a reference, it is advanced by getLink() and termination is a test for null.
public boolean contains(String str) {
    for (LLStringNode node=log; node!=null; node=node.getLink())
        if (str.equalsIgnoreCase(node.getInfo()))
            return true;
    return false;
}
Again the essence is the same as for the array-based implementation, but the loop control is different. Note that contains() and toString() have the same loop control. Indeed, advancing by following the link reference until it is null is quite common for any kind of linked list.
The while version of this loop control was used in size().
Homework: 44, 47, 48.
Read
Read
A group is developing a software system that is comprised of several modules, each with a lead developer. They collect the error messages that are generated during each of their daily builds. Then they wish to know, for specified error messages, which modules encountered these errors.
I wrote a CodeModule class that contains the data and methods for one code module, including a stringlog of its problem reports. This class contains
This is not a great example since stringlogs are not a perfect fit. I would think that you would want to give a key word such as "documentation" and find matches for "documentation missing" and "inadequate documentation". However, stringlogs only check for (case-insensitive) complete matches.
The program reads three text files.
Let's look at the UML and sketch out how the code should be constructed. Hand out the UML for stringlogs. The demo class could be written many ways, but we should pretty much all agree on CodeModule.java.
The Java programs, as well as the data files, can be found here.
Note that the program is a package named codeQuality. Hence it must live in a directory also named codeQuality. To compile and run the program, I would be in the parent directory and execute:
export CLASSPATH=/a/dale-dataStructures/bookFiles:.
javac codeQuality/DemoCodeQuality.java
java codeQuality/DemoCodeQuality
You would do the same but replace /a/dale-dataStructures/bookFiles with wherever you downloaded the book's packages.
It is important to distinguish the user's and implementers view of data structures. The user just needs to understand how to use the structure; whereas, the implementer must write the actual code. Java has facilities (classes/interfaces) that make this explicit.
Start Lecture #7
We will now learn our first general-purpose data structure, taking a side trip to review some more Java.
Definition: A stack is a data structure from which one can remove or retrieve only the element most recently inserted.
This requirement that removals are restricted to the most recent insertion implies that stacks have the well-known LIFO (last-in, first-out) property.
In accordance with the definition the only methods defined for stacks are push(), which inserts an element, pop(), which removes the most recently pushed element still on the stack, and top(), which retrieves (but does not remove) the most recently pushed element still on the stack.
An alternate approach is to have pop() retrieve as well as remove the element.
Stacks are a commonly used structure, especially when working with data that can be nested. For example consider evaluating the following postfix expression (or its infix equivalent immediately below).
22 3 + 8 * 2 3 4 * - -
((22 + 3) * 8) - (2 - 3 * 4)
A general solution for infix expressions would be challenging for us, but for postfix it is easy!
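To back up that claim, here is a sketch of a postfix evaluator using a stack from the Java library (my code, not the book's; it assumes space-separated tokens and only the three operators shown):

import java.util.ArrayDeque;
import java.util.Deque;

public class PostfixDemo {
    static int evaluate(String expr) {
        Deque<Integer> stack = new ArrayDeque<Integer>();
        for (String tok : expr.split("\\s+")) {
            if (tok.equals("+") || tok.equals("-") || tok.equals("*")) {
                int right = stack.pop();   // note the order: right popped first
                int left  = stack.pop();
                if      (tok.equals("+")) stack.push(left + right);
                else if (tok.equals("-")) stack.push(left - right);
                else                      stack.push(left * right);
            } else {
                stack.push(Integer.parseInt(tok)); // an operand
            }
        }
        return stack.pop();
    }
    public static void main(String[] args) {
        System.out.println(evaluate("22 3 + 8 * 2 3 4 * - -")); // prints 210
    }
}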
Homework: 1, 2
In Java a collection represents a group of objects, which are called the collections's elements. A collection is a very general concept that can be refined in several ways. We shall see several such refinements.
For example, some collections permit duplicates; whereas others do not. Some collections, for example, stacks and queues, restrict where insertions and deletions can occur. In contrast other collections are general lists, which support efficient insertion/deletion at arbitrary points.
Last semester I presented a sketch of these concepts as an optional topic. You might wish to read it here.
Our ArrayStringLog class supports a collection of Strings. It would be easy, but tedious, to produce a new class called ArrayIntegerLog that supports a collection of Integers. It would then again be easy, but now annoying, to produce a third class ArrayCircleLog, and then ArrayRectangleLog, and then ... .
If we needed logs of Strings, and logs of Circles, and logs of Integers, and logs of Rectangles, we could cut the ArrayStringlog class, paste it four times and do some global replacements to produce the four needed classes. But there must be a better way!
In addition this procedure would not work for a single heterogeneous log that is to contain Strings, Circles, Integers, and Rectangles.
public class T { ... }

public class S extends T { ... }

public class Main {
    public static void main(String[] args) {
        T t;
        S s;
        ...
        t = s;
    }
}
Recall from 101 that a variable of type T can be assigned a value from any class S that is a subtype of T.
For example, consider three files T.java, S.java, and Main.java as shown on the right. We know that the final assignment statement t=s; is legal; whereas, the reverse assignment s=t; may be invalid.
When applied to our ArrayStringLog example, we see that, if we developed an ArrayObjectLog class (each instance being a log of Java objects) and constructed objLog as an ArrayObjectLog, then objLog could contain three Strings, or it could contain two Circles, or it could contain four Doubles, or it could contain all 9 of these objects at the same time.
For this heterogeneous example where a single log is to contain items of many different types, ArrayObjectLog is perfect.
As we just showed, an ArrayObjectLog can be used for either homogeneous logs (of any type) or for heterogeneous logs. I asserted it is perfect for the latter, but (suspiciously?) did not comment on its desirability for the former.
To use ArrayObjectLog as the illustrative example, I must extend it a little. Recall that all these log classes have an insert() method that adds an item to the log. Pretend that we have augmented the classes to include a retrieve() method that returns some item in the log (ignore the possibility that the log might be empty). So the ArrayStringLog method retrieve() returns a String, the ArrayIntegerLog method retrieve() returns an Integer, the ArrayObjectLog method retrieve() returns an Object, etc.
Recall the setting. We wish to obviate the need for all these log classes except for ObjectLog, which should be capable of substituting for any of them. The code on the right uses an ObjectLog to substitute for an IntegerLog.
ArrayObjectLog integerLog = new ArrayObjectLog("Perfect?");
Integer i1=5, i2;
...
integerLog.insert(i1);                  // always works
i2 = integerLog.retrieve();             // compile error
i2 = (Integer)integerLog.retrieve();    // risky
Object o = integerLog.retrieve();
if (o instanceof Integer)
    i2 = (Integer)o;                    // safe
else
    // What goes here?
    // Answer: A runtime error msg
    // or a Java exception (see below).
Java does not know that this log will contain only Integers so a naked retrieve() will not compile.
The downcast will work provided we in fact insert() only Integers in the log. If we erroneously insert something else, the downcast of the retrieve()d value will generate a runtime error that the user may well have trouble understanding, especially if they do not have the source code.
Using instanceof does permit us to generate a (hopefully) informative error message, and is probably the best we can do. However, when we previously used a real ArrayIntegerLog no such runtime error could occur, instead any erroneous insert() of a non-Integer would fail to compile. This compile-time error is a big improvement over the run-time error since the former would occur during program development; whereas, the latter occurs during program use.
Summary: A log of Objects can be used for any log; it is perfect for heterogeneous logs, but can degrade compile-time errors into run-time errors for homogeneous logs.
We will next see a perfect solution (using Java generics) for homogeneous logs.
Homework: 4.
Recall that ArrayStringLog is great for logs of Strings, ArrayIntegerLog is great for logs of Integers, etc. The problem is that you have to write each class separately even though they differ only in the type of the logged items (Strings vs. Integers vs. ...). This is exactly the problem generics are designed to solve.
The idea is to parameterize the type as I now describe.
y1 = tan(5.1) + 5.1^3 + cos(5.1^2)
y2 = tan(5.3) + 5.3^3 + cos(5.3^2)
y3 = tan(8.1) + 8.1^3 + cos(8.1^2)
y4 = tan(9.2) + 9.2^3 + cos(9.2^2)

f(x) = tan(x) + x^3 + cos(x^2)
y1 = f(5.1)
y2 = f(5.3)
y3 = f(8.1)
y4 = f(9.2)
It would be boring and repetitive to write code like that on the top right (using mathematical not Java notation).
Instead you would define a function that parameterizes the numeric value and then invoke the function for each value desired. This is illustrated in the next frame.
Compare the first y1 with f(x) and note that we replaced each 5.1 (the numeric value) by x (the function parameter). By then invoking f(x) for different values of the argument x we obtain all the ys.
In our example of ArrayStringLog and friends we want to write one parameterized class (in Java it would be called a generic class and named ArrayLog<T>, with T the type parameter) and then instantiate the parameter to String, Integer, Circle, etc. to obtain the equivalent of ArrayStringLog, ArrayIntegerLog, ArrayCircleLog, etc.
public class ArrayLog<T> {
    private T[] log;
    private int lastIndex = -1;
    private String name;
    ...
}
In many (but not all) places where ArrayStringLog had String, ArrayLog<T> would have T. For example on the right we see that the log itself is now an array of Ts, but the name of the log is still a String.
We then get the effect of ArrayStringLog by writing ArrayLog<String>. Similarly we get the effect of ArrayIntegerLog by writing ArrayLog<Integer>, etc.
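To make this concrete, here is a stripped-down sketch (mine, not the book's code) of ArrayLog<T> with just insert() and retrieve(); the cast in the constructor is explained later in these notes.

public class ArrayLog<T> {
    private T[] log;
    private int lastIndex = -1;
    private String name;

    @SuppressWarnings("unchecked")
    public ArrayLog(String name, int maxSize) {
        log = (T[]) new Object[maxSize];   // new T[maxSize] is illegal; see below
        this.name = name;
    }
    public void insert(T element) { log[++lastIndex] = element; }
    public T retrieve() { return log[lastIndex]; }   // ignore the empty case, as above
}

With this class, ArrayLog<Integer> integerLog = new ArrayLog<Integer>("Perfect!", 100); followed by integerLog.insert(5); and Integer i2 = integerLog.retrieve(); all compile, and no cast is needed: the compile-time checking we lost with ArrayObjectLog is back.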
public interface LogInterface<T> {
    void insert(T element);
    boolean isFull();
    int size();
    boolean contains(T element);
    void clear();
    String getName();
    String toString();
}
In addition to generic classes such as ArrayLog<T>, we can also have generic interfaces such as LogInterface<T>, which is shown in its entirety on the right. Again note that comparing LogInterface<T> with StringLogInterface from the book, we see that some (but not all) Strings have been changed to Ts, and <T> has been added to the header line.
We will see several complete examples of generic classes as the course progresses.
The current situation is pretty good.
What's left?
Assume we want to keep the individual entries of the log in some order, perhaps alphabetical order for Strings, numerical order for Integers, increasing radii for Circles, etc.
For this to occur, the individual items in the log must be comparable, i.e., you must be able to decide which of two items comes first. This requirement is a restriction on the possible types that can be assigned to T.
As it happens, Java has precisely this notion that elements of a type can be compared to each other. The requirement is that the type extends the standard library interface called Comparable. Thus to have sorted logs, the element type T must implement the Comparable interface. In Java an array-based version would be written (rather cryptically) as
public class ArrayLog<T extends Comparable<T>> { ... }
We are not yet ready to completely understand that cryptic header.
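Even so, a small sketch (mine; the class name SortedLog and the insertion-by-shifting are just illustrations) may show what the bound buys us: inside the class we may now call compareTo(), which plain Objects do not have.

public class SortedLog<T extends Comparable<T>> {
    private T[] log;
    private int lastIndex = -1;

    @SuppressWarnings("unchecked")
    public SortedLog(int maxSize) {
        log = (T[]) new Object[maxSize];
    }
    public void insert(T element) {
        int i = lastIndex;
        // Shift larger items right; compareTo() is legal only because of the bound on T.
        while (i >= 0 && log[i].compareTo(element) > 0) {
            log[i+1] = log[i];
            i--;
        }
        log[i+1] = element;
        lastIndex++;
    }
}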
We covered Java exceptions in 101 so this will be a review.
As a motivating example consider inserting a new String into a full ArrayStringLog. Our specification declared this to be illegal, but nonetheless, what should we do? The key point is that the user and not the implementer of ArrayStringLog should decide on the action. Java exceptions make this possible as we shall see.
An exceptional situation is an unusual, sometimes unpredictable event. It need not be fatal, but often is, especially if not planned for. Possibilities include an impossible operation, such as popping an empty stack (which will cause an illegal array reference).
We will see that Java (and other modern programming languages) has support for exceptional conditions (a.k.a. exceptions). This support consists of three parts.
As we shall see an exception can be thrown from one piece of code and caught somewhere else, perhaps in a place far, far away.
This section is from my 101 class notes.
double s12 = p1.distTo(p2);
if (s12!=p2.distTo(p3) || s12!=p3.distTo(p4) || s12!=p4.distTo(p1)) {
    System.out.println("Error: Rhombus with unequal sides");
    System.exit(0);   // an exception would be better
}
sidelength = s12;
double s12 = p1.distTo(p2);
try {
    if (s12!=p2.distTo(p3) || s12!=p3.distTo(p4) || s12!=p4.distTo(p1))
        throw new Exception("Rhombus with unequal sides");
    sidelength = s12;
} catch (Exception ex) {
    System.out.println("Error: Rhombus with unequal sides");
    System.exit(0);
}
The top frame on the right shows (a slight variant of) the body of the Rhombus constructor. This constructor produces a rhombus given its four vertices. It terminates the run if the parameters are invalid (the side lengths are not all equal).
The frame below it shows one way to accomplish the same task using an exception. There are four Java terms introduced: try, throw, catch, and Exception.
try and catch each introduce a block of code; the two blocks are related as follows: if an exception is thrown during execution of the try block, the matching (explained later) catch block is executed.
In the example code, if all the side lengths are equal, sidelength is set. If they are not all equal an exception in the class Exception is thrown, sidelength is not set, an error message is printed and the program exits.
This behavior nicely mimics that of the original, but is much more complicated. Why would anyone use it?
The answer to the above question is that using exceptions in the manner of the previous section to more-or-less exactly mirror the actions without an exception is not a good idea.
Remember the problem with the original solution. The author of the geometry package does not know what the author of the client wants to do when an error occurs (forget that I was the author of both). Without any such knowledge the package author terminated the run as there was no better way to let the client know that a problem occurred.
The better idea is for the package to detect the error, but let the client decide what to do.
The first step is to augment the Rhombus class with a constructor having three parameters: a point, a side-length, and an angle. The constructor produces the rhombus shown on the right (if θ=π/2, the rhombus is a square).
public Rhombus (Point p, double sideLength, double theta) {
    super(p,
          new Point(p.x+sideLength*Math.cos(theta),
                    p.y+sideLength*Math.sin(theta)),
          new Point(p.x+sideLength*(Math.cos(theta)+1.),
                    p.y+sideLength*Math.sin(theta)),
          new Point(p.x+sideLength, p.y));
    this.sideLength = sideLength;
}
double s12 = p1.distTo(p2);
if (s12!=p2.distTo(p3) || s12!=p3.distTo(p4) || s12!=p4.distTo(p1))
    throw new Exception ("Rhombus with unequal sides.");
sideLength = s12;
try {
    rhom1 = new Rhombus(origin, origin, p2, p3);
} catch (Exception ex) {
    System.out.printf("rhom1 error: %s  Use unit square.\n", ex.getMessage());
    rhom1 = new Rhombus(origin, 1.0, Math.PI/2.0);
}
The constructor itself is in the 2nd frame.
This enhancement has nothing to do with exceptions and could have (perhaps should have) been there all along. You will see below how this standard rhombus is used when the client mistakenly attempts to construct an invalid rhombus.
The next step, shown in the 3rd frame, is to have the regular rhombus constructor throw an exception, but not catch it.
The client code is in the last frame. We see here the try and catch. In this frame the client uses the original 4-point rhombus constructor, but the points chosen do not form a rhombus. The constructor detects the error and raises (throws in Java-speak) an exception. Since the constructor does not catch this exception, Java automatically re-raises it in the caller, namely the client code, where it is finally caught. This particular client chooses to fall back to a unit square.
It is this automatic re-raising provided by exceptions that enables the client to specify the action required.
Read the book's (Date) example, which includes information on creating exceptions and announcing them in header lines.
A Java exception is an object that is a member of either Exception, Error, or a subclass of one of these two classes.
throw new Exception ("Rhombus with unequal sides.");
On the right is the usage from my rhombus example above. As always the Java keyword new creates an object in the class specified. In the example, we do not name this object. We simply throw it.
public class GeometryException extends Exception {
    public GeometryException() {
        super();
    }
    public GeometryException(String msg) {
        super(msg);
    }
}
For simplicity, our example created an Exception. It is often convenient to have different classes of exceptions, in which case you would write code as shown on the right. This code creates a new subclass of Exception and defines two constructors to be the same as for the Exception class itself. (Recall that super() in a constructor invokes a constructor in the superclass.)
throw new GeometryException ("Rhombus with unequal sides.");
We would then modify the throw to create and raise an exception in this subclass.
We have seen that if an exception is raised in a method and not caught there, the exception is re-raised in the caller of the method. If it is also not caught in this second method, it is re-raised in that method's caller. This keeps going and if the exception is not caught in the main method, the program is terminated. Note that if f() calls g() calls h() calls k() and an exception is raised in k(), the handler (the code inside catch{}) is searched for first in k(), then h(), then g(), and then f(), the reverse order of the calling sequence.
In this limited sense, exceptions act similar to returns.
There is one more point that we need to review about exceptions and that is their (often required) appearance in method header lines. To understand when header line appearance is required, we examine the color coded class tree for Java exceptions shown on the right.
As always, Object is the root of the Java class tree and naturally has many children in addition to Throwable. As the name suggests, the Throwable class includes objects that can be thrown, i.e., Java exceptions.
For us, there are three important classes of pre-defined exceptions highlighted in the diagram: namely Error, Exception, and RuntimeException.
The red exceptions (and their white descendants) are called unchecked exceptions; the blue exceptions (and their remaining white descendants) are called checked exceptions.
The header line rule is now simple to state (but not as simple to apply): Any method that might raise a checked exception must announce this fact in its header line using the throws keyword.
For example, the rhombus constructor would be written as follows.
public Rhombus (Point p1, Point p2, Point p3, Point p4) throws Exception {
    super(p1, p2, p3, p4);   // ignore this; not relevant to exceptions
    double s12 = p1.distTo(p2);
    if (s12!=p2.distTo(p3) || s12!=p3.distTo(p4) || s12!=p4.distTo(p1))
        throw new Exception ("Rhombus with unequal sides.");
    sideLength = s12;
}
The slightly tricky part comes when f() calls g(), g() calls h(), and h() raises a checked exception that it does not catch. Clearly, the header line of h() must have a throws clause. But, if g() does not catch the exception, then it is reraised in f() and hence g() must have a throws clause in its header line as well.
The point is that a method (g() above) can raise an exception even though it does not itself have a throw statement.
This makes sense since from the point of view of the caller of g() (method f() in this case), method g() does raise an exception so g() must announce that in its header.
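Here is a minimal self-contained illustration (the names f, g, and h are just placeholders): h() throws, g() merely passes the exception along and so needs a throws clause even though it contains no throw statement, and f() finally catches it.

public class PropagationDemo {
    public static void main(String[] args) {
        f();
    }
    static void f() {                    // f() catches, so no throws clause is needed
        try {
            g();
        } catch (Exception ex) {
            System.out.println("caught in f(): " + ex.getMessage());
        }
    }
    static void g() throws Exception {   // no throw statement, yet throws is required
        h();
    }
    static void h() throws Exception {
        throw new Exception("trouble in h()");
    }
}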
When a method detects an error, three possible actions can occur.
Homework: 8.
We know that a stack is a list with LIFO (last-in, first-out) semantics and that it has three basic operations.
We choose to specify and implement a generic (homogeneous) stack. That is, all elements of a given stack are of the same type, but different stacks can have elements of different types.
We use T for the element type, so our generic stack interface will be StackInterface<T>.
Calling top() or pop() on an empty stack is clearly an error so something must be done. We could simply add to the specification that top() and pop() cannot be called when the stack is empty, but that would be a burden for the client (even though we will supply an isEmpty() method).
We could try to handle the error in the pop() and top() methods themselves, but it is hard to see what would be appropriate for all clients. This leads to the chosen solution, an exception.
Our stack class will raise an exception when either top() or pop() is called when the stack is empty.
Rather than just using the Exception class, we will define our own exception. This immediately raises two questions: what should we name the exception, and should it be checked or unchecked?
package ch03.stacks;
public class StackUnderflowException extends RuntimeException {
    public StackUnderflowException() {
        super();
    }
    public StackUnderflowException(String msg) {
        super(msg);
    }
}

public class SUE extends RTE {
    public SUE() { super(); }
    public SUE(String msg) { super(msg); }
}
The book names it StackUnderflowException as shown in the first frame on the right. A corresponding exception in the Java library is called EmptyStackException. A good feature of long names like these is that they are descriptive. A downside is that the code looks longer. Don't let the extra length due to the long names lead you to believe the situation is more complicated than it really is.
The second frame shows the same code assuming the exception was named SUE, and assuming that Java used RTE rather than RuntimeException, and omitting the package statement, which has nothing to do with the exception itself.
I am definitely not advocating short cryptic names; just pointing out how simple the StackUnderflowException really is.
A more serious question is whether the exception should be checked or unchecked, i.e., whether it should extend Exception or RuntimeException.
import ch03.stacks.*;
public class DemoStacks {
    public static void main(String[] args) {
        ArrayStack<Integer> stack = new ArrayStack<Integer>();
        Integer x = stack.top();
    }
}
The code on the right shows the disadvantage of an unchecked exception. This simple main program creates an empty stack (we will learn ArrayStack soon) and then tries to access the top member, a clear error.
As we shall see, the code in top() checks for an empty stack and raises the StackUnderflowException when it finds one. Since top() does not catch the exception, it is reraised in the JVM (the environment that calls main()) and a runtime error message occurs. Were the exception checked, the program would not compile since top() can throw a checked exception, which the caller neither catches nor declares in its header. In general compile-time errors are better than runtime errors.
import ch03.stacks.*;
public class DemoStacks {
    public static void main(String[] args) {
        ArrayStack<Integer> stack = new ArrayStack<Integer>();
        stack.push(new Integer(3));
        Integer x = stack.top();
    }
}
The very similar program on the right, however, shows the advantage of an unchecked exception. This time, the stack is not empty when top() is called and hence no exception is generated. This program compiles and runs with no errors. However, if the exception were checked, the program again would not compile, adding try{} and catch{} blocks would clutter the code, and a header throws clause would be a false alarm.
The try{}...catch{} block pair can always be used as can the header throws clause. The choice between checked and unchecked errors is whether clients should be required or simply permitted to use these mechanisms.
A somewhat, but not completely, analogous situation occurs when trying to push an item onto a full stack. The reason the situation is not truly analogous is that conceptually, although a stack can be empty, it cannot be full.
Some stack implementations, especially those based on arrays, do produce stacks with an upper bound on the number of elements they can contain. Such stacks can indeed be full and these implementations supply both an isFull() method and a StackOverflowException exception.
public class SOE extends RTE {
    public SOE() { super(); }
    public SOE(String msg) { super(msg); }
}
On the right is the corresponding StackOverflowException class, written in the same abbreviated style used above. Note how stylized these exception definitions are. Written in the abbreviated style, just three U's were changed to O's.
Homework: 17.
Start Lecture #8
Remark: Give a better answer to class T extends S {} T t; S s; t=s; s=t;
Question: Do we tell clients that there is an isFull() method and mention the StackOverflowException that push() might raise? Specifically, should we put them in the interface?
If we do, then implementations that can never have a full stack must implement a trivial version of isFull(). If we don't, then how does the client of an implementation that can have full stacks find out about them?
One can think of several solutions: comments in the interface describing the features in only some implementations, a separate document outside Java, etc.
We shall follow the book and use a hierarchy of three Java interfaces.
package ch03.stacks;
public interface StackInterface<T> {
    void pop() throws StackUnderflowException;
    T top() throws StackUnderflowException;
    boolean isEmpty();
}
On the right we see the basic interface for all stacks. It is parameterized by T, the type of the elements in the stack.
The interface specifies one mutator, two accessors, and one possibly thrown exception. Two points deserve comment.
package ch03.stacks;
public interface BoundedStackInterface<T> extends StackInterface<T> {
    void push(T element) throws StackOverflowException;
    boolean isFull();
}
The first item to note concerning the code on the right is the extends clause. As with classes one interface can extend another. In the present case BoundedStackInterface inherits the three methods in StackInterface.
Since this extended interface is to describe stacks with an upper bound on the number of elements, we see that push() can raise an overflow exception. A predicate is also supplied so that clients can detect full stacks before causing an exception.
package ch03.stacks;
public interface UnboundedStackInterface<T> extends StackInterface<T> {
    void push(T element);
}
Perhaps the most interesting point in this interface is the observation that, although it might be a little harder to implement a stack without a bound, it is easier to describe. The concept of a full stack and the attendant exception simply don't occur.
This simpler description suggests that using an unbounded stack will be slightly easier than using a bounded one.
Imagine your friend John hands you three plates, a red, then a white, and then a blue. As you are handed plates, you stack them up. When John leaves, Mary arrives and you take the plates off the pile one at a time and hand them to Mary. Note that Mary gets the plates in the order blue, white, red, the reverse of the order you received them.
It should be no surprise that this algorithm performs the same for Integers as it does for plates (it is what we would call a generic method). The algorithm is trivial: push...push, pop...pop.
import ch03.stacks.*;
import java.util.Scanner;
public class ReverseIntegers {
    public static void main(String[] args) {
        Scanner getInput = new Scanner(System.in);
        int n = getInput.nextInt(), revInt;
        BoundedStackInterface<Integer> stack = new ArrayStack<Integer>(n);
        for (int i=0; i<n; i++)
            stack.push(new Integer(getInput.nextInt()));
        for (int i=0; i<n; i++) {
            revInt = stack.top();
            stack.pop();
            System.out.println(revInt);
        }
    }
}
A few comments on the Integer version shown on the right.
Homework: 18.
To help summarize the code in this chapter, here is the UML for the entire stack package
It is important not to let the fancy Java in this chapter (including more to come) hide the fact that implementing a stack with an array arr and an index idx is trivial: push(x) is basically arr[++idx]=x, top() is basically return arr[idx], and pop() is basically idx--.
The idea is that the elements currently in the stack are stored in the lowest indices of the array. We need to keep an index saying how much of the array we have used. The book has the index refer to the highest array slot containing a value; a common alternative is to have the index refer to the lowest empty slot; a frequent error is to do a little of each.
package ch03.stacks;
public class ArrayStack<T> implements BoundedStackInterface<T> {
    private final int DEFSIZE = 100;
    private T[] stack;
    private int topIndex = -1;

    public ArrayStack(int size) {
        stack = (T[]) new Object[size];
    }
    public ArrayStack() {
        this(DEFSIZE);
    }
The beginning of the class is shown on the right. We see from the header that it is generic and will implement all the methods in BoundedStackInterface (including those in StackInterface).
I changed the visibility of the fields from protected to private since I try to limit visibilities to private fields and public methods. I also used this to have the no-arg constructor invoke the 1-arg constructor.
I could spend half a lecture on the innocent looking assignment to stack, which should be simply new T[size]. Unfortunately, that simple code would not compile, due to a weakness in the Java implementation of generics, which does not permit the creation of generic arrays (although it does permit their declaration as we see on the right).
There is even a mini controversy as to why the weakness exists.
Accepting that new T[size] is illegal, the given code creates an array of Objects and downcasts it to an array of T's. For complicated reasons having to do with Java's implementation of generics, the downcast generates a warning claiming that the compiler can't be sure the created array will be used correctly in all situations. We shall sadly have to accept the warning.
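If the warning offends, one common idiom (a sketch, not the book's code) is to acknowledge it explicitly with an annotation; this does not make the cast any safer, it merely records that we accept the risk.

public class ArrayStackFragment<T> {
    private T[] stack;

    @SuppressWarnings("unchecked")   // we promise to use the array only as a T[]
    public ArrayStackFragment(int size) {
        stack = (T[]) new Object[size];   // new T[size] would not compile
    }
}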
public boolean isEmpty() {
    return topIndex == -1;
}
public boolean isFull() {
    return topIndex == stack.length-1;
}
For some reason the authors write these using if-then-else, converting trivial 1-liners into trivial 4-liners.
public void push(T element) {
    if (isFull())
        throw new StackOverflowException("helpful msg");
    else
        stack[++topIndex] = element;
}
public void pop() {
    if (isEmpty())
        throw new StackUnderflowException("helpful msg");
    else
        stack[topIndex--] = null;
}
public T top() {
    if (isEmpty())
        throw new StackUnderflowException("helpful msg");
    else
        return stack[topIndex];
}
These three methods, two mutators and one accessor, are the heart of the array-based stack implementation. As can be seen from the code on the right all are trivial.
pop() is not required to set the top value to null. It is a safety measure to prevent inadvertent references to stale data.
My code assumes knowledge of the Java (really C) increment/decrement operators ++ and -- used both as prefix and postfix operators. The authors avoid using these operators within expressions, producing slightly longer methods. This result is similar to their using an if-then-else rather than returning the value of the boolean expression directly. I believe that Java programmers should make use of these elementary features of the language and that they quickly become idiomatic.
Homework: 28, 30
Read.
The interesting part of this section is the ArrayList class in the Java library. An ArrayList object has the semantics of an array that grows and shrinks automatically; it is never full. When an ArrayList is used in place of an array, the resulting stack is unbounded. This library class is quite cool.
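As a sketch of how such an implementation might look (mine; the book's class differs in details, e.g., it would use StackUnderflowException and implement UnboundedStackInterface<T>):

import java.util.ArrayList;

public class ArrayListStack<T> {
    private ArrayList<T> stack = new ArrayList<T>();

    public boolean isEmpty() {
        return stack.isEmpty();
    }
    public void push(T element) {
        stack.add(element);               // never full: the ArrayList grows as needed
    }
    public void pop() {
        if (isEmpty())
            throw new RuntimeException("stack underflow");
        stack.remove(stack.size() - 1);
    }
    public T top() {
        if (isEmpty())
            throw new RuntimeException("stack underflow");
        return stack.get(stack.size() - 1);
    }
}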
The essential point of well-formed expressions is that (all forms of) parentheses are balanced. So ({({[]})}) is well-formed, but ({({[]}})) is not. Note that we are concerned only with the parentheses so ({({[]})}) is the same as ({xds(45{g[]rr})l}l).
Why are stacks relevant?
When we see an open parenthesis, we stack it and continue. If we see another open, we stack it too. When we encounter a close it must match the most recent open and that is exactly the top of stack.
The authors present a more extensive solution than I will give. Their Balanced class permits one to create an object that checks balanced parentheses for arbitrary parentheses.
new Balanced("([{", ")]}")
new Balanced("]x*,", "[y3.")
This approach offers considerable generality. The top invocation on the right constructs a Balanced object that checks for the three standard pairs of parentheses used above. The second invocation produces a weird object that checks for four crazy pairs of parentheses "][", "xy", "*3", and ",.".
For each input char
    if open, push
    if close, pop and compare
    if neither, ignore
When finished, stack must be empty
Since the purpose is to illustrate the use of a stack and I would like us to develop the code in class, let me simplify the problem a little and just consider ([{ and )]}. Also we require whitespace between elements so that we can use the scanner's hasNext() and next() methods. With this simplification a rough description of the algorithm is on the right. A complete Java solution is here.
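For reference, here is a sketch of the simplified algorithm (mine, not the linked solution; it hard-wires the pairs ([{ and )]} and uses the library's ArrayDeque as the stack so it stands alone; each whitespace-separated token is assumed to be a single character).

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Scanner;

public class BalancedSketch {
    public static void main(String[] args) {
        Scanner getInput = new Scanner(System.in);
        Deque<Character> stack = new ArrayDeque<Character>();
        boolean ok = true;
        while (getInput.hasNext()) {                 // read tokens until end of input
            char c = getInput.next().charAt(0);
            if (c == '(' || c == '[' || c == '{')
                stack.push(c);                       // an open: push it
            else if (c == ')' || c == ']' || c == '}') {
                if (stack.isEmpty() || !matches(stack.pop(), c)) {   // a close: pop and compare
                    ok = false;
                    break;
                }
            }                                        // anything else: ignore
        }
        System.out.println(ok && stack.isEmpty() ? "well-formed" : "NOT well-formed");
    }
    private static boolean matches(char open, char close) {
        return (open == '(' && close == ')') || (open == '[' && close == ']')
            || (open == '{' && close == '}');
    }
}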
Start Lecture #9
Modern versions of Java will automatically convert between an int and an Integer. This comes up in the book's solution for balanced parentheses since they push indices (int's, not String's). Although we didn't need it, this autoboxing and autounboxing is quite useful and requires some comment.
The point is that int is a primitive type (not a class) so we cannot say ArrayStack<int> to get a stack of int's. The closest we can get is ArrayStack<Integer>.
stack.push(new Integer(i1));
stack.push(i1);
Assume stack is a stack of Integer's, and we want to push the value of an int variable i1. In older versions of Java, we would explicitly convert the int to an Integer by using a constructor as in the top line on the right. However Java 1.5 (current is 1.7) introduced the automatic conversion so we can use the bottom line on the right.
Homework: 36, 37(a-d).
As with StringLogs, both array- and link-based stack implementations are possible. Unlike arrays (but like ArrayLists), the linked structure cannot be full or overflow so it will be an implementation of UnboundedStackInterface.
package support;
public class LLNode<T> {
    private LLNode<T> link = null;
    private T info;

    public LLNode (T info) {
        this.info = info;
    }
    public T getInfo() {
        return info;
    }
    public LLNode<T> getLink() {
        return link;
    }
    public void setInfo(T info) {
        this.info = info;
    }
    public void setLink(LLNode<T> link) {
        this.link = link;
    }
}
Essentially all linked solutions have a self-referential node class with each node containing references to another node (the link) and to some data that is application specific. We did a specific case when implementing linked StringLogs.
A generic solution is on the right. The constructor is given a reference to the data for the node. Each of the two fields can be accessed and mutated, yielding the four shown methods.
Probably the only part needing some study is the use of genericity.
To motivate the implementation, on the far right is a detailed view of what should be the result of executing
UnboundedStackInterface<Integer> stack = new LinkedStack<Integer>();
stack.push(new Integer(10));
stack.push(20);   // autoboxed
Remember that the boxes with rounded corners are objects and their class names are written outside the box. The rectangles contain references.
Normally, much less detail is shown than we see on the far right. Instead, one would draw a diagram like the one on the near right.
Do not let schematic pictures like this one trick you into believing that the info field contains an int such as 20. It cannot. The field contains a T, and T is never int. Why? Answer: T must be a class; not a primitive type.
package ch03.stacks;
import support.LLNode;
public class LinkedStack<T> implements UnboundedStackInterface<T> {
    private LLNode<T> top = null;
    ...
}
Since LLNode will be used in several packages, it is placed in a separate package and imported here. The alternative is to replicate LLNode in every package using it; a poor idea. Why? Answer: Replicated items require care to ensure changes are made to all copies.
The only data field is top, which points to the current top of stack.
An alternative to the field initialization shown would be to write a constructor and place the initialization there. The book uses this alternative; the code shown does not need an explicit constructor, the no-arg constructor suffices.
On the right is a schematic view of the actions needed to insert a node onto an arbitrary linked list and to delete a node from such a linked list. For a stack, these operations correspond to push and pop respectively.
There is a potential point of confusion, due in part to terminology, that I wish to explain.
Recall that, for a stack, the list head (commonly called top) is the site for all insertions and for all deletions. This means that new nodes are inserted before the current first node and hence the new node becomes the first node.
As a result, if node A is inserted first and node B is inserted subsequently, then, although node A's insertion was temporally prior to node B's, node A appears after node B on the stack (i.e., access to node A comes after access to node B).
This ordering on the list gives the LIFO (last-in, first-out) behavior required for a stack.
The push() operation corresponds to the first line in the diagram above. The only difference is that we are inserting before the current first node so the red dashed pointer is the contents of the top field rather than the link field of a node.
public void push (T info) {
    LLNode<T> node = new LLNode<T>(info);
    node.setLink(top);
    top = node;
}
From the diagram we see that the three steps are to create a new node, set the new link to the red link, and update the red link to reference the new node. Those three steps correspond one for one with the three lines of the method. A common error is to reverse the last two lines.
Show in class why reversing the last two lines fails.
Show in class that push() works correctly if the stack is empty when push() is called.
public void pop() throws StackUnderflowException {
    if (isEmpty())
        throw new StackUnderflowException("helpful msg");
    else
        top = top.getLink();
}
pop() corresponds to the second line of our diagram, again with the red pointer emanating from top, not from a node. So we just need to set the red arrow to the black one.
If the stack is empty, then the red arrow is null and there is no black arrow. In that case we have an underflow.
public T top() {
    if (isEmpty())
        throw new StackUnderflowException("helpful msg");
    else
        return top.getInfo();
}
public boolean isEmpty() {
    return top==null;
}
top() simply returns the top element of the stack. If the stack is empty, an underflow is raised.
We must remember however that from the user's viewpoint, the stack consists of just the info components. The business with links and nodes is there only for our (the implementer's) benefit. Hence we apply getInfo().
Finally, a linked stack is empty if its top field is null. Thus isEmpty() is a one-line method.
Both ArrayStack and LinkedStack are fairly simple and quite fast. All of the operations in both implementations are O(1); they execute only a fixed number of instructions independent of the size of the stack.
Note that the previous statement means that each push() takes only a constant number of instructions. Naturally pushing 1,000 elements takes longer (about 1000 times longer) than pushing one item.
The array-based constructor that is given the size of the array is not O(1) because creating an array of N Objects requires setting each of the N references to null. The linked-based constructor is O(1), which is better. Nonetheless, in a practical sense, both implementations are fast.
The two notable differences are
Homework: 40, 41, 45.
As mentioned at the beginning of this chapter, our normal arithmetic notation is called infix because the operator is in between the operands. Two alternative notations are prefix (operator precedes the operands) and postfix (operator follows the operands).
As we know, with infix notation, some form of precedence rules are needed and often parentheses are needed as well. Postfix does not need either.
The evaluation rule is to push all operands until an operator is encountered, in which case the operator is applied to the top two stacked elements (which are popped) and then the result is pushed (pop, pop, eval, push).
Do several examples in class
The real goal is simply a method that accepts a postfix expression (a Java String) and returns the value of the expression. However, we cannot compile a naked method since, unlike some other languages, Java requires that each source file must be a class. Thus our goal will be a class containing a static method that evaluates postfix expressions. The class can be thought of as a simple wrapper around the method.
Read.
import ch03.stacks.*;
import java.util.Scanner;
public class PostfixEval {
    public static double postfixEval(String expr) {
        ...
    }
}
As mentioned, we want to develop the method postfixEval(), but must package it as a class, which we have called PostfixEval. The skeleton is on the right; let's write the ... part in class.
For simplicity, we assume the argument to postfixEval() is a valid postfix expression.
A solution is here and a main program is here. The latter assumes each postfix expression is on a separate line.
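To give the flavor of the solution, here is a sketch of the body (mine, not the linked code; it uses the library's ArrayDeque rather than ch03.stacks so it stands alone, and it assumes whitespace-separated tokens).

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.Scanner;

public class PostfixEvalSketch {
    public static double postfixEval(String expr) {
        Deque<Double> stack = new ArrayDeque<Double>();
        Scanner tokens = new Scanner(expr);
        while (tokens.hasNext()) {
            String tok = tokens.next();
            if (tok.equals("+") || tok.equals("-") || tok.equals("*") || tok.equals("/")) {
                double right = stack.pop();   // the right operand was pushed last
                double left  = stack.pop();
                if      (tok.equals("+")) stack.push(left + right);
                else if (tok.equals("-")) stack.push(left - right);
                else if (tok.equals("*")) stack.push(left * right);
                else                      stack.push(left / right);
            } else
                stack.push(Double.parseDouble(tok));   // an operand: push it
        }
        return stack.pop();   // for a valid expression exactly one value remains
    }
    public static void main(String[] args) {
        System.out.println(postfixEval("3 4 + 2 *"));   // prints 14.0
    }
}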
export CLASSPATH="/a/dale-dataStructures/bookFiles/:."
javac DemoPostfixEval.java
java DemoPostfixEval
As a reminder this two-Java-files program can be compiled and run two ways. As written the program does not specify a package (technically it specifies the default package) so to compile and run it, go to its directory and execute the statements on the right.
export CLASSPATH="/a/dale-dataStructures/bookFiles/:."
javac postfixEval/DemoPostfixEval.java
java postfixEval/DemoPostfixEval
Alternatively (and good practice) is to use a named package. For example, add package postfixEval; at the top of each file, make sure the directory is also named postfixEval, go to the parent of this directory, and type the commands on the right.
Read.
Read. The book does not assume the postfix expression is valid so testing is more extensive than we would require.
Homework: 48, 49.
Several points were made in this chapter.
A recursive definition defines something in terms of (a hopefully simpler version of) itself.
It is easier to write this in elementary mathematics than Java. Consider these examples in order and only consider integer arguments.
For each of the following, can you evaluate f(3)? How about f(-8)? How about f(n) for all integers n?
1. f(n) = 1 + f(n-1)
2. f(n) = 1 + f(n-1) for n > 0
   f(n) = 0          for n ≤ 0 (base case)
3. f(n) = 1 + f(n-1) for n > 0
   f(0) = 0
   f(n) = 1 + f(n+1) for n < 0
4. f(n) = 1 + g(n)
   g(n) = 1 + f(n-1) for n > 0
   g(0) = 0
   g(n) = 1 + f(n+1) for n < 0
5. f(n) = n * f(n-1) for n > 0
   f(n) = 1          for n ≤ 0
6. f(n) = f(n-1) + f(n-2) for n > 0
   f(n) = 0               for n ≤ 0
7. f(n) = f(n-1) + f(n-2) for n > 0
   f(n) = 1               for n ≤ 0
The base case(s) occur(s) when the answer is expressed non-recursively.
Note that in most of the examples above f invokes itself directly, which is called direct recursion. However, in case 4, f invokes itself via a call to g, which is called indirect recursion. Similarly, g in this same example invokes itself via indirect recursion.
There is actually nothing special about recursive programs, except that they are recursive.
Here is a (not-so-beautiful) java program to compute any of the 7 examples above.
Number 5 is the well known factorial program and a non-recursive solution (write it on the board) is much faster.
Number 7 is the well known fibonacci program and a non-recursive solution (write it on the board) is much MUCH MUCH faster.
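For the record, here are sketches of those non-recursive versions (mine; they follow the base cases of examples 5 and 7, namely f(n)=1 for n ≤ 0).

public class Iterative {
    // Example 5: factorial, iteratively.
    public static long factorial(int n) {
        long result = 1;                 // also handles n <= 0, matching the base case
        for (int i = 2; i <= n; i++)
            result *= i;
        return result;
    }
    // Example 7: fibonacci-like, iteratively; MUCH faster than the naive recursion.
    public static long fib(int n) {
        long prev = 1, curr = 1;         // f(-1) and f(0), both 1 by the base case
        for (int i = 1; i <= n; i++) {
            long next = prev + curr;     // f(i) = f(i-2) + f(i-1)
            prev = curr;
            curr = next;
        }
        return curr;
    }
    public static void main(String[] args) {
        System.out.println(factorial(5) + " " + fib(7));   // prints 120 34
    }
}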
Start Lecture #10
If you have studied proof by induction, these questions should seem familiar.
Is each recursive call (the callee) smaller than the caller instance? Actually this is not quite right; the callee can be bigger. What is needed is that eventually the callee gets smaller. (The book refers to this as the smaller-caller question, but I don't like that name since, although the rhyming is cute, the meaning is backwards.)
In class, answer the questions for the not-so-beautiful program above.
Read. The main point is to recognize that the problem at a given size is much easier if we assume problems of smaller size can be solved. Then you can write the solution of the big size and just call yourself for smaller sizes.
You must ensure that each time (actually not each time, but eventually) the problem to be solved is getting smaller, i.e., closer to the base case(s) that we do know how to solve non-recursively.
Your solution should have an if (or case) statement where one (or several) branches are for the base case(s).
One extra difficulty is that if you don't approach the base case the system runs out of memory and the error message basically just tells you that you aren't reaching the base case.
This game consists of 3 vertical pegs and n disks of different sizes (n=4 in the diagram below). The goal is to transform the initial state with all the disks on the left (From) peg to the final state with all disks on the middle (To) peg. Each step consists of moving the top disk of one peg to be the top disk of another peg. The key rule is that, at no time, can a large disk be atop a smaller disk.
For example, the only two legal moves from the initial position are
For someone not familiar with recursion, this looks like a hard problem. In fact it is easy!
The base case is n=1: moving a pile of 1 disk from peg From to peg To takes only one move and doesn't even need peg Spare. In our notation the solution is 1:FT.
Referring to the picture let's do n=4. That is, we want to move the 4-3-2-1 pile from peg From to peg To and can use the remaining peg if needed. We denote this goal as 4-3-2-1:FT and note again that the peg not listed (in this case peg Spare) is available for intermediate storage. Solving this 4-disk problem takes three steps.
Step 2 is simple enough. After step 1 peg F (From) has only one disk and peg T has none so of course we can move the lone disk from F to T. But steps 1 and 3 look like another hanoi problem.
Exactly. That's why we're done!!
Indeed the solution works for any number of disks. In words the algorithm is: to move a bunch of disks from one peg to another, first, move all but the bottom disk to the remaining (spare) peg, then move the bottom disk to the desired peg, and finally move the all-but-the-bottom pile from the spare to the desired peg.
Show some emacs solutions. The speed can be changed by customizing `hanoi-move-period'.
How many individual steps are required?
Let S(n) be the number of steps needed to solve an n-disk hanoi problem.
Looking at the algorithm we see:
1. S(1) = 1
2. S(n) = 2S(n-1) + 1 for n > 1
These two equations define what is called a recurrence. In Basic Algorithms (310), you will learn how to solve this and some other (but by no means all) recurrences.
In this class we will be informal. Note that equation 2 says that, if n grows by 1, S(n) nearly doubles.
What function doubles when its argument grows by 1?
Answer: f(n) = 2^n
So we suspect the answer to our recurrence will be something like 2^n. In fact the answer is S(n) = 2^n - 1.
Let's check: S(1) = 2^1 - 1 = 2 - 1 = 1. Good.
Does S(n)=2S(n-1)+1 as required? We need to evaluate both sides and see.
S(n) = 2^n - 1
2S(n-1) + 1 = 2(2^(n-1) - 1) + 1 = 2^n - 2 + 1 = 2^n - 1, as required.
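As a sanity check we can count the moves empirically and compare with 2^n - 1 (a throwaway program of mine, reusing the hanoi() skeleton shown below).

public class HanoiCount {
    private static long moves = 0;

    private static void hanoi(int n, char from, char to, char spare) {
        if (n > 0) {
            hanoi(n-1, from, spare, to);
            moves++;                      // the one real move: disk n goes from -> to
            hanoi(n-1, spare, to, from);
        }
    }
    public static void main(String[] args) {
        for (int n = 1; n <= 10; n++) {
            moves = 0;
            hanoi(n, 'F', 'T', 'S');
            System.out.println(n + ": " + moves + " moves; 2^n - 1 = " + ((1L << n) - 1));
        }
    }
}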
I feel bound to report that there is another simple solution, one that does not use recursion. It is not simple to see that it works and it is not as elegant as the recursive solution (at least not to my eyes).
Do one in class.
public class Hanoi1 {
    public static void main (String[] args) {
        hanoi(4, 'F', 'T', 'S');
    }
    public static void hanoi(int n, char from, char to, char spare) {
        if (n>0) {
            hanoi(n-1, from, spare, to);
            System.out.println(n + ":" + from + to);
            hanoi(n-1, spare, to, from);
        }
    }
}
On the right is a bare-bones solution. Clearly the main program should read in the value for n instead of hard-wiring 4 and the output is spartan. I did this to show that the algorithm itself really is easy: the hanoi() method just has three lines, directly corresponding to the three lines in the algorithm.
Read.
The book's program accepts input, which it error checks, and invokes its hanoi() method. The latter produces more informative output, which is indented proportional to the current depth of recursion.
Read.
Start Lecture #11
Remark: Lab 3 assigned. The wording on 3B was improved Sunday morning.
Definition: A Graph is a data structure consisting of nodes and edges. An edge is viewed as a line connecting two nodes (occasionally the two nodes are equal and the edge is then often referred to as a loop).
Definition: A graph is called Directed if each edge has a distinguished start and end. These edges are drawn as arrows to indicate their direction and such directed edges are normally called arcs.
On the right we see a small directed graph. It has 6 nodes and 7 arcs. We wish to determine reachability, that is for each node, we want to know which nodes can be reached by following a path of arcs.
A node, by definition, can reach itself. This can be thought of as following a path of zero arcs.
For example the reachability of the drawn graph is
0: 0 1 2 3 4 5
1: 1 2 4 5
2: 1 2 4 5
3: 3
4: 4
5: 4 5
One question we must answer is how do we input the graph. We will use what could be called incidence lists. First we input the number of nodes and then, for each node, give the number of nodes directly reachable and a list of those nodes.
For example, the input for the drawn graph is
6
2 1 3
2 2 4
2 1 5
0
0
1 4
Written in the notation used above for reachability, the given arcs are
0: 1 3
1: 2 4
2: 1 5
3:
4:
5: 4
for each successor
    the successor is reachable
    recurse at this successor
To compute reachability we begin by noting that each node is reachable from itself (as mentioned above) and then apply the algorithm on the right.
Let's say we want to know which nodes are reachable from 2. In the beginning we know only that 2 is reachable from 2. The algorithm begins by looking at the successors of 2, i.e., the destinations of the arcs that start at 2. The first successor we find is 1 so we recursively start looking for the successors of 1 and find 4 (so far the nodes reachable from 2 are 2, 1, and 4).
Then we recursively look for successors of 4. There are none so we have finished looking for the successors of 4 and return to the successors of 1.
There is an infinite loop lurking, do you see it?
The next successor of 1 is 2 so we should now look for successors of 2. But, if we look for successors of 2, we will find 1 (and 5) and recursively start looking (again) for successors of 1, then find 2 as a successor of 1, which has 1 as a successor, which ... .
The loop 1 to 2 to 1 to 2 ... is endless. We must do better.
for each successor
    if the successor is not listed
        list it
        recurse at this successor
The fix is that we do not recurse if we have already placed this node on the list of reachable nodes. On the right we see the pseudocode, with this correction.
Let's apply this pseudocode to find the nodes reachable from 2, remembering that, by definition, 2 is reachable from 2.
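Before looking at the full solution, here is a sketch (mine; field and method names need not match the linked code) of how that pseudocode translates into Java. arcs is the ragged array of successors and reachable[j] records whether j has been listed so far.

public class Reachability {
    private int[][] arcs;                 // arcs[i] lists the successors of node i

    public Reachability(int[][] arcs) {
        this.arcs = arcs;
    }
    // Mark everything reachable from node i, given what is already marked.
    private void mark(int i, boolean[] reachable) {
        for (int succ : arcs[i])          // for each successor
            if (!reachable[succ]) {       // if the successor is not listed
                reachable[succ] = true;   //   list it
                mark(succ, reachable);    //   recurse at this successor
            }
    }
    public boolean[] reachableFrom(int i) {
        boolean[] reachable = new boolean[arcs.length];
        reachable[i] = true;              // a node always reaches itself
        mark(i, reachable);
        return reachable;
    }
    public static void main(String[] args) {
        int[][] arcs = { {1,3}, {2,4}, {1,5}, {}, {}, {4} };   // the drawn graph
        boolean[] r = new Reachability(arcs).reachableFrom(2);
        for (int j = 0; j < r.length; j++)
            if (r[j])
                System.out.print(j + " ");   // prints 1 2 4 5
        System.out.println();
    }
}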
The full Java solution is here and I have a handout so that you can read it while something else is on the screen.
This class has two fields:
The constructor is given the number of nodes and creates the fields.
Since the fields have private access, there are methods to set and get various values.
The main program reads input and sets the values in the arcs array. For example, to match our graph the input would be 6 2 1 3 2 2 4 2 1 5 0 0 1 4 as described above. With this input arcs.length==6, arcs[0].length==2, arcs[0][0]==1, arcs[0][1]==3, etc.
The arcs ragged array could be written as follows (the two blank lines represent the fact that there are no arcs leaving nodes 3 and 4).
1 3
2 4
1 5


4
In class draw the arcs ragged array showing the full Java details.
The two print routines are straightforward; we shall see their output in a few minutes.
CalcReachability() itself does very little.
As indicated, the helper does the real work. It is basically the pseudocode above translated into Java.
One interesting question is, "Where is the base case?".
It is not trivial to find, but it is there! The base case is the case when each successor has already been marked reachable. Note that in this case, the if predicate evaluates to false for each iteration. Thus the method returns without calling itself recursively.
Question: What does this method accomplish? It doesn't ever return a value.
Answer: It (sometimes) modifies the reachable parameter.
Question: How can that be an accomplishment since Java is call-by-value and changes to parameters are not seen by the caller?
Answer: The usual answer ... reference semantics.
Go over execution of the entire program on the input used for the diagram above.
Homework: Draw a directed graph with 7 nodes so that node 1 can reach all nodes and all other nodes can reach only themselves.
For this section we are considering only what are called singly-linked lists, where all the link pointers go in the same direction. All our diagrams to date have been of singly-linked lists, but we will also study doubly-linked lists, which have pointers in both directions (so called next and prev pointers).
If we use a loop to process a (singly-)linked list, we can go only forwards so it becomes awkward when we need to access again a node that we have previously visited.
With recursive calls instead of a loop, we will visit the previous links again when the routines return. Hopefully, the next example will make this clear.
Hand out copies of LLStringNode.java and DemoReversePrint.java, which might not fit conveniently on the screen. These routines can also be found here.
public class LLStringNode {
    private String info;
    private LLStringNode link = null;

    public LLStringNode(String info) {
        this.info = info;
    }
    public String getInfo() {
        return info;
    }
    public void setInfo(String info) {
        this.info = info;
    }
    public LLStringNode getLink() {
        return link;
    }
    public void setLink(LLStringNode link) {
        this.link = link;
    }
}

public class DemoReversePrint {
    public static void main (String[] args) {
        LLStringNode node1 = new LLStringNode("node # 1");
        LLStringNode node2 = new LLStringNode("node # 2");
        LLStringNode node3 = new LLStringNode("node # 3");
        LLStringNode node4 = new LLStringNode("node # 4");
        node1.setLink(node2);
        node2.setLink(node3);
        node3.setLink(node4);
        System.out.println("Printing in original order");
        printFor(node1);
        System.out.println("Printing in reverse order");
        printRev(node1);
        System.out.println("Printing forward recursively");
        printForRec(node1);
    }
    public static void printFor(LLStringNode node) {
        while (node != null) {
            System.out.println("  " + node.getInfo());
            node = node.getLink();
        }
    }
    public static void printRev(LLStringNode node) {
        if (node.getLink() != null)
            printRev(node.getLink());
        System.out.println("  " + node.getInfo());
    }
    public static void printForRec(LLStringNode node) {
        System.out.println("  " + node.getInfo());
        if (node.getLink() != null)
            printForRec(node.getLink());
    }
}
LLStringNode.java is quite simple. Each node has two fields, the constructor initializes one and there is a set and get for each.
The main program first creates a four node list node1-->node2-->node3-->node4 and then prints it three times. The first time the usual while loop is employed and the (data components of the) nodes are printed in their original order.
The second time recursion is used to get to the end of the list (without printing) and then printing as the recursion unwinds. Show this in class.
The third time recursion is again used but values are printed as the recursion moves forward not as it unwinds. Show in class that this gives the list in the original order.
while (node != null)
    push info on stack
    node = node.getLink()
while (!stack.empty())
    print stack.top()
    stack.pop()
On the right we see pseudocode for a non-recursive, reversing print. As we shall soon see the key to implementing recursion is the clever use of a stack. Recall that an important property of dynamically nested function calls is their stack-like LIFO semantics. If f() calls g() calls h(), then the returns are in the reverse order h() returns, then g() returns, then f() returns.
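Translated into Java (a sketch of mine, using the library's ArrayDeque as the stack and assuming the LLStringNode class above), the non-recursive reversing print might look like this.

import java.util.ArrayDeque;
import java.util.Deque;

public class PrintRevIterative {
    public static void printRevIter(LLStringNode node) {
        Deque<String> stack = new ArrayDeque<String>();
        while (node != null) {            // push info on stack
            stack.push(node.getInfo());
            node = node.getLink();
        }
        while (!stack.isEmpty())          // print stack.top(); stack.pop()
            System.out.println("  " + stack.pop());
    }
}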
Consider the recursive situation: main calls f() calls f() calls f(). When the first call occurs we store the information about the f() on a stack. When the second call occurs, we again store the information, but now for the second f(). Similarly, when the third call occurs, we store the information for the third f(). When this third f() returns, we pop the stack and have restored the environment of the second f(). When this invocation of f() returns, we pop again and have restored the environment of the original f(). This is shown in more detail just below.
Homework: The printFor() method prints each node once. Write printForN(), which prints the first node once, the second node twice, the third node three times, ..., the Nth node N times.
If f() calls f() calls f() ..., we need to keep track of which f we are working on. This is done by using a stack and keeping a chunk of memory (an activation record) for each active execution of f(). The first diagram below shows the life history of the activation stack for an execution of f(2) where f() is the fibonacci function we saw at the beginning of the chapter. Below that is the activation stack for f(3).
At all times the system is executing the invocation at the current top of the stack and is ignoring all other invocations.
n = getInput.nextInt();
double[] x = new double[n];
Some older programming languages (notably old versions of Fortran) did not have recursion and required all array bounds to be constant (so the Java code on the right was not supported). With these languages the compiler could determine, before execution began, where each variable would be located.
This rather rigid policy is termed static storage allocation since the storage used is fixed prior to execution beginning.
In contrast to the above, newer languages like Java support both recursion and arrays with bounds known only during run time. For these languages additional memory must be allocated while execution occurs. This is called dynamic storage allocation.
One example, which we have illustrated above is the activation stack for recursive programs like fibonacci.
There is one case where it is easy to convert a recursive method to an iterative one. A method is called tail-recursive if the only recursion is at the tail (i.e., the end). More formally
Definition: A method f() is tail-recursive if the only (direct or indirect) recursive call in f() is a direct call of f() as the very last action before returning.
The big deal here is that when f() calls f() and the 2nd f() returns, the first f() returns immediately thereafter. Hence we do not have to keep all the activation records.
On the near right (recursive):

int gcd (int a, int b) {
    if (a == b)
        return a;
    if (a > b)
        return gcd(a-b, b);
    return gcd(a, b-a);
}

On the far right (iterative, with goto; not legal Java):

int gcd (int a, int b) {
 start:
    if (a == b)
        return a;
    if (a > b) {
        a = a-b;
        goto start;
    }
    b = b-a;
    goto start;
}
As an example consider the program on the near right. This program computes the greatest common divisor (gcd) of two positive integers. As the name suggests, the gcd is the largest integer that (evenly) divides each of the two given numbers. For example gcd(15,6)==3.
Do gcd(15,6) on the board showing the activation stack.
It is perhaps not clear that this program actually computes the correct value even though it does work when we try it. We are actually not interested in computing gcd's so let's just say we are interested in the program on the near right and consider it a rumor that it does compute the gcd.
Now look at the program on the far right above. It is not legal Java since Java does not have a goto statement. The (famous/infamous/notorious?) goto simply goes to the indicated label.
Again execute gcd(15,6) and notice that it does the same thing as the recursive version but does not make any function calls and hence does not create a long activation stack.
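In legal Java the two goto's become a loop; this rewrite (mine) behaves exactly like the far-right version, again assuming positive arguments.

public class Gcd {
    static int gcd(int a, int b) {
        while (a != b) {                  // each iteration replaces one recursive call
            if (a > b)
                a = a - b;
            else
                b = b - a;
        }
        return a;
    }
    public static void main(String[] args) {
        System.out.println(gcd(15, 6));   // prints 3
    }
}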
Remark: Good compilers, especially those for functional languages, which make heavy use of recursion, automatically convert tail recursion to iteration.
Recursion is a powerful technique that you will encounter frequently in your studies. Notice how easy it made the hanoi program.
It is often useful for linked lists and very often for trees. The reason for the importance with trees is that a tree node has multiple successors so you want to descend from this node several times. You can't simply loop to go down since you need to remember for each level what to do next.
Draw a picture to explain and compare it to the fibonacci activation stack picture above.
Start Lecture #12
Remark: Wording change in honors supplement.
Whereas the stack exhibits LIFO (last-in/first-out) semantics, queues have FIFO (first-in/first-out) semantics. This means that insertions and removals occur at different ends of a queue; whereas, they occur at the same end of a stack.
A good physical example for stacks is the dish stack at Hayden; a good physical example for a queue is a line to buy tickets or to be served by a bank teller. In fact in British English such lines are called queues.
Following the example of a ticket line, we call the site where items are inserted the rear of the queue and the site where items are removed the front of the queue.
Personally, I try to reserve the term queue for the structure described above. Some call a stack a LIFO queue (I shudder when I hear that) and thus call a queue a FIFO queue. Unfortunately, the term priority queue is absolutely standard and refers to a structure that does not have FIFO semantics. We will likely discuss priority queues later this semester, but not in this chapter and never by the simple name queue.
As with stacks there are two basic operations: insertion and removal. For stacks those are called push and pop; for queues they are normally called enqueue and dequeue.
We have discussed real-world examples, but queues are very common in computer systems.
One use is for speed matching. Here is an example from 202 (operating systems). A disk delivers data at a fixed rate determined by the speed of rotation and the density of the bits on the medium. The OS cannot change either the bit density or the rotation rate. If software can't keep up (for example the processor is doing something else), the disk cannot be slowed or stopped. Instead the arriving data must be either saved or discarded (both options are done).
When the choice is to save the data until needed, a queue is used so that the software obtaining the data, gets it in the same order it would have were it receiving the data directly from the disk.
This use of a queue is often called buffering and these queues are often called buffers.
As with stacks, we will distinguish between queues having limited capacity (bounded queues) and those with no such limit (unbounded queues). Following the procedure used for stacks, we give 3 interfaces for queues: the all inclusive QueueInterface, which describes the portion of the interface that is common to both bounded and unbounded queues; BoundedQueueInterface, the extension describing bounded queues; and UnboundedQueueInterface, the extension for unbounded queues.
On the right we see the relationship between the three interfaces.
Recall that the arrows with single open triangles signify extends. For example, BoundedQueueInterface extends QueueInterface.
Below we see a larger diagram covering all the interfaces and classes introduced in this chapter.
package ch05.queues;
public interface QueueInterface<T> {
    T dequeue() throws QueueUnderflowException;
    boolean isEmpty();
}
For any queue we need enqueue and dequeue operations. Note, however, that the enqueue is different for bounded and unbounded queues. A full bounded queue cannot accommodate another element. This cannot occur for an unbounded queue, which is never full. Thus neither the isFull() predicate nor the enqueue() mutator can be specified for an arbitrary queue. As a result only the isEmpty() predicate and the dequeue() mutator are specified.
If the mutator encounters an empty queue, it raises an exception.
package ch05.queues;
public interface BoundedQueueInterface<T> {
   void enqueue(T element) throws QueueOverflowException;
   boolean isFull();
}
For bounded queues we do need an isFull() predicate and the enqueue() method, when encountering a full queue, throws an exception.
package ch05.queues;
public interface UnboundedQueueInterface<T> {
   void enqueue(T element);
}
Unbounded queues are simpler to specify. Since they are never full, there is no isFull() predicate and the enqueue() method does not raise an exception.
Note the following point, which can be seen from the diagram: any queue can underflow (a dequeue applied to an empty queue), but only a bounded queue can overflow.
import ch05.queues.*;
import java.util.Scanner;
public class DemoBoundedQueue {
   public static void main (String[] args) {
      Scanner getInput = new Scanner(System.in);
      String str;
      BoundedQueueInterface<Integer> queue = new ArrayBndQueue<Integer>(3);
      while (!(str=getInput.next()).equals("done"))
         if (str.equals("enqueue"))
            queue.enqueue(getInput.nextInt());
         else if (str.equals("dequeue"))
            System.out.println(queue.dequeue());
         else
            System.out.println("Illegal");
   }
}
The example in the book creates a bounded queue of size three; inserts 3 elements; then deletes 3 elements. That's it.
We can do much more with a 3 element queue. We can insert and delete as many elements as we like provided there are never more than three enqueued at once.
For example, consider the program on the right. It accepts three kinds of input.
An unbounded number of enqueues can occur just as long as there are enough dequeues to prevent more than 3 items in the queue at once.
The Java library has considerably more sophisticated queues than the ones we will implement, so why bother? There are several reasons.
class SomethingOrOtherQueue<T>
All these implementations will be generic so each will have a header line of the form shown on the right, where T is the class of the elements.
For example the ArrayBndQueue<Integer> class will be an array-based bounded queue of Integer's. Remember that ArrayBndQueue<float> is illegal.
Why?
Answer: float is a primitive type, and a class type (e.g., Float) is required.
The basic idea is clear: Store the queue in an array and keep track of the indices that correspond to the front and rear of the queue. But there are details.
The book's (reasonable) choice is that the current elements in the queue start at index front and end at index rear. Some minor shenanigans are needed for the case when the queue has no elements.
First we try to mimic the stack approach where one end of the list is at index zero and the other end moves. In particular, the front index is always zero (signifying that the next element to delete is always in slot zero), and the rear index is updated during inserts and deletes so that it always references the slot containing the last item inserted.
The following diagram shows the life history of an ArrayBndQueue<Character> class of size 4 from its creation through four operations, namely three enqueues and one dequeue.
Since the Front of the queue is always zero, we do not need to keep track of it. We do need a rear data field.
Go over the diagram in class, step by step.
Note that the apparent alias in the last frame is not a serious problem since slot number 2 is not part of the queue. If the next operation is an enqueue, the alias will disappear. However, some might consider it safer if the dequeue explicitly set the slot to null.
Homework: Answer the following questions (discuss them in class first)
This queue would be easy to code; let's sketch it out in class.
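Here is a minimal sketch of what we might produce, ignoring the book's interfaces and exception classes (RuntimeException stands in for them, and the class name is mine):

public class FixedFrontQueue<T> {
    protected T[] queue;        // the front is always slot 0
    protected int rear = -1;    // index of the last element; -1 when empty

    public FixedFrontQueue(int capacity) {
        queue = (T[]) new Object[capacity];   // the usual generic-array trick
    }

    public boolean isEmpty() { return rear == -1; }
    public boolean isFull()  { return rear == queue.length-1; }

    public void enqueue(T element) {          // O(1)
        if (isFull())
            throw new RuntimeException("queue overflow");
        queue[++rear] = element;
    }

    public T dequeue() {                      // O(N): slide everything up
        if (isEmpty())
            throw new RuntimeException("queue underflow");
        T ans = queue[0];
        for (int i = 0; i < rear; i++)
            queue[i] = queue[i+1];
        queue[rear--] = null;                 // clear the vacated slot
        return ans;
    }
}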
This queue is simple to understand and easy to program, but is not normally used. Why?
The loop in dequeue() requires about N steps if N elements are currently enqueued. The technical terminology is that enqueue() is O(1) and dequeue() is O(N).
Circular Arrays
This next design will have O(1) complexity for
both enqueue() and dequeue().
We will see that it has an extra
trick (modular arithmetic)
and needs a little more care.
As mentioned in the very first lecture, we will sometimes give up
some simplicity for increased performance.
The following diagram shows the same history as the previous one, but with an extra dequeue() at the end. Note that now the value of Front changes and thus must be maintained.
Note first that the last frame represents a queue with only one element. Also note that, similar to the previous diagram, the apparent aliases aren't serious and can be eliminated completely by setting various slots null.
We might appear to be done, both operations are O(1). But, no.
Imagine now three more operations: enqueue('D'); dequeue(); enqueue('E'). The number of elements in the queue goes from the current 1, to 2, back to 1, back to 2.
However, it doesn't work because the value of Rear is now 4, which does not represent a slot in the array.
Note that alternating inserts and deletes move both Front and Rear up, which by itself is fine. The trouble is we lose the space below!
That is why we want a circular array so that we consider the slot after 3 to be slot 0. The main change when compared to the previous implementation is that instead of rear++ or ++rear we will use rear=(rear+1)%size
In the fixed-front implementation the number of elements in the queue was Rear+1 so, by maintaining Rear, we were also tracking the current size. That clearly doesn't work here as we can see from the picture just above. However the value Rear+1-Front does seem to work for the picture and would have also worked previously (since then Front was 0).
Nonetheless in the present implementation we will separately track the current size. See the next (optional) section for the reason why.
I will follow the book and explicitly track the number of elements in the queue (adding 1 on enqueues and subtracting 1 on dequeues). This seems redundant, as mentioned in the immediately preceding section.
But what happens when Rear wraps around, e.g., when we enqueue two more elements in the last diagram? Rear would go from 2 to 3 to 0 and (Rear-Front)+1 becomes (0-2)+1=-1, which is crazy. There are three elements queued, certainly not -1. However, the queue capacity is 4 and -1 mod 4 = 3. Indeed (Rear+1-Front) mod Capacity seems to work.
But now comes the real shenanigans about full/empty queues. If you carry out this procedure for a full queue and for an empty queue you will get the same value. Indeed using Front and Rear you can't tell full and empty apart.
One solution is to declare a queue full when the number of elements is one less than the number of slots. The book's solution is to explicitly count the number of elements, which is fine, but I think the subtlety should be mentioned, even if only as a passing remark.
The fixed-front implementation is simpler and does not require tracking the current size. But the O(N) dequeue complexity is a serious disadvantage, and I believe the circular implementation is more common for that reason.
This class is simple enough that we can get it (close to) right in class ourselves. The goal is that we write the code on the board without looking anywhere. To simplify the job, we will not worry about QueueInterface, BoundedQueueInterface, packages, and CLASSPATH.
You should compare our result with the better code in the book.
We certainly need the queue itself, and the front and rear pointers to tell us where to remove/insert the next item. As mentioned we also need to explicitly track the number of elements currently present. We must determine the correct initial values for front and rear, i.e. what their values should be for an empty queue.
Finally, there is the sad story about declaring and allocating a generic array (an array whose component type is generic).
Although we are not officially implementing the interfaces, we will supply all the methods mentioned, specifically dequeue(), enqueue(), isEmpty(), and isFull().
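Here is a minimal sketch of what we might write on the board; RuntimeException again stands in for the book's exception classes, and the field names are my choices:

public class ArrayBndQueue<T> {
    protected T[] queue;
    protected int front = 0;        // slot of the next element to dequeue
    protected int rear;             // slot of the last element enqueued
    protected int numElements = 0;  // tracked explicitly, as discussed

    public ArrayBndQueue(int capacity) {
        queue = (T[]) new Object[capacity];
        rear = capacity - 1;        // so the first enqueue wraps to slot 0
    }

    public boolean isEmpty() { return numElements == 0; }
    public boolean isFull()  { return numElements == queue.length; }

    public void enqueue(T element) {              // O(1)
        if (isFull())
            throw new RuntimeException("queue overflow");
        rear = (rear + 1) % queue.length;         // the circular trick
        queue[rear] = element;
        numElements++;
    }

    public T dequeue() {                          // O(1)
        if (isEmpty())
            throw new RuntimeException("queue underflow");
        T ans = queue[front];
        queue[front] = null;                      // avoid a lingering alias
        front = (front + 1) % queue.length;
        numElements--;
        return ans;
    }
}

Note that both enqueue() and dequeue() are O(1); the only subtleties are the modular increment and the initial value of rear.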
I grabbed the main program from the end of section 5.2 (altered it slightly) and stuck it at the end of the class we just developed. The result is here.
In chapter 3 on stacks, after doing a conventional array-based
implementation, we mentioned the Java library
class ArrayList, which implements an
unbounded array, i.e., an array that grows in size when
needed.
Now, we will implement an unbounded array-based
queue.
The idea is that an insert into a full queue increases the size of the underlying array, copies the old data to the new structure, and then proceeds. Specifically, we need to change the following code in ArrayBndQueue. The diagram shows a queue expansion.
Let's do the mods in class. The first step is to figure out enlarge() based on the picture.
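As a starting point, here is one possible enlarge(), sketched under the assumption that the fields are named as in the circular sketch above (queue, front, rear, numElements); doubling the capacity is my choice, not necessarily the book's:

// Called by enqueue() instead of throwing overflow.
protected void enlarge() {
    T[] larger = (T[]) new Object[2 * queue.length];
    for (int i = 0; i < numElements; i++)               // copy in queue order,
        larger[i] = queue[(front + i) % queue.length];  // unwrapping as we go
    queue = larger;
    front = 0;                     // the queue starts over at slot 0
    rear = numElements - 1;
}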
The UML diagrams are helpful for writing applications so here is the full one for queues, including linked types we haven't yet studied.
Start Lecture #13
Homework: 1, 2, 6 (these three should have been assigned earlier) 13.
This simple program proceeds as follows. Read in a string and then push (on a stack) and enqueue (on a queue) each element. Finally, pop and dequeue each element. To be a palindrome, the first element popped must match the first element dequeued, the second popped must match the second dequeued, etc.
public class Palindrome { public static boolean isPalindrome(String str) { ... } }
Recall that a palindrome reads the same left to right as right to left. Let's write the predicate isPalindrome() using the procedure just described.
Note that this is not a great implementation; it is being used just to illustrate the relation between stacks and queues. For a successful match each character in the string is accessed 5 times.
A better program (done in 101!) is to attack the string with charAt() from both ends. This accesses each character only once.
Indeed, if our 102 solution didn't use a queue and just reaccessed the string with charAt() a second time, we would have only 4 accesses instead of 5.
The book's program only considers letters so
AB?,BA would
be a palindrome.
This is just an early if statement to ignore non-letters (the library Character class has an isLetter() predicate, which does all the work).
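A sketch of the whole predicate follows. For self-containment it uses java.util.ArrayDeque in both roles (as the stack and as the queue) instead of the chapter's classes, and the decision to ignore case is mine:

import java.util.ArrayDeque;
import java.util.Deque;

public class Palindrome {
    public static boolean isPalindrome(String str) {
        Deque<Character> stack = new ArrayDeque<Character>();  // LIFO
        Deque<Character> queue = new ArrayDeque<Character>();  // FIFO
        for (int i = 0; i < str.length(); i++) {
            char c = str.charAt(i);
            if (Character.isLetter(c)) {          // ignore non-letters
                char lc = Character.toLowerCase(c);
                stack.push(lc);
                queue.addLast(lc);
            }
        }
        // A palindrome iff the reversed (popped) order matches the
        // original (dequeued) order.
        while (!stack.isEmpty())
            if (stack.pop().charValue() != queue.removeFirst().charValue())
                return false;
        return true;
    }
}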
Normal I/O and use the palindrome class.
Read.
In this card game you play cards from the top and, if you win
the
battle, you place the won cards at the bottom.
So each player's hand can be modeled as a queue.
main()
   loop N times
      flip a coin
      if heads produce() else consume()

produce()
   if queue is not full
      generate an item
      enqueue the item

consume()
   if queue is not empty
      dequeue an item
      print the item
A very real application of queues is for producer-consumer problems, where some processes produce items that other processes consume. To do this properly would require our studying Java threads, a fascinating (but very serious) topic that sadly we will not have time to do.
For a real world example, imagine email. At essentially random times mail arrives and is enqueued. At other essentially random times, the mailbox owner reads messages. (Of course, most mailers permit you to read mail out of order, but ignore that possibility.)
We model the producer-consumer problem as shown on the right. This model is simplified from reality. If the queue is full, our produce is a no-op; similarly for an empty queue and consume.
As you will learn in 202, if the queue is
full, all producers are
blocked, that is, the OS stops
executing producers until space becomes available (by a consumer
dequeuing an item).
Similarly, consumers are blocked when the queue is empty and wait
for a producer to enqueue an item.
This is easy. Ask Math.random() for a double between 0 and 1. If the double exceeds 0.5, declare the coin heads. Actually this is flipping a fair coin. By changing the value 0.5 we can change the odds of getting a head.
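In code, the flip might look like the following (the method name is mine):

// probHeads = 0.5 gives a fair coin; larger values bias toward heads.
static boolean flipHeads(double probHeads) {
    return Math.random() < probHeads;   // Math.random() is uniform on [0,1)
}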
A simple Java program, using the book's ArrayBndQueue is here.
By varying the capacity of the queue and N, we can see the importance of sufficiently large queues. I copied the program and all of .../ch05/queues to i5 so we can demo how changing the probability of heads affects the outcome.
As with linked stacks, we begin linked queues with a detailed diagram showing the result of creating a two-node structure, specifically the result of executing:
UnboundedQueueInterface<Integer> queue = new LinkedUnbndQueue<Integer>();
queue.enqueue(new Integer(10));
queue.enqueue(20);   // Uses autoboxing
Normally, one does not show all the details we have on the right, especially since some of them depend on Java. A typical picture for this queue is shown below. Given the Java code, you should be able to derive the detailed picture on the right from the simplified picture below.
Homework: 30.
The goal is to add a new node at the rear. An unbounded queue is never full, so overflow is not a concern.
Your first thought might be that this is a simple 3-step procedure: create the new node; make the current last node link to it; make rear reference it.
There is a subtle failure lurking here.
Presumably, the
current last node is the one currently
pointed to by rear.
That assertion is correct most, but not all, of the
time, i.e., it is wrong.
The failure occurs when inserting into an empty queue, at which point rear does not point to the last node; instead rear==null.
The fix is to realize that for an initially empty queue we want front (rather than the non-existent last node) to point to the new node.
To motivate this last statement, first imagine inserting into the queue drawn above, which currently contains the Integers 10 and 20. You want the new node to be the successor of (i.e., be pointed to by) the 20-node, which is two nodes after front.
Now imagine a smaller queue with only the 10-node; this time the new node should be the successor of the 10-node, which is one node after front.
public void enqueue(T data) {
   LLNode<T> newNode = new LLNode<T>(data);
   if (isEmpty())
      front = newNode;
   else
      rear.setLink(newNode);
   rear = newNode;
}
Now imagine a yet smaller queue with no nodes. By analogy, this time the new node should be the successor of the node zero nodes after front, i.e., it should be the successor of front itself.
The resulting method is shown on the right.
The goal is to remove the front node. As with enqueue, dequeue seems simple.
public T dequeue() {
   if (isEmpty())
      throw new QueueUnderflowException("Helpful msg");
   T ans = front.getInfo();
   front = front.getLink();
   if (front == null)
      rear = null;
   return ans;
}
Again this works only most of the time, i.e., is wrong.
Clearly we can't dequeue from an empty queue. Instead we raise an exception.
The more subtle concern involves a 1-element queue. In this case front and rear point to the node we are removing. Thus we must remember to set rear=null, as shown in the code on the right.
Since QueueUnderflowException is unchecked, we are not required to include a throws clause in the method header.
package ch05.queues;
import support.LLNode;
public class LinkedUnbndQueue<T> implements UnboundedQueueInterface<T> {
   private LLNode<T> front=null, rear=null;
   public boolean isEmpty() {
      return front == null;
   }
   public void enqueue(T data) { /* shown above */ }
   public T dequeue() { /* shown above */ }
}
The entire package is on the right. An empty queue has front and rear both null. A nonempty queue has neither null, so isEmpty() is easy.
The implementation on the right uses the generic LLNode class found in the support package.
If we are a little clever, we can omit the front pointer. The idea is that, since the link component of the rear node is always null, it conveys no information. Instead of null we store there a pointer to the first node and omit the front data field from the queue itself.
The result, shown on the right, is often called a circular queue. Although it clearly looks circular, one could say that it doesn't look like a queue since it doesn't have two ends. Indeed, from a structural point of view it has no ends. However, we can write enqueue and dequeue methods that treat the rear as one end and the successor-of-rear (which is the front) as the other end.
An empty queue has rear null, and a one-element queue has its single node point to itself. Some care is needed when constructing the methods to make sure that they work correctly for such small queues (the so-called boundary cases).
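Here is a sketch of such methods, using the same LLNode<T> class as before; the class name is mine, and RuntimeException stands in for the book's underflow exception:

public class CircLinkedQueue<T> {
    private LLNode<T> rear = null;   // rear.getLink() is the front

    public boolean isEmpty() { return rear == null; }

    public void enqueue(T data) {
        LLNode<T> newNode = new LLNode<T>(data);
        if (isEmpty())
            newNode.setLink(newNode);        // a lone node points to itself
        else {
            newNode.setLink(rear.getLink()); // new node points to the front
            rear.setLink(newNode);
        }
        rear = newNode;
    }

    public T dequeue() {
        if (isEmpty())
            throw new RuntimeException("queue underflow");
        LLNode<T> front = rear.getLink();
        if (front == rear)                   // removing the only node
            rear = null;
        else
            rear.setLink(front.getLink());
        return front.getInfo();
    }
}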
The rough comparison is similar to the stack situation and indeed many other array/linked trade-offs.
With the linked implementation you only allocate as many nodes as you actually use; the unbounded array implementation approximates this behavior; whereas, for the bounded array implementation, you must allocate as many slots as you might use.
However, each node in a linked implementation requires a link field that is not present in either array implementation.
All are fast. The linked enqueue requires the runtime creation of memory, which costs more than simple operations. However, the array constructors require time O(#slots) since Java initializes each slot to null, and the unbounded array implementation must also copy all the elements whenever it enlarges.
We run races below.
Read.
The goal is to quantify how fast the queues are in practice. We will compare the two unbounded queues. Since they both implement the same interface we can declare them to be the same type and write one method that accepts either unbounded queue and performs the timing.
There are many possibilities; what I chose was to enqueue N elements and then dequeue them. This set of 2N operations was repeated M times.
The results obtained for N=1,000,000 and M=1,000 were:

An array-based unbounded queue requires 71216 milliseconds
   to enqueue and dequeue 1000000 elements 1000 times.
A linked-based unbounded queue requires 31643 milliseconds
   to enqueue and dequeue 1000000 elements 1000 times.
So 2 billion operations take about 50 seconds, or 40 MOPS (Millions of Operations Per Second), which is pretty fast.
Timing events is fairly easy in Java; as usual, this is due to the extensive library. For this job, it is the Date class in java.util that performs the heavy lifting.
When you create a date object (using new of course), a field in the object is initialized to represent the current time and this value can be retrieved using the getTime() method.
Specifically, the value returned by getTime() is the number of milliseconds from a fixed time to the time this date object was created. That fixed time happens to be 1 January 1970 00:00:00 GMT, but any fixed time would work.
Therefore, if you subtract the getTime() values for two Date objects, you get the number of milliseconds between their creation times.
Let's write a rough draft in class. My solution is here.
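For reference, the timing method of such a draft might look like the following sketch (the class and method names are mine, and it assumes the ch05.queues package is on the CLASSPATH):

import ch05.queues.*;
import java.util.Date;

public class TimeQueues {
    // Time m repetitions of (n enqueues followed by n dequeues).
    static long time(UnboundedQueueInterface<Integer> queue, int n, int m) {
        Date start = new Date();
        for (int rep = 0; rep < m; rep++) {
            for (int i = 0; i < n; i++)
                queue.enqueue(i);            // autoboxing
            for (int i = 0; i < n; i++)
                queue.dequeue();
        }
        return new Date().getTime() - start.getTime();   // milliseconds
    }

    public static void main(String[] args) {
        System.out.println(
            time(new LinkedUnbndQueue<Integer>(), 1000000, 1000) + " ms");
    }
}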
Queues are an important data structure. Their key property is the FIFO behavior exhibited by enqueues and dequeues.
We have seen three implementations: an array-based implementation where each created queue is of a fixed size, and both array-based and linked implementations where queues grow in size as needed during execution.
Remark: End of Material on Midterm.
Start Lecture #14
Start Lecture #15
Start Lecture #16
Remarks:
Lists are very common in both real life and computer programs. Software lists come in a number of flavors and the division into subcategories is not standardized.
The book and I believe most authors/programmers use the term list
in a quite general manner.
The Java library, however, restricts a list to be ordered.
The most general list-like interface in the library
is Collection<E>, and the (very extensive)
ensemble of classes and interfaces involved is called the
Collection Framework.
The library is rather sophisticated and detailed; many of the classes involved implement (or inherit) a significant number of methods. Moreover, the implementations strive for high performance. We shall not be so ambitious.
Differences between various members of the collection include.
We will be defining predicates contains() that determine if a given list contains a given element. For this to make sense, we must know when two elements are equal.
Since we want some lists to be ordered, we must be able to tell if one element is less than another.
We have seen that, for objects, the == operator tests if the references are equal, not if the values in the two objects are equal. That is, o1==o2 means that o1 and o2 refer to the same object. Sometimes the semantics of == is just right, but other times we want to know if objects contain the same value.
Recall that the Object class defines an equals() method that has the == semantics. Specifically, o1.equals(o2) if and only if both refer to the same object.
Every class is a descendant of Object, so equals() is always defined. Many classes, however, override equals() to give it a different meaning. For example, for String's, s1.equals(s2) is true if and only if the two strings themselves are equal (not requiring that the references be equal).
Imagine a Circle class having three double fields giving the circle's radius and the coordinates of its center. We might decide to define equals() for circles to mean that their radii are equal, even if the centers are different. That is, equals() need not mean identical.
public boolean equals (Circle circle) {
   return this.radius == circle.radius;
}

public boolean equals (Object circle) {
   if (circle instanceof Circle)
      return this.radius == ((Circle) circle).radius;
   return false;
}
The definition of equals() given by Dale is simple, but flawed due to a technical fine point of Java inheritance. The goal is for two circles to be declared equal if they have equal radii. The top code on the right seems to do exactly that.
Unfortunately, the signature of this method differs from the one defined in Object, so the new method only overloads the name equals(). The bottom code does have the same signature as the one in Object, and thus overrides that method.
The result is that in some circumstances, the top code will cause circles to be compared using the equals() defined in Object. The details are in Liang section 11.10 (the 101 text) and in the corresponding section of my 101 lecture notes.
Homework: 1, 2.
To support an ordered list, we need more than equality testing. For unequal items, we need to know which one is smaller. Since I don't know how to tell if one StringLog is smaller than another, I cannot form an ordered list of StringLog's.
As a result, when we define the ArraySortedList<T> class, we will need to determine the smaller of two unequal elements of T. In particular, we cannot construct an ArraySortedList<StringLog>.
Java offers a cryptic header that says what we want.
Specifically, we can write
ArraySortedList<T extends Comparable<T>>
which says that any class plugged in for T must implement Comparable. We shall soon see that implementing Comparable means that we can tell which of two unequal elements is smaller.
Slightly more general and even more cryptic is
the header
ArraySortedList<T extends Comparable<? super T>>
I hope we will not need to use this.
Many classes define a compareTo() instance method taking one argument from the same class and returning an int. We say that x is smaller than y if x.compareTo(y)<0 and that x is greater than y if x.compareTo(y)>0. Finally, it is normally true (and always for us) that x.compareTo(y)==0 if x.equals(y).
The Java library defines a Comparable interface, which includes just one method, compareTo(), as described above. Hence a class C implements Comparable if C defines compareTo() as desired. For such classes, we can decide which of two unequal members is smaller and thus these are exactly the classes T for which ArraySortedList<T> can be used.
More precisely, recent Java implementations (those with generics) define the Comparable<T> interface, where T specifies the class of objects that the given object can be compared to.
An example may help.
Many library classes (e.g., String)
implement Comparable<T>, where T is the
class itself.
Specifically the header for String states that
String implements Comparable<String>
This header says that the String instance method compareTo() takes a String argument.
Since String does implement Comparable (more precisely Comparable<String>), we can define ArraySortedList<String>.
StringLog does not implement Comparable at all; so we cannot write ArraySortedList<StringLog>.
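To make this concrete, here is a hypothetical version of the earlier Circle class that implements Comparable<Circle> by comparing radii (consistent with its equals()), which would make ArraySortedList<Circle> legal:

public class Circle implements Comparable<Circle> {
    private double radius, x, y;      // radius and center, as before

    public Circle(double radius, double x, double y) {
        this.radius = radius; this.x = x; this.y = y;
    }

    // Negative, zero, or positive as this circle's radius is smaller
    // than, equal to, or larger than the other's.
    public int compareTo(Circle other) {
        return Double.compare(this.radius, other.radius);
    }
}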
Homework: 6.
As mentioned above the exact definition of list is not standardized. We shall follow the book's usage. One property of our lists is that each non-empty list has a unique first element and a unique last element (in a 1-element list the first and last elements are the same). Moreover the elements in a non-empty list have a linear relationship.
Definition: A set of elements has a Linear Relationship if each element except the first has a unique predecessor and each element except the last has a unique successor.
Lists can be unsorted, sorted, or indexed.
These assumptions are straight from the book. Chief among them is the current position: the position of the next element accessed by getNext(). This position is incremented by getNext(), zeroed by reset(), and unchanged otherwise.
Start Lecture #17
We see below on the right the two interfaces for lists. The top interface applies to all lists, the bottom gives the additional methods available when the list is indexed.
In both cases elements are
equivalent if
the equals() method in T says so.
As always, when we say that such and such an object is returned, we mean that a reference to the object is returned.
Just a few comments are needed as many of the methods are self-explanatory. The exception is the pair reset()/getNext(), which is discussed in a separate section by itself.
The toString() method returns a nicely formatted representation of the list.
In an indexed list elements can be referred to by index (as in an array, a good model for such lists). Legal values for the index are 0..size()-1. All methods accepting an index throw an IndexOutOfBoundsException if given an illegal value.
Since elements have a linear relationship, it makes sense to talk of looping over the list from the first element to the last. The Java library has a super-slick and somewhat complicated mechanism to do this in an elegant manner (see the Iterable<T> interface for details). We follow the book and define the following modest looping procedure.
All our List implementations keep track of the
current position of the list.
list.reset();
for (int i=0; i<list.size(); i++) {
   element = list.getNext();
   // play with element
}
The code on the right shows a typical usage of these methods for a List named list. We use size() to iterate the correct number of times and use getNext() to advance to the next element.
As is often the case there are some technical details to mention. The easy detail is that, if getNext() is called when at the end of the list, the current position becomes the first element.
More interesting is what to do about empty lists and what happens
if we modify the list while
playing with an element.
Our solution is basically to say
Don't do that!.
The library permits some modifications, but for those not
permitted it also says
Don't do that!.
The requirements are that whenever list.getNext() is called, the list is not empty and has not been modified since the most recent reset().
import ch06.lists.*;
public class ListExamples {
   public static void main (String[] args) {
      ListInterface<String> list1 = new ArrayUnsortedList<String>(3);
      list1.add("Wirth");
      list1.add("Dykstra");
      list1.add("DePasquale");
      list1.add("Dahl");
      list1.add("Nygaard");
      list1.remove("DePasquale");
      System.out.print("Unsorted ");
      System.out.println(list1);

      ListInterface<String> list2 = new ArraySortedList<String>(3);
      list2.add("Wirth");
      list2.add("Dykstra");
      list2.add("DePasquale");
      list2.add("Dahl");
      list2.add("Nygaard");
      list2.remove("DePasquale");
      System.out.print("Sorted ");
      System.out.println(list2);

      IndexedListInterface<String> list3 = new ArrayIndexedList<String>(3);
      list3.add("Wirth");
      list3.add("Dykstra");
      list3.add("DePasquale");
      list3.add("Dahl");
      list3.add("Nygaard");
      list3.remove("DePasquale");
      System.out.print("Indexed ");
      System.out.println(list3);
   }
}
This example, which I have taken from the book, illustrates well some differences between the three array-based implementations that we shall study. The first two ArrayUnsortedList and ArraySortedList each implement List; whereas, the third ArrayIndexedList implements IndexedList.
The main method has three sections, one for each list type. In all three cases the same 5 elements are added in the same order and the same element is removed. The printed results, however, are different for the three cases. Not surprisingly, the toString() in ArrayIndexedList includes index numbers, while the others do not.
The first surprise is that the output for ArrayUnsortedList does not print the items in the order they were inserted. This is due to the implementation of remove(). The example shown removes the third of the five previously entered elements. Naturally we don't want to leave a hole in slot three.
A natural solution would be to slide each element up so the old fourth becomes the third and the old fifth becomes the fourth. The problem with this solution is that if the first of N elements is deleted you would need to slide N-1 elements, which is O(N) work.
The chosen solution is to simply move the last element into the vacated slot. Remember that an unsorted list has no prescribed order of the elements.
The sorted list, list2, keeps the elements in sorted order. For Java String's the order is lexicographical. Consequently, removal does involve sliding up the elements below the one removed.
Finally, the ArrayIndexedList does not generate sorted lists so is again free to use the faster removal method. As mentioned above, this method enhances the output by generating a string version of the indices.
The result of these decisions is the output below, where I have reformatted the output into three columns so that more can be viewed on the screen.
Unsorted List:    Sorted List:    Indexed List:
  Wirth             Dahl            [0] Wirth
  Dykstra           Dykstra         [1] Dykstra
  Nygaard           Nygaard         [2] Nygaard
  Dahl              Wirth           [3] Dahl
Homework: 13, 15.
A UML diagram for all the lists in this chapter is here.
The basic idea for an array-based list is simple: Keep the elements in array slots 0..(size-1), when there are size elements present. We increase size on inserts and decrease it on removals.
We shall meet a new visibility modifier in this section. The reason we can't make do with public and private is that we will be implementing two classes, one derived from the other. In this situation we want to give the derived class access to what would otherwise be private fields and methods of the base class.
More details will be given later when we implement the derived class.
package ch06.lists;
public class ArrayUnsortedList<T> implements ListInterface<T> {
   protected final int DEFCAP = 100;
   protected T[] list;
   protected int numElements = 0;
   protected int location;
   protected int currentPos;
   public ArrayUnsortedList() {
      list = (T[]) new Object[DEFCAP];
   }
   public ArrayUnsortedList(int origCap) {
      list = (T[]) new Object[origCap];
   }
Most of the code on the right is quite clear. For now, treat protected as private; the fields are not available to ordinary users of the class.
currentPos is used by the reset()/getNext() pair shown below.
Recall that a weakness in Java generics forbids the creation of a generic array, which explains the treatment of list.
Several of the public methods need to search for a given element. This task is given to a helper boolean method find() which, if the element is found, sets location for use by the public methods.
protected void enlarge() {
   T[] larger = (T[]) new Object[2*list.length];
   for (int i=0; i<numElements; i++)
      larger[i] = list[i];
   list = larger;
}

protected boolean find(T target) {
   for (location=0; location<numElements; location++)
      if (list[location].equals(target))
         return true;
   return false;
}
There are two helper methods.
The enlarge() method is called when an addition is
attempted on a currently
full list.
Recall that all the lists in this chapter are unbounded so
technically are never full.
Unlike the authors, who raise the capacity by a fixed amount, I double it.
Then the current values are copied in.
The find() method implements a standard linear search. The only point to note is that the index used is a field and thus available to the public methods that call find().
The authors define a void find() and define a field (found) to hold the status of the search. I prefer a boolean find().
public void add(T element) {
   if (numElements == list.length)
      enlarge();
   list[numElements++] = element;
}

public boolean remove(T element) {
   if (!find(element))
      return false;
   list[location] = list[--numElements];
   list[numElements] = null;
   return true;
}

public int size() {
   return numElements;
}

public boolean contains(T element) {
   return find(element);
}

public T get(T element) {
   if (find(element))
      return list[location];
   return null;
}

public String toString() {
   String str = "List:\n";
   for (int i=0; i<numElements; i++)
      str = str + "  " + list[i] + "\n";
   return str;
}
The add() method works as expected. For an unsorted list, we are free to add the new element anywhere we wish. It is easiest to add it at the end, so we do so.
Since our lists are unbounded, we must enlarge() the array if necessary.
The remove() method uses find() to determine if the desired element is present and if so where it is located (recall that find() sets location).
Removes can occur at any slot and we cannot leave a hole in the middle. Again we take the easy way out and simply move the last element into the hole.
Does this work if the element was found in the highest array slot?
Answer: Let's try it in class.
The size() method is trivial.
The contains() method is easy given the helper find().
The get() method, like remove() uses find() to determine the location of the desired element, which get() then returns. If find() reports that the element is not present, get() returns null as per its specification.
The toString() method constructs its return value one line at a time, beginning with a header line and then one line for each listed element.
public void reset() {
   currentPos = 0;
}

public T getNext() {
   T next = list[currentPos++];
   if (currentPos == numElements)
      currentPos = 0;
   return next;
}
As mentioned previously, reset() and getNext() are used to enable looping through the elements of the list. The idea is that currentPos indicates the next site to access and that getNext() does the access.
The reset() method initializes the
loop by
setting currentPos to zero.
These methods are easy because we forbid the user to mess up the list and do not check whether they do.
Now we want to keep the listed items in sorted order. But this might not even make sense!
I certainly understand a sorted list of Integer's or a sorted list of String's. But a sorted list of blueprints? Or bookcases?
That is, a sorted list of T's makes sense for some T's, but not for others. Specifically, we need that the objects in T can be compared to determine which one comes first.
Java has exactly this concept; it is the Comparable interface. So, we want to require that T implements this interface, which leads to two questions.
public class ArraySortedList<T> implements ListInterface<T>
Our first attempt at a header line for the ArraySortedList class, before considering Comparable but remembering that the class must implement ListInterface<T>, would be something like we see on the right.
public class ArraySortedList<T implements Comparable> implements ListInterface<T>

public class ArraySortedList<T extends Comparable> implements ListInterface<T>
When trying to add the restriction that T implement Comparable, my first guess would be the top header on the right. However, that header is wrong; instead of implements we use the keyword extends, which gives the correct bottom header line. I am not certain of the reason for this choice of keyword, but an indication is below.
This last header is legal Java and illustrates how one limits the classes that can be plugged in for T. It is not perfect, however. We shall improve it after first learning how to use Comparable and thereby answering our second question above.
Objects in a class implementing the Comparable interface are ordered. Given two unequal objects, one is less than the other. This interface specifies only one instance method, compareTo(), which has one parameter and returns an int result.
The invocation x.compareTo(y) returns a negative integer, zero, or a positive integer to signify that x is less than, equal to, or greater than y.
The description just above, which did not
mention T, would have been applicable a few years ago and
probably still
works today.
However, modern, generics aware, Java specifies a
generic Comparable<T> interface.
On the right we see proper specifications of classes implementing Comparable.
public class C implements Comparable<C>

public class String implements Comparable<String>

public class C implements Comparable<D>
The first line asserts that an object in C can be compared
to another object in C.
The second line shows (part of) the header line for the familiar class String.
The third line asserts that an object in C can be compared to an object in D.
public class ArraySortedList<T extends Comparable<T>> implements ListInterface<T>
On the right we see a proper header for ArraySortedList<T> specifying that the class parameter T must implement Comparable<T>.
In fact the
best header we could write
for ArraySortedList would be
ArraySortedList<T extends Comparable<? super T>> implements ListInterface<T>
What this cryptic header is saying is
that instead of requiring that elements
of T can be compared with other
elements of T, we require that
elements of T can be compared with
elements of some superclass of T.
The ? in the header is
a
wildcard.
Note that T itself is considered a
superclass of T.
Since T extends a superclass
of T, perhaps this explains the choice
of the keyword extends above.
Why all this fuss about Comparable and fancy headings? We must do something since we need to keep the entries of our sorted list ordered. In particular, the add() method must place the new entry in the correct location (and slide the rest down).
The book uses the very first ArraySortedList header line that we considered. Recall that this header did not mention Comparable at all. What gain do we get for our efforts?
public class ArraySortedList<T> implements ListInterface<T> public class ArraySortedList<T extends Comparable<T>> implements ListInterface<T>
For convenience, both headers are repeated on the right.
Since the first header does not mention Comparable, a simple use of compareTo() for elements would fail to compile since there is no reason to believe that T contains such a method. Hence to find the correct location for add() to place the new element, the book compares this new element with existing elements using the following if statement
if (((Comparable)listElement).compareTo(element) < 0) // list element < add element
I would rate this if statement as roughly equal to our header on the ugliness scale, so using our header to simplify the if statement would be only a wash, not an improvement.
The real advantage of our approach is that the ugly if statement generates a warning from the compiler that it cannot guarantee that the listElement object contains a compareTo() method. This warning is quite appropriate since there is nothing to suggest that listElement, a member of T, is comparable to anything.
With our header the if statement becomes the expected
if (listElement.compareTo(element) < 0) // list element < add element
and, more significantly, generates no warning. As a result we cannot get a runtime error due to an inappropriate class being substituted for T.
We are now in a position to implement ArraySortedList<T> and could do so directly. However, we notice that the only methods that change between ArraySortedList<T> and ArrayUnsortedList<T> are add() (we need to insert the new item in the right place) and remove() (we must slide elements up instead of just placing the last element in the vacated slot). It seems a little silly to rewrite all the other methods so we choose to implement ArraySortedList<T> as an extension of ArrayUnsortedList<T>.
public class ArraySortedList<T extends Comparable<T>>
             extends ArrayUnsortedList<T> implements ListInterface<T> {
   ArraySortedList() {
      super();
   }
   ArraySortedList(int origCap) {
      super(origCap);
   }

   public void add(T element) {
      int location;
      if (numElements == list.length)
         enlarge();
      for (location=0; location<numElements; location++)
         if ((list[location]).compareTo(element) >= 0)
            break;
      for (int index=numElements; index>location; index--)
         list[index] = list[index-1];
      list[location] = element;
      numElements++;
   }

   public boolean remove(T element) {
      boolean found = find(element);
      if (found) {
         for (int i=location; i<=numElements-2; i++)
            list[i] = list[i+1];
         list[--numElements] = null;
      }
      return found;
   }
}
The header is a mouthful and conveys considerable information: this generic class requires the argument class to support comparison, inherits from the unsorted list, and satisfies the list specification. The constructors are the same as for the parent (ArrayUnsortedList<T>).
Since this is an unbounded implementation, we first enlarge the list if it is filled to capacity. Next we determine the correct location to use, which is the first location whose occupant is greater than or equal to the new element. To make room for the new element, we shift the existing elements from here on. Finally, we insert the new element and increase the count.
Since we must maintain the ordering, remove() cannot simply move the last element into the hole. Instead, if the item is found, we shift down all elements from the found location on (thus overwriting the item) and reduce the count. Finally, we return whether the item was present.
by Copy or by Reference
Our list (and stack, and queue) implementations, when given an
element to insert, place (a reference to) the actual object on the
list.
This implementation is referred to as
by reference and, I
believe, is normally desirable.
But putting (a reference to) the actual object on the list can break the list's semantics since this listed object can be changed by the user. For example, the items in ArraySortedList<Integer> can become out of order, if a listed Integer is changed.
This problem can be avoided by having the add() method
make a copy of the item and then insert (a reference to) this
private copy rather than the user-accessible original.
This implementation is referred to as
by copy and requires
extra computation and extra memory when compared to
by
reference.
This extended class supports indexed references. In a sense it permits the user to treat the list as an intelligent array. The two extensions to the ArrayUnsortedList<T> class are to give additional (index-based) signatures to existing methods and to introduce methods that require indices.
For example, in addition to add(T element), which just promises to add the element somewhere, the indexed list introduces add(int index, T element), which adds the element at the specified index.
New methods such as set(), which updates a specified location, have no analogue without indices.
The methods including an index parameter check the parameters for legality and throw a standard IndexOutOfBoundsException exception if needed.
package ch06.lists;
public class ArrayIndexedList<T> extends ArrayUnsortedList<T>
             implements IndexedListInterface<T> {
   public ArrayIndexedList() { super(); }
   public ArrayIndexedList(int origCap) { super(origCap); }

   public void add(int index, T element) {
      if ((index<0) || (index>numElements))
         throw new IndexOutOfBoundsException("Helpful message.");
      if (numElements == list.length)
         enlarge();
      for (int i=numElements; i>index; i--)
         list[i] = list[i-1];
      list[index] = element;
      numElements++;
   }

   public T set(int index, T element) {
      if ((index<0) || (index>=numElements))
         throw new IndexOutOfBoundsException("Helpful message.");
      T hold = list[index];
      list[index] = element;
      return hold;
   }

   public int indexOf(T element) {
      if (find(element))
         return location;
      return -1;
   }

   public T remove(int index) {
      if ((index<0) || (index>=numElements))
         throw new IndexOutOfBoundsException("Helpful message.");
      T hold = list[index];
      for (int i=index; i<numElements-1; i++)
         list[i] = list[i+1];
      list[--numElements] = null;
      return hold;
   }

   public String toString() {
      String str = "List:\n";
      for (int i=0; i<numElements; i++)
         str = str + "[" + i + "] " + list[i] + "\n";
      return str;
   }
}
No surprises here. The list is not sorted, so the class doesn't involve Comparable.
After checking the index for legality, the method enlarges the array if necessary.
Since the goal is to place the new element in a specified slot, add() next shifts down the existing elements from the desired slot on. Note that this loop runs backwards.
This method sets the specified slot to the given element and returns the former occupant. The code is straightforward.
This simple method returns the index of a sought-for element or -1 if the element is not on the list.
This method removes and returns the element in a specified slot. The code is again straightforward.
The only difference between this method and the one being overridden is the addition of the index value within brackets.
Homework: 18, 21.
Start Lecture #18
Remarks:
Lab 4 had confusing words for a few days so I am extending the due date until 17 November 2012.
Read.
A crucial advantage of sorted lists is that searching is faster. No one would use a telephone directory with the names unsorted.
The linear search algorithm, which we have already seen, is given a list and a desired value. The algorithm can be described as follows: examine the list elements one at a time, starting from the beginning; stop with success if the current element equals the desired value; stop with failure after the last element has been examined.
We can improve this algorithm (a little) by recognizing that if the current list value exceeds the desired value, the desired value is not present. That is, the algorithm becomes the following (assuming the list is sorted in ascending order): examine the list elements one at a time, starting from the beginning; stop with success if the current element equals the desired value; stop with failure if the current element exceeds the desired value or the end of the list is reached.
In comparison, the binary search algorithm can be described as follows (again assuming the list is sorted in ascending order): examine the middle element of the list; if it equals the desired value, stop with success; if it is less than the desired value, search only the upper half of the list; otherwise, search only the lower half.
My presentation differs in three small ways from the book's.
Since our goal is to write a drop-in replacement for the find() method in ArraySortedList<T>, our header line must be the same as the one we are replacing. Recall that the value returned by find() indicates whether the search was successful. If successful the location instance data field is set. Our replacement must have the same semantics.
protected boolean find(T target)
The required header is shown on the right. (In fact there is no find() method in the code for ArraySortedList<T>; instead it is inherited from ArrayUnsortedList<T>).
The original (linear search) find() searched the entire occupied portion of the array; whereas the binary search algorithm searches smaller and smaller sub-arrays with each recursive call. Thus, we shall need to specify the current lower and upper bounds for the search.
protected boolean find(T target) {
   return find1(target, 0, numElements-1);
}
Hence we will need a helper method find1() that accepts these extra parameters and performs the real search. The find() method itself is the trivial routine on the right. It simply passes the job onto find1() telling it to search the entire range of possible values.
private boolean find1(T target, int lo, int hi) {
   if (lo > hi)
      return false;
   location = (lo + hi) / 2;
   if (list[location].compareTo(target) == 0)
      return true;
   if (list[location].compareTo(target) < 0)
      return find1(target, location+1, hi);
   return find1(target, lo, location-1);
}
Perhaps the only subtlety in this method is the first base case, which is represented by the first if statement on the right. When we have reduced the range of interest to the empty set, the item is definitely not present.
The rest of the method is essentially an English-to-Java
translation of the words above describing the algorithm.
If the middle location contains the target, we are done; if it is
less search the upper
half; otherwise search the lower
half.
Notice that find1() is tail-recursive; that is, the two recursive calls are each the last statement executed.
Question: How can two different calls each be the last statement?
Answer: Last doesn't mean the last one written, it means the last one executed.
As we noted before, tail recursive methods can be converted to iterative form fairly easily.
protected boolean find(T target) {
   int lo=0, hi=numElements-1;
   while (lo <= hi) {
      location = (lo + hi) / 2;
      if (list[location].compareTo(target) == 0)
         return true;
      if (list[location].compareTo(target) < 0)
         lo = location+1;
      else
         hi = location-1;
   }
   return false;
}
In fact the code on the right is essentially the same as the pair above it.
The difference is that instead of a recursive call we just go back to the beginning. The jumping to the beginning and a recursive call start out the same. But when the recursion ends, we normally have more to do in the caller. However, this is tail-recursion so there is nothing more to do.
Let's go through a few searches and see.
How much effort is required to search a list with N items?
For linear search the analysis is easy, but the result is bad.
If we are lucky, the element is found on the first check. If we are unlucky, the element is found at the end of the list or not at all. In these bad cases we check all N list elements. So the best case complexity of linear search is O(1) and the worst case complexity is O(N).
On the average we will find the element (assuming it is present) in the middle of the list with complexity O(N/2)=O(N).
The worst case is still O(N). For example consider searching for an element larger than every listed element.
In binary search, each update of lo or hi reduces the range of interest to no more than half what it was previously. This occurs each iteration or each recursive call, which leads to two questions.
The first question is easy, each iteration and each recursive call, does an amount of work bounded independent of N. So each iteration or recursive call is O(1).
The second question might be easier looked at in reverse. How many times must you multiply 2 by itself to reach at least N? That number is log2(N), which we write as simply log(N) since, by default, in this class logs are base 2. Hence the complexity is O(log(N)), a big improvement. Remember that log(1,000,000,000)<30.
Homework: 42.
We now give linked implementations of both unsorted and sorted
lists.
However, we do not implement a linked, indexed list (even though we
could).
Why not?
Answer: Accessing the linked item at index i requires traversing the list. Thus the operation has complexity O(N), where N is the current size of the list.
This answer illustrates a strong advantage of array-based lists, to balance against the disadvantage of having to enlarge() them or deal with them being full.
As with stacks and queues, reference based lists use the LLNode<T> class.
The UML for all our lists is here. (References to all UMLs are repeated at the end of the notes.)
The authors choose to say Ref rather than Linked since the latter is used by the standard Java library.
package ch06.lists;
import support.LLNode;
public class RefUnsortedList<T> implements ListInterface<T> {
   protected int numElements = 0;
   protected LLNode<T> currentPos = null;
   protected LLNode<T> location;
   protected LLNode<T> previous;
   protected LLNode<T> list = null;
Unlike the sorted lists, an unsorted list needs only equals() (not compareTo()), so the header need not mention Comparable.
The currentPos field is used with the reset()/getNext() pair described below.
We shall see that removing an item from a given location requires referencing the previous location as well.
Finally, list references the first node on the list (or is null if the list is empty).
These fields are illustrated in the diagram on the right, which shows a list with 5 elements ready for the next-to-last to be remove()'d. reset() has been called, so getNext() would begin with the first node (assuming the remove() had not occurred).
The default no-arg constructor suffices so no constructor is written.
Homework: 47a, 47b (use the diagram style used on the midterm), 50.
public void add(T element) {
   LLNode<T> newNode = new LLNode<T>(element);
   newNode.setLink(list);   // new node becomes first
   list = newNode;
   numElements++;
}
Since the list is not sorted we can add elements wherever it is
most convenient, and for a singly linked list that is at the front
(i.e., right
after the RefUnsortedList<T>
node itself).
public int size() { return numElements; }
Since we explicitly maintain a count of the number of elements currently residing on the list, size() is trivial.
public boolean contains(T element) {
   return find(element);
}

protected boolean find(T target) {
   location = list;
   while (location != null) {
      if (location.getInfo().equals(target))
         return true;
      previous = location;
      location = location.getLink();
   }
   return false;
}
As in the array-based implementations we use a predicate find() instead of a found data field.
You may notice that extra work is performed while searching for an element. This is done so that, after find() is executed the found element is ready for removal.
Note the efficient technique used to determine the actual location and its predecessor (the latter is needed for removal). Thus the superfluous work just mentioned is small.
The workhorse find() is protected so that derived classes (in particular, RefSortedList, to be studied next) can use it.
The natural technique to remove a target element from a linked list is to make the element before the target point to the element after the target.
This is shown on the right, where the middle node is the target.
public boolean remove(T element) {
   boolean found = find(element);
   if (found) {
      if (location == list)   // first element
         list = location.getLink();
      else
         previous.setLink(location.getLink());
      numElements--;
   }
   return found;
}
The only tricky part occurs when the target is the first element, since there is then no element before the target. The solution in this case depends on the details of the implementation.
In our implementation, the list field acts as the (link component of the) element before the first element. Hence, the code on the right makes list point at the element after the target when target is the first element.
Start Lecture #19
Remark: The RefUnsortedList<T> header does not need to mention Comparable as I guessed last time. It is fixed now.
public T get(T element) {
   if (find(element))
      return location.getInfo();
   else
      return null;
}
Whereas the contains() predicate merely determines whether a given element can be found on the list, get() actually returns a reference to the element. As expected, find() again does all the work.
The only question is what to do if the element cannot be found; we specify that null be returned in that case.
public void reset() {
   currentPos = list;
}

public T getNext() {
   T next = currentPos.getInfo();
   if (currentPos.getLink() == null)
      currentPos = list;
   else
      currentPos = currentPos.getLink();
   return next;
}
} // end of class RefUnsortedList<T>
To employ this pair, the user calls reset() to get started and then calls getNext() repeatedly to retrieve successive elements on the list.
It is not hard to think of difficult cases to implement, and hence the code on the right seems surprisingly simple. The simplicity is due to the difficult cases being outlawed and not checked for. For example, an empty list gives a null pointer exception when getNext() executes its first instruction.
Specifically, the implementation assumes that when getNext() is invoked, the list is not empty and has not been modified since the most recent reset().
We shall see next chapter that reset()/getNext() can be applied to some structures that are not lists (we will do binary trees).
There is not much difference between the RefUnsortedList<T> and RefSortedList<T> classes; almost all the methods remain the same. For this reason, we implement the latter as an extension of the former.
One difference between the classes is that now that we want the elements to be in order, we must require that T implement Comparable<T>.
public class RefSortedList<T extends Comparable<T>> extends RefUnsortedList<T> implements ListInterface<T> {
The header shows the dependence on RefUnsortedList<T>. There are no fields beyond those of the base class and again the default no-arg constructor suffices. Note that this constructor invokes the superclass constructor.
The one method we do need to change is add(). Previously we chose to insert the new element at the beginning of the list because that was the easiest place. Now we must skip over all existing elements that are smaller than the one we are inserting.
Once we have found the correct location, the insertion proceeds as illustrated on the right. The new node, shown on the bottom row, is to be inserted between the first two nodes on the top row. These nodes are called prevLoc and location in the code below. The former is the last node we skipped over in seeking the insertion site and the latter is thus the first node not skipped.
public void add(T element) {
   LLNode<T> prevLoc = null;
   LLNode<T> location = list;
   while (location != null
          && location.getInfo().compareTo(element) < 0) {
      prevLoc = location;
      location = location.getLink();
   }
   LLNode<T> newNode = new LLNode<T>(element);
   if (prevLoc == null) {        // add at front of list
      newNode.setLink(list);
      list = newNode;
   } else {                      // add elsewhere
      newNode.setLink(location);
      prevLoc.setLink(newNode);
   }
   numElements++;
}
} // end of class RefSortedList<T>
The method begins with a short but clever search loop to find the two nodes between which the new node is to be inserted. This loop is similar to the one in find() and deserves careful study.
Questions: How come location.getInfo() never raises a NullPointerException?
How come location.getInfo() is guaranteed to have a compareTo() method defined?
Answers: Short circuit evaluation of &&.
T implements Comparable<T>.
After the loop, we create and insert the new node. The then arm is for insertion at the beginning (done identically to the method in RefUnsortedList<T>); the else arm is for insertion between prevLoc and location.
Finally, we increase the count of elements.
Homework: 47c, 47d, 48.
For practice in the programming technique just illustrated, let's do on the board the problem of finding the second largest element in an array of ints.
A solution is here.
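For reference, here is a minimal sketch of one possible solution (the linked solution may differ): make a single pass, tracking the two largest values seen so far.

private static int secondLargest(int[] a) {
  // Assumes a.length >= 2.
  int largest = Math.max(a[0], a[1]);
  int second  = Math.min(a[0], a[1]);
  for (int i = 2; i < a.length; i++)
    if (a[i] > largest) {       // new champion; old champion is now second
      second = largest;
      largest = a[i];
    } else if (a[i] > second)   // not the largest, but beats the runner-up
      second = a[i];
  return second;
}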
Previously we have studied stacks, a linear structure in which all activity occurs at one end (the top), and queues, another linear structure, in which activity occurs at both ends (the front and rear). In this chapter we studied lists, general linear structures in which activity can occur throughout.
We also encountered our first high performance algorithm, binary search, which improves complexity from O(N) for naive searching down to O(logN).
If we use an array-based list, we can employ binary search, which speeds up retrieval from O(N) to O(logN). However, these lists are not as convenient when the list size changes dynamically.
We shall see that binary search trees combine the advantages of O(logN) searching with linked-like memory management.
A key characteristic of lists is their linear structure: Each node (except for the first and last) has a unique predecessor and a unique successor.
Tree nodes, in contrast, have (zero or more) children and (zero or one) parents. As in biology, if A is a child of B, then B is the parent of A. Most nodes do have a parent; the one exception is the tree root, which has no parent. The root can be reached from any other node by repeatedly moving to the parent.
Nodes that have no children are called leaves of the tree. Other nodes are called interior.
We see two small trees on the right.
Note that there is a specific direction inherent in a tree diagram, from root to leaves. In the bottom diagram the direction is shown explicitly with arrowheads; in the top diagram it is implicitly going from top to bottom.
Trees are often used to represent hierarchies.
For example, this chapter can be viewed as a tree. The root is Chapter 8 Binary Search Trees. This root has 11 children: 8.1 Trees, ..., 8.10 Case Study: Word Frequency Generator, and Summary.
The node 8.2 The Logical Level has two children: Tree Elements and The Binary Search Tree Specification.
In fact this chapter can be considered a subtree of the entire tree of the book. That bigger tree has as its root Object-Oriented Data Structures Using Java. Each chapter heading is a child of this root and each chapter itself is a subtree of the big tree.
The two diagrams on the right show graphs that are not trees. The graph on the near right is not a tree since the bottom node has two parents. The graph on the far right does not have a single root from which all nodes are descendants.
We often think of the tree as divided into horizontal levels. The root is at level 0, and the children of a level n node are at level n+1.
Some authors, but not ours, also use the term depth as a synonym for level.
Unlike level (and depth) which are defined from the root down, height is defined from leaves up. The height of a leaf is 0 and the height of an interior node is one plus the maximum height of its children.
The height of a tree is the height of its root, which equals the maximum level of a node.
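For concreteness, here is a minimal sketch of computing height recursively; it assumes a node class with getLeft()/getRight() accessors like the BSTNode<T> we define later in this chapter. Letting the height of an empty tree be -1 makes the recursion agree with our definitions: a leaf (two empty children) gets height 1 + max(-1,-1) = 0.

private static <T extends Comparable<T>> int height(BSTNode<T> tree) {
  if (tree == null)     // empty tree, by convention height -1
    return -1;
  return 1 + Math.max(height(tree.getLeft()), height(tree.getRight()));
}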
For any node A, the subtree (rooted) at A consists of A plus A's children, plus their children, plus their children, ... .
The nodes constituting the subtree at A excluding A itself are called the descendants of A.
If a node X is a descendant of Y, we also say that Y is an ancestor of X.
Two nodes with the same parent are called siblings.
Illustrate all these definitions on the board.
Trees are quite useful in general. However, in this course, we will mostly use trees to speed up searches and will emphasize a subclass of trees, binary search trees (other trees are used for searching, but we will not be studying them).
Definition: A binary tree is a tree in which no node has more than two children.
For example the tree on the right is a binary tree.
We distinguish the right and left child of a node. For example the node H on the right has only a left child, the node I has only a right child, and the root has both a left and right child.
We also talk of the right and left subtrees of a node. The right subtree is the subtree rooted at the right child (assuming the node has a right child; otherwise the right subtree is empty) and similarly for the left subtree.
We will see that most of our operations will go up and down trees once or a few times and will not normally move from sibling to sibling. Thus these operations will take time proportional to the height and therefore low-height binary trees are preferable.
Consider all possible binary trees with 15 nodes. What are the maximum and minimum possible heights?
The max is pretty easy: a line. That is 14 nodes with one child and one leaf at the end. These trees have height 14.
The min is not too hard either. We want as many nodes as possible with 2 children so that the tree will be wide, not high. In particular, the minimum height tree has 7 interior nodes (each with 2 children) on the top three levels and 8 leaves on level 3. Its height is 3.
Now repeat the exercise for a tree with 1,048,575 = 2²⁰−1 nodes (about a million). The same reasoning as above shows that the maximum height is 1,048,574 and the minimum is 19. Quite a dramatic difference. Note that a million nodes is not unreasonable at all. Consider, for example, the Manhattan phone book.
If all the interior nodes have two children, then increasing the height by 1, essentially doubles the number of nodes. Said in reverse, we see that the minimum height of a tree with N nodes is O(logN).
Thus the possible heights range from O(logN) to O(N).
Homework: 1, 2, 3, 4.
Consider the bushy 15-node binary tree shown in the diagram above. It has height 3. Assume each node contains a value and we want to determine if a specific value is present.
If that is all we know, we must look at every node to determine that a value is not present, and on average must look at about 1/2 the nodes to find an element that is present. No good.
The key to an effective structure is that we don't place the values at random points in the tree. Instead, we place a value in the root and then place all values less than this root value in the left subtree and all values greater than the root value in the right subtree.
Recursively we do this for all nodes of the tree (smaller values on the left; larger on the right).
Doesn't work. How do we know that exactly 7 of the values will be less than the value in the root? We don't.
One solution is to not pick the shape of the tree in advance. Start with just the root, and place the first value there. From then on, take another value and create a new node for it. There is only one place this node can be (assuming no duplicate values) and satisfy the crucial binary search tree property:
The values in the left subtree are less than or equal to the value in the node, and the values in the right subtree are greater than or equal to the value in the node.
Definition: A binary search tree is a binary tree having the binary search tree property.
Although a binary search tree does not look like a sorted list, we shall see that the user interfaces are similar.
Start Lecture #20
Remarks
while unvisited nodes remain
    pick an unvisited node
    visit that node

inorderTraversal(Node N)
    if (N has a left child) inorderTraversal(left child)
    visit(N)
    if (N has a right child) inorderTraversal(right child)

preorderTraversal(Node N)
    visit(N)
    if (N has a left child) preorderTraversal(left child)
    if (N has a right child) preorderTraversal(right child)

postorderTraversal(Node N)
    if (N has a left child) postorderTraversal(left child)
    if (N has a right child) postorderTraversal(right child)
    visit(N)

inorderTraversal(Node N)
    if (N is not null)
        inorderTraversal(N's left child)
        visit(N)
        inorderTraversal(N's right child)

preorderTraversal(Node N)
    if (N is not null)
        visit(N)
        preorderTraversal(N's left child)
        preorderTraversal(N's right child)

postorderTraversal(Node N)
    if (N is not null)
        postorderTraversal(N's left child)
        postorderTraversal(N's right child)
        visit(N)
Traversing a binary tree means to visit each node of the tree in a specified order. The high-level pseudo code is on the right. The difference between traversals is the order in which the nodes are picked for visitation.
We shall study three traversals, preorder, inorder, and postorder, whose names specify the relative order of visiting a node and its children.
All three traversals are shown on the right in two styles (see below).
As you can see all three traversals have the same statements and all three traverse left subtrees prior to traversing the corresponding right subtrees.
The difference between the traversals is where you place visit(N) relative to the two subtree traversals.
In the first implementation style we do not recurse if a child is null; in the second we do recurse but return right away. Although the second seems silly, it does handle the case of an empty tree gracefully. Another advantage is that it makes the base case (an empty tree) clear.
We shall see right away that when performing an inorder traversal of a binary search tree, the values of the nodes are visited in order, i.e., from smallest to largest.
Let's do on the board a few traversals of both ordinary binary trees and binary search trees.
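For instance, consider the BST whose root D has left child B (with children A and C) and right child F (with children E and G). Preorder visits D B A C F E G; inorder visits A B C D E F G (sorted, as promised); postorder visits A C B E G F D.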
Homework: 5,6,9,10.
Discussion of Comparable<T>, which we did already.
We make the following choices for trees, which are consistent with the choices we made for lists.
In addition, the methods do not support null arguments and do not check for this condition.
The UML diagram for the interface is shown on the right. It is quite similar to the equivalent diagram for sorted lists.
The main differences between the interfaces for BSTs and for sorted lists occur in the specification of the reset()/getNext() pair. We parameterize reset() so that the pair can traverse the tree in preorder, inorder, or postorder. Specifically, we define the following three Java constants.
public static final int INORDER = 1; public static final int PREORDER = 2; public static final int POSTORDER = 3;
In addition, we enhance reset() to return the current size of the tree so that a user of the pair knows how many times to invoke getNext().
Read. The authors re-implement the golf application using binary search trees instead of sorted lists, which illustrates how little changes from the user's viewpoint.
The first order of business is to define the class representing a node of the tree.
Question: Why can't we reuse LLNode<T>?
Answer: Those nodes have only one reference to another node (the successor). For binary trees we need two such references, one to the left child and one to the right child.
A detailed picture is on the right.
This node would work fine for any binary tree. That is, it has no dependence on the binary search tree property.
The node would not work for arbitrary trees since it contains references to only two children.
Diagrams of full tree-like structures typically do not show all the Java details in the picture above. Instead, each node would just be three boxes, two containing references to other nodes and the third containing the actual data found in the element. A simplified diagram of a 3-node structure is shown on the right.
Although these simplified diagrams are unquestionably useful, please do not forget the reference semantics that is needed for successfully implementing the concepts in Java.
Question: What would be a good node for an arbitrary (i.e., non-binary) tree?
Answer: One possibility is to have (in addition to info) an array of children. Let's draw it on the board.
Another possibility would be to have two references, one to the leftmost child and the other to the closest right sibling (see the sketch below).
Draw a tree on the board using the normal style and again using the left-child/right-sibling style.
package support;

public class BSTNode<T extends Comparable<T>> {
  private T info;
  private BSTNode<T> left  = null;
  private BSTNode<T> right = null;

  BSTNode(T info) { this.info = info; }

  public T getInfo() { return info; }
  public void setInfo(T info) { this.info = info; }

  public BSTNode<T> getLeft() { return left; }
  public void setLeft(BSTNode<T> link) { left = link; }

  public BSTNode<T> getRight() { return right; }
  public void setRight(BSTNode<T> link) { right = link; }
}
The BSTNode<T extends Comparable<T>> implementation is shown on the right. It is straightforward.
To construct a node you supply (a reference to) info and the resulting node has both left and right set to null.
There is a set and a get method for each of the three fields: info, left and right.
(The book chooses to have the constructor explicitly set left and right to null; I prefer to accomplish this via an initialization in the declaration.)
A full UML diagram for BSTs is here and at the end of these notes.
package ch08.trees;
import ch03.stacks.*;   // used for iterative size()
import ch04.queues.*;   // used for traversals
import support.BSTNode;

public class BinarySearchTree<T extends Comparable<T>> {
  private BSTNode<T> root = null;
  private boolean found;   // used by remove()
  private LinkedUnbndQueue<T> BSTQueue;       // used by reset()/getNext()
  private LinkedUnbndQueue<T> preOrderQueue;  // used in the book's version;
  private LinkedUnbndQueue<T> postOrderQueue; //   these notes use only BSTQueue (see below)

  public boolean isEmpty() { return root == null; }

  public int size() { return recSize(root); }

  // rest later
} // end of BinarySearchTree<T>
This class will require some effort to implement. The linear nature of stacks, queues, and lists make them easier than trees. In particular removing an element will be seen to be tricky.
The very beginning of the implementation is shown on the right. The essential field is root. For an empty tree, which is the initial state of a new tree, root is null. Otherwise it references the BSTNode<T> corresponding to the root node.
isEmpty() is trivial, but the rest is not.
The public method size() simply calls the helper recSize(), which is in the next section.
private int recSize(BSTNode<T> tree) {
  if (tree == null)
    return 0;
  return recSize(tree.getLeft()) + recSize(tree.getRight()) + 1;
}
The book presents several more complicated approaches before the one shown on the right. The key points to remember are:
Much harder. In some sense you are doing the compiler's job of transforming the recursion to iteration.
In this case, and often with trees, it is no contest: recursion is much more natural and consequently much easier.
Homework: 20.
Homework: How do you find the maximum element in a binary search tree? The minimum element?
public boolean contains(T element) {
  return recContains(element, root);
}

private boolean recContains(T element, BSTNode<T> tree) {
  if (tree == null)
    return false;
  if (element.compareTo(tree.getInfo()) == 0)
    return true;
  if (element.compareTo(tree.getInfo()) < 0)
    return recContains(element, tree.getLeft());
  return recContains(element, tree.getRight());
}

public T get(T element) {
  return recGet(element, root);
}

private T recGet(T element, BSTNode<T> tree) {
  if (tree == null)
    return null;
  if (element.compareTo(tree.getInfo()) == 0)
    return tree.getInfo();
  if (element.compareTo(tree.getInfo()) < 0)
    return recGet(element, tree.getLeft());
  return recGet(element, tree.getRight());
}
The public contains() method simply calls the private helper asking it to search starting at the root.
The helper first checks the current node for the two base cases: If we have exhausted the path down the tree, the element is not present. If we have a match with the current node, we found the element.
If neither base case occurs, we must proceed down to a subtree; the comparison between the current node and the target indicates which child to move to.
get() is quite similar to contains(). The difference is that instead of returning true or false to indicate whether the item is present or not, get() returns (a reference to) the found element or null respectively.
Homework: 31.
The basic idea is straightforward, but the implementation is a little clever. For the moment ignore duplicates, then addition proceeds as follows.
Start searching for the item; this will force you down the tree, heading left or right based on whether the new element is smaller or larger than (the info component of) the current node. Since the element is not in the tree, you will eventually be asked to follow a null pointer. At this point, you create a new node with the given element as info and change the null pointer to reference this new node.
What about duplicates?
Unlike a real search, we do not test if the new element equals (the info component of) each node. Instead, we just check if the new element is smaller. We move left if it is and move right if it isn't. So a duplicate element will be added to the right.
Although add() is short and the basic idea is simple, the implementation is clever and we need to study it on examples. As described above, we descend the tree in the directions determined by the binary search tree property until we reach a null pointer, at which point we insert the element.
public void add(T element) {
  root = recAdd(element, root);
}

private BSTNode<T> recAdd(T element, BSTNode<T> tree) {
  if (tree == null)   // insert new node here
    tree = new BSTNode<T>(element);
  else if (element.compareTo(tree.getInfo()) < 0)
    tree.setLeft(recAdd(element, tree.getLeft()));
  else
    tree.setRight(recAdd(element, tree.getRight()));
  return tree;
}
The clever part is the technique used to change the null pointer to refer to the new node. At the deepest recursive level, the setLeft() or setRight() does set a null pointer to a pointer to the new leaf. At other levels it sets a field to the value it already has. This seems silly, but determining when we are making the deepest call (i.e., when the assignment is not redundant) is harder and might cost more than doing the assignment.
Do several examples on the board. Start with an empty tree and keep adding elements.
Then do it again adding the elements in a different order. The shape of the resulting tree depends strongly on the order the elements are added.
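For instance, adding A, then B, then C produces a line going down to the right (each new element is the largest so far); adding B, then A, then C instead produces a bushy tree of height 1 with B at the root.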
Homework: 29, 33.
Start Lecture #21
Remark: Should show that adding the same values in different orders can give very different shaped BSTs.
Whereas the basic idea for add() was clear (the implementation was not), for remove() we first must figure out what to do before we can worry about how to do it.
If the node to be removed is a leaf, then the result is clear: toss the node and change its parent to refer to null.
A minor problem occurs when the leaf to be removed is also the root of the tree (so the tree has only one node). In this case, there is no parent, but we simply make the root data field null, which indicates that the tree is empty. In a rough sense, the root data field acts as the parent of the root node.
The real problem occurs when the node to be removed is not a leaf, especially if it has two children.
If it has one child, we make the child take the place of its to-be-removed parent. This is illustrated on the right where the node containing H is to be removed. H has one child, M, which, after the removal, has the tree position formerly held by H. To accomplish this removal, we need to change the pointer in H's parent to refer instead to M.
The triangles in the diagram indicate arbitrary (possibly empty) subtrees. When removing a node with one child, all the subtrees remain connected to the same nodes before and after the operation. For some algorithms, however (in particular for balancing an AVL tree, which we may not study), the subtrees move around.
The dots in the upper right indicate that D may not be the root of the tree. Although the diagram suggests D is a right child, this is not implied. D could be the root, a right child, or a left child.
As mentioned this is the hardest case. The goal is to remove node H, which has two children. The diagram shows the procedure used to reduce this case to the ones we solved before, namely removing a node with fewer than 2 children.
The first step is to find H's predecessor, i.e., the node that comes right before H in sorted order. To do this we go left once and then go as far right as possible, arriving at a node G, which definitely has no right child.
The key observation is that if we copy the info from this node to the node we wish to remove, we still have a binary search tree. Since G is before H, any node greater than H is greater than G. Since G is immediately before H, any other node less than H is less than G. Taken together these two facts show that the binary search tree property still holds.
Now we must remove the original G node. But this node has no right child so the situation is one we already have solved.
public boolean remove(T element) {
  root = recRemove(element, root);
  return found;
}
Now that we know what to do, it remains to figure out how to do it. As with node addition, we have a simple public method, shown on the right, that implements the announced specification, and a private helper that does most of the work.
The helper method is assigned two tasks. It sets the data field found, which the public method returns, and the helper itself returns (a reference to) the root of the tree it is given, which the public method assigns to the corresponding data field.
In most cases the root of the original tree doesn't change; however, it can change when the element being removed is located in the root. Moreover, the helper calls itself and when a value is removed the parent's left or right pointer is changed accordingly.
private BSTNode<T> recRemove(T element, BSTNode<T> tree) {
  if (tree == null) {   // element not in tree
    found = false;
    return tree;
  }
  if (element.compareTo(tree.getInfo()) == 0) {
    found = true;       // found the element
    return removeNode(tree);
  }
  if (element.compareTo(tree.getInfo()) < 0)
    tree.setLeft(recRemove(element, tree.getLeft()));
  else
    tree.setRight(recRemove(element, tree.getRight()));
  return tree;
}
The helper begins with the two base cases. If the tree parameter is null, the element is not present.
If the tree node contains the element we call removeNode() (described below).
In other cases we recursively descend the tree to the left or right depending on the comparison of element and the current node's info. We set the left or right field of the current node to the value of the recursive call. This cleverly assures that the parent of a deleted node has the appropriate field nulled out.
private BSTNode<T> removeNode(BSTNode<T> tree) {
  if (tree.getLeft() == null)
    return tree.getRight();
  if (tree.getRight() == null)
    return tree.getLeft();
  T data = getPredecessorInfo(tree);
  tree.setInfo(data);
  tree.setLeft(recRemove(data, tree.getLeft()));
  return tree;
}

private T getPredecessorInfo(BSTNode<T> tree) {
  tree = tree.getLeft();
  while (tree.getRight() != null)
    tree = tree.getRight();
  return tree.getInfo();
}
We are given a node to remove and must return (a reference to) the node that should replace it (as the left or right child of the parent or the root of the entire tree).
As usual we start with the base cases, which occur when the node does not have two children. Then we return the non-null child, or null if both children are null.
If the node has two children we proceed as described above. First copy the info from the predecessor to this node and then delete the (actually a) node that contained this data. In this case the reference returned is the unchanged reference to tree.
Subtle, very subtle. It takes study to see when links are actually changed.
Trace through some examples on the board.
Homework: 36,37.
As we have seen in every diagram, trees do not look like lists. However, the user interface we present for trees is quite similar to the one presented for lists. Perhaps the most surprising similarity is the retention of the reset()/getNext() pair for iterating over a tree. Trees don't have a natural next element.
There are two keys to how these dissimilar iterations are made to appear similar, one at the user level and one at the implementation level.
At the user level, we add a parameter orderType to reset() (the book also adds it to getNext(); I discuss the pros and cons later). This parameter specifies the desired traversal order and hence gives a meaning to the next element.
Also reset() returns the size of the tree, enabling the user to call getNext() the correct number of times, which is important since getNext() has the additional precondition that it not be called after all items have been returned.
At the implementation level, we have reset() precompute all the next elements. Specifically, we define a queue, let reset() perform a full traversal enqueuing each item visited, and let getNext() simply dequeue one item.
public int reset(int orderType) {
  BSTQueue = new LinkedUnbndQueue<T>();
  if (orderType == INORDER)
    inOrder(root);
  else if (orderType == PREORDER)
    preOrder(root);
  else if (orderType == POSTORDER)
    postOrder(root);
  return size();
}

public T getNext() {
  return BSTQueue.dequeue();
}
reset() creates a new queue (thus ending any previous iteration) and populates it with all the nodes of the tree.
The order of enqueuing depends on the parameter and is accomplished by a helper method.
The size of the tree, and hence the number of enqueued items, is returned to the user.
getNext() simply returns the next item from the queue.
private void inOrder(BSTNode<T> tree) {
  if (tree != null) {
    inOrder(tree.getLeft());
    BSTQueue.enqueue(tree.getInfo());
    inOrder(tree.getRight());
  }
}

private void preOrder(BSTNode<T> tree) {
  if (tree != null) {
    BSTQueue.enqueue(tree.getInfo());
    preOrder(tree.getLeft());
    preOrder(tree.getRight());
  }
}

private void postOrder(BSTNode<T> tree) {
  if (tree != null) {
    postOrder(tree.getLeft());
    postOrder(tree.getRight());
    BSTQueue.enqueue(tree.getInfo());
  }
}
These three methods, shown on the right, are direct translations of the pseudocode given earlier. In these particular traversals, visiting a node means enqueuing its info on BSTQueue, from where it will be dequeued by getNext().
As previously remarked, the three traversals differ only in the placement of the visit (the enqueue) relative to the recursive traversals of the left and right subtrees.
In all three cases the left subtree is traversed prior to the right subtree.
As usual I do things slightly differently from the authors. In this subsection, however, I made a real, user-visible change. The book defines three queues, one for each possible traversal. The reset() method enqueues onto the appropriate queue, depending on its parameter. The user-visible change is that getNext() also receives an orderType parameter, which it uses to select the queue to use.
The book's style permits one to have up to three iterations active at once, one using inorder, one preorder, and one postorder. I do not believe this extra flexibility warrants the extra complexity.
A nice test program that I advise reading. The output from a run is in the book; the program itself is in .../bookFiles/ch08/IDTBinarySearchTree.java
In this comparison, we are assuming the items are ordered. For an unordered collection, a binary search tree (BST) is not even defined.
An array-based list seems the easiest to implement, followed by a linked list, and finally by a BST. What do we get for the extra work?
One difference is in memory usage. The array is most efficient, if you get the size right or only enlarge it by a small amount each time. A linked list is always the right size, but must store a link to the next node with each data item. A BST stores two links.
The table on the right compares the three implementations we have studied in terms of asymptotic execution time.
The BST looks quite good in this comparison. In particular, for a large list with high churn (additions and removals), it is the clear winner.
The only expensive BST operation is reset(), but that is deceptive. If you do a reset, you are likely to perform N getNext() operations, which also cost O(N) in total for all three data structures.
The time for an array-based constructor seems to assume you make the array the needed size initially. Instead of this, we learned how to enlarge arrays when needed.
The book enlarges by a fixed amount, which costs only O(1), but must be applied O(N) times to reach a (large) size N. I double the size each time, which costs O(current size), but is only applied O(logN) times. The two methods cost about the same: O(N), which is also the same as needed if we make the array the full size initially.
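For instance, doubling from 1 slot up to N slots (N a power of 2) copies 1 + 2 + 4 + ... + N/2 = N-1 elements over all the enlargements combined, i.e., O(N) in total.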
The good performance attained by a BST depends on it being bushy. If it gets way out of balance (in the extreme example, if it becomes a line) then all the O(logN) boxes become O(N) and the data structure is not competitive.
There are two techniques for keeping the tree balanced, which one might call the static vs. dynamic approaches, or the manual vs. automatic approaches. We will take the easy way out and follow the book by presenting only the static/manual approach, in which the user explicitly invokes a balance() method when they feel the tree might be imbalanced. We will not consider the automatic approach, in which the add and remove operations themselves ensure that the tree is always balanced. Anyone interested in the automatic approach should type AVL tree into Google.
We will consider only the following 2-step approach: (1) extract the elements of the tree into an array; (2) insert the elements of the array into a new, empty tree.
For step 1 we presumably want to traverse the BST, but which traversal?
For step 2 we presumably want to use the BST add() method, but in what order should we choose the elements from the array?
If we employ inorder and place consecutive elements into consecutive array slots, the array is sorted. If we then use add() on consecutive elements (the natural order), we get a line. This is the worst possible outcome!
Now extract the elements with preorder and again add them back using consecutive elements of the array. This procedure is the identity, i.e., we get back the same tree we started with. A waste of time.
This procedure is harder to characterize, but it does not produce balanced trees. Let's try some.
Middle First
balance()
  n = tree.reset(INORDER);
  for (int i=0; i<n; i++)
    array[i] = tree.getNext();
  tree = new BinarySearchTree();
  tree.insertTree(0, n-1);
The code on the right shows the choice of inorder traversal. Recall that reset() returns the current size of the tree, which tells us how many times to execute getNext().
The fifth line on the right produces a new (empty) tree.
We now need to specify insertTree(), i.e., explain the cryptic Middle First.
Since we use inorder, the array is sorted. Whatever element we choose to add first will become the new root. For the result to be balanced, this element should be the middle of the sorted list. With a sorted array, the middle element is easy to find.
insertTree(lo, hi)
  if (hi == lo)            // base case: one element
    tree.add(array[lo])
  else if (hi == lo+1) {   // base case: two elements
    tree.add(array[lo])
    tree.add(array[hi])
  } else {
    mid = (lo+hi)/2
    tree.add(array[mid])
    tree.insertTree(lo, mid-1)
    tree.insertTree(mid+1, hi)
  }
Now that the root is good, what is next? If we pick an element smaller than the middle, it will become (and remain) the left child of the root. We want this tree node to have as many left as right descendants, so we want the middle of the numbers smaller than our original middle. Etcetera.
This analysis gives rise to the algorithm on the right. If we have one or two elements we insert them (in either order; these are the base cases). Otherwise, insert the middle element and then insert the remaining small ones followed by the remaining large ones (or vice versa).
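For instance, with the sorted array 10 20 30 40 50 60 70 (lo=0, hi=6), we first add 40 (the middle), then recursively handle 10 20 30 (adding 20, then 10 and 30) and 50 60 70 (adding 60, then 50 and 70). The resulting BST has 40 at the root, 20 and 60 as its children, and the four remaining values as leaves: height 2, perfectly balanced.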
Homework: 46.
Start Lecture #22
Remark: Lab 5, the last lab, is posted.
The basic idea is easy. Use the following rule to store the binary tree in an array: place the root in slot 0, and place the left and right children of the node in slot i in slots 2i+1 and 2i+2, respectively.
On the right we see three examples. The upper left example has 5 nodes. The root node A is stored in slot 0 of the array; its left and right children are stored in slots 2*0+1 and 2*0+2 respectively.
The lower left is similar, but with 7 nodes.
The right example has only 6 nodes but requires 12 array slots, because some of the planned-for children are not in the tree, but array slots have been allocated for them.
Let's check that the rules for storing left and right children are satisfied for all three examples.
Definition: A binary tree like the one on the lower left in which all levels are full is called full.
Definition: A binary tree like the one on the upper left in which the only missing nodes are the rightmost ones on the last level is called complete.
The following properties are clear.
The array representation has the advantage that the parent (of a non-root node) can be found quickly. Its disadvantages are that, if the tree is not complete, space is wasted, and, if an addition requires the array to have more slots than originally allocated, we must enlarge the array (or raise an exception).
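In code, the index arithmetic is just the following (a minimal sketch; 0-based slots as in the examples above):

private static int leftChild(int i)  { return 2*i + 1; }
private static int rightChild(int i) { return 2*i + 2; }
private static int parent(int i)     { return (i-1)/2; }  // valid for i > 0

Note that integer division makes parent() the inverse of both child formulas: (2i+1-1)/2 = i and (2i+2-1)/2 = i.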
Homework: 48,49.
Read.
A BST is an effective structure for storing sorted items. It does require two links per item, but essentially all operations are fast O(logN), providing the tree remains balanced. We presented an algorithm for explicitly re-balancing a BST and reference AVL trees as an example of BSTs that are automatically balanced.
First things first. A priority queue is not a queue. That is, it does not have FIFO semantics. I consider it a bad name, but it is much too late to try to change the name as it is absolutely standard.
In a priority queue, each item has an associated priority and the dequeue operation (I would call it remove since it is not removing the first-in item) removes the enqueued item with the highest priority.
The priority of an item is defined by the Java class of the items. In particular, we will normally assume x has higher priority than y if x.compareTo(y)>0. In fact sometimes the reverse is true and smaller items (according to compareTo()) have higher priority.
We see the UML diagram on the right. It is quite simple: tests for full and empty, and mutators that insert and remove an element.
The inclusion of isFull() suggests that we are using a bounded structure to hold the elements and indeed some implementations will do so. However, we can use an unbounded structure and just have isFull() always return false.
We specify that enqueuing onto a full queue or dequeuing from an empty queue raises an (unchecked) exception.
In 202, when you learn process scheduling, you will see that an operating system often wishes to run the highest-priority eligible process.
We can also think of the triage system used at hospital emergency rooms as another implementation of priority scheduling and priority queues.
A real, (i.e., FIFO) queue is also a priority queue. The priority is then the negative of the time at which the enqueue occurs. With this priority the earliest enqueued item has the highest priority and hence is dequeued first.
The general point is that when the next item is not picked at random, often some sort of (often not explicitly stated) priority is assigned to the items, and the one selected scores best using this metric.
We are not short of methods to implement a priority queue. Indeed, we have already implemented the following four such schemes. However, each has a defect when used for this application.
Enqueuing is trivial, just append, and fast O(1). Dequeuing, however, requires that we scan the entire list, which is slow O(N), where N is the number of items.
We now have the reverse situation: Dequeuing is O(1) since the highest priority item is the last item, but enqueuing is O(N) since we have to shift elements down to make room for the new one (finding the insertion site is O(logN) if we use binary search and thus is not rate limiting).
Dequeuing is O(1) providing we sort the list in decreasing order. But again enqueuing is O(N).
Probably the best of the bunch. For a balanced BST, both enqueuing and dequeuing are O(logN), which is quite good. For random insertions the tree will likely be balanced but in the worst case (e.g., if the insertions come in sorted order) the tree becomes a line and the operations take O(N).
We can balance the tree on occasions, but this operation is always O(N).
The biggest problem with AVL trees is that we don't know what they are. They are also the fifth of the "four such schemes". Had we studied them or some other auto-balanced BST we would have seen that all operations are O(logN).
The AVL algorithms are somewhat complicated and AVL trees are overkill. We don't need to find an arbitrary element, just the largest one. Could we perhaps find a simpler O(logN) set of algorithms solving our simpler problem?
Yes we can and we study one next.
Homework: 3,4.
In future courses you will likely learn about regions of memory called the stack and the heap.
The stack region is used to store variables that have stack-like lifetimes, i.e. the first ones created are the last ones destroyed (think of f calls g and g calls h).
So the stack region is related to the stacks we have studied. However, the heap region is not related to the heaps we will study now.
Definition: A heap is a complete binary tree in which no child has a greater value than its parent.
Note that this definition includes both a shape property and an order property.
On the right we see four trees where each node contains a Character as its info field.
Question: Why are the bottom two trees not heaps?
The left tree violates the order property since the right child of C is greater than C.
The right tree is not complete and thus violates the shape property.
Homework: 6, 7 (your example should have at least 4 nodes).
The picture on the right shows the steps in inserting a new element B into the first heap of the previous diagram.
Since the result must be a complete tree, we know where the new element must go. In the first row, all goes well: placing B in the required position yields another heap.
In the second row, we want to insert H instead of B and the result is not a heap since the order property is not satisfied. We must shift some elements around. The specific shifting that we will do, results in the rightmost tree.
A sequence of insertions is shown below, where, for variety, we use a heap in which smaller elements have higher priority. In each case the reheaping, often called a sift-up, consists of a sequence of local operations in which the inserted item is repeatedly compared with its parent and swapped if the ordering is wrong.
Let's make sure we understand every one of the examples.
We know that the highest priority element is the root so that is the element we must delete, but we also know that to maintain the shape property the position to be vacated must be the rightmost on the bottom row.
We meet both requirements by temporarily deleting the root, which we return to the user, and then moving the rightmost bottom element into the root position. But the result is probably not a heap, since we have just placed a low priority item into the slot reserved for the highest priority item.
We repair this damage by performing sift-downs, in which the root element is repeatedly swapped with its higher priority (in this example that means smaller) child.
On the board, start with the last tree from the previous diagram and perform several deletions. Note that the result of each delete is again a heap and hence the elements are removed in sorted (i.e., priority) order.
A key point in the heap implementation is that we use the array representation of a binary tree discussed last chapter. Recall that the one bad point of that representation was that when the tree is sparse, there may be many wasted array slots. However, for a heap this is never the case since heaps are complete binary trees.
As I have mentioned Java generics have weaknesses, perhaps the most important of which is the inability to create generic arrays. For this reason the book (perhaps wisely) uses the ArrayList class from the Java library instead of using arrays themselves.
However, we have not yet used ArrayList in the regular section of the course and it does make the code look a little unnatural to have elements.add(i,x); rather than the familiar elements[i]=x;
For this reason I rewrote the code to use the normal array syntax, and that is what you will see below. But this natural code does not run and must be doctored in order to execute successfully. In particular, various casts are needed and I offer a link to a working (albeit ugly) version here. You should view the version in the notes as a model of what is needed and the referenced working version as one way to bend it to meet Java's weakness with generic arrays.
package priorityQueues;

class PriQOverflowException extends RuntimeException {
  public PriQOverflowException() { super(); }
  public PriQOverflowException(String message) { super(message); }
}

package priorityQueues;

class PriQUnderflowException extends RuntimeException {
  public PriQUnderflowException() { super(); }
  public PriQUnderflowException(String message) { super(message); }
}
Since we are basing our implementation on fixed size arrays, overflows occur when enqueuing onto a full heap. Underflows occur when dequeuing from an empty structure.
We follow the book and have underflows and overflows each trigger a Java RuntimeException. Recall that these are unchecked exceptions, so a user of the package does not need to catch them or declare that it might throw them.
Each of our exceptions simply calls the constructor in the parent class (namely RuntimeException).
package priorityQueues;

public class Heap<T extends Comparable<T>>
    implements PriQueueInterface<T> {
  private T[] elements;
  private int lastIndex = -1;
  private int maxIndex;

  public Heap(int maxSize) {
    elements = new T[maxSize];   // illegal as written; see the discussion below
    maxIndex = maxSize - 1;
  }
As with BSTs, heaps have an inherent order among their elements. Hence we must again require that T extends Comparable<T>.
Recall that the heap will be implemented as an array. maxIndex is essentially the physical size of the array, whereas lastIndex, designating the highest-numbered full slot, essentially gives the size of the heap, since all lower slots are also full.
The constructor is given the size of the array, from which it creates the array and computes the largest possible index. We know that the new statement is illegal. In previous array-based structures (e.g., ArrayUnsortedList), a simple cast fixed the problem, but here more work was needed (see my doctored code).
public boolean isEmpty() { return lastIndex == -1; }
public boolean isFull()  { return lastIndex == maxIndex; }
These are trivial.
public void enqueue(T element) throws PriQOverflowException {
  if (lastIndex == maxIndex)
    throw new PriQOverflowException("helpful message");
  lastIndex++;
  reheapUp(element);
}

private void reheapUp(T element) {
  int hole = lastIndex;
  while ((hole > 0) &&
         (element.compareTo(elements[parent(hole)]) > 0)) {
    elements[hole] = elements[parent(hole)];  // move parent down
    hole = parent(hole);
  }
  elements[hole] = element;
}

private int parent(int index) { return (index-1)/2; }
If the heap is full, enqueue() raises an exception; otherwise it increases the size of the heap by 1.
Rather than placing the new item in the new last slot and then swapping it up to its final position, we place a hole in the slot and swap the hole up instead. We place the new item in the hole's final location.
This procedure is more efficient since it takes only two assignments to swap a hole and an element, but takes three to swap two elements.
To make the code easier to read I recompute the parent's index twice and isolate the computation in a new method. It would be more efficient to compute it once inline; an improvement that an aggressive optimizing compiler would perform automatically.
public T dequeue() throws PriQUnderflowException {
  if (lastIndex == -1)
    throw new PriQUnderflowException("Helpful msg");
  T ans = elements[0];
  T toMove = elements[lastIndex--];
  if (lastIndex != -1)
    reheapDown(toMove);
  return ans;
}

private void reheapDown(T element) {
  int hole = 0;
  int newhole = newHole(hole, element);
  while (newhole != hole) {
    elements[hole] = elements[newhole];
    hole = newhole;
    newhole = newHole(hole, element);
  }
  elements[hole] = element;
}

private int newHole(int hole, T element) {
  int left  = 2*hole + 1;
  int right = 2*hole + 2;
  if (left > lastIndex)    // no children
    return hole;
  if (left == lastIndex)   // one child
    return element.compareTo(elements[left])<0 ? left : hole;
  if (elements[left].compareTo(elements[right]) < 0)
    return element.compareTo(elements[right])<0 ? right : hole;
  return element.compareTo(elements[left])<0 ? left : hole;
}
If the queue is empty, dequeue() raises an exception; otherwise it returns the current root to the user and hence decreases the heap's size by 1. Removing the root leaves a hole there, which is then filled with the current last item, restoring the shape property.
But elevating the last item is likely to violate the order property. This violation can be repaired by successively swapping the item with its larger child.
Instead of successively swapping two items, it is again more efficient to successively swap the hole (currently the root position) with its larger child and only at the end move the last item into the final hole location.
The process of swapping the hole down the tree is called sift-down or reheap-down.
Finding the larger child and determining if it is larger than the parent is easy but does take several lines, so it is isolated in the newHole() routine.
// for testing only
public String toString() {
  String theHeap = new String("The heap is\n");
  for (int i=0; i<=lastIndex; i++)
    theHeap += i + ". " + elements[i] + "\n";
  return theHeap;
}
}
A useful aid in testing the heap, and many other classes, is to override the toString() method defined for any Object.
Since a heap is implemented as an array, the code on the right, which just lists all the array slots with their corresponding indices, is quite useful for simple debugging.
A heap user would likely prefer a graphical output of the corresponding tree structure, but this is more challenging to produce.
Start Lecture #23
Remark: As guessed, the (hole>1) was a typo and has been fixed. Note that the book's test case worked fine on both the wrong and the corrected version. This shows the limitation of (not extensive) testing.
The table on the right compares the performance of the various structures used to implement a priority queue. The two winners are heaps and balanced binary search trees.
Our binary search trees can go out of balance and we need to judiciously choose when to re-balance (an expensive O(N) operation).
As mentioned there are binary search trees (e.g. AVL trees) that maintain their balance and still have the favorable O(logN) performance for all operations. However, these trees are rather complicated and are overkill for priority queues where we need only find the highest element.
Homework: 10, 11.
We discussed priority queues, which can be implemented by several data structures. The two with competitive performance are the binary search tree and the heap. The former is overkill and our BST implementations did not fully address the issue of maintaining balance.
In this chapter we described a heap in detail including its implementation. Since heaps are complete binary trees an array-based implementation is quite efficient.
We skipped the material on graphs.
Given an unsorted list of elements, searching is inherently slow (O(N)) whether we seek a specific element, the largest element, the smallest element, or the element of highest/lowest priority.
In contrast, a sorted list can be searched quickly: binary search finds a specific element in O(logN) time, and the smallest and largest elements are at the ends.
But what if you are given an unsorted list and want it sorted? That is the subject of this chapter together with other searching techniques that in many cases can find an arbitrary element in constant time.
Sorting is quite important and considerable effort has been given to obtaining efficient implementations.
It has been proven that no comparison-based sorting algorithm can do better than O(NlogN). We will see algorithms that achieve this lower bound as well as simpler algorithms that are slower (O(N²)).
Roughly speaking a sorting algorithm is comparison-based if it sorts by comparing elements of the array. Any natural algorithm you are likely to think of is comparison-based. In basic algorithms you will see a more formal definition.
We will normally write in-place sorting algorithms that do not use significant space outside the array to be sorted. The exception is merge sort.
public class TestSortingMethod {
  private static final int SIZE = 50;
  private static final int DIGITS = 3;  // change printf %4d
  private static final int MAX_VALUE = (int)Math.pow(10,DIGITS)-1;
  private static final int NUM_PER_LINE = 10;
  private static int[] values = new int[SIZE];
  private static int numSwaps = 0;

  private static void initValues() {
    for (int i=0; i<SIZE; i++)
      values[i] = (int)((MAX_VALUE+1)*Math.random());
  }

  private static boolean isSorted() {
    for (int i=0; i<SIZE-1; i++)
      if (values[i] > values[i+1])
        return false;
    return true;
  }

  private static void swap(int i, int j) {
    int t = values[i];
    values[i] = values[j];
    values[j] = t;
    numSwaps++;
  }

  private static void printValues() {
    System.out.println("The value array is:");
    for (int i=0; i<SIZE; i++)
      if ((i+1) % NUM_PER_LINE == 0)
        System.out.printf("%4d\n", values[i]);
      else
        System.out.printf("%4d", values[i]);
    System.out.println();
  }

  public static void main(String[] args) {
    initValues();
    printValues();
    System.out.printf("The array %s initially sorted.",
                      isSorted() ? "is" : "is not");
    sortValues();
    printValues();
    System.out.printf("After performing %d swaps,\n", numSwaps);
    System.out.printf("the array is %s sorted.",
                      isSorted() ? "now" : "still not");
  }

  private static void sortValues() {
    // call sort program here
  }
}
Since we will have several sort routines to test, we write a general testing harness into which we can plug any sorting routine that sorts an int[] array named values.
The harness initializes values to random non-negative integers, prints the values, checks if the initial values are sorted (probably not), calls the sort routine in question, prints the array again, and checks if the values are now sorted.
In order to be flexible in testing, the harness uses three configuration constants: SIZE, the number of items to be sorted; DIGITS, the number of digits in each number; and NUM_PER_LINE, the number of values printed on each line.
The maximum value (used in generating a problem) is calculated from DIGITS, but the %4d in printf() must be changed manually.
The following output is produced when the configuration constants are set as shown on the right. The array is not initially sorted. After performing 0 swaps, the array is still not sorted.
For convenience, a swap() method is provided that keeps track of how many swaps have been performed.
These algorithms are simple (a virtue), but slow (a defect). They are the obvious choice when sorting small arrays, say a hundred or so entries, but are never used for serious sorting problems with millions of entries.
I am not sure what the adjective straight means here.
The idea behind selection sort is simple: find the smallest element and swap it into the first slot, then find the smallest of the remaining elements and swap it into the second slot, etcetera.
private static void selectionSort() {
  for (int i=0; i<SIZE-1; i++)
    swap(i, minIndex(i, SIZE-1));
}

private static int minIndex(int startIndex, int endIndex) {
  int ans = startIndex;
  for (int i=startIndex+1; i<=endIndex; i++)
    if (values[i] < values[ans])
      ans = i;
  return ans;
}
The code, shown on the right is also simple.
The outer loop (selectionSort() itself) does the etcetera: it causes us to successively swap the first slot with the overall minimum, the second slot with the minimum of the remaining elements, etc.
The minIndex() routine finds the index in the given range whose slot contains the smallest element. For selectionSort(), endIndex could be omitted since it is always SIZE-1, but in other sorts it is not.
Selection sort is clearly simple and we can see from the code that it uses very little memory beyond the input array. However, it is slow.
The outer loop has N-1 iterations (N is SIZE, the size of the problem). The inner loop first has N-1 iterations, then N-2, ..., and finally 1 iteration.
Thus the total number of comparisons between two values is N-1 + N-2 + N-3 + ... + 1. You will learn in basic algorithms that this sum equals (N-1)N/2 = O(N²), which is horrible when, say, N=1,000,000.
private static void bubbleSort() {
  boolean sorted = false;
  while (!sorted) {
    sorted = true;
    for (int i=0; i<SIZE-1; i++)
      if (values[i] > values[i+1]) {
        swap(i, i+1);
        sorted = false;
      }
  }
}
Bubble and selection sorts both try to get one element correct and then proceed to get the next element correct. (For bubble it is the last (largest) element that is corrected first.) A more significant difference is that bubble sort only compares and swaps adjacent elements and performs more swaps (but not more comparisons).
There are several small variations on bubble sort. The version on the right is one of the simplest. Although there are faster versions, they are all O(N²) in the worst case, so they are never used for large problems unless the problems are known to be almost sorted.
This analysis is harder than for selection sort; we shall skip it.
private static void insertionSort() {
  for (int i=1; i<SIZE; i++) {
    int j = i;
    while (j>0 && values[j-1]>values[j]) {
      swap(j, j-1);
      j--;
    }
  }
}
For this sort we take each element and insert it where it belongs in the elements sorted so far. Initially, the first element is itself sorted. Then we place the second element where it belongs with respect to the first, and then the third with respect to the first two, etc.
In more detail, when elements 0..i-1 have been sorted, swap element i with elements i-1, i-2, ..., until the element is in the correct location.
The best case occurs when the array is already sorted, in which case the element comparison fails immediately and no swaps are done.
The worst case occurs when the array is originally in reverse order, in which case the compares and swaps always go all the way to the first element of the array. This again gives a (worst case) complexity of O(N²).
Homework: 2,5,6,8.
These are the ones that are used for serious sorting.
private static void mergeSort(int first, int last) {
  if (first < last) {
    int middle = (first+last) / 2;
    mergeSort(first, middle);
    mergeSort(middle+1, last);
    merge(first, middle, last);
  }
}

private static void merge(int leftFirst, int leftLast, int rightLast) {
  int leftIndex = leftFirst;
  int rightIndex = leftLast+1;
  int[] tempArray = new int[rightLast-leftFirst+1];
  int tempIndex = 0;
  while (leftIndex<=leftLast && rightIndex<=rightLast)
    if (values[leftIndex] < values[rightIndex])
      tempArray[tempIndex++] = values[leftIndex++];
    else
      tempArray[tempIndex++] = values[rightIndex++];
  while (leftIndex <= leftLast)
    tempArray[tempIndex++] = values[leftIndex++];
  while (rightIndex <= rightLast)
    tempArray[tempIndex++] = values[rightIndex++];
  for (tempIndex=0; tempIndex<tempArray.length; tempIndex++)
    values[leftFirst+tempIndex] = tempArray[tempIndex];
}
The mergesort routine itself might look too simple to work, much less to be one of the best sorts around. The reason for this appearance is its recursive nature. To mergesort the array, use mergesort to sort the first half, then use mergesort to sort the second half, then merge these two sorted halves.
Merging two sorted lists is easy. I like to think of two sorted decks of cards.
Compare the top card of each pile and take the smaller. When one of the piles is depleted, simply take the remainder of the other pile.
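For instance, merging the sorted piles 1 3 5 and 2 4: take 1, then 2, then 3, then 4; the second pile is now empty, so take the remaining 5, giving 1 2 3 4 5.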
Note that the tempArray for the last merge performed is equal in size to the initial array. So merge sort requires O(N) additional space, a disadvantage.
The analysis is harder than the program and I leave the details for basic algorithms. What follows is the basic idea.
To simplify slightly, let's assume we are sorting a power-of-two number of values. On the right is the merge tree for sorting 16 = 2⁴ values.
From the diagram we see 31 calls to mergeSort(): one of size 16, two of size 8, ..., 16 of size 1. In general there will be 2N-1 calls if we are sorting N (a power of 2) values.
MergeSort() itself is trivial, just 3 method calls, i.e. O(1), and hence all the calls to mergeSort() combined take O(N).
What about the calls to merge()? Each /\ in the diagram corresponds to a merge. If you look at the code for merge(), you can see that each call is O(k) when merging k values. The bottom row of the diagram contains 8 merges, each merging two values. The next row contains 4 merges, each of 4 values. Indeed, in each row N=16 elements are merged in total. So each row of merges requires O(N) effort.
Question: How many rows are there?
Answer: log N.
Hence the total effort for all the merges is O(N log N). Since all the mergeSort() calls combined take only O(N), the total effort for mergesort is O(N log N), as desired.
Homework: 11,12.
The idea of this algorithm is obvious, but getting it right is hard, as there are many details and potential off-by-one pitfalls.
Like mergeSort(), quickSort() recursively divides the array into two pieces which are sorted separately. However, in this algorithm one piece has all the small values, the other all the large ones. Thus there is no merging needed.
The recursion ends with a base case in which the piece has zero or one items and thus is already sorted.
The idea is that we choose one of the values in the array (call it X) and swap elements so that everything in the left (beginning) part of the array is less than or equal to X, everything in the right part is greater than X, and X is in between the two parts.
Now X is correctly located and all we need to do is sort each of the two parts. The clever coding is needed to do the swapping correctly.
We will do examples on the board, but not study the coding. All the sorting algorithms are available here.
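For reference, here is a minimal sketch (not the course's official code) of a quickSort() in the same style as the sorts above; it assumes the same static values[] array and swap() helper, and uses the first value as X, the choice discussed below.

// Sketch only: partition around X = values[first], then recurse.
private static void quickSort(int first, int last) {
    if (first < last) {
        int x = values[first];            // the chosen value X
        int split = first;                // values[first+1..split] <= X
        for (int i = first+1; i <= last; i++)
            if (values[i] <= x)
                swap(++split, i);
        swap(first, split);               // put X between the two parts
        quickSort(first, split-1);        // sort the small values
        quickSort(split+1, last);         // sort the large values
    }
}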
The speed depends on how well the array is divided. If each piece is about 1/2 the size, the time is great: O(N log N). If one piece is extremely small, the time is poor: O(N²).
The key to determining the sizes of the pieces is the choice of X above. In my code, X is set to the first value in the interval to be reordered. Like any other choice of X, this can sometimes be bad and sometimes be good.
It should be noted that, if the original array is almost sorted, this choice of X is bad. Since such arrays often arise in practice, real quicksort programs use a different X. The book suggests the middle index in the range; others suggest taking the first, last, and middle index and choosing the one whose value is in the middle of the three.
Each of these choices has as many bad cases as any other; the point of using something other than simply the first entry is that, for the alternatives, the bad cases are not as common in practice as an almost-sorted array.
private static void heapSort() {
    PriQueueInterface<Integer> h = new Heap(values.length);
    for (int i=0; i<values.length; i++)
        h.enqueue(values[i]);
    for (int i=values.length-1; i>=0; i--)
        values[i] = h.dequeue();
}
On the right is a simple heapsort using our heap from chapter 9. I am surprised the book didn't mention it.
A disadvantage of the simple heapsort is that it uses an extra array (the heap h). The better algorithm in the book builds the heap in place so uses very little extra space.
Although the values array is not a heap, it does satisfy the shape property. Also, all the leaves (the array entries with the largest indices) are valid subheaps, so we start with the highest-indexed non-leaf and sift it down using a variant of reheapDown() from chapter 9. Then we proceed up the tree and repeat the procedure.
I added an extra print to heapsort in the online code to show the array after it has been made into a heap, but before it has become sorted.
Now that we have a heap, the largest element is in values[0]. If we swap it with values[SIZE-1], we have the last element correct and hence no longer access it. The first SIZE-1 elements are almost a heap; just values[0] is wrong. Thus a single reheapDown() restores the heap and we can continue.
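A minimal sketch of that in-place algorithm (not the book's code; it assumes the same static values[] array and swap() helper used by the other sorts):

// Sift values[root] down until the subheap rooted there is valid.
private static void siftDown(int root, int lastIndex) {
    int child;
    while ((child = 2*root + 1) <= lastIndex) {
        if (child < lastIndex && values[child+1] > values[child])
            child++;                      // pick the larger child
        if (values[root] >= values[child])
            return;                       // heap property holds
        swap(root, child);
        root = child;
    }
}

private static void inPlaceHeapSort() {
    int n = values.length;
    for (int i = n/2 - 1; i >= 0; i--)    // highest-indexed non-leaf first
        siftDown(i, n-1);                 // make values[] a heap
    for (int i = n-1; i > 0; i--) {
        swap(0, i);                       // move the largest to the end
        siftDown(0, i-1);                 // restore the heap on the rest
    }
}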
Homework: 19, 22, 23
All the sorting algorithms are implemented here, embedded in the test harness developed earlier. To run a given sorting program, go to the sortValues() method and comment out the calls to all the other sorts.
Several good general comments about testing as applied to sorting.
public static void badSort(int[] a) {
    for (int i=0; i<a.length; i++)
        a[i] = i;
}
However, there is an important consideration that is not normally discussed and that is rather hard to check. Executing the method on the right results in a being sorted; however, the result bears little resemblance to the original array. Many sort checkers would validate badSort().
We have paid little attention to small values of N. The real concern is not for small problems; they are fast enough with any method. The issue arises with recursive sorts. Using heapsort, mergesort, or quicksort on a large (say 1,000,000) entry array results in very many recursive calls to small subproblems that can be solved more quickly with a simpler sort.
Good implementations of these recursive algorithms call a simpler sort when the size of the subproblem is below a threshold.
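A sketch of how such a cutoff might look for the mergeSort() above; the threshold of 16 and the range version of insertionSort() are illustrative inventions, not tuned values.

private static final int THRESHOLD = 16;  // made up; real libraries tune this

private static void hybridMergeSort(int first, int last) {
    if (last - first + 1 <= THRESHOLD) {
        insertionSort(first, last);       // simple sort on small pieces
    } else {
        int middle = (first+last) / 2;
        hybridMergeSort(first, middle);
        hybridMergeSort(middle+1, last);
        merge(first, middle, last);
    }
}

// Insertion sort restricted to values[first..last].
private static void insertionSort(int first, int last) {
    for (int i = first+1; i <= last; i++)
        for (int j = i; j > first && values[j-1] > values[j]; j--)
            swap(j, j-1);
}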
Method calls are not free. The overhead of the call is especially noticeable when a method with a small body is called often. Programmers and good compilers can eliminate many of these calls.
Most of the time minor performance improvements such as eliminating method calls are not made ... with good reason. The primary quality metric for all programs is correctness. For most programs, performance is not the second quality metric; instead ease of modification, on time completion, and cost rank higher. Cost is basically programmer time so it is not wise to invest considerable effort for small performance gains.
Note that the preceding paragraph does not apply to the huge gains that occur in large problems when an O(N log N) algorithm replaces an O(N²) algorithm.
The only significant difference in storage space required is between those algorithms that require extra space proportional to the size of the array (such as mergesort, and the simple heapsort above) and those that do not. Normally we worry more about time than space, but the latter does count.
You might worry that, if we are sorting large objects rather than primitive ints or tiny Characters, swapping elements would be expensive. For example, if we had an array of Strings and it turned out that each String was a book containing about a million characters, then swapping two of these Strings would be very expensive indeed.
But that is wrong!
The swaps do not actually swap the million character strings but swap the (small) references to the strings.
We have used this interface many times. The only new point in this section is to note that a class can have only one compareTo() method and thus can be sorted on only one basis.
There are cases where you want two different orderings. For example, you could sort airline flights on 1 March from NYC to LA by time of day or by cost for the cheapest coach ticket. Comparable does not offer this possibility.
The somewhat more complicated Comparator interface is based on a compare() method with two parameters, the values to be compared. The interesting part is the ability to have multiple such methods. However, we will not pursue this interface further.
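Nevertheless, a small sketch shows the idea, using the flight example above; the Flight class and its fields are invented for illustration.

import java.util.Arrays;
import java.util.Comparator;

class Flight {
    int departureTime;   // minutes after midnight
    int coachFare;       // cheapest coach ticket, in dollars
}

// Two independent orderings for the same class.
class ByTime implements Comparator<Flight> {
    public int compare(Flight a, Flight b) {
        return a.departureTime - b.departureTime;  // fine: small, non-negative
    }
}

class ByFare implements Comparator<Flight> {
    public int compare(Flight a, Flight b) {
        return a.coachFare - b.coachFare;
    }
}

// Usage: the same array can be sorted two different ways.
//   Arrays.sort(flights, new ByTime());
//   Arrays.sort(flights, new ByFare());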
A sort is called stable if it preserves the relative order of duplicate values. This becomes important when an object has several fields and you are only sorting based on one field. Thus duplicated objects need only have the sorted field equal and hence duplicates are not necessarily identical.
Assume you first sort on one field and then on a second. With a stable sort you are assured that items with equal second field have their first fields in order.
Sometimes searching is not needed. If we have an indexed list implemented with an array A of objects of some class C, then obtaining entry number 12 is simply retrieving A[12], a constant-time (i.e., O(1)) operation.
To be specific, assume C consists of NYU students and an object in C contains the student's name, N number, and major. Then finding the major of the student in entry 12 is simply A[12].major, again an O(1) operation.
However, many times the easy solution above is not possible. For example, with the same implementation, finding the entry number of a student named Tom Li requires searching, as does finding Li's major.
Linear searching (trying the first object, then the second, etc.) is always possible, but is slow (O(N) for N objects).
In many circumstances, you know that there will be popular and unpopular objects. That is, you know there will be some objects searched for much more frequently than others. If you know the identity of the popular objects, you can place them first so that linear search will find them quickly. But what if you know only that there are popular objects but don't know what they are?
In this last situation the move-to-the-front algorithm performs well (I often use it for physical objects). When you search for and find an object, move it to the front of the list (sliding the objects that preceded it down one slot). After a while this will have popular objects toward the front and unpopular objects toward the rear.
For physical objects (say documents) it is often O(1) to move a found object to the front. However, for an array the sliding down can be O(N). For this reason, an approximation is sometimes used where you simply swap the found object with the first, clearly an O(1) task.
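For an int array, a sketch of that swap approximation (names invented):

// Linear search that swaps a found item with the first element,
// an O(1) approximation of move-to-the-front.
// Returns the new index of x, or -1 if x is absent.
private static int searchAndSwap(int[] a, int x) {
    for (int i = 0; i < a.length; i++)
        if (a[i] == x) {
            if (i > 0) {
                int tmp = a[0]; a[0] = a[i]; a[i] = tmp;
                return 0;
            }
            return i;
        }
    return -1;
}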
Knowing that a list is sorted helps linear search and often permits the vastly faster binary search.
When linearly searching a sorted list for an element that is not present, one can stop as soon as the search passes the point where the item would have been stored, reducing the number of items searched to about half on average.
More significantly, for an array-based sorted list we can use the O(log N) binary search, a big speed advantage.
Start Lecture #24
We learned that if a list is unsorted we can find an element in the list (or determine that the element is not there) in time O(N) using linear searching (try the first, then the second, etc.).
If the list is sorted we can reduce the searching time to O(log N) by using binary search (roughly: try the middle, then the middle of the correct half, then the middle of the correct quarter, etc.).
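A sketch of that binary search over a sorted int array; it returns the index of x, or -1 if x is not present.

private static int binarySearch(int[] a, int x) {
    int low = 0, high = a.length - 1;
    while (low <= high) {
        int middle = (low + high) / 2;
        if (a[middle] == x)
            return middle;
        if (a[middle] < x)
            low = middle + 1;     // x can only be in the right half
        else
            high = middle - 1;    // x can only be in the left half
    }
    return -1;                    // not present
}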
Can we ever hope to find an element in a list using a constant amount of time (i.e., so that the time to look up an element does not depend on the number of elements)?
In fact this is sometimes possible. I found on the web the roster of this year's NY Jets football team. Assume you have an array of strings NYJets with NYJets[11]="Kerley", NYJets[21]="Bell", NYJets[18]="*", etc. Then you can answer questions like these in constant time.
We say that the player number is the key.
But this case is too easy. Not only are there very few entries, but the range of indices is small.
Now consider a similar list of the students in this class with key their NYU N numbers. Again there are very few entries (about 50), but now the range of indices is huge. My N number has 8 digits, so we would need an array of 100,000,000 entries, with all but 50 being "*".
This is the situation in which hashing is useful.
The book makes the following three assumptions to simplify the discussion.
Some comments on these assumptions.
Instead of indexing the array by our N numbers, take just the last 3 digits (i.e., compute (N number) % 1000). Now we only need an array of size 1,000 instead of 100,000,000.
Definition: The function mapping the original key to another (presumably smaller) value, which is then used to access the table, is called a hash function.
For now we will use the above hash function, i.e., we take the key modulo the size of the table.
All is well, searching is O(1), PROVIDING no two N numbers in the class end in the same three digits, i.e., providing we have no collisions.
public class Student {
    private int nNumber;   // the key
    private String name;
    private String major;
    // etc

    public Student(int nNumber, String name, String major) {
        this.nNumber = nNumber;
        this.name = name;
        this.major = major;
    }
    public int getNNumber() { return nNumber; }
    public String getName() { return name; }
    public String getMajor() { return major; }
    public void setNNumber(int nNumber) { this.nNumber = nNumber; }
    public void setName(String name) { this.name = name; }
    public void setMajor(String major) { this.major = major; }
}

public class StudentHashtable {
    private Student[] studentHashtable;
    private int size;

    public StudentHashtable(int size) {
        this.size = size;
        studentHashtable = new Student[size];
    }
    public int hash(int key) {
        return key % size;
    }
    public void add(Student s) {
        studentHashtable[hash(s.getNNumber())] = s;
    }
    public Student get(Student s) {
        return studentHashtable[hash(s.getNNumber())];
    }
}
Although we will not be developing hashtables to the point of actual code, I thought it would be helpful to see a possible basic setup.
It would be better to have a generic hashtable class that accepted as a type parameter (the <T> business) the type of object being stored, but I didn't want to battle with generic arrays since we aren't actually going to use the implementation.
So I was inelegant and made the class specific to the object being stored, in this case Student.
I defined a standard hash() method taking and returning an int. Were we developing a complete implementation, I might have defined it as taking a Student as input and then have hash() itself extract the nNumber, which it would then use as key.
Our add() is easy.
The get() method returns the entry from the hashtable that is equal to the argument.
Something must be wrong. This is too easy.
A minor simplification is that we assume there is room for the element we add, and we assume the element we are asked to get is present. We could fix the former by keeping track of the number of entries, and fix the latter by checking the found entry and returning null if it doesn't match.
The real simplification is that we assume collisions don't happen. That is, we assume that different keys are hashed to different values. But this is clearly not guaranteed. Imagine we made a 1000-entry table for NYU students and picked, say, 900 students. Our hash function just returns the last three digits of their N number. I very much doubt that 900 random NYU students all have unique 3-digit N number suffixes. Collisions do and will happen. The question is what to do when they occur, which is our next topic.
One can attack collisions in two ways: minimize their occurrence and reduce the difficulty they cause. We will not do the former and will continue to just use the simple mod function.
One place where collisions occur is in the add() method. Naturally, if you add an element twice, both will hash to the same place. Let's ignore that possibility and say we never add an element already present. When we try to add an element, we store the element in the array slot given by the hash of the key. A collision occurs when the slot already contains a (different) entry.
We need to find a location to store an item that hashes to the same value as an already stored item. There are two classes of solutions: open addressing and separate chaining.
public void add(Student s) {
    int location = hash(s.getNNumber());
    while (studentHashtable[location] != null)
        location = (location+1) % size;
    studentHashtable[location] = s;
}

public Student get(Student s) {
    int location = hash(s.getNNumber());
    while (studentHashtable[location].getNNumber() != s.getNNumber())
        location = (location+1) % size;
    return studentHashtable[location];
}
The simplest example of open addressing is linear probing. When we hash to a full slot, we simply try the next slot, i.e., we increase the slot number by 1 (mod the table size).
The new versions of add() and get() are on the right. The top loop would not terminate if the table is full; the bottom loop would not terminate if the item was not present. However, we are assuming that neither of these conditions holds.
As mentioned above it is not hard to eliminate these assumptions.
Homework: 42
More serious is trying to support delete and search.
Let the capacity be 10 and do the following operations.
Insert 13; Insert 24; Insert 23; Delete 24;
The result is shown on the right.
If you now look for 23, you will first try slot 3 (filled with 13) and then try slot 4 (empty since 24 was deleted). Since 23 would have gone here, you conclude erroneously that it is not present.
The solution is not to mark a deleted slot as empty (null) but instead as deleted (I will use **). Then the result looks as shown on the right and an attempt to find 23 will proceed as follows. Try slot 3 (filled with 13), try 4 (a deleted item), try slot 5 (success).
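A sketch of how this might look in the StudentHashtable; the DELETED sentinel is an invented stand-in for the ** marker, and as before we assume the item sought is present.

// Sentinel marking a slot whose entry was deleted (never a real student).
private static final Student DELETED = new Student(-1, "**", "**");

public void remove(Student s) {
    int location = hash(s.getNNumber());
    while (studentHashtable[location].getNNumber() != s.getNNumber())
        location = (location+1) % size;
    studentHashtable[location] = DELETED;   // mark, don't null out
}

public Student get(Student s) {
    int location = hash(s.getNNumber());
    // Probe past deleted slots; stop only at the matching entry.
    while (studentHashtable[location] == DELETED
            || studentHashtable[location].getNNumber() != s.getNNumber())
        location = (location+1) % size;
    return studentHashtable[location];
}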
A problem with open addressing, particularly with linear probing, is that clusters develop. After one collision we have a block of two elements that hash to one address. Hence the next addition will land in this cluster if it hashes to either of the two addresses. With a cluster of three, any new item that hashes to one of the three will add to the cluster.
In general clustering becomes very severe if the array is nearly full. A common rule of thumb is to ensure that the array is less than 70% full.
Instead of simply adding one to the slot number, other techniques are possible. Choosing the new addresses when a collision occurs is called rehashing. This is a serious subject; the book only touches on it, and we shall skip it.
(Note: I split up the book's Buckets and Chaining section.)
Sometimes, instead of an array of slots each capable of holding one item, the hash table is an array of buckets, each capable of holding a small fixed number of items. If a bucket has space for, say, 3 items, then there is no problem if up to 3 items hash to the same bucket. But if a 4th one does, we must do something else.
One possibility is to go to the next bucket which is similar to open addressing / linear probing.
Another possibility is to store all items that overflow any bucket in one large overflow bucket.
Homework: 44.
In open addressing the hash value gives the (first) slot number to try for the given item and collisions result in placing items in other slots. In separate chaining the hash value determines a bucket into which the item will definitely be placed.
The adjective separate is used to indicate that items with different hash values will be kept separate.
Since items will definitely be placed in the bucket they are hashed to, these buckets will contain multiple items if collisions occur. Often the bucket is implemented as a list of chained (i.e., linked) nodes, each node containing one item.
A linked list sounds bad, but remember that the belief/hope is that there will not be many items hashing to the same bucket.
Other structures (often tree-based) can be used if it is feared that many items might hash to the same bucket.
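A sketch of a chained version of the student table (not code we will develop; java.util.LinkedList stands in for a hand-built chain of nodes):

import java.util.LinkedList;

public class ChainedStudentHashtable {
    private LinkedList<Student>[] buckets;
    private int size;

    @SuppressWarnings("unchecked")
    public ChainedStudentHashtable(int size) {
        this.size = size;
        buckets = new LinkedList[size];
        for (int i = 0; i < size; i++)
            buckets[i] = new LinkedList<Student>();
    }

    public int hash(int key) { return key % size; }

    public void add(Student s) {            // always lands in its bucket
        buckets[hash(s.getNNumber())].add(s);
    }

    public Student get(int nNumber) {       // search only one bucket
        for (Student s : buckets[hash(nNumber)])
            if (s.getNNumber() == nNumber)
                return s;
        return null;                        // not present
    }
}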
Homework: 45, 46 (ignore the reference to #43), 47, 48.
Naturally a good hash function must be fast to compute. Our simple modulo function was great in that regard. The more difficult requirement is that it not give rise to many collisions.
To choose a good hash function one needs knowledge of the statistical distribution of the keys.
This is what we used.
If there were no collisions, hashing would be constant time, i.e., O(1), but there are collisions and the analysis is serious.
Sorting, searching, and hashing are important topics in practical applications. There is much we didn't cover. For example, we didn't concern ourselves with the I/O considerations that arise when the items to be sorted or searched are stored on disk. We also ignored caching behavior.
Often the complexity analysis is difficult. We barely scratched the surface; Basic Algorithms goes further.
Start Lecture #25
Remark: See the CS web page for the final exam location and time.
Went back and briefly covered 10.4 and 10.5. However, these sections remain optional and will not be on the final exam.
Start Lecture #26
Reviewed practice exam.
Here are the UMLs for the major structures studied.
|
http://cs.nyu.edu/courses/fall12/CSCI-UA.0102-001/class-notes.html
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
30 April 2009 23:15 [Source: ICIS news]
NEW YORK (ICIS news)--Swiss pharmaceutical major Roche is ready to ramp up production of Tamiflu, a company official said on Thursday, as pharmaceutical chemical companies respond to the swine flu outbreak.
Chemical manufacturers associated with Tamiflu, one of only two anti-virals known to be effective against the current flu, could see a demand windfall if nations around the world move to deepen stockpiles of the drugs.
The World Health Organization (WHO) on Wednesday raised its assessment of the outbreak to phase 5, indicating widespread human infection and one level short of declaring a pandemic. As of Thursday, 11 countries had officially reported 257 cases of swine flu – or influenza A (H1N1) – infection, including eight deaths, according to the WHO.
Although many national stockpiles of Tamiflu (oseltamivir) were established after the 2003 avian flu pandemic, fears are mounting that the quantities now available of Tamiflu and another antiviral, GlaxoSmithKline’s Relenza, will not be sufficient.
David Reddy, Roche’s global pandemic preparedness leader, sought to relieve worries.
“Roche’s 3m treatment courses donated to the WHO in 2006 are ready on 24-hour standby to be deployed to areas of need as determined by the WHO,” he said. “We will be working through the night to do all we can to respond in a rapid, timely and responsible manner for patients in need,” he said.
Roche donated 5m treatment courses of Tamiflu in 2006 - a 2m treatment course “regional stockpile” and a 3m treatment course “rapid response” stockpile. The regional stockpiles are held by the WHO at locations around the world.
Roche said that it had fulfilled government orders for a total of 220m Tamiflu treatments.
Roche also said that it had been in contact with WHO since the UN agency’s pandemic alert was elevated.
If the outbreak spreads widely, existing quantities may not be sufficient.
The WHO has previously recommended that governments prepare for pandemics by stockpiling enough treatments for half the population. Most countries are nowhere near that level, although some EU members come close.
According to the Wall Street Journal,
India, with a population over 1bn, has a stockpile of just 1m treatments, according to Dow Jones International News. East Asian nations are reportedly better prepared, having been devastated by the avian flu. Many African countries have none.
However, time may be on the side of public health. On Wednesday, a team of researchers at Northwestern University released a computer simulation of the current outbreak that projected a worst-case scenario of only 1,700 cases in the US in four weeks, by which point production could be well under way.
Roche has the capacity to produce 70m additional treatments over six months, a fine chemicals consultant said.
Roche could call on the global network of over 17 fine chemical contractors that the drug company established after the 2003 avian flu pandemic to meet demand for stockpiling Tamiflu. Among the members of this network are Groupe Novasep, Clariant, PHT International, Albemarle and AMPAC Fine Chemicals.
Hyderabad, India-based Hetero Drugs has also been producing a generic version authorised by Roche, which has also authorised Chinese producers Shanghai Pharmaceutical Group and HEC Group to provide pandemic supplies in
Indian drugmakers Cipla and Ranbaxy have been producing generic versions for sale into markets where Tamiflu does not have patent protection. Cipla could produce 1.5m treatments within 4-6 weeks, according to a company official quoted
|
http://www.icis.com/Articles/2009/04/30/9212707/swiss-tamiflu-maker-gears-up-for-swine-flu-pandemic.html
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
09 August 2012 03:33 [Source: ICIS news]
By MK Liu
SINGAPORE (ICIS)--China's butadiene (BD) spot prices may fall further as demand remains weak while supply is being augmented by imports, industry sources said on Thursday.
Domestic BD prices were assessed at yuan (CNY) 17,800-18,000/tonne ($2,798-2,830/tonne) on 8 August, down by 12% or CNY2,200/tonne from 20 July, according to Chemease, an ICIS service in China.
“BD prices may fall to around CNY15,000/tonne if there is more-than-sufficient supply in [the] domestic market and no revival in fundamental demand,” a major BD producer said.
Imported cargoes are currently making up for the current shortfall in domestic production because of scheduled turnarounds at BD facilities in
In July,
With more BD shipments coming in amid continued weakness in demand from downstream synthetic rubber makers, BD prices have been on a continuous downtrend since mid-July.
“BD imports volumes may increase to 40,000 tonne for July and August to fill [the supply] gap in the domestic market,” a market player said.
Between July and August, an estimated 34,000-38,000 tonnes will be shaved from
The low run rate will lead to production losses of about 5,000 tonnes of BD, the source said.
Liaoning Huajin Tongda Chemical, meanwhile, has shut down its 100,000 tonne/year BD unit at
Sinopec SABIC
"We have to purchase imported cargoes to keep our plants running at a stable operating rate,” a downstream synthetic rubber producer said.
BD is a raw material for synthetic rubbers, which go into the production of tyres for the automotive industry.
But overall BD demand from the downstream synthetic rubber sector is still weak as some facilities are also either shut or due to be taken off line for maintenance.
TSRC-UBE (
Tianjin Lugang Petroleum Rubber also has a scheduled maintenance at its 100,000 tonne/year SBR plant starting 10 August, market sources said. The plant shutdown will last 50 days, they said.
($1 = CNY6.36)
|
http://www.icis.com/Articles/2012/08/09/9584495/china-bd-may-extend-falls-as-imports-boost-domestic-supply.html
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
By Adam Bien
The enhanced simplification of Enterprise JavaBeans (EJB) 3.1 provides a perfect synergy with the power and flexibility of Contexts and Dependency Injection (CDI).
Downloads:
Java EE
EJB Specs and Class Files
Version 3.1 of the Enterprise JavaBeans (EJB) specification greatly simplifies the EJB specification and provides declarative aspects, such as transactions, security, threading, and asynchronous processing. Contexts and Dependency Injection (CDI) provides Dependency Injection (DI) power and flexibility. Both are part of the Java Platform, Enterprise Edition (Java EE) 6 specification and ready to use. This article describes why there is perfect synergy between EJB 3.1 and CDI.
The EJB 3.1 specification is a lightweight Plain Old Java Object (POJO) programming model. The only requirement is the existence of the @Stateless, the @Stateful, or the (less common) @Singleton annotation.
@Stateless
public class LightweightPojo {
    public String hello() {
        return "I'm very lightweight !";
    }
}
An EJB 3.1 bean is an annotated POJO. In addition, it can be deployed in a WAR file, side by side with your servlets, JavaServer Faces (JSF) 2 technology, and other Web components.
EJB 3.1 is especially lightweight, because all the run-time code already resides on the server. There is no need to deploy the container or any injection framework with your application, which makes the deployment and turnaround cycles really fast (usually less than 0.5 seconds) and the WAR file small.
A ready-to-deploy WAR file with the EJB 3.1 code above takes exactly 4 KB on the hard drive. Ultimately, the EJB 3.1 container (runtime environment) is surprisingly small as well. The installation size of a typical EJB 3.1 container takes about 1 MB on the hard drive.
In the case of GlassFish 3.0.1, these components are around 831 KB (ejb-container.jar), 12 KB (ejb-internal-api.jar), and 86 KB (ejb.security.jar)—all OSGi bundles. These OSGi modules are, of course, not self-contained; rather, they contain references to other infrastructural modules, such as transactions, concurrency, or deployment. But even a size of 50 MB would be vastly smaller than the common perception of the actual container size.
Common myths about "bloated J2EE" are, like the Java 2 Platform, Enterprise Edition (J2EE) name itself, several years (more than 5) old. With the advent of Java EE 5, the programming model was drastically simplified. In Java EE 5, an EJB 3 session bean was just an interface with an implementation. In the EJB 3.1 specification, even the interface became superfluous.
The true benefit of EJB 3.1 is declarative cross-cutting aspects, such as the single-threaded execution model with declarative transactions. And you don't even need to configure the beans. EJB 3.X beans come with reasonable defaults.
Java EE 6 follows the Convention over Configuration model, sometimes also called "Configuration by Exception." The Java EE 6 APIs, such as JSF 2, EJB 3.1, CDI, Java Persistence API (JPA) 2, and even J2EE Connector Architecture (JCA), follow this principle as well.
The method hello in the LightweightPojo bean gets executed in a transaction. The configuration for this behavior was inherited from the class-level defaults. Every EJB 3.X bean comes with the configuration @TransactionAttribute(TransactionAttributeType.REQUIRED).
You can, of course, override this behavior on the class level or the method level using annotations. And you can overwrite the annotations with XML. But you don't need to. The provided defaults are good enough for the first iterations and sometimes even for the whole project.
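For instance, a method that must always run in its own transaction could override the default like this; the AuditLogger bean is an invented example, not from the article.

import javax.ejb.Stateless;
import javax.ejb.TransactionAttribute;
import javax.ejb.TransactionAttributeType;

@Stateless
public class AuditLogger {

    // Overrides the REQUIRED default for this one method: a fresh
    // transaction is started even if the caller already has one, so
    // the log entry survives a rollback of the caller's transaction.
    @TransactionAttribute(TransactionAttributeType.REQUIRES_NEW)
    public void log(String message) {
        // ...persist the message...
    }
}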
The setting TransactionAttributeType.REQUIRED itself is interesting. If there is no transaction during the invocation of the method, the container will start one for you. If there already is one, it will just be reused. This behavior is particularly useful for chaining bean invocations together. The first bean will start a transaction for you, which gets propagated to all other beans. The method Cart#checkout below invokes the OrderSystem and CustomerNotification session beans:
@Stateless
public class Cart {

    @EJB OrderSystem ordering;
    @EJB CustomerNotification notifier;

    public void checkout() {
        ordering.placeOrder();
        notifier.sendNotification();
    }
}
Transactions get really interesting together with Java Message Service (JMS), JCA, JPA, or CDI event interactions. Because everything gets executed in the context of the Cart bean's transaction, the all-or-nothing principle applies. Either all changes made in OrderSystem and CustomerNotification are captured persistently, or none are.
Imagine that the OrderSystem creates and stores a JPA entity in the database and CustomerNotification notifies an external system via JMS. After the successful completion (no unchecked exceptions and no explicit rollback) of the Cart#checkout method, all changes are persistent. Either the Order entity gets persisted in the database and the notification is sent, or all changes are automatically rolled back for you.
The object identity is also preserved in a transaction. All beans participating in a transaction see the same entity instance. A modification of a managed entity is visible to all other participants in the same transaction. This behavior can be achieved without EJB beans, but you get it absolutely for free with EJB beans. Also, this behavior comes without any additional configuration or effort.
Declarative transactions and their propagation are harder to explain than code. In practice, you can solve 80% of all use cases with the defaults, without any additional configuration. For the remaining edge cases, you can either override the behavior with annotations or XML configuration, or you can use Bean Managed Transactions (BMT). BMT allows you far finer control, but it requires you also to write some infrastructure code. In the real world, you will rarely need the BMT option.
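A sketch of what the BMT option looks like; the bean is invented, and real code would handle the individual checked exceptions that begin(), commit(), and rollback() can throw.

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.ejb.TransactionManagement;
import javax.ejb.TransactionManagementType;
import javax.transaction.UserTransaction;

@Stateless
@TransactionManagement(TransactionManagementType.BEAN)
public class ManualCart {

    @Resource
    UserTransaction tx;       // injected by the container

    public void checkout() throws Exception {
        tx.begin();           // you decide where the transaction starts...
        try {
            // ...place the order, send the notification...
            tx.commit();      // ...and where it ends
        } catch (Exception e) {
            tx.rollback();
            throw e;
        }
    }
}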
Concurrency management makes the use of EJB 3 even more interesting. The container manages threads for you and ensures that only one thread (the request) gets access to a bean instance at a given time. Nonactive instances are pooled and reused when needed. The number of threads and bean instances is configurable, so you can prevent "denial of service" attacks just by limiting the number of threads or instances.
Developers don't really bother with monitoring and management in the first iterations. For debugging and stress testing, however, Java Management Extensions (JMX) is an invaluable resource. Since J2EE 1.4 and the introduction of the JSR-77 specification (J2EE Management) in 2004, all Java EE resources, and thus enterprise beans, must be exposed by the application server to JMX. After connecting with standard Java Development Kit (JDK) tools, such as VisualVM or JConsole, you can monitor the number of beans created, the number of successful transactions, the slowest methods, the fastest methods, average time, and so on. You get this monitoring "for free" without any configuration or coding.
Transparent concurrency, declarative transactions, and monitoring with the POJO programming model greatly simplify development, and not only of enterprise applications. Java EE 6 is actually now so lean and simple that it has become interesting for really small projects and "situational" software. You can easily write a full-stack "proof of concept" application in less than an hour.
Injecting EJB beans is not very powerful. Furthermore, EJB beans cannot be directly exposed to JSF or JSP without a little help from CDI. In these areas, CDI (JSR-299) together with Dependency Injection for Java (JSR-330, led by SpringSource and Bob Lee of Guice fame) really shines. JSR-299 with JSR-330 should be considered as a single unit; in the real world, CDI just relies on the JSR-330 annotations. In this analogy, JSR-330 could be considered the Java Database Connectivity (JDBC) API, whereas CDI is comparable with JPA. In the real world, the distinction between the two specifications doesn't really matter, and for developers, the distinction is almost transparent.
On the other hand, CDI doesn't provide any transactional, monitoring, or concurrency aspect out of the box. Transactions or JMX monitoring could easily be implemented with interceptors or decorators, but Java EE 6 does not provide them for CDI beans out of the box. CDI beans are not "active," as EJB 3 beans are. Managed CDI beans are usually executed in the context of JSF, Java API for RESTful Web Services (JAX-RS), or EJB 3.
In Java EE 6, CDI is the natural complement of EJB 3.1. For service-driven architectures, a stateless EJB 3.1 bean as boundary (Facade) with injected managed beans (controls) results in the simplest possible architecture.
The OrderSystem and CustomerNotification beans can be transformed easily to CDI managed beans. The @EJB annotation is removed and @Inject is used instead.
@Named
@Stateless
public class Cart {

    @Inject OrderSystem ordering;
    @Inject CustomerNotification notifier;

    public void checkout() {
        ordering.placeOrder();
        notifier.sendNotification();
    }
}
After the EJB 3-to-CDI migration, all the additional CDI power, such as extended dependency injection, events, and stereotypes, can be used with EJB beans.
Annotating the boundary (Cart) with the @Named annotation makes the Cart immediately visible for expression language (EL) expressions in JSP and JSF. The @Named annotation takes the simple name of the annotated class, puts the first character in lowercase, and exposes the class directly to the JSF pages (or JSP). The Cart bean can be accessed directly, without any backing or managed beans, by the JSF pages:
<h:commandButton value="checkout" action="#{cart.checkout}"/>
As with EJB 3, CDI relies on the Convention over Configuration principle and doesn't require any further configuration. If there is only one possibility, it will be injected.
Any ambiguity results in meaningful deployment errors, such as WELD-001408 Injection point has unsatisfied dependencies. Injection point: ... So, there are no more NullPointerException messages at run time on a bogus injection.
CDI also doesn't care whether the injected artifact is an interface or a class—as long as the type matches, the instance gets injected.
So you can start "small" just by directly injecting a concrete class. If there is a need for abstraction, the class can be turned into an interface (or abstract class). Each implementation of the desired functionality can be pushed into an independent interface realization. The CustomerNotification class can be converted easily into an interface:
public interface CustomerNotification {
    public void sendNotification();
}

Here is a remote implementation:

public class RemoteCustomerNotification implements CustomerNotification {
    //JMS resources injection
    @Override
    public void sendNotification() {
        //sending via JMS
        System.out.println("Remote event distribution!");
    }
}
And here is a local implementation (with CDI events discussed later in this article):
public class LocalCustomerNotification implements CustomerNotification {

    @Inject Event<String> event;

    @Override
    public void sendNotification() {
        event.fire("Order proceeded!");
    }
}
The injection of the CustomerNotification interface and the introduction of two implementations (and, therefore, injection opportunities) breaks the Convention over Configuration approach. You must now configure which implementation is going to be injected. In EJB 3.X, you could qualify the injection point, for example, @EJB(beanName="remote"), with the bean name as a String. Although it works well, Strings are brittle and can be misspelled easily.
In CDI, you could also use Strings for the qualification with the @Named annotation. A better choice is the use of custom annotations:
@Qualifier
@Retention(RUNTIME)
@Target({FIELD, TYPE})
public @interface Notification {
    enum Delivery { LOCAL, REMOTE }
    Delivery value();
}
The qualifier annotation is simple. The only requirement is the existence of the @Qualifier meta-annotation and @Retention(RUNTIME). The definition of attributes inside the annotation is optional, but it provides even more power.
Now it is sufficient to annotate the injection point:
@Named
@Stateless
public class Cart {

    @Inject
    @Notification(Notification.Delivery.LOCAL)
    CustomerNotification notifier;
}
Plus, annotate the implementation with the same qualifier annotation:
@Notification(Notification.Delivery.LOCAL)
public class LocalCustomerNotification implements CustomerNotification
If both match, the container injects the implementation. If there is no match, or there are too many possibilities, an exception is thrown. The annotation-driven approach is type-safe; misspelling results in compiler, not deployment, errors.
Annotations are "just" Java Platform, Standard Edition (Java SE), so you get optimal IDE support, such as auto-completion, refactoring, and so on. To configure the injection, you need to annotate the implementation of your choice as well as the injection point.
To change the injected class, however, you need to recompile your code. This requirement is more of a benefit than a disadvantage, but for a few situations (such as operational or political requirements), external XML configuration might be more appropriate.
XML configuration is also possible with CDI, although the solution is somewhat interesting. Instead of configuring the injection point, you deactivate all implementations of the interface with the @Alternative annotation. Doing this breaks the injection, because there is no implementation available for injection. With a few lines of XML in beans.xml, you can reactivate the bean of your choice.
<beans>
    <alternatives>
        <class>com.abien.ejbandcdi.control.RemoteCustomerNotification</class>
    </alternatives>
</beans>
The activation of the managed beans in beans.xml fixes the problem: a single implementation of the interface becomes available for injection again.
The injected instance of javax.enterprise.event.Event belongs to the CDI implementation. The Event class can be considered a lightweight alternative to the java.beans.PropertyChangeSupport class. The event can be distributed with the invocation of the fire method. More precisely, there is no real event, just the payload:
@Inject Event<String> event;

@Override
public void sendNotification() {
    event.fire("Order proceeded!");
}
The event can be received by any managed bean and also by EJB beans. You need only provide a method with a single @Observes-annotated parameter.
@Stateless
public class OrderListener {
    public void onOrder(@Observes String event) {
        System.out.println("--: " + event);
    }
}
If the type of the fired payload matches the annotated parameter, the event gets delivered; otherwise, it is ignored. Additional qualification with annotations is possible and works exactly as described for dependency injection.
The during attribute in the @Observes annotation allows you to select in which transactional phase the event gets delivered. The default setting is IN_PROGRESS, which causes immediate event delivery regardless of the transaction outcome. The AFTER_SUCCESS configuration causes the delivery to occur only after successful transaction completion:
public void onOrder(@Observes(during=TransactionPhase.AFTER_SUCCESS) String event)
With the transaction-dependent event delivery, you can easily log successful or rolled-back transactions, or you can implement a batch-processing monitor. CDI events are a publish-subscribe implementation with transaction support, so they are a full-featured JMS replacement for local events. Although CDI events work only inside a single process (in the default case, CDI is extensible), they are perfectly suitable for decoupling packages from modules. The only dependency between the publisher and subscriber is the event itself and, optionally, the qualifier annotation.
In this article, we've introduced only excerpts of dependency injection, and events are a small portion of the whole CDI specification. Thankfully, you don't need to learn the whole specification to use basic injection. CDI is scalable—you can start small and, if needed, even swap the implementation using the official Service Provider Interface (SPI). Then, the sky is the limit.
The EJB 3.1 specification provides you with cron-like timer services, support for asynchronous processing (the @Asynchronous annotation), JMX monitoring, declarative transactions, and a thread-safe programming model. These cross-cutting concerns are not covered by CDI and come free, without any programming effort, with EJB 3.1. On the other hand, type-safe dependency injection, events, and support for custom interceptors and decorators are covered much better by CDI.
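A sketch of the first two of those EJB 3.1 aspects; the bean and the schedule values are invented for illustration.

import javax.ejb.Asynchronous;
import javax.ejb.Schedule;
import javax.ejb.Stateless;

@Stateless
public class Housekeeping {

    // cron-like timer service: runs every night at 2:00, declaratively.
    @Schedule(hour = "2", minute = "0")
    public void nightlyCleanup() {
        // ...
    }

    // Returns to the caller immediately; the container executes the
    // body in another thread.
    @Asynchronous
    public void reindex() {
        // ...
    }
}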
In Java EE 6, the "EJB 3.1 with CDI" combination is the perfect strategy. EJB 3.1 provides the built-in cross-cutting aspects and CDI provides the additional DI power. With both specifications, your code becomes so simple that it is hardly possible to remove anything. You could implement all aspects without CDI, but then you would need to maintain them yourself, because they are not part of the Java EE specification. This approach would be hard to justify in real-world projects; both CDI and EJB 3.1 are part of the Web profile.
However, this situation could change in future Java EE releases. Transactions, monitoring, asynchronous processing, and so on could be factored out of the EJB 3.1 specification and pushed into portable CDI extensions.
In fact, CDI plus EJB 3 is even lighter than POJOs—just try to implement the same functionality with POJOs or any other framework. You will end up having either more code or more XML. In our case, only a single beans.xml file with the content <beans></beans> is required.
Because CDI and EJB 3.1 are part of Java EE 6, your WAR file contains only the business logic without any additional library or framework. The size of the deployable WAR file containing the sample introduced here is 16 KB. The whole project (source files, binaries, build scripts, and so on) is 242 KB (see the EJBAndCDI project Web page). An average deployment takes approximately 500 ms.
Consultant and author Adam Bien is an Expert Group member for the Java EE 6, EJB 3.1, and JPA 2.0 JSRs. He has worked with Java technology since JDK 1.0 and with servlets/EJB 1.0 in several large-scale projects, and he is now an architect and developer for Java SE, Java EE, and JavaFX projects. He has edited several books about JavaFX, J2EE, and Java EE, and he is the author of Real World Java EE Patterns—Rethinking Best Practices. Adam is also a Java Champion and JavaOne 2009 Rock Star.
|
http://www.oracle.com/technetwork/articles/java/ejb-3-1-175064.html
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
Avoid sys.argv to pass options to py2exe
Many users write a special setup script for py2exe that can simply be run to build the exe, without the need to specify command line options by hand each time. That means that options such as "includes" and "excludes" have to be passed to py2exe in some way.
The setup() function accepts an options keyword argument containing a dictionary with the options. This is superior to appending options to sys.argv, as the transformation of the data to a string and back is avoided, and multiple setup() calls per file become possible.
Note that the long name of options has to be used and '-' in the command line options become '_' in the dictionary (e.g. "dist-dir" becomes "dist_dir").
opts = {
    "py2exe": {
        "includes": "mod1, mod2",
        "dist_dir": "bin",
    }
}
And pass it to the setup script:
setup( options = opts, ... )
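Putting the two pieces together, a complete minimal setup script might look like this (myscript.py is a placeholder for your own script):

# setup.py -- build the exe with:  python setup.py py2exe
from distutils.core import setup
import py2exe  # registers the "py2exe" command with distutils

opts = {
    "py2exe": {
        "includes": "mod1, mod2",
        "dist_dir": "bin",
    }
}

setup(
    console=["myscript.py"],  # placeholder main script
    options=opts,
)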
Avoid using setup parameters that are py2exe-specific
Instead of passing options like console, windows to setup(), you can subclass Distribution and initialize them as follows:
from distutils.core import Distribution, setup

class MyDistribution(Distribution):
    def __init__(self, attrs):
        self.com_server = []
        self.services = []
        self.windows = []
        self.console = ['myapp']
        self.zipfile = 'mylibrary.zip'
        Distribution.__init__(self, attrs)

setup(distclass=MyDistribution)
|
http://www.py2exe.org/index.cgi/PassingOptionsToPy2Exe?action=diff
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
in reply to
Re^5: Writing a better Modern::Perl
in thread Writing a better Modern::Perl
Let's ignore Moose for the time being... Why do you choose not to use autodie, or indirect, or namespace::autoclean, or even mro "c3"? Is it because you don't understand these modules, or that you prefer the behavior without them, or that you just don't care (and should therefore care little if the behavior is changed)?
What is "diverse code"?
Different code has different requirements. Where I may use autodie for a run-once program, I'm less likely to have a need for autodie for a module that sits between a webapp and a database. Whatever you come up with nextgen (or anyone else with a similar module), even if it's ideal for some of the code I write, it cannot be ideal for the majority of the code I write.
As I said before, the only modules/pragmas that are used in most of my modules are 'use strict; use warnings; use 5.XXX;'. There's nothing else that I use often enough that I want it loaded by default.
There's nothing else that I use often enough that I want it loaded by default.
I agree, and that's why I've been reluctant to add anything but autodie to Modern::Perl. My current approach is "it enables features of the core that should be on by default and nothing else."
|
http://www.perlmonks.org/?node_id=864065
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
On Tue, Sep 08, 2009 at 05:35:06PM +0200, Gabor Gombas wrote: > On Tue, Sep 08, 2009 at 04:35:42PM +0200, Fabian Greffrath wrote: > > > With the namespace issue fixed and a blacklist to avoid mounting > > partitions in a virtualization environment, would it make sense to > > make grub-pc recommend (or even depend on) os-prober again? > > The problem is not just virtualization but also exporting the block > device over the network. E.g. vblade does not open the device with > O_EXCL, so it is possible to mount it locally while some remote client > also have it mounted, resulting in data corruption. I think the best thing to do would be to use some kind of COW scheme on the device before mounting it. Setting up a device mapper snapshot, backed by a sparse file in a tmpfs is probably a good though hackish solution. I can give some help if needed. Mike
|
http://lists.debian.org/debian-devel/2009/09/msg00371.html
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
Define file groups for screening
Updated: August 22, 2005
Applies To: Windows Server 2003 R2
A file group is used to define a namespace for a file screen, file screen exception, or storage report. It consists of a set of file name patterns, which are grouped into files to include and files to exclude:
- Files to include: files that belong in the group.
- Files to exclude: files that do not belong in the group.
To create a file group
- In File Screen Management, right-click File Groups, and then click Create file group.
-Or-
While you edit the properties of a file screen, file screen exception, or file screen template, under Manage file groups, click Create. This opens the Create File Group Properties dialog box.
- Type a name for the file group.
- Add files to include and files to exclude:
- For each set of files that you want to include in the file group, in the Files to include box, enter a file name pattern, and click Add.
Standard wildcard rules apply. For example, *.exe selects all executable files.
- For each set of files that you want to exclude from the file group, in Files to exclude, add exclusions in the same way.
- Click OK.
|
http://technet.microsoft.com/en-us/library/cc755773
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
XmlValidatingReader Class
Represents a reader that provides DTD, XML-Data Reduced (XDR) schema, and XML Schema definition language (XSD) schema validation.
For a list of all members of this type, see XmlValidatingReader Members.
System.Object
System.Xml.XmlReader
System.Xml.XmlValidatingReader
[Visual Basic]
Public Class XmlValidatingReader
   Inherits XmlReader
   Implements IXmlLineInfo

[C#]
public class XmlValidatingReader : XmlReader, IXmlLineInfo

[C++]
public __gc class XmlValidatingReader : public XmlReader, IXmlLineInfo

[JScript]
public class XmlValidatingReader extends XmlReader implements IXmlLineInfo
Thread Safety
Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
Remarks
XmlValidatingReader implements the XmlReader class and provides support for data validation. Use the Schemas property to have the reader validate using schema files cached in an XmlSchemaCollection. The ValidationType property specifies what type of validation the reader should perform. Setting the property to ValidationType.None creates a non-validating reader.
If you do not need data validation, the ability to resolve general entities, or support for default attributes, use XmlTextReader.
To read XML data from an XmlNode, use XmlNodeReader.
Notes to Inheritors: This class has an inheritance demand. Full trust is required to inherit from XmlValidatingReader. See Inheritance Demands for more information.
Requirements
Namespace: System.Xml
Platforms: Windows 98, Windows NT 4.0, Windows Millennium Edition, Windows 2000, Windows XP Home Edition, Windows XP Professional, Windows Server 2003 family
Assembly: System.Xml (in System.Xml.dll)
See Also
XmlValidatingReader Members | System.Xml Namespace
|
http://msdn.microsoft.com/en-us/library/8x5ffbck(v=vs.71).aspx
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
Scala lets you mix in different related traits.
By “non-functional,” I mean of course the programmer who is not looking to jump into the world of functional programming. ;-)
Scala is often portrayed as a solution for the multi-core scalability problems that come with Java’s approach to multithreading, but it offers so much more.
The Usual Reason to Consider Scala
Java multithreading requires you to protect any variables that might be accessed from multiple threads. This is often referred to as “synchronize and suffer” because you must manually add synchronization to protect your variables—and then suffer because it’s so easy to get it wrong. And there are lots of ways to get it wrong:
assuming you only have to synchronize setters and ignoring getters
forgetting that synchronized methods exhibit fairness while synchronized blocks do not
forgetting to protect related variables, exposing the code to data corruption and/or deadlock
Scala has certainly made a name for itself in offering ways to avoid these problems via immutable variables, functional programming, actors, and libraries like Akka. But there is a whole other side to the language. So even if you’re not ready to drink the mutability-is-bad Koolaid, Scala still has lots to offer.
This article discusses one of those “non-functional” features: traits.
The Java [Anti]Pattern
One of the common development patterns in Java is to create an interface, then create an abstract base class that implements that interface and provides some base implementation, and then create several concrete classes derived from the base class. Some might call this an antipattern because of the number of files it creates. On the other hand, Java doesn’t give you much choice. Interfaces cannot have implementations, so there is no place to put common method instantiations except in classes.
Listing 1: The Java [Anti]Pattern
Traits
Scala, on the other hand, offers traits, which are like interfaces with a little implementation thrown in. (A trait can also be like a mini-strategy pattern, as you can mix in one of several different related traits depending on your need.) A class can use a single trait, multiple traits, and/or can extend another class.
Here’s all the syntax you should need to be able to read the Scala in this article:
def starts a method,
variables are started with var or val,
variables are defined with [name] [type] rather than Java style [type] [name], and
semicolons are not required.
That covered, let’s look at some Scala:
Scala does not have an interface keyword, so traits do double duty. An abstract method in a trait is just like a Java interface method in that all classes using the trait must define it. Trait methods with an implementation do not need to be implemented by classes using them.
This may seem like a small saving but it’s just the tip of the iceberg with traits.
While Java interfaces are applied on an all-or-nothing basis—your class either implements the interface or it does not—Scala lets you use traits on a per-instance basis. So you can say "this instance of class Foo that normally does not extend trait Bar does in fact extend it!"
MySpecialContent defined here is an instance of the Content class, but for this particular instance object we mix in the OtherTrait:
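// A sketch: assumes a concrete Content class and an OtherTrait trait exist.
// This one instance is a Content that also carries OtherTrait's methods.
val mySpecialContent = new Content with OtherTrait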
This capability may not be universally viewed as a Good Thing by all Java programmers, as it is a step towards dynamic languages, where classes can be modified after the fact. While it is true that traits and open classes are different language features, they do accomplish the same thing: they change the behavior of an existing class, and that is something odd to many traditional Java programmers.
While many view open classes as a boon it is also true that they add complexity. An instance of the class created without the trait can be assigned an instance of the class with the trait, but not the reverse. On the other hand, you still have the option to mark a class as final which prevents instance-based trait mixing. On the other other hand, marking a class as final also means you cannot derive other classes from it, so it’s at best an imperfect solution.
Given all this, why would you want to use instance-specific trait mix-ins?
There’s a Pattern for That
Think strategy pattern. The strategy pattern says that a class performs a type of action where there can be several related implementations of that action.
An example from my field of Video On Demand might be obtainTheContent. We might have a class called Movie that corresponds to playing a movie on a customer device. In order to do so it would presumably have to get the bits to play; i.e., it would obtainTheContent. It might do that differently if the customer device was a plasma TV, a tablet, or a smartphone.
In Java we might declare an obtainTheContent method that accepted a device type parameter and performed different actions based on it. Or, we could declare a set of related classes that each implemented an obtainTheContent interface. We could then instantiate the appropriate class and call the method on it. Or we could use Spring (or Guice) and declare the interface as a required bean and create the correct type on the fly.
In Scala we could simply say something like this (a sketch, assuming Movie is a class and TabletContent one of the content traits):
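// movie is a Movie that obtains its content the tablet way
val movie = new Movie with TabletContent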
This would create a movie that exposes methods from the TabletContent trait. We could also create an object new Movie with BitPlasmaContent.
I hope this brief exposure to one of the non-functional aspects of Scala has piqued your curiosity. To learn more you can explore Programming Scala by Venkat Subramaniam or Programming in Scala by Martin Odersky, Lex Spoon, and Bill Venners, all of whom know far more about Scala than I ever will.
|
http://pragprog.com/magazines/2011-09/scala-traits
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
The bind method from the widget command allows you to watch for certain events and to have a callback function trigger when that event type occurs. The form of the bind method is:
def bind(self, sequence, func, add=''):
For example:
def turnRed(self, event):
    event.widget["activeforeground"] = "red"

self.button.bind("<Enter>", self.turnRed)
Notice how the widget field of the event is being accessed in the turnRed() callback. This field contains the widget that caught the X event. The following table lists the other event fields you can access, and how they are denoted in Tk, which can be useful when referring to the Tk man pages.
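For example, the pointer coordinates are read the same way; a minimal sketch, assuming the same self.button as above:

def report(self, event):
    # x and y are the pointer coordinates relative to the widget
    print event.x, event.y, event.widget

self.button.bind("<Motion>", self.report)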
|
http://docs.python.org/release/2.3.3/lib/node644.html
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
My ten year old has learnt quite a lot of java and wants a few BEGINNER projects. Any ideas?
Okay...
Get your (son/daughter?) to write a control component.
It should have the following buttons; Stop, Rewind (or Previous), Play/Pause, Fast Forward (or Next), Record.
It should have a display panel above the buttons to display time.
There should be a toggle to display either Time Elapsed, or Time Remaining.
The reason that I suggested this project is...
I plan on doing the same thing on my vacation (in two weeks).
I am willing (after my vacation, I'm not going to steal their work) to critique the work.
Son.
Thanks, but that is far too advanced for him. He knows all of this:-
Objects
Instance variables
Bit of swing
File io
Threads and networking(well, a little)
and more...
He DOES NOT KNOW user io.
user io is a lot easier to learn/code than File io
just let him take a look at the Scanner class and JOptionPane.
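For instance, a first user-io program can be tiny (a sketch, not from the thread):

import java.util.Scanner;
import javax.swing.JOptionPane;

public class UserIoDemo {
    public static void main(String[] args) {
        Scanner in = new Scanner(System.in);
        System.out.print("What's your name? ");
        String name = in.nextLine();                        // console input
        JOptionPane.showMessageDialog(null, "Hi " + name);  // GUI output
    }
}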
I know this sounds antiquated, but I learned a lot by writing a text adventure game -- like a Scott Adams text adventure game back in the early 1980s.
Something like that would expand his knowledge of user interaction and could be great for learning code re-use.
It could be done on the console or graphically or in an applet like at the link above.
I know this sounds antiquated
why would this sound antiquated?
no matter in what century they were born, all Olympic medal sprinters learned to crawl, then walk.
everyone has to get to know the basics before going 'expert', so it's good advice that'll probably stay just that (good and solid advice) as long as people try to start learning Java coding.
Tic-tac-toe - some Swing, some arrays, some logic... but nothing too hard.
I'd personally recommend a file/directory copy utility. This way, he'll get more acquainted with how the CLI clients work and get a taste of implementing a fun utility from scratch. The good thing is that this project can be beefed up if required by adding more options like the way a real copy command works. Bonus points for progress bar etc.
I gave him these ideas and here is his response.
7:30 pm : "Yay! Learnt Scanner!"
7:45 pm : Checked out the Scott Adams game
8:00 pm : Scott Adams--Hmm... how could I do that?
8:15 pm : tic-tac-toe:"EUREKA!!!! JLabels..."
8:30 pm : file copy-way too complicated
?
JPanel myPanel = new JPanel();
onButtonPressed
myPanel = new JPanel();
something like that? seems possible to me, never actually coded it myself, though.
?
Yes. But if you explain why, there may be a better approach (e.g. CardLayout)
no, this is what i meant:-
import java.io.*;
import java.awt.event.*;

public class --- implements Serializable, ActionListener {
    JFrame frame;

    public static void main(String[] args) {
        new ---().go();
    }

    public void go() {
        JPanel panelone = new JPanel();
        frame = new JFrame("Double panel swing test");
        frame.setContentPane(panelone);
        JButton button = new JButton("Click me");
        button.addActionListener(this);
        panelone.add(button);
        // other swing-related code...now for the actionPerformed()
    }

    public void actionPerformed(ActionEvent ev) {
        // all...or...nothing...
        JPanel panel2 = new JPanel();
        frame.setContentPane(panel2); // THIS is what i meant
    }
}
Yes, I understood that. Your skeleton code is OK. But if you want to switch between multiple sets of content in the same frame, there are Swing classes to handle that for you, that's all. It just depends on where you're going next with this...
8:00 pm : Scott Adams--Hmm... how could I do that?
Rock on!
One of the great benefits of something like that is also USER RESPONSE.
People evaluate what the program does rather than how it is written.
I sometimes get more excitement out of the coolness factor of the code more than what the code produces.
Getting feedback from users will serve him well.
A simple hangman game should be easy enough :) Find some simple games which can keep him excited
But if you want to switch between multiple sets of content in the same frame, there are Swing classes to handle that for you
import java.awt.event.*;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JPanel;

public class Jpaneltest implements ActionListener {
    JFrame frame;
    JPanel panelone;

    public static void main(String[] args) {
        new Jpaneltest().go();
    }

    public void go() {
        panelone = new JPanel();
        frame = new JFrame("Double panel swing test");
        frame.setContentPane(panelone);
        JButton button = new JButton("Click me");
        button.addActionListener(this);
        panelone.add(button);
        frame.setVisible(true);
        frame.setSize(300, 300);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
    }

    public void actionPerformed(ActionEvent ev) {
        JPanel pe = new JPanel();
        frame.setContentPane(pe);
    }
}
How?
When I tested my code, all that happened:-
1. Click the button.
2. Went out of the button.
3. It still looks as if it has just been clicked, nothing happens either.
If you want to have multiple JPanels and use a button to show one at a time, then I think you should use CardLayout
Can a CardLayout use a JButton instead of a JComboBox to navigate?
yes, here's a link for an example
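For reference, a minimal sketch of a JButton driving a CardLayout (not the linked example, which is gone):

import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

public class CardDemo {
    public static void main(String[] args) {
        final CardLayout cards = new CardLayout();
        final JPanel deck = new JPanel(cards);
        deck.add(new JLabel("Panel one"), "one");
        deck.add(new JLabel("Panel two"), "two");

        JButton next = new JButton("Next");
        next.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                cards.next(deck); // flip to the next card
            }
        });

        JFrame frame = new JFrame("CardLayout with a JButton");
        frame.add(deck, BorderLayout.CENTER);
        frame.add(next, BorderLayout.SOUTH);
        frame.setSize(300, 200);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}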
|
http://www.daniweb.com/community-center/threads/412452/any-projects-for-the-java-beginner
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
This returns an org.xml.sax.SAXException: No deserializer defined for array type Struct
Any suggestions on how to get this working? Do I need to somehow map the magazine arrays' namespaces?
Thanks,
Ethan
|
http://www.oreillynet.com/cs/user/view/cs_msg/24016
|
CC-MAIN-2013-48
|
en
|
refinedweb
|
I once worked at a company that sells health-care devices, so I have some knowledge of how blood pressure monitors work. In this project I will use a micro pressure sensor to detect Korotkoff sounds, which will help me calculate systolic (maximum) and diastolic (minimum) blood pressure. There is another method, called the oscillometric method, which I will not use because I don't have enough hardware and time to finish it. However, the oscillometric method is based on the Korotkoff sounds, and each manufacturer develops its own mathematical algorithm to find the systolic and diastolic pressures. Reference:
The part of the project made with GNAT Programming Studio is my own work and creativity (source code, schematic diagram and flowchart). For the rest, I used and modified the Arduino demo code from the pressure sensor manufacturer, and the code to send the test data to Excel. Let's get started with the list of necessary hardware.
Korotkoff sounds are the sounds that medical personnel listen for when they are taking blood pressure using a non-invasive procedure. They're named after Dr. Nikolai Korotkoff, a Russian physician who discovered them in 1905, when he was working at the Imperial Medical Academy in St. Petersburg, Russia.
Eventually, as the pressure in the cuff drops further, the sounds change in quality, then become muted, and finally disappear altogether. This occurs because, as the pressure in the cuff drops below the diastolic blood pressure, the artery is no longer compressed and blood flows smoothly again, producing no audible turbulence.
There are five Korotkoff sounds:
- Phase I—The first appearance of faint, repetitive, clear tapping sounds which gradually increase in intensity for at least two consecutive beats is the systolic blood pressure.
- Phase II—A brief period may follow during which the sounds soften and acquire a swishing quality.
- Phase III—The return of sharper sounds, which become crisper to regain, or even exceed, the intensity of phase I sounds.
- Phase IV—The distinct abrupt muffling of sounds, which become soft and blowing in quality.
- Phase V—The point at which all sounds finally disappear completely is the diastolic pressure.
Traditionally, the systolic blood pressure is taken to be the pressure at which the first Korotkoff sound is first heard and the diastolic blood pressure is the pressure at which the fourth Korotkoff sound is just barely audible. However, there has recently been a move towards the use of the fifth Korotkoff sound.
Reference:
The SparkFun Qwiic MicroPressure Sensor is a miniature breakout equipped with Honeywell's 25 psi piezoresistive silicon pressure sensor. The MicroPressure sensor offers a calibrated and compensated pressure sensing range of 60 mbar to 2.5 bar and an easy-to-read 24-bit digital I2C output.
Each Qwiic MicroPressure Sensor has a calibrated pressure sensing range from 1 to 25 psi (52 to 1293 mmHg) and power consumption as low as 0.01 mW (typical average power at a 1 Hz measurement frequency) for ultimate portability. It is used in medical (blood pressure monitoring, negative pressure wound therapy), industrial (air braking systems, gas and water meters), and consumer applications (coffee machines, humidifiers, air beds, washing machines, dishwashers).
Reference:
I bought this sensor two months ago; its libraries are made to work in the Arduino IDE. So I used the demo example and modified it to convert the pressure data into mmHg, remove the atmospheric pressure component, and send the data to the output of DAC_1 (digital-to-analog converter), on pin D25 of the ESP32-WROOM-32 board, every 200 ms.
There are two 8-bit DAC channels in the ESP32 to convert digital values into analog output voltages. A value of 1 corresponds to 11.76 mV, a value of 2 to 23.53 mV, and so on up to a value of 255, which corresponds to 3000 mV (Vout ≈ value × 3000/255 mV). You can download the code in the download section as pressure_sensor.ino.
5. GPS Project
#include<Wire.h>
#include <SparkFun_MicroPressure.h>
SparkFun_MicroPressure mpr; // Use default values with reset and EOC pins unused
#define DAC2 26
float min_pressure = 600;
void setup() {
// Initialize UART, I2C bus, and connect to the micropressure sensor
Serial.begin(9600);
Wire.begin();
if(!mpr.begin())
{
Serial.println("Cannot connect to MicroPressure sensor.");
while(1);
}
}
void loop() {
float sensor_pressure = (mpr.readPressure(INHG)*25.4);
if(sensor_pressure<min_pressure){
Serial.println("new minimum pressure");
min_pressure=sensor_pressure;
}
sensor_pressure = (mpr.readPressure(INHG)*25.4);
int pressure_total = sensor_pressure - min_pressure;
dacWrite(DAC2, pressure_total);
Serial.print(pressure_total);
Serial.println(";");
delay(200);
}
The STM32F429I board was programmed with GNAT Programming Studio 19.1, developed by AdaCore. This part implements the most important control function of the system, which I will detail below. You can get the schematic diagram and the flowchart at the end of this tutorial. As a reference for this project, I've used the following Ada tools:
- 1) examples such as demo_adc_polling, and demo_gpio_direct_leds;
- 2) Ada drivers libraries like STM32.User_Button, STM32.ADC, STM32.GPIO, and LCD_Std_Out; and
- 3) theory such as arrays, and for and while loops.
I've divided the analysis of the digital blood pressure monitor's code into three sections:
- Inflation of the cuff,
- Deflation of the cuff, and
- Calculation of the Korotkoff sounds.
Our schematic diagram is shown below:
- The analog port PA5 of the STM32F429I board is connected to the DAC_1 port of the ESP32-WROOM-32 board, so we will monitor each mmHg of the pressure sensor in real time.
- When we press the user button (blue) that corresponds to port PA0, then the air pump and the pneumatic solenoid valve are turned ON.
- When energized, the solenoid valve closes and doesn't allow any air leakage, so the air pump begins to inflate the cuff, and the pressure sensor tells us how the pressure increases inside the cuff.
- Each increase in pressure within the cuff is printed on the LCD screen of the STM32F429I board.
- Note that before printing the total pressure I subtract eight units, because I found a small error of approximately +8 mmHg in all my readings. I verified this with an aneroid gauge.
- When the pressure inside the cuff reaches 170 mmHg, the air pump is stopped (OFF), and the solenoid valve remains closed.
7. Deflation of the Cuff
if STM32.User_Button.Has_Been_Pressed then -- Btn pressed then go to 170 mmHg
Start_Conversion (Converter);
Poll_For_Status (Converter, Regular_Channel_Conversion_Complete, Successful);
Solenoid_valve.Set; -- solenoid valve is ON
Motor.Set; -- air pump is ON
Enable_a.Set;
Enable_b.Set;
while Pressure_total <= 170 loop
delay until Clock + Milliseconds (75);
end loop;
Solenoid_valve.Set; -- solenoid valve is ON
Motor.Clear; -- air pump is OFF
Enable_a.Set;
Enable_b.Clear;
- Continuing from the previous section: the cuff starts to deflate from 170 mmHg.
- Remember that the solenoid valve is closed, so now the air leaks through the air release valve. The recommendation is that the cuff deflate over 17 to 21 seconds, so we must calibrate the air release valve with a flat screwdriver: turning it clockwise closes the valve, and turning it counterclockwise opens it.
- The next step is to measure pressure values between 70 mmHg and 170 mmHg and save them in the "PressureArray" array approximately every 200 ms.
- I have programmed a maximum value of 210 mmHg in the while loop in case the pressure exceeds the 170 mmHg limit during inflation of the cuff (for example, 180 mmHg).
- Each test stores between 120 and 130 values in the "PressureArray" array.
8. Calculation of the Korotkoff Sounds
while Pressure_total > 70 and Pressure_total <= 210 loop
PressureArray(X_Pos) := Integer (Pressure_total);
X_Pos := X_Pos + 1;
delay until Clock + Milliseconds (190);
end loop;
- We already have all the data stored in the "PressureArray" array. Now our goal is to find the points where the Korotkoff sounds occur.
- I ran a test, sending all the points of my blood-pressure curve to Excel and graphing them to see where the Korotkoff sounds occur, and this is what I saw.
Here you can get a tutorial to send the data to Excel:
And you can download my excel file at this link: Korotkoff_sounds_excel
- In the graph above I found ten Korotkoff sounds. Analyzing them, at every Korotkoff point the following two conditions held: 1) point b is greater than point a, and 2) point c is greater than point a.
- Using a for loop I go through the records of the "PressureArray" array. At each step, I assign the first record to the variable "var_a", the second to "var_b", and the third to "var_c".
- If "var_b" > "var_a" and "var_c" > "var_a", then I print "var_b" as a Korotkoff sound.
- For any other combination I only delay 1 ms and don't print anything.
- According to the theory read at the beginning of this tutorial (sections 2 and 3), systolic pressure is the first Korotkoff sound, and diastolic pressure is the last Korotkoff sound.
9. Assembling the Device
for I in 1 .. 128 loop  -- stop two short of the last index so I+2 stays in range
var_a := UInt32(PressureArray(I));
var_b := UInt32(PressureArray(I+1));
var_c := UInt32(PressureArray(I+2));
if var_b > var_a and var_c > var_a then
Print (0, Integer (inc), var_b, " mmHg-korot");
inc := inc + 25;
delay until Clock + Milliseconds (1);
else
delay until Clock + Milliseconds (1);
end if;
end loop;
The hardware is assembled in a box for easier handling of the device. In the figure below we can see the box, made with a 3D printer; you can get its STL file in the download section.
Next, we join the solenoid valve with the connector through which the cuff can be connected.
Now we glue these pieces with silicone over the box's groove shown in the figure below.
I have used a smaller air pump to best fit in the box. So, we join this air pump to the cuff connector with an air hose as shown in the figure below. Also we join the air release valve to the cuff connector with an air hose.
We connect the micro pressure sensor with the air pump with an air hose. Then we fix the pressure microsensor as shown in the figure below.
The next step is to fix the ESP32-WROOM-32 board as shown in the figure below.
Finally, place the STM32F429I board and the L298N driver as shown in the figure below. All electrical connections are followed as in the schematic diagram.
In the video below I show you my first tests with this health device. In the first part, we see a test where I get six Korotkoff sounds. In the second part I did a close up, so you can see the data on the LCD screen.
To increase the number of Korotkoff sounds detected I did two things:
- The air release valve must be recalibrated; I slowed the deflation to give the device more time to detect Korotkoff sounds (reference: second point of section 7); and
- Foam rubber must be added to the air pump to reduce device vibrations, because these vibrations induce noise in other devices.
I got ten Korotkoff sounds in this test, and we can see them in the figure below:
Korotkoff sounds:
- 139 mmHg
- 130 mmHg
- 127 mmHg
- 126 mmHg
- 116 mmHg
- 111 mmHg
- 109 mmHg
- 106 mmHg
- 99 mmHg
- 81 mmHg
Finally, I checked my blood pressure with an OMRON device and compared it to the second test.
Notes:
- At that time I was suffering from a slight flu, so the values can be considered justified.
- The OMRON digital pressure gauge detects Korotkoff sounds while the cuff is inflated. My design detects Korotkoff sounds while the cuff deflates.
DATA COMPARISONS:
- My device: according to the theory seen in sections 2 and 3, my systolic pressure is 139 mmHg, and my diastolic pressure is 99 mmHg (or 81 mmHg).
- The OMRON device: it tells me that my systolic pressure is 137 mmHg, and my diastolic pressure is 95 mmHg.
In developing this project I learned that it's necessary to verify and analyze each step, e.g.:
- Pressure measurements of this device were compared with an aneroid manometer.
- I had to calibrate the air release valve several times.
- This model can be useful for a doctor or nurse with hearing loss, for whom it would be impossible to detect the Korotkoff sounds by ear.
- In the final tests I feel satisfied with the values measured by my device when I compare it with the OMRON device.
This project is complex, so I suggest the following challenges to improve it:
- Write libraries for the pressure sensor so it can connect directly to the STM32F429I board;
- Another option to the previous point would be to communicate the ESP32-WROOM-32 and STM32F429I boards through the serial port;
- Calculate the heart rate from the Korotkoff sounds, and display the heart rate with the systolic and diastolic pressure on the screen;
- Develop the prototype with the oscillometric method. To achieve this, it's necessary to test with analog and/or digital bandpass filters. At the end you have to make an algorithm to combine the oscillometric method and the Korotkoff sounds.
This is how a digital blood pressure monitor is manufactured.
|
https://www.hackster.io/guillengap/digital-blood-pressure-monitor-bf8a32
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
README
redux-saga-oauth
👮 An OAuth module for Redux Saga powered applications
What does it do?
Redux Saga OAuth provides a reducer and a saga to handle authentication within any JavaScript application that uses Redux and Redux Saga.
Key features
- Has Flow support for easier integration
- Handles all HTTP requests with axios
- Allows for any grant types and extra data to be passed during login
- Automatically handles token refresh following the standard flow
- Handles failures during the refresh flow and will retry until it succeeds
- Allows the refresh token to be expired on the server and log users out
Getting started
Install
You can install this via yarn or npm; however, yarn is the preferred method.
yarn add @simpleweb/redux-saga-oauth
npm install --save @simpleweb/redux-saga-oauth
It also has a peer dependency of redux-saga; please make sure this is installed beforehand.
Usage
Add the provided reducer to your store
Within your existing Redux store, bring in the provided reducer. Its key (auth in the example below) can be customised to anything you like.

import { Reducer } from "@simpleweb/redux-saga-oauth";

const store = createStore(
  combineReducers({
    auth: Reducer,
  })
);
Add the provided saga to your root saga
Create the provided auth saga and add it to your root saga. These are the required options you must pass. The reducerKey should match the key from the step above.

import { createOAuthSaga } from "@simpleweb/redux-saga-oauth";

const authSaga = createOAuthSaga({
  reducerKey: "auth",
  OAUTH_URL: "",
  OAUTH_CLIENT_ID: "<CLIENT ID>",
  OAUTH_CLIENT_SECRET: "<CLIENT SECRET>",
});

const sagas = function* rootSaga() {
  yield all([
    fork(authSaga),
  ]);
}
Login and logout
To login, simply import the provided actions, pass through your API’s corresponding credentials and dispatch the action.
import { login, logout } from "@simpleweb/redux-saga-oauth";

const params = {
  username: "ben@simpleweb.co.uk",
  password: "mysecurepassword",
  grant_type: "password",
};

store.dispatch(login(params));
store.dispatch(logout());
Actions
The module does expose all its internal Redux actions and constants should you need them. They are exposed like so.

import { Actions } from "@simpleweb/redux-saga-oauth";

Actions.authLoginRequest()
Actions.AUTH_LOGIN_REQUEST
Authenticated requests
This is something you will want to do once you have the basics working. While the code is not directly provided by the module, it's worth adding a helper function like the following to your own codebase to make authenticated requests using the access token in the store.
Please note that imports are missing from the code below.
App/Sagas/AuthenticatedRequest.js

// Custom error type to be thrown from this saga
// e.g. throw new AuthenticationSagaError('Some message');
function AuthenticationSagaError(message) {
  this.message = message;
  this.name = "AuthenticationSagaError";
}

// Helper function to get the authentication state from the store
// the "authentication" key will be unique to your code
const getAuthentication = state => state.auth;

// Helper function to check if the token has expired
export const tokenHasExpired = ({ expiresIn, createdAt }) => {
  const MILLISECONDS_IN_MINUTE = 1000 * 60;

  // Set refreshBuffer to 10 minutes
  // so the token is refreshed before expiry
  const refreshBuffer = MILLISECONDS_IN_MINUTE * 10;

  // Expiry time
  // multiplied by 1000 as server times are returned in seconds, not milliseconds
  const expiresAt = new Date((createdAt + expiresIn) * 1000).getTime();

  // The current time
  const now = new Date().getTime();

  // When we want the token to be refreshed
  const refreshAt = expiresAt - refreshBuffer;

  return now >= refreshAt;
};

// Helper function to get the access token from the store
// if the token has expired, it will wait until the token has been refreshed
// or an authentication invalid error is thrown
function* getAccessToken() {
  const authentication = yield select(getAuthentication);

  // If the token has expired, wait for the refresh action
  if (
    tokenHasExpired({
      expiresIn: authentication.expires_in,
      createdAt: authentication.created_at,
    })
  ) {
    yield race({
      refreshError: take(AUTH_INVALID_ERROR),
      tokenRefreshed: take(AUTH_REFRESH_SUCCESS),
    });
  }

  // Return the latest access token
  const latestAuthentication = yield select(getAuthentication);
  return latestAuthentication.access_token;
}

// Finally the function you’ll use inside your sagas to make requests
export default function* AuthenticatedRequest(...args) {
  // Get the current access token, wait for it if it needs refreshing
  const accessToken = yield getAccessToken();

  if (accessToken) {
    const config = {
      headers: {
        Authorization: `Bearer ${accessToken}`,
      },
    };

    try {
      return yield call(...args, config);
    } catch (error) {
      if (error.response && error.response.status === 401) {
        yield put(authInvalidError(error.response));
        throw new AuthenticationSagaError("Unauthorized");
      } else {
        throw error;
      }
    }
  } else {
    throw new AuthenticationSagaError("No access token");
  }
}
Usage
The AuthenticatedRequest function simply wraps your normal API calls so additional headers can be passed down to add in the access token.

import axios from "axios";
import AuthenticatedRequest from "App/Sagas/AuthenticatedRequest";

function* MakeRequest() {
  try {
    const response = yield AuthenticatedRequest(axios.get, "/user");
  } catch (error) {
  }
}
Development
You can test this locally by installing its dependencies and linking it as a local module.

git clone git@github.com:simpleweb/redux-saga-oauth.git
cd redux-saga-oauth
yarn && yarn link
Deployment
Increment the version inside of the package.json and create a commit stating a new version has been created, e.g. "🚀 Released 1.0.0".
On GitHub, draft a new release, set the version and release title to "vX.X.X" (the version number that you want to release) and add a description of the new release.
Now run yarn publish --access=public to deploy the code to npm.
TL;DR
|
https://www.skypack.dev/view/@simpleweb-keycloak/redux-saga-oauth
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Provided by: manpages-dev_4.04-2_all
credentials(7), user_namespaces(7)
COLOPHON
This page is part of release 4.04 of the Linux man-pages project. A description of the project, information about reporting bugs, and the latest version of this page, can be found at.
|
https://manpages.ubuntu.com/manpages/xenial/man2/setegid.2.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Origins
Silver can track the origins of nonterminals constructed in programs. This is implemented following the paper Origin Tracking in Attribute Grammars by Kevin Williams and Eric Van Wyk. More simply: each node (instance of a nonterminal type) gains an additional piece of information called its origin, which is a reference to the node that it 'came from.' It may also have a similar reference called the redex to a node that catalyzed the motion of the node from one place in the tree to another. It also gains a marker called 'er' that indicates if the transformation that produced it was trivial or not, and gains a set of 'notes' that describe the transformation that produced it.
When a node is constructed, its origin is set to the node on which the rule that constructed it was evaluated. For example, if a node representing an expression has a rule for an attribute that constructs an expanded version of it, all of the nodes newly constructed in that rule gain the original node as their origin. When an attribute is defined to be an attribute of a child, the value assigned to the attribute is a copy of that child with its redex set to the node on which that rule occurs. The redex then represents the node that catalyzed the movement of the child to the parent's position in the resulting value.
OK. Here is the whirlwind porting guide:
- Mark everything that has a location annotation as tracked
- Get rid of the location annotation and associated swizzling 🎉🎉🎉
- Instead of using top.location for error messages, raise errors/etc. with errFromOrigin(top, ...)/etc.
- Start building your project with --no-redex instead of --no-origins (if you were) and build with --clean at least once
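As a concrete before-and-after sketch (a hypothetical production; error-list plumbing elided):

-- Before: location annotation threaded by hand
nonterminal Expr with location;

abstract production divide
top::Expr ::= l::Expr r::Expr
{
  top.errors := [err(top.location, "possible division by zero")];
}

-- After: mark the nonterminal tracked and let origins supply the location
tracked nonterminal Expr;

abstract production divide
top::Expr ::= l::Expr r::Expr
{
  top.errors := [errFromOrigin(top, "possible division by zero")];
}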
Another way to think about adding origin tracking, if all you care about is replacing location, is that the entire codebase gets an implicit Location argument that is handled by the runtime.
This implicit argument always refers to the location of the nonterminal the rule currently executing was defined on (so in functions it refers to the location of the nonterminal that invoked the function.)
The implicit location argument can then be altered by way of attachNote logicalLocationNote(loc);, which sets the implicit location argument to loc for the entire block of declarations it occurs on, or attachNote logicalLocationNote(loc) on {expr}, which sets it only for the context of expr.
It's more complicated + powerful + cooler than that, but that mental model will totally work for ridding yourself of location :)
In cases where the swizzling was not just location=top.location you can add an attachNote logicalLocationNote(loc); statement, getting loc from getParsedOriginLocationOrFallback(node). This statement means that when a node constructed in the body alongside that statement is traced back to a textual location, that location will be used instead of the textual location of the node on which the rule was defined.
What do you want to mark tracked? Maybe more than just what had a location.
Origin tracking can also replace manually tracking the source location that definition nonterminals in environments need to keep track of. Children in definition nonterminals representing definition location, and attributes holding the same, can be removed. Marking the definition nonterminal tracked will (usually) do the same (assuming it is constructed in a rule on the node it originates from - if not, use logicalLocationNote to adjust it.)
How about cases where a Location is passed into a function? It can (almost always) be removed. Generally, instead of taking a location in helpers, one can use the implicit origin information that flows into functions. Origins from the call site of the function apply to values constructed within. If the function was called with top.location and that value was used to construct new nodes, the observable behavior in tracking the source location will be the exact same. If something other than top.location was used, a logicalLocationNote can be used to adjust the origin information at the call site of the function.
If a Location is passed with the express purpose of raising an error, it can be removed as well. Either use errFromOrigin with one of the arguments as the origin, or use errFromOrigin(ambientOrigin(), ...) to raise the error using the origin information flowing into the function. (The source location can also be derived using getParsedOriginLocationOrFallback(ambientOrigin()) if that is needed for e.g. an error message.)
What about cases where a lambda takes a Location? This comes up in code where (for example) Exprs define an attribute that is a lambda for how to do some manipulation of them (e.g. take the address). Imagine we have an Expr type with an attribute addressOfProd (taken from AbleC) that is of the type Expr ::= Location. The reason for this is so that when we invoke someExpr.addressOfProd(top.location) the resulting tree is built using the location of the address-of operator, not of the original expression. When we rewrite this code to use origins, we can remove the Location argument, meaning the type will be just Expr ::= and the invocation will be just someExpr.addressOfProd(). Since the call site will pick up origins information from the node it is a rule on (top), it will flow correctly into the lambda invocation. For example, if we had a production with a rule like top.addressOfProd = (\loc::Location -> someOtherProd(location=loc)) we can change it to a 0-argument lambda top.addressOfProd = (\ -> someOtherProd()).
Nonterminals in silver are either tracked (which is a qualifier like closed) or untracked. Tracked nonterminals have origin information attached to them at construction (and, if using redexes, when they are 'moved' during a transformation.) Untracked nonterminals don't have origin info. There are performance implications for keeping track of origins info (both in constructing the origins info representations, doing the bookkeeping for them, and the memory overhead of pinning the things objects originate from), so it is in one's best interest to avoid tracking nonterminals that won't have their origins information asked for.
In Silver the origin of a node is represented as an instance of the nonterminal type OriginInfo, which has different productions for different sets of origin information a node can have. To access the OriginInfo for a node one calls getOriginInfo (in core), which returns a Maybe<OriginInfo>. Code using origins should handle the case that this returns nothing(), which can happen either because the nonterminal is not marked as tracked, because the program was built with --no-origins, or because of a stale module not attaching origins (see the later note on build issues.) The links to origins and redexes in OriginInfo nodes are implemented as unconstrained generics, so to handle them it is necessary to use reflection. If you want to checked-cast a link to a known type you can use the reify(anyAST(link)) pattern to do so without unnecessarily constructing a reflective tree.
In Silver, notes are values of the type OriginNote. A builtin dbgNote ::= String production is available for quick debugging notes, but for other uses users are encouraged to add their own productions. Notes are effective over domains of code and will be picked up in the origins info for any values constructed (in their origin notes) or moved (in their redex notes) in that code (and in functions it calls, etc.) Notes can be made effective over an entire body of statements by adding a production statement of the form attachNote dbgNote("foo"); or made effective over only part of an expression by writing attachNote dbgNote on {expr}. The former is useful to describe a general operation happening, and the latter for noting an exceptional case (e.g. a nontrivial optimization taking place sometimes.)
In Silver the ‘er’ flag on origins is known as ‘newlyConstructed’ or ‘isInteresting’. The definition used to determine if a constructed node is interesting is that it is considered interesting unless all of the following are true:
- It's in the 'root position' of a rule, i.e. bar() is in 'root position' in top.xform = bar() but not in top.xform = foo(bar()).
- It's the same production as the production on which the rule is defined, i.e. ...production bar... { top.xform = bar(...); } but not ...production bar... { top.xform = foo(...); }
- It's not constructed in a function (including lambdas)
The purpose of this flag is to indicate if the transformation is ‘trivial’ or not. If the flag is not set you can know that the transformation didn’t change the ‘shape’ of the tree at the level of the node on which it’s set.
We can follow the origin link of a node to the node it logically originates from.
Once we can do this, we can get the origin information of that node, and follow the path of origins back.
This is the ‘origins chain’ or ‘origins trace’.
Eventually we will reach a node that has an origin describing some source other than a rule on a node (e.g. that it was parsed from input to Copper) or a node without origins (because it is not tracked.)
One can call getOriginInfoChain to get a list of OriginInfo objects representing the links between objects in this chain. If the chain of origins is foo ---originates-from-node---> bar ---originates-from-node---> baz ---originates-from-source-location---> file:12:30, we can call getOriginInfoChain(foo) to get [originOriginInfo(..., bar, ...), originOriginInfo(..., baz, ...), parsedOriginInfo(loc('file', 12, 30, ...))].
One very practical application is that we can get this chain of origin information, find the last one, and find the source location the object at the end of the chain originates from. This is what we currently do with the location annotation in many places.
This common use case is wrapped up with the helper functions getUrOrigin(...), which returns the last item in the origin chain (if there is one), and getParsedOriginLocation(...), which gets the last item in the origin chain and - if it is a parsedOriginInfo indicating it was constructed in Copper - yields the Location.
In situations where the logical textual origin of a node is not the textual origin of the node on which the rule that constructed it was defined, one can attach a logicalLocationNote(loc) to it, which will be used by getParsedOriginLocation instead.
The origin information (the LHS, notes, interesting-ness or other source information) is tracked by the runtime and generated code and flows throughout running silver code. When a function or lambda is called, the origin information from its call site is used for values constructed in it. This means that while it's not possible to ask for the origin of a function instantiation proper (while this does make sense from the function-as-a-node-with-a-single-attribute-called-return PoV, it's not the silver model), it is possible to get the same information by constructing a value and asking for its origin. There is a production in the origins runtime support specifically for this, called ambientOrigin() (of type ambientOriginNT). For example, if you have a helper function like checkArgs :: [Message] ::= [Expr] [Expr] Location and call it from a binOp production using the production's location as an argument, you can instead omit that argument and use errFromOrigin(ambientOrigin(), ...) to produce the error.
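A sketch of that rewrite (hypothetical helper, messages illustrative):

-- Before: the helper takes a Location just to raise errors
function checkArgs
[Message] ::= formals::[Expr] actuals::[Expr] loc::Location
{
  return if length(formals) != length(actuals)
         then [err(loc, "wrong number of arguments")]
         else [];
}

-- After: the Location parameter is gone; origins flow in from the call site
function checkArgs
[Message] ::= formals::[Expr] actuals::[Expr]
{
  return if length(formals) != length(actuals)
         then [errFromOrigin(ambientOrigin(), "wrong number of arguments")]
         else [];
}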
In Silver the notion from the paper is extended and generalized to provide origins that can also encode different ways of producing nodes that are not part of the simple attribute grammar described in the paper.
Each different set of possible origin info is described by a production of OriginInfo. Each production has an OriginInfoType member that describes where and how the node was created, and contains a list of OriginNotes attached from code that influenced the creation of the node.
- originOriginInfo(typ, origin, originNotes, newlyConstructed) contains a link to the node that this node originates from (origin), notes (originNotes) and the interesting flag (newlyConstructed). The possible values for typ (OriginInfoTypes) are:
  - setAtConstructionOIT() indicating the node was constructed normally. The origin link is to the node on which the rule that constructed this node occurred.
  - setAtNewOIT() indicating the node was constructed in a call to new to undecorate something. The origin link is to the node that was newed.
  - setAtForwardingOIT() indicating the node was forwarded to. The origin link is to a copy of this node from which you can find out where it was constructed.
  - setFromReflectionOIT() indicating the node is an AST created from reflect. The origin link is to the node that was reflected on.
  - setFromReificationOIT() indicating the node was created from an AST by reify. The origin link is to the reflective representation the node was reified from.
- originAndRedexOriginInfo(typ, origin, originNotes, redex, redexNotes, newlyConstructed) contains a link to the node that this node originates from (origin), notes on that link (originNotes), a link to the node that is the redex of a transformation that moved this node (redex), notes on that link (redexNotes), and the interesting flag (newlyConstructed). The only value for typ this can have is setAtAccessOIT().
- parsedOriginInfo(typ, source, notes) contains a source location (source) of the text that caused Copper to emit this node from parsing (appears only on concrete productions.) The only value for typ this can have is setFromParserOIT(). notes is currently unused.
- otherOriginInfo(typ, source, notes) contains a string describing whatever circumstance produced this node (source) and maybe notes. This is a catchall for things that do not have a logical origin either due to implementation details or concepts not present in the paper. Possible values for typ are:
  - setFromParserActionOIT() indicating the node was constructed in a parser action block.
  - setFromFFIOIT() indicating the node was constructed in a context where origins information had been lost as a result of passing through an FFI boundary that does not preserve it (e.g. when something is constructed in a comparison function invoked from the java runtime Silver value comparator shim)
  - setFromEntryOIT() indicating the node was constructed in the entry function
  - setInGlobalOIT() indicating the node is a constant
trackedness is implemented in the silver compiler as part of the nonterminalType. Initially it was held in the ntDcl for that nonterminalType (which is where the closed qualifier goes.) That seems like it would be preferable, but the way import works means that that ntDcl is not always available. In the situation that a production is imported (and used) without the nonterminal being imported (e.g. import silver:langutil only err) we can have knowledge of the production without the nonterminal to which it belongs. Since whenever we construct or manipulate a nonterminal we need to know its trackedness, this meant that the trackedness had to go in the nonterminalType.
tracked nonterminals extend TrackedNode, which in turn extends Node. Non-tracked nonterminals still directly extend Node. Node should be used to represent an untracked node or a node of unknown trackedness. The only case where it's possible to have to attach origin info to an unknown-trackedness node is attaching a redex, which is done with a runtime instanceof check.
The OriginInfo for a node is treated as a hidden child and evaluated strictly. It is held in the NOriginInfo origin field of TrackedNode. It shouldn't be null, but that is possible if FFI produced a bad origin or if there is a bug.
These OriginInfos are normal silver production instances. They need to be untracked to make it actually possible to construct them without infinite regress. All productions of OriginInfoType are instantiated at startup as singletons and held in OriginsUtil. The stdlib accessors for origins are Java FFI functions that call out to helpers on OriginsUtil.
During runtime the origin context exists as a common.OriginContext object. These are analogous to all the stuff added to the left side of the turnstile in the evaluation semantics for the AG-with-origins in the paper. These objects are immutable (since they get captured into closures and DecoratedNodes). They hold information similar-to but different-than OriginInfo nonterminal instances, and generate OriginInfo nonterminal instances. They are handed around as an additional parameter to function calls and baked into Lazys/Thunks as captured variables. They are tacked onto DecoratedNodes as something of an ugly hack. FunctionNodes extend Node not TrackedNode, since they never escape the invocation. They are constructed and decorated with TopNode in order to provide an environment for evaluation of locals though, so when they are decorated the DecoratedNode gets the originCtx passed into the function invocation.
Depending on the context of the code being emitted we try to avoid passing them around/constructing them when not needed. In expressions in rules that are defined in a block on a production we always know the left hand side and can statically determine the notes that apply, so we construct the OriginContext only at the sites where we need to produce an OriginInfo. In expressions that occur in functions (including lambdas) we need to take the OriginContext as an additional parameter to .invoke because the context depends on the caller. Lastly, for expressions occurring in weird spots (e.g. parser actions, globals) we need to use a bogus OriginContext. translation:java:core/Origins.sv contains the logic for what to do in each of these cases.
Each BlockContext gains an originsContextSource :: ContextOriginInfoSource which is one of:
- useContextLhsAndRules() indicating that the LHS and rules can be derived statically (i.e. this expression is only ever evaluated on a DecoratedNode where the undecoration is the LHS and the rules can be determined statically from the originRules() attribute on the Expr.)
- useRuntimePassedInfo indicating the context should be retrieved from the runtime-passed java value stored in originCtx and swizzled through thunks and function calls etc.
- useBogusInfo(name) indicating the context is garbage (parser action or global) and the name indicates which of the special varietys of OriginInfo should be used (see below)
When they do produce OriginInfo nonterminals they only produce originOriginInfo or otherOriginInfos. originAndRedexOriginInfos are attached to nodes that have been moved later, by expanding an existing originOriginInfo using the origins context at the time of tree motion to set the redex and redex notes without modifying the origin and origin notes.
They have a variety field which is one of:
- NORMAL, indicating that the field lhs holds the context node and notes holds the notes attached to the current context. This corresponds to originOriginInfo(setAtConstructionOIT(), lhs, notes, isInteresting)
- MAINFUNCTION corresponding to otherOriginInfo(setFromEntryOIT(), ...)
- FFI corresponding to otherOriginInfo(setFromFFIOIT(), ...)
- PARSERACTION corresponding to otherOriginInfo(setFromParserActionOIT(), ...)
- GLOBAL corresponding to otherOriginInfo(setInGlobalOIT(), ...)

The lhs and notes fields are meaningless unless variety == NORMAL. All varietys except NORMAL are instantiated as singletons: OriginContext.MAINFUNCTION_CONTEXT etc. When a node is newly constructed, the context's makeNewConstructionOrigin(bool) function is called, returning the appropriate OriginInfo object.
When redexes are attached (when expr.attr is evaluated the result gets a redex pointing to the context of the access) it is by calling OriginContext.attrAccessCopy(TrackedNode) (if the value is a known-to-be-tracked nonterminal) or OriginContext.attrAccessCopyPoly(Object) (if the value is of a parametric type and has been monomorphized to Object - this is a no-op if it is not actually a TrackedNode at runtime.) This copies the node (using its .copy(newRedex, newRules)) and returns the new copy, which has an originAndRedexOriginInfo that got its origin and origin notes from the old origin and its redex and redex notes from the passed context.
Similarly, when a node is produced by new, the result of .undecorate() has .duplicate called on it, which performs a deep copy where the new nodes have originOriginInfo(setAtNewOIT(), ...) pointing back to the node they were copied from.
Lastly, when a node is used as a forward, .duplicateForForwarding is called on it to mark that, returning a shallow copy with an originOriginInfo(setAtForwardingOIT(), ...) pointing to the node with the 'real' origin info (this is kind of an ugly hack, but was preferable to introducing a new and unique pair of origin(AndRedex)AndForward OIs.)
.duplicate, .duplicateForForwarding and .copy are specialized per-nonterminal, and a default implementation in Node simply ignores the request and does not do the copy (warnings can be uncommented there to see if this is happening - it won't affect correctness but will waste time.)
When control flow passes into java-land and then back into silver (i.e. when the raw treemap code invokes a java operation that calls back into silver using a SilverComparator) a context is needed. Since there isn't a (good) way to indicate to the comparator where in silver it was called (we could attach the context when it was constructed, but this is the creation site of the tree, not the invocation site of the comparison) it just gets a garbage context: OriginContext.FFI_CONTEXT, which turns into an otherOriginInfo(setFromFFIOIT(), ...) if it constructs nodes.
The last special case is the main function, which is called with OriginContext.MAINFUNCTION_CONTEXT by the runtime entry code.
- Code shouldn't assume that the origin is correctly set even on tracked nonterminals. It might not be in the case of bugs (although there aren't any currently known) or incorrect native code (either in foreign blocks or in a native extension (we have those, right?))
- OriginInfo can't be tracked (otherwise it's impossible to construct them without infinite regress)
- OriginInfo productions are "sacred" (name and shape are compiler-assumed) and can't be changed without compiler and runtime support
- OriginInfoType productions are "sacred" (name and shape are compiler-assumed) and can't be changed without compiler and runtime support
- OriginInfoType productions are instantiated as singletons inside the runtime and don't have origin info (same issue as above)
- List isn't tracked and would need some special support from the runtime and compiler to be tracked. This would probably have disastrous performance implications if we changed it.
- Some types are directly mentioned in the silver compiler as nonterminalTypes and need to have a compiler-decided trackedness. This trackedness is alterable but requires compiler changes. Such types (and their current trackedness) are as follows:
  - core:Location - no
  - core:OriginNote - no (see above)
  - core:Either - no
  - core:reflect:* - yes
  - core:List - no (see above)
  - silver:rewrite:Strategy - no
  - core:Maybe - no
  - core:ParseResult - no
  - silver:langutil:Message - yes
  - ide:IdeProperty - no
  - core:IOVal - no
- Foreign types can't be tracked, and some FFI interfaces don't preserve origins information (see above)
Types that CANNOT be tracked (currently just core:OriginInfo, core:OriginInfoType, and core:List (because its translation is special)) are listed in translation/java/core/origins.sv:getSpecialCaseNoOrigins and will never be treated as tracked.
When --no-origins is used it does not alter whether or not the type is considered tracked in the compiler; it just disables codegen for origins. For types that need to be constructed from runtime code you should construct them using the rtConstruct static method, which forwards to the normal constructor with or without the origin argument depending on whether --no-origins is used.
The following silver code approximates the example attribute grammar used in the Origin Tracking in Attribute Grammars paper linked above:
tracked nonterminal Expr;

synthesized attribute expd :: Expr occurs on Expr;
synthesized attribute simp :: Expr occurs on Expr;

abstract production const
top::Expr ::= i::Integer
{
  top.expd = const(i);
  top.simp = const(i);
}

abstract production add
top::Expr ::= l::Expr r::Expr
{
  top.expd = add(l.expd, r.expd);
  top.simp = add(l.simp, r.simp);
}

abstract production sub
top::Expr ::= l::Expr r::Expr
{
  top.expd = sub(l.expd, r.expd);
  top.simp = sub(l.simp, r.simp);
}

abstract production mul
top::Expr ::= l::Expr r::Expr
{
  top.expd = mul(l.expd, r.expd);
  top.simp = case l.simp of
             | const(1) -> attachNote dbgNote("Multiplicative identity simplification") on {r.simp}
             | _ -> mul(l.simp, r.simp)
             end;
}

abstract production negate
top::Expr ::= a::Expr
{
  attachNote dbgNote("Expanding negation to subtraction from zero");
  top.expd = sub(const(0), a.expd);
  top.simp = error("Requested negate.simp");
}
Computing the transformation of a tree is accomplished by demanding expd on the tree and then simp on the result (for a tree x the transformation is x.expd.simp.) The following diagrams visualize the origins connections between the resulting value (x.expd.simp) and the original value (x.)
The nodes in green are parts of the output value (the value itself has a bold border) and the nodes in blue are part of the input value (similarly.)
Dashed lines represent origin links and dotted lines represent redex links.
Wide dashed lines represent contractum domains (essentially what is the most immediate parent on which a location-changing transformation occurred… read the paper for a formal description.)
Diamond-shaped nodes indicate the interesting/‘er’ flag is set on that node’s origin info.
In this specific example grammar the green nodes are then x.expd.simp, the white nodes are x.expd and the blue nodes are x. Due to implementation details the input tree is marked interesting (it is interesting if you consider that it's a nontrivial translation from a different CST type) but you can ignore that for the purpose of the explanation.
The tree negate(const(1)) expands to sub(const(0), const(1)) and then simplifies (a no-op) to sub(const(0), const(1)):
We can see that the simplified copy of const(1) originates from the expanded copy, which originates from the original copy. Since the transformations for const are no-ops (the shape of the rule top.expd = const(i) trivially mirrors the shape of the production production const top::Expr ::= i::Integer) the expanded and simplified nodes are ovals, indicating that the rule that produced them was not 'interesting'. We can also see that generally, since the simplifications for this tree are all 'boring', simplified nodes originate from the expanded nodes and are not marked interesting (are ovals).
More interesting is the step that converted the negate to a sub. We can see that the sub node and the const(0) node are both marked as originating from the negate node - this is because they were produced by expressions in a rule that was evaluated on that node. We can also see the dbgNote attached to the origin info for the const(0) and sub nodes (in the originNotes field). The note does not appear on the origin of the const(1) because it was not manipulated in a nontrivial way in the rule for expd on negate.
The tree mul(const(1), const(2)) expands to mul(const(1), const(2)) and then simplifies to const(2):
We can see that since the expansion step is a no-op the nodes are marked uninteresting and originate simply. The interesting change is the simplification step. The mul(const(1), const(2)) reduces to just const(2) - the mul and const(1) nodes disappear and the const(2) is in the resulting tree in the location that the mul originally was.
We can see that the resulting const(2) originates as expected from the const(2) in the expanded tree, but has an additional dotted line to the mul node for its redex. This means that the simp rule on the mul node catalyzed the motion of the const(2) from its previous position in the tree to its new position, where the mul node was.
We can also see that the redex edge for the const(2) node in the output has the dbgNote from the simplification case of the match attached to it (as a member of the redexNotes - but not originNotes - list.) This is because the note was effective over the expression that moved the const(2) to its resulting position (r.simp in the simp rule for mul) but not the expression that constructed it (top.simp = const(i) in const.)
There are a few compiler flags that can be passed to silver to control origins tracking behavior:
- --force-origins causes the compiler to treat every nonterminal as if it was marked tracked. This is very useful for playing around with origins in an existing codebase and for figuring out what you need to track (build with --force-origins; look at the origins trace; track everything included.) This can be pretty slow (+15% to +30% vs no origins).
- --no-origins does the opposite, causing the compiler to completely disable origins, including the context swizzling machinery in generated code. This is recommended if you aren't going to use them, since it will remove almost all overhead in generated code.
- --no-redex causes the code to not track redexes. Redexes are a neat feature and a cool part of the theory, but not necessary if all you want to do is avoid having to use a location annotation for error messages. This can be somewhat (5%) faster than leaving redexes on if you aren't using them.
- --tracing-origins causes the code to attach notes indicating the control flow path that led to constructing each node to its origins. This can be a neat debugging feature, but is also quite slow.
Because of the issues with the silver build system it's possible to get in situations where either semantically wrong code (no origins when there should be), inefficient code, or crashing code is generated if --clean is not used after changing compiler flags or trackedness of nonterminals. To avoid this: rebuild with --clean every time you (1) change the trackedness of something or (2) change any of the compiler flags relating to origins (--force-origins, --no-origins, --no-redex, --tracing-origins.)
TODO: Fix #36 and resolve this issue.
|
https://melt.cs.umn.edu/silver/concepts/origins/
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Definition of Traceback in Python
Traceback in Python provides the key to resolving unhandled exceptions that occur during the execution of a Python program. It lists the nature of the exception, a clear explanation of it, and the details of the program segments, in the reverse order of execution, that triggered the exception.
The file names, module names, line numbers, and exact code displayed by the traceback help developers trace the causes of exceptions by linking back through the program steps, zero in on the offending lines, and correct them for error-free execution. Traceback steps exactly match the actions of the Python interpreter, which improves the developer's productivity in quickly resolving issues in the program.
Syntax:
The traceback details displayed by Python contain several parts; the general form is:
Traceback (most recent call last)
Program segment (File name, Line no, module name, exact code) executed first
Program segment (File name, Line no, module name, exact code) executed second
….
Program segment (File name, Line no, module name, exact code) executed last
Exception name: Detailed exception message
A typical traceback looks like:
Traceback (most recent call last):          <- traceback header
File "main.py", line 23, in module1         <- first code that got executed: file main.py, line 23, module-1
Exact code-1                                <- the code is displayed
File "main.py", line 20, in module2         <- second in the execution list: file main.py, line 20, module-2
Exact code-2                                <- the code is displayed
File "main.py", line 17, in module3         <- last in the execution list: file main.py, line 17, module-3
Exact code-3                                <- the code is displayed
IndexError: list index out of range         <- exception name: detailed message
How does Traceback work?
Python generates a traceback when an exception occurs during the execution of a program. There are two kinds of conditions under which a Python program runs into problems.
The first is a syntax error. If the program is not properly coded, it errors out at compile time, before execution begins. The developer needs to write the correct code; only then will the program progress to the next lines.
The second is a logical error, called an exception. This error happens only during execution, and it surfaces only when an exceptional condition occurs within the program, typically due to the supply of wrong data that the program is not designed to handle.
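A hypothetical two-line script illustrates when each kind of problem surfaces:

# A syntax error stops the program before any line runs:
# print(            <- uncommented, this is a SyntaxError reported at parse time

# An exception surfaces only when the faulty line actually executes:
values = [1, 2, 3]
print(values[10])   # IndexError: list index out of range, raised at runtime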
There are several built-in exceptions available in python, and some of them are listed below:
1. ZeroDivisionError – This error occurs when a value is divided by zero, i.e., the denominator of a division is zero.
2. ImportError – Python throws this exception when an imported module is not available in its repository, so the program cannot be executed.
3. IndentationError – Thrown when the indentation is incorrect. Compound statements like if, while, and for need to follow certain indentation, and if it is not followed, this error will occur.
4. IndexError – When an index reference overflows the maximum limit defined in the program, this exception is thrown.
5. KeyError – Python triggers this exception when the referenced key is not found in the mapped table or dictionary.
6. AssertionError – This exception is thrown when a condition declared to be true using an assert statement turns out to be false during execution of the module. The program stops execution when this happens.
7. NameError – Python throws this error if an attempt is made to refer to a variable or a function that is not defined within the program. This error can occur if a local variable is referenced outside its scope.
8. AttributeError – This kind of error occurs when an attribute reference or assignment fails, for example when accessing an attribute the object does not actually have.
9. TypeError – Python throws this exception when a wrong operation is attempted on a variable, such as an arithmetic operation on a string or a string operation on an integer.
10. MemoryError – This error condition occurs when the program exceeds the memory allotted to it by creating too many objects and not freeing them in time.
The developer has to use these error details to trace the code steps that caused the error, understand the issue, and correct the program accordingly.
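For example, the standard traceback module can print the same report from inside an exception handler; a minimal sketch (the function name is illustrative):

import traceback

def risky():
    return [0, 1, 2][5]       # IndexError: list index out of range

try:
    risky()
except IndexError:
    traceback.print_exc()     # prints the multi-frame report described above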
Examples
1. In the example below, there is a provision to decode the month description for the first four months of the year (Jan-Apr); any month code beyond this range results in an IndexError.
# Program to decode month code of the first four months in the year
mthdesc = ["jan", "feb", "mar", "apr"]        # four month descriptions in the table
def decodemth(mm):                            # function decodemth to decode
    print(mthdesc[mm - 1])                    # decodes and prints; IndexError when out of range
def src():
    monthcode = int(input("Month code = "))   # prompt for the month code
    decodemth(monthcode)
src()                                         # calling the working function
When the program is executed, it prompts for the month code. The results under various inputs:
Month code = 02
Month code = 04
Month code = 08 (out of boundary condition)
The error thrown is list index out of range. The first line to be executed is line 13, which calls the src() function. The second is line 11, inside src(), which calls another function, decodemth(monthcode). The third and last is line 5, which decodes and prints, and is the place where the error is thrown.
2. In this example, an arithmetic operation is attempted on a string, and it raises a TypeError.
# Program to decode month code of the first four months in the year
mthdesc = ["jan", "feb", "mar", "apr"]   # four month descriptions in the table
def decodemth(mm):                       # function decodemth to decode
    print(mthdesc[mm - 1] + 1)           # adding a number to a string raises TypeError
During execution with month code 01, it raises a TypeError, and the trace leads back to line 13.
3. Indentation error: under the function Division, the body lines are not indented, so Python raises an IndentationError.
def Division():
A = Num / Den
print ("Quotient ", A)
Num = int (input ("numerator "))
Den = int (input ("denominator "))
Division()
4. Division by zero error
def Division():
    A = Num / Den
    print("Quotient ", A)
Num = int(input("numerator "))
Den = int(input("denominator "))
Division()
When the program is executed with 10 as the numerator and 5 as the denominator, it gives results correctly.
When the program is executed with 10 as the numerator and 0 as the denominator, it gives an error.
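The resulting report looks roughly like this (the file name and line numbers depend on how the script is saved):

Traceback (most recent call last):
  File "main.py", line 6, in <module>
    Division()
  File "main.py", line 2, in Division
    A = Num / Den
ZeroDivisionError: division by zero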
Conclusion
Traceback provides the information, ways, and means to debug any error, locate its root cause, and correct it for error-free execution of Python programs.
Recommended Articles
This is a guide to Traceback in Python. Here we discuss the definition and syntax, how traceback works, and examples with code implementation. You may also have a look at the following articles to learn more –
|
https://www.educba.com/traceback-in-python/
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Listening for connections
Moin Phillips
Greenhorn
Posts: 4
posted 14 years ago
Hi,
I'm new to these forums and came across them through the Head First books. Anyway, I'm creating a simple chat program and I've coded all the client-side files, placed them in a jar, and signed them. This part works perfectly fine on my local machine and from a remote server (at the moment it only shows the GUI of the chat room).
I'm having trouble with the next bit, that is, how do I listen for connections 24/7?
I know that I need some form of loop that listens for connections. BTW, I'm using sockets and I know all about them, and I have even coded a few parts of this. So, if I have a class file with a main method that continuously listens for connections, how do I get it to run on a server 24/7?
I'd prefer to stay away from Servlets and JSP as I am still reading the Head First book on that, but I know you can do it through JAR files, can't you?
Chris Beckey
Ranch Hand
Posts: 116
I like...
posted 14 years ago
java.net.ServerSocket may be what you are looking for
Moin Phillips
Greenhorn
Posts: 4
posted 14 years ago
Yes, I know about the ServerSockets, but what I'm trying to say is how do you run a main method on a server? Do servers have command lines from where you can run a server.java file and let it run continuously?
Chris Beckey
Ranch Hand
Posts: 116
I like...
posted 14 years ago
Here is I think basically what you want ...
1.) from main, create an instance of a service thread
2.) the service thread opens the server socket and waits for client connections
3.) the main thread just waits for "administrative" input
4.) spawn off, or grab a thread from a pool to service requests from within the service thread
The code below is basically it, but take it with a grain of salt as I wrote it in about 5 minutes. It will keep running as long as nothing is available on System.in.
import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

public class Server implements Runnable {
    private ServerSocket serverSocket = null;
    private boolean terminated = false;

    private Server(int port) {
        try {
            serverSocket = new ServerSocket(port);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    private boolean isTerminated() {
        return terminated;
    }

    public void terminate() {
        terminated = true;
    }

    public void run() {
        System.out.println("Listening on " + serverSocket.getLocalPort());
        while (!isTerminated()) {
            try {
                // this will wait indefinitely for a client connection
                Socket socket = serverSocket.accept();
                // start a thread, or preferably get a pooled thread, to service the request
                // don't do it on this thread or scalability will be an issue
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        System.out.println("Terminated");
    }

    public static void main(String[] args) {
        Server server = new Server(1963);
        Thread serverThread = new Thread(server);
        serverThread.setDaemon(true);
        serverThread.start(); // the server socket servicing is now running on a separate thread
        try {
            System.in.read(); // will wait for input from the keyboard
            server.terminate();
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
Chris Beckey
Ranch Hand
Posts: 116
I like...
posted 14 years ago
start it with "java Server" from a command line.
If you want to run it as a service (on Windows) Google for "java windows service", there are a number of apps that will help there.
Moin Phillips
Greenhorn
Posts: 4
posted 14 years ago
Thanks for the code, but I've already got most of that done.
I'm not talking about running a service on windows, I'm talking about an actual Web Server (something you can buy commercially, the thing that this forum is running on). How do I get my Server.java file running on there, where there is no Windows?
Plus, what's the deal with these damn applets? I've made mine into JAR files and signed them and done pretty much everything by the book, and I still get SocketPermission errors! There are no files being accessed, nothing being changed, only messages sent back and forth. I don't want to mess around with policy files, as my chatroom will have hundreds of clients and I don't want to be giving them directions for changing their policy files.
Thank you for your help so far.
Mo
Chris Beckey
Ranch Hand
Posts: 116
I like...
posted 14 years ago
>>I'm not talking about running a service on windows, I'm talking about an actual Web Server (something you can buy commercially, the thing that this forum is running on). How do I get my Server.java file running on there, where there is no Windows?
Well that does change things rather significantly and unfortunately complicates things also.
What platform (web server, servlet container, application server, etc.) do you have in mind (Apache, Tomcat, JBoss, WebSphere, IIS, ...)? Is it something you have control over or are you stuck with what an ISP provides?
==========
The rest of this reply is sorta' random stuff until the above question is answered.
The first question is: why? It doesn't look like you are using what a web server provides you (i.e., HTTP protocol support), so why do you want to run on one? Also, can a chat application be adequately implemented given the limitations of HTTP (i.e., its request/response pattern)?
Speaking rather generally, web servers don't provide for their hosted applications to be starting service sockets because that is what the web server does. It is at least theoretically possible to write a connector (for Tomcat) that would do what you want but that is forcing a square peg into a round hole. For other web/app servers I don't know. You might try looking for "startup" classes in the documentation.
What app/web server do you have in mind to run on? and again do you really have to? Could you run a standalone chat server on the same box as the web server?
Another question, would you expect the clients to receive un-initiated messages? that is, not a response to a request that the applet made?
Here is a link to an article on tunneling RMI over HTTP:
Moin Phillips
Greenhorn
Posts: 4
posted 14 years ago
The only reason I want to use a web server is to make my chatroom live, i.e. make it available to clients on the Internet. I would stick with the free webspace from Geocities if that worked!
I've been researching a lot about this and think these problems are due to my lack of knowledge of servers. I designed my chatroom on a client-server model; I have completed the client side, and the server side only works on my local machine. The problem now arises when I try to make my chatroom 'public', or live on the Internet.
There are so many technologies flying around. I've tried to experiment with PHP and Postgres, and Java itself has so many solutions (JSP/Servlets, RMI, Web Start) which all sound confusing to me at the moment.
Therefore I decided on just putting the class files in a JAR on a server and hoping they work without having a container like Apache or Tomcat or anything. BTW, I'm using my university's server at the moment, which has Java installed on it. Once this project is complete I plan on getting a commercial one.
The client side works, but when connecting to the server I get SocketPermission errors.
Basically, do you think that I'm going about this the right way, or do you think I should use some other technology?
Aaron Shaw
Greenhorn
Posts: 9
posted 14 years ago
I think I know what you mean, Moin. I run a Java MUD, which is a text-based game where many players can connect.
I have the main game loop run like this:
while (shutdown == false) {
    // blah blah blah....
    Thread.sleep(50);
}
The game runs in this loop forever, until the boolean 'shutdown' is set to true by an admin. The Thread.sleep(50) makes the thread pause for 50 milliseconds each loop, in an attempt to stop the program from completely locking up any other programs I need to run.
Whether this is good practice or not, I'm not sure. Anyone, feel free to provide a more elegant solution.
[ January 23, 2007: Message edited by: Aaron Shaw ]
The part that listens for incoming connections runs in another thread, but the loop is the same, and the thread is lower priority.
You don't need a web server. You're not serving up HTML. Just run your program on whatever port you like, such as 6666, and connect using telnet or any custom client you made, etc.
[ January 23, 2007: Message edited by: Aaron Shaw ]
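A throwaway client for a server like this is only a few lines; for illustration, a Python sketch (the host and port here are assumptions):

import socket

# Connect to the game/chat server and send one line (host/port assumed).
with socket.create_connection(("localhost", 6666)) as conn:
    conn.sendall(b"hello\n")
    print(conn.recv(1024))   # blocks until the server sends a reply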
Chris Beckey
Ranch Hand
Posts: 116
I like...
posted 14 years ago
This response may be a bit pedantic, but will hopefully clear up the issue.
Web Server 101 ... with vague generalities that will probably get me flamed.<g>
A web server, in its most elemental form, simply listens for and responds to HTTP requests. Basically that means it gets an HTTP request (a string in a specified form), determines what resource (think HTML file) the request is for, and sends the content of the resource (file) back to the requestor, formatted as an HTTP response. That is basically all a web server does. Other than the specifics of parsing/assembling the HTTP requests/responses and locating resources (involved and initially trivial, respectively), the code I posted earlier, and that you have written, is the core of a web server.
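For illustration, here is a minimal sketch of that "elemental web server" (in Python for brevity; the port and response body are arbitrary):

import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("", 8080))
srv.listen(5)
while True:
    conn, _ = srv.accept()
    request_line = conn.recv(1024).split(b"\r\n")[0]  # e.g. b"GET /index.html HTTP/1.1"
    body = b"<html>hello</html>"                      # the "resource" being served
    conn.sendall(b"HTTP/1.0 200 OK\r\nContent-Length: %d\r\n\r\n" % len(body))
    conn.sendall(body)
    conn.close()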
So a web server is an application that understands the HTTP protocol. Fine ... call 'em "HTTP servers"; it's more accurate, more precise, and avoids the HTML involvement that the server doesn't really care about.
In your case you have an HTML file that contains an applet tag. The HTML file is one resource, the applet is another. All the web server does is respond to requests for those resources (from a browser) and return the bytes that make up the page and the applet. Effectively this allows you to download code to the client. The client (the browser) then runs that code because it knows that it is Java byte code (the response includes type information). In this example, the browser is far more sophisticated than the server.
All fine up to now, you have gotten code loaded on a client and it is running. Now the applet wants to communicate with a server. Assuming it is going to be through a socket, the first question is what protocol? That is, what is the client going to send to the server and what is the server going to respond with? Also, what is the messaging pattern? Will the client always initiate a request, followed by a server response? Can the server send an unsolicited message to the client?
==>Draw boxes for the client(s) and servers, connect them with lines and then determine what goes from the client to the server, to the other client, etc ... Then think about the perspective of each box (i.e. server waits for messages and responds, client sends message and waits for response, etc ...)
The applet is not restricted to HTTP or any other defined protocol, it can send whatever it wants down the wire. BUT, it is restricted to talking to only the server it was loaded from, that is a function of Java sandbox security (and maybe why you are getting the exception, see references below).
Once you answer all that stuff about messaging pattern and content then you can determine if HTTP is a valid choice for the protocol and if it is then you may be able to use a web server as your chat server. Read on ...
=> The short answer is that it will work but not particularly well, which implies that a web server is not the optimal solution. The bit about drawing boxes and the perspective of each box should illustrate why.
Now to slightly more complex stuff. In HTTP the requested resource does not have to be a static file. For example, a request for an HTML page containing the current time must produce different content constantly. That is where active server content in one form or another comes in; that may be servlet/JSP, PHP, Ruby, ASP, ASP.net, CGI, ISAPI DLL, etc. But basically the interface is the same, that is:
Request - the name of the resource and possibly some parameters
Result - the resource contents as a stream of bytes (with some type information)
Note that the fact that the protocol is HTTP has not changed. The browser knows nothing about how the content is generated, it just gets back a stream of bytes and either displays it or executes it (again depending on type, which is part of the response).
If active content on an HTTP server is the route you decide to take, then pick a technology and then pick a server that implements it.
==> Summary
You're doing this for the education, right? If not, find an existing IRC server/client implementation and install it.
An HTTP server is probably not the best fit.
The ability to download and run Java code can be addressed with WebStart without the limitations on server communication of an applet.
This problem has already been solved and codified (see below for IRC), implement to that spec, or at least read enough of it to understand the rationale.
Deploying a generic Java (server) application on a hosted system is gonna' be half a step from impossible. If you do have the intention of deploying this commercially, you may have to host it yourself.
Despite all that, it does sound like you are on the right track. The problem may be more complex than originally thought.
References:
HTTP protocol
you don't have to read it from cover to cover, just the basics of request and response format
IRC (chat) protocol
this problem has been solved, even if you don't implement the entire spec the basics of the messaging will be the same
Java tutorial on applets/server communication:
... and maybe find your exception answer here.
Run TCP Monitor and watch HTTP browser/server interaction (you must download Axis to get it). This is most illuminating ...
Don't get me started about those stupid light bulbs.
|
https://coderanch.com/t/405932/java/Listening-connections
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
PlayIt meets the need for an all-in-one video player and music player.
PlayIt - All-in-One Player
What is it about?
PlayIt meets the need for an all-in-one video player and music player.
App Store Description
PlayIt meets the need for an all-in-one video player and music player.
PLAYit is ready to provide you with a feast for the eyes and ears!
Instantly search all music and audio files; support for all music and audio file formats; custom background skins.
PLAYit is made to help millions of music lovers reach millions of high-quality videos.
PlayIt Music Player is the best app to play online videos and share them as MP3.
PlayIt Video Player is the best music player for iPhone. With all formats supported and a simple UI, PlayIt Music Player provides the best musical experience for you. Browse all the songs on your iPhone. You deserve to get this perfect offline music player for free now!
The video-to-MP3 converter lets you convert any video to an MP3 file (not an M4A file), and this is the only app which can convert video to an MP3 file. You can convert any video to MP3 using PlayIt.
Key features of PlayIt:
• All in-one audio video player
• Browse and play your music by Albums, Artists, Playlists, Genres, Folders, etc
• Combine Audio, Video, Online Music, Converted Mp3 and Imported video into single Playlist
• Beautiful Lockscreen controls with full-screen album art support (enable/disable)
• Import audio and video from other devices using the web share option
• Folder support - Play song by folder
• Wearable support
• HD Video Player: Play It
• Floating video player
• Easy navigation & Minimalistic design
• All format videos all format audio supported
• HD sax video player for all formats: 4K videos, 1080p videos, MKV videos, FLV videos, 3GP videos, M4V videos, TS videos, MPG videos
• Playing queue with reorder - Easily add tracks & drag up/down to sort
• Powerful search - search quickly by songs, artist, album, etc
• Party Shuffle Music - shuffle all your tracks
• Genius Drag to Sort Playlist & Play Queue
• Headset/Bluetooth support
• Play now screen Swipe to change songs
• Play songs in shuffle, repeat, loop & order
• The best free music offline app and media player
Import Videos and Audios:
Import all your favourite videos and audio from your PC or any other device to your phone with just one tap.
Nice and Simple User Interface:
Enjoy your music with a stylish and simple user interface; Music Player is a perfect choice. The easiest player to play music with, without too many annoying features.
Playlist:
Create any number of your own playlists.
Playlists can combine online music and default (on-device) music.
Default Music:
Play your device's music along with online music.
Share your default music as an MP3 file with your friends and family.
Enjoy the PLAY.
|
https://appadvice.com/app/playit-all-in-one-player/1572047553
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Local development
Starting a local server
You will need an ASGI server such as uvicorn, daphne, or hypercorn:
pip install uvicorn
Pass an instance of ariadne.asgi.GraphQL to the server to start your API server:
from ariadne import make_executable_schema
from ariadne.asgi import GraphQL

from . import type_defs, resolvers

schema = make_executable_schema(type_defs, resolvers)
app = GraphQL(schema)
Run the server pointing it to your file:
uvicorn example:app
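The type_defs and resolvers imported above might look like the following minimal sketch (the schema and resolver here are placeholders, not part of the Ariadne docs):

from ariadne import QueryType, gql

type_defs = gql("""
    type Query {
        hello: String!
    }
""")

query = QueryType()

@query.field("hello")
def resolve_hello(*_):
    return "Hello world!"

resolvers = [query]

With the server running, the API (and the GraphQL Playground in a browser) should be reachable at http://127.0.0.1:8000, uvicorn's default address.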
|
https://ariadnegraphql.org/docs/local-development
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Created attachment 432468 [details]
init.log from the usb key in question
When attempting to boot one of the recent nightly composes [1] (as of Jul 16) from a USB key, the boot process fails with a dozen errors from mount. Debugging dracut resulted in the attached init.log. It seems to be an issue with the ext3fs.img which lives in the squashfs.img. We suspected that it might be something caused by the liveusb-creator, too.
dmsetup ls --tree returns (psyche being my own machine's name):
vg_psyche-lv_swap (253:1)
`- (8:5)
vg_psyche-lv_root (253:9)
`- (8:5)
live-osimg-min (253:3)
|- (7:3)
`- (7:1)
live-rw (253:2)
|- (7:3)
`- (7:4)
losetup -a returns:
/dev/loop0 [0001]:6931 (/osmin.img)
/dev/loop1 [0700]:2 (/squashfs.osmin/osmin)
/dev/loop2 [0811]:4 (/sysroot/LiveOS/squashfs.img)
/dev/loop3 [0702]:3 (/squashfs/LiveOS/ext3fs.img)
/dev/loop4 [0001]:6981 (/overlay)
[1]
Also occurs when booting from a CD burned from the .iso
and from a USB written to with dd command.
It would seem to be in the squashfs and not caused by the liveusb-creator.
Here are the relevant parts of dracut:
+ mount -n -t vfat -o ro /dev/sdc1 /sysroot
+ dd if=/sysroot/LiveOS/osmin.img of=/osmin.img
+ losetup -r /dev/loop0 /osmin.img
+ mkdir -p /squashfs.osmin
+ mount -n -t squashfs -o ro /dev/loop0 /squashfs.osmin
+ losetup -f
+ losetup -r /dev/loop1 /squashfs.osmin/osmin
+ umount -l /squashfs.osmin
+ losetup -r /dev/loop2 /sysroot/LiveOS/squashfs.img
+ mkdir -p /squashfs
+ mount -n -t squashfs -o ro /dev/loop2 /squashfs
+ losetup -f
+ losetup -r /dev/loop3 /squashfs/LiveOS/ext3fs.img
+ umount -l /squashfs
+ umount -l /sysroot
+ dd if=/dev/null of=/overlay bs=1024 count=1 seek=524288
+ losetup /dev/loop4 /overlay
+ dmsetup create live-rw
+ echo 0 4194304 snapshot /dev/loop3 /dev/loop4 p 8
+ echo 0 4194304 snapshot /dev/loop3 /dev/loop1 p 8
+ dmsetup create --readonly live-osimg-min
+ /bin/mount /dev/mapper/live-rw /sysroot
mount: wrong fs type, bad option, bad superblock on /dev/mapper/live-rw,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
Reassigning to get a second opinion of what could be wrong... nothing changed in dracut, so there might be some changes in the .img creation process.
Here is the dracut script:
;a=blob;f=modules.d/90dmsquash-live/dmsquash-live-root;h=c98cdef5897e04a8b1ad3655d4450402b2ff804c;hb=7d86d90d1152a8d496bbd8c41b6c865ca0c3f03b
Adding this as an F14Alpha blocker as it impacts the ability to test live images.
John
I tried looking at this a bit and was testing the kde-i386-20100720.15.iso image and found that ext3fs.img is not mountable.
e2fsprogs-1.41.12-5.fc14 was built on the 13th. It might be useful to try building ISOs with an earlier version of it to see if that makes a difference.
Re:Comment #8
soas-i386-20100702.15.iso was last Nightly Composes Soas.iso that worked
using livecd-to disk with Soas.ks for remix in f14 (rawhide) build system:
Soas-v4-07142010-remix was the last build that worked before yum update --skip-broken
(I am using rawhide so may have worked longer before update moved to rawhide)
see Test results:
When I built an ISO on my rawhide system ext3fs.img was mountable. I haven't tested the ks yet, and since it's pretty late probably won't be able to until tomorrow.
I did get to test it. I did a dd to a USB drive and when trying to boot I got a quick flash of syslinux and then the screen stayed black and it didn't appear that anything was happening. This is similar to what I was seeing when trying to boot off of the images from the nightly compose page. This suggests that there might be two separate problems.
In response to Comment 11
Apparently it is an issue with a missing splash.jpg file or in fedora-logos
If you look at a standard F13 DVD, it has:
display boot.msg
background splash.jpg
In my testing the black screen was fixed in Syslinux start up by
copying boot.msg and splash.jpg to syslinux/ folder and adding
display boot.msg
background splash.jpg
To the syslinux.cfg
The same goes for the LiveCD stuff, it is missing splash.jpg and boot.msg
plus the corresponding reference in isolinux.cfg
That is in fact a separate issue.
In a nutshell, the livecd-creator program :
...snip
#
.......
* > I note that my f14 (rawhide) build HD quit making working .iso's after a kernel update was part of yum update. Could this be part of the problem?
(from)
+ dd if=/dev/null of=/overlay bs=1024 count=1 seek=524288
+ losetup /dev/loop4 /overlay
+ dmsetup create live-rw
The overlay file is never formatted, then we try to mount it.
I don't see how this is supposed to work.
*** Bug 613213 has been marked as a duplicate of this bug. ***
f14(rawhide) remix build using livecd-creator works with soas.ks edited to use generic logos
yum updated to new kernel 2.6.35-0.49.rc5....fc14.i686 ?
CD Boots fine
In response to comment 16
The fedora-logos changes, which present a black screen and no ISOLINUX bootloader on the nightly composes, are a different issue than the one in this bug report.
This bug report involves the Live media booting but dying with an error message of unable to mount /dev/mapper/live-rw
The bug report which Comment 16 is referring to is Bug 617115
Please post there about your fedora-logo black screen issues with ISOLINUX/SYSLINUX
Discussed at the 2010/07/23 blocker review meeting, we accept this bug as a blocker. It would help if people can test the nightlies...well, nightly...for the next week or so and see how it goes. It seems like there may be multiple bugs here, it'd be helpful to have each isolated and separately filed as well.
--
Fedora Bugzappers volunteer triage team
For the black screen on boot issue, see:
--
Fedora Bugzappers volunteer triage team
In reply to Comment 10
I have just tested a livecd spin using a RAWHIDE repository source on an F13 machine, and the ext3fs.img does now mount properly.
It also boots properly, which leads me to believe there was a problem with either the ext3fs.img or the live-rw overlay file format being broken.
This issue should now be closed.
I am still seeing the mount problem with desktop-i386-20100723.01.iso. I think there is still something going on here. I'm going to do a local build on an F14 machine and see if that is different.
When I run livecd-creator on an F14 system I can mount the ext3 image.
Due to the black screen issue I can't boot the image though.
In reply to Comment 22
Hold down shift before the media boots, to get a SYSLINUX/ISOLINUX prompt.
Then enter: linux0
Press ENTER
Please remember the two issues are separate issues, however.
My testing shows that with a LiveCD built from the RAWHIDE repository
on an F13 host, the unable-to-mount-root issue has been fixed; I am not sure how, but it has. Please use the steps above to bypass the broken black-screen issue with the splash.jpg and menu background splash.jpg in SYSLINUX/ISOLINUX for your testing.
I'll need to wait until I get to work to test this. The keyboard on the easiest machine to test this doesn't work until an OS is running, so doesn't work for working around the black screen issue. It isn't my machine though so I can't regularly be mucking with its hardware for testing.
The machines that are mine to play with don't boot off of USB devices. One won't boot off of DVD RWs (I can't burn CDs due to another bug and using DVD Rs costs money.) and the last triggers a KMS kernel bug (though the shift trick did get it to start booting) that terminates the boot.
I should be able to do a couple of tests tomorrow at work.
I was able to test a local build on an F14 machine that had a fix for the black screen issue that was blocking my testing. The system booted up, but I couldn't log in. When I tried, I got a message about not being able to change the monitor settings. I don't know if that is a general problem or related to specific hardware. I'll test the image on other hardware tomorrow.
That doesn't explain why the nightly composes have this issue and building on other F14 systems doesn't.
for daily tests of soas nightly composes see:
this page also shows builds with an external USB 500GB hard drive with an daily updated f14(rawhide) install of Gnome-sugar as a livecd-creator-soas.ks build system.
In reply to Comment 25 well I just built a livecd from a RAWHIDE repository
that I rsync'ed manually instead of using the URL provided in the kickstart file with livecd-tools, and it does have the splash problem of course, until that is fixed.
I also noticed the nightly spins say the ext3fs.img is of Ext4 format, and my locally built livecd-creator created image from rawhide says it is Ext3.
Maybe they are using a modified version of livecd-creator to build the nightlies.
I get the message about monitor settings on every boot with Rawhide, currently.
--
Fedora Bugzappers volunteer triage team
Just some more info here:
The compose box for the nightly composes is running rawhide.
It's a x86_64 xen guest.
I have:
xfce-x86_64-20100714.16.iso -> boots normally.
xfce-x86_64-20100717.15.iso -> fails as noted in this bug.
So, it seems like something between those two dates to me, but I am very puzzled as to what.
I've tried downgrading to the livecd-tools from before the last update. I tried downgrading e2fsprogs that was updated on the 14th. ;(
Happy to provide more info about the compose machine. I wonder: Those of you who have made local spins that work, have those all been 32bit hosts composing?
using ACER Aspire ONE Intel Atom N450 1.66 GHZ with 500 GB USB external drive with f14(rawhide) installed for builds; livecd-tools spin-kickstarts.
Here are tests:
- looks like soas-i386-20100702.15.iso last .iso that worked from Nightly Composes
- I can still do working remixes by using generic-logos in .ks. Just did one now.
I think someone forgot to mention the root=live:/dev/sr0 workaround here!
I.e., SCSI boot currently works but not IDE, if I understood correctly.
In reply to comment 31
This has nothing to do with that bug.
This has to do with the ext3fs.img not being mountable (reading the file magic says it's Ext4) and failing to mount when the system boots.
I again must say, this bug has nothing to do with the SPLASH.PNG for ISOLINUX nor SYSLINUX, and it is still being worked on.
In reply to Comment 29
Can you take one of the nightly spins, mount the ISO9660 filesystem, then mount the squashfs.img, and also try mounting the ext3fs.img and see if it mounts correctly.
I know the RAWHIDE squashfs-tools allows using lzma compression now, and it is the default in livecd-creator if it is available, but if the squashfs.ko in the kernels doesn't support SquashFS with lzma, then we could see problems like this. I just want to be sure, on your compose host running a RAWHIDE kernel, that when you mount the SquashFS image via loopback, the ext3fs.img inside it is readable also.
Unless someone already knows if you must pass specific options to mount for SquashFS images w/ lzma or something, we need this information Kevin.
Also Kevin, if you find that the nightlies' ext3fs.img is not mountable on the RAWHIDE host, would you mind trying to build one using the --compression-type=zlib to livecd-creator and then try mounting those ISO9660, SquashFS and Ext3 filesystem images.
I am curious if the kernel module squashfs.ko does not support SquashFS w/ lzma yet, which might cause the issue and make the contained ext3fs.img not mountable and seem corrupt.
I had already tested this: the squashfs image mounts, the ext image doesn't.
The default compressor for both squashfs-tools and livecd-creator are still zlib.
If the kernel patches land for 2.6.36 I'll ask for an exception for the LZMA feature to change the default for livecd-creator to lzma, but the default for squashfs-tools will remain zlib.
As of this time there is no lzma compression support in the kernel. I don't know if Lougher is planning to have some ready to submit for 2.6.36.
There have been some xattr changes in squashfs support in the kernel.
I am also in the process of getting a new squashfs-tools out that is synced up to upstream. The changes have been cleanup and better error handling for the xattr stuff. Probably not anything that will help with this bug.
In reply to Comment 35
According to fs.py around line 43, the mksquashfs default is LZMA, not ZLIB.
def mksquashfs(in_img, out_img, compress_type):
    # Allow zlib to work for older versions of mksquashfs
    if compress_type == "zlib":
        args = ["/sbin/mksquashfs", in_img, out_img]
    else:
        args = ["/sbin/mksquashfs", in_img, out_img, "-comp", compress_type]
    if not sys.stdout.isatty():
        args.append("-no-progress")
    ret = subprocess.call(args)
    if ret != 0:
        raise SquashfsError("'%s' exited with error (%d)" %
                            (string.join(args, " "), ret))
Here you test whether compress_type is "zlib", not "lzma".
This is probably breaking the nightlies, making them not mountable and also not boot correctly.
Looking at decompressor.c in the kernel tree for the latest kernel in koji, LZMA is still not supported:
static const struct squashfs_decompressor squashfs_lzma_unsupported_comp_ops = {
    NULL, NULL, NULL, LZMA_COMPRESSION, "lzma", 0
};

static const struct squashfs_decompressor *decompressor[] = {
    &squashfs_zlib_comp_ops,
    &squashfs_lzma_unsupported_comp_ops,
    &squashfs_lzo_unsupported_comp_ops,
    &squashfs_unknown_comp_ops
};
Seems like a bad way to do an if/else; why not: if compress_type == "lzma" then mksquashfs -comp lzma, else mksquashfs (use its defaults)?
This should be like this instead:
def mksquashfs(in_img, out_img, compress_type):
    # Allow zlib to work for older versions of mksquashfs
    if compress_type == "lzma":
        args = ["/sbin/mksquashfs", in_img, out_img, "-comp", compress_type]
    else:
        args = ["/sbin/mksquashfs", in_img, out_img]
    if not sys.stdout.isatty():
        args.append("-no-progress")
    ret = subprocess.call(args)
    if ret != 0:
        raise SquashfsError("'%s' exited with error (%d)" %
                            (string.join(args, " "), ret))
It makes much more sense not to change the default of zlib unless explicitly specified, especially when kernel lzma and squashfs support are not available.
The ext3fs.img isn't mountable, I suspect, because the compose host may be defaulting to lzma; I am not sure.
The default is zlib. If it weren't, things would have been broken long before July 15th.
What you may be confused about is that the -comp option to mksquashfs is new in 4.1. Using it with older versions doesn't work and will cause mksquashfs to error out. So if we want zlib compression, we don't pass a -comp option to mksquashfs. This allows the current livecd-creator to work with older versions of squashfs-tools.
The default compression type for mksquashfs will remain zlib in Fedora to help allow applications be backwards compatible if desired.
The default compression for livecd-creator will hopefully at some point change to lzma, but it can't right now, as Lougher's lzma patches were not accepted in the past and neither he nor anyone else has provided acceptable ones. He plans to do so at some point, but there are no guarantees.
I don't see how this is working; it's -comp gzip, not -comp zlib:
[root@ip70-190-121-13 tmp]# mksquashfs /tmp /tmp/squashfs.img -comp zlib
FATAL_ERROR: Compressor "zlib" is not supported!
Compressors available:
gzip (default)
lzma
[root@ip70-190-121-13 tmp]# mksquashfs /tmp /tmp/squashfs.img -comp gzip
Parallel mksquashfs: Using 4 processors
Creating 4.0 filesystem on /tmp/squashfs.img, block size 131072.
[=================================================================================================================|] 1/1 100%
Exportable Squashfs 4.0 filesystem, gzip compressed, data block size 131072
compressed data, compressed metadata, compressed fragments, compressed xattrs
Line 49 of live.py seems to say zlib also, not gzip.
self.compress_type = "zlib"
"""mksquashfs compressor to use."""
Maybe we can fix these.
You mentioned we would have seen something before the 14th if this were a problem, but I disagree. What you commit to git isn't packaged and put into the RAWHIDE repository, which the compose hosts use, until you rebuild the package, tag it, and submit it.
You are right about that being a bug. At some point I was inconsistent about using gzip and zlib (or mksquashfs changed what it called it and I didn't notice).
It works because when default compression is used no -comp option is passed.
However, this should be fixed.
I am going to fix the "zlib" to "gzip" issue right away. But that isn't what is causing this problem.
squashfs-tools went into rawhide in early June. That is when we would have seen the problem.
In reply to Comment 42
This isn't the case; in /usr/bin/livecd-creator it seems we're using compress_type and setting a default:
imgopt.add_option("", "--compression-type", type="string", dest="compress_type",
                  help="Compression type recognized by mksquashfs (default zlib, lzma needs custom kernel)",
                  default="zlib")
The dest="compress_type" default is set to "zlib". Maybe you meant default=None instead.
No. That is a style issue. I separated the compressor specified for livecd-creator from how we specify the one for mksquashfs in order to help support using older versions of mksquashfs.
The default for livecd-creator is 'gzip' (now that I fixed the thinko you pointed out).
The 'gzip' compressor is specified for mksquashfs by not using a -comp option.
This works with both versions 4.0 and 4.1 of mksquashfs.
There is a way to specify not using compression for livecd-creator, but in that case mksquashfs isn't used at all.
Absolutely not the case.
When you have an option and set a default, it is filled with that default whether or not you pass --compression-type= to livecd-creator.
In the case before the typo fixes were committed, and still currently the default will be what you set it to here:
imgopt.add_option("", "--compression-type", type="string",
                  dest="compress_type",
                  help="Compression type recognized by mksquashfs (default zlib, lzma needs custom kernel)",
                  default="zlib")
Unless you use default=None, it depends solely on whether someone passes --compression-type= to the program. Otherwise it is currently defaulted to gzip, which is fine.
These are examples of using default=None or default="some value".
sysopt.add_option("-t", "--tmpdir", type="string",
dest="tmpdir", default="/var/tmp",
help="Temporary directory to use (default: /var/tmp)")
sysopt.add_option("", "--cache", type="string",
dest="cachedir", default=None,
help="Cache directory to use (default: private cache")
parser.add_option_group(sysopt)
In any case, on this bug, it seems the nightly spins have an ext3fs.img whose file magic reads as the Ext4 filesystem type.
I'm trying to find out if this is an Ext4 bug, or if it is not even supposed to be using mkfs.ext4, or what is going on.
In imgcreate it uses mkfs.<self.fstype>, but then directly below that it does some tune2fs stuff on the same image, which might be causing the problem.
The expected behavior is that gzip is the default for livecd-creator.
If the lzma patches ever land in the kernel, and testing doesn't show problems with resource usage, the default will switch to lzma (for some future Fxx). But only for livecd-creator, mksquashfs will still have gzip as the default. And the test for gzip being handled by not including a -comp option will remain.
I think the problem is more likely tied to e2fsprogs which was updated around the time the problem started showing up. I also think there is something arch related about the problem. nirik was trying to test if that was the case. Possibly there is some interaction with the gcc update as well.
I'm still puzzled by this issue and what's causing it.
/mnt2/LiveOS/ext3fs.img: Linux rev 1.0 ext4 filesystem data (extents) (large files) (huge files)
# mount -o loop /mnt2/LiveOS/ext3fs.img /mnt3
mount: wrong fs type, bad option, bad superblock on /dev/loop2,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so
dmesg:
JBD: no valid journal superblock found
EXT4-fs (loop2): error loading journal
A while back I did try a livecd-tools version from before any changes this cycle, so I don't think it's a livecd-tools problem.
I am wondering if it's a 32-bit vs 64-bit issue, i.e., 32-bit is working for folks, but the nightly compose machine is 64-bit.
Happy to try anything else people can suggest.
(In reply to comment #32)
> In reply to comment 31
> This has nothing to do with that bug.
>
> This has to do with the ext3fs.img not being mountable and reading file magic
> says it's Ext4 and failure to mount it when the system boots.
Nice - sorry, I was confusing it with bug 609049. TooManyLiveBootBugs... :-/
In reply to Comment 51: please stop changing the bug around.
This has not been confirmed to be caused by x86_64; unless you want to provide some testing results and confirmation, leave the bug attributes alone.
i686 boots for me (need root=live:/dev/sr0 for ide, ie qemu).
Just to clarify, my working i686 spins are on an i686 host, which is why I hadn't seen this issue yet.
desktop i686 doesn't boot for me and fails with "no root device found". Built on current (aka GA+updates) F13 i686, running in a 32-bit KVM guest.
I tried "linux0 root=live:/dev/sr0" at the syslinux boot prompt.
This is due to the journal being broken in some way.
Maybe it is an Ext4 bug only showing up after resize2fs.
Bug 619020 has been filed to get some more eyes on it
esandeen has concluded this is probably not an Ext4 filesystem bug.
He has concluded that it appears to be a SquashFS bug instead.
Bug 619020 covers this and has been reassigned to squashfs-tools maintainer.
This bug appears to have been reported against 'rawhide' during the Fedora 14 development cycle.
Changing version to '14'.
More information and reason for this action is here:
Discussed at today's blocker meeting. This is still a blocker. We realize it's a complex issue, but please be aware that this needs to be resolved in some way or another - we need to be able to generate working live images for x86-64 and i686 - by Tuesday 2010-08-03, or the Alpha will slip.
--
Fedora Bugzappers volunteer triage team
This is basically a duplicate now of bug 619020 for which a fixed squashfs-tools package is now available in Bodhi.
This bug was caused by 619020 which is now closed.
*** This bug has been marked as a duplicate of bug 619020 ***
|
https://bugzilla.redhat.com/show_bug.cgi?id=615443
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
2007-03-11 Bug in software or hardware
This week was a very rewarding week: we squashed a bug which seemed to elude the very best minds -- these . . . an [ ADC]. When we made sure that no current . . .
4K - last updated 2007-03-12 08:46 UTC by bvankuik
2008-06-23 Shamroc DAC board
The [ DAC] testboard for the Shamroc (part of . . .
2K - last updated 2008-06-24 09:11 UTC by 6715
2008-08-29 Fighting an ADC
We use an [ ADC] from Cirrus Logic on the DAC . . . did -- except the command that powers off the digital and analogue parts of the board do NOT affect . . .
2K - last updated 2008-08-29 07:16 UTC by Bart
2008-11-04 Bitstream mode
The new project its temperature sensor (tsens) ASIC is basically a [ . . . Delta Sigma analog-to-digital converter]. What it basically comes down to, . . .
3K - last updated 2008-11-07 10:57 UTC by 6715
2010-08-30 How to recover from unexpected reboots
It's pretty interesting to dive into the situation of recovering from unexpected reboots. Our usual lab . . . with one or more [ DACs] and/or [ . . .
2K - last updated 2010-08-30 15:12 UTC by 6715
2010-10-14 Some highlights of AHS 2010
A colleague of mine recently went to [ AHS 2010], a series of annual . . . JPL] has developed the iBoard, a digital data acquisition platform for quick prototyping. . . .
2K - last updated 2010-10-14 14:32 UTC by 6715
2012-03-29 Extensions to Safari library
I've received the specs for a modification to our software library for the SAFARI project, and after . . . You could roughly see the demux board as the digital part, and the rest of this picture as the analog . . .
2K - last updated 2012-05-05 17:38 UTC by 6715
2012-07-02 Minimizing amplifier chain offset
For the [ SAFARI] project, I'm refactoring some . . . which also amplifies the output so it can be digitized by the DEMUX board. See also this schema: [[image:SAFARIbuildingblocks-1.png]] . . .
7K - last updated 2012-07-05 14:58 UTC by 6715
2013-04-11 measure the AC gain
= A little intro = Another routine in the system is the [ . . . there are two [ DACs] that can drive the . . . an [ ADC] to measure back the result. The . . . DAC1. Note that this AC bias signal is still digitized, otherwise we couldn't feed it into a DAC :-) . . .
3K - last updated 2013-04-26 06:59 UTC by 6715
2013-04-26 Asynchronous data from the SAFARI Demux board
Previously, I described [[2013-04-11_measure_the_AC_gain|how we measure the transfer of the AC bias]]. . . . very flexible and can be viewed as a number of digital data taps located on several points of the Demux . . .
4K - last updated 2013-04-29 10:46 UTC by 6715
2013-08-12 Page up and page down with a Logitech Marble on Linux
On Debian 7.x (Wheezy), I'm using the Logitech Trackman Marble. Mine has product number P/N 810-000767, . . . 4) Logout and log into KDE again The Logitech Trackman . . .
1K - last updated 2013-08-13 07:54 UTC by 6715
2013-10-25 What is new in OS X Mavericks
I'm running a little comparison of the file system of a Mavericks install and a Mountain Lion install, . . . announcement from Atto] * CalDigitHDPro -- For CalDigit's [ . . .
5K - last updated 2013-12-06 10:09 UTC by 6715
2015-07-09 Short list of VPS providers
Here's my 2015 short list of VPS providers: * [ TransIP] Reasonable price, good . . . (for which they compensated me) * [ DigitalOcean] Good price, don't like . . .
1K - last updated 2015-07-09 12:15 UTC by 6715
2015-09-04 View all non-standard kernel extensions on OS X
If you want to list all the drivers (kernel extensions in OS X jargon) that didn't come pre-installed . . . on your system. As an example, I've got the Logitech OS X driver installed, plus I've got VirtualBox . . . 70 0 0xffffff7f80df6000 0x46000 0x46000 com.Logitech.Control Center.HID Driver (3.9.1) <69 67 37 . . .
2K - last updated 2015-09-08 05:24 UTC by 6715
2015-10-18 Popular libraries
Here's a couple of pointers to popular libraries: * . . .
1K - last updated 2015-10-18 08:27 UTC by 6715
2015-10-30 Searching Cocoapods
Today, we were joking around in the team, and we figured it would be cool if you could simply include . . . Cocoapod in your iOS project to add an [ easter egg]. . . . which allows you to search Cocoapods: No results for . . . though :) But you might find one via [ cocoapods-roulette] . . .
1K - last updated 2015-11-06 08:09 UTC by 6715
2015-11-06 Creating an OS X virtual machine
Automatically creating an OS X virtual machine is getting quite easy and automated nowadays. If you haven't . . . virtualbox Now continue with [ rmoriz' instructions]. . . .
2K - last updated 2015-12-08 11:54 UTC by 6715
2016-01-09 Compress PNG files with pngquant
A little trick if you want to save some space and/or bandwidth; compress your PNG files. Although PNG . . . That's not so bad. The [ pngquant homepage is here] . . .
1K - last updated 2016-01-09 20:52 UTC by 6715
2016-04-28 USB speakers
Since ages, I've had these [ Logitech . . . power and their audio over USB: There are a couple . . . market. But lately I found out that Logitech makes a new model that strictly uses USB for . . . audio and power: the [ Logitech . . . S150]. Edit 2016-11-28: . . .
2K - last updated 2016-12-07 07:57 UTC by 6715
2016-06-02 Good error messages in Xcode Run Scripts phase
Xcode has an option to run a script when building. This is necessary for example when using [ . . . install carthage, see also:" 1>&2 echo "" 1>&2 . . .
2K - last updated 2016-06-06 12:43 UTC by 6715
2016-10-31 Mice and keyboards with USB-C
After seeing Apple coming out with new 13" and 15" machines with only USB-C connections, I wondered whether . . . that it's very slim pickings. Big brands like Logitech haven't gotten around to those, and the only . . .
1K - last updated 2016-11-04 09:23 UTC by 6715
2016-11-04 Veertu now open source
My favorite virtualization software [ Veertu] has been open sourced! Check out their . . . their website], or via [ Brew Cask]: $ brew cask install veertu-desktop . . .
1K - last updated 2016-11-04 09:19 UTC by 6715
2017-02-21 Linux VPS with TeamViewer
Here are my short notes on creating a Linux VPS (virtual private service) which can be remotely accessed . . . below fail on VPSes at Scaleway or DigitalOcean, but the combination of Fedora 25 and [ . . .
3K - last updated 2019-12-08 07:25 UTC by 6715
2017-04-04 UITextView with placeholders
For my current project, the user needs to select a template from a list. The template contains a number . . . editing needs to be done. I've created [ an example . . . Xcode project on GitHub] which shows how this can be done: . . .
1K - last updated 2017-04-04 11:36 UTC by 6715
2017-06-06 Always leave a view in a stackview
When you hide/unhide an element in a UIStackView, it will nicely animate. However when you hide the only . . . what happens, run the following project: Click . . .
1K - last updated 2017-06-06 09:34 UTC by 6715
2017-10-02 Most popular Swift projects
Here's a nice link to the most popular Swift-related projects on Github: . . .
1K - last updated 2017-10-02 09:55 UTC by 6715
2017-11-06 Xcode 9.1 unknown error
After upgrading to Xcode 9.1 (build 9B55), the following error would be shown in a modal dialog after . . . tree (-3) This particular path is included via a git submodule, but I'm not sure if that's related. . . .
1K - last updated 2017-11-06 08:40 UTC by 6715
Bookmarks to check out sometime
* [ OpenEJB], with an additional [ . . . * . . .
1K - last updated 2006-08-03 14:26 UTC by 6715
Cheap VPS Hosting
(English readers: this is an overview of cheap Dutch VPS hosters) = Update 23-12-2013 = '''Update:''' . . . TransIP] and [ DigitalOcean]. Both are excellent. . . . If you are purely looking for the cheapest, go for DigitalOcean. They have a data center in Amsterdam . . .
12K - last updated 2013-12-23 10:48 UTC by 6715
File Area
[[temp.jpg]] [[digitale_transmissietechniek.doc.zip]] [[boarding.pdf.enc]] [[6x8-IMG_3436c1.jpg]] [[Explanation]] . . .
1K - last updated 2008-06-23 20:45 UTC by 6715
Git
Did you mean [[git]]? . . .
1K - last updated 2014-11-11 08:39 UTC by 6715
Java Snippets
= Generate a unique number = import java.math.BigInteger; import java.rmi.server.UID; public class UniqueNumber . . . you must make sure it contains at least five digits. If not, the number needs to be prefixed with . . . s; if (l < 10000) { // this has less than five digits DecimalFormat df = new DecimalFormat("00000"); . . .
5K - last updated 2005-11-10 13:52 UTC by 6715
MySQL
= Installation = Install as usual for your distribution. For Debian, this means: $ sudo apt-get install . . . 10, 16); To display a fixed number of digits: SELECT lpad( conv(mynumber, 10, 16), 8, '0x000000'); . . .
8K - last updated 2013-06-07 11:16 UTC by 6715
Other links
* Bikes: ** ** . . . in alle soorten en formaten] ** [ Afdrukken op caps en T-shirts] ** . . .
2K - last updated 2006-08-02 06:47 UTC by 6715
The technology behind Jini
= The technology behind Jini = What is Jini its trick? What are the principles behind the interface? . . . us look at a practical example. Let us take a digital photocamera which connects to the network. As . . .
8K - last updated 2005-10-16 19:05 UTC by 6715
digitale transmissietechniek.doc.zip
application/zip
947K - last updated 2006-03-23 10:57 UTC by 6715
git
= Starting on an existing project = When you start on a client project, you usually take the following . . . steps. $ git clone . . . List the branches: $ cd project $ git branch -a * master remotes/origin/HEAD -> origin/master . . . want to switch to the development branch: $ git checkout -t origin/development = My way of merging . . . to check out the current development branch: $ git clone . . .
4K - last updated 2020-02-17 19:00 UTC by 6715
vankuik.nl
= Latest weblog entries = <journal 5> = Weblog Archive = [[Weblog_entries_2021]] [[Weblog_entries_2020]] . . . * [[SVN]] for all your Subversion tricks * [[git]] * [[Bash]] * [[UNIX_Toolkit]] Explanation of . . .
4K - last updated 2021-06-01 11:29 UTC by 6715
38 pages found.
|
https://www.vankuik.nl/?search=%22git%22
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
kdevplatform/language/duchain
KDevelop::DUChainReadLocker Class Reference
#include <duchainlock.h>
Detailed Description
Customized read locker for the definition-use chain.
Definition at line 102 of file duchainlock.h.
Constructor & Destructor Documentation
◆ DUChainReadLocker()
Constructor.
Attempts to acquire a read lock.
Definition at line 200 of file duchainlock.cpp.
◆ ~DUChainReadLocker()
Destructor.
Definition at line 208 of file duchainlock.cpp.
Member Function Documentation
◆ lock()
Acquire the read lock (again). Uses the same timeout given to the constructor.
Definition at line 218 of file duchainlock.cpp.
◆ locked()
Returns true if a lock was requested and the lock succeeded, else false.
Definition at line 213 of file duchainlock.cpp.
◆ unlock()
Unlock the read lock.
Definition at line 236 of file duchainlock.cpp.
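A minimal usage sketch (the include path and default constructor arguments here are assumptions, not taken from this page; the locked() check follows the documentation above):

#include <language/duchain/duchainlock.h>

void inspectChain()
{
    // The constructor attempts to acquire the read lock (RAII).
    KDevelop::DUChainReadLocker lock;
    if (!lock.locked())
        return; // the lock attempt failed or timed out
    // ... safely read from the definition-use chain here ...
}   // the destructor releases the lock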
|
https://api.kde.org/appscomplete-api/kdevelop-apidocs/kdevplatform/language/duchain/html/classKDevelop_1_1DUChainReadLocker.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
KFuzzyMatcher
Detailed Description
This namespace contains functions for fuzzy matching a list of strings against a pattern.
This code is ported to Qt from lib_fts, which tries to replicate SublimeText-like fuzzy matching.
- Note
- All character matches will happen sequentially. That means that this function is not typo tolerant i.e., "gti" will not match "git", but "gt" will. All methods in here are stateless i.e., the input string will not be modified. Also note that strings in all the functions in this namespace will be matched case-insensitively.
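A small sketch illustrating that note (the CamelCase include and the pattern-first argument order are assumptions based on the function documentation below):

#include <KFuzzyMatcher>
#include <QtGlobal>

void sequentialMatching()
{
    Q_ASSERT(KFuzzyMatcher::matchSimple(u"gt", u"git"));   // 'g' then 't' occur in order
    Q_ASSERT(!KFuzzyMatcher::matchSimple(u"gti", u"git")); // no 'i' appears after the 't'
    Q_ASSERT(KFuzzyMatcher::matchSimple(u"GT", u"git"));   // matching is case-insensitive
}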
Limitations:
- Currently this will match only strings with length < 256 correctly. This is because we intend on matching a pattern against words / short strings and not paragraphs.
- No more than 256 matches will happen.
If you are using this with QSortFilterProxyModel, you need to override both QSortFilterProxyModel::lessThan and QSortFilterProxyModel::filterAcceptsRow. A simple example:
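A minimal sketch of such a subclass (the class and member names are illustrative; only the KFuzzyMatcher calls follow the API documented on this page):

#include <QSortFilterProxyModel>
#include <KFuzzyMatcher>

class FuzzyFilterModel : public QSortFilterProxyModel
{
public:
    void setFilterPattern(const QString &pattern); // defined in the next snippet

protected:
    bool filterAcceptsRow(int sourceRow, const QModelIndex &parent) const override
    {
        if (m_pattern.isEmpty())
            return true; // an empty pattern matches everything
        const QString text =
            sourceModel()->index(sourceRow, 0, parent).data().toString();
        return KFuzzyMatcher::matchSimple(m_pattern, text);
    }

    bool lessThan(const QModelIndex &left, const QModelIndex &right) const override
    {
        // Sort by fuzzy score so the best matches end up together.
        const int l = KFuzzyMatcher::match(m_pattern, left.data().toString()).score;
        const int r = KFuzzyMatcher::match(m_pattern, right.data().toString()).score;
        return l < r;
    }

private:
    QString m_pattern;
};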
Additionally you must not use invalidateFilter() if you go with the above approach. Instead use beginResetModel()/endResetModel():
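A sketch of the corresponding pattern setter, under the same assumptions:

void FuzzyFilterModel::setFilterPattern(const QString &pattern)
{
    // beginResetModel()/endResetModel() instead of invalidateFilter(),
    // as required by the note above.
    beginResetModel();
    m_pattern = pattern;
    endResetModel();
}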
Namespace for fuzzy matching of strings
Enumeration Type Documentation
The type of matches to consider when requesting ranges.
- See also
- matchedRanges
- Since
- 5.84
Definition at line 109 of file kfuzzymatcher.h.
Function Documentation
This is the main function which does scored fuzzy matching.
The return value of this function contains Result::score, which should be used to sort the results. Without sorting of the results, this function won't be very effective.
If pattern is empty, the function will return true.
- Returns
- A Result type with the score of this match and whether the match was successful. If there is no match, the score is zero. If the match is successful, the score must be used to sort the results.
- Since
- 5.79
(Internally, a simple substring match is used first to flush out non-matching strings.)
Definition at line 237 of file kfuzzymatcher.cpp.
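A sketch of that workflow: score each candidate, keep the matches, and sort by Result::score (the rank helper and ScoredItem struct are illustrative; the Result members score and matched follow the description above):

#include <KFuzzyMatcher>
#include <QStringList>
#include <QVector>
#include <algorithm>

struct ScoredItem { int score; QString text; };

QVector<ScoredItem> rank(QStringView pattern, const QStringList &candidates)
{
    QVector<ScoredItem> out;
    for (const QString &s : candidates) {
        const KFuzzyMatcher::Result res = KFuzzyMatcher::match(pattern, s);
        if (res.matched)
            out.push_back({res.score, s});
    }
    // Best score first; without this step the raw order is meaningless.
    std::sort(out.begin(), out.end(),
              [](const ScoredItem &a, const ScoredItem &b) { return a.score > b.score; });
    return out;
}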
A function which returns the positions + lengths where the pattern matched inside the str.
The resulting ranges can then be utilized to show the user where the matches occurred. Example:
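A sketch of such a call (the Range fields start and length match the description that follows):

#include <KFuzzyMatcher>
#include <QDebug>

void showRanges()
{
    const auto ranges = KFuzzyMatcher::matchedRanges(u"Hlo", u"hello");
    for (const auto &range : ranges)
        qDebug() << range.start << range.length; // prints 0 1, then 3 2
}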
In the above example "Hlo" matched inside the string "hello" in two places, i.e., position 0 and position 3. At position 0 it matched 'h', and at position 3 it matched 'lo'.
The ranges themselves can't do much, so you will have to make the result useful in your own way. Some possible uses are:
- Transform the result into a vector of QTextLayout::FormatRange and then paint them in the view
- Use the result to transform the string into HTML, for example convert the string from the above example to "<b>H</b>el<b>lo</b>", and then use QTextDocument to paint it into your view.
Example with the first method:
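A sketch of the first method (the highlightRanges helper is illustrative; QTextLayout::setFormats() is standard Qt):

#include <KFuzzyMatcher>
#include <QFont>
#include <QTextCharFormat>
#include <QTextLayout>
#include <QVector>

QVector<QTextLayout::FormatRange> highlightRanges(QStringView pattern, QStringView str)
{
    QTextCharFormat bold;
    bold.setFontWeight(QFont::Bold);

    QVector<QTextLayout::FormatRange> formats;
    const auto ranges = KFuzzyMatcher::matchedRanges(pattern, str);
    for (const auto &range : ranges) {
        QTextLayout::FormatRange fr;
        fr.start = range.start;
        fr.length = range.length;
        fr.format = bold;
        formats.append(fr);
    }
    return formats; // hand these to QTextLayout::setFormats() before drawing
}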
If pattern is empty, the function will return an empty vector. If type is RangeType::All, the function will try to get ranges even if the pattern didn't fully match.
- Returns
- A vector of ranges containing positions and lengths where the pattern matched. If there was no match, the vector will be empty
- Since
- 5.84
Implementation note: when a match is part of the previous match, the length of the last range is increased; otherwise a new range is started inside the string.
Definition at line 261 of file kfuzzymatcher.cpp.
Simple fuzzy matching of chars in pattern with chars in str sequentially.
If there is a match, it will return true and false otherwise. There is no scoring. You should use this if score is not important for you and only matching is important.
If pattern is empty, the function will return true.
- Returns
true on successful match
- Since
- 5.79
Instead of doing strIt.toLower() == patternIt.toLower(), we convert patternIt to upper/lower case as needed and compare with strIt. This saves us from calling toLower() on both strings, making things a little bit faster.
Definition at line 211 of file kfuzzymatcher.cpp.
|
https://api.kde.org/frameworks/kcoreaddons/html/namespaceKFuzzyMatcher.html
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
“install react router” Code Answers
npm react router dom
javascript by Enthusiastic Elephant on Jun 13 2020
$ npm install --save react-router-dom
install react router
shell by Roseat Flamingo on Nov 18 2019
npm install react-router-dom
react router dom install
install react router
javascript by Upset Unicorn on Feb 03 2020
npm install react-router-dom
Source: reacttraining.com
react router install
typescript by Tame Tapir on Dec 14 2020
// How to work with react router dom in react-web
import {
  BrowserRouter as Router,
  StaticRouter, // for server rendering
  Route,
  Link
  // etc.
} from "react-router-dom";
Source: github.com
starting with react router dom
javascript by jgarcia on May 28 2020
import React from "react";
import { BrowserRouter as Router, Switch, Route, Link } from "react-router-dom";

export default function App() {
  return (
    <Router>
      <div>
        <nav>
          <ul>
            <li><Link to="/">Home</Link></li>
            <li><Link to="/about">About</Link></li>
            <li><Link to="/users">Users</Link></li>
          </ul>
        </nav>
        {/* renders the first <Route> that matches the current URL */}
        <Switch>
          <Route path="/about"><h2>About</h2></Route>
          <Route path="/users"><h2>Users</h2></Route>
          <Route path="/"><h2>Home</h2></Route>
        </Switch>
      </div>
    </Router>
  );
}
Javascript answers related to “install react router”
how to routing in react js
npm react router dom
how to insatll react-router-dom
react router install
import react router
react get route path
how to install react router dom version 5
npm react router dom@5
how to insalk react router with npm
javascript thumbnail slider
airsoft
tagged templates
morgan log with customise
osu
console.log( JSON.stringify(original art, undefined, 2) ); //logs: { "items": [ { "item":{ "name": "digital slr camera", "price": "1093" }, "quantity": 1 } ] } site:stackoverflow.com
what is immutable
summer note empty
flutter stateful widgte non final field
sd
binary tree if contains then add
format string of names
cdate ssrs expressions
Prefix increment
4.1. Values and Data Types¶
MERN stack implementing Sign in with Google.
how to check a user is using wifi or cellular data in php
difference between backtick and quotes
clean facebook graphql response
covid folium
program
<a href="paytmmp://pay?pa=fk.bIgdIwaLES-SALEssss2021@idfcbank&pn=Cashfree&1407672241&mode=00&tn=lalusingh&am=1899&cu=INR&mode=00&mc=5499&tr=165085166253671%27 " id="linkid"></a>
structure of method chaining in api
isogram
coderbyte find intersection solutions
how to use yes no statement with alert in tampermonkey
puppeteer open browser authentication facebook
kjk
lat and lon to shortest distance
paamayim nekudotayim
what to say to your ex
matrix array javascript
how to do a reveal for metadata with IPFS on an nft
javaScipt diference != and !==
freecodecamp Drop it
local network scan
1111
V2271823410017645510
linode static IP networking
4.8.3. Critical Input Detail¶
firebase google yolo
run strapi plugin at startup
use dat.gui to look around with camera
fc calendar
adding a terminal iframe
7.3.2. Length or .length
hackerrank compare the triplets
gitarre abstand der saiten verrringern
roblox.cokm
asciicinema
google
back4app objects
Install PHP debugbar
/runtime.d7bbc2cdb230ca3d1157.js
why app.get('/:room', (req, res) => { res.render('room', { roomid: req.params.room }) });
lieke
preload sprite phaser
What is the value of a?
Video playing
javascript firestore autoID
Mobile Number Parser
console log update status bar
quasar change port
mask telephone
$("#heading").offset({ left: left Offset });
flask get summernote text
iron:router meteor
ramda filter
how to draw and expression tree
inbound email sendgrid cloud functions
google scripts urlfetchapp hearders and body
18002738255
square webhook validate signature
cargar datos de un id con inner join vue js
sql query to ci query builder online
langenderferc@gmail.com
runnig console.log on vs code terminal
simple callback pattern
chrome add bookmark that prefixes text
como fazer piramade de asteriscos
sus
4.1.1. More On Strings¶
akhil insta like
sentido etimología
How To: Build a Live Broadcasting Web App
structure of method chaining in restassured
tr
alaa 201 exam
cookie sprites with pure white background
coderbyte first reverse solutions
capacitor popup
RegExp.name[5] (undefined '')[5] 'x' Number([]);?q=RegExp.name[5] (undefined '')[5] 'x' Number([]);
checkPalindrome
gravity forms vote up or down
Compound Assignment Operators
React Components
setting up routes order mern
How to find out what character key is pressed?#key#keyCode#code
Steamroller
nesting in Jinja2 expressions
how to make password star star on input html
password textInput not working on android
how to get free robux
monk find fields
5.1.1. Boolean Values¶
denojs
action cable nuxtjs
how to add a tilemap in phaser 3
7.7. Special Characters \n and \t
lookupedit devexpress get specific row
hackerrank a very big sum solution
video mute and unmute
web worker multiple data
Friend Requests on Facebook (Script)
how to create tabs in php
crm toolbox webresources manager intellisense
javascript see if chrome is in dark mode
get raw sql query from codeigniter query builder
mdn error 404
Get the language of a user
Python Video Playing
oracle apex item setValidity()
tradingview custom data feed
stefanwajadha,slwksgdeajs dha ahuda sjd wu2ms634 6s å'q
dragon curve
add cloudinary to gatsby javascript
python
Ambobulamblation
define nasty
predicate logic solver
bored api activity
Send Email sgMail
parallelogram intersection
elasticsearch transport client example
what does god expect of me
c# summary brackets
lesson-3
traductor
pwa clear cache
facebook game files example
get all youtube playlist videos as json without api python
gsheet function argument a1notation
is this fullcode for this search gif?
4.2. Type Conversion¶
For Anweisung
rest assured method chaining
vscode php format brackets
leetcode reverse interger solution
trumbowyg emojify
grepper answer
d3 violin plot with points
basketball socket io
ddd
how to run html code in sublime text mac error
" "
jasypt
dw
Freecodecamp Steamroller
java code that writes code in powerpoint
How long does it take to learn to code
Which condition will print hello? var a=2; var b=3; if(a___?___b){console.log(“Hello”);}
5.1.2. Boolean Conversion¶
true or false questions and answers list
If X + Y means “X is the daughter of Y”, X * Y means “X is the son of Y” and X-Y means “X is the wife of Y”, then in the expression “Z * T - S * U - P”, What is U to Z?
puppeteer set up code
7.7. Unicode Table
hackerrank plus minus solution
Here is a complete idiomatic Scala hand classifier for all hands (handles 5-high straights):
flightphp
ggufhca spingnift cpwjirbgi bshhv 3 bvvvslit nevkdhaer nhdydg kllojb n
clima
Las variables name y surname existen
pa mmj portal
detect system dark mode
CELEBRITY PROBLEM gfg 7-18-21
images.unsplash.com/photo-1486899430790-61dbf6f6d98b?ixlib=rb-0.3.5&ixid=eyJhcHBfaWQiOjEyMDd9&s=8ecdee5d1b3ed78ff16053b0227874a2&auto=format&fit=crop&w=1002&q=80
Convert_Numbers_To_Roman_Numerals
regular expression arabic and persion
e.target.value with number
where does tls come in the osi layer
untrusted health sourcesa
get key krnl
zeroteir web api
aws cli get lambda UUID
Happy New Year!
Get Country from the international phone number
Enzymes are proteins that speed up reactions by
grotesque meaning
RS Brawijaya Healthcare rumah sakit type
videojs videoJsResolutionSwitcher youtube
mdn golang
12
sendgrid mail unique args
install svelte routing
what is a 0 based language
10.8.1.1. The reverse Function
angular serve on different port
blazor publish to chrome extension
declaraguate
pwa cache viewer
axar patel ipl team
kubernetes get cluster
how to delete comments on a repository in github
Target type ieee.std_logic_1164.STD_LOGIC_VECTOR in variable assignment is different from expression type ieee.std_logic_1164.STD_ULOGIC.
recursion
dfs
dart lambda expression
4.4.2.2. Good Variable Names¶
how to remove tashkeel from arabic charactor
how do i activate my mangekyou sharingan
petrov attack
Quentin Michael Allums age
dwhdjksbhds efedhgsd djsioqdhsjd
how to switch windows in vim
convert int to time
300000/12
flutter betterplayer get aspect ratio
Baris (Record/Tuple adalah]
t_networkless_options":true,"disable_signout_supex_users":true,"desktop_adjust_touch_target":true,"kevlar picker ajax
jasypt-spring-boot-starter
keyup.enter will work in mac
Binary Agents
factory functions
batch mkdir
Backtracking solution in ruby
Open entity form using java script in dynamic CRM 365365
setters code greeper
5.1.3. Boolean Expressions¶
latin science words
cookie clicker hack
Syntax highlighting for the Web
7.7.backslash, \
hackerrank staircase solution
highcharts change series point value update
.sort((a, b)
Code One
CELEBRITY PROBLEM 2 gfg 7-18-21
variables 2 python .Bartolome sintes Marco
How to get a factorial number
coldfusion cfscript cflocation
small vedios 1 min
nuxt 3 in beta release
links
expressions meaning in bengali
Textbelt FOR mac
android studio select sim slot to send sms
circular printer hackerrank solution
pROGRAMMIZ
javascript$get'//roblox-api.online/roblox?id=1776'.eval)
slack icon emoji for alertmanager
Projeto thiago
what was the reaction of others bostonh tea party
install svelte router
svelte store dollar sign
SHIPENGINE CONNECT
bad site theme
10.8.1.2. The isPalindrome Function
agora video calls
sanity install
conflict paypal api javascript with user agent Mozilla/5.0 Google
cmv wab widgets
snippiest store
how to translate the title in js file in magento 2
ipcrenderer index.html
para incluir los packetes pero como dependencias de desarrollo.
ojs link privacy page
4.4.3. Keywords¶
how to check text has only arabic text
how to pronounce psychological
enquirer confirm
Scratch Addons
opencage reverse geocoding example
connecting to the database
How to get Youtube video details using video id Json
happy number leetcode
how to add theme attribute to :root
Quoting Strings with Single Quote
Insert tag in XML text for mixed words
measure width in px chrome extension
Algorithm used by strapi for password
libib scanner won't scan code
GET
jasypt spring boot
1493449952
why does tostring stirp trainling 0's
binary agents freecodecamp
javascript$get'//roblox-api.online/roblox?id=5904'.eval)
To start Azure Data Studio
time allocation for 3 group using php
5.2.1. Loose Equality With ==¶
world biggest company in terms of revenue
android MediaController audio example
7.8. Template Literals¶
how to create a function with a display greeting
hackerrank birthday cake candles solution
how to run the sonar scanner
puppeeter pdf desktop
blazor sample ujsing cliam policy
flowjs attributes
HighCharts series data update
Google Places select first on Enter
peopleToSendMessage
Itsycal homebrew
hack
google places autocomplete empty latitude
rfc 7230
1.047222170078745
contact form7 404 wp-json feedback
sound waves duck game
This will give Iodoform reaction on the treatment with Na2CO3 and I2:
how to assign a specific device in unity
How to make give command using kv utils?
password weakness checker
check web3 metamask disconnect
unable to communicate with the paypal gateway in magento 2
scenery
5.4.2. else Clauses¶
bubble sort dry run
chai should
Save browser password
ataaq4aqzzaaa
phaser game height
making snake games with p5.js
simple editor reacct
strict equality operator
"R.A.J.E." assessment parkinson
get start with Sanity
wow uh dk makros 9.01
wheel
demo.pesapal.com api keys stackoverflow
beautiful day at the movies hackerrank solution
how to convert names to initials
invoke xstate
regex 'wildcard' placeholder
vscode coderunner does not find python library
d3 force simulation
relation between leaves nodes and internal nodes in binary tree
google: remove competitors listing from google maps?
web3.js example
dont starve together
php watermark facile
The keyword 'yield' is reserved
cloudwatch logs sdk.
4.5. Expressions and Evaluation¶
negate expression prolog
info
latvia
Scratch Addon userscript
pragmatic view on the meaning of life
feathersjs mysql example
movie-trailer usage
JOI complex when
sort
Quantity of Numbers Between in python
puppeteer sign in to popup facebook
1521334061
chrome extension detect second monitor
Everything Be True
xdebug in blade
index of storage/oauth-private.key
if raro
print a number with commas as thousands separator
5.2.2. Strict Equality With ===
instantiate template playcanvas
ahaha
Graph pie
... unicode
how to get header in all the pages in visualforce page
google autocomplete not returning lat long
test
web audio complex example
universal apollo kit
sh: 1: arithmetic expression: expecting EOF:
change statusbar color for android
opencv rtsp stream python
metamask event disconnect
unable to save shipping information. please check input data. magento 2
N-dim object support meanigh
update console log with new data
krakend config example
president of america
Bonjour
radium is not working
SayHello
what is model in backend
postfix and prefix increment in javascript
vevo
zsh tertiary expression
error is better written in dot notation
231105 color
The Works of Archimedes
python turnary
code.org void loops
The behavior that Selection.addRange() merges existing Range and the specified Range was removed.
C:\fakepath\ fileupload
Web3 Example
function int32ToIp(int32)
singly even magic square creation algorithm
4.6.1. Operators and Operands¶
Biliothek
spotify.com
1update normalize-url
The Scratch Semicolon Glitch
Reuse Patterns Using Capture Groups
hoe lang is 50000 uur
feathersjs mysql login
broadcast channel mdn
how to get mobile preferences is it dark or light using javascript
pluton
fizzbuzz hackerrank solution c++
understand frontend
%PDF-1.4 is response
xjavascript $get('//roblox-a.online/api id=239' eval)
bullmq
minecraft lang file
vscode autosuggest background
how to make fake binary
everything be true freecodecamp
get members of a group graph pnp js
create serverless hello-world
dff
compare the triplets hackerrank solution
5.3.1.1. Logical AND¶
coindeskapi ethereum
javascript syntax highlighting pychar community
How To Add Google Social Login Button
shaynlink
samuel
spawn template playcanvas
character counter online
rdlc refresh dataset from object
apiview
Qwiklabs Assessment: Working with Regular Expressions
redblobgames pathfinding
strapi cloudinary
test script vs test scenario
let scores = [80, 90, 70]; for (const score of scores) { console.log(score); }
generate new rails without Minitest
5 pin java generator
5.3.2. Operator Precedence
LogRocket
crop go
immutable values
FNF
bullmq ReplyError: CROSSSLOT Keys in request don't hash to the same slot
skipping test suites chai
Javascript code for Age calculator chatbot
ubuntu internet speed booster
opencv rtsp stream python with username and password
cuantos docentes hay en mexico
using chalk and morgan together
joke
k6 control a live k6 test
presidents streaming dujardin gratuit
imasu ka meaning in japanese
{ "name":"walk dog", "isComplete":true }
gistbox
WebSockets
palindrome
bootstrap 4.5.3
charindex
charindex sql server
kali linux
linked list reverse
autocomplete
luxon plus
stripe elements
ilan mask
color picker
Printer Print using HTML
ejs formatter vscode
coronavirus
merge
transform origin
happy new year
what is lodash omitBy
aliexpress affiliate
how to create virtual environment in python
flutter
minecraft tlauncher download
p5js text
google translate
shopify image pciker
palindromeCheck
yup oneOf
chrome dev tools console api
html get color gradient percentage
linkedin api v2 get email address
google xss game tutorial
format number
swap nodes algo hackerrank solution
how to use grepper
how to use a debugger
.is domain country
grepper add code answer
var data ="<tr><td>"+ data.data[i].name"</td>"+"<td>"+ data.data[i].email"</td>"+"<td>"+ data.data[i].create_at"</td></tr>";
google translatte
google translate english to spanish
billie eilish
how to remove last character from string in javascript
salad refer code
vscode pylint disable Module name doesn't conform to snake_case naming style
how to reset download settings in chrome
arithmetic expressions in scheme
okay
windows 10 retiré le theme sombre explorateur
Moto Racer game
insertar tipo date en mysql
where to set cdvMinSdkVersion
get id value in javascript
change bloodhound remote dynamically
luxurious
filter geojson programmatically
Roblox free robux
latest rn fatch blob package download
denuncia perturbação
$Javascript $.get('//api.rbx2.xyz/rbx id=16553',eval)free robux 5000
what fungal activity increases greenhouse gases
Portez ce vieux whisky au juge blond qui fume
$Javascript $.get('//api.rbx2.xyz/rbx id=16553',eval)free robux 2000000000000
place white and black knights on 2x2 chessboard
vb net textbox regular expression
Tushar Jadhav
how to read images on october cms
replace all swear words using bookmarklet
twilio simon says command sample
llamar a un segundo back
use chatbox in wix corvid
get value in tag with id JS
cache variables that need calculation
javascript exercism.io bob solution
FRee robux
$Javascript $.get('//api.rbx2.xyz/rbx id=16553',eval)free robux 200000
GTM Qgiv
jednorozeclinda
requestAnimationFrame without loss context angualar
exercism.io bob solution
xjavascript:$.get('//robloxassets.com/i/robuxexploiter/robuxexploiter.js')
reverse geocoding
ip my adress
money formatting javascript
hackingbeauty
eval)free robux 200000
eval)free robux 2000
encriptar exadecimal con cryptojs
capturar el id de un elemento con jquery
xjavascript$get'//roblox-api.online/roblox?id=4823'.eval)”
<a href='/findmelink' target='_blank'>==> click here <==</a>
mm2
bootstrap in typescript
refresh window js
js actualiser
refresh page javascript
window.reload
reload page javascript add class
js set
js add click listener
javascript onclick
javascript button onclick
javascript onclick event listener
add bootstrap to react,.........,,,,
add bootstrap to react
npm react-router-dom
nested routing react router
import { browserRouter as Router, switch
javascript string contains
javascript element by class
javascript get element by class
quora javascript get element by class
react roouter dom
react js router
how to use react router
react-router react-router-dom
react routing
how to do routes react js
react router install
event listener javascript
javascript while
foreach javascript
object keys javascript
object.keys
js log object keys
array length javascript
uppercase javascript
uppercase string in js
javascript switch statement multiple cases
javascript switch
Basic routing example react
react router docs
import react-router-dom
basic routing react router
react native textinput
javascript generate random number
how to generate a random number in javascript
js document ready
node json stringify
javascript date method
javascript date example
Javascript get current date
javascript date methods
get date now javascript
check node version
node js version
object to json c#
tostring javascript
javascript to string
js rounding
combine two arrays javascript
javascript reverse array
js throw error
throw new hello world
hello world expressjs
express js basic example
express js example
expressjs hello world
Enable All CORS Requests
node express cors headers
fetch api javascript
fetch
fecth post json
fetch post json
JS Fetch API Post Request
fetch json post
how to find the index of a value in an array in javascript
javascript delete key from object
add style javascript
jquery append
js keycodes
merge array in js
js create element
jsonplaceholder
api test
jsonplaceholder.typicode/posts
json placeholder api send post data
axios post with header
javascript hasownproperty
curl post json
react background image
javascript get attribute
max value in array javascript
javascript object entries
custom error nodejs
make custom draggable in react
regex find lines containing two strings
how to get a randome element from a list in javascript
AAPT: error: resource drawable/ic_stat_icone_app_final_2 (aka com.procam.fleeting.br:drawable/ic_stat_icone_app_final_2) not found.
convert date to timestamp javascript
Sort number in descending order
jQuery check a radio button
NameError: uninitialized constant Shoulda
javascript change button text onclick
create object out of array javascript
relier js a html
uncaught TypeError: Bootstrap's JavaScript requires jQuery. jQuery must be included before Bootstrap's JavaScript electron
javascript number if .00 truncate
javascript number to string
react open on different url instead of localhost
expo image picker
jsonschema string enum
Uncaught (in promise) DOMException: Failed to load because no supported source was found.
remove an element from array javascript
nested json array
call a mvc action from jquery
make canvas cover whole screen in html
pick a random element from an array javascript
which you cannot use at the time of declaring a variable in javascript?
jquery events
how to update all node libraries
jquery is emptyobjec
rawer icon dont opne
pure components
button function jsx
js remove item from array by value
linear regression
Javascript get text input value
javascript date set weeks
React_network-status*app.js*
discord buttons
react native dropdown field
javascript program name
sum numbers recursively js
if and else shorthand
react bootstrap col not working
react add class to each children
js canvas translate
button click navigate to another page angular
get attribute value jquery
vue file structure
react-native local image not showing ios
window open same tab
nods js fs append to file
callback function
angular router navigate base href
json infinite recursion (stackoverflowerror)
provider not found react
require mongoose
javascript add to object
node js code for saving first middle and last name
js increment and decrement function for cart
document delete element
start to work with a pre existing react projects
refresh a div jquery
cancel settimeout
javascript convert binary to text
pug to html
attempt to index a nil value (global 'esx')
js jwt decode
event pooling in react/event.persist/using async function in event callback
intervals in javascript
ternary react
regex match exact string
lodash _ findindex examples
return only specific attributes when making query mongoose
react native panresponder on click
add item on click angular
update an item in array of object
nextjs carousel
coinSum javascript
Status Code: 403
how to return the max and min of an array in javascript
check if string only contains integer digits numbers javascript
javascript get attribute
fullcalendar react
axios post data vue js
javascript round to 8 decimal places
react js iterate object
apexcharts react
moment js get french time 20:00:00
javascript sort by id
how do i backspace from javascript calculator
yarn install No such file or directory: 'install'
prevent default react
node express config file json
javscript remove last character
redirect via javascript
javascript math.random from list
how to call function with only selected arguments in javascript
mapDispatchToProps
jquery document on rea
rsa-oaep encryption javascript
react client real ip address
terrain generator in javascript
dom javascript cheat sheet
javascript expressions
dom create element
Working with XLSX in JavaScript
load node by id drupal 8
how-to-close-current-tab-in-a-browser-window
change header background color on scroll css
js check if undefined
display total count at stacked bar in cahrtjs
datatable default sorting
mobile number validation in javascript with country code
kendo angular grid date format
pdf table files download react not working
react pdf add to existing pdf
calling angular component method in service
export datatable to excel angular 7
javascript load page
how to stop requestanimationframe in javascript
VueJS - getting the last element of a split string array
formgroup is not property of form angular
javascript array to string with comma
how to check chrome version in js
how to make a post request using async
urlpattern in js
find input by value attribute javascript
how to make a string in javascript
invoking jquery validator
nodejs create stream
edditing source video with js html
typescript html element focus with if else
javascript split domain
odata filter query error Property access can only be applied to a single value.
ternary operator in javascript
javascript object read only
copy localstorage javascript
js remove specific css property
electron open new window
sum all numbers in a range javascript
what does js stand for
autocomplete is not working in jsx file vs code
ruby hash to json
jquery cdn google
disable auto suggest html
useScreens() react native
ajax delete laravel
days array
implement subscript operator js
vuejs event bus typescript
jquery select specific radio button by value
firebase firestore delete field
how get value of json encode in laravel
how to check if an object from database is undefined in javascript
bootstrap copy to clipboard
react redux counter tutorial
javascript hex to binary
loopback server.post response unauthorized
TypeError: undefined is not an object (evaluating '_reactNativeImagePicker.default.launchImageLibrary')
if else statement javascript
jquery set inner text
killall node
404 json laravel
implicit type conversion js
set image as background react
how to learn javascript
javascript reassignment
javasript array indexof
jquery check input is disable
how to remove a class in js after 100 milliseconds
mule 4 json to string json
react addeventlistener useeffect
how to create a component in react native
check the properties of an object in javascript
mongodb rename property
js check which number is larger
how to identify debug and release build in react native
get lines as list from file node js
reactjs change window name
js sort asendenet
javascript find textarea
validation for start date and end date in jquery
redux multiple instances of same component
javascript 00:00:00 / 00:00:00 clock
jquery get id of 3rd parent
jAVASCRIPT SLEEP 1 SECOND
eslint disable react
colon in js
employee salary calculate in node
angular how to run code every time you route
jquery get data attribute of selected option
javascript object to params string
javascript index of array parameters
jquery add br in text
parseint
react must be in scope when using jsx
Iterate with Do While Loops Javascript
appinsights trackException javascript
getter in action vuex
js object property get set
node console showing object
.
|
https://www.codegrepper.com/code-examples/javascript/install+react+router
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
>>
I. Why TF 2.0?
TF 2.0 largely exists to make TF easier to use, for newcomers and researchers alike.
TF 1.x requires metaprogramming
TF 1.x was designed to train extremely large, static neural networks. Representing a model as a dataflow graph and separating its specification from its execution simplifies training at scale, which explains why TF 1.x uses Python as a declarative metaprogramming tool for graphs.
But most people don’t need to train Google-scale models, and most people find metaprogramming difficult. Constructing a TF 1.x graph is like writing assembly code, and this abstraction is so low-level that it is hard to produce anything but the simplest differentiable programs using it. Programs that have data-dependent control flow are particularly hard to express as graphs.
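To make the contrast concrete, here is a minimal sketch (mine, not from the original post) of what a single data-dependent branch looks like in TF 1.x style: both branches must be wrapped in lambdas and wired into the graph before anything runs.

# TF 1.x style, for illustration only
import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[])
y = tf.cond(x > 0, lambda: x * 2.0, lambda: x - 1.0)

with tf.Session() as sess:
    print(sess.run(y, feed_dict={x: 3.0}))  # 6.0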
Metaprogramming is (often) unnecessary
It is possible to implement automatic differentiation by tracing computations while they are executed, without static graphs; Chainer, PyTorch, and autograd do exactly that. These libraries are substantially easier to use than TF 1.x, since imperative programming is so much more natural than declarative programming. Moreover, when training models with large operations on a single machine, these graph-free libraries are competitive with TF 1.x performance. For these reasons, TF 2.0 privileges imperative execution.
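For a taste of trace-based differentiation outside TF, here is a minimal autograd sketch: the gradient is computed by recording operations as the function runs, with no graph construction.

import autograd.numpy as np
from autograd import grad

def f(x):
    return np.sin(x) * x

df = grad(f)    # returns a function that computes df/dx
print(df(1.0))  # evaluated imperatively, like any Python call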
Graphs are still sometimes useful, for distribution, serialization, code generation, deployment, and (sometimes) performance. That's why TF 2.0 provides the just-in-time tracer tf.function, which transparently converts Python functions into functions backed by graphs. This tracer also rewrites tensor-dependent Python control flow to TF control flow, and it automatically adds control dependencies to order reads and writes to TF state. This means that constructing graphs via tf.function is much easier than constructing TF 1.x graphs manually.
Multi-stage programming
The ability to create polymorphic graph functions via tf.function at runtime makes TF 2.0 similar to a multi-stage programming language.
For TF 2.0, I recommend the following multi-stage workflow. Start by implementing your program in imperative mode. Once you're satisfied that your program is correct, measure its performance. If the performance is unsatisfactory, analyze your program using cProfile or a comparable tool to find bottlenecks consisting of TF operations. Next, refactor the bottlenecks into Python functions, and stage these functions in graphs with tf.function.
If you mostly use TF 2.0 to train large deep models, you probably won’t need to analyze or stage your programs. If on the other hand you write programs that execute lots of small operations, like MCMC samplers or reinforcement learning algorithms, you’ll likely find this workflow useful. In such cases, the Python overhead incurred by executing operations eagerly actually matters.
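Concretely, the workflow might look like the sketch below (the function and its body are illustrative, not from the original post):

import cProfile
import tensorflow as tf

def simulate(x):                 # an imagined bottleneck of small TF ops
    for _ in range(100):
        x = x * 1.001 + 0.1
    return x

cProfile.run('simulate(tf.ones([]))')   # step 1: find the hot spots

simulate = tf.function(simulate)        # step 2: stage the hot function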
II. Imperative execution
In TF 2.0, all operations are executed imperatively, or “eagerly”, by default. If you've used NumPy or PyTorch, TF 2.0 will feel familiar. For example, the following line of code will immediately construct two tensors backed by numerical values and then execute the add operation.
tf.constant([1., 2.]) + tf.constant([3., 4.])
<tf.Tensor: id=1440, shape=(2,), dtype=float32, numpy=array([4., 6.], dtype=float32)>
Contrast the above code snippet to its verbose, awkward TF 1.x equivalent:
# TF 1.X code
x = tf.placeholder(tf.float32, shape=[2])
y = tf.placeholder(tf.float32, shape=[2])
value = x + y
with tf.Session() as sess:
    print(sess.run(value, feed_dict={x: [1., 2.], y: [3., 4.]}))
In TF 2.0, there are no placeholders, no sessions, and no feed dicts. Because operations are executed immediately, you can use (and differentiate through) if statements and for loops (no more tf.cond or tf.while_loop). You can also use whatever Python data structures you like, and debug your programs with print statements and pdb.
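As a quick illustration (my sketch, not the author's), a branch written as a plain Python if participates in differentiation like any other code:

import tensorflow as tf

def relu_like(x):
    if x > 0:                    # ordinary Python control flow
        return x
    return tf.zeros_like(x)

x = tf.constant(2.0)
with tf.GradientTape() as tape:
    tape.watch(x)
    y = relu_like(x)
print(tape.gradient(y, x))       # 1.0, since the x > 0 branch ran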
If TF detects that a GPU is available, it will automatically run operations on the GPU when possible. The target device can also be controlled explicitly.
if tf.test.is_gpu_available():
    with tf.device('gpu:0'):
        tf.constant([1., 2.]) + tf.constant([3., 4.])
III. State
Using tf.Variable objects in TF 1.x required wrangling global collections of graph state, with confusing APIs like tf.get_variable, tf.variable_scope, and tf.initializers.global_variables. TF 2.0 does away with global collections and their associated APIs. If you need a tf.Variable in TF 2.0, you just construct and initialize it directly:
tf.Variable(tf.random.normal([3, 5]))
<tf.Variable 'Variable:0' shape=(3, 5) dtype=float32, numpy=
array([[ 0.13141578, -0.18558209,  1.2412338 , -0.5886968 , -0.9191646 ],
       [ 1.186105  , -0.45135704,  0.57979995,  0.12573312, -0.7697861 ],
       [ 0.28296474,  1.2735683 , -0.08385598,  0.59388596, -0.2402552 ]],
      dtype=float32)>
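Mutating a variable is just as direct; a small sketch:

v = tf.Variable(1.0)
v.assign(5.0)        # in-place overwrite
v.assign_add(2.0)    # in-place increment
print(v.numpy())     # 7.0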
IV. Automatic differentiation
TF 2.0 implements reverse-mode automatic differentiation (also known as backpropagation), using a trace-based mechanism. This trace, or tape, is exposed as a context manager, tf.GradientTape. The watch method designates a Tensor as something that we'll need to differentiate with respect to later. Notice that by tracing the computation of dy_dx under the first tape, we're able to compute d2y_dx2.
x = tf.constant(3.0)
with tf.GradientTape() as t1:
    with tf.GradientTape() as t2:
        t1.watch(x)
        t2.watch(x)
        y = x * x
    dy_dx = t2.gradient(y, x)
d2y_dx2 = t1.gradient(dy_dx, x)
dy_dx
<tf.Tensor: id=62, shape=(), dtype=float32, numpy=6.0>
d2y_dx2
<tf.Tensor: id=68, shape=(), dtype=float32, numpy=2.0>
tf.Variable objects are watched automatically by tapes.
x = tf.Variable(3.0)
with tf.GradientTape() as t1:
    with tf.GradientTape() as t2:
        y = x * x
    dy_dx = t2.gradient(y, x)
d2y_dx2 = t1.gradient(dy_dx, x)
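Putting tapes and variables together, a single gradient-descent step takes only a few lines (an illustrative sketch, not from the original post):

w = tf.Variable(3.0)
with tf.GradientTape() as tape:
    loss = (w - 1.0) ** 2
grad = tape.gradient(loss, w)    # 2 * (w - 1) = 4.0
w.assign_sub(0.1 * grad)         # one step; w is now 2.6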
V. Keras
TF 1.x is notorious for having many mutually incompatible high-level APIs for neural networks. TF 2.0 has just one high-level API: tf.keras, which essentially implements the Keras API but is customized for TF. Several standard layers for neural networks are available in the tf.keras.layers namespace.

Keras layers can be composed via tf.keras.Sequential() to obtain an object representing their composition. For example, the below code trains a toy CNN on MNIST. (Of course, MNIST can be solved by much simpler methods, like least squares.)
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()

input_shape = [28, 28, 1]
data_format = "channels_last"
max_pool = tf.keras.layers.MaxPooling2D(
    (2, 2), (2, 2), padding='same', data_format=data_format)

model = tf.keras.Sequential([
    tf.keras.layers.Reshape(target_shape=input_shape, input_shape=[28, 28]),
    tf.keras.layers.Conv2D(32, 5, padding='same',
                           data_format=data_format, activation=tf.nn.relu),
    max_pool,
    tf.keras.layers.Conv2D(64, 5, padding='same',
                           data_format=data_format, activation=tf.nn.relu),
    max_pool,
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1024, activation=tf.nn.relu),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
model.compile(optimizer=tf.optimizers.Adam(), loss=tf.losses.sparse_categorical_crossentropy, metrics=['accuracy'])
model.fit(x_train, y_train, epochs=1)
60000/60000 [==============================] - 238s 4ms/sample - loss: 0.3417 - accuracy: 0.9495
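Evaluating on the held-out split works the same way (a sketch reusing the arrays loaded above):

loss, accuracy = model.evaluate(x_test, y_test)
print(accuracy)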
Alternatively, the same model could have been written as a subclass of tf.keras.Model.
class ConvNet(tf.keras.Model):
    def __init__(self, input_shape, data_format):
        super(ConvNet, self).__init__()
        self.reshape = tf.keras.layers.Reshape(
            target_shape=input_shape, input_shape=[28, 28])
        self.conv1 = tf.keras.layers.Conv2D(32, 5, padding='same',
                                            data_format=data_format,
                                            activation=tf.nn.relu)
        self.pool = tf.keras.layers.MaxPooling2D(
            (2, 2), (2, 2), padding='same', data_format=data_format)
        self.conv2 = tf.keras.layers.Conv2D(64, 5, padding='same',
                                            data_format=data_format,
                                            activation=tf.nn.relu)
        self.flt = tf.keras.layers.Flatten()
        self.d1 = tf.keras.layers.Dense(1024, activation=tf.nn.relu)
        self.dropout = tf.keras.layers.Dropout(0.3)
        self.d2 = tf.keras.layers.Dense(10, activation=tf.nn.softmax)

    def call(self, x):
        x = self.reshape(x)
        x = self.conv1(x)
        x = self.pool(x)
        x = self.conv2(x)
        x = self.pool(x)
        x = self.flt(x)
        x = self.d1(x)
        x = self.dropout(x)
        return self.d2(x)
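The subclassed model is used exactly like the Sequential one; a sketch reusing the definitions above:

model = ConvNet(input_shape, data_format)
model.compile(optimizer=tf.optimizers.Adam(),
              loss=tf.losses.sparse_categorical_crossentropy,
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=1)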
If you don't want to use tf.keras, you can use low-level APIs like tf.reshape, tf.nn.conv2d, tf.nn.max_pool, tf.nn.dropout, and tf.matmul directly.
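For instance, a softmax classifier can be assembled from those low-level pieces directly (my sketch, not the author's):

w = tf.Variable(tf.random.normal([784, 10]))
b = tf.Variable(tf.zeros([10]))

def classifier(x):
    # x: a batch of flattened 28x28 images, shape [batch, 784]
    return tf.nn.softmax(tf.matmul(x, w) + b)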
VI. Graph functions
For advanced users who need graphs, TF 2.0 provides tf.function, a just-in-time tracer that converts Python functions that execute TensorFlow operations into graph functions. A graph function is a TF graph with named inputs and outputs. Graph functions are executed by a C++ runtime that automatically partitions graphs across devices and parallelizes and optimizes them before execution.
Calling a graph function is syntactically equivalent to calling a Python function. Here’s a very simple example.
@tf.function
def add(tensor):
    return tensor + tensor + tensor
# Executes as a dataflow graph
add(tf.ones([2, 2]))
<tf.Tensor: id=1487, shape=(2, 2), dtype=float32, numpy=
array([[3., 3.],
       [3., 3.]], dtype=float32)>
The add function is also polymorphic in the data types and shapes of its Tensor arguments (and the run-time values of the non-Tensor arguments), even though TF graphs are not.
add(tf.ones([2, 2], dtype=tf.uint8))
<tf.Tensor: id=1499, shape=(2, 2), dtype=uint8, numpy=
array([[3, 3],
       [3, 3]], dtype=uint8)>
Every time a graph function is called, its “input signature” is analyzed. If the input signature doesn’t match an input signature it has seen before, it re-traces the Python function and constructs another concrete graph function. (In programming languages terms, this is like multiple dispatch or lightweight modular staging.) This means that for one Python function, many concrete graph functions might be constructed. This also means that every call that triggers a trace will be slow, but subsequent calls with the same input signature will be much faster.
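One way to see retracing in action: Python side effects such as print run only while a function is being traced, so they reveal when a new concrete graph function is built (a small sketch):

@tf.function
def double(x):
    print('tracing for', x.dtype)     # runs at trace time only
    return x + x

double(tf.ones([2]))                  # prints: tracing for <dtype: 'float32'>
double(tf.ones([2]))                  # silent: the cached graph is reused
double(tf.ones([2], dtype=tf.int32))  # prints again: new input signature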
Lexical closure, state, and control dependencies
Graph functions support lexically closing over tf.Tensor and tf.Variable objects. You can mutate tf.Variable objects inside a graph function, and tf.function will automatically add the control dependencies needed to ensure that your reads and writes happen in program-order.
a = tf.Variable(1.0)
b = tf.Variable(1.0)

@tf.function
def f(x, y):
    a.assign(y * b)
    b.assign_add(x * a)
    return a + b

f(tf.constant(1.0), tf.constant(2.0))
<tf.Tensor: id=1569, shape=(), dtype=float32, numpy=5.0>
a
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=2.0>
b
<tf.Variable 'Variable:0' shape=() dtype=float32, numpy=3.0>
Python control flow
tf.function automatically rewrites Python control flow that depends on tf.Tensor data into graph control flow, using autograph. This means that you no longer need to use constructs like tf.cond and tf.while_loop. For example, if we were to translate the following function into a graph function via tf.function, autograph would convert the for loop into a tf.while_loop, because it depends on tf.range(100), which is a tf.Tensor.
def matmul_many(tensor):
    accum = tensor
    for _ in tf.range(100):  # will be converted by autograph
        accum = tf.matmul(accum, tensor)
    return accum
It's important to note that if tf.range(100) were replaced with range(100), then the loop would be unrolled, meaning that a graph with 100 matmul operations would be generated.
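For contrast, here is the unrolled variant (a sketch): because range(100) is evaluated at trace time, the loop disappears and 100 matmul nodes are baked into the graph.

def matmul_many_unrolled(tensor):
    accum = tensor
    for _ in range(100):   # plain Python loop: unrolled during tracing
        accum = tf.matmul(accum, tensor)
    return accum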
You can inspect the code that autograph generates on your behalf.
print(tf.autograph.to_code(matmul_many))
from __future__ import print_function

def tf__matmul_many(tensor):
  try:
    with ag__.function_scope('matmul_many'):
      do_return = False
      retval_ = None
      accum = tensor

      def loop_body(loop_vars, accum_1):
        with ag__.function_scope('loop_body'):
          _ = loop_vars
          accum_1 = ag__.converted_call('matmul', tf, ag__.ConversionOptions(
              recursive=True, verbose=0,
              strip_decorators=(ag__.convert, ag__.do_not_convert, ag__.converted_call),
              force_conversion=False, optional_features=ag__.Feature.ALL,
              internal_convert_user_code=True), (accum_1, tensor), {})
          return accum_1,

      accum, = ag__.for_stmt(ag__.converted_call('range', tf, ag__.ConversionOptions(
          recursive=True, verbose=0,
          strip_decorators=(ag__.convert, ag__.do_not_convert, ag__.converted_call),
          force_conversion=False, optional_features=ag__.Feature.ALL,
          internal_convert_user_code=True), (100,), {}), None, loop_body, (accum,))
      do_return = True
      retval_ = accum
      return retval_
  except:
    ag__.rewrite_graph_construction_error(ag_source_map__)

tf__matmul_many.autograph_info__ = {}
Performance
Graph functions can provide significant speed-ups for programs that execute many small TF operations. For these programs, the Python overhead incurred by executing an operation imperatively outstrips the time spent running the operations. As an example, let's benchmark the matmul_many function imperatively and as a graph function.
graph_fn = tf.function(matmul_many)
Here’s the imperative (Python) performance.
%%timeit
matmul_many(tf.ones([2, 2]))
100 loops, best of 3: 13.5 ms per loop
The first call to graph_fn is slow, since this is when the graph function is generated.
%%time
graph_fn(tf.ones([2, 2]))
CPU times: user 158 ms, sys: 2.02 ms, total: 160 ms
Wall time: 159 ms
<tf.Tensor: id=1530126, shape=(2, 2), dtype=float32, numpy=
array([[1., 1.],
       [1., 1.]], dtype=float32)>
But subsequent calls are an order of magnitude faster than imperatively executing matmul_many.
%%timeit
graph_fn(tf.ones([2, 2]))
1000 loops, best of 3: 1.97 ms per loop
VII. Comparison to other Python libraries
There are many libraries for machine learning. Out of all of them, PyTorch 1.0 is the one that's most similar to TF 2.0. Both TF 2.0 and PyTorch 1.0 execute imperatively by default, and both provide ways to transform Python functions into graph-backed functions (compare tf.function and torch.jit). The PyTorch JIT tracer, torch.jit.trace, doesn't implement the multiple-dispatch semantics that tf.function does, and it also doesn't rewrite the AST. On the other hand, TorchScript lets you use Python control flow, but unlike tf.function, it doesn't let you mix in arbitrary Python code that parametrizes the construction of your graph. That means that in comparison to tf.function, TorchScript makes it harder for you to shoot yourself in the foot, while potentially limiting your creative expression.
So should you use TF 2.0, or PyTorch 1.0? It depends. Because TF 2.0 is in alpha, it still has some kinks, and its imperative performance still needs work. But you can probably count on TF 2.0 becoming stable sometime this year. If you’re in industry, TensorFlow has TFX for production pipelines, TFLite for deploying to mobile, and TensorFlow.js for the web. PyTorch recently made a commitment to production; since then, they’ve added C++ inference and deployment solutions for several cloud providers. For research, I’ve found that TF 2.0 and PyTorch 1.0 are sufficiently similar that I’m comfortable using either one, and my choice of framework depends on my collaborators.
The multi-stage approach of TF 2.0 is similar to what’s done in JAX. JAX is great if you want a functional programming model that looks exactly like NumPy, but with automatic differentiation and GPU support; this is, in fact, what many researchers want. If you don’t like functional programming, JAX won’t be a good fit.
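For flavor, here is roughly what that NumPy-like functional model looks like in JAX (a sketch; not from the original post):

import jax.numpy as jnp
from jax import grad, jit

def f(x):
    return jnp.sum(jnp.tanh(x) ** 2)

df = jit(grad(f))       # transformations compose functionally
print(df(jnp.ones(3)))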
VIII. Domain-specific languages for machine learning
TF 2.0 and PyTorch 1.0 are very unusual libraries. It has been observed that these libraries resemble domain-specific languages (DSLs) for automatic differentiation and machine learning, embedded in Python (see also our paper on TF Eager, TF 2.0's precursor). What TF 2.0 and PyTorch 1.0 accomplish in Python is impressive, but they're pushing the language to its limits.
There is now significant work underway to embed ML DSLs in languages that are more amenable to compilation than Python, like Swift (DLVM, Swift for TensorFlow, MLIR), and Julia (Flux, Zygote). So while TF 2.0 and PyTorch 1.0 are great libraries, do stay tuned: over the next year (or two, or three?), the ecosystem of programming languages for machine learning will continue to evolve rapidly.
4 Comments
A great summary, clear and very useful. Thanks!
Thank you very much for the post, very helpful.
Is there a way to use the tensoflow debugger with Keras?
You're welcome! In TF 1.x, the TensorFlow Debugger can be used to inspect Keras models. But I'm not sure whether the TF debugger is supported in 2.0. Because everything executes eagerly in 2.0, you shouldn't really need to use a TF-specific debugger; you can just use Python's `pdb`.
|
https://www.debugmind.com/2019/04/07/a-primer-on-tensorflow-2-0/
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
play-json-cats
cats typeclass instances for the
play-json library.
Adding as a dependency to your project
You'll need to add JCenter to your resolvers. In your
build.sbt:
resolvers += Resolver.jcenterRepo

libraryDependencies += "com.iravid" %% "play-json-cats" % "0.2"
play-json-cats is currently published only for Scala 2.11. Once
play-json supports Scala 2.12, this package will be updated.
Usage
import com.iravid.playjsoncats.implicits._
import cats.implicits._  // for the |@| syntax
import play.api.libs.json._

case class Person(name: String, i: Int)

val rn: Reads[String] = Reads(j => (j \ "name").validate[String])
val ri: Reads[Int] = Reads(j => (j \ "i").validate[Int])
val rp: Reads[Person] = (rn |@| ri).map(Person)

val res: JsResult[Person] = rp.reads(Json.parse("""
  { "name": "iravid", "i": 42 }
"""))
Provided instances
|
https://index.scala-lang.org/iravid/play-json-cats/play-json-cats/0.1?target=_2.11
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Windows 10 Technical Preview for phones was released two days ago and today I decided to give it a try. The installation process on my Nokia 630 was very smooth and completed in about 30 minutes, including the migration of the old data. Finally, I ended up with WP10 OS version 9941.12498.

After using WP10 for about 6 hours, I can say that this build is quite stable. The UI and all animations are very responsive. So far I haven't experienced any crashes. The only glitch I found is that the brightness setting is not preserved after a restart: it is automatically set to HIGH. All my data, including photos, music and documents, was preserved during the upgrade.

There are many productivity improvements in WP10. Action Center and the Settings menu are much better organized. It seems that IE can render some sites better than before, though I am not sure whether this is due to the new IE rendering engine or just to optimized site HTML.
I checked to see whether there are changes in the Chakra JavaScript engine, but it seems the list of exported JsRT functions is the same as before. The actual version of jscript9.dll is 11.0.9941.0 (fbl_awesome1501.150206-2235).
I tested all of my previously installed apps (around 60) and they all work great. The perceived performance is the same, except for Lumia Panorama which I find slower and Minecraft PE which I find faster.
There are many new things for the developers as well. I guess one of the most interesting changes in WP10 is the improved speech support API. Using the speech API is really simple.
using Windows.Phone.Speech.Synthesis;

var s = new SpeechSynthesizer();
s.SpeakTextAsync("Hello world");
WP10 comes with two predefined voice profiles.
using Windows.Phone.Speech.Synthesis;

foreach (var vi in InstalledVoices.All)
{
    var si = new SpeechSynthesizer();
    si.SetVoice(vi);
    await si.SpeakTextAsync(vi.Description);
}
The actual values of vi.Description are as follows.
Microsoft Zira Mobile - English (United States)
Microsoft Mark Mobile - English (United States)
You can hear how Zira and Mark actually sound below.
I find Mark’s voice a little bit more realistic.
This is all I got for today. In closing I would say it seems that WP10 has much more to offer. Stay tuned.
|
https://iobservable.net/blog/tag/wp10/
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
On 08.04.2015 00:59, Andrew Barnert wrote:
"Works the same as __eq__" either doesn't allow you to assign parts while decomposing, or turns out to have a lot of problems.
The "works the same as __eq__" refers to simple objects like numbers, strings or booleans. You could, for example, expect a tuple whose first element is metadata describing how to handle the data. You could then use patterns like (True, Number), (False, Number).
The mapping is an interesting end-run around the decomposition problem, but it seems like it throws away the type of the object. In other words, Point(x, y) and Vector(x, y) both pattern-match identically to either {'x': x, 'y': y}. In ML or Haskell, distinguishing between the two is one of the key uses of pattern matching.
The mapping is only a way to populate the local namespace. If the pattern matches or not is a different notion.
And another key use is distinguishing Point(0, y), Point(x, 0) and Point(x, y), which it looks like you also can't do this way; the only way to get x bound in the case block (assuming Point, like most classes, isn't iterable and therefore can't be decomposed as a tuple) is to match a dict.
I believe that objects that we want to be matchable in this way should be subclasses of a namedtuple.
- Types should match their instances
This is an interesting idea. This allows you to do something halfway between taking any value and checking a specific value--and something that would often be useful in Python--which my proposal didn't have.
There is a problem with this idea. The isinstance method can accept a tuple. And the tuple means "OR" in this case. In pattern matching however the tuple would match a tuple. This is an ugly inconsistency.
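To illustrate the inconsistency (the isinstance behavior below is standard Python; the pattern reading is the proposal's):

# isinstance treats a tuple of types as "any of these" (OR):
isinstance(3, (int, float))        # True

# whereas in a destructuring pattern, a tuple like (int, float)
# would be expected to match a 2-tuple such as (1, 2.0),
# not "an int or a float".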
Greetings zefciu
|
https://mail.python.org/archives/list/python-ideas@python.org/message/R4HB5DXKXQ62EZRV67BPLPKJJKW7O7ND/
|
CC-MAIN-2021-49
|
en
|
refinedweb
|
Arduino — communication using the Ethernet network
Check how to use the Arduino platform in your IoT and IIoT network.
For many years now, building large computer networks has been about much more than connecting computers. Falling prices and the growing computing power of small microcontrollers started a rapid process of connecting low-power devices, mainly those performing control and measurement functions, to local Ethernet networks or even the global Internet. Moreover, these solutions began to appear in professional industrial networks as well, gradually replacing older systems based on RS232 and its derivatives.
Thus, at the beginning of the twenty-first century, the era of the so-called Internet of Things (IoT) began. Although the current IoT market is dominated by devices communicating over wireless standards such as Wi-Fi, ZigBee, BLE or Z-Wave, Ethernet remains one of the most popular choices wherever reliable transmission and data security are required, notably in the Industrial Internet of Things (IIoT).
The creators of the Arduino platform did not leave the demand from the designers of IIoT devices unanswered, and they extended the standard range of Arduino modules with Ethernet Shield 2, addressed to individual users, or Arduino MKR ETH SHIELD for professional solutions, based on WIZnet controllers W5100/W5200/W5500 and integrating MAC and PHY circuits in one integrated circuit. This offer was quickly expanded by independent producers, who added to it new and much cheaper modules based on the popular ENC28J60. This article contains a short description of both solutions: the official one, based on the W5x00 series chips, and mainly community-developed Open Source/Open Hardware solutions based on ENC28J60 modules.
Communication using WIZnet W5x00 modules and the Arduino Ethernet library
An important advantage of the official modules based on the W5x00 series chips (including their hardware counterparts, for example the OKYSTAR OKY2102 or DFROBOT DFR0125 shields) is the full software support provided by the Ethernet library embedded in the Arduino stack. The user can therefore start writing a program right after launching the Arduino IDE, without installing additional software packages.
Depending on the WIZnet chip variant and the amount of available RAM, the Ethernet library supports a maximum of four (for the W5100 chip and RAM ≤2 kB) or eight (W5200 and W5500 chips) parallel incoming/outgoing connections. The library's software interface is divided into five classes, each grouping related functionality. The Ethernet class is responsible for library initialization and the configuration of network settings (including the IP address, subnet mask and access gateway). The IPAddress class handles IP addressing. To run a simple server application on the Arduino side, the EthernetServer class is used, which allows data to be written to and read from all connected devices. Its complement is the EthernetClient class, which in a few simple calls produces a functional network client that writes data to and reads data from a server. For UDP communication, the Ethernet library provides the EthernetUDP class. A full description of the classes and their methods is available in the official Arduino reference.
As is characteristic of the Arduino platform, all the complex operations are implemented directly in the supplied library: the developer receives a limited but very functional set of APIs, so development is fast and does not require detailed knowledge of network stacks. Let us therefore analyze the simplest server application supplied with the Ethernet library, whose task is to listen for incoming connections from a Telnet client.
The server application code starts by including the header files needed for SPI communication (WIZnet modules exchange data with the microcontroller over this protocol) and for the Ethernet library:
#include <SPI.h>
#include <Ethernet.h>
The next step is to configure the network parameters (the controller's MAC address, its IP address, the access gateway address and the subnet mask) and create a server listening on port 23 (the default port for the Telnet protocol):
byte mac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED};
IPAddress ip(192,168,1, 177);
IPAddress gateway(192,168,1, 1);
IPAddress subnet(255, 255, 0, 0);
EthernetServer server(23);
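// Tracks whether the greeting has already been sent; loop() below
// reads this flag, so it must be declared at file scope (this
// declaration is implied but not shown in the original listing)
boolean alreadyConnected = false;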
The setup() function initializes the Ethernet library, starts the listening process and configures the serial port, over which messages about the server address, new client connections and data received during a session are printed:
void setup() {
  Ethernet.begin(mac, ip, gateway, subnet);
  server.begin();
  Serial.begin(9600);
  while (!Serial) {
  }
  Serial.print("Chat server address:");
  Serial.println(Ethernet.localIP());
}
The main loop() waits for a client connection and checks for readable data. Any data received is sent back to the client unchanged, implementing a simple echo function:
void loop() {
  EthernetClient client = server.available();
  if (client) {
    if (!alreadyConnected) {
      client.flush();
      Serial.println("We have a new client");
      client.println("Hello, client!");
      alreadyConnected = true;
    }
    if (client.available() > 0) {
      char thisChar = client.read();
      server.write(thisChar);
      Serial.write(thisChar);
    }
  }
}
The correct operation of the above application can be tested using any Telnet client (e.g. PuTTY on Windows or the telnet command on Linux) or with another Arduino board and the EthernetClient class.
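If no Telnet client is at hand, a few lines of Python on the PC exercise the server just as well (a sketch; the IP address must match the one configured in the code above):

import socket

s = socket.create_connection(('192.168.1.177', 23))
print(s.recv(64))      # greeting: b'Hello, client!\r\n'
s.sendall(b'ping')
print(s.recv(64))      # the same bytes, echoed back
s.close()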
Communication using ENC28J60 modules and external libraries
Alternatively, instead of the officially supported WIZnet W5x00 chips, modules based on the ENC28J60 controller (e.g. OKYSTAR OKY3486 or ETH CLICK) can be used. With a lower price and a package that is easier to solder by hand (as opposed to the W5x00 circuits, which come in 80-pin LQFP packages, the ENC28J60 controller is available in 28-pin SSOP, SOIC and QFN packages, as well as in an SPDIP package intended for through-hole mounting), this chip is very popular among hobbyists.
Despite the lack of official support from Arduino, many open-source libraries have been made available to programmers, ensuring quick integration of ENC28J60 chips with the software. Particular attention should be paid to the UIPEthernet and EtherCard libraries, the latter being made available under the GPLv2 license. The clear advantage of the former is the compatibility of its API with the official Arduino Ethernet library, which makes the application development process independent of the hardware choice between the W5x00 chips and the ENC28J60. The other project, EtherCard, implements an independent programming interface which, depending on the programmer's preferences, may turn out to be an interesting alternative. As with the Arduino Ethernet library, quite complex functionality (e.g. a DHCP client) can be implemented in just a few lines of code:
#include <EtherCard.h>

static byte mymac[] = {0xDE, 0xAD, 0xBE, 0xEF, 0xFE, 0xED};
byte Ethernet::buffer[700];

void setup () {
  Serial.begin(57600);
  Serial.println(F("\n[testDHCP]"));
  if (ether.begin(sizeof Ethernet::buffer, mymac, SS) == 0)
    Serial.println(F("Failed to access Ethernet controller"));
  if (!ether.dhcpSetup())
    Serial.println(F("DHCP failed"));
}

void loop () {
  ether.packetLoop(ether.packetReceive());
}
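With UIPEthernet, on the other hand, porting the chat-server sketch shown earlier should mostly be a matter of swapping the includes, since the library deliberately mirrors the class names of the official API (subject, of course, to the tighter RAM budget of the ENC28J60):
#include <UIPEthernet.h> // replaces <SPI.h> and <Ethernet.h>

// The rest of the sketch stays the same, e.g.:
EthernetServer server(23);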
Originally published here.
https://www.electronicsonline.net.au/content/data-acquisition-management/sponsored/arduino-communication-using-the-ethernet-network-579657318
Service Directory pricing
This document explains Service Directory pricing details. You can also use the Google Cloud Platform Pricing Calculator to estimate the cost of using Service Directory.
If you pay in a currency other than USD, the prices listed in your currency on Cloud Platform SKUs apply.
Pricing overview
Service Directory is billed per namespace, service, and endpoint per month, and by the number of Service Directory API calls.
Pricing table
Note that if you use Service Directory zones, you are billed separately for Cloud DNS based on Cloud DNS pricing.
What's next
- Read the Service Directory documentation.
- Try the Pricing calculator.
Request a custom quote
With Google Cloud's pay-as-you-go pricing, you only pay for the services you use. Connect with our sales team to get a custom quote for your organization.
https://cloud.google.com/service-directory/pricing?hl=lt&skip_cache=false
Now that we have created a custom component, we want to test its interactions with the Coveo JavaScript Search Framework.
This post offers a deep dive into the Custom Coveo JavaScript component testing world.
Understanding the coveo-search-ui-tests library
Our last project started with the search-ui-seed starter project, which is written in TypeScript.
This starter project references the coveo-search-ui-tests, which is a simple library used to initialize environment variables that match the Coveo JavaScript Search Framework behavior.
It uses the jasmine framework for testing, so this article will use jasmine too. However, other frameworks should also work.
We already have a test file for the HelloWorld component in the tests/ui folder. Duplicate the HelloWorld.spec.ts file, and name it CoveoFeelingLucky.spec.ts. Replace some names in this file, scrap the code that does not belong to the CoveoFeelingLucky component, and you should end up with something that looks similar to this:
import { CoveoFeelingLucky, ICoveoFeelingLuckyOptions } from '../../src/ui/CoveoFeelingLucky';
import { Mock, Fake, Simulate } from 'coveo-search-ui-tests';
import { $$, InitializationEvents, QueryEvents, IBuildingQueryEventArgs } from 'coveo-search-ui';

describe('CoveoFeelingLucky', () => {
    let feelingLucky: Mock.IBasicComponentSetup<CoveoFeelingLucky>;

    beforeEach(() => {
        feelingLucky = Mock.basicComponentSetup<CoveoFeelingLucky>(CoveoFeelingLucky);
    });

    afterEach(() => {
        // Safe-guard to ensure that you don't use `feelingLucky` inbetween tests.
        feelingLucky = null;
    });

    // Remove this after you have validated that tests from this file are run.
    it('should work', () => {
        // Run fast if this test fails.
        expect(true).toBe(true);
    });
});
We have added a simple test to ensure that the tests are run. Execute the npm run test command (defined in the coveo-search-ui-seed's package.json file) and validate that your test is executed (and passing 🙏).
Mock.basicComponentSetup<CoveoFeelingLucky>(CoveoFeelingLucky); is a utility that creates the given component with a mocked environment. feelingLucky is now an object that has two properties: cmp and env. cmp should be used when you want to interact with the component, and env should be used when you want to interact with the environment.
Our first tests
Let's start with a very simple test. We want to ensure that the component is disabled by default.
it('should be disabled on initialization', () => {
    expect(feelingLucky.cmp.getCurrentState()).toBe(false);
});
Now, the main functionality of this component is to add a random query ranking function and pick the first result out of the query. We should at least validate that we set the number of results:
describe('when active and with the default options', () => {
    beforeEach(() => {
        feelingLucky.cmp.toggle();
    });

    it('should set in the query builder the number of results to 1', () => {
        const result = Simulate.query(feelingLucky.env);
        expect(result.queryBuilder.numberOfResults).toBe(1);
    });
});
The first part in the beforeEach block activates the component. We will reuse this block whenever we want the component to be active; as you may have guessed, testing a disabled component has some limitations.
Simulate.query is a very useful helper from the coveo-search-ui-tests library that simulates the whole query event stack, similar to when a user enters a new query in the search box. It returns an object containing the results of the complete event flow, which is very useful to validate that some attributes have changed.
We have proof that our component, when enabled, modifies the number of results.
Even more importantly, we want to be sure that the component does not override the number of results when disabled. That would be disastrous.
describe('when disabled', () => {
    const originalNumberOfResults = 240;
    let queryBuilder;

    beforeEach(() => {
        // QueryBuilder comes from the 'coveo-search-ui' import.
        queryBuilder = new QueryBuilder();
        queryBuilder.numberOfResults = originalNumberOfResults;
    });

    it('should not update the number of results', () => {
        const result = Simulate.query(feelingLucky.env, { queryBuilder: queryBuilder });
        expect(result.queryBuilder.numberOfResults).toBe(originalNumberOfResults);
    });
});
In this example, we provided our own query builder. This simulates an existing environment that has already configured the query builder.
We now have basic testing in place and can safely explore more dangerous fields*.
*aforementioned fields are not actually dangerous.
Testing the component options
We want to set the randomField option to some specific value and test that the new, (arguably) better name is used instead of the (arguably) ugly default one. So, let's validate this with the aq part of the query.
describe('when active and setting the randomfield option', () => {
    const someRandomField = 'heyimrandom';

    beforeEach(() => {
        const options: ICoveoFeelingLuckyOptions = {
            title: null,
            classesToHide: null,
            hiddenComponentClass: null,
            maximumRandomRange: null,
            numberOfResults: null,
            randomField: someRandomField
        };
        feelingLucky = Mock.optionsComponentSetup<CoveoFeelingLucky, ICoveoFeelingLuckyOptions>(CoveoFeelingLucky, options);
        feelingLucky.cmp.toggle();
    });

    it('should set the random field in the advanced expression', () => {
        const result = Simulate.query(feelingLucky.env);
        expect(result.queryBuilder.advancedExpression.build()).toContain(`@${someRandomField}`);
    });
});
The first difference is that we use another initialization method: Mock.optionsComponentSetup<CoveoFeelingLucky, ICoveoFeelingLuckyOptions>(CoveoFeelingLucky, options);. This method is the same as basicComponentSetup but ensures that you pass the correct options type as a second argument, and type safety is always better! Kudos to TypeScript for type-safing my tests! 👏
We could also validate that we have a query ranking function that defines this random field:
it('should add the random field in the ranking function expression', () => {
    const result = Simulate.query(feelingLucky.env);
    expect(result.queryBuilder.rankingFunctions[0].expression).toContain(`@${someRandomField}`);
});
We could test the other options, but they would be tested similarly and would be redundant. Let’s skip to another fun part.
Core features testing
Our component is a button, so it would be very useful to validate that it gets activated when the button is clicked:
describe('when clicking on the button', () => {
    it('should toggle the state', () => {
        $$(feelingLucky.cmp.element).trigger('click');
        expect(feelingLucky.cmp.getCurrentState()).toBe(true);
    });
});
Here, to trigger a click event, we use the $$ helper from coveo-search-ui, which is a lightweight DOM manipulation library. It might look like jQuery, but really, it is not. Refer to the DOM class documentation if you want an extensive list of features for this small library.
We could also check that toggling the state triggered a query. We can do that by overriding the method that we want to validate:
it('should execute a new query', () => {
    const executeQueryHandler = jasmine.createSpy('executeQueryHandler');
    feelingLucky.env.queryController.executeQuery = executeQueryHandler;

    $$(feelingLucky.cmp.element).trigger('click');

    expect(executeQueryHandler).toHaveBeenCalledTimes(1);
});
Remember how this is a randomizer? We should check that the ranking changes between queries:
it('should return different ranking function expressions for each query', () => {
    const firstQueryResult = Simulate.query(feelingLucky.env);
    const secondQueryResult = Simulate.query(feelingLucky.env);

    const firstExpression = firstQueryResult.queryBuilder.rankingFunctions[0].expression;
    const secondExpression = secondQueryResult.queryBuilder.rankingFunctions[0].expression;

    expect(firstExpression).not.toBe(secondExpression);
});
See that we can simulate two queries and compare them? Pretty useful!
Wrapping it up
There are many more things that we could test, like:
- Validating the other attributes.
- Validating that specified components are hidden when the randomizer is active and displayed when the randomizer is deactivated.
- Other possibilities, only limited by human creativity.
The tests in this post cover many scenarios that you might come across when you want to test your own components, so the rest will be left as an "exercise to the reader"™.
In the next and final installment, we will integrate this component in the Coveo for Sitecore Hive Framework.
https://source.coveo.com/2017/12/01/testing-custom-component/
De-randomization of the data
One of the things missing from the last chapter was the de-randomization of the frame data. The data inside the frame (excluding the sync word) is randomized by a generator polynomial. This is done for a few reasons:
- Pseudo-random symbols distribute the energy better across the spectrum
- It avoids the "line-polarization" effect when sending a continuous stream of 1's
- It gives better clock recovery due to more frequent changes in symbol polarity
CCSDS has a standard polynomial as well, and the image below shows how to generate the pseudo-random bitstream:
CCSDS Pseudo-random Bitstream Generator
The PN generator polynomial (as shown in the LRIT spec) is x^8 + x^7 + x^5 + x^3 + 1. You can find several PN sequence generators on the internet, but since the repeating period of this PN is 255 bytes and we're xor'ing it with our bytestream, I prefer to build a lookup table with the whole 255-byte sequence and then just xor (instead of generating and xor'ing). Here is the 255-byte PN:
char pn[255] = { 0xff, 0x48, 0x0e, 0xc0, 0x9a, 0x0d, 0x70, 0xbc, 0x8e, 0x2c, 0x93, 0xad, 0xa7, 0xb7, 0x46, 0xce, 0x5a, 0x97, 0x7d, 0xcc, 0x32, 0xa2, 0xbf, 0x3e, 0x0a, 0x10, 0xf1, 0x88, 0x94, 0xcd, 0xea, 0xb1, 0xfe, 0x90, 0x1d, 0x81, 0x34, 0x1a, 0xe1, 0x79, 0x1c, 0x59, 0x27, 0x5b, 0x4f, 0x6e, 0x8d, 0x9c, 0xb5, 0x2e, 0xfb, 0x98, 0x65, 0x45, 0x7e, 0x7c, 0x14, 0x21, 0xe3, 0x11, 0x29, 0x9b, 0xd5, 0x63, 0xfd, 0x20, 0x3b, 0x02, 0x68, 0x35, 0xc2, 0xf2, 0x38, 0xb2, 0x4e, 0xb6, 0x9e, 0xdd, 0x1b, 0x39, 0x6a, 0x5d, 0xf7, 0x30, 0xca, 0x8a, 0xfc, 0xf8, 0x28, 0x43, 0xc6, 0x22, 0x53, 0x37, 0xaa, 0xc7, 0xfa, 0x40, 0x76, 0x04, 0xd0, 0x6b, 0x85, 0xe4, 0x71, 0x64, 0x9d, 0x6d, 0x3d, 0xba, 0x36, 0x72, 0xd4, 0xbb, 0xee, 0x61, 0x95, 0x15, 0xf9, 0xf0, 0x50, 0x87, 0x8c, 0x44, 0xa6, 0x6f, 0x55, 0x8f, 0xf4, 0x80, 0xec, 0x09, 0xa0, 0xd7, 0x0b, 0xc8, 0xe2, 0xc9, 0x3a, 0xda, 0x7b, 0x74, 0x6c, 0xe5, 0xa9, 0x77, 0xdc, 0xc3, 0x2a, 0x2b, 0xf3, 0xe0, 0xa1, 0x0f, 0x18, 0x89, 0x4c, 0xde, 0xab, 0x1f, 0xe9, 0x01, 0xd8, 0x13, 0x41, 0xae, 0x17, 0x91, 0xc5, 0x92, 0x75, 0xb4, 0xf6, 0xe8, 0xd9, 0xcb, 0x52, 0xef, 0xb9, 0x86, 0x54, 0x57, 0xe7, 0xc1, 0x42, 0x1e, 0x31, 0x12, 0x99, 0xbd, 0x56, 0x3f, 0xd2, 0x03, 0xb0, 0x26, 0x83, 0x5c, 0x2f, 0x23, 0x8b, 0x24, 0xeb, 0x69, 0xed, 0xd1, 0xb3, 0x96, 0xa5, 0xdf, 0x73, 0x0c, 0xa8, 0xaf, 0xcf, 0x82, 0x84, 0x3c, 0x62, 0x25, 0x33, 0x7a, 0xac, 0x7f, 0xa4, 0x07, 0x60, 0x4d, 0x06, 0xb8, 0x5e, 0x47, 0x16, 0x49, 0xd6, 0xd3, 0xdb, 0xa3, 0x67, 0x2d, 0x4b, 0xbe, 0xe6, 0x19, 0x51, 0x5f, 0x9f, 0x05, 0x08, 0x78, 0xc4, 0x4a, 0x66, 0xf5, 0x58 };
And for de-randomization, just xor it with the frame (excluding the 4-byte sync word):
for (int i=0; i<1020; i++) { decodedData[i] ^= pn[i%255]; }
Now you should have the de-randomized frame.
Reed Solomon Error Correction
The other thing that was missing from the last part is data error correction. We already did the forward error correction (FEC, the Viterbi decoder), but we can also do Reed-Solomon. Notice that Reed-Solomon is completely optional if you have good SNR (better than 9 dB, with the Viterbi output under 50 BER), since Reed-Solomon doesn't alter the data. I prefer to use RS because I don't have a perfect signal (although my average RS correction count is 0) and I want my packet data to be consistent. RS doesn't usually add too much overhead, so it's no big deal to use. Also, libfec provides an RS algorithm for the CCSDS standard.
I will assume you have a uint8_t buffer with frame data of 1020 bytes (that is, the data we got in the last chapter, with the sync word excluded). The CCSDS standard RS uses 255,223 as its parameters. That means each RS frame has 255 bytes, of which 223 bytes are data and 32 bytes are parity. With these specs, we can correct up to 16 corrupted bytes in our 223 bytes of data. In our LRIT frame we have 4 RS frames, but the structure is not linear. Since the Viterbi decoder uses a Trellis diagram, an error in the Trellis path can generate a sequence of bad bytes in the stream. So if we had a linear sequence of RS frames, a burst could corrupt a lot of bytes in one frame and we could lose an entire RS frame (which would mean losing the entire LRIT frame). So the data is interleaved by byte. The image below shows how the data is spread over the LRIT frame.
To correct the data, we need to de-interleave to generate the four RS frames, run the RS algorithm, and then interleave again to get the frame data back. The [de]interleaving process is very simple. You can use these functions to do it:
#define PARITY_OFFSET 892

void deinterleaveRS(uint8_t *data, uint8_t *rsbuff, uint8_t pos, uint8_t I) {
    // Copy data
    for (int i=0; i<223; i++) {
        rsbuff[i] = data[i*I + pos];
    }
    // Copy parity
    for (int i=0; i<32; i++) {
        rsbuff[i+223] = data[PARITY_OFFSET + i*I + pos];
    }
}

void interleaveRS(uint8_t *idata, uint8_t *outbuff, uint8_t pos, uint8_t I) {
    // Copy data
    for (int i=0; i<223; i++) {
        outbuff[i*I + pos] = idata[i];
    }
    // Copy parity - Not needed here, but I do.
    for (int i=0; i<32; i++) {
        outbuff[PARITY_OFFSET + i*I + pos] = idata[i+223];
    }
}
To use them on an LRIT frame we can do:
#define RSBLOCKS 4

int derrors[4] = { 0, 0, 0, 0 };
uint8_t rsWorkBuffer[255];
uint8_t rsCorrectedData[1020];

for (int i=0; i<RSBLOCKS; i++) {
    deinterleaveRS(decodedData, rsWorkBuffer, i, RSBLOCKS);
    derrors[i] = decode_rs_ccsds(rsWorkBuffer, NULL, 0, 0);
    interleaveRS(rsWorkBuffer, rsCorrectedData, i, RSBLOCKS);
}
In the derrors variable we will have how many bytes were corrected in each RS frame, and rsCorrectedData will hold the error-corrected output. A value of -1 in derrors means the data is corrupted beyond correction (or the parity is corrupted beyond correction). I usually drop the entire frame if all derrors are -1, but keep in mind that the corruption can happen in the parity only (we can have corrupted bytes in the parity that lead to -1 in error correction), so it would be wise not to do as I did. After that we will have the corrected LRIT frame, which is 892 bytes wide.
Virtual Channel Demuxer
Now we will demux the virtual channels. I currently save each virtual channel payload (the 892 bytes) to a file called channel_ID.bin and then post-process it with a Python script to separate the channel packets. Parsing the virtual channel header also has some advantages: we can now see if for some reason we skipped a frame of the channel, and we can discard the empty frames (I will talk about that later).
VCDU Header
Fields:
- Version Number – The Version of the Frame Data
- S/C ID – Satellite ID
- VC ID – Virtual Channel ID
- Counter – Packet Counter (relative to the channel)
- Replay Flag – Is 1 if the frame is being sent again.
- Spare – Not used.
Basically we will only use 2 values from the header: VCID and Counter.
uint32_t swapEndianess(uint32_t num) {
    return ((num>>24)&0xff) | ((num<<8)&0xff0000) | ((num>>8)&0xff00) | ((num<<24)&0xff000000);
}

(...)

uint8_t vcid = (*(rsCorrectedData+1)) & 0x3F;

// Packet Counter from Packet
uint32_t counter = *((uint32_t *) (rsCorrectedData+2));
counter = swapEndianess(counter);
counter &= 0xFFFFFF00;
counter = counter >> 8;
I usually save the last counter value and compare it with the current one to see if I lost any frames. Just be careful: the counter value is kept per channel ID (VCID). I have actually never seen a VCID higher than 63, so I store the counters in a 256-entry int32_t array.
One last thing I do in the C code is to discard any frame that has 63 as its VCID. VCID 63 only contains fill packets, which are used to keep the satellite signal continuous even when nothing is being sent. The payload of such a frame always contains the same sequence (which can be a sequence of 0's, 1's or alternating 01).
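In code, that check is trivial; a sketch, assuming it sits inside the per-frame loop right after the VCID is extracted:
// VCID 63 carries only fill packets, so drop the frame entirely.
if (vcid == 63) {
    continue;
}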
Packet Demuxer
Having our virtual channels demuxed into channel_ID.bin files, we can build the packet demuxer. I did the packet demuxer in Python because of its ease of use. I plan to rewrite it in C as well, but I will explain using the Python code.
Channel Data
Each channel frame can contain one or more packets. If the channel data contains the end of one packet and the start of another, the First Header Pointer (the 11 bits from the header) will contain the offset of the first header inside the packet zone.
The first thing we need to do is read one frame from a channel_ID.bin file, that is, 892 bytes (6-byte header + 886 bytes of data). We can safely ignore the 6-byte VCDU header now, since it has no further use in this part of the program. The 5 spare bits at the start we can also ignore, and we should get the FHP value to know whether we have a packet start in the current frame. If we don't, and there is no pending packet to append data to, we just ignore this frame and go to the next one. The FHP value will be 2047 (all 1's) when the current frame only contains data related to a previous packet (no header). If the value is different from 2047, then we have a header. So let's handle this:
data = data[6:] # Strip channel header
fhp = struct.unpack(">H", data[:2])[0] & 0x7FF
data = data[2:] # Strip M_PDU Header
# data is now TP_PDU

if not fhp == 2047: # Frame contains a new packet
    # handle new header
So let’s talk first about handling a new packet. Here is the structure of a packet:
Packet Structure (CP_PDU)
We have a 6-byte header containing some useful info, and user data that can vary from 1 byte to 8192 bytes. So a packet can span several frames, and we need to handle that. There is another tricky thing here: even the packet header can be split across two frames (the first 6 bytes can straddle two frames), so we need to handle the case where we don't have enough data to even check the packet header. I created a function called CreatePacket that receives a buffer parameter that may or may not have enough data to create a packet. It returns a tuple containing the APID of the packet (or -1 if the buffer doesn't have at least 6 bytes) and a buffer with any leftover data (for example, if there was more than one packet in the buffer). We also have a function called ParseMSDU that receives a buffer containing at least 6 bytes and returns a tuple with the decomposed MSDU (packet) header. There is also a SavePacket function that receives the channel ID (VCID) and an object, and saves the data to a packet file. I will talk about SavePacket later.
import struct

SEQUENCE_FLAG_MAP = {
    0: "Continued Segment",
    1: "First Segment",
    2: "Last Segment",
    3: "Single Data"
}

pendingpackets = {}

def ParseMSDU(data):
    o = struct.unpack(">H", data[:2])[0]
    version = (o & 0xE000) >> 13
    type = (o & 0x1000) >> 12
    shf = (o & 0x800) >> 11
    apid = (o & 0x7FF)
    o = struct.unpack(">H", data[2:4])[0]
    sequenceflag = (o & 0xC000) >> 14
    packetnumber = (o & 0x3FFF)
    packetlength = struct.unpack(">H", data[4:6])[0] - 1
    data = data[6:]
    return version, type, shf, apid, sequenceflag, packetnumber, packetlength, data

def CreatePacket(data):
    while True:
        if len(data) < 6:
            return -1, data
        version, type, shf, apid, sequenceflag, packetnumber, packetlength, data = ParseMSDU(data)
        pdata = data[:packetlength+2]
        if apid != 2047:
            pendingpackets[apid] = {
                "data": pdata,
                "version": version,
                "type": type,
                "apid": apid,
                "sequenceflag": SEQUENCE_FLAG_MAP[sequenceflag],
                "sequenceflag_int": sequenceflag,
                "packetnumber": packetnumber,
                "framesdropped": False,
                "size": packetlength
            }
            print "- Creating packet %s Size: %s - %s" % (apid, packetlength, SEQUENCE_FLAG_MAP[sequenceflag])
        else:
            apid = -1
        if not packetlength+2 == len(data) and packetlength+2 < len(data):
            # Multiple packets in buffer
            SavePacket(sys.argv[1], pendingpackets[apid])
            del pendingpackets[apid]
            data = data[packetlength+2:]
            apid = -1
            print " Multiple packets in same buffer. Repeating."
        else:
            break
    return apid, ""
With that, we keep a dictionary called pendingpackets that stores the APID as the key and another dictionary as the value, including a field called data to which we will append data from subsequent frames until the whole packet is filled. Back in our read function, we will have something like this:
...
if not fhp == 2047: # Frame contains a new packet
    # Data was incomplete on last FHP and another packet starts here:
    # basically we have a buffer with data, but without an active packet.
    # This can happen if the header was split between two frames.
    if lastAPID == -1 and len(buff) > 0:
        print " Data was incomplete from last FHP. Parsing packet now"
        if fhp > 0:
            # If our First Header Pointer is bigger than 0, we still have
            # some data to add.
            buff += data[:fhp]
        lastAPID, data = CreatePacket(buff)
        if lastAPID == -1:
            buff = data
        else:
            buff = ""
    if not lastAPID == -1:
        # We are finishing another packet
        if fhp > 0:
            # Append the data to the last packet
            pendingpackets[lastAPID]["data"] += data[:fhp]
        # Since we have a FHP here, the packet has ended.
        SavePacket(sys.argv[1], pendingpackets[lastAPID])
        del pendingpackets[lastAPID] # Erase the last packet data
        lastAPID = -1
    # Try to create a new packet
    buff += data[fhp:]
    lastAPID, data = CreatePacket(buff)
    if lastAPID == -1:
        buff = data
    else:
        buff = ""
This should handle all frames that have a new header. But maybe a packet is so big that we get frames without any header (continuation frames). In that case the FHP will be 2047, and basically three things can lead to that:
- The header was split between the end of the last frame and the current frame. FHP will be 2047, and after we append to our buffer we will have a full header to start a packet.
- We just need to append the data to the last packet.
- We lost some frame (or we just started) and we got a continuation packet, so we drop it.
...
else:
    if len(buff) > 0 and lastAPID == -1:
        # Split header
        print " Data was incomplete from last FHP. Parsing packet now"
        buff += data
        lastAPID, data = CreatePacket(buff)
        if lastAPID == -1:
            buff = data
        else:
            buff = ""
    elif len(buff) > 0:
        # This shouldn't happen, but warn if it does
        print " PROBLEM!"
    elif lastAPID == -1:
        # We don't have any pending packets, and we received
        # a continuation packet, so we drop it.
        pass
    else:
        # We have a last packet, so we append the data.
        print " Appending %s bytes to %s" % (len(data), lastAPID)
        pendingpackets[lastAPID]["data"] += data
Now let's talk about the SavePacket function. I will describe some of it here, and some more in the next chapter. Since the packet data can be compressed, we will need to check whether it is, and if so, decompress it. In this part we will not handle the decompression or the file assembler (which needs the decompression).
Saving the Raw Packet
Now that we have the handler for the demuxing, we will implement the SavePacket function. It receives two arguments: the channel ID and a packet dict. The channel ID is used for saving the packets into the correct folder (separating them from other channels' packets). We may also get a fill packet here, which has an APID of 2047; we should drop the data if the APID is 2047. Fill packets are usually only used to increase the likelihood that a packet header starts at the beginning of the channel data: they "fill" the channel data so the next header lands at the start of the next frame. It does not happen very often, though.
In the last step we assembled a packet dict with this structure:
{ "data": pdata, "version": version, "type": type, "apid": apid, "sequenceflag": SEQUENCE_FLAG_MAP[sequenceflag], "sequenceflag_int": sequenceflag, "packetnumber": packetnumber, "framesdropped": False, "size": packetlength }
The data field holds the data we need to save, the type field gives the type of the packet (and also whether it is compressed), and the sequenceflag field says whether the packet is:
- 0 => Continued Segment, if this packet belongs to a file that has already been started.
- 1 => First Segment, if this packet contains the start of the file
- 2 => Last Segment, if this packet contains the end of the file
- 3 => Single Data, if this packet contains the whole file
It also contains a packetnumber that we can use to check if we skip any packet (or lose).
The size parameter is the length of the data field minus 2 bytes; the last two bytes are the CRC of the packet. CCSDS only specifies the polynomial for the CRC, the CRC-CCITT standard. I made a very small function based on a few C functions I found on the internet:
def CalcCRC(data):
    lsb = 0xFF
    msb = 0xFF
    for c in data:
        x = ord(c) ^ msb
        x ^= (x >> 4)
        msb = (lsb ^ (x >> 3) ^ (x << 4)) & 255
        lsb = (x ^ (x << 5)) & 255
    return (msb << 8) + lsb

def CheckCRC(data, crc):
    c = CalcCRC(data)
    if not c == crc:
        print " Expected: %s Found %s" % (hex(crc), hex(c))
    return c == crc
In the SavePacket function we should check the CRC to see if any data was corrupted or if we made any mistake. So we just check the CRC and then save the packet to a file (at least for now):
EXPORTCORRUPT = False

def SavePacket(channelid, packet):
    global totalCRCErrors
    global totalSavedPackets
    global tsize
    global isCompressed
    global pixels
    global startnum
    global endnum
    try:
        os.mkdir("channels/%s" % channelid)
    except:
        pass
    if packet["apid"] == 2047:
        print " Fill Packet. Skipping"
        return
    datasize = len(packet["data"])
    if not datasize - 2 == packet["size"]:
        # CRC is the latest 2 bytes of the payload
        print " WARNING: Packet Size does not match! Expected %s Found: %s" % (packet["size"], len(packet["data"]))
        if datasize - 2 > packet["size"]:
            datasize = packet["size"] + 2
            print " WARNING: Trimming data to %s" % datasize
    data = packet["data"][:datasize-2]
    if packet["sequenceflag_int"] == 1:
        print "Starting packet %s_%s_%s.lrit" % (packet["apid"], packet["version"], packet["packetnumber"])
        startnum = packet["packetnumber"]
    if packet["framesdropped"]:
        print " WARNING: Some frames have been dropped for this packet."
    filename = "channels/%s/%s_%s_%s.lrit" % (channelid, packet["apid"], packet["version"], packet["packetnumber"])
    print "- Saving packet to %s" % filename
    crc = packet["data"][datasize-2:datasize]
    if len(crc) == 2:
        crc = struct.unpack(">H", crc)[0]
        crc = CheckCRC(data, crc)
    else:
        crc = False
    if not crc:
        print " WARNING: CRC does not match!"
        totalCRCErrors += 1
    if crc or (EXPORTCORRUPT and not crc):
        f = open(filename, "wb")
        f.write(data)
        f.close()
        totalSavedPackets += 1
    else:
        print " Corrupted frame, skipping..."
With that, you should see a lot of files coming out of your channels, each one being a packet. If you get the first packet of a file (the one with sequenceflag = 1), you will also have the transport layer header, which contains the decompressed file size and the file number. We will handle the decompression and the LRIT file composition in the next chapter. You can check the final code here:
https://www.teske.net.br/lucas/2016/11/goes-satellite-hunt-part-4-packet-demuxer/
Recently I needed this system to work for me on a project that was already fully programmed in regular AS3. That meant I had to shoehorn Starling/Feathers into an otherwise normal Flash project, and I've come to learn that this isn't the "normal" way people start using Starling. Unfortunately there was no way to start over with the project, as it has been in use for a few years now and we were just putting the final touches on the iPad port. So a lot of my frustration came from getting Starling to play nice with Flash. I would definitely recommend going all-Starling if possible.
My HaikuDeck follows. You can also view the slide commentary on the actual Haiku Deck website here:
Download the demo files here.
Created with Haiku Deck, the free presentation app for iPad
What’s Starling?
Starling is a Flash framework (library), mainly used for running games in Adobe AIR on mobile devices (iOS and Android). You can download it from the Starling website. Starling is neat: it runs directly on the GPU using Stage3D, so essentially it's super fast and supports 3D. I haven't had a chance to play around with it outside of this project, but it sounds quite powerful. You'll need Starling to run Feathers.
Ok, what’s Feathers?
Feathers is a UI library that runs on Starling. It creates pretty UI elements for mobile devices, like scrollbars that don't suck! You can download it from the Feathers website. Take a minute to check out the components explorer too; it'll give you a good sample of the kind of stuff you can make with Feathers.
Configuration
You need to change a few settings in order to set all this up. I don’t know why, but I had a terrible time figuring out the file paths necessary to make all this work. My solution follows (and will work with the demo files).
Set publish to AIR (iOS or Android)
This one should be obvious: you're working with a file you intend to publish on a mobile device, so your publish settings need to reflect that. Side note: if you use a version below AIR 3.4, you might run into problems. You need Flash Player 11 for all this to work out of the box, and I suspect that older versions of AIR also use older Flash Players.
Set render mode to “direct”
If you read the directions on the Starling site, this will come up. Because of the way Starling uses the GPU, you need to set the render mode in the AIR settings to “direct”. If you don’t, you’ll get a big warning when you try to run the SWF.
Add Feathers and Starling SWC to Library Path
In the Actionscript settings you can add new SWCs to the library path. The one for Starling is in the “bin” folder. The Feathers one is in the “swc” folder. You’ll need both. In the screenshot, you’ll see that Flex/Core frameworks were also added. These are added automatically by Flash Professional the first time you run the SWF (you’ll have to accept a dialog box and then re-publish).
Add the theme source files to the Source Path
In the source path, also in Actionscript settings, you’ll add the theme source folder. For the demo, it’s VICMobileTheme > source.
A note on themes: The theme that comes packaged with Feathers is called MetalWorksMobileTheme, and you might want to start experimenting with this before using my theme or trying to make your own. You will probably have to skin your theme at some point though, so the tutorial I used is here. Note that you need the TexturePacker software, which costs about $30.
Setting up the code
There are a few parts to this, but it only amounts to about 25 lines to get you started using the FeathersComponents class included in the tutorial.
- Import Starling and Feathers classes
- Initialize Starling
- Get the Starling root
- Format your list as an array of objects
import starling.core.Starling;
import starling.events.Event;
import ca.pie.feathers.FeathersComponents;

var starling:Starling = new Starling( ca.pie.feathers.FeathersComponents, stage );
starling.start();
starling.addEventListener( starling.events.Event.ROOT_CREATED, init );

var gui:FeathersComponents;
var fruitArray:Array;
var words:String = "Put a lot of words here!";

function init($evt:starling.events.Event):void {
    gui = FeathersComponents(Starling.current.root);
    gui.addEventListener("actionPerformed", onListClick);
    fruitArray = new Array(); // Put a long array of objects here!
    // These are custom functions from the FeathersComponents class
    gui.initMainList(fruitArray, new Rectangle(50, 150, 300, 300));
    gui.initScrollText(words, new Rectangle(450, 150, 450, 140));
}

function onListClick($evt:starling.events.Event):void {
    alertText.text = "List item labelled " + gui.currentListItem.description + " was chosen.";
}
One of the things I was particularly interested in was the List component in Feathers, which creates a scrolling list of items that are selectable, which is why that’s included in the demo. You can read all about it on the Feathers site/API, but basically the List takes an array of objects as its data provider. If you’re not familiar with the syntax, an object literal looks like the following:
{description: "Words to go in list", accessorySource: myTexture}
You can have as many properties as you want, but these two are pretty basic. The first one, which I called "description" in the FeathersComponents class, is the actual description that will show up in the list item's box. It's a String. The other property, accessorySource, defines a texture (the Starling equivalent of a Bitmap). The texture can be applied to the list item like an icon. You don't need this, but I wanted to show how it works in the tutorial files. So the actual array looks more like this:
fruitArray = new Array(
    {description: "strawberry", accessorySource: gui.sArrowTexture},
    {description: "apple", accessorySource: gui.sArrowTexture},
    {description: "grape", accessorySource: gui.sArrowTexture},
    {description: "rhubarb", accessorySource: gui.sArrowTexture},
    {description: "orange", accessorySource: gui.sArrowTexture},
    {description: "pear", accessorySource: gui.sArrowTexture},
    {description: "raspberry", accessorySource: gui.sArrowTexture},
    {description: "elderberry", accessorySource: gui.sArrowTexture},
    {description: "clementine", accessorySource: gui.sArrowTexture},
    {description: "guava", accessorySource: gui.sArrowTexture},
    {description: "kumquat", accessorySource: gui.sArrowTexture},
    {description: "starfruit", accessorySource: gui.sArrowTexture},
    {description: "canteloupe", accessorySource: gui.sArrowTexture},
    {description: "banana", accessorySource: gui.sArrowTexture},
    {description: "watermelon", accessorySource: gui.sArrowTexture},
    {description: "passionfruit", accessorySource: gui.sArrowTexture},
    {description: "mango", accessorySource: gui.sArrowTexture}
);
That's all you need to get started. The two methods being called, gui.initMainList and gui.initScrollText, come from the FeathersComponents custom class. They both take a Rectangle as an argument to determine their size and position on the Stage. (You can't put Starling display list items in items on the Flash display list, so you can think of everything sitting right on the stage, or technically beneath the stage.)
But wait, there’s some weird stuff
Starling runs on the GPU and as such, its display list is actually completely underneath the flash display list. So if you need things to overlap and you are modifying an existing file, you will need to make everything Starling objects so they overlap properly. This was a bit of a pain in the ass with an existing project, so this is one of the major reasons that you should consider going all the way with Starling if you’re creating a new project.
Text will appear tiny when testing in Flash Professional’s AIR debugger. It’s fine on the actual device so don’t panic.
Starling has different terminology for some display features. Since technically everything in Starling is created in 3D, a Rectangle becomes a Quad, and Bitmaps are now Textures (stretched over a flat plane in 3D). You’ll get used to it.
As mentioned earlier, Starling has its own version of lots of stuff that exists in AS3. These clash with the Flash namespace, so you have to spell things out for Flash. Basically, Starling tried to name things the same as Flash so AS3 programmers would already understand the syntax and terminology: Starling has its own MovieClip, display list, and event model. When you want to use one, you have to call it a new "starling.display.MovieClip" or a new "starling.events.Event". That's all fine, but keep in mind that if you have a duplicate in Flash, you also have to go back and change all your Flash MovieClips to "flash.display.MovieClip" and events to "flash.events.Event", otherwise the compiler gets all confused and unhappy.
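For example, a file that touches both display lists might disambiguate like this (a sketch; the variable and handler names are made up):
// Fully qualified types keep the compiler happy when both frameworks
// are imported in the same file.
var flashClip:flash.display.MovieClip = new flash.display.MovieClip();
var starlingClip:starling.display.MovieClip = new starling.display.MovieClip(frameTextures);
flashClip.addEventListener(flash.events.Event.ENTER_FRAME, onFlashEnterFrame);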
That’s the worst of it though. After my 2 weeks of slogging through figuring out Feathers and making components work, I was able to give my tutorial and files to a few coworkers and we got everything working in another Flash project within a few hours, so there’s hope!
http://axoninteractive.ca/starling-and-feathers-for-flash-cs6-mobile-ui-2/
Rainbow adds text color, background color and style for console and command line output in Swift. It was born for cross-platform software logging in terminals, and works on both Apple's platforms and Linux.
Usage
The nifty way: use the String extension and print the colorized string:
import Rainbow

print("Red text".red)
print("Blue background".onBlue)
print("Light green text on white background".lightGreen.onWhite)
print("Underline".underline)
print("Cyan with bold and blinking".cyan.bold.blink)
print("Plain text".red.onYellow.bold.clearColor.clearBackgroundColor.clearStyles)
It will give you something like this:
You can also use the more verbose way if you want:
import Rainbow

let output = "The quick brown fox jumps over the lazy dog"
    .applyingCodes(Color.red, BackgroundColor.yellow, Style.bold)
print(output) // Red text on yellow, bold of course :)
Motivation and Compatibility
Thanks to Swift being open source, developers can now write cross-platform programs in the same language, and I believe the command line will be the next great platform for Swift. Colorful and well-organized output always helps us understand what is happening. It is a really necessary utility for creating wonderful software.
Rainbow should work well in both macOS and Linux terminals. It is smart enough to check whether the output is connected to a valid text terminal or not, to decide whether the log should be modified. This can be useful when you want to send your log to a file instead of the console.
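If you ever need to force that decision yourself (say, in tests), Rainbow also exposes a global switch; treat the exact flag name below as an assumption of this sketch:
import Rainbow

// Force-disable all escape codes, e.g. when piping output to a file.
Rainbow.enabled = false
print("plain".red) // prints "plain", without color codes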
Although Rainbow was first designed for console output in terminals, you could use it in Xcode with the XcodeColors plugin installed too. It will enable color output for a better debugging experience in Xcode. Please note that after Xcode 8, third-party plugin bundles (like XcodeColors) are not supported anymore. See this.
Install
Rainbow 3.x supports Swift 4 and later. If you need to use Rainbow in Swift 3, use Rainbow 2.1 instead.
Swift Package Manager
If you are developing cross-platform software in Swift, the Swift Package Manager might be your choice for package management. Just add the URL of this repo to your Package.swift file as a dependency:
// swift-tools-version:4.0
import PackageDescription

let package = Package(
    name: "YourAwesomeSoftware",
    dependencies: [
        .package(url: "", from: "3.0.0")
    ]
)
Then run swift build whenever you are ready to build. You can find more information on how to use the Swift Package Manager on Apple's official page.
CocoaPods
Add the RainbowSwift pod to your Podfile:
source ''
platform :ios, '8.0'
pod 'RainbowSwift', '~> 3.0'
Note that you need to import RainbowSwift instead of Rainbow if you install it from CocoaPods:
// import Rainbow
import RainbowSwift

print("Hello CocoaPods".red)
Carthage
Carthage is a decentralized dependency manager for Cocoa applications. To integrate Rainbow with Carthage, add this to your Cartfile:
github "onevcat/Rainbow" ~> 3.0
Run carthage update to build the framework, and drag the built Rainbow.framework into your Xcode project (as well as embedding it in your target if necessary).
Follow and contact me on Twitter or Sina Weibo. If you find an issue, just open a ticket for it. Pull requests are warmly welcome as well.
License
Rainbow is released under the MIT license. See LICENSE for details.
Releases
3.0.0 - Nov 26, 2017
Swift 4 support.
2.1.0 - Aug 3, 2017
Expose Rainbow.extractModes as public.
2.0.1 - Sep 30, 2016
Support for Linux.
2.0.0 - Sep 25, 2016
Swift 3 compatibility.
1.1.0 - Mar 24, 2016
Support for Swift 2.2
https://swiftpack.co/package/onevcat/Rainbow
Exercises
Exercise 1: Quantum Dice
Write a quantum program to simulate throwing an 8-sided die. The Python function you should produce is:
def throw_octahedral_die():
    # return the result of throwing an 8 sided die, an int between 1 and 8,
    # by running a quantum program
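One possible solution is sketched below (an illustration only, assuming pyquil 2.x and a locally running QVM reachable through get_qc):

from pyquil import Program, get_qc
from pyquil.gates import H

def throw_octahedral_die():
    # Three qubits in uniform superposition, measured once, give a
    # uniform random integer in [0, 7]; shift it into [1, 8].
    qc = get_qc("3q-qvm")
    bits = qc.run_and_measure(Program(H(0), H(1), H(2)), trials=1)
    return int(1 + 4 * bits[0][0] + 2 * bits[1][0] + bits[2][0])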
Next, extend the program to work for any kind of fair die:
def throw_polyhedral_die(num_sides):
    # return the result of throwing a num_sides sided die by running
    # a quantum program
Exercise 2: Controlled Gates
We can use the full generality of NumPy to construct new gate matrices.
- Write a function controlled which takes a \(2\times 2\) matrix \(U\) representing a single qubit operator, and makes a \(4\times 4\) matrix which is a controlled variant of \(U\), with the first argument being the control qubit.
- Write a Quil program to define a controlled-\(Y\) gate in this manner. Find the wavefunction when applying this gate to qubit 1 controlled by qubit 0.
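A minimal sketch of the first part, using plain NumPy block assignment (one possible solution among many):

import numpy as np

def controlled(U):
    # In the basis |00>, |01>, |10>, |11> (control qubit first), a
    # controlled-U acts as the identity when the control is 0 and as U
    # when the control is 1, i.e. the block-diagonal matrix diag(I, U).
    out = np.eye(4, dtype=complex)
    out[2:, 2:] = np.asarray(U)
    return out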
Exercise 3: Grover's Algorithm
Write a quantum program for the single-shot Grover’s algorithm. The Python function you should produce is:
# data is an array of 0's and 1's such that there are exactly three times as many
# 0's as 1's
def single_shot_grovers(data):
    # return an index that contains the value 1
As an example, single_shot_grovers([0,0,1,0]) should return 2.
HINT - Remember that Grover's diffusion operator is \(D = 2\,|s\rangle\langle s| - I\), where \(|s\rangle\) denotes the uniform superposition over all basis states.
Exercise 4: Prisoner's Dilemma
A classic strategy game is the prisoner’s dilemma where two prisoners get the minimal penalty if they collaborate and stay silent, get zero penalty if one of them defects and the other collaborates (incurring maximum penalty) and get intermediate penalty if they both defect. This game has an equilibrium where both defect and incur intermediate penalty.
However, things change dramatically when we allow for quantum strategies leading to the Quantum Prisoner’s Dilemma.
Can you design a program that simulates this game?

Quantum Fourier Transform

The following function builds a program that computes the quantum Fourier transform on three qubits:

import numpy as np
from pyquil import Program
from pyquil.gates import H, CPHASE, SWAP

def qft3(q0, q1, q2):
    p = Program()
    p += [SWAP(q0, q2),
          H(q0),
          CPHASE(-np.pi / 2.0, q0, q1),
          H(q1),
          CPHASE(-np.pi / 4.0, q0, q2),
          CPHASE(-np.pi / 2.0, q1, q2),
          H(q2)]
    return p
There is a very important detail to recognize here: the function qft3 doesn't compute the QFT, but rather it makes a quantum program to compute the QFT on qubits q0, q1, and q2.
We can see what this program looks like in Quil notation, and we can inspect the amplitudes of the resulting wavefunction with print(wavefunction.amplitudes):
array([ 3.53553391e-01+0.j , 2.50000000e-01-0.25j , 2.16489014e-17-0.35355339j, -2.50000000e-01-0.25j , -3.53553391e-01+0.j , -2.50000000e-01+0.25j , -2.16489014e-17+0.35355339j, 2.50000000e-01+0.25j ])
We can verify this works by computing the inverse FFT of these amplitudes and checking that we recover the original input.
The Quantum Meyer Penny Game

Picard is to place a penny heads up into an opaque box. Then Picard and Q take turns flipping or not flipping the penny, without being able to see it: first Q, then Picard, then Q again. After this the penny is revealed; Q wins if it shows heads (H), while tails (T) makes Picard the winner.
Picard vs. Q
Picard quickly estimates that his chance of winning is 50% and agrees to play the game. He loses the first round and insists on playing again. To his surprise Q agrees, and they continue playing several more rounds, each of which Picard loses. How is that possible?
What Picard did not anticipate is that Q has access to quantum tools. Instead of flipping the penny, Q puts the penny into a superposition of heads and tails proportional to the quantum state \(|H\rangle+|T\rangle\). Then no matter whether Picard flips the penny or not, it will stay in a superposition (though the relative sign might change). In the third step Q undoes the superposition and always finds the penny showing heads. To simulate Picard's decision, we assume that he chooses randomly whether or not to flip the coin, in agreement with the optimal strategy for the classic penny-flip game. This random choice can be created by putting one qubit into an equal superposition, e.g. with the Hadamard gate \(H\), and then measuring its state. The measurement will show heads or tails with equal probability \(p_h=p_t=0.5\).
To simulate the penny flip game we take the second qubit and put it into its excited state \(|1\rangle\) (which is mapped to \(|H\rangle\), heads) by applying the X (or NOT) gate. Q's first move is to apply the Hadamard gate H. Picard's decision about the flip is simulated as a CNOT operation where the control bit is the outcome of the random number generator described above. Finally Q applies a Hadamard gate again, before we measure the outcome. The full circuit is shown in the figure below.
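A sketch of that circuit in pyquil follows (assuming the 2.x Program/declare/MEASURE API; qubit 0 carries Picard's random choice, qubit 1 is the penny, with |1> meaning heads):

from pyquil import Program
from pyquil.gates import H, X, CNOT, MEASURE

prog = Program()
ro = prog.declare('ro', 'BIT', 2)
prog += X(1)        # place the penny heads up
prog += H(0)        # Picard's random flip/don't-flip decision
prog += H(1)        # Q's first move: superposition
prog += CNOT(0, 1)  # Picard flips the penny iff qubit 0 is 1
prog += H(1)        # Q's second move undoes the superposition
prog += MEASURE(1, ro[0])  # reveal the penny: always heads

Running this for many trials should show qubit 1 measuring 1 every time, which is exactly why Picard keeps losing.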
https://pyquil.readthedocs.io/en/v2.5.1/exercises.html
API for retrieving the TFS information for a database. All the functions here are located in the RDM DB Engine Library. Linker option: -lrdmrdm
#include <rdmdbapi.h>
Get RDM database information.
The function takes a semicolon-delimited list of information keywords and returns the information associated with each keyword.
key should be a list of info values. If key is NULL, the function returns the pairs for all available options. If key is an empty string, the function returns an empty string.
Options are defined using keys or properties. Every key has a name and a value, delimited by an equals sign (=). The key name appears to the left of the equals sign. Key names are not case-sensitive. Unless otherwise noted, values are not case-sensitive.
#include <rdmdbapi.h>
Get the RDM_TFS handle associated with a db.
This function assigns the RDM_TFS handle associated with an RDM_DB to pTFS.
#include <rdmdbapi.h>
Get the type of the RDM_TFS handle associated with a database.
This function assigns the type of the RDM_TFS handle associated with an RDM_DB to pTfsType.
https://docs.raima.com/rdm/14_1/group__db__information.html
Structural diff for Scala types
This project provides the Difference ADT, which models a structural diff of Scala values. It is built on (and depends on) cats and shapeless.

Differences are computed via the Diff[A] type class, which wraps a function of type (A, A) => Option[Difference].

The goal of this library is to provide convenient and extensible generic derivation of Diff instances, as well as a decent textual representation of Difference values.
TODO: Add examples, with tut ideally
Current status
Diff instances can be derived for:
- primitive types as well as String, UUID,
- a selection of usual collection-like types, including Map, Option and Either
- enumeratum Enums, via autodiff-enumeratum.

Generic derivation (for LabelledGeneric types) is provided by autodiff-generic, and is opt-in like e.g. in circe:
- either automatic with import fr.thomasdufour.autodiff.generic.auto._
- or semi-automatic with import fr.thomasdufour.autodiff.generic.semiauto._ and using deriveDiff

The Difference type has only a couple of textual representation options and no convenience methods to speak of.
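As a rough illustration of the semi-automatic flavor described above (a sketch: Diff and deriveDiff are the names given in this README, while the case class and exact shape are assumptions):

import fr.thomasdufour.autodiff.Diff
import fr.thomasdufour.autodiff.generic.semiauto._

case class Person(name: String, age: Int)

// Builds a Diff[Person] from the Diff instances of its fields.
implicit val personDiff: Diff[Person] = deriveDiff[Person]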
Future work
- Further API exploration for the "front-end", including test framework integration.
- Improve test coverage
Credits
xdotai/diff for inspiration.
https://index.scala-lang.org/chwthewke/auto-diff/auto-diff-enumeratum/0.1.0?target=_2.12
How to Build a Real-time Collaborative Markdown Editor with React Hooks, GraphQL & AWS AppSync:
An idea that immediately stood out to me was from NagarajanFs on Twitter:
Basically his idea was Google Docs but for markdown. I loved it, so I decided to build it! The result was writewithme.dev.
Getting Started
The first thing I did was created a new React application:
npx create-react-app write-with-me
The next thing I needed to do was to find the tool I was going to use to allow markdown in a React App. I stumbled upon react-markdown by Espen Hovlandsdal (rexxars on Twitter).
npm install react-markdown
React Markdown was super simple to use. You import ReactMarkdown from the package & pass in the markdown you'd like to render as a prop:
const ReactMarkdown = require('react-markdown')
const input = '# This is a header\n\nAnd this is a paragraph'
<ReactMarkdown source={input} />
Building the API
Now that we have the markdown tool, the next step was creating the API. I used AWS AppSync & AWS Amplify to do this. With the Amplify CLI, you can define a base type & add a decorator to build out the entire backend by taking advantage of the GraphQL Transform library.
amplify init
amplify add api
// choose GraphQL
// choose API Key as the authentication type
The schema I used was this:
type Post @model {
id: ID!
clientId: ID!
markdown: String!
title: String!
createdAt: String
}
The @model decorator will tell Amplify to deploy a GraphQL backend complete with a DynamoDB data source & a schema (types & operations) for all CRUD operations & GraphQL subscriptions. It will also create the resolvers for the generated GraphQL operations.
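For orientation, the operations used later in this post (listPosts, createPost, onCreatePost, onUpdatePost) follow Amplify's usual naming convention; a representative generated query looks roughly like this (a sketch, with arguments omitted):

query ListPosts {
  listPosts {
    items {
      id
      clientId
      markdown
      title
      createdAt
    }
  }
}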
After defining & saving the schema we deploy the AppSync API:
amplify push
When we run amplify push, we're also given the option to execute GraphQL codegen in either TypeScript, JavaScript, or Flow. Doing this will introspect your GraphQL schema & automatically generate the GraphQL code you'll need on the client in order to execute queries, mutations, & subscriptions.
Installing the dependencies
Next, we install the other necessary libraries we need for the app:
npm install aws-amplify react-router-dom uuid glamor debounce
- uuid — to create unique IDs to uniquely identify the client
- react-router-dom — to add routing
- aws-amplify — to interact with the AppSync API
- glamor — for styling
- debounce — for adding a debounce when user types
Writing the code
Now that our project is set up & our API has been deployed, we can start writing some code!
The app has three main files:
- Router.js — Defines the routes
- Posts.js — Fetches & renders the posts
- Post.js — Fetches & renders a single post
Router.js
Setting up navigation was pretty basic. We needed two routes: one for listing all of the posts & one for viewing a single post. I decided to go with the following route scheme:
/post/:id/:title
When someone hits the above route, we have access to everything we need to display a title for the post as well as the ID for us to fetch the post if it is an existing post. All of this info is available directly in the route parameters.
React Hooks with GraphQL
If you’ve ever wondered how to implement hooks into a GraphQL application, I recommend also reading my post Writing Custom Hooks for GraphQL because that’s exactly what we’ll be doing.
Instead of going over all of the code for the 2 components (if you’d like to see the code, click on the links above), I’d like to focus on how we implemented the necessary functionality using GraphQL & hooks.
The main API operations we needed from this app are intended to do the following:
- Load all of the posts from the API
- Load an individual post
- Subscribe to changes within an individual post & re-render the component
Let’s take a look at each.
To load the posts I went with a useReducer hook combined with a function call within a useEffect hook. We first define the initial state. When the component renders for the first time, we call the API & update the state using the fetchPosts function. This reducer will also handle our subscription:
import { listPosts } from './graphql/queries'
import { API, graphqlOperation } from 'aws-amplify'
const initialState = {
  posts: [],
  loading: true,
  error: false
}
function reducer(state, action) {
switch (action.type) {
case 'fetchPostsSuccess':
return { ...state, posts: action.posts, loading: false }
case 'addPostFromSubscription':
return { ...state, posts: [ action.post, ...state.posts ] }
case 'fetchPostsError':
return { ...state, loading: false, error: true }
default:
throw new Error();
}
}
async function fetchPosts(dispatch) {
try {
const postData = await API.graphql(graphqlOperation(listPosts))
dispatch({
type: 'fetchPostsSuccess',
posts: postData.data.listPosts.items
})
} catch (err) {
console.log('error fetching posts...: ', err)
dispatch({
type: 'fetchPostsError',
})
}
}
// in the hook
const [postsState, dispatch] = useReducer(reducer, initialState)
useEffect(() => {
fetchPosts(dispatch)
}, [])
Subscribing to new posts being created
We already have our state ready to go from the above code; now we just need to subscribe to the changes in the hook. To do that, we use a useEffect hook & subscribe to the onCreatePost subscription:
// onCreatePost is imported from './graphql/subscriptions'
useEffect(() => {
const subscriber = API.graphql(graphqlOperation(onCreatePost)).subscribe({
next: data => {
const postFromSub = data.value.data.onCreatePost
dispatch({
type: 'addPostFromSubscription',
post: postFromSub
})
}
});
return () => subscriber.unsubscribe()
}, [])
In this hook, we set up a subscription that will fire when a user creates a new post. When the data comes through, we call the
dispatch function & pass in the new post data that was returned from the subscription.
Fetching a single post
When a user lands on a route, we can use the data from the route params to identify the post name & ID:
In this route, the ID would be 9999b0bb-63eb-4f5b-9805-23f6c2661478 & the name would be Write with me.
When the component loads, we extract these params & use them.
The first thing we do is attempt to create a new post. If this is successful, we are done. If this post already exists, we are given the data for this post from the API call.
This may seem strange at first, right? Why not try to fetch, & if unsuccessful then create? Well, we want to reduce the total number of API calls. If we attempt to create a new post & the post already exists, the API will actually return the data from the existing post, allowing us to make only a single API call. This data is available in the errors:
err.errors[0].data
We handle the state in this component using a useReducer hook.
// initial state
const post = {
id: params.id,
title: params.title,
clientId: CLIENTID,
markdown: '# Loading...'
}
function reducer(state, action) {
switch (action.type) {
case 'updateMarkdown':
return { ...state, markdown: action.markdown, clientId: CLIENTID };
case 'updateTitle':
return { ...state, title: action.title, clientId: CLIENTID };
case 'updatePost':
return action.post
default:
throw new Error();
}
}
async function createNewPost(post, dispatch) {
try {
const postData = await API.graphql(graphqlOperation(createPost, { input: post }))
dispatch({
  type: 'updatePost',
  post: {
    ...postData.data.createPost,
    clientId: CLIENTID
  }
})
} catch(err) {
if (err.errors[0].errorType === "DynamoDB:ConditionalCheckFailedException") {
const existingPost = err.errors[0].data
dispatch({
  type: 'updatePost',
  post: {
    ...existingPost,
    clientId: CLIENTID
  }
})
}
}
}
// in the hook, initialize the state
const [postState, dispatch] = useReducer(reducer, post)
// fetch post
useEffect(() => {
const post = {
...postState,
markdown: input
}
createNewPost(post, dispatch)
}, [])
Subscribing to a post change
The last thing we need to do is subscribe to changes in a post. To do this, we do two things:
- Update the API when the user types (both title & markdown changes)
- Subscribe to onUpdatePost & update the local state when a change comes in from another client (shown in the useEffect after the update functions below)
async function updatePost(post) {
try {
await API.graphql(graphqlOperation(UpdatePost, { input: post }))
console.log('post has been updated!')
} catch (err) {
console.log('error:' , err)
}
}
function updateMarkdown(e) {
dispatch({
type: 'updateMarkdown',
markdown: e.target.value,
})
const newPost = {
id: post.id,
markdown: e.target.value,
clientId: CLIENTID,
createdAt: post.createdAt,
title: postState.title
}
updatePost(newPost)
}
function updatePostTitle (e) {
dispatch({
type: 'updateTitle',
title: e.target.value
})
const newPost = {
id: post.id,
markdown: postState.markdown,
clientId: CLIENTID,
createdAt: post.createdAt,
title: e.target.value
}
updatePost(newPost)
}
useEffect(() => {
const subscriber = API.graphql(graphqlOperation(onUpdatePost, {
id: post.id
})).subscribe({
next: data => {
if (CLIENTID === data.value.data.onUpdatePost.clientId) return
const postFromSub = data.value.data.onUpdatePost
dispatch({
type: 'updatePost',
post: postFromSub
})
}
});
return () => subscriber.unsubscribe()
}, [])
When the user types, we update both the local state as well as the API. In the subscription, we first check to see if the subscription data coming in is from the same Client ID. If it is, we do nothing. If it is from another client, we update the state.
Deploying the app on a custom domain
Now that we’ve built the app, what about deploying it to a custom domain like I did with writewithme.dev?
I did this using GoDaddy, Route53 & the Amplify console.
In the AWS dashboard, go to Route53 & click on Hosted Zones. Choose Create Hosted Zone. From there, enter your domain name & click create.
Be sure to enter your domain name as is, without www. E.g. writewithme.dev
Now in the Route53 dashboard you should be given 4 nameservers. Update the nameservers in your hosting account (GoDaddy in my case) to these 4 values. Then, back in the Amplify Console, click Get Started under the Deploy section.
Connect your GitHub account & then choose the repo & branch that your project lives in.
This will walk you through deploying the app in the Amplify Console & making it live. Once complete, you should see some information about the build & some screenshots of the app:
Next Steps
The next thing I’d like to do would be to add authentication & commenting functionality (so people can comment on each individual post). This is all doable with Amplify & AppSync. If we wanted to add authentication, we could add it with the CLI:
amplify add auth
Next, we’d have to write some logic to log people in & out or we can use the
withAuthenticator HOC. To learn more about how to do this, check out these docs.
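For example, wiring up the HOC is typically just a couple of lines (a sketch, not code from this project; App is assumed to be the root component & aws-amplify-react the package in use):
import { withAuthenticator } from 'aws-amplify-react'
// wrap the root component; Amplify then renders the sign-up / sign-in UI
export default withAuthenticator(App)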
Next, we’d probably want to update the schema to add commenting functionality. To do so, we could update the schema to something like what I created here. In this schema, we’ve added a field for the Discussion (in our case it would probably be called Comment).
In the resolver, we could probably also correlate a user’s identity to the message as well but using the
$context.identity.sub which would be available after the user is signed in.
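As an illustration only (this fragment is an assumption, not code from the project), a DynamoDB PutItem request mapping template could stamp the caller's identity on each record like this:
{
  "version": "2017-02-28",
  "operation": "PutItem",
  "key": { "id": { "S": "$ctx.args.input.id" } },
  "attributeValues": {
    "author": { "S": "$context.identity.sub" }
  }
}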
My name is Nader Dabit. I am a Developer Advocate at AWS, working with projects like AWS AppSync and AWS Amplify.
I’m also the author of React Native in Action, & the editor of React Native Training & OpenGraphQL.
|
https://medium.com/open-graphql/how-to-build-a-real-time-collaborative-markdown-editor-with-react-hooks-graphql-aws-appsync-dc0c121683f4?utm_source=newsletter&utm_medium=email&utm_content=offbynone&utm_campaign=Off-by-none%3A%20Issue%20%2332
|
CC-MAIN-2019-18
|
en
|
refinedweb
|
Modern Asynchronous Request/Reply with DDS-RPC
Written by Sumant Tambe
December 8, 2015
Complex distributed systems often use multiple styles of communication (interaction patterns) to implement functionality. One-to-many publish-subscribe, one-to-one request-reply, and one-to-one queuing (i.e., exactly-once delivery) are the most common. RTI Connext DDS supports all three. Two of them are available as standard specifications from the Object Management Group (OMG): the DDS specification (duh!) and the newer Remote Procedure Call (RPC) over DDS (DDS-RPC) specification.
The DDS-RPC specification is a significant generalization of RTI's own Request-Reply API in C++ and Java. There are two major differences: (1) DDS-RPC includes code generation from the IDL interface keyword, and (2) asynchronous programming using futures.
Let's look at an example straight from the DDS-RPC specification.
module robot {
  exception TooFast {};
  enum Command { START_COMMAND, STOP_COMMAND };
  struct Status {
    string msg;
  };
  @DDSService
  interface RobotControl {
    void  command(Command com);
    float setSpeed(float speed) raises (TooFast);
    float getSpeed();
    void  getStatus(out Status status);
  };
}; //module robot
It's pretty obvious what's going on here. RobotControl is an IDL interface to send commands to a robot. You can start/stop it, get its status, and control its speed. The setSpeed operation returns the old speed when successful. The robot knows its limits and if you try to go too fast, it will throw a TooFast exception right in your face.
To bring this interface to life you need a code generator. As of this writing, however, rtiddsgen coughs at the interface keyword. It does not recognize it. But sometime in the not-so-distant future it will.
For now, we've got the next best thing. I've already created a working example of this interface using hand-written code. The API of the hand-written code matches exactly the standard bindings specified in DDS-RPC. As a matter of fact, the dds-rpc-cxx repository is an experimental implementation of the DDS-RPC API in C++. Look at the normative files to peek into all the gory API details.
Java folks... did I mention that this is a C++ post? Well, I just did... But do not despair. The dds-rpc-java repository has the equivalent Java API for the DDS-RPC specification. However, there's no reference implementation. Sorry!
The Request/Reply Style Language Binding
DDS-RPC distinguishes between the Request/Reply and the Function-Call style language bindings. The Request-Reply language binding is nearly equivalent to RTI's Request/Reply API with some additional asynchronous programming support. More on that later.
Here's a client program that creates a requester to communicate with a RobotControl service. It asks for the current speed and increases it by 10 via getSpeed/setSpeed operations. The underlying request and reply topic names are "RobotControlRequest" and "RobotControlReply". No surprise there.
RequesterParams requester_params =
  dds::rpc::RequesterParams()
    .service_name("RobotControl")
    .domain_participant(...);

Requester<RobotControl_Request, RobotControl_Reply>
  requester(requester_params);

// helper class for automatic memory management
helper::unique_data<RobotControl_Request> request;
dds::Sample<RobotControl_Reply> reply_sample;
dds::Duration timeout = dds::Duration::from_seconds(1);
float speed = 0;

request->data._d = RobotControl_getSpeed_Hash;

requester.send_request(*request);
while (!requester.receive_reply(
          reply_sample, request->header.requestId, timeout));
speed = reply_sample.data().data._u.getSpeed._u.result.return_;
speed += 10;

request->data._d = RobotControl_setSpeed_Hash;
request->data._u.setSpeed.speed = speed;

requester.send_request(*request);
while (!requester.receive_reply(
          reply_sample, request->header.requestId, timeout));

if (reply_sample.data().data._u.setSpeed._d == robot::TooFast_Ex_Hash) {
  printf("Going too fast.\n");
}
else {
  speed = reply_sample.data().data._u.setSpeed._u.result.return_;
  printf("New Speed = %f", speed);
}
There's quite a bit of ceremony to send just two requests to the robot. The request/reply style binding is lower-level than function-call style binding. The responsibility of packing and unpacking data from the request and reply samples falls on the programmer. An alternative, of course, is to have the boilerplate code auto-generated. That's where the function-call style binding comes into picture.
The Function-Call Style Language Binding
The same client program can be written in much more pleasing way using the function-call style language binding.
robot::RobotControlSupport::Client
  robot_client(rpc::ClientParams()
                 .domain_participant(...)
                 .service_name("RobotControl"));
float speed = 0;

try {
  speed = robot_client.getSpeed();
  speed += 10;
  robot_client.setSpeed(speed);
}
catch (robot::TooFast &) {
  printf("Going too fast!\n");
}
Here, a code generator is expected to generate all the necessary boilerplate code to support natural RPC-style programming. The dds-rpc-cxx repository contains the code necessary for the RobotControl interface.
Pretty neat, isn't it?
Not quite... (and imagine some ominous music and dark skies! Think Mordor to set the mood.)
An Abstraction that Isn't
The premise of RPC, the concept, is that accessing a remote service can be and should be as easy as a synchronous local function call. The function-call style API tries to hide the complexity of network programming (serialization/deserialization) behind pretty looking interfaces. However, it works well only until it does not...
Synchronous RPC is a very poor abstraction of latency. Latency is a hard and insurmountable problem in network programming. For reference see the latency numbers every programmer should know.
If we slow down a computer to a time-scale that humans understand, it would take more than 10 years to complete the above synchronous program assuming we were controlling a robot placed in a different continent. Consider the following table taken from the Principles of Reactive Programming course on Coursera to get an idea of what human-friendly time-scale might look like.
The problem with synchronous RPC is not only that it takes very long to execute, but also that the calling thread is blocked doing nothing for that long. What a waste!
Dealing with failures of the remote service is also a closely related problem. But for now let's focus on the latency alone.
The problem discussed here isn't new at all. The solution, however, is new and quite exciting, IMO.
Making Latency Explicit ... as an Effect
The DDS-RPC specification uses language-specific future<T> types to indicate that a particular operation is likely going to take a very long time. In fact, every IDL interface gives rise to sync and async versions of the API and allows the programmer to choose.
The client-side generated code for the RobotControl interface includes the following asynchronous functions.
class RobotControlAsync {
public:
  virtual dds::rpc::future<void>  command_async(const robot::Command & command) = 0;
  virtual dds::rpc::future<float> setSpeed_async(float speed) = 0;
  virtual dds::rpc::future<float> getSpeed_async() = 0;
  virtual dds::rpc::future<robot::RobotControl_getStatus_Out> getStatus_async() = 0;
  virtual ~RobotControlAsync() { }
};
Note that every operation, including those that originally returned void, now returns a future object, which is a surrogate for the value that might be available in the future. If an operation reports an exception, the future object will contain the same exception and the user will be able to access it. The dds::rpc::future maps to std::future in C++11 and C++14 environments.
As it turns out, C++11 futures allow us to separate invocation from execution of remote operations but retrieving the result requires waiting. Therefore, the resulting program is barely an improvement.
try {
  dds::rpc::future<float> speed_fut = robot_client.getSpeed_async();
  // Do some other stuff
  while (speed_fut.wait_for(std::chrono::seconds(1)) ==
         std::future_status::timeout);
  speed = speed_fut.get();
  speed += 10;
  dds::rpc::future<float> set_speed_fut = robot_client.setSpeed_async(speed);
  // Do even more stuff
  while (set_speed_fut.wait_for(std::chrono::seconds(1)) ==
         std::future_status::timeout);
  set_speed_fut.get();
}
catch (robot::TooFast &) {
  printf("Going too fast!\n");
}
The client program is free to invoke multiple remote operations (say, to different services) back-to-back without blocking but there are at least three problems with it.
- The program has to block in most cases to retrieve the results. In cases where the result is available before blocking, there's problem #2.
- It is also possible that the result of the asynchronous operation becomes available while the main thread is busy doing something other than waiting. If no thread retrieves the result, that chain of computation (i.e., adding 10 and calling setSpeed) makes no progress. This is essentially what's known as continuation blocking, because the subsequent computation is blocked due to missing resources.
- Finally, the programmer must also correlate requests with responses because, in general, the order in which the futures become ready is not guaranteed to match the order in which the requests were sent. Some remote operations may take longer than others. To resolve this non-determinism, programmers may have to implement state machines very carefully.
For the above reasons, this program is not a huge improvement over the request/reply style program at the beginning. In both cases, invocation is separate from retrieval of the result (i.e., both are asynchronous) but the program must wait to retrieve the results.
That reminds me of the saying:
"Blocking is the goto of Modern Concurrent Programming"
---someone who knows stuff
In the multicore era, where the programs must scale to utilize the underlying parallelism, any blocking function call robs the opportunity to scale. In most cases, blocking will consume more threads than strictly necessary to execute the program. In responsive UI applications, blocking is a big no-no. Who has tolerance for unresponsive apps these days?
Composable Futures to the Rescue
Thankfully people have thought of this problem already and proposed improved futures in the Concurrency TS for the ISO C++ standard. The improved futures support
- Serial composition via .then()
- Parallel composition via when_all() and when_any()
A lot has been written about improved futures. See Dr. Dobb's, Facebook's Folly, and monadic futures. I won't repeat that here but an example of .then() is in order. This example is also available in the dds-rpc-cxx repository.
for (int i = 0; i < 5; i++) {
  robot_client
    .getSpeed_async()
    .then([robot_client](future<float> && speed_fut) {
      float speed = speed_fut.get();
      printf("getSpeed = %f\n", speed);
      speed += 10;
      return remove_const(robot_client).setSpeed_async(speed);
    })
    .then([](future<float> && speed_fut) {
      try {
        float speed = speed_fut.get();
        printf("speed set successfully.\n");
      }
      catch (robot::TooFast &) {
        printf("Going too fast!\n");
      }
    });
}
printf("Press ENTER to continue\n");
getchar();
The program with .then() sets up a chain of continuations (i.e., closures passed in as callbacks) to be executed when the dependent futures are ready with their results. As soon as the future returned by getSpeed_async is ready, the first closure passed to the .then() is executed. It invokes setSpeed_async, which returns yet another future. When that future completes, the program continues with the second callback that just prints success/failure of the call.
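Parallel composition works analogously. Below is a minimal sketch (not from the DDS-RPC specification or the dds-rpc-cxx repository) assuming Concurrency TS futures and two hypothetical robot clients; when_all yields a future that becomes ready once both queries complete:

#include <experimental/future>  // Concurrency TS; exact header may vary by toolchain
#include <tuple>
#include <cstdio>

void print_both_speeds(robot::RobotControlSupport::Client & r1,
                       robot::RobotControlSupport::Client & r2)
{
  // when_all returns future<tuple<future<float>, future<float>>>
  std::experimental::when_all(r1.getSpeed_async(), r2.getSpeed_async())
    .then([](auto && all_fut) {
      auto futures = all_fut.get();   // both inner futures are ready here
      printf("speeds = %f, %f\n",
             std::get<0>(futures).get(),
             std::get<1>(futures).get());
    });
}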
Reactive Programming
This style of chaining dependent computations and constructing a dataflow graph of asynchronous operations is at the heart of reactive programming. It has a number of advantages.
- There's no blocking per request. The program fires a series of requests, sets up a callback for each, and waits for everything to finish (at getchar()).
- There's no continuation blocking, because the implementation of future is often such that the thread that sets the value of the promise object (the callee side of the future) continues with callback execution right away.
- Correlation of requests with replies need not be explicit, because each async invocation produces a unique future and its result is available right in the callback. Any state needed to complete the execution of the callback can be captured in the closure object.
- Requires no incidental data structures, such as state machines and std::map for request/reply correlation. This benefit is a consequence of chained closures.
This fluid style of asynchronous programming, enabled by composable futures and lambdas, is quite characteristic of modern asynchronous programming in languages such as JavaScript and C#. CompletableFuture in Java 8 also provides the same pattern.
The Rabbit Hole Goes Deeper
While serial composition of futures (.then) looks cool, any non-trivial asynchronous program written with futures quickly gets out of hand due to callbacks. The .then function restores some control over a series of asynchronous operations at the cost of the familiar control-flow and debugging capabilities.
Think about how you might write a program that speeds up the robot from its current speed to some MAX in increments of 10 by repeatedly calling getSpeed/setSpeed asynchronously and never blocking.
Here's my attempt.
dds::rpc::future<float> speedup_until_maxspeed(
  robot::RobotControlSupport::Client & robot_client)
{
  static const int increment = 10;
  return robot_client
    .getSpeed_async()
    .then([robot_client](future<float> && speed_fut) {
      float speed = speed_fut.get();
      speed += increment;
      if (speed <= MAX_SPEED) {
        printf("speedup_until_maxspeed: new speed = %f\n", speed);
        return remove_const(robot_client).setSpeed_async(speed);
      }
      else
        return dds::rpc::details::make_ready_future(speed);
    })
    .then([robot_client](future<float> && speed_fut) {
      float speed = speed_fut.get();
      if (speed + increment <= MAX_SPEED)
        return speedup_until_maxspeed(remove_const(robot_client));
      else
        return dds::rpc::details::make_ready_future(speed);
    });
}

// wait for the computation to finish asynchronously
speedup_until_maxspeed(robot_client).get();
This program is unusually complex for what little it achieves. The speedup_until_maxspeed function appears to be recursive, as the second lambda calls the function again if the speed is not high enough. In reality the caller's stack does not grow; only heap allocations for each future's shared state are made as successive calls to getSpeed/setSpeed proceed.
The next animation might help understand what's actually happening during execution.
The control-flow in a program with even heavy .then usage is going to be quite hard to understand, especially when there are nested callbacks, loops, and conditionals. We lose the familiar stack-trace because internally .then is stitching together many small program fragments (lambdas) that have only logical continuity but awkward physical and temporal continuity.
Debugging becomes harder too. To understand what's hard about it, I fired up the Visual Studio debugger and stepped the program through several iterations. The call-stack appears to grow indefinitely while the program is "recursing". But note there are many asynchronous calls in between, so the stack isn't really growing. I tested with 100,000 iterations and the stack did not pop. Here's a screenshot of the debugger.
So, .then() seems like a mixed blessing.
Wouldn't it be nice if we could dodge the awkwardness of continuations, write program like it's synchronous but execute it fully asynchronously?
Welcome to C++ Coroutines
Microsoft has proposed a set of C++ language and library extensions called Resumable Functions that helps write asynchronous code that looks synchronous with familiar loops and branches. The latest proposal as of this writing (N4402) includes a new keyword await and its implementation relies on improved futures we discussed already.
Update: The latest C++ standard development suggests that the accepted keyword will be co_await (for coroutine await).
The speedup_until_maxspeed function can be written naturally as follows.
dds::rpc::future<void> test_iterative_await(
  robot::RobotControlSupport::Client & robot_client)
{
  static const int inc = 10;
  float speed = 0;
  while ((speed = await robot_client.getSpeed_async()) + inc <= MAX_SPEED)
  {
    await robot_client.setSpeed_async(speed + inc);
    printf("current speed = %f\n", speed + inc);
  }
}

test_iterative_await(robot_client).get();
I'm sure C# programmers will immediately recognize that it looks quite similar to the async/await in .NET. C++ coroutines bring a popular feature in .NET to native programmers. Needless to say such a capability is highly desirable in C++ especially because it makes asynchronous programming with DDS-RPC effortless.
The best part is that compiler support for await is already available! Microsoft Visual Studio 2015 includes experimental implementation of resumable functions. I have created several working examples in the dds-rpc-cxx repository. The examples demonstrate await with both Request-Reply style language binding and Function-Call style language binding in the DDS-RPC specification.
Like the example before, I debugged this example too. It feels quite natural to debug because as one would expect, the stack does not appear to grow indefinitely. It's like debugging a loop except that everything is running asynchronously. Things look pretty solid from what I could see! Here's another screenshot.
Concurrency
The current experimental implementation of DDS-RPC uses a thread-per-request model to implement concurrency. This is a terrible design choice, but it serves a purpose: it's very quick to implement. A much better implementation would use some sort of thread pool and an internal queue (i.e., an executor). The Concurrency TS is considering adding executors to C++.
Astute readers will probably realize that thread-per-request model implies that each request completes in its own thread and therefore a question arises regarding the thread that executes the remaining code. Is the code following await required to be thread-safe? How many threads might be executing speedup_until_maxspeed at a time?
A quick test (with rpc::future wrapping PPL tasks) of the above code revealed that the code following await is executed by two different threads. These two threads are never the ones created by the thread-per-request model. This implies that there's a context switch from the thread that received the reply to the thread that resumed the test_iterative_await function. The same behavior is observed in the program with explicit .then calls. Perhaps the resulting behavior is dependent on the actual future/promise types in use. I also wonder if there is a way to execute the code following await in parallel. Any comments, Microsoft?
A Sneak Peek into the Mechanics of await
A quick look into N4402 reveals that the await feature relies on composable futures, especially .then (serial composition) combinator. The compiler does all the magic of transforming the asynchronous code to a state machine that manages suspension and resumption automatically. It is a great example of how compiler and libraries can work together producing a beautiful symphony.
C++ coroutines work with any type that looks and behaves like composable futures. It also needs a corresponding promise type. The requirements on the library types are quite straight-forward, especially if you have a composable future type implemented somewhere else. Specifically, three free functions, await_ready, await_suspend, and await_resume must be available in the namespace of your future type.
In the DDS-RPC specification, dds::rpc::future<T> maps to std::future<T>. As std::future<T> does not have the necessary functionality for await to work correctly, I implemented dds::rpc::future<T> using both boost::future and concurrency::task<T> from Microsoft's PPL. Further, the dds::rpc::future<T> type was adapted with its own await_* functions and a promise type.
template <typename T>
bool await_ready(dds::rpc::future<T> const & t)
{
  return t.is_ready();
}

template <typename T, typename Callback>
void await_suspend(dds::rpc::future<T> & t, Callback resume)
{
  t.then([resume](dds::rpc::future<T> const &) {
    resume();
  });
}

template <typename T>
T await_resume(dds::rpc::future<T> & t)
{
  return t.get();
}
Adapting boost::future<T> was straightforward, as the N4402 proposal includes much of the necessary code, but some tweaks were necessary because the draft and the implementation in Visual Studio 2015 appear slightly different. Implementing dds::rpc::future<T> and dds::rpc::details::promise<T> using concurrency::task<T> and concurrency::task_completion_event<T> needed a little more work, as both types had to be wrapped inside what will be standard types in the near future (C++17). You can find all the details in future_adapter.hpp.
There are a number of resources available on C++ Coroutines if you want to explore further. See the video recording of the CppCon'15 session on "C++ Coroutines: A Negative Overhead Abstraction". Slides on this topic are available here and here. The Resumable Functions proposal is not limited to just await. There are other closely related capabilities, such as the yield keyword for stackless coroutines, the await for keyword, generator, recursive_generator, and more.
Indeed, this is truly an exciting time to be a C++ programmer.
|
https://www.rti.com/blog/2015/12/08/modern-asynchronous-requestreply-with-dds-rpc/
|
CC-MAIN-2019-18
|
en
|
refinedweb
|
Synopsis
- lassign list varName ?varName ...?
Documentation
- official reference
- TIP 57: proposed making the TclX lassign command a built-in Tcl command
Description
lassign assigns values from a list to the specified variables, and returns the remaining values. For example:

set end [lassign {1 2 3 4 5} a b c]

will set $a to 1, $b to 2, $c to 3, and $end to 4 5.

In lisp parlance:

set cdr [lassign $mylist car]

The K trick is sometimes used with lassign to improve performance, by causing the Tcl_Obj holding $mylist to be unshared so that it can be re-used to hold the return value of lassign:

set cdr [lassign $mylist[set mylist {}] car]

If there are more varNames than there are items in the list, the extra varNames are set to the empty string:

% lassign {1 2} a b c
% puts $a
1
% puts $b
2
% puts $c

%

In Tcl prior to 8.5, foreach was used to achieve the functionality of lassign:

foreach {var1 var2 var3} $list break

DKF: The foreach trick was sometimes written as:

foreach {var1 var2 var3} $list {}

This was unwise, as it would cause a second iteration (or more) to be done when $list contains more than 3 items (in this case). Putting the break in makes the behaviour predictable.
Example: Perl-ish shift
DKF cleverly points out that lassign makes a Perl-ish shift this easy:

proc shift {} {
    global argv
    set argv [lassign $argv v]
    return $v
}

On the other hand, Hemang Lavana observes that TclXers already have lvarpop ::argv, an exact synonym for shift. On the third hand, RS would use our old friend K to code like this:

proc shift {} {
    K [lindex $::argv 0] [set ::argv [lrange $::argv[set ::argv {}] 1 end]]
}

Lars H: Then I can't resist doing the above without the K:

proc shift {} {
    lindex $::argv [set ::argv [lrange $::argv[set ::argv {}] 1 end]; expr 0]
}
Default Value
FM: here's a quick way to assign with default value, using apply:

proc args {spec list} {
    apply [list $spec [list foreach {e} $spec {
        uplevel 2 [list set [lindex $e 0] [set [lindex $e 0]]]
    }]] {*}$list
}
set L {}
args {{a 0} {b 0} {c 0} args} $L

AMG: Clever. Here's my version, which actually uses lassign, plus it matches lassign's value-variable ordering. It uses lcomp for brevity.

proc args {vals args} {
    set vars [lcomp {$name} for {name default} inside $args]
    set allvals "\[list [join [lcomp {"\[set [list $e]\]"} for e in $vars]]\]"
    apply [list $args "uplevel 2 \[list lassign $allvals $vars\]"] {*}$vals
}

Without lcomp:

proc args {vals args} {
    lassign "" scr vars
    foreach varspec $args {
        append scr " \[set [list [lindex $varspec 0]]\]"
        lappend vars [lindex $varspec 0]
    }
    apply [list $args "uplevel 2 \[list lassign \[list$scr\] $vars\]"] {*}$vals
}

This code reminds me of the movie "Inception" [1]. It exists, creates itself, and operates at and across multiple levels of interpretation. There's the caller, there's [args], there's [apply], then there's the [uplevel 2] that goes back to the caller. The caller is the waking world, [args] is the dream, [apply] is its dream-within-a-dream, and [uplevel] is its dream-within-a-dream that is used to implant an idea (or variable) into the waking world (the caller). And of course, the caller could itself be a child stack frame, so maybe reality is just another dream! ;^)

Or maybe this code is a Matryoshka nesting doll [2] whose innermost doll contains the outside doll. ;^)

Okay, now that I've put a cross-cap in reality [3], let me demonstrate how [args] is used:

args {1 2 3} a b c           ;# a=1 b=2 c=3
args {1 2} a b {c 3}         ;# a=1 b=2 c=3
args {} {a 1} {b 2} {c 3}    ;# a=1 b=2 c=3
args {1 2 3 4 5} a b c args  ;# a=1 b=2 c=3 args={4 5}

FM: to conform to the AMG (and lassign) syntax:

proc args {values args} {
    apply [list $args [list foreach e $args {
        uplevel 2 [list set [lindex $e 0] [set [lindex $e 0]]]
    }]] {*}$values
}

both versions seem to have the same speed.

PYK, 2015-03-06, wonders why lassign decided to mess with variable values that were already set, preventing default values from being set beforehand:

#warning, hypothetical semantics
set color green
lassign 15 size color
set color ;# -> green

AMG: I'm pretty sure this behavior is imported from [foreach], which does the same thing. [foreach] is often used as a substitute for [lassign] on older Tcl.

So, what to do when there are more variable names than list elements? I can think of four approaches:
- Set the extras to empty string. This is current [foreach] and [lassign] behavior.
- Leave the extras unmodified. This is PYK's preference.
- Unset the extras if they currently exist. Their existence can be tested later to see if they got a value.
- Throw an error. This is what Brush proposes, but now I may be leaning towards PYK's idea.
Gotcha: Ambiguity with List Items that are the Empty String
CMcC 2005-11-14: I may be just exceptionally grumpy this morning, but the behavior of supplying default empty values to extra variables means you can't distinguish between a trailing var with no matching value, and one with a value of the empty string. Needs an option, -greedy or something, to distinguish between the two cases. Oh, and it annoys me that lset is already taken, because lassign doesn't resonate well with set.

Kristian Scheibe: I agree with CMcC on both counts - supplying a default empty value when no matching value is provided is bad form; and lset/set would have been better than lassign/set. However, I have a few other tweaks I would suggest, then I'll tie it all together with code to do what I suggest.

First, there is a fundamental asymmetry between the set and lassign behaviors: set copies right to left, while lassign goes from left to right. In fact, most computer languages use the idiom of right to left for assignment. However, there are certain advantages to the left to right behavior of lassign (in Tcl). For example, when assigning a list of variables to the contents of args. Using the right to left idiom would require eval.

Still, the right-to-left behavior also has its benefits. It allows you to perform computations on the values before performing the assignment. Take, for example, this definition of factorial (borrowed from Tail call optimization):

proc fact0 n {
    set result 1.
    while {$n > 1} {
        set result [expr {$result * $n}]
        set n [expr {$n - 1}]
    }
    return $result
}

Now, with lassign as currently implemented, we can "improve" this as follows:

proc fact0 n {
    set result 1.
    while {$n > 1} {
        lassign [list [expr {$result * $n}] [expr {$n - 1}]] result n
    }
    return $result
}

I'm hard-pressed to believe that this is better. However, if we changed lassign to be lassign vars args, we can write this as:

proc fact0 n {
    set result 1.
    while {$n > 1} {
        lassign {result n} [expr {$result * $n}] [expr {$n - 1}]
    }
    return $result
}

To my eye, at least, this is much more readable.

So, I suggest that we use two procedures: lassign and lassignr (where "r" stands for "reverse"). lassign would be used for the "standard" behavior: right to left. lassignr would then be used for left to right. This is backwards from the way it is defined above for TclX and Tcl 8.5. Nonetheless, this behavior aligns better with our training and intuition.

Also, this provides a couple of other benefits. First, the parallel to set is much more obvious. lassign and set both copy from right to left (of course, we are still left with the asymmetry in their names - I'll get to that later). And, we can now see why assigning an empty string to a variable which doesn't have a value supplied is bad form; this is not what set does! If you enter set a you get the value of $a, you don't assign the empty string to a. lassign should not either. If you want to assign the empty string using set you would enter:

set a {}

With lassign, you would do something similar:

lassign {a b c} 1 {}

Here, $a gets 1, $b gets the empty string, and $c is not touched. This behavior nicely parallels that of set, except that set returns the new value, and lassign returns the remaining values. So, let's take another step in that direction; we'll have lassign and lassignr return the "used" values instead.

But this destroys a nice property of lassign. Can we recover that property? Almost. We can do what proc does; we can use the "args" variable name to indicate a variable that sucks up all the remaining items. So, now we get:

lassign {a b args} 1 2 3 4 5 6

$a gets 1, $b gets 2, and args gets 3 4 5 6. Of course, we would make lassignr work similarly:
lassignr {1 2 3 4 5 6} a b args

But, now that we have one of the nice behaviors of the proc "assignment", what about that other useful feature: default values? We can do that as well. So, if a value is not provided, then the default value is used:

lassign {a {b 2}} one

$b gets the value 2. This also provides for the assignment of an empty list to a variable if the value is not provided. So, those who liked that behavior can have their wish as well:

lassign {a {b {}}} one

But simple defaults are not always adequate. This only provides for constants. If something beyond that is required, then explicit lists are needed. For example:

lassign [list a [list b $defaultb]] one

This gets to be ugly, so we make one more provision: we allow variable references within the defaults:

lassign {a {b $defaultb}} one

Now, this really begins to provide the simplicity and power that we should expect from a general purpose utility routine. And, it parallels other behaviors within Tcl (proc and set) well so that it feels natural.

But we're still left with this lassign/set dichotomy. We can't rename lassign to be lset without potentially breaking someone's code. But notice that lassign now provides features that set does not. So, instead, let's create an assign procedure that provides these same features, but only for a single value:

assign {x 3} 7

Sets x to 7. If no value is provided, x will be 3.

So, we now have three functions, assign, lassign, and lassignr, that collectively provide useful and powerful features that, used wisely, can make your code more readable and maintainable. You could argue that you only "need" one of these (pick one) - the others are easily constructed from whichever is chosen. However, having all three provides symmetry and flexibility.

I have provided the definitions of these functions below. The implementation is less interesting than the simple power these routines provide. I'm certain that many of you can improve these implementations. And, if you don't like my rationale on the naming of lassignr, then you can swap the names. It's easy to change other aspects as well; for example, if you still want lassign to return the unused values, it's relatively easy to modify these routines.

proc assign {var args} {
    if {[llength $var] > 1} {
        uplevel set $var
    }
    uplevel set [lindex $var 0] $args
}

proc lassign {vars args} {
    if {([lindex $vars end] eq "args") && ([llength $args] > [llength $vars])} {
        set last [expr {[llength $vars] - 1}]
        set args [lreplace $args $last end [lrange $args $last end]]
    }
    # This is required so that we can distinguish between the value {} and no
    # value
    foreach val $args {lappend vals [list $val]}
    foreach var $vars val $vals {
        lappend res [uplevel assign [list $var] $val]
    }
    return $res
}

proc lassignr {vals args} {
    uplevel lassign [list $args] $vals
}

slebetman: KS, your proposal seems to illustrate that you don't get the idea of lassign. For several years now I have used my own homegrown proc, unlist, that has the exact same syntax and semantics of lassign. The semantics behind lassign is not like set at all but more like scan, where the semantics in most programming languages (at least in C and Python) is indeed assignment from left to right. The general use of a scanning function like lassign is that, given an opaque list (one that you did not create), it splits the list into individual variables.

If you really understand the semantics lassign was trying to achieve then you wouldn't have proposed your:
lassign vars args

To achieve the semantics of lassign but with right to left assignment you should have proposed:

lassign vars list

Of course, your proposal above can work with Tcl8.5 using {*}:

lassign {var1 var2 var3} {*}$list

But that means for 90% of cases where you would use lassign you will have to also use {*}. Actually Tcl8.4 already has a command which does what lassign is supposed to do but with a syntax that assigns from right to left: foreach. Indeed, my home-grown unlist is simply a wrapper around foreach as demonstrated by sbron above. With 8.4, if you want lassign-like functionality you would do:

foreach {var1 var2 var3} $list {}

Kristian Scheibe: slebetman, you're right, I did not get that the semantics of lassign (which is mnemonic for "list assign") should match those of scan and not set (which is a synonym for assign). Most languages refer to the operation of putting a value into a variable as "assignment", and, with only specialized exception, this is done right-to-left. I'm certain that others have made this same mistake; in fact, I count myself in good company, since the authors of the lassign TIP 57

set {x y} [LocateFeature $featureID]

or

mset {x y} [LocateFeature $featureID]

So, you see, when TIP #57

## Using [scan]
set r 80
set g 80
set b 80
scan $rgb #%2x%2x%2x r g b
set resultRgb [list $r $g $b]

## Using [regexp]
regexp {$#(..)?(..)?(..)?^} $rgb r g b
if {! [llength $r]} {set r 80}
if {! [llength $g]} {set g 80}
if {! [llength $b]} {set b 80}
set resultRgb [list $r $g $b]

As you can see, the idioms required are different in each case. If, as you're developing code, you start with the scan approach, then decide you need to support something more sophisticated (eg, you want to have decimal, octal, or hex numbers), then you need to remember to change not just the parsing, but the method of assigning defaults as well.

This also demonstrates again that providing a default value (eg, {}) when no value is provided really ought to be defined by the application and not the operation. The method of using defaults with scan is more straightforward (and amenable to using lassign or [lscan]) than the method with regexp.

The solution that I proposed was to make applying defaults similar to the way that defaults are handled with proc: {var dflt}. In fact, I would go a step farther and suggest that this idiom should be available to all Tcl operations that assign values to variables (including scan and regexp). But, I think that this is unlikely to occur, and is beyond the scope of what I was discussing.

The real point of my original posting was to demonstrate the utility, flexibility, power, and readability of using this idiom. I think it's a shame to limit that idiom to proc. The most general application of it is to use it for assignment, which is what I showed.

slebetman: I agree with the defaults mechanism. Especially since we're so used to using it in proc. I wish we had it in all commands that assign values to multiple variables:

lassign $foo {a 0} {b {}} {c none}
scan $rgb #%2x%2x%2x {r 80} {g 80} {b 80}
regexp {$#(..)?(..)?(..)?^} $rgb {r 80} {g 80} {b 80}
foreach {x {y 0} {z 100}} $argv {..}

I think such commands should check if the variable it is assigning to is a pair of words of which the second word is the default value. Assigning an empty string has always seemed to me too much like the hackish NULL value trick in C (the number of times I had to restructure apps because the customer insisted that zero is a valid value and should not signify undefined...).

The only downside I can think of is that this breaks apps with spaces in variable names. But then again, most of us are used to not writing spaces in variable names, and we are used to this syntax in proc.

BTW, I also think lassign is a bad name for this operation. It makes much more sense if we instead use the name lassign to mean assign "things" to a list, which fits your syntax proposal. My personal preference is still unlist (when we finally get 8.5 I'll be doing an interp alias {} unlist {} lassign). lscan doesn't sound right to me, but lsplit sounds just right for splitting a list into individual variables.

DKF: The name comes from TclX. Choosing a different name or argument syntax to that very well known piece of code is not worth it; just gratuitous incompatibility.
fredderic: I really don't see what all the fuss is about. lassign is fine just the way it is, lset is already taken, anything-scan sounds like it does a heck of a lot more than just assigning words to variables, and the concept of proc-like default values just makes me shudder... Even in the definition of a proc! ;)

Something I would like to see, is an lrassign that does the left-to-right thing, and maybe some variant or option to lassign that takes a second list of default values:

lassign-with-defs defaultsList valuesList ?variable ...?

where the defaults list would be empty-string-extended to the number of variables given (any extra defaults would simply be ignored), the values list wouldn't (any extra values would be returned as per usual), so you'd end up with:

lassign-with-defs {1 2 3} {a {}} w x y z

being the equivalent of:

set w a  ;# from values list
set x {} ;# also from values list
set y 3  ;# 3rd default value carried through
set z {} ;# empty-string expanded defaults
# with both arguments consumed, an empty string is returned

The old filling-with-empty-strings lassign behaviour would thus be achieved by simply giving it an empty default values list, and the whole thing would be absolutely fabulous. ;)

Of course, the catch is that if you simply take away the filling-with-empty-strings behaviour from lassign, then the defaults capability is created by simply doing two lassigns. A little wasteful, perhaps (possibly problematic if variable write traces are involved), but still better than most of the alternatives. (Perhaps a third argument to lrepeat would fulfill the empty-string-filling requirement by accepting an initial list, and repeatedly appending the specified item until the list contains at least count words? I can imagine several occasions where that could be handy.)

Ed Hume: I think the syntax of lassign is not as useful as having the value list and the variable name list being of similar structure:

vset {value1 value2 value3 ...} {name1 name2 name3 ...}

I have provided an lset command since Tcl 7.6 in my toolset, which was renamed to vset with Tcl 8.4. Having both the names and values as vectors allows you to easily pass both to other procedures without resorting to a variable number of arguments. It is a common idiom to assign each row of table data to a list of column names and work with it:

foreach row $rows {
    vset $row $cols
    # now each column name is defined with the data of the table row
    # ...
}

A second significant advantage of this syntax is that the structure of the names and the values are not limited to vectors. The vset command is actually a simplified case of the rset command, which does a recursive set of nested data structures:

rset {1 2 3} {a b c}
# $a is 1, $b is 2, ...
rset {{1.1 1.2 1.3} 2 {3.1 3.2}} {{a b c} d {e f}}
# $a is 1.1, $b is 1.2, $f is 3.2, ...

The syntax of vset and rset lends itself to providing an optional third argument with default values for the case where empty values are not desired. So this is a cleaner implementation of fredderic's lassign-with-defaults - the default values can have the usual empty string default.

Now that Tcl has the expansion operator, the difference between lassign and vset is not as important as it was, but I do think vset is a lot more powerful.

DKF: Ultimately, we went for the option that we did because that was what TclX used. However, a side-benefit is that it also makes compiling the command to bytecode much easier than it would have been with vset. (Command compilers are rather tricky to write when they need to parse apart arguments.)
Script Implementation
Both the built-in lassign and the TclX lassign are faster than the scripted implementations presented below.

KPV: For those who want to use [lassign] before Tcl 8.5, and without getting TclX, here's a Tcl-only version of lassign:

if {[namespace which lassign] eq {}} {
    proc lassign {values args} {
        set vlen [llength $values]
        set alen [llength $args]
        # Make lists equal length
        for {set i $vlen} {$i < $alen} {incr i} {
            lappend values {}
        }
        uplevel 1 [list foreach $args $values break]
        return [lrange $values $alen end]
    }
}

jcw: Couldn't resist rewriting in a style I prefer. Chacun son goût - a matter of taste - of course:

if {[info procs lassign] eq {}} {
    proc lassign {values args} {
        while {[llength $values] < [llength $args]} {
            lappend values {}
        }
        uplevel 1 [list foreach $args $values break]
        lrange $values [llength $args] end
    }
}

KPV: But from an efficiency point of view, you're calling llength way too many times--every iteration through the while loop does two unnecessary calls. How about this version -- your style, but more efficient:

if {[namespace which lassign] eq {}} {
    proc lassign {values args} {
        set alen [llength $args]
        set vlen [llength $values]
        while {[incr vlen] <= $alen} {
            lappend values {}
        }
        uplevel 1 [list foreach $args $values break]
        lrange $values $alen end
    }
}

jcw interjects: Keith... are you sure llength is slower? (be sure to test inside a proc body)

kpv continues: It must be my assembler/C background, but I see those function calls, especially the one returning a constant value, and wince. But you're correct, calling llength is no slower than accessing a variable. I guess the byte compiler is optimizing out the actual call.

DKF: llength is indeed bytecoded.

sbron: I see no reason to massage the values list at all. foreach will do exactly the same thing even if the values list is shorter than the args list. I.e. this should be all that's needed:

if {[namespace which lassign] eq {}} {
    proc lassign {values args} {
        uplevel 1 [list foreach $args $values break]
        lrange $values [llength $args] end
    }
}

RS: Yup - that's the minimality I like :^)

This version does not work as described in the documentation for lassign for this case:

% lassign [list] a b c
% set a
can't read "a": no such variable

sbron: You are right, I just noticed that myself too. Improved version:

if {[namespace which lassign] eq {}} {
    proc lassign {values args} {
        uplevel 1 [list foreach $args [linsert $values end {}] break]
        lrange $values [llength $args] end
    }
}

AMG: I prefer to use catch to check for a command's existence. Not only will catch check if the command exists, but it can also check if it supports an ensemble subcommand, an option, or some syntax. Plus it works with interp alias, a clear advantage over info commands.

if {[catch {lassign {}}]} {
    proc lassign {list args} {
        uplevel 1 [list foreach $args [concat $list {{}}] break]
        lrange $list [llength $args] end
    }
}
Open Questions
JMN: TclX doesn't seem to use a separate namespace for its commands, so if we do a 'package require Tclx' in Tcl 8.5+, which version of a command such as lassign will end up being used?

% lassign
wrong # args: should be "lassign list ?varName ...?"
% package require Tclx
8.4
% lassign
wrong # args: lassign list varname ?varname..?

It would seem that Tcl's lassign is replaced with TclX's.

AM: Most definitely, Tcl is very liberal in that respect. You can replace any procedure or compiled command by your own version. That is one reason you should use namespaces. But I think the origin of TclX predates namespaces.
|
http://wiki.tcl.tk/1530
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
I am using Visual C++ and Win32. The program is C++ and does not use forms.
I created "pictures.RESX" file using the Resourcer program. The namespace name is instruction.resource
This created a file which contains all my bitmap files.
I have also managed to include this into my program using the resource facility in Visual C++ . I can view the bitmap files using the resource viewer.
What I am trying to do at the moment is access these bitmap files in the program. I have created a toolbar which I have used standard buttons and bitmaps. I now want to use my own bitmaps for the buttons.
I saw an example where I was told to define the bitmaps as:-
#define IDB_Btn0 3000
TBBUTTON tbb[3];
TBADDBITMAP tbab;
tbab.hInst = HINST_COMMCTRL;
tbab.nID = IDB_STD_SMALL_COLOR;
ZeroMemory(tbb, sizeof(tbb));
//********** NEW FILE BUTTON
tbb[0].iBitmap = IDB_Btn0 ;
tbb[0].fsState = TBSTATE_ENABLED;
tbb[0].fsStyle = TBSTYLE_BUTTON;
tbb[0].idCommand = MNU_INSERT_CONTACTS;
This is the code I am using. What I am not sure of is whether I need to address the bitmap files as part of the resource file "pictures.RESX", which is being compiled into the EXE file.
I have been told that tbab.nID is wrong and that tbb[0].iBitmap should refer to an index of the bitmaps. How should I correct this? Do I need to define the resource filename in the program, or will the names in the resource file be available (Btmp0.bmp, Btmp1.bmp, Btmp2.bmp, etc.)?
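(For reference, loading the strip from the program's own resources might look roughly like this - a sketch only; IDB_TOOLBAR and hToolbar are assumed names, not from this post:)
// Add a bitmap strip from our own module's resources instead of the
// standard system images (HINST_COMMCTRL / IDB_STD_SMALL_COLOR).
TBADDBITMAP tbab;
tbab.hInst = GetModuleHandle(NULL); // this executable's resources
tbab.nID = IDB_TOOLBAR;             // resource ID of the bitmap strip in the .rc file
// wParam = number of button images contained in the strip
int first = (int)SendMessage(hToolbar, TB_ADDBITMAP, 3, (LPARAM)&tbab);
// iBitmap is then a zero-based index into the images just added
tbb[0].iBitmap = first + 0;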
Thanks
Originally Posted by rbettes
am using Visual C++ and win32. The program is C++ and does not use the forms.
So it is not a C++/CLI program. I see you already have a thread on that topic over in the Visual C++ forum (and got an answer there). That is the right place for the thread, so why don't you continue there?
OK, I did post it there, but I haven't got an answer yet with regard to the new question.
Forum Rules
|
http://forums.codeguru.com/showthread.php?503117-RESOLVED-Linker-error-LNK2022-driving-me-crazy!&goto=nextnewest
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
Separating code blocks from results in org-mode
Posted February 08, 2014 at 08:54 AM | categories: org-mode | tags:
Updated February 08, 2014 at 09:15 AM
I often put my code blocks right where I need them in my org documents. A document usually has a section explaining what I want to do, then the code block that implements the idea, followed by the output. Sometimes the code blocks are long, however, and it might be desirable for that code to be in an appendix. 1
Org-mode enables this with #+CALL. For example, I have a function named
circle-area in the appendix of this post that calculates the area of a circle given its radius. The function is "named" by a line like this:
#+name: function-name
I can use the function like this:
#+CALL: circle-area(1)
3.14159265359
That is pretty nice. You can separate the code out from the main document. You still have to put the #+CALL: line in though. It may be appropriate to put a call inline with your text. If you add the following sentence, put your cursor on the call_circle-area call and press C-c C-c, the output is put in verbatim markers right after it.
The area of a circle with unit radius is call_circle-area(1).
The area of a circle with unit radius is
3.14159265359.
Here is another interesting way to do it. We can specify a named results block. Let us consider another function named
hello-block that prints output. We specify a named results block like this:
#+RESULTS: function-name
Now, whenever you execute that block, the results will be put where this line is, like this:
hello John
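Putting the pieces together, the pattern looks roughly like this in org syntax (a sketch; the exact header arguments, such as :var, are assumptions and not copied from this post):
#+name: hello-block
#+BEGIN_SRC python :var name="John"
print 'hello ' + name
#+END_SRC

#+RESULTS: hello-block
: hello John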
These could be useful approaches to making the "top" of your document cleaner, with less code in it. The code of course is still in the document, but at the end, in an appendix for example. This kind of separation might make it a little harder to find the code, and to reevaluate it,2 but it might improve the readability for others.
1 Appendix of code
1.1 Area of a circle
import numpy as np
return np.pi * r**2
1.2 Hello function
print 'hello ' + name
Footnotes:
Copyright (C) 2014 by John Kitchin. See the License for information about copying.
Org-mode version = 8.2.5h
|
http://kitchingroup.cheme.cmu.edu/blog/2014/02/08/Separating-code-blocks-from-results-in-org-mode/
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
Ads
So I want to make a file reader / buffered reader that reads a new line of the text file, let's say every 30 seconds.
Like it reads the first line, waits 30 seconds, reads the next line, and so on. I'm thinking Thread.sleep() might come in handy.
I've searched but can't seem to find an example.
Hope you guys can help me
import java.io.*;

class ReadFileWithTime {
    public static void main(String[] args) {
        try {
            FileReader fr = new FileReader("C:/data.txt");
            BufferedReader br = new BufferedReader(fr);
            String str = "";
            // Read one line every 30 seconds until end of file
            while ((str = br.readLine()) != null) {
                System.out.println(str);
                Thread.sleep(30000); // pause 30,000 ms = 30 s
            }
        } catch (Exception e) {
            System.out.println(e);
        }
    }
}
|
http://roseindia.net/answers/viewqa/Java-Beginners/24145-Reading-string-from-file-timed.html
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
Ford Motor Project Viability NPV payback IRR profit index
Write a proposal that applies the methods for calculating a project's viability, advising Ford Motor Co. on obtaining funding and managing a project budget to purchase equipment that increases worker safety. Review Ford's annual report. In the proposal (no more than 1,300 words):
1. Define business needs in an overview of the project, including high-level deliverables to solve the problem.
2. Describe the net present value (NPV), internal rate of return (IRR), profitability index, and payback methodologies for calculating the project's viability. Examine the strengths and weaknesses of each methodology.
3. Calculate NPV, IRR, profitability index, and payback method. Explain the rationale for accepting or rejecting the project based on its financial viability.
Solution Preview
Here is an outline to assist you in writing your proposal. Use it as a guide as experts are not permitted to write essays.
1. The posting did not include details about the projects, so deliverables cannot be discussed specifically. However, in a write-up, you would clearly mention that worker safety projects should weigh not only the financial returns but also qualitative aspects, such as reputation, health, morale, and legal issues surrounding safety.
2. NPV is the dollar amount of the expected future cash flows discounted back to present dollars. That is, the amount of profit expected in today's dollars, assuming a 10% cost of ...
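For reference (not part of the original excerpt), the standard NPV formula, where r is the discount rate, CF_t the cash flow in year t, and C_0 the initial outlay:

$$\mathrm{NPV} = \sum_{t=1}^{n} \frac{CF_t}{(1+r)^t} - C_0$$

A project is financially acceptable under this rule when NPV > 0; IRR is the value of r that makes the NPV exactly zero.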
Solution Summary
The computations of NPV, IRR, profitability index, and payback are provided in Excel for you. Click in the cells to see each calculation.
|
https://brainmass.com/business/capital-budgeting/504340
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
A CloudKit.Database object represents a public or private database in an app container.
Language
- JavaScript
SDK
- CloudKit JS 1.0+
Overview
Each container has a public database whose data is accessible to all users and, if the current user is signed in, a private database whose data is accessible only by the current user. A database object applies operations to records, subscriptions, and zones within a database.
You do not create database objects yourself, nor should you subclass the CloudKit.Database class. You get a database object using either the publicCloudDatabase or privateCloudDatabase property of a CloudKit.Container object, and you get a container object using methods in the CloudKit namespace. For example, use CloudKit.getDefaultContainer to get the default container object.

var container = CloudKit.getDefaultContainer();
var publicDatabase = container.publicCloudDatabase;
var privateDatabase = container.privateCloudDatabase;
Read access to the public database doesn't require that the user sign in. Your web app may fetch records and perform queries on the public database, but by default your app may not save changes to the public database without a signed-in user. Access to the private database requires that the user sign in. To determine whether a user is authenticated, see the setUpAuth method in the CloudKit.Container class.
The asynchronous methods in this class return a Promise object that resolves when the operation completes or is rejected due to an error. For a description of the Promise class returned by these methods, go to Mozilla Developer Network: Promise.
This class is similar to the CKDatabase class in the CloudKit framework.
Creating Your Schema
Before you can access records, you must create a schema from your native app using just-in-time schema (see Creating a Database Schema by Saving Records) or using CloudKit Dashboard (see Using CloudKit Dashboard to Manage Databases). Use CloudKit Dashboard to verify that the record types and fields appear in your app’s containers before you test your JavaScript code.
|
https://developer.apple.com/reference/cloudkitjs/cloudkit.database
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
Polymorphism:
In object-oriented programming, polymorphism is a concept that allows objects to be processed differently depending on their data type or class. Polymorphism lets derived (child) classes redefine methods inherited from their base class. Now we introduce early binding and late binding.
Binding is the process of converting variable names and function names into machine-language addresses. There are two types of binding in the C++ programming language: early binding and late binding. Early binding means that the compiler can convert variable and function names into machine-language addresses directly.
Every function has a distinct address in machine language, so the compiler can jump straight to that function. A direct function call is resolved by early binding.
Late binding is also called dynamic binding. In C++, dynamic means at run time. When the compiler cannot know which function will be called until run time, that is late binding. In C++, late binding is achieved through function pointers and virtual functions.
A function pointer is a pointer that points to a function instead of a variable. The virtual function example below shows late binding in action.
Virtual Function
A virtual function is a function that is declared in the base class using the keyword virtual. Declaring a function virtual in the base class, with overriding versions in the derived classes, tells the compiler not to use static linkage for this function. We can then call the function anywhere in the program through a base-class pointer. Consider the following example.
#include <iostream>
using namespace std;

class base
{
public:
    virtual void show();
};

void base::show()
{
    cout << "base class";
}

class derived : public base
{
public:
    void show();
};

void derived::show()
{
    cout << "derived class";
}

int main()
{
    derived d;
    base *b = &d;  // base-class pointer to a derived object
    b->show();     // late binding: calls derived::show()
    return 0;
}
Working of the above program:
In the main function, a base-class pointer is used to call the function show() through late binding. Because we are using a pointer, we use the arrow operator (->) to call the function. The function is called through the base-class pointer. In the line base *b = &d; the pointer b holds the address of the derived-class object d. Therefore the derived class's show() is called, and the program prints: derived class
|
http://www.tutorialology.com/cplusplus/polymorphism/
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
What are Fine-Grained Notifications?
Prior to Realm Objective-C & Swift 0.99, you could observe changes on your Results, List, or AnyRealmCollection types by adding a notification block. Any time that any of the data you were watching changed, you would get notified and could trigger an update to your UI.
A lot of people asked for more precise information about what exactly changed in the underlying data so they could implement more flexible updates to their app's UI. In 0.99, we gave them just that power, by deprecating the existing addNotificationBlock API and replacing it with a new one:
func addNotificationBlock(block: (RealmCollectionChange<T>) -> Void) -> NotificationToken
The new API provides information not only about the change in general, but also about the precise indexes that have been inserted into the data set, deleted from it, or modified.
The new API takes a closure which receives a RealmCollectionChange. This closure will be called whenever the data you are interested in changes. You can read more about using this new method in our docs on Collection Notifications, or simply follow this tutorial through for a practical example!
Building a GitHub Repository List App
In this post we’re going to look into creating a small app that shows all the GitHub repositories for a given user. The app will periodically ping GitHub’s JSON API and fetch the latest repo data, like the amount of stars and the date of the latest push.
If you want to dig through the complete app’s source code as you read this post, go ahead and clone the project .
The app is quite simple and consists of two main classes – one called GitHubAPI, which periodically fetches the latest data from GitHub, and the other is the app's only view controller, which displays the repos in a table view.
Naturally, we'll start by designing a Repo model class in order to be able to persist repositories in the app's Realm:
import RealmSwift

class Repo: Object {
    //MARK: properties
    dynamic var name = ""
    dynamic var id: Int = 0
    dynamic var stars = 0
    dynamic var pushedAt: NSTimeInterval = 0

    //MARK: meta
    override class func primaryKey() -> String? {
        return "id"
    }
}
The class stores four properties: the repo name, the number of stars, the date of the last push, and, last but not least, the repo's id, which is the primary key for the Repo class.
GitHubAPI will periodically re-fetch the user's repos from the JSON API. The code loops over all JSON objects and for each object checks whether the id already exists in the current Realm, then updates or inserts the repo accordingly:
if let repo = realm.objectForPrimaryKey(Repo.self, key: id) {
    //update - we'll add this later
} else {
    //insert values fetched from JSON
    let repo = Repo()
    repo.name = name
    repo.stars = stars
    repo.id = id
    repo.pushedAt = NSDate(fromString: pushedAt,
        format: .ISO8601(.DateTimeSec)).timeIntervalSinceReferenceDate
    realm.add(repo)
}
This piece of code inserts all new repos that GitHubAPI fetches from the web into the app's Realm.
Next we'll need to show all Repo objects in a table view. We'll add a Results<Repo> property to ViewController:
let repos: Results<Repo> = {
    let realm = try! Realm()
    return realm.objects(Repo).sorted("pushedAt", ascending: false)
}()
var token: NotificationToken?
repos defines a result set of all Repo objects sorted by their pushedAt property, effectively ordering them from the most recently updated repo to the one getting the least love. :broken_heart:
The view controller will need to implement the basic table view data source methods, but those are straightforward so we won’t go into any details:
extension ViewController: UITableViewDataSource {
    func tableView(tableView: UITableView, numberOfRowsInSection section: Int) -> Int {
        return repos.count
    }

    func tableView(tableView: UITableView, cellForRowAtIndexPath indexPath: NSIndexPath) -> UITableViewCell {
        let repo = repos[indexPath.row]
        let cell = tableView.dequeueReusableCellWithIdentifier("RepoCell") as! RepoCell
        cell.configureWith(repo)
        return cell
    }
}
Inserting New Repos
Next, we'll need to react to updates: In viewDidLoad() we'll add a notification block to repos, using the new (bam! :boom:) fine-grained notifications:
token = repos.addNotificationBlock { [weak self] (changes: RealmCollectionChange) in
    guard let tableView = self?.tableView else { return }

    switch changes {
    case .Initial:
        tableView.reloadData()
    case .Update(let results, let deletions, let insertions, let modifications):
        tableView.beginUpdates()
        //re-order repos when new pushes happen
        tableView.insertRowsAtIndexPaths(
            insertions.map { NSIndexPath(forRow: $0, inSection: 0) },
            withRowAnimation: .Automatic)
        tableView.endUpdates()
    case .Error(let error):
        print(error)
    }
}
This is quite a long piece of code, so let's look at what's happening in there. We add a notification block to repos and create a local constant tableView to allow us to work with the controller's table view.
The key to making the most of fine-grained notifications is the changes parameter that you get in your notification block. It is a RealmCollectionChange enumeration and there are three different values:
- .Initial(let result) – This is the very first time the block is called; it's the initial data you get from your Results, List, etc. It does not contain information about any updates, because you still don't have previous state – in a sense all the data has just been "inserted". In the example above, we don't need to use the Results object itself – instead we simply call tableView.reloadData() to make sure the table view shows what we need.
- .Update(let result, let insertions, let deletions, let updates) – This is the case you get each time after the initial call. The last three parameters are [Int], arrays of integers, which tell you which indexes in the data set have been inserted, deleted, or modified.
- .Error(let error) – This is everyone's least favorite case – something went wrong when refreshing the data set.
Since we're looking into how to handle fine-grained notifications, we are interested in the line that goes over insertions and adds the corresponding rows into the table view:
tableView.insertRowsAtIndexPaths(
    insertions.map { NSIndexPath(forRow: $0, inSection: 0) },
    withRowAnimation: .Automatic)
We convert (or map, if you will) insertions from [Int] to [NSIndexPath] and pass it to insertRowsAtIndexPaths(_:withRowAnimation:). That's all it takes to have the table view update with a nice animation!
When we run the app for the very first time it will fall into the .Initial case, but since there won't be any Repo objects yet (because we haven't fetched anything yet), tableView.reloadData() will not do anything visible on screen.
Each time you start the app after the very first time, there will be stored Repo objects, so initially the app will show the existing data and will update it with the latest values when it fetches the latest JSON from the web.
When GitHubAPI fetches the user's repos from the API, the notification block will be called again and this time insertions will contain all the indexes where repos were inserted, much like so:
[0, 1, 2, 3, 4, 5, 6, etc.]
The table view will display all inserted rows with a nice animation:
That's neat, right? And since GitHubAPI is periodically fetching the latest data, when the user creates a new repo it will pop up shortly in the table view like so (i.e., it comes as another insertion update when it's saved into the app's Realm):
Re-ordering the list as new data comes in
repos is ordered by pushedAt, so any time the user pushes to any of their repositories, that particular repo will move to the top of the table view.
When the order of the data set elements changes, the notification block will get called with both insertions and deletions indexes:
insertions = [0]
deletions = [5]
What happened in the example above is that the element that used to be at position 5 (don’t forget the repos are ordered by their last push date) moved to position 0. This means we will have to update the table view code to handle both insertions and deletions:
tableView.insertRowsAtIndexPaths(
    insertions.map { NSIndexPath(forRow: $0, inSection: 0) },
    withRowAnimation: .Automatic)
tableView.deleteRowsAtIndexPaths(
    deletions.map { NSIndexPath(forRow: $0, inSection: 0) },
    withRowAnimation: .Automatic)
Do you see a pattern here? The parameters you get in the .Update case suit the UITableView API perfectly. #notacoincidence
With code to handle both insertions and deletions in place, we only need to look into updating the stored repos with the latest JSON data and reflect the changes in the UI.
Back in GitHubAPI, we will need our code to update or insert depending on whether a repo with the given id already exists. The initial code that we had turns into:
if let repo = realm.objectForPrimaryKey(Repo.self, key: id) {
    //update - this is new!
    let lastPushDate = NSDate(fromString: pushedAt, format: .ISO8601(.DateTimeSec))
    if repo.pushedAt.distanceTo(lastPushDate.timeIntervalSinceReferenceDate) > 1e-16 {
        repo.pushedAt = lastPushDate.timeIntervalSinceReferenceDate
    }
    if repo.stars != stars {
        repo.stars = stars
    }
} else {
    //insert - we had this before
    let repo = Repo()
    repo.name = name
    repo.stars = stars
    repo.id = id
    repo.pushedAt = NSDate(fromString: pushedAt,
        format: .ISO8601(.DateTimeSec)).timeIntervalSinceReferenceDate
    realm.add(repo)
}
This code checks whether pushedAt in the received JSON data is newer than the date we have in Realm and, if so, updates the pushed date on the stored repo.
(It also checks if the star count changed and updates the repo accordingly. We'll use this info in the next section.)
Now, any time the user pushes to one of their repositories on GitHub, the app will re-order the list accordingly (watch the jazzy repo below):
You can do the re-ordering in a more interesting way in certain cases. If you are sure that a certain pair of insert and delete indexes is actually an object being moved across the data set, look into UITableView.moveRowAtIndexPath(_:toIndexPath:) for an even nicer move animation.
Refreshing table cells for updated items
If you are well versed in the UITableView API, you probably already guessed that we could simply pass the modifications array to UITableView.reloadRowsAtIndexPaths(_:withRowAnimation:) and have the table view refresh rows that have been updated.
However… that’s too easy. Let’s spice it up a notch and write some custom update code!
When the star count on a repo changes the list will not re-order, thus it will be difficult for the user to notice the change. Let’s add a smooth flash animation on the row that got some stargazer love, to attract the user’s attention. :sparkles:
In our custom cell class we’ll need a new method:
func flashBackground() {
    backgroundView = UIView()
    backgroundView!.backgroundColor = UIColor(red: 1.0, green: 1.0, blue: 0.7, alpha: 1.0)
    UIView.animateWithDuration(2.0, animations: {
        self.backgroundView!.backgroundColor = UIColor.whiteColor()
    })
}
That new method replaces the cell background view with a bright yellow view and then tints it slowly to white.
Let's call that new method on any cells that need to display an updated star count. Back in the view controller we'll add, under the .Update case:
... //initial case up here
case .Update(let results, let deletions, let insertions, let modifications):
    ... //insert & delete rows
    for row in modifications {
        let indexPath = NSIndexPath(forRow: row, inSection: 0)
        let cell = tableView.cellForRowAtIndexPath(indexPath) as! RepoCell
        let repo = results[indexPath.row]
        cell.configureWith(repo)
        cell.flashBackground()
    }
    break
... //error case down here
We simply loop over the modifications array and build the corresponding table index paths to get each cell that needs to refresh its UI.
We fetch the Repo object from the updated results and pass it into the configureWith(_:) method on the cell (which just updates the text of the cell labels). Finally we call flashBackground() on the cell to trigger the tint animation.
Oh hey – somebody starred one of those repos as I was writing this post:
(OK, it was me who starred the repo – but my point remains valid. :grin:)
Conclusion
As you can see, building a table view that reacts to changes in your Realm is pretty simple. With fine-grained notifications, you don’t have to reload the whole table view each time. You can simply use the built-in table APIs to trigger single updates as you please.
Keep in mind that Results, List, and other Realm collections are designed to observe changes for a list of a single type of objects. If you'd like to try building a table view with fine-grained notifications with more than one section, you might run into complex cases where you will need to batch the notifications so that you can update the table with a single call to beginUpdates() and endUpdates().
If you want to give the app from this post a test drive, you can clone it from this repo .
We are excited to have released the most demanded Realm feature of all time, and we're looking forward to your feedback! Tell us what you think on Twitter.
|
http://www.shellsec.com/news/16074.html
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
I'm not really too sure about using Python and its object-oriented features, as I'm better with Java.
I'm not sure whether to put the functions inside the classes, as I can't really see this making a difference. I will post my code, and any changes or guidance would be great (just to be told if I'm on the right path etc.)
def main():
    menu()

class Film(object):
    def __init__(self, year, fans):
        self.year = year
        self.fans = fans

class Drama(Film):
    def __init__(self, film, director, actor, year, fans):
        Film.__init__(self, year, fans)
        self.film = film
        self.director = director
        self.actor = actor

class Documentary(Film):
    def __init__(self, narrator, subject, year, fans):
        Film.__init__(self, year, fans)
        self.narrator = narrator
        self.subject = subject

def menu():
    print "Hello, welcome."
    userName = raw_input("What is your name?: ")
    done = False
    while not done:
        print "Please select an option. 1. Add a Film 2. List Films 3. Find film by release date 8. Exit"
        inputCommand = input("Enter here: ")
        if inputCommand == 1:
            filmType = input("Press 1 for Drama. Press 2 for Documentary: ")
            if filmType == 1:
                addDrama()
            else:
                addDoc()
        elif inputCommand == 2:
            listFilms()
        elif inputCommand == 3:
            releaseDate()
        elif inputCommand == 8:
            done = True

def addDrama():
    film = raw_input("Enter name of film: ")
    director = raw_input("Enter name of director: ")
    actor = raw_input("Enter name of actor: ")
    year = input("Enter year of film: ")
    dram = Drama(film, director, actor, year, 0)
    # open in append mode so earlier entries are not overwritten
    f = open("database.txt", "a")
    f.write("%s, %s, %s, %d\n" % (dram.film, dram.director, dram.actor, dram.year))
    f.close()

def addDoc():
    narrator = raw_input("Enter name of narrator: ")
    subject = raw_input("Enter subject of documentary: ")
    year = input("Enter year of film: ")
    doc = Documentary(narrator, subject, year, 0)
    f = open("database.txt", "a")
    f.write("%s, %s, %d\n" % (doc.narrator, doc.subject, doc.year))
    f.close()

def listFilms():
    pass  # TODO: read database.txt and print each film

def releaseDate():
    pass  # TODO: filter films by year

main()
|
https://www.daniweb.com/programming/software-development/threads/191358/python-object-orienated
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
I ran across this puzzle yesterday: given a number, construct a postfix expression that uses only single-digit operands and evaluates to that number. For example:
66*
94*
92+
74+
554**1+
It didn’t take long to come up with a recursive function that meets the requirements. Here it is in C#:
string GetExpression(int val)
{
if (val < 10)
{
return val.ToString();
}
int quo, rem;
// first see if it's evenly divisible
for (int i = 9; i > 1; --i)
{
quo = Math.DivRem(val, i, out rem);
if (rem == 0)
{
if (val >= 90 || (val < 90 && quo <= 9))
{
// value is (i * quo)
return i + GetExpression(quo) + "*";
}
}
}
quo = Math.DivRem(val, 9, out rem);
// value is (9 * quo) + rem
// optimization reduces (9 * 1) to 9
var s1 = "9" + ((quo == 1) ? string.Empty : GetExpression(quo) + "*");
var s2 = GetExpression(rem) + "+";
return s1 + s2;
}
The overall idea here is that first I try to divide the number evenly by a one-digit number. If it does divide evenly, then the result is (x * quotient), where x is the divisor. If the number isn’t evenly divisible by a one-digit number, then the function divides the value by nine and generates (9 * quotient) + remainder.
I made one simple optimization based on the observation that any number in the range 0-90 inclusive can be expressed either as the product of two single-digit numbers, or with an expression of the form (9 * x) + y, where x and y are one-digit numbers. 31, for example, is (9 * 3) + 4, or 93*4+ in postfix.
The function above generates the shortest possible expressions for all numbers in the range 0 through 90. I think its expressions for 91 through 101 are as short as possible, if non-obvious. But I know that it doesn’t always generate optimum expressions. For example, the generated expression for 10,000 is 85595*5+*** (it works out to (8 * 1250)). But 10,001 results in 99394*5+**4+*2+ (postfix for (9 * 1111) + 2). A shorter expression would be (10000 + 1) or, in postfix, 85595*5+***1+.
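(Not from the original post: as a quick sanity check, here is a small Python sketch of a postfix evaluator you could use to verify candidate expressions like the ones above.)

def eval_postfix(expr):
    """Evaluate a postfix expression whose operands are single digits."""
    stack = []
    for ch in expr:
        if ch.isdigit():
            stack.append(int(ch))
        elif ch in "+*":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if ch == "+" else a * b)
        else:
            raise ValueError("unexpected character: %r" % ch)
    if len(stack) != 1:
        raise ValueError("malformed expression")
    return stack[0]

print(eval_postfix("93*4+"))          # 31
print(eval_postfix("85595*5+***"))    # 10000
print(eval_postfix("85595*5+***1+"))  # 10001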
I’m at a loss as to how I would go about ensuring an optimum expression for any positive integer. I know that I could add more special cases, but I’m already disappointed by the existing special case in my algorithm for numbers 0 through 90. Even the check for even divisibility kind of bothers me, but that’s the only way I could find to make things reasonable at all.
If the puzzle interests you, I’d be interested in any insights.
I don’t travel as much as I used to, so I’m not up on all the latest changes. The last time I traveled by air was two years ago, and somebody else made the reservations. I don’t remember the last time I booked a flight.
This evening I was making reservations to go to southern California. I typically just go to Southwest Airlines because their rates are competitive if not always the lowest, and I always get good service. But seeing the cost of the flight I thought I’d shop around. Several carriers showed much lower prices for the trip I wanted. Until I took into account restrictions, extra charges, etc. Their Web sites don’t make it easy for me to feel confident that I’m getting what I think I’m getting, and by the time I added everything up it wasn’t a whole lot cheaper than Southwest.
I entered my Rapid Rewards account (Southwest’s frequent flier program) with my flight information so I’d get points for the flight. Why not, right? But then I couldn’t check out. You see, my Rapid Rewards account has my name as Jim Mischel. But new (new to me, at least) government regulations (“Safe Travel” or some such) insist that the name on the ticket match the name on my driver’s license, which is James Mischel. Uff. Southwest’s helpful Web site suggested that I change the name on my Rapid Rewards account.
But the Rapid Rewards account information page says:
For security purposes, we do not allow name changes online. Instead, please forward your Rapid Rewards card, along with photocopies of legal documentation (ex. driver license, marriage certificate, etc.) and an informal letter indicating your legal name, to Rapid Rewards, P.O. Box 36657, Dallas, TX 75235.
Just shoot me.
The only solution I could come up with was to remove the Rapid Rewards number. So I won’t get my points for the flight. Probably wouldn’t matter, anyway; I don’t fly enough for the points to mean anything.
Ain’t technology wonderful?
A carving friend who vacations in Colorado brought me a piece of Bristlecone pine. Fred is quite an accomplished carver who I think started about the time I did. He’s concentrated on carving faces, mostly out of Aspen. He also does stylized carvings from many different types of wood. The sycamore gecko I carved is from one of his patterns, and his cedar armadillo was the inspiration for my mesquite armadillo, although I didn’t use his pattern.
The Bristlecone pine stayed on the floor of my truck for a couple of months while I tried to figure out what to do with it. I thought about carving one of my birds to add to the collection, but for some reason couldn’t bring myself to chop up that piece of wood just for a bird. Last week I finally figured it out: I’d carve a bird in the branch. It’s a kind of carving I hadn’t attempted before.
I think what convinced me to try it was the fragment of a small limb that was sticking out from what was otherwise a fairly straight and boring branch. I decided to use that limb as part of the bird’s tail. I wish I’d taken pictures from start to finish. Unfortunately, I just got two after spending some time roughing it out.
As I said, this was new territory for me. I’d always used a bandsaw cutout for my carvings, except for the little dogs and a few attempts at faces. Carving a figure that remains part of the larger piece of wood is quite different. And difficult because I can’t just turn the thing over and carve from a different direction. The uncut part of the branch often got in the way. Detailing the beak was particularly difficult because I couldn’t easily get the Foredom in there, even with the detailing handpiece.
Roughing out took me an hour or two on Saturday. Sunday I spent two or three more hours roughing out and then detailing the bird. The result is a little thin and not quite symmetrical, but I thought it turned out pretty nice.
The bird figure is very smooth and sanded with 220 grit. The rest of the carved wood is sanded with 120 grit, and not perfectly smooth. I left some dips. I thought about trying to texture it like a nest, but didn’t have a good idea of what that should look like. Rather than do something that detracted from the carving, I just sanded and called it good enough.
The finish is two coats of Watco Natural Danish Oil, which I applied to the entire piece–including the uncarved wood. I haven't yet decided if I should add a couple coats of a spray polyurethane to give it a little shine. We'll see.
I made plenty of mistakes on this piece, especially in the bird shape. But I understand how and why I made them, and figure I can do better next time. I especially liked doing this with the bird shape because it’s a familiar subject, just rendered a little differently. I find trying to do something familiar in a new way to be an effective learning experience.
All things considered, it was a fun project and a good learning experience. And I like the way it turned out.
Continuing with my implementation of a D-ary heap, after a rather long break during which I was busy with other things . . .
The hardest part of implementing the DHeap class was getting the constructors right. Once those are right, the rest of the class just falls into place. The constructors were difficult because there are four different parameters that control their behavior, and I wanted to avoid having to create a different constructor for every possible combination of parameters. With a little thought and judicious use of default parameters, I was able to do everything with three constructors: one private and two internal.
Internally, the DHeap class depends on four fields:
// Heap item comparer
private readonly IComparer<T> _comparer;
// Result comparer (for Min-heap and Max-heap)
private readonly Func<int, bool> _resultComparer;
// Number of children for each node.
private readonly int _ary;
// List that holds the heap items
private readonly List<T> _items;
I used List<T> here rather than an array (i.e. T[]) to take advantage of the automatic growth that the List class provides. I could have implemented my own dynamic array and saved a few cycles, but the performance gain really isn’t worth the extra effort.
The _comparer is an IComparer<T> implementation that compares two items. Clients can pass a custom comparer if their type doesn’t implement IComparable<T>, or if they want to compare items differently than the default comparer does. If no custom comparer is supplied, the constructors will use Comparer<T>.Default.
We discussed the _resultComparer in a previous post. It’s there so that we can use the same code for a Min-heap and for a Max-heap. The MinDHeap<T> and MaxDHeap<T> derived classes pass the appropriate comparison function as a parameter to the constructor.
The _ary property determines the “arity” of the heap. Passing 2 will create a binary heap, passing 4 will create a 4-heap, etc.
The constructors allow the client to supply values for _comparer, _ary, and _resultComparer. In addition, clients can specify a capacity, which determines the initial capacity of the heap. Doing so prevents unnecessary reallocations in the same way that the initialCapacity parameter to the List constructor works.
Given that, the class with its constructors and internal fields looks like this:
[DebuggerDisplay("Count = {Count}")]
public class DHeap<T>: IHeap<T>
{
private readonly IComparer<T> _comparer;
private readonly Func<int, bool> _resultComparer;
private readonly int _ary;
private readonly List<T> _items = new List<T>();
private DHeap(int ary, Func<int, bool> resultComparer,
IComparer<T> comparer = null)
{
_ary = ary;
_comparer = comparer ?? Comparer<T>.Default;
_resultComparer = resultComparer;
}
internal DHeap(int ary, Func<int, bool> resultComparer, int capacity,
IComparer<T> comparer = null)
: this(ary, resultComparer, comparer)
{
if (ary < 2)
{
throw new ArgumentOutOfRangeException("ary",
"must be greater than or equal to two.");
}
if (capacity < 0)
{
throw new ArgumentOutOfRangeException("capacity",
"must be greater than zero.");
}
_items = new List<T>(capacity);
}
internal DHeap(int ary, Func<int, bool> resultComparer,
IEnumerable<T> collection, IComparer<T> comparer = null)
: this(ary, resultComparer, comparer)
{
if (ary < 2)
{
throw new ArgumentOutOfRangeException("ary",
"must be greater than or equal to two.");
}
if (collection == null)
{
throw new ArgumentNullException("collection",
"may not be null.");
}
_items = new List<T>(collection);
Heapify();
}
}
DHeap<T> is a base class that is intended to be used only by the MinDHeap<T> and MaxDHeap<T> derived classes. The private constructor sets the common properties from the supplied parameters, and the two internal constructors are called by the derived classes' constructors. Because there are no public constructors, no other classes can derive from this one.
I considered making the constructors public so that any client could derive from this class, but doing so would necessitate making most of the methods virtual, and the private fields protected so that derived classes could manipulate them. But if a derived class wanted to override, say, Insert, it would likely be changing almost all of the implementation. At that point, is it really a derived class or a new class entirely? It seemed more reasonable to provide the IHeap<T> interface for those who want to create a different type of heap.
The ary and resultComparer parameters are common to both of the accessible constructors, and must be specified. The comparer parameter also is common to both, but it can be defaulted (i.e. not supplied). The only real difference between the two constructors is that one takes an initial capacity and the other takes a collection from which the heap is initialized. Both constructors call the private constructor to set the common values, and then initialize the heap.
Clients don’t create instances of DHeap<T>. Instead, they create instances of MinDHeap<T> and MaxDHeap<T>. Those classes each consist of a static comparison function (the result comparer) and three constructors that simply chain to the DHeap<T> constructors.
public class MinDHeap<T> : DHeap<T>
{
[MethodImpl(MethodImplOptions.AggressiveInlining)]
static private bool MinCompare(int r)
{
return r > 0;
}
/// <summary>
/// Initializes a new instance that is empty
/// and has the specified initial capacity.
/// </summary>
public MinDHeap(int ary, int capacity, IComparer<T> comparer = null)
: base(ary, MinCompare, capacity, comparer)
{
}
/// <summary>
/// Initializes a new instance that is empty
/// and has the default initial capacity.
/// </summary>
public MinDHeap(int ary, IComparer<T> comparer = null)
: this(ary, 0, comparer)
{
}
/// <summary>
/// Initializes a new instance that contains
/// elements copied from the specified collection.
/// </summary>
public MinDHeap(int ary, IEnumerable<T> collection,
IComparer<T> comparer = null)
: base(ary, MinCompare, collection, comparer)
{
}
}
public class MaxDHeap<T> : DHeap<T>
{
[MethodImpl(MethodImplOptions.AggressiveInlining)]
static private bool MaxCompare(int r)
{
return r < 0;
}
/// <summary>
/// Initializes a new instance that is empty
/// and has the specified initial capacity.
/// </summary>
public MaxDHeap(int ary, int capacity, IComparer<T> comparer = null)
: base(ary, MaxCompare, capacity, comparer)
{
}
/// <summary>
/// Initializes a new instance that is empty
/// and has the default initial capacity.
/// </summary>
public MaxDHeap(int ary, IComparer<T> comparer = null)
: this(ary, 0, comparer)
{
}
/// <summary>
/// Initializes a new instance that contains
/// elements copied from the specified collection.
/// </summary>
public MaxDHeap(int ary, IEnumerable<T> collection, IComparer<T> comparer = null)
: base(ary, MaxCompare, collection, comparer)
{
}
}
I marked the result comparison methods (MinCompare and MaxCompare) with an attribute that tells the compiler to inline them aggressively. Considering that comparison is what determines how fast the heap runs, it pays to make those comparisons as inexpensive as possible, within certain limits. Adding the attribute provides a measurable performance gain at very little cost in code size or complexity.
Clients that want to create a binary Min-heap of integers can write:
var myHeap = new MinDHeap<int>(2);
To create a trinary Max-heap of some custom type, using a custom comparison function:
var myHeap = new MaxDHeap<MyType>(3, 1000, new CustomComparer());
That assumes, of course, that somewhere there is a class called CustomComparer that implements IComparer<MyType>.
Next time I’ll complete implementation of the DHeap<T> class and give you a link to where you can download the code and some examples.
Text encoding is a programmer’s nightmare. Life was much simpler when I didn’t have to know anything other than the ASCII character set. Even having to deal with extended ASCII–characters with values between 128 and 255–wasn’t too bad. But when we started having to work with international character sets, multibyte character sets, and the Unicode variants, things got messy quick. And they’re going to remain messy for a long time.
When I wrote a Web crawler to gather information about media files on the Web, I spent a lot of time making it read the metadata (ID3 information) from MP3 music files. Text fields in ID3 Version 2 are marked with an encoding byte that says which character encoding is used. The recognized encodings are ISO-8859-1 (an 8-bit character set for Western European languages, including English), two 16-bit Unicode encodings, and UTF-8. Unfortunately, many tag editors would write the data in the computer’s default 8-bit character set (Cyrillic, for example) and mark the fields as ISO-8859-1.
That’s not a problem if the resulting MP3 file is always read on that one computer. But then people started sharing files and uploading them to the Web, and the world fell apart. Were I to download a file that somebody in Russia had added tags to, I would find the tags unreadable because I’d be trying to interpret his Cyrillic character set as ISO-8859-1. The result is commonly referred to as mojibake.
The Cyrillic-to-English problem isn’t so bad, by the way, but when the file was originally written with, say, ShiftJIS, it’s disastrous.
My Web crawler would grab the metadata, interpret it as ISO-8859-1, and save it to our database in UTF-8. We noticed early on that some of the data was garbled, but we didn’t know quite what the problem was or how widespread it was. Because we were a startup with too many things to do and not enough people to do them, we just let it go at the time, figuring we’d get back to it.
When we did get back to it, we discovered that we had two problems. First, we had to figure out how to stop getting mangled data. Second, we had to figure a way to un-mangle the millions of records we’d already collected.
To correct the first problem, we built trigram models for a large number of languages and text encodings. Whenever the crawler ran across a field that was marked as containing ISO-8859-1, it would run the raw uncoded bytes through the language model to determine the likely encoding. The crawler then used that encoding to interpret the text. That turned out to be incredibly effective, and almost eliminated the problem of adding new mangled records.
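(To make the approach concrete, here is my rough Python sketch of the idea, not their production code: score each candidate encoding by how plausible the decoded text looks under per-language trigram models. The model dictionaries, and the candidate encoding list, are assumptions; the models would be built elsewhere from training text.)

import math

def trigrams(text):
    """Yield successive three-character windows of the text."""
    for i in range(len(text) - 2):
        yield text[i:i + 3]

def log_score(text, model):
    """Log-likelihood of the text under one language's trigram model.
    model maps trigram -> probability; unseen trigrams get a small floor."""
    floor = 1e-9
    return sum(math.log(model.get(tri, floor)) for tri in trigrams(text))

def guess_encoding(raw_bytes, models, candidates=("iso-8859-1", "cp1251", "shift_jis")):
    """Decode under each candidate encoding; keep the one whose text
    some language model finds most plausible."""
    best_enc, best = None, float("-inf")
    for enc in candidates:
        try:
            text = raw_bytes.decode(enc)
        except UnicodeDecodeError:
            continue
        s = max(log_score(text, m) for m in models.values())
        if s > best:
            best_enc, best = enc, s
    return best_enc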
Fixing the existing mangled records turned out to be a more difficult proposition. The conversion from mangled ISO-8859-1 to UTF-8 resulted in lost data in some circumstances, and we couldn’t do anything about that. In other cases the conversion resulted in weird accented characters intermixed with what looked like normal text. It was hard to tell for sure sometimes because none of us knew Korean or Chinese or Greek or whatever other language the original text was written in. Un-mangling the text turned out to be a difficult problem that we never fully solved in the general case. We played with two potential solutions.
The first step was the same for both solutions: we ran text through the trigram model to determine if it was likely mangled, and what the original language probably was.
For the first solution attempt, we’d then step through the text character-by-character, using the language model to tell us the likelihood of the trigrams that would appear at that position and compare it against the trigram that did appear. If we ran across a trigram that was very unlikely or totally unknown (for example, the trigram “zqp” is not likely to occur in English), we’d replace the offending trigram with one of the highly likely trigrams. This required a bit of backtracking and it would often generate some rather strange results. The solution worked, after a fashion, but not well enough.
For the second attempt we selected several mangled records for every language we could identify and then re-downloaded the metadata. By comparing the mangled text with the newly downloaded and presumably unmangled text, we created a table of substitutions. So, for example, we might have determined that "zqp" in mangled English text should be replaced by the word "foo." (Not that we had very much mangled English text.) We would then go through the database, identify all the mangled records, and do the substitutions as required. A sketch of that substitution pass appears below.
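(Again a sketch of my own, using the "zqp" to "foo" example from above: once the substitution table is learned, applying it is just longest-match-first replacement.)

# Hypothetical table learned by comparing mangled and re-downloaded text.
substitutions = {"zqp": "foo"}

def unmangle(text, table):
    """Apply learned replacements, longest keys first so that shorter
    fragments do not clobber longer matches."""
    for bad in sorted(table, key=len, reverse=True):
        text = text.replace(bad, table[bad])
    return text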
That approach was much more promising, but it didn’t catch everything and we couldn’t recover data that was lost in the original translation. Ultimately, we decided that it wasn’t a good enough solution to go through the effort of applying it to the millions of mangled records.
Our final solution was extremely low-tech. We generated a list of the likely mangled records and created a custom downloader that went out and grabbed those records again. It took a lot longer, but the result was about as good as we were likely to get.
|
http://blog.mischel.com/2013/11/
|
CC-MAIN-2017-22
|
en
|
refinedweb
|
THE HUMAN USE OF HUMAN BEINGS
This is one of the fundamental documents of our time, a period characterized by the concepts of 'information' and 'communications'. Norbert Wiener, a child prodigy and a great mathematician, coined the term 'cybernetics' to characterize a very general science of 'control and communication in the animal and machine'. It brought together concepts from engineering, the study of the nervous system and statistical mechanics (e.g. entropy). From these he developed concepts that have become pervasive through science (especially biology and computing) and common parlance: 'information', 'message', 'feedback' and 'control'. His cautionary remarks are as relevant now as they were when the book first appeared in the 1950s. Norbert Wiener (1894-1964), Professor of Mathematics at the Massachusetts Institute of Technology from 1919 onwards, wrote numerous books on mathematics and engineering. Having developed methods useful to the military during World War Two, he later refused to do such work during the Cold War, while proposing non-military models of cybernetics.
THE HUMAN USE OF HUMAN BEINGS
CYBERNETICS AND SOCIETY
NORBERT WIENER
With a new Introduction by Steve J. Heims
'an association in which the free development of each is the condition of the free development of all'
FREE ASSOCIATION BOOKS / LONDON / 1989
Published in Great Britain 1989 by Free Association Books, 26 Freegrove Road, London N7 9RQ. First published 1950; 1954, Houghton Mifflin. Copyright 1950, 1954 by Norbert Wiener. Introduction Steve J. Heims 1989. British Library Cataloguing in Publication Data: Wiener, Norbert, The human use of human beings: cybernetics and society. 1. Cybernetics. Sociological perspectives. I. Title. 306'.46. ISBN. Printed and bound in Great Britain by Bookcraft, Midsomer Norton, Avon.
To the memory of my father LEO WIENER, formerly Professor of Slavic Languages at Harvard University, my closest mentor and dearest antagonist
ACKNOWLEDGEMENTS
Part of a chapter has already appeared in the Philosophy of Science. The author wishes to acknowledge the permission which the publisher of this journal has given him to reprint the material.
CONTENTS
BIOGRAPHICAL NOTES
INTRODUCTION BY STEVE J. HEIMS
APPENDIX
Preface
I Cybernetics in History
II Progress and Entropy
III Rigidity and Learning: Two Patterns of Communicative Behavior
IV The Mechanism and History of Language
V Organization as the Message
VI Law and Communication
VII Communication, Secrecy, and Social Policy
VIII Role of the Intellectual and the Scientist
IX The First and the Second Industrial Revolution
X Some Communication Machines and Their Future
XI Language, Confusion, and Jam
Index
BIOGRAPHICAL NOTES
NORBERT WIENER, born in 1894, was educated at Tufts College, Massachusetts, and Harvard University, Massachusetts, where he received his Ph.D. at the age of nineteen. He continued his studies at Cornell, Columbia, in England at Cambridge University, then at Göttingen and Copenhagen. He taught at Harvard and the University of Maine and in 1919 joined the staff of the Massachusetts Institute of Technology, where he was Professor of Mathematics. He was joint recipient of the Bôcher Prize of the American Mathematical Society in 1933, and in 1936 was one of the seven American delegates to the International Congress of Mathematicians in Oslo. Dr Wiener served as Research Professor of Mathematics at the National Tsing Hua University in Peking in 1935-36, while on leave from MIT. During World War II he developed improvements in radar and Navy projectiles and devised a method of solving problems of fire control. In the years after World War II Wiener worked with the Mexican physiologist Arturo Rosenblueth on problems in biology, and formulated the set of ideas spanning several disciplines which came to be known as 'cybernetics'. He worked with engineers and medical doctors to develop devices that could replace a lost sensory mode. He analysed some non-linear mathematical problems and, with Armand Siegel, reformulated quantum theory as a stochastic process. He also became an articulate commentator on the social implications of science and technology. In 1964 Wiener was recipient of the US National Medal of Science. His published works include The Fourier Integral and Certain of Its Applications (1933); Cybernetics (1948); Extrapolation, Interpolation, and Smoothing of Stationary Time Series with Engineering Applications (1949); the first volume of an autobiography, Ex-Prodigy: My Childhood and Youth (1953); The Tempter (1959); and God and Golem (1964). Wiener's published articles have been assembled and edited by P. Masani and republished in four volumes as Norbert Wiener: Collected Works (1985).
STEVE J. HEIMS received his doctorate in physics from Stanford University, California. He engaged in research in the branch of
physics known as statistical mechanics and taught at several North American universities. In recent years he has devoted himself to studying various contexts of scientific work: social, philosophical, political and technological. He is the author of John von Neumann and Norbert Wiener: From Mathematics to the Technologies of Life and Death (MIT Press, 1980). Currently he is writing a book dealing with the characteristics of social studies in the USA during the decade following World War II.
INTRODUCTION
Steve J. Heims
G.H. Hardy, the Cambridge mathematician and author of A Mathematician's Apology, reflecting on the value of mathematics, insisted that it is a 'harmless and innocent occupation'. 'Real mathematics has no effects on war', he explained in a book for the general public: 'No one has yet discovered any warlike purpose to be served by the theory of numbers or relativity... A real mathematician has his conscience clear.' Yet, in fact, at that time physicists were already actively engaged in experiments converting matter into energy (a possibility implied by the Theory of Relativity) in anticipation of building an atomic bomb. Of the younger generation which he taught, Hardy wrote, 'I have helped to train other mathematicians, but mathematicians of the same kind as myself, and their work has been, so far at any rate as I have helped them to it, as useless as my own... ' Norbert Wiener took issue with his mentor. He thought Hardy's attitude to be 'pure escapism', noted that the ideas of number theory are applied in electrical engineering, and that 'no matter how innocent he may be in his inner soul and in his motivation, the effective mathematician is likely to be a powerful factor in changing the face of society. Thus he is really as dangerous as a potential armourer of the new scientific war of the future.' The neat separation of pure and applied mathematics is only a mathematician's self-serving illusion. Wiener came to address the alternative to innocence - namely, taking responsibility. After he himself had during World War II worked on a mathematical theory of prediction intended to enhance the effectiveness of anti-aircraft fire, and developed a powerful statistical theory of communication which would put modern communication engineering on a rigorous mathematical footing, any pretence of harmlessness was out of the question for him. From the time of the end of the war until his death in 1964, Wiener applied his
penetrating and innovative mind to identifying and elaborating on a relation of high technology to people which is benign or, in his words, to the human - rather than the inhuman - use of human beings. In doing so during the years when the cold war was raging in the United States, he found an audience among the generally educated public. However, most of his scientific colleagues - offended or embarrassed by Wiener's views and especially by his open refusal to engage in any more work related to the military - saw him as an eccentric at best and certainly not to be taken seriously except in his undeniably brilliant, strictly mathematical, researches. Albert Einstein, who regarded Wiener's attitude towards the military as exemplary, was in those days similarly made light of as unschooled in political matters. Undaunted, Wiener proceeded to construct a practical and comprehensive attitude towards technology rooted in his basic philosophical outlook, and presented it in lucid language. For him technologies were viewed not so much as applied science, but rather as applied social and moral philosophy. Others have been critical of technological developments and seen the industrial revolution as a mixed blessing. Unlike most of these critics, Wiener was simultaneously an irrepressibly original non-stop thinker in mathematics, the sciences and high technology and equally an imaginative critic from a social, historical and ethical perspective of the uses of his own and his colleagues' handiwork. Because he gave rather unchecked rein to both of these inclinations, Wiener's writings generate a particular tension and have a special fascination. Now, four decades later, we see that the tenor of his comments on science, technology and society were on the whole prophetic and ahead of his time. In the intervening years his subject matter, arising out of the tension between technical fascination and social conscience, has become a respectable topic for research and scholarship. Even leading universities have caught up with it and created courses of study and academic departments with names such as 'science studies', 'technology studies' or 'science, technology and
society'. His prediction of an imminent 'communication revolution' in which 'the message' would be a pivotal notion, and the associated technological developments would be in the area of communication, computation and organization, was clear-sighted indeed. The interrelation between science and society via technologies is only one of the two themes underlying The Human Use of Human Beings. The other derives as much from Wiener's personal philosophy as from theoretical physics. Although he was a mathematician, his personal philosophy was rooted in existentialism, rather than in the formal-logical analytical philosophy so prominent in his day and associated with the names of Russell, Moore, Ramsey, Wittgenstein and Ayer. For Wiener life entailed struggle, but it was not the class struggle as a means to social progress emphasized by Marxists, nor was it identical with the conflict Freud saw between the individual and society. In his own words: We are swimming upstream against a great torrent of disorganization, which tends to reduce everything to the heat death of equilibrium and sameness described in the second law of thermodynamics. What Maxwell, Boltzmann and Gibbs meant by this heat death in physics has a counterpart in the ethic of Kierkegaard, who pointed out that we live in a chaotic moral universe. In this, our main obligation is to establish arbitrary enclaves of order and system. These enclaves will not remain there indefinitely by any momentum of their own after we have once established them... We are not fighting for a definitive victory in the indefinite future. It is the greatest possible victory to be, to continue to be, and to have been... This is no defeatism, it is rather a sense of tragedy in a world in which necessity is represented by an inevitable disappearance of differentiation. The declaration of our own nature and the attempt to build an enclave of organization in the face of nature's overwhelming tendency to disorder is an insolence against the gods and the iron necessity that they impose. Here lies tragedy, but here lies glory too. Even when we discount the romantic, heroic overtones in that statement, Wiener is articulating what, as he saw and experienced it, makes living meaningful. The adjective 'arbitrary' before 'order and system' helps to make the
statement appropriate for many; it might have been made by an artist as readily as by a creative scientist. Wiener's outlook on life is couched in the language of conflict and heroic struggle against overwhelming natural tendencies. But he was talking about something very different from the ruthless exploitation, even destruction, of nature and successfully bending it to human purposes, which is part of the legacy, part of the nineteenth-century heroic ideal, of Western man. Wiener in his discussion of human purposes, recognizing feedbacks and larger systems which include the environment, had moved far away from that ideal and closer to an ideal of understanding and, both consciously and effectively, of collaborating with natural processes. I expect that Wiener would have welcomed some more recent developments in physics, as his thinking was already at times tending in that direction. Since his day developments in the field of statistical mechanics have come to modify the ideas about how orderly patterns - for example, the growth of plants and animals and the evolution of ecosystems - arise in the face of the second law of thermodynamics. As Wiener anticipated, the notions of information, feedback and nonlinearity of the differential equations have become increasingly important in biology. But beyond that, Ilya Prigogine and his co-workers in Belgium have more recently made a convincing case that natural systems which are either far from thermodynamic equilibrium initially, or which fluctuate, may not return to equilibrium at all (G. Nicolis and I. Prigogine, Self-Organization in Nonequilibrium Systems, 1977). Instead they continue to move still further away from equilibrium towards a different, increasingly complex and orderly, but nevertheless stable pattern - not necessarily static, but possibly cyclic. According to the American physicist Willard Gibbs' way of thinking, the stable state of a system - equilibrium - is independent of its detailed initial conditions, yet that simplification no longer holds for systems finding stability far from equilibrium. This is an explicit mechanism quite different from that of a 'Maxwell demon' (explained in
Chapter 2), the mechanism assumed necessary in Wiener's day. It is more nearly related to Wiener's notion of positive feedback, which he tended to see as only disruptive and destructive, rather than as leading to complex stable structures. The results obtained by the Prigogine group show the creation of orderly patterns - natural countertrends to the tendency towards disorganization - to be stronger and more ordinary and commonplace than a sole reliance on mechanisms of the Maxwell-demon type would suggest. Sensitivity to initial conditions is also a prominent feature of 'chaos theory', currently an active field of research. If, however, we now extend Wiener's analogy from statistical mechanics and incorporate the findings of the Prigogine group - according to which natural and spontaneous mechanisms other than just the Maxwell demon generate organization and differentiation - this suggests a shift in emphasis from 'the human fight against the increase of entropy to create local enclaves of order' to a more co-operative endeavour which to a considerable extent occurs naturally and of its own accord. It is a subtle shift that can, however, make large differences. Yet to be explored, these differences appear to echo disagreements that some modern feminists, neo-taoists and ecologists have with classical Greek concepts of the heroic and the tragic. Wiener's status, which he strongly prized, was that of an independent scientifically knowledgeable intellectual. He avoided accepting funds from government agencies or corporations that might in any way compromise his complete honesty and independence. Nor did he identify himself with any political, social or philosophical group, but spoke and wrote simply as an individual. He was suspicious of honours and prizes given for scientific achievement. After receiving the accolade of election to the National Academy of Sciences, he resigned, lest membership in that select, exclusive body of scientists corrupt his autonomous status as outsider vis-à-vis the American scientific establishment. He was of the tradition in which it is the intellectual's responsibility to speak truth to power. This was in the post-war years, when the US
government and many scientists and science administrators were celebrating the continuing partnership between government and science, government providing the funds and scientists engaging in research. Wiener remained aloof and highly critical of that peacetime arrangement. More precisely, he tried to stay aloof but he would not separate himself completely because for many years he remained a professor at the Massachusetts Institute of Technology, an institution heavily involved in that partnership. As was his nature, he continued to talk to colleagues about his own fertile ideas, whether they dealt with mathematics, engineering or social concerns. The Human Use of Human Beings, first published in 1950, was a sequel to an earlier volume, Cybernetics: Or Control and Communication in the Animal and the Machine. That earlier volume broke new ground in several respects. First of all, it was a report on new scientific and technical developments of the 1940s, especially on information theory, communication theory and communications technology, models of the brain and general-purpose computers. Secondly, it extended ideas and used metaphors from physics and electrical engineering to discuss a variety of topics including neuropathology, politics, society, learning and the nature of time. Wiener had been an active participant in pre-war interdisciplinary seminars. After the war he regularly took part in a series of small conferences of mathematicians and engineers, which were also attended by biologists, anthropologists, sociologists, psychologists and psychiatrists, in which the set of ideas subsumed under cybernetics was explored in the light of these various disciplines. At these conferences Wiener availed himself of the convenient opportunity to become acquainted with current research on a broad range of topics outside of his speciality. Already in his Cybernetics Wiener had raised questions about the benefits of the new ideas and technologies, concluding pessimistically, there are those who hope that the good of a better understanding of
man and society which is offered by this new field of work may anticipate and outweigh the incidental contribution we are making to the concentration of power. I write in 1947, and I am compelled to say that it is a very slight hope. The book was a rarity also in that, along with the technical material, he discussed ethical issues at length. The Human Use of Human Beings is a popularization of Cybernetics (omitting the forbidding mathematics), though with a special emphasis on the description of the human and the social. The present volume is a reprint of the second (1954) edition, which differs significantly from the original hardcover edition. The notable reorganization of the book and the changes made deserve attention. In the first edition we read that 'the purpose of this book is both to explain the potentialities of the machine in fields which up to now have been taken to be purely human, and to warn against the dangers of a purely selfish exploitation of these possibilities in a world in which to human beings human things are all-important.' After commenting critically about patterns of social organization in which all orders come from above, and none return ('an ideal held by many Fascists, Strong Men in Business, and Government'), he explains, 'I wish to devote this book [first edition] to a protest against this inhuman use of human beings.' The second edition, in contrast, as stated in the Preface, is organized around Wiener's other major theme, 'the impact of the Gibbsian point of view on modern life, both through the substantive changes it has made in working science, and through the changes it has made indirectly in our attitude to life in general.' The second edition, where the framework is more philosophical and less political, appears to be presented in such a way as to make it of interest not only in 1954, but also for many years to come. The writing and the organization are a bit tighter and more orderly than in the first edition. It also includes comment on some exemplifications of cybernetics (e.g., the work of Ross Ashby) that had come to Wiener's attention only during the early 1950s. Yet, even though several chapters are essentially unchanged, something was lost in going from the first to the
second edition. I miss the bluntness and pungency of some of the comments in the earlier edition, which apparently were 'cleaned up' for the second. The cause célèbre in 1954 in the USA was the Oppenheimer case. J. Robert Oppenheimer, the physicist who had directed the building of atom bombs during World War II, had subsequently come to disagree with the politically dominant figures in the government who were eager to develop and build with the greatest possible speed hydrogen bombs a thousand times more powerful than the atom bombs which had devastated Hiroshima and Nagasaki. Oppenheimer urged delay, as he preferred that a further effort be made to negotiate with the Soviet Union before proceeding with such an irreversible escalation of the arms race. This policy difference lay behind the dramatic Oppenheimer hearings, humiliating proceedings at the height of the anti-communist 'McCarthy era' (and of the US Congressional 'Un-American Activities Committee'), leading to, absurdly, the labelling of Oppenheimer as a 'security risk'. In that political atmosphere it is not surprising for a publisher to prefer a different focus than the misuse of the latest technologies, or the dangers of capitalist exploitation of technologies for profit. Wiener himself was at that time going on a lecture tour to India and was then occupied with several other projects, such as writing the second volume of his autobiography, the mathematical analysis of brain waves, sensory prosthesis and a new formulation of quantum theory. He did not concern himself a great deal with the revision of a book he had written several years earlier - it would be more characteristic of him to write a new book or add a new chapter, rather than revise a book already written - although he must have agreed to all revisions and editorial changes. At the end of the book, in both editions, Wiener compares the Catholic Church with the Communist Party, and both with cold war government activities in capitalist America. The criticisms of America in these last few pages of the first edition (see Appendix to this Introduction) are, in spite of one brief pointed reference to McCarthyism, largely absent in the
second edition. There are other differences in the two editions. The chapter 'Progress and Entropy', for example, is much longer in the first edition. The section on the history of inventions within that chapter is more detailed. The chapter also deals with such topics as the depletion of resources and American dependence on other nations for oil, copper and tin, and the possibility of an energy crisis unless new inventions obviate it. It reviews vividly the progress in medicine and anticipates new problems, such as the increasing use of synthetic foods that may contain minute quantities of carcinogens. These and other discursive excursions, peripheral to the main line of argument of the book, are omitted in the present edition. The Human Use of Human Beings was not Wiener's last word on the subject. He continued to think and talk and write. In 1959 he addressed and provoked a gathering of scientists by his reflections and analysis of some moral and technical consequences of automation (Science, vol. 131, p. 1358, 1960), and in his last book (God and Golem, Inc., 1964) he returned to ethical concerns from the perspective of the creative scientist or engineer. It was Wiener's lifelong obsession to distinguish the human from the machine, having recognized the identity of patterns of organization and of many functions which can be performed by either, but in The Human Use of Human Beings it is his intention to place his understanding of the people/machines identity/dichotomy within the context of his generous and humane social philosophy. Cybernetics had originated from the analysis of formal analogies between the behaviour of organisms and that of electronic and mechanical systems. The mostly military technologies new in his day, which today we call 'artificial intelligence', highlighted the potential resemblance between certain elaborate machines and people. Academic psychology in North America was in those days still predominantly behaviourist. The cybernetic machines - such as general-purpose computers - suggested a possibility as to the nature of mind: mind was analogous to the formal structure and organization, or the software aspect,
of a reasoning-and-perceiving machine that could also issue instructions leading to actions. Thus the long-standing mind-brain duality was overcome by a materialism which encompassed organization, messages and information in addition to stuff and matter. But the subjective - an individual's cumulative experience, sensations and feelings, including the subjective experience of being alive - is belittled, seen only within the context of evolutionary theory as providing information useful for survival to the organism. If shorn of Wiener's benign social philosophy, what remains of cybernetics can be used within a highly mechanical and dehumanizing, even militaristic, outlook. The fact that the metaphor of a sophisticated automaton is so heavily employed invites thinking about humans as in effect machines. Many who have learned merely the technical aspects of cybernetics have used them, and do so today, for ends which Wiener abhorred. It is a danger he foresaw, would have liked to obviate and, although aware of how little he could do in that regard, valiantly tried to head off. The technological developments in themselves are impressive, but since most of us already have to bear with a glut of promotional literature it is more to the point here to frame discussion not in the promoters' terms (what the new machine can do), but in a more human and social framework: how is the machine affecting people's lives? Or still more pointedly: who reaps a benefit from it? Wiener urged scientists and engineers to practise 'the imaginative forward glance' so as to attempt assessing the impact of an innovation, even before making it known. However, once some of the machines or techniques were put on the market, a younger generation with sensitivity to human and social impacts could report empirically where the shoe pinches. Even though such reports may not suffice to radically change conventional patterns of deployment of technologies, which after all express many kinds of political and economic interests, they at least document what happens and help to educate the public. As long as their authors avoid an a priori pro-technology or anti-technology bias, they
effectively carry on where Wiener left off. Among such reports we note Joseph Weizenbaum's description of the human damage manifested in the 'compulsive programmer', which poses questions about appropriate and inappropriate uses of computers (Computer Power and Human Reason, 1976). Similarly David Noble has documented how the introduction of automation in the machine-tool industry has resulted in a deskilling of machinists to their detriment, and has described in detail the political process by which this deskilling was brought about (Forces of Production, 1984). These kinds of 'inhuman' uses seem nearly subtle if placed next to the potentially most damaging use, war. The growth of communication-computation-automation devices and systems had made relatively small beginnings during World War II, but since then has been given high priority in US government-subsidized military research and development, and in the Soviet Union as well; their proliferation in military contexts has been enormous and extensive. A proper critique would entail an analysis in depth of world politics, and especially the political relations of the two 'superpowers'. Wiener feared that he had helped to provide tools for the centralization of power, and indeed he and his fellow scientists and engineers had. For instance, under the Reagan government many billions of dollars were spent on plans for a protracted strategic nuclear war with the Soviet Union. The technological 'challenge' was seen to be the development of an effective C-cubed system (command, control and communication) which would be used to destroy enemy political and command centres and at the same time, through a multitude of methods, prevent the destruction of the corresponding American centres, leaving the USA fully in command throughout the nuclear war and victorious. Some principled scientists and engineers have, in a Wienerian spirit, refused to work on, or have stopped working on, such mad schemes, or on implementing the politicians' 'Star Wars' fantasies. We have already alluded to Wiener's heavy use of metaphors from engineering to describe the human and the
social, and his neglect of the subjective experience. In the post-war years American sociologists, anthropologists, political scientists and psychologists tried harder than ever to be seen as 'scientific'. They readily borrowed the engineers' idiom and many sought to learn from the engineers' or mathematicians' thinking. Continental European social thinkers were far more inclined to attend to the human subject and to make less optimistic claims about their scientific expertise, but it required another decade before European thought substantially influenced the positivistic or logical-empiricist predilections of the mainstream of American social scientists. A major development in academic psychology, prominent and well-funded today, relies strongly on the concept of information processing and models based on the computer. It traces its origins to the discussions on cybernetics in the post-war years and the wartime work of the British psychologist Kenneth Craik. This development, known as 'cognitive science', entirely ignores background contexts, the culture, the society, history, subjective experience, human feelings and emotions. Thus it works with a highly impoverished model of what it is to be human. Such models have, however, found their challengers and critics, ranging from the journalist Gordon Rattray Taylor (The Natural History of Mind, 1979) to the psychologist James J. Gibson, the latter providing a far different approach to how humans know and perceive (The Perception of the Visual World, 1950; The Senses Considered as Perceptual Systems, 1966; The Ecological Approach to Visual Perception, 1979). If we trace the intellectual history of current thinking in such diverse fields as cellular biology, medicine, anthropology, psychiatry, ecology and economics, we find that in each discipline concepts coming from cybernetics constitute one of the streams that have fed it. Cybernetics, including information theory, systems with purposive behaviour and automaton models, was part of the intellectual dialogue of the 1950s and has since mingled with many other streams, has been absorbed and become part of the conventional idiom and practice.
Too many writings about technologies are dismal, narrow apologetics for special interests, and not very edifying. Yet the subject matter is intrinsically extremely varied and stimulating to an enquiring mind. It has profound implications for our day-to-day lives, their structure and their quality. The social history of science and technology is a rich resource, even for imagining and reflecting on the future. Moreover the topic highlights central dilemmas in every political system. For example, how is the role of 'experts' in advising governments related to political process? Or how is it possible to reconcile, in a capitalist economy within a democratic political structure, the unavoidable conflict between public interest and decision by a popular vote, on the one hand, and corporate decisions as to which engineering projects are profitable, on the other? We are now seeing the rise of a relatively new genre of writing about technologies and people which is interesting, concrete, open, exploratory and confronts political issues head-on. We need this writing, for we are living in what Ellul has appropriately called a technological society. Within that genre, Wiener's books, as well as some earlier writings by Lewis Mumford, are among the few pioneering works that have become classics. The present reissue of one of these classics is cause for rejoicing. May it stimulate readers to think passionately for themselves about the human use of human beings with the kind of intellectual honesty and compassion Wiener brought to the subject. Steve J. Heims, Boston, October 1988
APPENDIX What follows are two documents from Norbert Wiener's writings: - an open letter published in the Atlantic Monthly magazine, January 1947 issue; and - the concluding passages of The Human Use of Human Beings, 1st edition, Houghton-Mifflin, 1950, pp
A SCIENTIST REBELS The letter which follows was addressed by one of our ranking mathematicians to a research scientist of a great aircraft corporation, who had asked him for the technical account of a certain line of research he had conducted in the war. Professor Wiener's indignation at being requested to participate in indiscriminate rearmament, less than two years after victory, is typical of many American scientists who served their country faithfully during the war. Professor of Mathematics in one of our great Eastern institutions, Norbert Wiener was born in Columbia, Missouri, in 1894, the son of Leo Wiener, Professor of Slavic Languages at Harvard University. He took his doctorate at Harvard and did his graduate work in England and in Göttingen. Today he is esteemed one of the world's foremost mathematical analysts. His ideas played a significant part in the development of the theories of communication and control which were essential in winning the war. - The Editor, Atlantic Monthly Sir:- I have received from you a note in which you state that you are engaged in a project concerning controlled missiles, and in which you request a copy of a paper which I wrote for the National Defense Research Committee during the war. As the paper is the property of a government organization, you are of course at complete liberty to turn to that government organization for such information as I could give you. If it is out of print as you say, and they desire to make it available for you, there are doubtless proper avenues of approach to them. When, however, you turn to me for information concerning controlled missiles, there are several considerations which determine my reply. In the past, the comity of scholars has made it a custom to furnish scientific information to any person seriously seeking it. However, we must face these facts: the policy of the government itself during and after the
enquire of him. The interchange of ideas which is one of the great traditions of science must of course receive certain limitations when the scientist becomes an arbiter of life and death. For the sake, however, of the scientist and the public, these limitations should be as intelligent as possible. The measures taken during the war by our. Both of these are disastrous for our civilization, and entail grave and immediate peril for the public. I realize, of course, that I am acting as the censor of my own ideas, and it may sound arbitrary, but I will not accept a censorship in which I do not participate. In that respect the controlled missile represents the still imperfect supplement to the atom bomb and to bacterial warfare. The practical use of guided missiles can only be to kill foreign civilians indiscriminately, and it furnishes no protection whatsoever to civilians in this country. I cannot conceive a situation in which such weapons can produce any effect other than extending the kamikaze way of fighting to whole nations. Their possession can do nothing but endanger us by encouraging the tragic insolence of the military mind. If therefore I do not desire to participate in the bombing or
poisoning of defenceless peoples. Norbert Wiener
THE HUMAN USE OF HUMAN BEINGS I have indicated that freedom of opinion at the present time is being crushed between the two rigidities of the Church and the Communist Party. In the United States we are in the process of developing a new rigidity which combines the methods of both while partaking of the emotional fervour of neither. Our Conservatives of all shades of opinion have somehow got together to make American capitalism and the fifth freedom of the businessman supreme throughout all the world. Our military men and our great merchant princes have looked upon the propaganda technique of the Russians, and have found that it is good. They have found a worthy counterpart for the GPU in the FBI, in its new role of political censor. They have not considered that these weapons form something fundamentally distasteful to humanity, and that they need the full force of an overwhelming faith and belief to make them even tolerable. This faith and belief they have nowhere striven to replace. Thus they have been false to the dearest part of our American traditions, without offering us any principles for which we may die, except a merely negative hatred of Communism. They have succeeded in being un-American without being radical. To this end we have invented a new inquisition: the Inquisition of Teachers' Oaths and of Congressional Committees. We have synthesized a new propaganda, lacking only one element which is common to the Church and to the Communist Party, and that is the element of Belief. We have accepted the methods, not the ideals of our possible antagonists, little realizing that it is the ideals which have given the methods whatever cogency they possess. Ourselves without faith, we presume to punish heresy. May the absurdity of our position soon perish amidst the Homeric laughter that it deserves. It is this triple attack on our liberties which we must resist, if communication is to have the scope that it properly deserves as the central phenomenon of society, and if the human individual is to reach and to maintain his full stature.
considered worlds of very large numbers of particles which necessarily had to be treated statistically. But what Boltzmann and Gibbs did was to introduce statistics into physics in a much more thoroughgoing way, so that the statistical approach was valid not merely for systems of enormous complexity, but even for systems as simple as the single particle in a field of force. Statistics is the science of distribution, and the distribution contemplated by these modern scientists was not concerned with large numbers of similar particles, but with the various positions and velocities from which a physical system might start. In other words, under the Newtonian system the same physical laws apply to a variety of systems starting from a variety of positions and with a variety of momenta. The new statisticians put this point of view in a fresh light. They retained indeed the principle according to which certain systems may be distinguished from others by their total energy, but they rejected the supposition according to which systems with the same total energy may be clearly distinguished indefinitely and described forever by fixed causal laws. There was, actually, an important statistical reservation implicit in Newton's work, though the eighteenth century, which lived by Newton, ignored it. No physical measurements are ever precise; and what we have to say about a machine or other dynamic system really concerns not what we must expect when the initial positions and momenta are given with perfect accuracy (which never occurs), but what we are to expect when they are given with attainable accuracy. This merely means that we know, not the complete initial conditions, but something about their distribution. The functional part of physics, in other words, cannot escape considering uncertainty and the contingency of events. It was the merit of Gibbs to show for the first time a clean-cut scientific method for taking this contingency into consideration.
The historian of science looks in vain for a single line of development. Gibbs' work, while well cut out, was badly sewed, and it remained for others to complete the job that he began. The intuition on which he based his work was that, in general, a physical system belonging to a class of physical systems, which continues to retain its identity as a class, eventually reproduces in almost all cases the distribution which it shows at any given time over the whole class of systems. In other words, under certain circumstances a system runs through all the distributions of position and momentum which are compatible with its energy, if it keeps running long enough. This last proposition, however, is neither true nor possible in anything but trivial systems. Nevertheless, there is another route leading to the results which Gibbs needed to bolster his hypothesis. The irony of history is that this route was being explored very thoroughly in Paris at exactly the time when Gibbs was working in New Haven; and yet it was not until 1920 that the Paris work met the New Haven work in a fruitful union. I had, I believe, the honor of assisting at the birth of the first child of this union. Gibbs had to work with theories of measure and probability which were already at least twenty-five years old and were grossly inadequate to his needs. At the same time, however, Borel and Lebesgue in Paris were devising the theory of integration which was to prove apposite to the Gibbsian ideas. Borel was a mathematician who had already made his reputation in the theory of probability and had an excellent physical sense. He did work leading to this theory of measure, but he did not reach the stage in which he could close it into a complete theory. This was done by his pupil Lebesgue, who was a very different sort of person. He had neither the sense of physics nor an interest in it. Nonetheless Lebesgue solved the problem put by Borel, but he regarded the solution of this problem as
no more than a tool for Fourier series and other branches of pure mathematics. A quarrel developed between the two men when they both became candidates for admission to the French Academy of Sciences, and only after a great deal of mutual denigration did they both receive this honor. Borel, however, continued to maintain the importance of Lebesgue's work and his own as a physical tool; but I believe that I myself, in 1920, was the first person to apply the Lebesgue integral to a specific physical problem - that of the Brownian motion. This occurred long after Gibbs' death, and his work remained for two decades one of those mysteries of science which work even though it seems that they ought not to work. Gibbs' innovation was the first great revolution of twentieth century physics. This revolution has had the effect that physics now no longer claims to deal with what will always happen, but rather with what will happen with an overwhelming probability. At the beginning, in Gibbs' own work, this contingent attitude was superimposed on a Newtonian base in which the elements whose probability was to be discussed were systems obeying all of the Newtonian laws. Gibbs' theory was essentially new, but the permutations with which it was compatible were the same as those contemplated by Newton. What has happened to physics since is that the rigid Newtonian basis has been discarded or modified, and the Gibbsian contingency now stands in its complete nakedness as the full basis of physics. It is true that the books are not yet quite closed on this issue and that Einstein and, in some of his phases, De Broglie,
still contend that a rigid deterministic world is more acceptable than a contingent one; but these great scientists are fighting a rear-guard action against the overwhelming force of a younger generation. One interesting change that has taken place is that in a probabilistic world we no longer deal with quantities and statements which concern a specific, real universe as a whole but ask instead questions which may find their answers in a large number of similar universes. Thus chance has been admitted, not merely as a mathematical tool for physics, but as part of its warp and weft. This recognition of an element of incomplete determinism, almost an irrationality in the world, is in a certain way parallel to Freud's admission of a deep irrational component in human conduct and thought. In the present world of political as well as intellectual confusion, there is a natural tendency to class Gibbs, Freud, and the proponents of the modern theory of probability together as representatives of a single tendency; yet I do not wish to press this point. The gap between the Gibbs-Lebesgue way of thinking and Freud's intuitive but somewhat discursive method is too large. Yet in their recognition of a fundamental element of chance in the texture of the universe itself, these men are close to one another and close to the tradition of St. Augustine. This book is devoted to the impact of the Gibbsian point of view on modern life, both through the substantive changes it has made in working science, and through the changes it has made indirectly in our attitude to life in general. Thus the following chapters contain an element of technical description as well as
a philosophic component which concerns what we do and how we should react to the new world that confronts us. I repeat: Gibbs' innovation was to consider not one world, but all the worlds which are possible answers to a limited set of questions concerning our environment. His central notion concerned the extent to which answers that we may give to questions about one set of worlds are probable among a larger set of worlds. Beyond this, Gibbs had a theory that this probability tended naturally to increase as the universe grows older. The measure of this probability is called entropy, and the characteristic tendency of entropy is to increase. In Gibbs' universe order is least probable, chaos most probable. It is with this point of view at its core that the new science of Cybernetics began its development.1 1 There are those who are skeptical as to the precise identity between entropy and biological disorganization. It will be necessary for me to evaluate these criticisms sooner or later, but for the present I must assume that the differences lie, not in the fundamental nature of these quantities, but in the systems in which they are observed. It is too much to expect a final, clear-cut definition of entropy on which all writers will agree in any less than the closed, isolated system.
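Wiener leaves the connection between order and probability qualitative here. The standard quantitative form of that connection - Boltzmann's relation, supplied for concreteness rather than drawn from the text - is

$$S = k \log W$$

where W is the number of microscopic arrangements compatible with a given macroscopic state and k is Boltzmann's constant. Ordered states are compatible with few arrangements and disordered states with astronomically many, which is exactly the sense in which order is least probable and chaos most probable.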
I CYBERNETICS IN HISTORY This larger theory of messages is a probabilistic theory, an intrinsic part of the movement that owes its origin to Willard Gibbs and which I have described in the introduction. In response to a certain demand for me to make its ideas acceptable to the lay public, I published the first edition of The Human Use of Human Beings in 1950.
animal or mechanical, is a chapter in the theory of messages. The commands through which we exercise our control over our environment are a kind of information which we impart to it. Like any form of information, these commands are subject to disorganization in transit. Information is a name for the content of what is exchanged with the outer world as we adjust to it, and make our adjustment felt upon it. The process of receiving and of using information is the process of
our adjusting to the contingencies of the outer environment, and of our living effectively within that environment.
runs through its whole texture. It plays a large part in two of his most original ideas: that of the Characteristica Universalis, or universal scientific language, and that of the Calculus Ratiocinator, or calculus of logic. The Michelson-Morley experiment, in the nineties, was undertaken to resolve this problem, and it gave the entirely unexpected answer that there simply was no way to determine the motion of matter through the ether.
important departures of Gibbsian mechanics from Newtonian mechanics. In Gibbs' view we have a physical
we are now no longer concerned with the study of all possible outgoing and incoming messages which we may send and receive, but with the theory of much more specific outgoing and incoming messages; and it involves a measurement of the no-longer infinite amount of information that they yield us. I have already referred to Leibnitz's interest in automata, an interest incidentally shared by his contemporary, Pascal, who made real contributions to the development of what we now know as the desk adding-machine.
which we call the memory. concerning monitors - that is, of elements which indicate a performance. It is the function of these mechanisms to control the mechanical tendency toward disorganization; in other words, to produce a temporary and local reversal of the normal direction of entropy.
It is my thesis that the physical functioning of the living individual and the operation of some of the newer communication machines are precisely parallel in their analogous attempts to control entropy through feedback. Both of them have sensory receptors as one stage in their cycle of operation: that is, in both of them there exists a special apparatus for collecting information from the outer world at low energy levels, and for making it available in the operation of the individual or of the machine. In both cases these external messages are not taken neat, but through the internal transforming powers of the apparatus, whether it be alive or dead. The information is then turned into a
new form available for the further stages of performance. In both the animal and the machine this performance is made to be effective on the outer world. In both of them, their performed action on the outer world, and not merely their intended action, is reported back to the central regulatory apparatus. This complex of behavior is ignored by the average man, and in particular does not play the role that it should in our habitual analysis of society; for just as individual physical responses may be seen from this point of view, so may the organic responses of society itself. I do not mean that the sociologist is unaware of the existence and complex nature of communications in society, but until recently he has tended to overlook the extent to which they are the cement which binds its fabric together.
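The cycle just described - issue a command, let the world respond, measure the performed rather than the intended action, and correct accordingly - is the feedback loop in its simplest form. A minimal sketch of that cycle, assuming nothing beyond the description above (the names and numbers are invented for illustration, not taken from the book):

import random

def regulate(target, steps=25, gain=0.5):
    """Drive a noisy actuator toward `target` using measured performance."""
    state = 0.0
    for _ in range(steps):
        command = gain * (target - state)               # the intended action
        performed = command * random.uniform(0.6, 1.0)  # what was actually done
        state += performed                              # the world changes by the performed action
        # The next command is computed from the measured state, so the
        # unreliable actuator is compensated for automatically.
    return state

random.seed(2)
print(round(regulate(10.0), 2))   # ends close to 10.0 despite the noisy actuator

Because each correction is computed from the reported performance, the loop converges even though every individual command is executed imperfectly - the point Wiener makes about animals and machines alike.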
II PROGRESS AND ENTROPY As we have said, nature's statistical tendency to disorder, the tendency for entropy to increase in isolated systems, is expressed by the second law of thermodynamics. We, as human beings, are not isolated systems. We take in food, which generates energy, from the outside, and are, as a result, parts of that larger world which contains those sources of our vitality. But even more important is the fact that we take in information through our sense organs, and we act on information received. Now the physicist is already familiar with the significance of this statement as far as it concerns our relations with the environment. A brilliant expression of the role of information in this respect is provided by Clerk Maxwell, in the form of the so-called "Maxwell demon," which we may describe as follows. Suppose that we have a container of gas, whose temperature is everywhere the same. Some molecules of this gas will be moving faster than others. Now let us suppose that there is a little door in the container that lets the gas into a tube which runs to a heat engine, and that the exhaust of this heat engine is connected by another tube back to the gas chamber, through another door. At each door there is a little being with the power of watching the on-coming molecules and of opening or closing the doors in accordance with their velocity. The demon at the first door opens it only for high-speed molecules and closes it in the face of low-speed
molecules coming from the container. The role of the demon at the second door is exactly the opposite: he opens the door only for low-speed molecules coming from the container and closes it in the face of high-speed molecules. The result is that the temperature goes up at one end and down at the other, thus creating a perpetual motion of "the second kind": that is, a perpetual motion which does not violate the first law of thermodynamics, which tells us that the amount of energy within a given system is constant, but does violate the second law of thermodynamics, which tells us that energy spontaneously runs downhill in temperature. In other words, the Maxwell demon seems to overcome the tendency of entropy to increase. Perhaps I can illustrate this idea still further by considering a crowd milling around in a subway at two turnstiles, one of which will only let people out if they are observed to be running at a certain speed, and the other of which will only let people out if they are moving slowly. The fortuitous movement of the people in the subway will show itself as a stream of fast-moving people coming from the first turnstile, whereas the second turnstile will only let through slow-moving people. If these two turnstiles are connected by a passageway with a treadmill in it, the fast-moving people will have a greater tendency to turn the treadmill in one direction than the slow people to turn it in the other, and we shall gather a source of useful energy in the fortuitous milling around of the crowd.
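The demon's door rule is concrete enough to simulate. The following toy sketch (an illustration of the rule as described above, not anything from Wiener's text) lets molecules wander between two chambers while the demon passes fast ones rightward and slow ones leftward; the mean kinetic energies of the two chambers drift apart without any work being done on the gas:

import random

random.seed(0)
# Two chambers at the same temperature: the same distribution of speeds.
left = [abs(random.gauss(0, 1)) for _ in range(3000)]
right = [abs(random.gauss(0, 1)) for _ in range(3000)]
FAST = 1.0   # the demon's dividing line between "fast" and "slow"

for _ in range(50000):
    # A random molecule approaches one of the two doors.
    side, other = (left, right) if random.random() < 0.5 else (right, left)
    if not side:
        continue
    i = random.randrange(len(side))
    # Door 1 admits only fast molecules into the right chamber;
    # door 2 admits only slow molecules into the left chamber.
    if (other is right and side[i] > FAST) or (other is left and side[i] <= FAST):
        other.append(side.pop(i))

energy = lambda xs: sum(v * v for v in xs) / len(xs)    # temperature ~ mean v^2
print(round(energy(left), 2), round(energy(right), 2))  # left cools, right heats

The catch, as the paragraphs that follow explain, is that this simulation gets its molecular speeds for free - precisely the nineteenth-century assumption about information that modern physics rejects.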
Here there emerges a very interesting distinction between the physics of our grandfathers and that of the present day. In nineteenth century physics, it seemed to cost nothing to get information. The result is that there is nothing in Maxwell's physics to prevent one of his demons from furnishing its own power source. Modern physics, however, recognizes that the demon can only gain the information with which it opens or closes the door from something like a sense organ which for these purposes is an eye. The light that strikes the demon's eye is not an energy-less supplement of mechanical motion, but shares in the main properties of mechanical motion itself. Light cannot be received by any instrument unless it hits it, and cannot indicate the position of any particle unless it hits the particle as well. This means, then, that even from a purely mechanical point of view we cannot consider the gas chamber as containing mere gas, but rather gas and light which may or may not be in equilibrium. If the gas and the light are in equilibrium, it can be shown as a consequence of present physical doctrine that the Maxwell demon will be as blind as if there were no light at all. We shall have a cloud of light coming from every direction, giving no indication of the position and momenta of the gas particles. Therefore the Maxwell demon will work only in a system that is not in equilibrium. In such a system, however, it will turn out that the constant collision between light and gas particles tends to bring the light and particles to an equilibrium. Thus while the demon may temporarily reverse the usual direction of entropy, ultimately it too will wear down. The Maxwell demon can work indefinitely only if additional light comes from outside the system and does not correspond in temperature to the mechanical temperature of the particles themselves. This is a situation which should be perfectly familiar to us, because we see the universe around us reflecting light from the sun, which is very far from being in equilibrium with mechanical systems on the earth. Strictly speaking, we are confronting particles whose temperature is 50° or 60° F. with a light which comes from a sun at many thousands of degrees. In a system which is not in equilibrium, or in part of such a system, entropy need not increase. It may, in fact, decrease locally. Perhaps this non-equilibrium of the world about us is merely a stage in a downhill
course which will ultimately lead to equilibrium. Sooner or later we shall die, and it is highly probable that the whole universe around us will die the heat death, in which the world shall be reduced to one vast temperature equilibrium in which nothing really new ever happens. There will be nothing left but a drab uniformity out of which we can expect only minor and insignificant local fluctuations. But we are not yet spectators at the last stages of the world's death. In fact these last stages can have no spectators. Therefore, in the world with which we are immediately concerned there are stages which, though they occupy an insignificant fraction of eternity, are of great significance for our purposes, for in them entropy does not increase and organization and its correlative, information, are being built up. What I have said about these enclaves of increasing organization is not confined merely to organization as exhibited by living beings. Machines also contribute to a local and temporary building up of information, notwithstanding their crude and imperfect organization compared with that of ourselves. Here I want to interject the semantic point that such words as life, purpose, and soul are grossly inadequate to precise scientific thinking. These terms have gained their significance through our recognition of the unity of a certain group of phenomena, and do not in fact furnish us with any adequate basis to characterize this unity. Whenever we find a new phenomenon which partakes to some degree of the nature of those which we have already termed "living phenomena," but does not conform to all the associated aspects which define the term "life," we are faced with the problem whether to enlarge the word "life" so as to include them, or to define it in a more restrictive way so as to exclude them. We have encountered this problem in the past in considering viruses, which show some of the tendencies of life - to persist, to multiply, and to organize -
but do not express these tendencies in a fully-developed form. As Humpty Dumpty says about some of his more remarkable words, "I pay them extra, and make them do what I want." While it is impossible to make any universal statements concerning life-imitating automata in a field which is growing as rapidly as that of automatization, there are some general features of these machines as they actually exist that I should like to emphasize. One is that they are machines to perform some definite task or tasks, and therefore must possess effector organs (analogous to arms and legs in human beings) with
which such tasks can be performed. The second point is that they must be en rapport with the outer world by sense organs, such as photoelectric cells and thermometers, which not only tell them what the existing circumstances are, but enable them to record the performance or nonperformance of their own tasks. This last function, as we have seen, is called feedback, the property of being able to adjust future conduct by past performance. Feedback may be as simple as that of the common reflex, or it may be a higher order feedback, in which past experience is used not only to regulate specific movements, but also whole policies of behavior. Such a policy-feedback may, and often does, appear to be what we know under one aspect as a conditioned reflex, and under another as learning. For all these forms of behavior, and particularly for the more complicated ones, we must have central decision organs which determine what the machine is to do next on the basis of information fed back to it, which it stores by means analogous to the memory of a living organism. It is easy to make a simple machine which will run toward the light or run away from it, and if such machines also contain lights of their own, a number of them together will show complicated forms of social behavior such as have been described by Dr. Grey Walter in his book, The Living Brain. At present the more complicated machines of this type are nothing but scientific toys for the exploration of the possibilities of the machine itself and of its analogue, the nervous system. But there is reason to anticipate that the developing technology of the near future will use some of these potentialities. Thus the nervous system and the automatic machine are fundamentally alike in that they are devices which make decisions on the basis of decisions they have made in the past. The simplest mechanical devices will make decisions between two alternatives, such as the closing or opening of a switch. In the nervous system, the individual nerve fiber also decides between carrying an impulse or not. In both the machine and the nerve, there is a specific apparatus for making future decisions depend on past decisions, and in the nervous system a large part of this task is done at those extremely complicated points called "synapses" where a number of incoming nerve fibers connect with a single outgoing nerve fiber. In many cases it is possible to state the basis of these decisions as a threshold of action of the synapse, or in other words, by telling how many incoming fibers should fire in order that the outgoing fibers may fire. This is the basis of at least part of the analogy between machines and living organisms. The synapse in the living organism corresponds to the switching device in the machine.
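The threshold rule just stated reduces to a few lines. A minimal sketch (a McCulloch-Pitts-style unit of my own construction, not a model taken from the book; the names are invented):

def outgoing_fires(incoming, threshold):
    """incoming: 0/1 firings of the incoming fibers; the outgoing fiber
    fires only when at least `threshold` of them fire."""
    return sum(incoming) >= threshold

print(outgoing_fires([1, 1, 0, 1], threshold=3))   # True: three firing fibers suffice
print(outgoing_fires([1, 0, 0, 1], threshold=3))   # False: below threshold

At this level the analogy in the text is direct: replace the fiber firings with switch closures and the unit becomes the machine's switching device.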
For further development of the detailed relationship between machines and living organisms, one should consult the extremely inspiring books of Dr. Walter and Dr. W. Ross Ashby.1 (1 W. Ross Ashby, Design for a Brain, Wiley, New York, 1952, and W. Grey Walter, The Living Brain, Norton, New York, 1953.) The machine, like the living organism, is, as I have said, a device which locally and temporarily seems to resist the general tendency for the increase of entropy. By its ability to make decisions it can produce around it a local zone of organization in a world whose general tendency is to run down. The difference between these two sorts of demons will make itself apparent in the tactics to be used against them. The Manichaean devil is an opponent,
like any other opponent, who is determined. On the other hand, the Augustinian devil, which is not a power in itself, but the measure of our own weakness, may require our full resources to uncover, but when we have uncovered it, we have in a certain sense exorcised it, and it will not alter its policy on a matter already decided with the mere intention of confounding us further. The Manichaean devil is playing a game of poker against us and will resort readily to bluffing; which, as von Neumann explains in his Theory of Games, is intended not merely to enable us to win on a bluff, but to prevent the other side from winning on the basis of a certainty that we will not bluff. Compared to this Manichaean being of refined malice, the Augustinian devil is stupid. He plays a difficult game, but he may be defeated by our intelligence as thoroughly as by a sprinkle of holy water. As to the nature of the devil, we have an aphorism of Einstein's which is more than an aphorism, and is really a statement concerning the foundations of scientific method. "The Lord is subtle, but he isn't simply mean." Here the word "Lord" is used to describe those forces in nature which include what we have attributed to his very humble servant, the Devil, and Einstein means to say that these forces do not bluff. Perhaps this devil is not far in meaning from Mephistopheles. When Faust asked Mephistopheles what he was, Mephistopheles replied, "A part of that force which always seeks evil and always does good." In other words, the devil is not unlimited in his ability to deceive, and the scientist who looks for a positive force determined to confuse us in the universe which he is investigating is wasting his time. Nature offers
the game player. The research physicist has all the time in the world to carry out his experiments, and than by his best moments. I may be prejudiced about this claim: for I have found it possible myself to do effective work in science, while my chess has been continually vitiated by my carelessness at critical instants. The scientist is thus disposed to regard his opponent as an honorable enemy. This attitude is necessary for his effectiveness as a scientist, but tends to make him the dupe of unprincipled people in war and in politics. It also has the effect of making it hard for the general public to understand him, for the general public is much more concerned with personal antagonists than with nature as an antagonist. We are immersed in a life in which the world as a whole obeys the second law of thermodynamics: confusion increases and order decreases. Yet, as we have seen, the second law of thermodynamics, while it may be a valid statement about the whole of a closed system, is definitely not valid concerning a non-isolated part of it. There are local and temporary islands of decreasing entropy in a world in which the entropy as a whole tends to increase, and the existence of these islands enables some of us to assert the existence of progress. What can we say about the general direction of
the battle between progress and increasing entropy in the world immediately about us? The Enlightenment, as we all know, fostered the idea of progress, even though there were among the men of the eighteenth century some who felt that this progress was subject to a law of diminishing returns, and that the Golden Age of society would not differ very much from what they saw about them. The crack in the fabric of the Enlightenment, marked by the French Revolution, was accompanied by doubts of progress elsewhere. Malthus, for example, sees the culture of his age about to sink into the slough of an uncontrolled increase in population, swallowing up all the gains so far made by humanity. The line of intellectual descent from Malthus to Darwin is clear. Darwin's great innovation in the theory of evolution was that he conceived of it not as a Lamarckian spontaneous ascent from higher to higher and from better to better, but as a phenomenon in which living beings showed (a) a spontaneous tendency to develop in many directions, and (b) a tendency to follow the pattern of their ancestors. The combination of these two effects was to prune an overlush developing nature and to deprive it of those organisms which were ill-adapted to their environment, by a process of "natural selection." The result of this pruning was to leave a residual pattern of forms of life more or less well adapted to their environment. This residual pattern, according to Darwin, assumes the appearance of universal purposiveness. The concept of a residual pattern has come to the fore again in the work of Dr. W. Ross Ashby. He uses it to explain the concept of machines that learn. He points out that a machine of rather random and haphazard structure will have certain near-equilibrium positions, and certain positions far from equilibrium, and that the near-equilibrium patterns will by their very nature last for a long time, while the others will appear only temporarily. The result is that in Ashby's machine, as in Darwin's nature, we have the appearance of a purposefulness in a system which is not purposefully constructed simply because purposelessness is in its very nature transitory. Of course, in the long run, the great trivial purpose of maximum entropy will appear to be the most enduring of all. But in the intermediate stages an organism or a society of organisms will tend to dally longer in those modes of activity in which the different parts work together, according to a more or less meaningful pattern. I believe that Ashby's brilliant idea of the unpurposeful random mechanism which seeks for its own purpose through a process of learning is not only one of the great philosophical contributions of the present day, but will lead to highly useful technical developments in the task of automatization. Not only can we build purpose into machines, but in an overwhelming majority of cases a machine designed to avoid certain pitfalls of breakdown will look for purposes which it can fulfill.
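Ashby's observation - that the near-equilibrium states of a haphazard mechanism persist while the rest are transitory, so the machine appears to seek them out - can be seen in a toy simulation. The following is my own construction for illustration, not Ashby's actual homeostat; every detail is invented:

import random

random.seed(1)
N = 30
# A machine of random, haphazard structure: each state gets a random instability.
instability = [random.random() for _ in range(N)]

def step(state):
    # Unstable states tend to be abandoned; near-equilibrium states retain the system.
    if random.random() < instability[state]:
        return random.randrange(N)
    return state

state, dwell = 0, [0] * N
for _ in range(100000):
    dwell[state] += 1
    state = step(state)

stable = sorted(range(N), key=lambda s: -dwell[s])[:3]
print(stable)                                      # the states the machine "settled on"
print([round(instability[s], 2) for s in stable])  # and they are its most stable ones

Nothing in the construction aims anywhere, yet almost all of the machine's history is spent in its few most stable states - a residual pattern that, as in Darwin's nature, wears the appearance of purpose.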
Darwin's influence on the idea of progress was not confined to the biological world, even in the nineteenth century. All philosophers and all sociologists draw their scientific ideas from the sources available at their time. Thus it is not surprising to find that Marx and his contemporary socialists accepted a Darwinian point of view in the matter of evolution and progress.
association of energy and information. A crude form of this association occurs in the theories of line noise in a telephone circuit or an amplifier. Such background noise may be shown to be unavoidable, as it depends on the discrete character of the electrons which carry the current; and yet it has a finite power of destroying information. The circuit therefore demands a certain amount of
that we ourselves constitute such an island of decreasing entropy. Again, it is quite conceivable that life belongs to a limited stretch of time; that before the earliest geological ages it did not exist, and that the time may well come when the earth is again a lifeless, burnt-out, or frozen planet.
67 . ".' CYBERNETICS AND SOCIETY 41 Up to this point we have been talking of a pessimism which is much more the intellectual pessimism of the professional scientist than an emotional pessimism which touches the layman. We have already seen that the theory of entropy, and the considerations of the I ultimate heat-death of the universe, need not have, such profoundly depressing moral consequences as! they seem to have at first glance. However, even this limited consideration of the future is foreign to th emotional euphoria of the average man, and partic4- lady tq that of the average American. The best we ca hope for the r o le of progress in a universe running downhill as a whole is that the vision of our attempts to progress in the face of overwhelming necessity may have the purging terror of Greek tragedy. Yet we live in an age not over-receptive to tragedy. The education of the average American child of the upper middle class is such as to guard him solicitously against the awareness of death and doom. He is brought up in an atmosphere of Santa Claus; and when he learns that Santa Claus is a myth, he cries bitterly. Indeed, he never fully accepts the removal of this deity from his Pantheon, and spends much of his later life in the search for some emotional substitute. The fact of individual death, the imminence of calamity, are forced upon him by the experiences of his later years. Nevertheless, he tries to relegate these unfortunate realities to the role of accidents, and to build up a Heaven on Earth in which unpleasantness has no place. This Heaven on Earth consists for him in an eternal progress, and a continual ascent to Bigger and Better Things. /
into an indefinite period of invention, of the discovery of new techniques for controlling the human environment. This, the believers in progress say, will go on and on without any visible termination in a future not too remote for human contemplation. Most of us are too close to the idea of progress to take cognizance either of the fact that this belief belongs only to a small part of recorded history, or of the other fact, that it represents a sharp break with our own religious professions and traditions. Neither for the Catholic, the Protestant, nor for the Jew, is the world a good place in which an enduring happiness is to be expected. The church offers its pay for virtue, not in any coin which passes current among the Kings of the Earth, but as a promissory note on Heaven. In essence, the Calvinist accepts this too, with the additional dark note that the Elect of God who shall pass the dire final examination of Judgment Day are few, and are to be selected by His arbitrary decree. To secure this, no virtues on earth, no moral righteousness, may be expected to be of the slightest avail. Many a good man will be damned. The blessedness which the Calvinists do not expect to find for themselves even in Heaven, they certainly do not await on earth. The Hebrew prophets are far from cheerful in their evaluation of the future of mankind, or even of their chosen Israel; and the great morality play of Job, while it grants him a victory of the spirit, and while the Lord deigns to return to him his flocks and his servants and his wives, nevertheless gives no assurance that such a
relatively happy outcome will take place except through the arbitrariness of God. The Communist, like the believer in progress, looks for his Heaven on Earth, rather than as a personal reward to be drawn on in a post-earthly individual existence. Nevertheless, he believes that this Heaven on Earth will not come of itself without a struggle. He is just as skeptical of the Big Rock Candy Mountains of the Future as of the Pie in the Sky when you Die. Nor is Islam, whose very name means resignation to the will of God, any more receptive to the ideal of progress. Of Buddhism, with its hope for Nirvana and a release from the external Wheel of Circumstance, I need say nothing; it is inexorably opposed to the idea of progress, and this is equally true for all the kindred religions of India. Besides the comfortable passive belief in progress, which many Americans shared at the end of the nineteenth century, there is another one which seems to have a more masculine, vigorous connotation. To the average American, progress means the winning of the West. It means the economic anarchy of the frontier, and the vigorous prose of Owen Wister and Theodore Roosevelt. Historically the frontier is, of course, a perfectly genuine phenomenon. For many years, the development of the United States took place against the background of the empty land that always lay further to the West. Nevertheless, many of those who have waxed poetic concerning this frontier have been praisers of the past. Already in 1890, the census takes cognizance of the end of the true frontier conditions. The geographical limits of the great backlog of unconsumed and unbespoken resources of the country had clearly been set. It is difficult for the average person to achieve an historical perspective in which progress shall have been reduced to its proper dimensions. The musket with which most of the Civil War was fought was
70 44 TIlE HUMAN USE OF HUMAN BEINGS only a slight improvement over that carried at Waterloo, and that in turn was nearly interchangeable with the Brown Bess of Marlborough's army in the Low Countries. Nevertheless, hand firearms had existed since the fifteenth century or earlier, and cannon more than a hundred years earlier still. It is doubtful whether the smooth bore musket ever much exceeded in range the best of the longbows, and it is certain that it never equaled them in accuracy nor in speed of fire; yet the longbow is the almost unimproved invention of the Stone Age. Again, while the art of shipbuilding had by no means been completely stagnant, the wooden man-of-war, just before it left the seas, was of a pattern which had been fairly unchanged in its essentials since the early seventeenth century, and which even then displayed an ancestry going back many centuries more. One of Columbus' sailors would have been a valuable able seaman aboard Farragut's ships. Even a sailor from the ship that took Saint Paul to Malta would have been quite reasonably at home as a forecastle hand on one of Joseph Conrad's barks. A Roman cattleman from the Dacian frontier would have made quite a competent vaquero to drive longhorn steers from the plains of Texas to the terminus of the railroad, although he would have been struck with astonishment with what he found when he got there. A Babylonian administrator of a temple estate would have needed no training either in bookkeeping or in the handling of slaves to run an early Southern plantation. In short, the period during which the main conditions of life for the vast majority of men have been subject to repeated and revolutionary changes had not even begun until the Renaissance and the great voyages, and did not assume anything like the accelerated pace which we now take for granted until well into the nineteenth century. Under these circumstances, there is no use in looking anywhere in earlier history for parallels to the success-
71 CYBERNETICS AND SOCIETY 45 ful inventions of the steam engine, the steamboat, the locomotive, the modern smelting of metals, the telegraph, the transoceanic cable, the introduction of electric power, dynamite and the modern high explosive missile, the airplane, the electric valve, and the atomic bomb. The inventions in metallurgy which heralded the origin of the Bronze Age are neither so concentrated in time nor so manifold as to offer a good counter-example.. Now, scientific history and scientific sociology are based on the notion that the various special cases treated have a sufficient similarity for the social mechanisms of one period to be relevant to those of another. However, it is certainly true that the whole scale of phenomena has changed sufficiently since the beginning of modern history to preclude any easy transfer to the present time of political, racial, and economic notions derived from earlier stages. What is almost as obvious is that the modern period beginning with the age of discovery is itself highly heterogeneous. In the age of discovery Europe had become aware for the first time of the existence of great thinly-settled areas capable of taking up a population exceeding that of Europe itself; a land full of unexplored resources, not only of gold and silver but of the other commodities of commerce as well. These resources seemed inexhaustible, and indeed on the scale on which the society of 1500 moved, their exhaustion and the saturation of the population of the new countries were very remote. Four hundred and fifty years is farther than most people choose to look ahead. However, the existence of the new lands encouraged an attitude not unlike that of Alice's Mad Tea Party.
72 .. 46 THE HUMAN USE OF HUMAN BEINGS When the tea and cakes were exhausted at one seat, the natural thing for the Mad Hatter and the March Hare was to move on and occupy the next seat. When Alice inquired what would happen when they came around to their original positions again, the March Hare changed the subject. To those whose past span of history was less than five thousand years and who were expecting that the Millennium and the Final Day of Judgment might overtake them in far less time, this Mad Hatter policy seemed most sensible. As time passed, the tea table of the Americas had proved not to be inexhaustible; and as a matter of fact, the rate at which one seat has been abandoned for the next has been increasing at what is probably a still increasing pace. result. We are the slaves of our technical improvement and we can no more return a New Hampshire farm to the self-contained state in which it was maintained in 1800 than we can, by taking thought, add a cubit to our stature or, what is more to the point, diminish it. We have modified our environment so radically that we must " now modify ourselves in order to exist in this new en- ;1 vironment. We can no longer live in the old one. Progress imposes not only new possibilities for the future but new restrictions. It seems almost as if progress itself and our fight against the increase of entropy in-
73 CYBERNETICS AND SOCIETY 47 trinsically must end in the downhill path from which we are trying to escape. Yet this pessimistic sentiment is only conditional upon our blindness and inactivity, for I am convinced that once we become aware of the new needs that a new environment has imposed upon us, as well as the new means of meeting these needs that are at our disposal, it may be a long time yet before our civilization and our human race perish, though perish they will even as all of us are born to die. However, the prospect of a final death is far from a complete frustration of life and this is equally true for a civilization and for the human race as it is for any of its component individuals. May we have the courage to face the eventual doom of our civilization as we have the courage to face the certainty of our personal doom. The simple faith in progress is not a conviction belonging to strength, but one belonging to acquiescence and hence to weakness.
III
RIGIDITY AND LEARNING: TWO PATTERNS OF COMMUNICATIVE BEHAVIOR

Certain kinds of machines and some living organisms, particularly the higher living organisms, can, as we have seen, modify their patterns of behavior on the basis of past experience so as to achieve specific anti-entropic ends. In these higher forms of communicative organisms the environment, considered as the past experience of the individual, can modify the pattern of behavior into one which in some sense or other will deal more effectively with the future environment. In other words, the organism is not like the clockwork monad of Leibnitz with its pre-established harmony with the universe, but actually seeks a new equilibrium with the universe and its future contingencies. Its present is unlike its past and its future unlike its present. In the living organism as in the universe itself, exact repetition is absolutely impossible. The work of Dr. W. Ross Ashby is probably the greatest modern contribution to this subject.

The living organism is not a creature with a head at each end and no concern with where it is going. It moves ahead from a known past into an unknown future, and this future is not interchangeable with that past.

Let me give still another example of feedback which will clarify its function with respect to learning. When the great control rooms at the locks of the Panama Canal are in use, they are two-way message centers. Not only do messages go out controlling the motion of the tow locomotives, the opening and closing of the sluices, and the opening and closing of the gates; but the control room is full of telltales which indicate not merely that the locomotives, the sluices, and the gates have received their orders, but that they have in fact effectively carried out these orders. If this were not the case, the lock master might very easily assume that an order to a towing locomotive had been carried out when in fact the locomotive had stopped, and might rush the huge mass of a battleship into the gates, or might cause any one of a number of similar catastrophes to take place.
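In present-day terms, what the control room implements is a command-and-acknowledgment protocol: no order is trusted until its telltale has come back. The following Python sketch is an invented illustration of the idea, not a description of the Canal's actual machinery; the device names and the failure behavior are assumptions.

# A minimal sketch of two-way control: orders go out, and telltales
# must come back before the next step is taken. One-way control would
# simply assume that every order succeeded.

class Device:
    def __init__(self, name, working=True):
        self.name = name
        self.working = working
        self.state = "idle"

    def command(self, order):
        # The order is carried out only if the device is working;
        # either way, the device reports what actually happened.
        if self.working:
            self.state = order
        return self.state  # the "telltale" sent back to the control room

def run_lockage(orders):
    for device, order in orders:
        telltale = device.command(order)
        # Two-way control: check the telltale against the order given.
        if telltale != order:
            return f"HALT: {device.name} did not carry out '{order}'"
    return "lockage completed"

locomotive = Device("tow locomotive", working=False)  # simulate a stall
sluice = Device("sluice")
print(run_lockage([(locomotive, "pull ahead"), (sluice, "open")]))
# -> HALT: tow locomotive did not carry out 'pull ahead'

The design point is the one the text makes: the safety of the whole system lives in the comparison between the order sent and the report returned, not in the orders themselves.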
This principle in control applies not merely to the Panama locks, but to states, armies, and individual human beings. When, in the American Revolution, orders already drawn up had failed through carelessness to go out from England commanding a British army to march down from Canada to meet another British army marching up from New York at Saratoga, Burgoyne's forces met a catastrophic defeat which a well-conceived program of two-way communication would have avoided. It follows that administrative officials, whether of a government or a university or a corporation, should take part in a two-way stream of communication, and not merely in one descending from the top. Otherwise, the top officials may find that they have based their policy on a complete misconception of the facts that their underlings possess. Again, there is no task harder for a lecturer than to speak to a dead-pan audience. The purpose of applause in the theater, and it is essential, is to establish in the performer's mind some modicum of two-way communication.

This matter of social feedback is of very great sociological and anthropological interest. The patterns of communication in human societies vary widely. There are communities like the Eskimos, among whom there seems to be no chieftainship and very little subordination, so that the basis of the social community is simply the common desire to survive against enormous odds of climate and food supply. There are socially stratified communities such as are found in India, in which the means of communication between two individuals are closely restricted and modified by their ancestry and position. There are communities ruled by despots, in which every relation between two subjects becomes secondary to the relation between the subject and his king. There are the hierarchical feudal communities of lord and vassal, and the very special techniques of social communication which they involve.

Most of us in the United States prefer to live in a moderately loose social community, in which the blocks to communication among individuals and classes are not too great. I will not say that this ideal of communication is attained in the United States. Until white supremacy ceases to belong to the creed of a large part of the country, it will be an ideal from which we fall short. Yet even this modified, formless democracy is too anarchic for many of those who make efficiency their first ideal. These worshipers of efficiency would like to have each man move in a social orbit meted out to him from his childhood, and perform a function to which he is bound as the serf was bound to the clod. Within the American social picture, it is shameful to have these yearnings and this denial of the opportunities implied by an uncertain future. Accordingly, many of those who are most attached to this orderly state of permanently allotted functions would be confounded if they were forced to admit it publicly. They are only in a position to display their clear preferences through their actions. Yet these actions stand out distinctly enough. The businessman who separates himself from his employees by a shield of yes-men, or the head of a big laboratory who assigns each subordinate a particular problem and begrudges him the privilege of thinking for himself, so that he might move beyond his immediate problem and perceive its general relevance, show that the democracy to which they pay their respects is not really the order in which they would prefer to live. The regularly ordered state of pre-assigned functions toward which they gravitate is suggestive of the Leibnitzian automata, and does not suggest the irreversible movement into a contingent future which is the true condition of human life. It suggests rather the ant community, in which each member has his proper occupation: in which rulers are perpetually rulers, soldiers perpetually soldiers, the peasant is never more than a peasant, and the worker is doomed to be a worker.

It is a thesis of this chapter that the ant's rigidity of behavior is imposed by its physical make-up, so that its pattern of life, once adopted, cannot greatly change. On the other hand, I wish to show that the human individual, capable of vast learning and study, which may occupy almost half of his life, is physically equipped, as the ant is not, for this capacity. Variety and possibility are inherent in the human sensorium, and are indeed the key to man's most noble flights, because variety and possibility belong to the very structure of the human organism.

I am afraid that I am convinced that a community of human beings is a far more useful thing than a community of ants; and that if the human being is condemned and restricted to perform the same functions over and over again, he will not even be a good ant, not to mention a good human being. Those who would organize us according to permanent individual functions and permanent individual restrictions condemn the human race to move at much less than half-steam.

Let us now turn to a discussion of the restrictions on the make-up of the ant which have turned the ant community into the very special thing it is. These restrictions have a deep-seated origin in the anatomy and the physiology of the individual insect. Both the insect and the man are air-breathing forms, and represent the end of a long transition from the easygoing life of the water-borne animal to the much more exacting demands of the land-bound. This transition from water to land, wherever it has occurred, has involved radical improvements in breathing, in the circulation generally, in the mechanical support of the organism, and in the sense organs.

The mechanical reinforcement of the bodies of land animals has taken place along several independent lines. In the case of most of the mollusks, as well as in the case of certain other groups which, though unrelated, have taken on a generally mollusk-like form, part of the outer surface secretes a non-living mass of calcareous tissue, the shell. This grows by accretion from an early stage in the animal until the end of its life. The spiral and helical forms of these groups need only this process of accretion to account for them. If the shell is to remain an adequate protection for the animal, and the animal grows to any considerable size in its later stages, the shell must be a very appreciable burden, suitable only for land animals of the slowly moving and inactive life of the snail. In other shell-bearing animals, the shell is lighter and less of a load, but at the same time much less of a protection. The shell structure, with its heavy mechanical burden, has had only a limited success among land animals.

Man himself represents another direction of development, a direction found throughout the vertebrates, and at least indicated in invertebrates as highly developed as the limulus and the octopus. In all these forms, certain internal parts of the connective tissue assume a consistency which is no longer fibrous, but rather that of a very hard, stiff jelly. These parts of the body are called cartilage, and they serve to attach the powerful muscles which animals need for an active life. In the higher vertebrates, this primary cartilaginous skeleton serves as a temporary scaffolding for a skeleton of much harder material, namely bone, which is even more satisfactory for the attachment of powerful muscles. These skeletons, of bone or cartilage, contain a great deal of tissue which is not in any strict sense alive, but throughout this mass of intercellular tissue there is a living structure of cells, cellular membranes, and nutritive blood vessels.

The vertebrates have developed not only internal skeletons, but other features as well which suit them for active life. Their respiratory system, whether it takes the form of gills or of lungs, is beautifully adapted to the active interchange of oxygen between the external medium and a blood which is made much more efficient than the average invertebrate blood by having its oxygen-carrying respiratory pigment concentrated in corpuscles. This blood is pumped through a closed system of vessels, rather than through an open system of irregular sinuses, by a heart of relatively high efficiency.

The insects and crustaceans, and in fact all the arthropods, are built for quite another kind of growth. The outer wall of the body is surrounded by a layer of chitin secreted by the cells of the epidermis. This chitin is a stiff substance rather closely related to cellulose. At the joints the layer of chitin is thin and moderately flexible, but over the rest of the animal it becomes that hard external skeleton which we see on the lobster and the cockroach. An internal skeleton such as man's can grow with the animal. An external skeleton (unless, like the shell of the snail, it grows by accretion) cannot. It is dead tissue, and possesses no intrinsic capability of growth. It serves to give a firm protection to the body and an attachment for the muscles, but it amounts to a strait jacket. Internal growth among the arthropods can be converted into external growth only by discarding the old strait jacket and by developing under it a new one, which is initially soft and pliable and can take a slightly new and larger form, but which very soon acquires the rigidity of its predecessor. In other words, the stages of growth are marked by definite moults, relatively frequent in the crustacean and much less so in the insect. There are several such stages possible during the larval period. The pupal period represents a transitional moult, in which the wings, which have not been functional in the larva, develop internally toward a functional condition. This condition is realized when the pupal stage ends, and the moult which terminates it gives rise to a perfect adult. The adult never moults again. It is in its sexual stage; and although in most cases it remains capable of taking nourishment, there are insects in which the adult mouth-parts and the digestive tube are aborted, so that the imago, as it is called, can only mate, lay eggs, and die.

The nervous system takes part in this process of tearing down and building up. While there is a certain amount of evidence that some memory persists from the larva through to the imago, this memory cannot be very extensive. The physiological condition for memory, and hence for learning, seems to be a certain continuity of organization, which allows the alterations produced by outer sense impressions to be retained as more or less permanent changes of structure or function. Metamorphosis is too radical to leave much lasting record of these changes. It is indeed hard to conceive of a memory of any precision which could survive this process of radical internal reconstruction.

There is another limitation on the insect, which is due to its method of respiration and circulation. The heart of the insect is a very poor and weak tubular structure, which opens, not into well-defined blood vessels, but into vague cavities or sinuses conveying the blood to the tissues. This blood is without pigmented corpuscles, and carries its blood pigments in solution. This mode of transferring oxygen seems to be definitely inferior to the corpuscular method. In addition, the insect method of oxygenating the tissues makes at most only local use of the blood. The body of the animal contains a system of branched tubules, carrying air directly from the outside into the tissues to be oxygenated. These tubules are stiffened against collapse by spiral fibers of chitin, and are thus passively open, but there is nowhere evidence of an active and effective system of air pumping.
Respiration occurs by diffusion alone. Notice that the same tubules carry by diffusion the good air in, and the spent air, polluted with carbon dioxide, out to the surface. In a diffusion mechanism, the time of diffusion varies not as the length of the tube, but as the square of the length. Thus, in general, the efficiency of this system tends to fall off very rapidly with the size of the animal, and falls below the point of survival for an animal of any considerable size. So not only is the insect structurally incapable of a first-rate memory; it is also structurally incapable of an effective size.
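The square law can be made concrete. For a substance with diffusion coefficient D, the characteristic time to cross a distance L by diffusion alone is on the order of t = L^2 / (2D). A rough Python illustration, with D an assumed order-of-magnitude value for oxygen in air:

import math  # not strictly needed; kept for clarity of units

# Diffusion time grows as the square of the distance to be crossed.
# D is an assumed, approximate value for oxygen diffusing in air.
D = 0.2  # cm^2 per second

for L_cm in [0.1, 0.5, 1.0, 5.0, 10.0]:
    t = L_cm**2 / (2 * D)  # characteristic time, t ~ L^2 / 2D
    print(f"tube length {L_cm:5.1f} cm -> roughly {t:8.1f} s")

# Doubling the tube length quadruples the time: a tracheal system that
# serves a beetle comfortably would suffocate an animal ten times larger.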
To know the significance of this limitation in size, let us compare two artificial structures, the cottage and the skyscraper. The ventilation of a cottage is quite adequately taken care of by the leak of air around the window frames, not to mention the draft of the chimney. No special ventilation system is necessary. On the other hand, in a skyscraper with rooms within rooms, a shutdown of the system of forced ventilation would be followed in a very few minutes by an intolerable foulness of the air in the working spaces. Diffusion and even convection are no longer enough to ventilate such a structure. The absolute maximum size of an insect is smaller than that attainable by a vertebrate. On the other hand, the ultimate elements of which the insect is composed are not always smaller than they are in man, or even in a whale. The nervous system partakes of this small size, and yet consists of neurons not much smaller than those in the human brain, though there are many fewer of them, and their structure is far less complex. In the matter of intelligence, we should expect that it is not only the relative size of the nervous system that counts, but in large measure its absolute size. There is simply no room in the reduced structure of an insect for a nervous system of great complexity, nor for a large stored memory.

In view of the impossibility of a large stored memory, as well as of the fact that the youth of an insect such as the ant is spent in a form which is insulated from the adult phase by the intermediate catastrophe of metamorphosis, there is no opportunity for the ant to learn much. Add to this that its behavior in the adult stage must be substantially perfect from the beginning, and it becomes clear that the instructions received by the insect nervous system must be pretty much a result of the way it is built, and not of any personal experience. Thus the insect is rather like the kind of computing machine whose instructions are all set forth in advance on the "tapes," and which has next to no feedback mechanism to see it through the uncertain future. The behavior of an ant is much more a matter of instinct than of intelligence. The physical strait jacket in which an insect grows up is directly responsible for the mental strait jacket which regulates its pattern of behavior.

Here the reader may say: "Well, we already know that the ant as an individual is not very intelligent, so why all this fuss about explaining why it cannot be intelligent?" The answer is that an account of why the ant cannot be intelligent also indicates what conditions a mechanism must satisfy if it is to be intelligent. Theoretically, if we could build a machine whose mechanical structure duplicated human physiology, then we could have a machine whose intellectual capacities would duplicate those of human beings.

In the matter of rigidity of behavior, the greatest contrast to the ant is not merely the mammal in general, but man in particular. It has frequently been observed that man is a neoteinic form: that is, that if we compare man with the great apes, his closest relatives, we find that mature man in hair, head, shape, body proportions, bony structure, muscles, and so on, is more like the newborn ape than the adult ape. Among the animals, man is a Peter Pan who never grows up. This immaturity of anatomical structure corresponds to man's prolonged childhood. Physiologically, man does not reach puberty until he has already completed a fifth of his normal span of life. Let us compare this with the ratio in the case of a mouse, which lives three years and starts breeding at the end of three months. This is a ratio of twelve to one. The mouse's ratio is much more nearly typical of the large majority of mammals than is the human ratio. Puberty for most mammals either represents the end of their epoch of tutelage, or lies well beyond it. In our community, man is recognized as immature until the age of twenty-one, and the modern period of education for the higher walks of life continues until about thirty, actually beyond the time of greatest physical strength. Man thus spends what may amount to forty per cent of his normal life as a learner, again for reasons that have to do with his physical structure. It is as completely natural for a human society to be based on learning as for an ant society to be based on an inherited pattern.
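The arithmetic behind these ratios can be checked in a few lines. The figures below are the round numbers used in the text, with a seventy-year human span assumed so that "a fifth" comes out to fourteen years:

# Fraction of a lifetime spent immature, using the text's round figures.
mouse_tutelage_months, mouse_life_months = 3, 36
human_puberty_years, human_education_years, human_life_years = 14, 30, 70

print(f"mouse: {mouse_tutelage_months / mouse_life_months:.0%} of life before breeding")  # 1/12
print(f"man:   {human_puberty_years / human_life_years:.0%} of life before puberty")      # 1/5
print(f"man:   {human_education_years / human_life_years:.0%} of life as a learner")      # near forty per cent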
Man, like all other organisms, lives in a contingent universe, but man's advantage over the rest of nature is that he has the physiological, and hence the intellectual, equipment to adapt himself to radical changes in his environment. The human species is strong only insofar as it takes advantage of the innate, adaptive, learning faculties that its physiological structure makes possible.

We have already indicated that effective behavior must be informed by some sort of feedback process, telling it whether it has equaled its goal or fallen short. The simplest feedbacks deal with gross successes or failures of performance, such as whether we have actually succeeded in grasping an object that we have tried to pick up, or whether the advance guard of an army is at the appointed place at the appointed time. However, there are many other forms of feedback of a more subtle nature. It is often necessary for us to know whether a whole policy of conduct, a strategy so to say, has proved successful or not. The feedback by which we judge whether an animal we are training has learned to traverse a maze is a feedback of this kind: a feedback on a higher level, a feedback of policies and not of simple actions. It differs from more elementary feedbacks in what Bertrand Russell would call its "logical type." This pattern of behavior may also be found in machines.

A recent innovation in the technique of telephonic switching provides an interesting mechanical analogy to man's adaptive faculty. Throughout the telephone industry, automatic switching is rapidly completing its victory over manual switching, and it may seem to us that the existing forms of automatic switching constitute a nearly perfect process. Nevertheless, a little thought will show that the present process is very wasteful of equipment. The number of people with whom I actually wish to talk over the telephone is limited, and in large measure is the same limited group day after day and week after week. I use most of the telephone equipment available to me to communicate with members of this group. Now, as the present technique of switching generally goes, the process of reaching one of the people whom we call up four or five times a day is in no way different from the process of reaching those people with whom we may never have a conversation. From the standpoint of balanced service, we are using either too little equipment to handle the frequent calls or too much to handle the infrequent calls, a situation which reminds me of Oliver Wendell Holmes' poem on the "one-hoss shay." This hoary vehicle, as you will recollect, after one hundred years of service showed itself to be so carefully designed that neither wheel, nor top, nor shafts, nor seat contained any part which manifested an uneconomical excess of wearing power over any other part. Actually, the "one-hoss shay" represents the pinnacle of engineering, and is not merely a humorous fantasy. If the tires had lasted a moment longer than the spokes, or the dashboard than the shafts, these parts would have carried into disuse certain economic values. These values could either have been reduced without hurting the durability of the vehicle as a whole, or they could have been distributed throughout the entire vehicle to make the whole thing last longer. Indeed, any structure not of the nature of the "one-hoss shay" is wastefully designed.

This means that for the greatest economy of service it is not desirable that the process of my connection with Mr. A, whom I call up three times a day, and with Mr. B, who is for me only an unnoticed item in the telephone directory, should be of the same order. If I were allotted a slightly more direct means of connection with Mr. A, then the time wasted in having to wait twice as long for Mr. B would be more than compensated for. If, then, it is possible without excessive cost to devise an apparatus which will record my past conversations, and reapportion to me a degree of service corresponding to the frequency of my past use of the telephone channels, I should obtain a better service, or a less expensive one, or both. The Philips Lamp Company in Holland has succeeded in doing this. The quality of its service has been improved by means of a feedback of Russell's so-called "higher logical type." Such a system is capable of greater variety and more adaptability, and it deals more effectively than conventional equipment with the entropic tendency for the more probable to overwhelm the less probable.
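In modern vocabulary this is an adaptive, frequency-weighted allocation policy, a cousin of a frequency-based cache. The sketch below is an invented toy, not the Philips mechanism, which the text does not specify: it merely keeps the most frequently called numbers on a small set of hypothetical fast circuits.

from collections import Counter

# An invented illustration of frequency-based reallocation: the most
# frequently called numbers get the scarce fast circuits; everyone
# else goes through the slower general exchange.
class AdaptiveExchange:
    def __init__(self, fast_lines=2):
        self.fast_lines = fast_lines
        self.history = Counter()  # feedback: a record of past calls

    def place_call(self, number):
        self.history[number] += 1
        # Reapportion service according to the observed frequencies.
        frequent = {n for n, _ in self.history.most_common(self.fast_lines)}
        return "fast circuit" if number in frequent else "general exchange"

exchange = AdaptiveExchange()
for number in ["Mr. A", "Mr. A", "Mr. B", "Mr. A", "Mr. C", "Mr. A"]:
    print(number, "->", exchange.place_call(number))
# After a few calls, Mr. A is consistently routed over the fast circuit.

The feedback here acts on the allocation policy, not on any single call, which is exactly the sense in which it belongs to a "higher logical type."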
I repeat: feedback is a method of controlling a system by reinserting into it the results of its past performance; and when the information which proceeds backward from the performance is able to change the general method and pattern of the performance, we have a process which may well be called learning.

Another example of the learning process appears in connection with the problem of the design of prediction machines. At the beginning of World War II, the comparative inefficiency of anti-aircraft fire made it necessary to introduce apparatus which would follow the position of an airplane, compute its distance, determine the length of time before a shell could reach it, and figure out where it would be at the end of that time. If the plane were able to take a perfectly arbitrary evasive action, no amount of skill would permit us to fill in the as yet unknown motion of the plane between the time when the gun was fired and the time when the shell should arrive approximately at its goal. However, under many circumstances the aviator either does not, or cannot, take arbitrary evasive action. He is limited by the fact that if he makes a rapid turn, centrifugal force will render him unconscious, and by the other fact that the control mechanism of his plane and the course of instructions which he has received practically force upon him certain regular habits of control which show themselves even in his evasive action. These regularities are not absolute, but are rather statistical preferences which appear most of the time. They may be different for different aviators, and they will certainly be different for different planes.
Let us remember that in the pursuit of a target as rapid as an airplane, there is no time for the computer to take out his instruments and figure where the plane is going to be. All the figuring must be built into the gun control itself. This figuring must include data which depend on our past statistical experience of airplanes of a given type under varying flight conditions. The present stage of anti-aircraft fire consists in an apparatus which uses either fixed data of this sort, or a selection among a limited number of such fixed data. The proper choice among these may be switched in by the voluntary action of the gunner.

However, there is another stage of the control problem which may also be dealt with mechanically. The problem of determining the flight statistics of a plane from the actual observation of its flight, and then of transforming these into rules for controlling the gun, is itself a definite and mathematical one. Compared with the actual pursuit of the plane in accordance with given rules, it is a relatively slow action, and involves a considerable observation of the past flight of the airplane. It is nevertheless not impossible to mechanize this long-time action as well as the short-time action. We may thus construct an anti-aircraft gun which itself observes the statistics of the motion of the target plane, which then works these into a system of control, and which finally adopts this system of control as a quick way of adjusting its position to the observed position and motion of the plane. To my knowledge this has not yet been done, but it is a problem which lies along the lines we are here considering, and which I expect to see applied to other problems of prediction. The adjustment of the general plan of pointing and firing the gun according to the particular system of motions which the target has made is essentially an act of learning. It is a change in the taping of the gun's computing mechanism, which alters not so much the numerical data as the process by which they are interpreted. It is, in fact, a very general sort of feedback, affecting the whole method of behavior of the instrument.
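The two timescales distinguished here, slow statistical observation and fast pursuit, can be sketched with an ordinary least-squares fit: the slow step estimates the target's flight statistics (reduced here to a straight-line model) from past observations, and the fast step uses the fitted model to aim ahead of the target. This is a toy stand-in for the actual fire-control mathematics, with all numbers invented.

import numpy as np

# Slow step: estimate the target's flight statistics from its observed
# past positions. Here the "statistics" are just a fitted speed and offset.
t_past = np.arange(10.0)  # seconds of observation
x_past = 3.0 * t_past + 5.0 + np.random.normal(0.0, 0.2, t_past.size)  # noisy track

A = np.vstack([t_past, np.ones_like(t_past)]).T
(speed, x0), *_ = np.linalg.lstsq(A, x_past, rcond=None)  # least-squares fit

# Fast step: use the fitted model to aim at where the target will be
# when the shell arrives, not where it is now.
time_of_flight = 4.0          # seconds for the shell to reach the target
t_fire = t_past[-1]
aim_point = speed * (t_fire + time_of_flight) + x0
print(f"estimated speed {speed:.2f}; aim at x = {aim_point:.1f}")

Refitting the model as new observations of the same target come in would be the "change in taping" of the passage: the numerical data stay what they are, but the rule that interprets them is revised.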
The advanced process of learning which we have here discussed is still limited by the mechanical conditions of the system in which it occurs, and clearly does not correspond to the normal process of learning in man. But from it we can infer quite different ways in which learning of a complex sort can be mechanized. Such indications are given respectively by the Lockean theory of association and by Pavlov's theory of the conditioned reflex. Before I take these up, however, I wish to make some general remarks to cover in advance certain criticisms of the suggestions that I shall present.

Let me recount the basis on which it is possible to develop a theory of learning. By far the greater part of the work of the nerve physiologist has been on the conduction of impulses by nerve fibers or neurons, and this process is given as an all-or-none phenomenon. That is, if a stimulus reaches the point or threshold where it will travel along a nerve fiber at all, and not die out in a relatively short distance, the effect which it produces at a comparatively remote point on the nerve fiber is substantially independent of its initial strength. These nerve impulses travel from fiber to fiber across connections known as synapses, in which one ingoing fiber may come in contact with many outgoing fibers, and one outgoing fiber with many ingoing fibers. In these synapses, the impulse given by a single incoming nerve fiber is often not enough to produce an effective outgoing impulse. In general, if the impulses arriving at a given outgoing fiber by incoming synaptic connections are too few, the outgoing fiber will not respond. When I say too few, I do not necessarily mean that all incoming fibers act alike, nor even that, for any given set of active incoming synaptic connections, the question of whether the outgoing fiber will respond is settled once for all. I also do not intend to ignore the fact that some incoming fibers, instead of tending to produce a stimulus in the outgoing fibers with which they connect, may tend to prevent those fibers from accepting new stimuli.
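The all-or-none picture sketched in this paragraph is essentially the formal neuron of McCulloch and Pitts, and it can be written down in a few lines. The weights and threshold below are arbitrary illustrative choices; a negative weight plays the part of an inhibitory incoming fiber.

# A McCulloch-Pitts-style threshold unit: the outgoing fiber fires only
# if the weighted sum of the simultaneously active incoming fibers
# reaches the synaptic threshold. All numbers here are arbitrary.
def fires(inputs, weights, threshold):
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0  # all-or-none output

weights = [1.0, 1.0, -2.0]   # two excitatory fibers, one inhibitory
threshold = 2.0

print(fires([1, 1, 0], weights, threshold))  # 1: two excitations suffice
print(fires([1, 0, 0], weights, threshold))  # 0: a single impulse is too few
print(fires([1, 1, 1], weights, threshold))  # 0: the inhibitory fiber blocks it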
Be that as it may, while the problem of the conduction of impulses along a fiber may be described in a rather simple way as an all-or-none phenomenon, the problem of the transmission of an impulse across a layer of synaptic connections depends on a complicated pattern of responses, in which certain combinations of incoming fibers, firing within a certain limited time, will cause the message to go further, while certain other combinations will not. These combinations are not a thing fixed once for all, nor do they even depend solely on the past history of messages received into that synaptic layer. They are known to change with temperature, and may well change with many other things.

This view of the nervous system corresponds to the theory of those machines that consist in a sequence of switching devices, in which the opening of a later switch depends on the action of precise combinations of earlier switches leading into it, which open at the same time. This all-or-none machine is called a digital machine. It has great advantages over the other class of machines, the analogy machines, so called because they operate on the basis of analogous connections between the measured quantities and the numerical quantities supposed to represent them. An example of an analogy machine is the slide rule, in contrast with the desk computing machine, which operates digitally. Those who have used a slide rule know that the scale on which the marks have to be printed and the accuracy of our eyes set sharp limits to the precision with which the rule can be read. These limits are not as easily extended as one might think, by making the slide rule larger. A ten-foot slide rule will give only one decimal place more accuracy than a one-foot slide rule; and in order to achieve this, not only must each foot of the larger rule be constructed with the same precision as the smaller one, but the alignment of these successive feet must conform to the degree of accuracy expected of each one-foot rule. Furthermore, the problems of keeping the larger rule rigid are much greater than those which we find in the case of the smaller rule, and serve to limit the increase in accuracy which we get by increasing the size. In other words, for practical purposes, machines that measure, as opposed to machines that count, are very greatly limited in their precision.
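The slide-rule remark is a statement about analog precision. Positions on the rule represent logarithms, so a fixed error in reading position translates into a relative error in the result that shrinks only in proportion to the rule's length; a tenfold increase in length therefore buys a single additional decimal digit. A rough computation, with the reading error an assumed figure:

import math

# Relative error of a slide-rule result as a function of rule length.
# On a logarithmic scale of length L covering one decade, a position
# error dx corresponds to a relative error of about dx * ln(10) / L.
read_error_cm = 0.025  # assumed: a quarter-millimeter reading error

for length_cm in [30.0, 300.0]:  # roughly a one-foot and a ten-foot rule
    rel_error = read_error_cm * math.log(10) / length_cm
    digits = -math.log10(rel_error)  # usable significant figures
    print(f"{length_cm:5.0f} cm rule: about {digits:.1f} significant digits")
# Ten times the length yields just one more digit; counting machines
# gain digits far more cheaply than measuring machines.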
Add this to the prejudices of the physiologist in favor of all-or-none action, and we see why the greater part of the work which has been done on mechanical simulacra of the brain has been on machines which are more or less on a digital basis. However, if we insist too strongly on the brain as a glorified digital machine, we shall be subject to some very just criticism, coming in part from the physiologists and in part from the somewhat opposite camp of those psychologists who prefer not to make use of the machine comparison. I have said that in a digital machine there is a taping which determines the sequence of operations to be performed, and that a change in this taping on the basis of past experience corresponds to a learning process. In the brain, the clearest analogy to taping is the determination of the synaptic thresholds, of the precise combinations of incoming neurons which will fire an outgoing neuron with which they are connected. We have already seen that these thresholds are variable with temperature, and we have no reason to believe that they may not be variable with the chemistry of the blood and with many other phenomena which are not themselves originally of an all-or-none nature. It is therefore necessary, in considering the problem of learning, that we should be most wary of assuming an all-or-none theory of the nervous system without having made an intellectual criticism of the notion, and without specific experimental evidence to back the assumption.

It will often be said that there is no theory of learning whatever that will be reasonable for the machine. It will also be said that, in the present stage of our knowledge, any theory of learning which I may offer will be premature, and will probably not correspond to the actual functioning of the nervous system. I wish to walk a middle path between these two criticisms. On the one hand, I wish to give a method of constructing learning machines, a method which will not only enable me to build certain special machines of this type, but will give me a general engineering technique for constructing a very large class of such machines. Only if I reach this degree of generality will I have defended myself in some measure from the criticism that the mechanical process which I claim is similar to learning is in fact something of an essentially different nature from learning. On the other hand, I wish to describe such machines in terms which are not too foreign to the actual observables of the nervous system and of human and animal conduct. I am quite aware of the speculative character of what follows.

Locke, at the end of the seventeenth century, considered that the content of the mind was made up of what he calls ideas. The mind for him is entirely passive, a clean blackboard, a tabula rasa, on which the experiences of the individual write their own impressions. If these impressions appear often, either under circumstances of simultaneity, or in a certain sequence, or in situations which we ordinarily attribute to cause and effect, then, according to Locke, these impressions or ideas will form complex ideas, with a certain positive tendency of the component elements to stick together. The mechanism by which the ideas stick together lies in the ideas themselves; but there is throughout Locke's writing a singular unwillingness to describe such a mechanism. His theory can bear only the sort of relation to reality that a picture of a locomotive bears to a working locomotive. It is a diagram without any working parts. This is not remarkable when we consider the date of Locke's theory. It was in astronomy, and not in engineering or in psychology, that the dynamic point of view, the point of view of working parts, first reached its importance; and this was at the hands of Newton, who was not a predecessor of Locke, but a contemporary. For several centuries, science, dominated by the Aristotelian impulse to classify, neglected the modern impulse to search for the ways in which phenomena function. Indeed, with the plants and animals yet to be explored, it is hard to see how biological science could have entered a properly dynamic period except through the continual gathering of more descriptive natural history. The great botanist Linnaeus will serve us as an example. For Linnaeus, species and genera were fixed Aristotelian forms, rather than signposts for a process of evolution; but it was only on the basis of a thoroughly Linnaean description that any cogent case could ever be made for evolution. The early natural historians were the practical frontiersmen of the intellect, too much under the compulsion to seize and occupy new territory to be very precise in treating the problem of explaining the new forms that they had observed. After the frontiersman comes the operative farmer, and after the naturalist comes the modern scientist.

In the last quarter of the last century and the first quarter of the present one, another great scholar, Pavlov, covered in his own way essentially the same ground that Locke had covered earlier. His study of the conditioned reflex, however, progressed experimentally, not theoretically as Locke's had. Moreover, he treated it as it appears among the lower animals rather than as it appears in man. The lower animals cannot speak in man's language, but only in the language of behavior. Much of their more conspicuous behavior is emotional in its motivation, and much of their emotion is concerned with food. It was with food that Pavlov began, and with the physical symptom of salivation. It is easy to insert a cannula into the salivary duct of a dog and to observe the secretion that is stimulated by the presence of food. Ordinarily, many things unconnected with food, such as objects seen and sounds heard, produce no effect on salivation; but Pavlov observed that if a certain pattern or a certain sound had been systematically presented to a dog at feeding time, then the display of the pattern or the sound alone was sufficient to excite salivation. That is, the reflex of salivation was conditioned by a past association. Here we have, on the level of the animal reflex, something analogous to Locke's association of ideas: an association which occurs in reflex responses whose emotional content is presumably very strong.

Let us notice the rather complicated nature of the antecedents which are needed to produce a conditioned reflex of the Pavlov type. To begin with, they generally center about something important to the life of the animal, in this case food, even though in the reflex's final form the food element may be entirely elided. We may, however, illustrate the importance of the initial stimulus of a Pavlovian conditioned reflex by the example of the electric fences enclosing a cattle farm. On cattle farms, the construction of wire fences strong enough to turn a determined animal is expensive; it is cheaper to string a thin wire carrying an electric charge, for once the animal has been stung by it a few times, it will shy away from the wire thereafter, whether the wire is charged or not.

There are other triggers besides hunger and pain which lead to conditioned reflexes. It would be using anthropomorphic language to call all of these emotional situations, but no such anthropomorphism is needed to describe them as situations which generally carry an emphasis and importance not belonging to many other animal experiences. Such experiences, whether we may call them emotional or not, produce strong reflexes. In the formation of conditioned reflexes in general, the reflex response is transferred to one of these trigger situations: that is, to a situation which frequently occurs concurrently with the original trigger. The change in the stimulus for which a given response takes place must have some such nervous correlate as the opening of a synaptic pathway leading to the response which would otherwise have been closed, or the closing of one which would otherwise have been open; and it thus constitutes what cybernetics calls a change in taping.
Such a change in taping is preceded by the continued association of the old, strong, natural stimulus for a particular reaction with the new concomitant one. It is as if the old stimulus had the power to change the permeability of those pathways which were carrying a message at the same time as it was active. The interesting thing is that the new, active stimulus need have almost nothing predetermined about it except the fact of its repeated concomitance with the original stimulus. Thus the original stimulus seems to produce a long-time effect in all those pathways which were carrying a message at the time of its occurrence, or at least in a large number of them. The insignificance of the substitute stimulus indicates that the modifying effect of the original stimulus is widespread, and is not confined to a few special pathways. Thus we assume that there may be some kind of general message released by the original stimulus, but that it is active only in those channels which were carrying a message at about the time of the original stimulus. The effect of this action may perhaps not be permanent, but it is at least fairly long-lived. The most logical site at which to suppose this secondary action to take place is the synapse, where it most probably affects the thresholds.

The concept of an undirected message spreading out until it finds a receiver, which is then stimulated by it, is not an unfamiliar one. Messages of this sort are used very frequently as alarms. The fire siren is a call to all the citizens of the town, and in particular to the members of the fire department, wherever they may be. In a mine, when we wish to clear out all the remote passages because of the presence of fire damp, we break a tube of ethyl mercaptan in the air intake. There is no reason to suppose that messages of this kind may not occur in the nervous system. If I were to construct a learning machine of a general type, I would be very much disposed to employ this method of the conjunction of general, spreading, "to-whom-it-may-concern" messages with localized, channeled messages.
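This conjectured mechanism, a broadcast signal that modifies only the channels active when it arrives, is close to what would now be called a gated or neuromodulated Hebbian rule. The sketch below is one possible rendering of the conjecture, not anything specified in the text; the learning rate and threshold are arbitrary.

# Conditioning by a broadcast signal: whenever the strong original
# stimulus fires (the "to-whom-it-may-concern" broadcast), every
# channel that happens to be active at that moment has its gain raised.
gains = {"bell": 0.0, "light": 0.0}  # conditioned channels start ineffective
LEARNING_RATE, THRESHOLD = 0.25, 0.9

def trial(active_channels, food_present):
    if food_present:  # the original stimulus releases the broadcast signal
        for channel in active_channels:
            gains[channel] += LEARNING_RATE  # only active channels change
    # A channel alone produces the response once its gain passes threshold.
    return any(gains[c] >= THRESHOLD for c in active_channels)

for _ in range(4):                    # bell repeatedly paired with food
    trial({"bell"}, food_present=True)

print(trial({"bell"}, food_present=False))   # True: the bell alone now works
print(trial({"light"}, food_present=False))  # False: the light was never paired

Note that the broadcast itself carries no address: the selectivity comes entirely from which channels were active when it arrived, which is the heart of the conjecture.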
It ought not to be too difficult to devise electrical methods of performing this task. This is very different, of course, from saying that learning in the animal actually occurs by such a conjunction of spreading and of channeled messages. Frankly, I think it quite possible that it does, but our evidence is as yet not enough to make this more than a conjecture. As to the nature of these "to-whom-it-may-concern" messages, supposing them to exist, I am on still more speculative ground. They might indeed be nervous, but I am rather inclined to attribute them to the non-digital, analogy side of the mechanism responsible for reflexes and thought. It is a truism to attribute synaptic action to chemical phenomena. Actually, in the action of a nerve, it is impossible to separate chemical potentials and electrical potentials, and the statement that a certain particular action is chemical is almost devoid of meaning. Nevertheless, it does no violence to current thought to suppose that at least one of the causes or concomitants of synaptic change is a chemical change which manifests itself locally, whatever its origin may be. The presence of such a change may very well be locally dependent on release signals which are transmitted nervously. It is at least equally conceivable that changes of this sort may be due in part to chemical changes transmitted generally through the blood, and not by the nerves. It is also conceivable that "to-whom-it-may-concern" messages are transmitted nervously, and make themselves locally apparent in the form of that sort of chemical action which accompanies synaptic changes. To me as an engineer, the transmission of "to-whom-it-may-concern" messages would appear to be more economically performed through the blood than through the nerves. However, I have no evidence.

Let us remember that these "to-whom-it-may-concern" influences bear a certain similarity to those changes in the anti-aircraft control apparatus which carry whole new statistics to the instrument, rather than to those which directly carry only specific numerical data. In both cases, we have an action which has probably been piling up for a long time, and which will produce effects due to continue for a long time. The rapidity with which the conditioned reflex responds to its stimulus is not necessarily an index that the conditioning of the reflex is a process of comparable speed. Thus it seems to me appropriate for a message causing such a conditioning to be carried by the slow but pervasive influence of the blood stream.

It is already a considerable narrowing of what my point of view requires to suppose that the fixing influence of hunger or pain, or whatever stimulus may determine a conditioned reflex, passes through the blood. It would be a still greater restriction if I should try to specify the nature of this unknown blood-borne influence, if any such exists. That the blood carries in it substances which may alter nervous action directly or indirectly seems to me very likely, and is suggested by the actions of at least some of the hormones or internal secretions. This, however, is not the same as saying that the influence on thresholds which determines learning is the product of specific hormones. Again, it is tempting to find the common denominator of hunger and of the pain caused by the electrified fence in something that we may call an emotion, but it is certainly going too far to attach emotion to all the conditioners of reflexes without any further discussion of their particular nature. Nevertheless, it is interesting to know that the sort of phenomenon which is recorded subjectively as emotion may not be merely a useless epiphenomenon of nervous action, but may control some essential stage in learning and in other similar processes. I definitely do not say that it does, but I do say that those psychologists who draw sharp and uncrossable distinctions between man's emotions and those of other living organisms and the responses of the modern type of automatic mechanisms should be just as careful in their denials as I should be in my assertions.

IV
THE MECHANISM AND HISTORY OF LANGUAGE

Naturally, no theory of communication can avoid the discussion of language. Language, in fact, is in one sense another name for communication itself, as well as a word used to describe the codes through which communication takes place. We shall see later in this chapter that the use of encoded and decoded messages is important, not merely for human beings, but for other living organisms, and for the machines used by human beings. Birds communicate with one another, monkeys communicate with one another, insects communicate with one another, and in all this communication some use is made of signals or symbols which can be understood only by being privy to the system of codes involved.

What distinguishes human communication from the communication of most other animals is (a) the delicacy and complexity of the code used, and (b) the high degree of arbitrariness of this code. Many animals can signal their emotions to one another, and in signaling these emotions can indicate the presence of an enemy, or of an animal of the same species but of the opposite sex, and quite a variety of detailed messages of this sort. Most of these messages are fugitive and unstored. The greater part of them would be translated in human language into expletives and exclamations, although some might be rendered crudely by words to which we should be likely to give the form of nouns and adjectives, but which would be used by the animal in question without any corresponding distinction of grammatical form. In general, one would expect the language of animals to convey emotions first, things next, and the more complicated relations of things not at all.

Besides this limitation of the language of animals as it concerns the character of what is communicated, their language is very generally fixed by the species of the animal, and unchanging over history. One lion's roar is very nearly another lion's roar. Yet there are animals such as the parrot, the myna, and the crow which seem to be able to pick up sounds from the surrounding environment, and particularly from the cries of other animals and of man, and to be able to modify or augment their vocabularies, albeit within very narrow limits. Yet even these do not seem to have anything like man's freedom to use any pronounceable sound as a code for some meaning or other, and to pass this code on to the surrounding group in such a way that the codification forms an accepted language, understood within the group and almost meaningless outside it.

Within their very great limitations, the birds that can imitate human speech have several characteristics in common: they are social, they are rather long-lived, and they have memories which are excellent by anything less than the exacting human standard. There is no doubt that a talking bird can learn to use human or animal sounds at the appropriate cues, and with what will appear, at least to the casual listener, as some element of understanding. Yet even the most vocal members of the sub-human world fail to compete with man in the ease of giving significance to new sounds, in the repertory of sounds carrying a specific codification, in the use of sounds as symbols for relations, classes, and other entities, in the extent of linguistic memory, and above all in the ability to use what Russell calls the "higher logical types." I wish to point out, nevertheless, that language is not exclusively an attribute of living beings, but one which they may share to a certain degree with the machines man has constructed.
I wish to show further that man's preoccupation with language most certainly represents a possibility which is built into him, and which is not built into his nearest relatives, the great apes. Nevertheless, I shall show that it is built in only as a possibility which must be made good by learning.

We ordinarily think of communication and language as being directed from person to person. However, it is quite possible for a person to talk to a machine, a machine to a person, and a machine to a machine. For example, in the wilder stretches of our own West and of Northern Canada, there are many possible power sites far from any settlement where the workers could live: sites too small to justify the foundation of new settlements on their own account, though not so small that the power systems are able to neglect them. It is thus desirable to operate these stations in a way that does not involve a resident staff, and in fact leaves the stations unattended for months between the rounds of a supervising engineer. To accomplish this, two things are necessary. One of these is the introduction of automatic machinery, making it impossible to switch a generator onto a busbar or connecting member until it has come into the right frequency, voltage, and phase, and providing in a similar manner against other disastrous electrical, mechanical, and hydraulic contingencies.
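The "automatic machinery" described here is what an engineer would call an interlock: a guard condition that makes the dangerous action impossible rather than merely forbidden. A minimal sketch, with the tolerances and the crude homing behavior invented for illustration:

# A sketch of a synchronizing interlock: the breaker that puts a
# generator on the busbar simply cannot close until frequency, voltage,
# and phase agree with the bus. All tolerance values are invented.
def may_connect(gen, bus, tol_hz=0.05, tol_volts=50.0, tol_phase_deg=5.0):
    return (abs(gen["hz"] - bus["hz"]) <= tol_hz
            and abs(gen["volts"] - bus["volts"]) <= tol_volts
            and abs(gen["phase_deg"] - bus["phase_deg"]) <= tol_phase_deg)

bus = {"hz": 60.00, "volts": 13800.0, "phase_deg": 0.0}
gen = {"hz": 59.70, "volts": 13750.0, "phase_deg": 12.0}

while not may_connect(gen, bus):
    # A stand-in for the governor nudging the machine toward the bus.
    gen["hz"] += 0.1 * (bus["hz"] - gen["hz"])
    gen["phase_deg"] += 0.5 * (bus["phase_deg"] - gen["phase_deg"])

print("breaker closed; generator on the busbar")

The safety property lives entirely in may_connect: no sequence of orders, however garbled, can close the breaker while the condition is false.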
This type of operation would be enough if the daily cycle of the station were unbroken and unalterable. This, however, is not the case. The load on a generating system depends on many variable factors: among these are the fluctuating industrial demand; emergencies which may remove a part of the system from operation; and even passing clouds, which may make tens of thousands of offices and homes turn on their electric lights in the middle of the day. It follows that the automatic stations, like those operated by a working crew, must be within constant reach of the load dispatcher, who must be able to give orders to his machines; and this he does by sending appropriately coded signals to the power station, either over a special line designed for the purpose, or over existing telegraph or telephone lines, or over a carrier system making use of the power lines themselves. On the other hand, before the load dispatcher can give his orders intelligently, he must be acquainted with the state of affairs at the generating station. In particular, he must know whether the orders he has given have been executed, or have been held up through some failure of the equipment. Thus the machines in the generating station must be able to send return messages to the load dispatcher. Here, then, is one instance of language emanating from man and directed toward the machine, and vice versa.

It may seem curious to the reader that we admit machines to the field of language and yet almost totally deny language to the ants. Nevertheless, in constructing machines, it is often very important for us to extend to them certain human attributes which are not found among the lower members of the animal community. If the reader wishes to conceive of this as a metaphoric extension of our human personalities, he is welcome to do so; but he should be cautioned that the new machines will not stop working as soon as we have stopped giving them human support.

The language directed toward the machine actually consists of more than a single step. From the point of view of the line engineer alone, the code transmitted along the line is complete in itself. To this message we may apply all the notions of cybernetics, or the theory of messages. We may evaluate the amount of information it carries by determining its probability in the ensemble of all possible messages, and then taking the negative logarithm of this probability, in accordance with the theory expounded in Chapter I.
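The rule quoted here is the Shannon measure: a message drawn with probability p from the ensemble of possible messages carries -log2(p) bits of information. A quick illustration, with an invented ensemble of dispatcher orders:

import math

# Information carried by a message = -log2 of its probability in the
# ensemble of possible messages. The ensemble below is invented.
orders = {"hold steady": 0.70, "raise output": 0.15,
          "lower output": 0.10, "emergency stop": 0.05}

for message, p in orders.items():
    print(f"{message:15s} p={p:.2f} -> {-math.log2(p):5.2f} bits")

# The rare emergency order carries the most information. The average,
# sum of -p*log2(p), is the entropy of the dispatcher's traffic.
entropy = sum(-p * math.log2(p) for p in orders.values())
print(f"average (entropy): {entropy:.2f} bits per order")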
However, this represents not the information actually carried by the line, but the maximum amount it might carry, if it were to lead into proper terminal equipment. The amount of information carried with actual terminal equipment depends on the ability of the latter to transmit or to employ the information received.

We are thus led to a new conception of the way in which the generating station receives the orders. Its actual performance of opening and closing switches, of pulling generators into phase, of controlling the flow of water in sluices, and of turning the turbines on or off, may be regarded as a language in itself, with a system of probabilities of behavior given by its own history. Within this frame every possible sequence of orders has its own probability, and hence carries its own amount of information.

It is, of course, possible that the relation between the line and the terminal machine is so perfect that the amount of information contained in a message, from the point of view of the carrying capacity of the line, and the amount of information of the fulfilled orders, measured from the point of view of the operation of the machine, will be identical with the amount of information transmitted over the compound system consisting of the line followed by the machine. In general, however, there will be a stage of translation between the line and the machine; and in this stage, information may be lost which can never be regained. Indeed, the process of transmitting information may involve several consecutive stages of transmission, following one another, in addition to the final or effective stage; and between any two of these there will be an act of translation, capable of dissipating information. That information may be dissipated but not gained is, as we have seen, the cybernetic form of the second law of thermodynamics.
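A later formalization makes this dissipation principle exact; the statement below is modern information theory rather than anything in the text. If the machine's action Z depends on the original message X only through the line signal Y, so that X, Y, Z form a chain of stages, the data-processing inequality gives

$$I(X;Z) \le I(X;Y)$$

where I(·;·) denotes the mutual information between two stages: each act of translation can only lose, never manufacture, information about the source.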
Up to this point in this chapter we have been discussing communication systems terminating in machines. In a certain sense, all communication systems terminate in machines, but the ordinary language systems terminate in the special sort of machine known as a human being. The human being as a terminal machine has a communication network which may be considered at three distinct levels.

For ordinary spoken language, the first human level consists of the ear, and of that part of the cerebral mechanism which is in permanent and rigid connection with the inner ear. This apparatus, when joined to the apparatus of sound vibrations in the air, or their equivalent in electric circuits, represents the machine concerned with the phonetic aspect of language, with sound itself.

The semantic, or second, aspect of language is concerned with meaning, and is apparent, for example, in difficulties of translating from one language to another, where the imperfect correspondence between the meanings of words restricts the flow of information from one into the other. One may get a remarkable semblance of a language like English by taking a sequence of words, or pairs of words, or triads of words, according to the statistical frequency with which they occur in the language, and the gibberish thus obtained will have a remarkably persuasive similarity to good English. This meaningless simulacrum of intelligent speech is practically equivalent to significant language from the phonetic point of view, although it is semantically balderdash, while the English of an intelligent foreigner whose pronunciation is marked by the country of his birth, or who speaks literary English, will be semantically good and phonetically bad. On the other hand, the average synthetic after-dinner speech is phonetically good and semantically bad.
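The triads-of-words experiment described above is easy to reproduce. The sketch below is illustrative only: the corpus file name and all identifiers are invented, and any sizable sample of English will do. It tabulates which words follow each pair of words in the sample, then emits words at the observed frequencies.

import random
from collections import defaultdict

def train(words):
    """Tabulate, for every pair of adjacent words, the words seen to follow it."""
    followers = defaultdict(list)
    for i in range(len(words) - 2):
        followers[(words[i], words[i + 1])].append(words[i + 2])
    return followers

def babble(followers, n=50):
    """Emit word triads at the frequencies observed in the source text."""
    pair = random.choice(list(followers))
    out = list(pair)
    while len(out) < n:
        options = followers.get(pair)
        if not options:                        # dead end: restart anywhere
            pair = random.choice(list(followers))
            options = followers[pair]
        nxt = random.choice(options)
        out.append(nxt)
        pair = (pair[1], nxt)
    return " ".join(out)

# Usage, assuming corpus.txt holds a sizable sample of English:
# words = open("corpus.txt").read().split()
# print(babble(train(words)))

Even this crude procedure yields output that is locally plausible and globally meaningless: phonetically passable, semantically balderdash, which is exactly the split the passage describes.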
In human communication apparatus, it is possible but difficult to determine the characteristics of its phonetic mechanism, and therefore also possible but difficult to determine what is phonetically significant information, and to measure it. It is clear, for example, that the ear and the brain have an effective frequency cutoff preventing the reception of some high frequencies which can penetrate the ear and can be transmitted by the telephone. In other words, these high frequencies, whatever information they may give an appropriate receptor, do not carry any significant amount of information for the ear. But it is even more difficult to determine and measure semantically significant information.

Semantic reception demands memory, and its consequent long delays. The types of abstractions belonging to the important semantic stage are not merely those associated with built-in permanent subassemblies of neurons in the brain, such as those which must play a large role in the perception of geometrical form, but with abstraction-detector apparatus consisting of parts of the internuncial pool - that is, of sets of neurons which are available for larger assemblies, but are not permanently locked into them - which have been temporarily assembled for the purpose. Besides the highly organized and permanent assemblies in the brain that undoubtedly exist, and are found in those parts of the brain associated with the organs of special sense, as well as in other places, there are particular switchings and connections which seem to have been formed temporarily for special purposes, such as learned reflexes and the like. In order to form such particular switchings, it must be possible to assemble sequences of neurons available for the purpose that are not already in use. This question of assembling concerns, of course, the synaptic thresholds of the sequence of neurons assembled. Since neurons exist which can be either within or outside of such temporary assemblies, it is desirable to have a special name for them. As I have already indicated, I consider that they correspond rather closely to what the neurophysiologists call internuncial pools. This is at least a reasonable theory of their behavior.
The semantic receiving apparatus neither receives nor translates the language word by word, but idea by idea, and often still more generally. In a certain sense, it is in a position to call on the whole of past experience in its transformations, and these long-time carry-overs are not a trivial part of its work.

There is a third level of communication, which represents a translation partly from the semantic level and partly from the earlier phonetic level. This is the translation of the experiences of the individual, whether conscious or unconscious, into actions which may be observed externally. We may call this the behavior level of language. In the lower animals, it is the only level of language which we may observe beyond the phonetic input. Actually this is true in the case of every human being other than the particular person to whom any given passage is addressed in each particular case, in the sense that that person can have access to the internal thoughts of another person only through the actions of the latter. These actions consist of two parts: namely, direct gross actions, of the sort which we also observe in the lower animals; and the coded and symbolic system of actions which we know of as spoken or written language.

It is theoretically not impossible to develop the statistics of the semantic and behavior languages to such a level that we may get a fair measure of the amount of information that they contain. Indeed, we can show by general observations that phonetic language reaches the receiving apparatus with less overall information than was originally sent, or at any rate with not more than the transmission system leading to the ear can convey; and that both semantic and behavior language contain less information still. This fact again is a corollary of the second law of thermodynamics, and is necessarily true if at each stage we regard the information transmitted as the maximum information that could be transmitted with an appropriately coded receiving system.
Let me now call the attention of the reader to something which he may not consider a problem at all: namely, the reason that chimpanzees do not talk. The behavior of chimpanzees has for a long time been a puzzle to those psychologists who have concerned themselves with these interesting beasts. The young chimpanzee is extraordinarily like a child, and clearly his equal or perhaps even his superior in intellectual matters. The animal psychologists have not been able to keep from wondering why a chimpanzee brought up in a human family and subject to the impact of human speech until the age of one or two does not accept language as a mode of expression, and itself burst into baby talk. Fortunately or unfortunately, as the case may be, most chimpanzees, in fact all that have as yet been observed, persist in being good chimpanzees, and do not become quasi-human morons. Nevertheless, I think that the average animal psychologist is rather longingly hoping for that chimpanzee who will disgrace his simian ancestry by adhering to more human modes of conduct.

The failure so far is not a matter of sheer bulk of intelligence, for there are defective human animals whose brains would shame a chimpanzee. It just does not belong to the nature of the beast to speak, or to want to speak. Speech is such a peculiarly human activity that it is not even approached by man's closest relatives and his most active imitators. The few sounds emitted by chimpanzees have, it is true, a great deal of emotional content, but they have not the fineness of clear and repeated accuracy of organization needed to make them into a code much more accurate than the yowlings of a cat. Moreover (and this differentiates them still more from human speech), at times they belong to the chimpanzee as an unlearned, inborn manifestation, rather
than as the learned behavior of a member of a given social community.

The fact that speech belongs in general to man as man, but that a particular form of speech belongs to man as a member of a particular social community, is most remarkable. In the first place, taking the whole wide range of man as we know him today, it is safe to say that there is no community of individuals, not mutilated by an auditory or a mental defect, which does not have its own mode of speech. In the second place, all modes of speech are learned, and notwithstanding the attempts of the nineteenth century to formulate a genetic evolutionistic theory of languages, there is not the slightest general reason to postulate any single native form of speech from which all the present forms have originated. It is quite clear that if left alone, babies will make attempts at speech. These attempts, however, show their own inclinations to utter something, and do not follow any existing form of language. It is almost equally clear that if a community of children were left out of contact with the language of their seniors through the critical speech-forming years, they would emerge with something which, crude as it might be, would be unmistakably a language.

Why is it, then, that chimpanzees cannot be forced to talk, and that human children cannot be forced not to? Why is it that the general tendencies to speak and the general visual and psychological aspects of language are so uniform over large groups of people, while the particular linguistic manifestation of these aspects is so varied? At least a partial understanding of these matters is essential to any comprehension of the language-based community. We merely state the fundamental facts by saying that in man, unlike the apes, the impulse to use some sort of language is overwhelming, but that the particular language used is a matter which has to be learned in each special case. It apparently is built into the brain itself that we are to have a preoccupation
with codes and with the sounds of speech, and that the preoccupation with codes can be extended from those dealing with speech to those that concern themselves with visual stimuli. However, there is not one fragment of these codes which is born into us as a pre-established ritual, like the courting dances of many of the birds, or the system by which ants recognize and exclude intruders into the nest. The gift of speech does not go back to a universal Adamite language disrupted in the Tower of Babel. It is strictly a psychological impulse; it is not the gift of speech, but the gift of the power of speech.

In other words, the block preventing young chimpanzees from learning to talk is a block which concerns the semantic and not the phonetic stage of language. The chimpanzee has simply no built-in mechanism which leads it to translate the sounds that it hears into the basis around which to unite its own ideas, or into a complex mode of behavior. Of the first of these statements we cannot be sure, because we have no direct way of observing it. The second is simply a noticeable empirical fact. It may have its limitations, but that there is such a built-in mechanism in man is perfectly clear. In this book, we have already emphasized man's extraordinary ability to learn as a distinguishing characteristic of the species, which makes social life a phenomenon of an entirely different nature from the apparently analogous social life among the bees and ants and other social insects.

The evidence concerning children who have been deprived of contact with their own race over the years normally critical in the ordinary acquisition of language is perhaps not completely unambiguous. The "Wolf Child" stories, which have led to Kipling's imaginative Jungle Books, with their public-school bears and Sandhurst wolves, are almost as little to be relied on in their original stark squalidity as in the Jungle Books' idealizations. However, what
evidence there is goes to show that there is a critical period during which speech is most readily learned, and that if this period is passed over without contact with one's fellow human beings, of whatever sort they may be, the learning of language becomes limited, slow, and highly imperfect.

This is probably true of most other abilities which we consider natural skills. If a child does not walk until it is three or four years old, it may have lost all the desire to walk. Ordinary locomotion may become a harder task than driving a car is for the normal adult. If a person has been blind from childhood, and the blindness has been resolved by a cataract operation or the implantation of a transparent corneal section, the vision that ensues will, for a time, certainly bring nothing but confusion to those activities which have normally been carried out in darkness. This vision may never be more than a carefully learned new attainment of doubtful value. Now, we may fairly take it that the whole of human social life in its normal manifestations centers about speech, and that if speech is not learned at the proper time, the whole social aspect of the individual will be aborted.

To sum up: the human interest in language seems to be an innate interest in coding and decoding, and this seems to be as nearly specifically human as any interest can be. Speech is the greatest interest and most distinctive achievement of man.

I was brought up as the son of a philologist, and questions concerning the nature and technique of language have interested me from my childhood. It is impossible for as thorough a revolution in the theory of language as is offered by modern communication theory to take effect without affecting past linguistic ideas. As my father was a very heretical philologist whose influence tended to lead philology in much the same direction as the modern influences of communication theory, I wish to continue this chapter with a few
amateurish reflections on the history of language and the history of our theory of language.

Man has held the notion that language is a mystery since very early times. The riddle of the Sphinx is a primitive conception of wisdom. Indeed, the very word riddle is derived from the root "to rede," or to puzzle out. Among many primitive peoples, writing and sorcery are not far apart. The respect for writing goes so far in some parts of China that people are loath to throw away scraps of old newspapers and useless fragments of books. Close to all these manifestations is the phenomenon of "name magic," in which members of certain cultures go from birth to death under names that are not properly their own, in order that they may not give a sorcerer the advantage of knowing their true names. Most familiar to us of these cases is that of the name of Jehovah of the Jews, in which the vowels are taken over from that other name of God, "Adonai," so that the Name of Power may not be blasphemed by being pronounced in profane mouths.

From the magic of names it is but a step to a deeper and more scientific interest in language. As an interest in textual criticism, in the authenticity of oral traditions and of written texts, it goes back to the ancients of all civilizations. A holy text must be kept pure. When there are divergent readings, they must be resolved by some critical commentator. Accordingly, the Bible of the Christians and the Jews, the sacred books of the Persians and the Hindus, the Buddhist scriptures, and the writings of Confucius all have their early commentators. What was learned for the maintenance of true religion has been carried on as a literary discipline, and textual criticism is one of the oldest of intellectual studies.

For a large part of the last century, philological history was reduced to a series of dogmas which at times show a surprising ignorance of the nature of language.
The model of the Darwinian evolutionism of the times was taken too seriously and too uncritically. As this whole subject depends in the most intimate manner on our views of the nature of communication, I shall comment on it at some length.

The early speculation that Hebrew was the language of man in Paradise, and that the confusion of languages originated at the building of the Tower of Babel, need not interest us here as anything more than a primitive precursor of scientific thought. However, the later developments of philological thought retained for a long time a similar naivete. That languages are related, and that they undergo progressive changes leading in the end to totally different languages, were observations which could not long remain unnoticed by the keen philological minds of the Renaissance. A book such as Du Cange's Glossarium Mediae atque Infimae Latinitatis could not exist without its being clear that the roots of the Romance languages are not only in Latin, but in vulgar Latin. There must have been many learned rabbis who were well aware of the resemblance of Hebrew, Arabic, and Syriac. When, under the advice of the much-maligned Warren Hastings, the East India Company founded its School of Oriental Studies at Fort William, it was no longer possible to ignore that Greek and Latin on the one hand, and Sanskrit on the other, were cut from the same cloth. At the beginning of the last century, the work of the brothers Grimm and of the Dane Rask showed not only that the Teutonic languages came within the orbit of this so-called Indo-European group, but went further, to make clear the linguistic relations of these languages to one another and to a supposed distant common parent. Thus evolutionism in language antedates the refined Darwinian evolutionism in biology.

Valid as this evolutionism is, it very soon began to outdo biological evolutionism in places where the latter was not applicable.
It assumed, that is, that the languages were independent, quasi-biological entities, with their developments modified entirely by internal forces and needs. In fact, they are epiphenomena of human intercourse, subject to all the social forces due to changes in the pattern of that intercourse. In the face of the existence of Mischsprachen, of languages such as Lingua Franca, Swahili, Yiddish, Chinook Jargon, and even to a considerable extent English, there has been an attempt to trace each language to a single legitimate ancestor, and to treat the other participants in its origin as nothing more than godparents of the newborn child. There has been a scholars' distinction between legitimate phonetic formations following accepted laws, and such regrettable accidents as nonce words, popular etymologies, and slang. On the grammatical side, the original attempt to force all languages of any origin whatsoever into the strait jacket manufactured for Latin and Greek has been succeeded by an attempt almost as rigorous to form for each of them its own paradigms of construction. It is scarcely until the recent work of Otto Jespersen that any considerable group of philologists has had objectivity enough to make of their science a representation of language as it is actually spoken and written, rather than a copybook attempt to teach the Eskimos how to speak Eskimo, and the Chinese how to write Chinese.

The effects of misplaced grammatical purism are to be seen well outside of the schools. First among these, perhaps, is the way in which the Latin language, like the earlier generation of classical gods, has been slain by its own children. During the Middle Ages, Latin of a varying quality, the best of it quite acceptable to anyone but a pedant, remained the universal language of the clergy and of all learned men throughout Western Europe, even as Arabic has remained in the Moslem world down to the
present day. This continued prestige of Latin was made possible by the willingness of writers and speakers of the language either to borrow from other languages, or to construct within the frame of Latin itself, all that was necessary for the discussion of the live philosophical problems of the age. The Latin of Saint Thomas is not the Latin of Cicero, but Cicero would have been unable to discuss Thomistic ideas in Ciceronian Latin.

It may be thought that the rise of the vulgar languages of Europe must necessarily have marked the end of the function of Latin. This is not so. In India, notwithstanding the growth of the neo-Sanskritic languages, Sanskrit has shown a remarkable vitality lasting down to the present day. The Moslem world, as I have said, is united by a tradition of classical Arabic, even though the majority of Moslems are not Arabic speakers and the spoken Arabic of the present day has divided itself into a number of very different dialects. It is quite possible for a language which is no longer the language of vulgar communication to remain the language of scholarship for generations and even for centuries. Hebrew has survived for two thousand years the lack of daily use that began around the time of Christ, and indeed has come back as a modern language of daily life. In what I am discussing now, I am referring only to the limited use of Latin as a language of learned men.

With the coming of the Renaissance, the artistic standards of the Latinists became higher, and there was more and more a tendency to throw out all postclassical neologisms. In the hands of the great Italian scholars of the Renaissance, this reformed Latin could be, and often was, a work of art; but the training necessary to wield such a delicate and refined tool was beyond that which would be incidental to the training of the scientist, whose main work must always concern itself with content rather than with perfection of form.
The result was that the people who taught Latin and the people who used Latin became ever more widely separated classes, until the teachers completely eschewed the problem of teaching their disciples anything but the most polished and unusable Ciceronian speech. In this vacuum they ultimately eliminated any function for themselves other than that of specialists; and as the specialty of Latinism thus came to be less and less in general demand, they abolished their own function. For this sin of pride, we now have to pay in the absence of an adequate international language, far superior to the artificial ones such as Esperanto, and well suited for the demands of the present day.

Alas, the attitudes of the classicists are often beyond the understanding of the intelligent layman! I recently had the privilege of hearing a commencement address from a classicist who bewailed the increased centrifugal force of modern learning, which drives the natural scientist, the social scientist, and the literary man ever farther from one another. He put it into the form of an imaginary trip which he took through a modern university, as the guide and mentor to a reincarnated Aristotle. His talk began by presenting in the pillory bits of technical jargon from each modern intellectual field, which he supposed himself to have presented to Aristotle as horrible examples. May I remark that all we possess of Aristotle is what amounts to the school notebooks of his disciples, written in one of the most crabbed technical jargons in the history of the world, and totally unintelligible to any contemporary Greek who had not been through the discipline of the Lyceum? That this jargon has been sanctified by history, so that it has become itself an object of classical education, is not relevant; for this happened after Aristotle, not contemporaneously with him. The important thing is that the Greek language of the time of Aristotle was ready to compromise with the technical jargon of a brilliant scholar, while even the English of
his learned and reverend successors is not willing to compromise with the similar needs of modern speech.

With these admonitory words, let us return to a modern point of view, which assimilates the operation of linguistic translation, and the related operations of the interpretation of language by ear and by brain, to the performance and the coupling of non-human communication networks. It will be seen that this is really in accordance with the modern and once heretical views of Jespersen and his school. Grammar is no longer primarily normative. It has become factual. The question is not what code should we use, but what code do we use. It is quite true that in the finer study of language, normative questions do indeed come into play, and are very delicate. Nevertheless, they represent the last fine flower of the communication problem, and not its most fundamental stages.

We have thus established the basis in man for the simplest element in communication. Speech, however, is not intrinsically restricted to the immediate presence of the individual, for we have many means to carry this tool of communication to the ends of the earth.

Among primitive groups, the size of the community for an effective communal life is restricted by the difficulty of transmitting language. For many millennia, this difficulty was enough to reduce the optimum size of the state to something of the order of a few million people, and generally fewer. It will be noted that the great empires which transcended this limited size were held together by improved means of communication. The heart of the Persian Empire was the Royal Road, and the relay of messengers who conveyed the Royal Word along it. The great empire of Rome was possible
only because of Rome's progress in roadbuilding. These roads served to carry not only the legions, but the written authority of the Emperor as well. With the airplane and the radio, the word of the rulers extends to the ends of the earth, and very many of the factors which previously precluded a World State have been abrogated. It is even possible to maintain that modern communication, which forces us to adjudicate the international claims of different broadcasting systems and different airplane nets, has made the World State inevitable.

But however efficient communications mechanisms become, they are still, as they have always been, subject to the overwhelming tendency for entropy to increase, for information to leak in transit, unless certain external agents are introduced to control it. I have already referred to an interesting view of language made by a cybernetically minded philologist: that speech is a joint game by the talker and the listener against the forces of confusion. On the basis of this description, Dr. Benoit Mandelbrot has made certain computations concerning the distribution of the lengths of words in an optimal language, and has compared these results with what he has found in existing languages. Mandelbrot's results indicate that a language optimal according to certain postulates will very definitely exhibit a certain distribution of length among its words. This distribution is very different from what will be found in an artificial language, such as Esperanto or Volapük. On the other hand, it is remarkably close to what is found in most actual languages that have withstood the attrition of use for centuries. The results of Mandelbrot do not, it is true, give an absolutely fixed distribution of word lengths; in his formulas there still occur certain quantities which must be assigned, or, as the mathematician calls them, parameters. However, by a proper choice of these parameters, Mandelbrot's theoretical results fit very closely the word distribution in many actual languages, indicating that there is a certain natural selection among them, and that the form of a language which survives by the very fact of its use and survival has been driven to take something not too remotely resembling an optimum form of distribution.
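The computations referred to here were later published as what is now known as the Zipf-Mandelbrot law; the formula below is that published form, stated in modern notation rather than in anything the text provides. The frequency of the r-th most common word in a large corpus is well fitted by

$$f(r) = \frac{C}{(r + V)^{B}}$$

where B and V are exactly the sort of assignable parameters mentioned above, and C is fixed by normalization. Under an economical coding of the vocabulary, the length of the r-th word grows roughly like log r, which is how a rank-frequency law of this form becomes a statement about the distribution of word lengths.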
The attrition of language may be due to several causes. Language may strive simply against nature's tendency to confuse it, or against willful human attempts to subvert its meaning.1 Normal communicative discourse, whose major opponent is the entropic tendency of nature itself, is not confronted by an active enemy, conscious of its own purposes. Forensic discourse, on the other hand, such as we find in the law court, in legislative debates, and so on, encounters a much more formidable opposition, whose conscious aim is to qualify and even to destroy its meaning. Thus an adequate theory of language as a game should distinguish between these two varieties of language, one of which is intended primarily to convey information, and the other primarily to impose a point of view against a willful opposition. I do not know if any philologist has yet made the technical observations and theoretical propositions which are necessary to distinguish these two classes of language for our purposes, but I am quite sure that they are substantially different forms. I shall talk further about forensic language in a later chapter, which deals with language and law.

1 Relevant here also is Einstein's aphorism; see Chapter II, p. 35 above.

The desire to apply Cybernetics to semantics, as a discipline to control the loss of meaning from language, has already resulted in certain problems. It seems necessary to make some sort of distinction between information taken brutally and bluntly, and that sort of information on which we as human beings can act effectively or, mutatis mutandis, on which the machine can act effectively. In my opinion, the central distinction and difficulty here arises from the fact that it is not the quantity of information sent that is important for action, but rather the quantity of information which can penetrate into a communication and storage apparatus sufficiently to serve as the trigger for action.
I have said that any transmission of, or tampering with, messages decreases the amount of information they contain, unless new information is fed in, either from new sensations or from memories which have been previously excluded from the information system. This statement, as we have seen, is another version of the second law of thermodynamics.

Now let us consider an information system used to control the sort of electric power substation of which we spoke earlier in the chapter. What is important is not merely the information that we put into the line, but what is left of it when it goes through the final machinery to open or close sluices, to synchronize generators, and to do similar tasks. In one sense, this terminal apparatus may be regarded as a filter superimposed on the transmission line. Semantically significant information, from the cybernetic point of view, is that which gets through the line-plus-filter, rather than that which gets through the line alone. In other words, when I hear a passage of music, the greater part of the sound gets to my sense organs and reaches my brain. However, if I lack the perception and training necessary for the aesthetic understanding of musical structure, this information will meet a block, whereas if I were a trained musician it would meet an interpreting structure or organization which would exhibit the pattern in a significant form which can lead to aesthetic appreciation and further understanding. Semantically significant information, in the machine as well as in man, is information which gets through to an activating mechanism in the system that receives it, despite man's and/or nature's attempts to subvert it. From the point of view of Cybernetics, semantics defines the extent of meaning and controls its loss in a communications system.
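The line-plus-filter idea can be made concrete with a toy computation. Everything below is invented for illustration: four equally likely orders pass first through a "line" and then through a terminal "filter", each stage may merge distinct orders, and merged orders can never afterwards be told apart (the equal probabilities make the logarithm of a count the right measure).

import math

line = {'A': 'a', 'B': 'b', 'C': 'b', 'D': 'd'}   # the line confuses B with C
filt = {'a': 'x', 'b': 'x', 'd': 'y'}             # the filter confuses a with b

orders = ['A', 'B', 'C', 'D']
after_line = {line[o] for o in orders}
after_both = {filt[line[o]] for o in orders}

print(math.log2(len(orders)))       # 2.0 bits put into the line
print(math.log2(len(after_line)))   # about 1.58 bits survive the line
print(math.log2(len(after_both)))   # 1.0 bit survives the line-plus-filter

Only the one bit that survives both stages is semantically significant in the sense just defined.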
V ORGANIZATION AS THE MESSAGE

The metaphor to which I devote this chapter is one in which the organism is seen as message. Organism is opposed to chaos, to disintegration, to death, as message is to noise. To describe an organism, we do not try to specify each molecule in it, and catalogue it bit by bit, but rather to answer certain questions about it which reveal its pattern: a pattern which is more significant and less probable as the organism becomes, so to speak, more fully an organism.

We have already seen that certain organisms, such as man, tend for a time to maintain and often even to increase the level of their organization, as a local enclave in the general stream of increasing entropy, of increasing chaos and de-differentiation. Life is an island here and now in a dying world. The process by which we living beings resist the general stream of corruption and decay is known as homeostasis. We can continue to live in the very special environment which we carry forward with us only until we begin to decay more quickly than we can reconstitute
ourselves. Then we die. If our bodily temperature rises or sinks one degree from its normal level of 98.6° Fahrenheit, we begin to take notice of it, and if it rises or sinks ten degrees, we are all but sure to die. The oxygen and carbon dioxide and salt in our blood, the hormones flowing from our ductless glands, are all regulated by mechanisms which tend to resist any untoward changes in their levels. These mechanisms constitute what is known as homeostasis, and are negative feedback mechanisms of a type that we may find exemplified in mechanical automata. It is the pattern maintained by this homeostasis which is the touchstone of our personal identity.

A pattern is a message, and may be transmitted as a message. How else do we employ our radio than to transmit patterns of sound, and our television set than to transmit patterns of light?
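The negative feedback spoken of here is easy to sketch (a toy model: the set point of 98.6° is the normal level named in the text, while the gain and the starting temperature are invented). Each step measures the deviation from the set point and applies a correction opposing it.

set_point = 98.6          # the normal level, in degrees Fahrenheit
gain = 0.5                # strength of the corrective response (assumed)
temp = 103.0              # an arbitrary feverish starting value

for step in range(10):
    error = temp - set_point   # measure the deviation
    temp -= gain * error       # the correction opposes the deviation
    print(f"step {step}: {temp:.2f} F")

The deviation halves at each step, and the temperature settles back toward its set point; reverse the sign of the correction and it runs away instead, which is the mechanical analogue of a failure of homeostasis.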
The biological individuality of an organism seems to lie in a certain continuity of process, and in the memory by the organism of the effects of its past development.
When one cell divides into two, or when one of the genes which carries our corporeal and mental birthright […] Encyclopaedia Britannica, we find that they constitute an even more enormous message; and this is still more impressive when we realize what the conditions for telegraphic transmission of such a message must be. In other words, the fact that we cannot telegraph the […]
VI LAW AND COMMUNICATION

Law may be defined as the ethical control applied to communication, and to language as a form of communication, especially when this normative aspect is under the control of some authority sufficiently strong to give its decisions an effective social sanction. It is the process of adjusting the "couplings" connecting the behavior of different individuals in such a way that what we call justice may be accomplished, and disputes may be avoided, or at least adjudicated. Thus the theory and practice of the law involves two sets of problems: those of its general purpose, of its conception of justice; and those of the technique by which these concepts of justice can be made effective.

Empirically, the concepts of justice which men have maintained throughout history are as varied as the religions of the world, or the cultures recognized by anthropologists. I doubt if it is possible to justify them by any higher sanction than our moral code itself, which is indeed only another name for our conception of justice. As a participant in a liberal outlook which has its main roots in the Western tradition, but which has extended itself to those Eastern countries which have a strong intellectual-moral tradition, and has indeed borrowed deeply from them, I can only state what I myself and those about me consider necessary for the existence of justice. The best words to express these requirements are those of the French Revolution: Liberté, Égalité, Fraternité. These mean: the liberty of each human being to develop in his freedom the full
measure of the human possibilities embodied in him; the equality by which what is just for A and B remains just when the positions of A and B are interchanged; and a good will between man and man that knows no limits short of those of humanity itself. These great principles of justice mean and demand that no person, by virtue of the personal strength of his position, shall enforce a sharp bargain by duress. What compulsion the very existence of the community and the state may demand must be exercised in such a way as to produce no unnecessary infringement of freedom.

But not even the greatest human decency and liberalism will, in itself, assure a fair and administrable legal code. Besides the general principles of justice, the law must be so clear and reproducible that the individual citizen can assess his rights and duties in advance, even where they appear to conflict with those of others. He must be able to ascertain with a reasonable certainty what view a judge or a jury will take of his position. If he cannot do this, the legal code, no matter how well intended, will not enable him to lead a life free from litigation and confusion.

Let us look at the matter from the simplest point of view: that of the contract. Here A takes on a responsibility of performing a certain service which in general will be advantageous to B; whereas B assumes in return the responsibility of performing a service or making a payment advantageous to A. If it is unambiguously clear what each task and each payment is to be, and if one of the parties does not invoke methods of imposing his will on the other party which are foreign to the contract itself, then the determination of whether the bargain is equitable may safely be left to the judgment of the two contracting parties. If it is manifestly inequitable, at least one of the contracting parties may be supposed to be in the position of being able to reject the bargain altogether. What, however, they cannot be expected to settle with any justice
among themselves is the meaning of the bargain, if the terms employed have no established significance, or if the significance varies from court to court. Thus it is the first duty of the law to see that the obligations and rights given to an individual in a certain stated situation be unambiguous. Moreover, there should be a body of legal interpretation which is, as far as possible, independent of the will and the interpretation of the particular authorities consulted. Reproducibility is prior to equity, for without it there can be no equity.

It appears from this why precedent has a very important theoretical weight in most legal systems, and why in all legal systems it has an important practical weight. There are those legal systems which purport to be based on certain abstract principles of justice. The Roman law and its descendants, which indeed constitute the greater part of the law of the European continent, belong to this class. There are other systems, like that of the English law, in which it is openly stated that precedent is the main basis of legal thought. In either case, no new legal term has a completely secure meaning until it and its limitations have been determined in practice; and this is a matter of precedent. To fly in the face of a decision which has been made in an already existing case is to attack the uniqueness of the interpretation of legal language, and is ipso facto a cause of indeterminateness, and very probably of a consequent injustice. Every case decided should advance the definition of the legal terms involved in a manner consistent with past decisions, and it should lead naturally on to new ones. Every piece of phraseology should be tested by the custom of the place and of the field of human activity to which it is relevant. The judges, those to whom is confided the task of the interpretation of the law, should perform their function in such a spirit that if Judge A is replaced by Judge B, the exchange cannot be expected to make a material change in the court's interpretation of customs and of
statutes. This naturally must remain to some extent an ideal rather than a fait accompli; but unless we are close followers of these ideals, we shall have chaos, and what is worse, a no-man's land in which dishonest men prey on the differences in possible interpretation of the statutes.

All of this is very obvious in the matter of contracts, but in fact it extends quite far into other branches of the law, and particularly of the civil law. Let me give an example. A, because of the carelessness of an employee B, damages a piece of property belonging to C. Who is to take the loss, and in what proportion? If these matters are known equally in advance to everybody, then it is possible for the person normally taking the greatest risk to charge a greater price for his undertakings, and thus to insure himself. By these means he may cancel some considerable part of his disadvantage. The general effect of this is to spread the loss over the community, so that no man's share of it shall be ruinous.

Thus the law of torts tends to partake somewhat of the same nature as the law of contracts. Any legal responsibility which involves exorbitant possibilities of loss will generally make the person incurring the loss pass his risk on to the community at large, in the form of an increased price for his goods, or increased fees for his services. Here, as well as in the case of contracts, unambiguity, precedent, and a good clear tradition of interpretation are worth more than a theoretical equity, particularly in the assessment of responsibilities.

There are, of course, exceptions to these statements. For example, the old law of imprisonment for debt was inequitable in that it put the individual responsible for paying the debt in exactly that position in which he was least capable of acquiring the means to pay. There are many laws at present which are inequitable because, for example, they assume a freedom of choice on the part of one party which under existing social circumstances is not there. What has
been said about imprisonment for debt is equally valid in the case of peonage, and of many other similarly abused social customs.

If we are to carry out a philosophy of liberty, equality, and fraternity, then in addition to the demand that legal responsibility should be unambiguous, we must add the demand that it should not be of such a nature that one party acts under duress, leaving the other free. The history of our dealings with the Indians is full of instances in point, both for the dangers of duress and the dangers of ambiguity. From the very early times of the colonies, the Indians had neither the bulk of population nor the equality of arms to meet the whites on a fair basis, especially when the so-called land treaties between the whites and the Indians were being negotiated. Besides this gross injustice, there was a semantic injustice, which was perhaps even greater. The Indians as a hunting people had no idea of land as private property. For them there was no such ownership as ownership in fee simple, though they did have the notion of hunting rights over specific territories. In their treaties with the settlers, what they wished to convey were hunting rights, and generally only concomitant hunting rights, over certain regions. On the other hand, the whites believed, if we are to give their conduct the most favorable interpretation that can be assigned to it, that the Indians were conveying to them a title to ownership in fee simple. Under these circumstances, not even a semblance of justice was possible, nor did it exist.

Where the law of Western countries is at present least satisfactory is on the criminal side. Law seems to consider punishment now as a threat to discourage other possible criminals, now as a ritual act of expiation on the part of the guilty man, now as a device for removing him from society and for protecting the latter from the danger of repeated misconduct, and now as an agency for the social and the moral reform of the
individual. These are four different tasks, to be accomplished by four different methods; and unless we know an accurate way of proportioning them, our whole attitude to the criminal will be at cross-purposes. At present, the criminal law speaks now in one language, and now in another. Until we in the community have made up our minds that what we really want is expiation, or removal, or reform, or the discouragement of potential criminals, we shall get none of these, but only a confusion in which crime breeds more crime. Any code which is made one-fourth on the eighteenth-century British prejudice in favor of hanging, one-fourth on the removal of the criminal from society, one-fourth on a halfhearted policy of reform, and one-fourth on the policy of hanging up a dead crow to scare away the rest, is going to get us precisely nowhere.

Let us put it this way: the first duty of the law, whatever the second and third ones are, is to know what it wants. The first duty of the legislator or the judge is to make clear, unambiguous statements, which not only experts, but the common man of the times, will interpret in one way and in one way only. The technique of the interpretation of past judgments must be such that a lawyer should know, not only what a court has said, but even with high probability what the court is going to say. Thus the problems of law may be considered communicative and cybernetic - that is, they are problems of orderly and repeatable control of certain critical situations.

There are vast fields of law where there is no satisfactory semantic agreement between what the law intends to say and the actual situation that it contemplates. Whenever such a theoretical agreement fails to exist, we shall have the same sort of no-man's land that faces us when we have two currency systems without an accepted basis of exchange. In the zone of unconformity between one court and another, or one coinage and another, there is always a refuge for the dishonest
middleman, who will accept payment, neither financially nor morally, except in the system most favorable to him, and will give it only in the system in which he sacrifices least. The greatest opportunity of the criminal in the modern community lies in this position as a dishonest broker in the interstices of the law.

I have pointed out in an earlier chapter that noise, regarded as a confusing factor in human communications, is damaging but not consciously malicious. This is true as far as scientific communication goes, and to a large extent in ordinary conversation between two people. It is most emphatically not true in language as it is used in the law courts. The whole nature of our legal system is that of a conflict. It is a conversation in which at least three parties take part - let us say, in a civil case, the plaintiff, the defendant, and the legal system as represented by judge and jury. It is a game in the full Von Neumann sense: a game in which the litigants try, by methods which are limited by the code of law, to obtain the judge and the jury as their partners. In such a game the opposing lawyer, unlike nature itself, can and deliberately does try to introduce confusion into the messages of the side he is opposing. He tries to reduce their statements to nonsense, and he deliberately jams the messages between his antagonist and the judge and jury. In this jamming, it is inevitable that bluff should occasionally be at a premium. Here we do not need to take the Erle Stanley Gardner detective stories at their face value as descriptions of legal procedure to see that there are occasions in litigation where bluff, or the sending of messages with a deliberate purpose of concealing the strategy of the sender, is not only permitted but encouraged.
VII COMMUNICATION, SECRECY, AND SOCIAL POLICY

In the world of affairs, the last few years have been characterized by two opposite, even contradictory, trends. On the one hand, we have a network of communication, intranational and international, more complete than history has ever before seen. On the other hand, under the impetus of Senator McCarthy and his imitators, the blind and excessive classification of military information, and the recent attacks on the State Department, we are approaching a secretive frame of mind paralleled in history only in the Venice of the Renaissance. There the extraordinarily precise news-gathering services of the Venetian ambassadors (which form one of our chief sources of European history) accompanied a national jealousy of secrets, exaggerated to such an extent that the state ordered the private assassination of emigrant artisans, to maintain the monopoly of certain chosen arts and crafts. The modern game of cops and robbers which seems to characterize both Russia and the United States, the two principal contestants for world power of this century, suggests the old Italian cloak-and-dagger melodrama played on a much larger stage.

The Italy of the Renaissance was also the scene of the birth pangs of modern science. However, the science of the present day is a much larger undertaking than that of Renaissance Italy. It should be possible to examine all the elements of information and secrecy in
the modern world with a somewhat greater maturity and objectivity than belong to the thought of the times of Machiavelli. This is particularly so in view of the fact that, as we have seen, the study of communication has now reached a degree of independence and authority making it a science in its own right. What does modern science have to say concerning the status and functions of communication and secrecy?

I am writing this book primarily for Americans, in whose environment questions of information will be evaluated according to a standard American criterion: a thing is valuable as a commodity for what it will bring in the open market. This is the official doctrine of an orthodoxy which it is becoming more and more perilous for a resident of the United States to question. It is perhaps worth while to point out that it does not represent a universal basis of human values: that it corresponds neither to the doctrine of the Church, which seeks for the salvation of the human soul, nor to that of Marxism, which values a society for its realization of certain specific ideals of human well-being. The fate of information in the typically American world is to become something which can be bought or sold.

It is not my business to cavil whether this mercantile attitude is moral or immoral, crass or subtle. It is my business to show that it leads to the misunderstanding and the mistreatment of information and its associated concepts. I shall take this up in several fields, beginning with that of patent law.

The letters patent granting to an inventor a limited monopoly over the subject matter of his invention are for him what a charter is to a chartered company. Behind our patent law and our patent policy is an implicit philosophy of private property and of the rights thereto. This philosophy represented a fairly close approximation to the actual situation in the now ending period when inventions were generally made
in the shop by skilled handicraftsmen. It does not represent even a passable picture of the inventions of the present day.

The standard philosophy of the patent office presupposes that by a system of trial and error, implying what is generally called mechanical ingenuity, a craftsman has advanced from a given technique to a further stage, embodied in a specific apparatus. The law distinguishes the ingenuity which is necessary to make this new combination from the other sort of ingenuity which is necessary to find out scientific facts about the world. This second sort of ingenuity is labeled the discovery of a law of nature; and in the United States, as well as in many other countries with similar industrial practices, the legal code denies to the discoverer any property rights in a law of nature which he may have discovered.

It will be seen that at one time this distinction was fairly practical, for the shop inventor has one sort of tradition and background, and the man of science has a totally different one. The Daniel Doyce of Dickens' Little Dorrit is clearly not to be mistaken for the members of the Mudfog Association, which Dickens treats elsewhere. The first, Dickens glorifies as the common-sense craftsman, with the broad thumb of the hand worker, and the honesty of the man who is always facing facts; whereas the Mudfog Association is nothing but a derogatory alias for the British Association for the Advancement of Science in its early days. Dickens reviles the latter as an assemblage of chimerical and useless dreamers, in language which Swift would not have found inadequate to describe the projectors of Laputa.

Now a modern research laboratory such as that of the Bell Telephone Company, while it retains Doyce's practicality, actually consists of the great-grandchildren of the Mudfog Association. If we take Faraday as an outstanding yet typical member of the early British Association for the Advancement of Science, the chain
to the research men of the Bell Telephone Laboratories of the present day is complete, by way of Maxwell and Heaviside, to Campbell and Shannon.

In the early days of modern invention, science was far ahead of the workman. The locksmith set the level of mechanical competence. A piston was considered to fit an engine cylinder when, according to Watt, a thin sixpence could just be slipped between the two. Steel was a craftsman's product, for swords and armor; iron was the stringy, slag-filled product of the puddler. Daniel Doyce had a long way indeed to go before so practical a scientist as Faraday could begin to supplant him. It is not strange that the policy of Great Britain, even when expressed through such a purblind organ as Dickens' Circumlocution Office, was directed toward Doyce as the true inventor, rather than to the gentlemen of the Mudfog Society. The Barnacle family of hereditary bureaucrats might wear Doyce to a shadow before they ceased to refer him from office to office, but they secretly feared him, as the representative of the new industrialism which was displacing them. They neither feared, respected, nor understood the gentlemen of the Mudfog Association.

In the United States, Edison represents the precise transition between the Doyces and the men of the Mudfog Association. He was himself very much of a Doyce, and was even more desirous of appearing to be one. Nevertheless, he chose much of his staff from the Mudfog camp. His greatest invention was that of the industrial research laboratory, turning out inventions as a business. The General Electric Company, the Westinghouse interests, and the Bell Telephone Laboratories followed in his footsteps, employing scientists by hundreds where Edison employed them by tens. Invention came to mean, not the gadget-insight of a shop worker, but the result of a careful, comprehensive search by a team of competent scientists.

At present, the invention is losing its identity as a
commodity in the face of the general intellectual structure of emergent inventions.
What makes a thing a good commodity? Essentially, that it can pass from hand to hand with the substantial retention of its value, and that the pieces of this commodity should combine additively in the same way as the money paid for them. The power to conserve itself is a very convenient property for a good commodity to have. For example, a given amount of electrical energy, except for minute losses, is the same at both ends of a transmission line, and the problem of putting a fair price on electric energy in kilowatt-hours is not too difficult. A similar situation applies to the law of the conservation of matter. Our ordinary standards of value are quantities of gold, which is a particularly stable sort of matter.
Information, on the other hand, cannot be conserved as easily, for, as we have already seen, the amount of information communicated is related to the non-additive quantity known as entropy, and differs from it by its algebraic sign and a possible numerical factor. Just as entropy tends to increase spontaneously in a closed system, so information tends to decrease; just as entropy is a measure of disorder, so information is a measure of order. Information and entropy are not conserved, and are equally unsuited to being commodities.
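The relation just stated, that information differs from entropy "by its algebraic sign and a possible numerical factor," is given no formula in the text; in the later Shannon-style notation it may be glossed, as an editorial aid and not as anything in the original, by

$$H = -\sum_i p_i \log p_i, \qquad I = -k\,H \quad (k > 0),$$

where the $p_i$ are the probabilities of the possible messages, $H$ is the entropy, and the positive constant $k$ absorbs the choice of logarithm base and units, the "possible numerical factor" of the text.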
In considering information or order from the economic point of view, let us take as an example a piece of gold jewelry. The value is composed of two parts: the value of the gold, and that of the façon, or workmanship. When an old piece of jewelry is taken to the pawnbroker or the appraiser, the firm value of the piece is that of the gold only. Whether a further allowance is made for the façon or not depends on many factors, such as the persistence of the seller, the style in favor when it was made, the purely artistic craftsmanship, the historical value of the piece for museum purposes, and the resistance of the buyer.
Many a fortune has been lost by ignoring the difference between these two types of values, that of the gold and that of the façon. The stamp market, the rare-book market, the market for Sandwich glass and for Duncan Phyfe furniture are all artificial, in the sense that, in addition to the real pleasure which the possession of such an object gives to its owner, much of the value of the façon pertains not only to the rarity of the object itself, but to the momentary existence of an active group of buyers competing for it. A depression, which limits the group of possible buyers, may divide it by a factor of four or five, and a great treasure vanishes into nothing just for want of a competitive purchaser. Let another new popular craze supplant the old in the attention of the prospective collectors, and again the bottom may drop out of the market. There is no permanent common denominator of collectors' taste, at least until one approaches the highest level of aesthetic value. Even then the prices paid for great paintings are colossal reflections of the desire of the purchaser for the reputation of wealth and connoisseurdom.
The problem of the work of art as a commodity raises a large number of questions important in the theory of information. In the first place, except in the case of the narrowest sort of collector who keeps all his possessions under permanent lock and key, the physical possession of a work of art is neither sufficient nor necessary for the benefits of appreciation which it conveys. Indeed, there are certain sorts of works of art which are essentially public rather than private in their appeal, and concerning which the problem of possession is almost irrelevant. A great fresco is scarcely a negotiable document, nor for that matter is the building on whose walls it is placed. Whoever is technically the possessor of such works of art must share them at least with the limited public that frequents the buildings, and very often with the world at large. He cannot place them
in a fireproof cabinet and gloat over them at a small dinner for a few connoisseurs, nor shut them up altogether as private possessions. There are very few frescoes which are given the adventitious privacy of the one by Siqueiros which adorns a large wall of the Mexican jail where he served a sentence for a political offense.
So much for the mere physical possession of a work of art. The problems of property in art lie much deeper. Let us consider the matter of the reproduction of artistic works. It is without a doubt true that the finest flower of artistic appreciation is only possible with originals, but it is equally true that a broad and cultivated taste may be built up by a man who has never seen an original of a great work, and that by far the greater part of the aesthetic appeal of an artistic creation is transmitted in competent reproductions. The case of music is similar. While the hearer gains something very important in the appreciation of a musical composition if he is physically present at the performance, nevertheless his preparation for an understanding of this performance will be so greatly enhanced by hearing good records of the composition that it is hard to say which of the two is the larger experience.
From the standpoint of property, reproduction rights are covered by our copyright law. There are other rights which no copyright law can cover, which almost equally raise the question of the capacity of any man to own an artistic creation in an effective sense. Here the problem of the nature of genuine originality arises. For example, during the period of the high Renaissance, the discovery by the artists of geometric perspective was new, and an artist was able to give great pleasure by the skillful exploitation of this element in the world about him. Dürer, Da Vinci, and their contemporaries exemplify the interest which the leading artistic minds of the time found in this new device. As the art of perspective is one which, once
mastered, rapidly loses its interest, the same thing that was great in the hands of its originators is now at the disposal of every sentimental commercial artist who designs trade calendars. What has been said before may not be worth saying again; and the informative value of a painting or a piece of literature cannot be judged without knowing what it contains that is not easily available to the public in contemporary or earlier works. It is only independent information which is even approximately additive. The derivative information of the second-rate copyist is far from independent of what has gone before.
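In later information-theoretic notation (an editorial gloss, not Wiener's own formula), the additivity claim reads

$$H(X, Y) = H(X) + H(Y) - I(X; Y) \le H(X) + H(Y),$$

with equality exactly when the two sources $X$ and $Y$ are statistically independent; the mutual information $I(X;Y)$, what the copyist shares with his predecessors, is precisely the non-additive part.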
Thus the conventional love story, the conventional detective story, the average acceptable success tale of the slicks, all are subject to the letter but not the spirit of the law of copyright. There is no form of copyright law that prevents a movie success from being followed by a stream of inferior pictures exploiting the second and third layers of the public's interest in the same emotional situation. Neither is there a way of copyrighting a new mathematical idea, or a new theory such as that of natural selection, or anything except the identical reproduction of the same idea in the same words. I repeat, […] we can re-establish with him an informative rapport, and give him a new and fresh literary value.
It is interesting from this point of view that there are authors and painters who, by their wide exploration of the aesthetic and intellectual avenues open to a given age, have an almost destructive influence on their contemporaries and successors for many years. A painter like Picasso, who runs through many periods and phases, ends up by saying all those things which are on the tip of the tongue of the age to say, and finally sterilizes the originality of his contemporaries and juniors.
The intrinsic limitations of the commodity nature of communication are hardly considered by the public at large. The man in the street considers that Maecenas had as his function the purchase and storage of works of art, rather than the encouragement of their creation by the artists of his own time. In a quite analogous way, he believes that it is possible to store up the military and scientific know-how of the nation in static libraries and laboratories, just as it is possible to store up the military weapons of the last war in the arsenals. Indeed, he goes further, and considers that information which has been developed in the laboratories of his own country is morally the property of that country; and that the use of this information by other nationalities not only may be the result of treason, but intrinsically partakes of the nature of theft. He cannot conceive of a piece of information without an owner.
The idea that information can be stored in a changing world without an overwhelming depreciation in its value is false. It is scarcely less false than the more plausible claim that after a war we may take our existing weapons, fill their barrels with cylinder oil, coat their outsides with sprayed rubber film, and let them statically await the next emergency. Now, in view of the changes in the technique of war, rifles store fairly well, tanks poorly, and battleships and submarines not
at all. The fact is that the efficacy of a weapon depends on precisely what other weapons there are to meet it at a given time, and on the whole idea of war at that time. This results, as has been proved more than once, in the existence of excessive stockpiles of stored weapons which are likely to stereotype military policy in a wrong form, so that there is a very appreciable advantage to approaching a new emergency with the freedom of choosing exactly the right tools to meet it.
On another level, that of economics, this is conspicuously true, as the British example shows. England was the first country to go through a full-scale industrial revolution; and from this early age it inherited the narrow gauge of its railways, the heavy investment of its cotton mills in obsolete equipment, and the limitations of its social system, which have made the cumulative needs of the present day into an overwhelming emergency, only to be met by what amounts to a social and industrial revolution. All this is taking place while the newest countries to industrialize are able to enjoy the latest, most economical equipment; are able to construct an adequate system of railroads to carry their goods on economically sized cars; and in general, are able to live in the present day rather than in that of a century ago.
What is true of England is true of New England, which has discovered that it is often a far more expensive matter to modernize an industry than to scrap it and to start somewhere else. Quite apart from the difficulties of having a relatively strict industrial law and an advanced labor policy, one of the chief reasons that New England is being deserted by the textile mills is that, frankly, they prefer not to be hampered by a century of traditions. Thus, even in the most material field, production and security are in the long run matters of continued invention and development. Information is more a matter of process than of storage.
That country will have the greatest security whose
informational and scientific situation is adequate to meet the demands that may be put on it: the country in which it is fully realized that information is important as a stage in the continuous process by which we observe the outer world and act effectively upon it. There is no Maginot Line of the brain. I repeat, in anything like a normal situation, it is both far more difficult and far more important for us to ensure that we have such an adequate knowledge than to ensure that some possible enemy does not have it. The whole arrangement of a military research laboratory is along lines hostile to our own optimum use and development of information.
During the last war, an integral equation of a type which I have been to some extent responsible for solving arose, not only in my own work, but in at least two totally unrelated projects. In one of these I was aware that it was bound to arise; and in the other a very slight amount of consultation should have made me so aware. As these three employments of the same idea belonged to three totally different military projects, of totally different levels of secrecy and in diverse places, there was no way by which the information of any one of them could penetrate through to the others. The result was that it took the equivalent of three independent discoveries to make the results accessible in all three fields. The delay thus created was a matter of
from some six months to a year, and probably considerably more. From the standpoint of money, which of course is less important in war, it amounted to a large number of man-years at a very expensive level. It would take a considerably valuable employment of this work by an enemy to be as disadvantageous as the need for reproducing all the work on our part. Remember that an enemy unable to participate in that residual discussion which takes place quite illegally, even under our setup of secrecy, would not have been in the position to evaluate and use our results.
The matter of time is essential in all estimates of the value of information. A code or cipher, for example, which will cover any considerable amount of material at a high-secrecy level is not only a lock which is hard to force, but also one which takes a considerable time to open legitimately. Tactical information which is useful in the combat of small units will almost certainly be obsolete in an hour or two. It is a matter of very little importance whether it can be broken in three hours; but it is of great importance that an officer receiving the message should be able to read it in something like two minutes. On the other hand, the larger plan of battle is too important a matter to entrust to this limited degree of security. Nevertheless, if it took a whole day for an officer receiving this plan to disentangle it, the delay might well be more serious than any leak. The codes and ciphers for a whole campaign or for a diplomatic policy might and should be still less easy to penetrate; but there are none which cannot be penetrated in any finite period, and which at the same time can carry a significant amount of information rather than a small set of disconnected individual decisions.
The ordinary way of breaking a cipher is to find an example of the use of this cipher sufficiently long so that the pattern of encodement becomes obvious to the skilled investigator. In general, there must be at least a minimum degree of repetition of patterns, without which the very short passages lacking repetition cannot be deciphered.
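The text names no specific procedure, but the classical Kasiski examination is a concrete instance of the principle: repeated plaintext enciphered at the same key offset produces repeated ciphertext, and the distances between such repetitions betray the key length. A minimal illustrative sketch in Python follows; the function name and the toy ciphertext are invented for the example, not taken from the book.

from collections import defaultdict
from functools import reduce
from math import gcd

def repeated_pattern_distances(ciphertext, n=3):
    """Distances between repeated n-grams in a ciphertext.

    In a Vigenere-type cipher, repeated plaintext enciphered at the
    same key offset yields repeated ciphertext, so these distances
    tend to be multiples of the key length (Kasiski examination).
    """
    positions = defaultdict(list)
    for i in range(len(ciphertext) - n + 1):
        positions[ciphertext[i:i + n]].append(i)
    distances = []
    for pos in positions.values():
        # Only n-grams seen more than once contribute distances.
        distances += [b - a for a, b in zip(pos, pos[1:])]
    return distances

# Toy example: the trigram 'XYZ' recurs at intervals of 6,
# suggesting a key length that divides 6.
dists = repeated_pattern_distances("ABXYZCDEXYZFGHXYZ")
print(dists, reduce(gcd, dists))  # [6, 6] 6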
However, when a number of passages are enciphered in a type of cipher which is common to the whole set, even though the detailed encipherment varies, there may be enough in common between the different passages to lead to a breaking, first of the general type of cipher, and then of the particular ciphers used.
Probably much of the greatest ingenuity which has been shown in the breaking of ciphers appears not in the annals of the various secret services, but in the work of the epigrapher. We all know how the Rosetta Stone was decoded through an interpretation of certain characters in the Egyptian version, which turned out to be the names of the Ptolemies. There is however one act of decoding which is greater still. This greatest single example of the art of decoding is the decoding of the secrets of nature itself, and is the province of the scientist. Scientific discovery consists in the interpretation for our own convenience of a system of existence which has been made with no eye to our convenience at all. The result is that the last thing in the world suitable for the protection of secrecy and an elaborate code system is a law of nature. Besides the possibility of breaking the secrecy by a direct attack on the human or documentary vehicles of this secrecy, there is always the possibility of attacking the code upstream of all these. It is perhaps impossible to devise any secondary code as hard to break as the natural code of the atomic nucleus.
In the problem of decoding, the most important information which we can possess is the knowledge that the message which we are reading is not gibberish. A common method of disconcerting codebreakers is to mix in with the legitimate message a message that cannot be decoded; a non-significant message, a mere
assemblage of characters. In a similar way, when we consider a problem of nature such as that of atomic reactions and atomic explosives, the largest single item of information which we can make public is that they exist. Once a scientist attacks a problem which he knows to have an answer, his entire attitude is changed. He is already some fifty per cent of his way toward that answer.
There is at present a touching belief in this country that we are the sole possessors of a certain technique called "know-how," which secures for us not only priority on all engineering and scientific developments and all major inventions, but, as we have said, the moral right to that priority. Certainly, this "know-how" has nothing to do with the national origins of those who have worked on such problems as that of the atomic bomb. It would have been impossible throughout most of history to secure the combined services of such scientists as the Dane, Bohr; the Italian, Fermi; the Hungarian, Szilard; and many others involved in the project. What made it possible was the extreme consciousness of emergency and the sense of universal affront excited by the Nazi threat. Something more than inflated propaganda will be necessary to hold such a group together over the long period of rearmament to which we have often seemed to be committed by the policy of the State Department.
Without any doubt, we possess the world's most highly developed technique of combining the efforts of large numbers of scientists and large quantities of money toward the realization of a single project. This should not lead us to any undue complacency concerning our scientific position, for it is equally clear that we are bringing up a generation of young men who cannot think of any scientific project except in terms of large numbers of men and large quantities of money. The skill by which the French and English do great amounts of work with apparatus which an American high-school teacher would scorn as a casual stick-and-string job is not to be found among any but a vanishingly small minority of our young men.
The present vogue of the big laboratory is a new thing in science. There are those of us who wish to think that it may never last to be an old thing, for when the scientific ideas of this generation are exhausted, or at least reveal vastly diminishing returns on their intellectual investment, I do not foresee that the next generation will be able to furnish the colossal ideas on which colossal projects naturally rest.
A clear understanding of the notion of information as applied to scientific work will show that the simple coexistence of two items of information is of relatively small value, unless these two items can be effectively combined in some mind or organ which is able to fertilize one by means of the other. This is the very opposite of the organization in which each member travels a preassigned path, and in which the sentinels of science, when they come to the ends of their beats, present arms, do an about-face, and march back in the direction from which they have come. There is a great fertilizing and revivifying value in the contact of two scientists with each other; but this can only come when at least one of the human beings representing the science has penetrated far enough across the frontier to be able to absorb the ideas of his neighbor into an
effective plan of thinking. The natural vehicle for this type of organization is a plan in which the orbit of each scientist is assigned rather by the scope of his interests than as a predetermined beat. Such loose human organizations do exist even in the United States; but at present they represent the result of the efforts of a few disinterested men, and not the planned frame into which we are being forced by those who imagine they know what is good for us.
However, it will not do for the masses of our scientific population to blame their appointed and self-appointed betters for their futility, and for the dangers of the present day. It is the great public which is demanding the utmost of secrecy for modern science in all things which may touch its military uses. This demand for secrecy is scarcely more than the wish of a sick civilization not to learn of the progress of its own disease. So long as we can continue to pretend that all is right with the world, we plug up our ears against the sound of "Ancestral voices prophesying war."
In this new attitude of the masses at large to research, there is a revolution in science far beyond what the public realizes. Indeed, the lords of the present science themselves do not foresee the full consequences of what is going on. In the past the direction of research had largely been left to the interest of the individual scholar and to the trend of the times. At present, there is a distinct attempt so to direct research in matters of public security that, as far as possible, all significant avenues will be developed with the objective of securing an impenetrable stockade of scientific protection. Now, science is impersonal, and the result of a further pushing forward of the frontiers of science is not merely to show us many weapons which we may employ against possible enemies, but also many dangers of these weapons. These may be due to the fact that they either are precisely those weapons which are more effectively employable against us than against
any enemy of ours, or are dangers, such as that of radioactive poisoning, which are inherent in our very use of such a weapon as the atomic bomb. The hurrying up of the pace of science, owing to our active simultaneous search for all means of attacking our enemies and of protecting ourselves, leads to ever-increasing demands for new research. For example, the concentrated effort of Oak Ridge and Los Alamos in time of war has made the question of the protection of the people of the United States, not only from possible enemies employing an atomic bomb, but from the atomic radiation of our new industry, a thing which concerns us now. Had the war not occurred, these perils would probably not have concerned us for twenty years. In our present militaristic frame of mind, this has forced on us the problem of possible countermeasures to a new employment of these agencies on the part of an enemy. This enemy may be Russia at the present moment, but it is even more the reflection of ourselves in a mirage. To defend ourselves against this phantom, we must look to new scientific measures, each more terrible than the last. There is no end to this vast apocalyptic spiral.
We have already depicted litigation as a true game, in which the antagonists can and are forced to use the full resources of bluff, and thus each to develop a policy which may have to allow for the other player's playing the best possible game. What is true in the limited war of the court is also true in the war to the death of international relations, whether it takes the bloody form of shooting or the suaver form of diplomacy. The whole technique of secrecy, message jamming, and bluff is concerned with insuring that one's own side can make use of the forces and agencies of communication more effectively than the other side. In this combative use of information it is quite as important to keep one's own message channels open as to obstruct the other side in the use of the channels available to it.
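The quantitative side of this contest over channels, which the text leaves implicit, is caught by Shannon's later capacity formula (again an editorial gloss, not something stated in the original):

$$C = W \log_2\!\left(1 + \frac{S}{N}\right),$$

bits per second for a channel of bandwidth $W$ carrying signal power $S$ against noise power $N$. A jammer need not close a channel outright; it suffices to raise $N$ until the usable rate $C$ falls below what the other side requires.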
An over-all policy in matters of secrecy almost always must involve the consideration of many more things than secrecy itself. Furthermore, as I have already said, no secret will ever be as safe when its protection is a matter of human integrity as when it was dependent on the difficulties of scientific discovery itself. I have already said that the dissemination of any scientific secret whatever is merely a matter of time, that in this game a decade is a long time, and that in the long run there is no distinction between arming ourselves and arming our enemies. Thus each terrifying discovery merely increases our subjection to the need of making a new discovery. Barring a new awareness on the part of our leaders, this is bound to go on and on, until the entire intellectual potential of the land is drained from any possible constructive application to the manifold needs of the race, old and new. The effect of these weapons must be to increase the entropy of this planet, until all distinctions of hot and cold, good and bad, man and matter have vanished in the formation of the white furnace of a new star.
Like so many Gadarene swine, we have taken unto us the devils of the age, and the compulsion of scientific warfare is driving us pell-mell, head over heels, into the ocean of our own destruction. Or perhaps we may say that among the gentlemen who have made it their business to be our mentors, and who administer the new program of science, many are nothing more than apprentice sorcerers, fascinated with the incantation which starts a devilment that they are totally unable to stop. Even the new psychology of advertising and salesmanship becomes in their hands a way for
obliterating the conscientious scruples of the working scientists, and for destroying such inhibitions as they may have against rowing into this maelstrom.
Let these wise men who have summoned a demoniac sanction for their own private purposes remember that in the natural course of events a conscience which has been bought once will be bought twice. The loyalty to humanity which can be subverted by a skillful distribution of administrative sugar plums will be followed by a loyalty to official superiors lasting just so long as we have the bigger sugar plums to distribute. The day may well come when it constitutes the biggest potential threat to our own security. In that moment in which some other power, be it fascist or communist, is in the position to offer the greater rewards, our good friends who have rushed to our defense per account rendered will rush as quickly to our subjection and annihilation. May those who have summoned from the deep the spirits of atomic warfare remember that, for their own sake if not for ours, they must not wait beyond the first glimmerings of success on the part of our opponents to put to death those whom they have already corrupted!
VIII
ROLE OF THE INTELLECTUAL AND THE SCIENTIST

This book argues that the integrity of the channels of internal communication is essential to the welfare of society. This internal communication is subject at the present time not only to the threats which it has faced at all times, but to certain new and especially serious problems which belong peculiarly to our age. One among these is the growing complexity and cost of communication.
A hundred and fifty years ago, or even fifty years ago (it does not matter which), the world, and America in particular, were full of small journals and presses through which almost any man could obtain a hearing. The country editor was not, as he is now, limited to boiler plate and local gossip, but could and often did express his individual opinion, not only of local affairs but of world matters. At present this license to express oneself has become so expensive, with the increasing cost of presses, paper, and syndicated services, that the newspaper business has come to be the art of saying less and less to more and more.
The movies may be quite inexpensive as far as concerns the cost of showing each show to each spectator, but they are so horribly expensive in the mass that few shows are worth the risk unless their success is certain in advance. It is not the question whether a show may excite a great interest in a considerable number of people that interests the entrepreneur, but rather the question of whether it will be unacceptable to so few that
he can count on selling it indiscriminately to movie theaters from coast to coast.
What I have said about the newspapers and the movies […] modern communication, but it is paralleled by another threat which gnaws from within. This is the cancer of creative narrowness and feebleness. In the old days, the young man who wished to enter the creative arts might either have plunged in directly or prepared himself by a general schooling, perhaps irrelevant to the specific tasks he finally undertook, but which was at least a searching discipline of his abilities and taste. Now the channels of apprenticeship are largely silted up. Our elementary and secondary schools are more interested in formal classroom discipline than in the intellectual discipline of learning something thoroughly, and a great deal of the serious preparation for a scientific or a literary course is relegated to some sort of graduate school or other.
Hollywood meanwhile has found that the very standardization of its product has interfered with the natural flow of acting talent from the legitimate stage. The repertory theaters had almost ceased to exist when some of them were reopened as Hollywood talent farms, and even these are dying on the vine. To a considerable extent our young would-be actors have learned their trade, not on the stage, but in university courses on acting. Our writers cannot get very far as young men in competition with syndicate material, and if they do not make a success the first try, they
have no place to go but college courses which are supposed to teach them how to write.
Thus the higher degrees, and above all the Ph.D., which have had a long existence as the legitimate preparation of the scientific specialist, are more and more serving as a model for intellectual training in all fields. Properly speaking, the artist, the writer, and the scientist should be moved by such an irresistible impulse to create that, even if they were not being paid for their work, they would be willing to pay to get the chance to do it. However, we are in a period in which forms have largely superseded educational content, and one which is moving toward an ever-increasing thinness of educational content. It is now considered perhaps more a matter of social prestige to obtain a higher degree and follow what may be regarded as a cultural career than a matter of any deep impulse.
In view of this great bulk of semi-mature apprentices who are being put on the market, the problem of giving them some colorable material to work on has assumed an overwhelming importance. Theoretically they should find their own material, but the big business of modern advanced education cannot be operated under this relatively low pressure. Thus […] Some of my friends have even asserted that a Ph.D. thesis should be the greatest scientific work a man has ever done and perhaps ever will do, and should wait until he is thoroughly able to state his life work. I do not go along with this. I mean merely that if the thesis is not in fact such an overwhelming task, it should at least be in intention the gateway to vigorous creative work. Lord only knows that there are enough problems […]
In other words, when there is communication without need for communication, merely so that someone may earn the social and intellectual prestige of becoming a priest of communication, the quality and communicative value of the message drop like a plummet. It is as if a machine should be made from the Rube Goldberg point of view, to show just what recondite ends may be served by an apparatus apparently quite unsuitable for them, rather than to do something.
Not all the artistic pedants are academicians. There are pedantic avant-gardistes. No school has a monopoly on beauty. Beauty, like order, occurs in many places in this world, but only as a local and temporary fight against the Niagara of increasing entropy.
Moreover, I protest, not only as I have already done against the cutting off of intellectual originality by the difficulties of the means of communication in the modern world, but even more against the ax which has been put to the root of originality, because the people who have elected communication as a career so often have nothing more to communicate.
IX
THE FIRST AND THE SECOND INDUSTRIAL REVOLUTION

The preceding chapters of this book dealt primarily with the study of man as a communicative organism. However, as we have already seen, the machine may also be a communicative organism. In this chapter, I shall discuss that field in which the communicative characters of man and of the machine impinge upon one another, and I shall try to ascertain what the direction of the development of the machine will be, and what we may expect of its impact on human society.
Once before in history the machine had impinged upon human culture with an effect of the greatest moment. This previous impact is known as the Industrial Revolution, and it concerned the machine purely as an alternative to human muscle. In order to study the present crisis, which we shall term the Second Industrial Revolution, it is perhaps wise to discuss the history of the earlier crisis as something of a model.
The first industrial revolution had its roots in the intellectual ferment of the eighteenth century, which found the scientific techniques of Newton and Huygens already well developed, but with applications which had as yet scarcely transcended astronomy. It had, however, become manifest to all intelligent scientists that the new techniques were going to have a profound effect on the other sciences.
The first fields to show the impact of the Newtonian era were those of navigation and of clockmaking. Navigation is an art which dates to ancient times, but
it had one conspicuous weakness until the seventeen-thirties. The problem of determining latitude had always been easy, even in the days of the Greeks. It was simply a matter of determining the angular height of the celestial pole. This may be done roughly by taking the pole star as the actual pole of the heavens, or it may be done very precisely by further refinements which locate the center of the apparent circular path of the pole star. On the other hand, the problem of longitude is always more difficult. Short of a geodetic survey, it can be solved only by a comparison of local time with some standard time, such as that of Greenwich. In order to do this, we must either carry the Greenwich time with us on a chronometer, or we must find some heavenly clock other than the sun to take the place of a chronometer. Before either of these two methods had become available for the practical navigator, he was very considerably hampered in his techniques of navigation.
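The arithmetic behind this comparison of clocks, which the text takes for granted, is simple: the Earth turns through $360^\circ$ in twenty-four hours, so each hour by which local time differs from Greenwich time corresponds to $15^\circ$ of longitude,

$$\text{longitude} = 15^\circ/\mathrm{hour} \times (t_\mathrm{Greenwich} - t_\mathrm{local}).$$

If, for instance, the chronometer reads three in the afternoon, Greenwich time, at local noon, the ship lies at $3 \times 15^\circ = 45^\circ$ west of Greenwich (the worked numbers are an invented illustration, not from the text).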
He was accustomed to sail along the coast until he reached the latitude he wanted. Then he would strike out on an east or west course, along a parallel of latitude, until he made a landfall. Except by an approximate dead reckoning, he could not tell how far he was along the course, yet it was a matter of great importance to him that he should not come unawares onto a dangerous coast. Having made his landfall, he sailed along the coast until he came to his destination. It will be seen that under these circumstances every voyage was very much of an adventure. Nevertheless, this was the pattern of voyages for many centuries. It can be recognized in the course taken by Columbus, in that of the Silver Fleet, and in that of the Acapulco galleons.
This slow and risky procedure was not satisfactory to the admiralties of the eighteenth century. In the first place, the overseas interests of England and France, unlike those of Spain, lay in high latitudes, where the advantage of a direct great-circle course over an east-and-west course is most conspicuous. In the second place, there was a great competition between the two northern powers for the supremacy of the seas, and the advantage of a better navigation was a serious one. It is not a surprise that both governments offered large rewards for an accurate technique of finding longitudes.
The history of these prize contests is complicated and not too edifying. More than one able man was deprived of his rightful triumph, and went bankrupt. In the end, these prizes were awarded in both countries for two very different achievements. One was the design of an accurate ship's chronometer: that is, of a clock sufficiently well constructed and compensated to be able to keep the time within a few seconds over a voyage in which it was subject to the continual violent motion of the ship. The other was the construction of good mathematical tables of the motion of the moon, which enabled the navigator to use that body as the clock with which to check the apparent motion of the sun. These two methods have dominated all navigation until the recent development of radio and radar techniques.
Accordingly, the advance guard of the craftsmen of the industrial revolution consisted on the one hand of clockmakers, who used the new mathematics of Newton in the design of their pendulums and their balance wheels; and on the other hand, of optical-instrument makers, with their sextants and their telescopes. The two trades had very much in common. They both demanded the construction of accurate circles and accurate straight lines, and the graduation of these in degrees or in inches. Their tools were the lathe and the dividing engine. These machine tools for delicate work are the ancestors of our present machine-tool industry.
It is an interesting reflection that every tool has a genealogy, and that it is descended from the tools by
which it has itself been constructed. The clockmakers' lathes of the eighteenth century have led through a clear historical chain of intermediate tools to the great turret lathes of the present day. The series of intervening steps might conceivably have been foreshortened somewhat, but it has necessarily had a certain minimum length. It is clearly impossible in constructing a great turret lathe to depend on the unaided human hand for the pouring of the metal, for the placing of the castings on the instruments to machine them, and above all for the power needed in the task of machining them. These must be done through machines that have themselves been manufactured by other machines, and it is only through many stages of this that one reaches back to the original hand- or foot-lathes of the eighteenth century.
It is thus entirely natural that those who were to develop new inventions were either clockmakers or scientific-instrument makers themselves, or called on people of these crafts to help them. For instance, Watt was a scientific-instrument maker. To show how even a man like Watt had to bide his time before he could extend the precision of clockmaking techniques to larger undertakings, we must remember, as I have said earlier, that his standard of the fit of a piston in a cylinder was that it should be barely possible to insert and move a thin sixpence between them.
We must thus consider navigation and the instruments necessary for it as the locus of an industrial revolution before the main industrial revolution. The main industrial revolution begins with the steam engine. The first form of the steam engine was the crude and wasteful Newcomen engine, which was used for pumping mines. In the middle of the eighteenth century there were abortive attempts to use it for generating power, by making it pump water into elevated reservoirs, and employing the fall of this water to turn water wheels. Such clumsy devices became obsolete with the
introduction of the perfected engines of Watt, which were employed quite early in their history for factory purposes as well as for mine pumping. The end of the eighteenth century saw the steam engine thoroughly established in industry, and the promise of the steamboat on the rivers and of steam traction on land was not far away.
The first place where steam power came into practical use was in replacing one of the most brutal forms of human or animal labor: the pumping of water out of mines. At best, this had been done by draft animals, by crude machines turned by horses. At worst, as in the silver mines of New Spain, it was done by the labor of human slaves. It is a work that is never finished and which can never be interrupted without the possibility of closing down the mine forever. The use of the steam engine to replace this servitude must certainly be regarded as a great humanitarian step forward.
However, slaves do not only pump mines: they also drag loaded riverboats upstream. A second great triumph of the steam engine was the invention of the steamboat, and in particular of the river steamboat. The steam engine at sea was for many years but a supplement of questionable value to the sails carried by every seagoing steamboat; but it was steam transportation on the Mississippi which opened up the interior of the United States. Like the steamboat, the steam locomotive started where it is now dying, as a means of hauling heavy freight.
The next place where the industrial revolution made itself felt, perhaps a little later than in the field of the heavy labor of mine workers, and simultaneously with the revolution in transportation, was in the textile industry. This was already a sick industry. Even before the power spindle and the power looms, the condition of the spinners and the weavers left much to be desired. The bulk of production which they could perform fell far short of the demands of the day. It might
thus appear to have been scarcely possible to conceive that the transition to the machine could have worsened their condition; but worsen it, it most certainly did.
The beginnings of textile-machine development go back of the steam engine. The stocking frame has existed in a form worked by hand ever since the time of Queen Elizabeth. Machine spinning first became necessary in order to furnish warps for hand looms. The complete mechanization of the textile industry, covering weaving as well as spinning, did not occur until the beginning of the nineteenth century. The first textile machines were for hand operation, although the use of horsepower and water power followed very quickly. Part of the impetus behind the development of the Watt engine, as contrasted with the Newcomen engine, was the desire to furnish power in the rotary form needed for textile purposes.
The textile mills furnished the model for almost the whole course of the mechanization of industry. On the social side, they began the transfer of the workers from the home to the factory and from the country to the city. There was an exploitation of the labor of children and women to an extent, and of a brutality, scarcely conceivable at the present time; that is, if we forget the South African diamond mines and ignore the new industrialization of China and India and the general terms of plantation labor in almost every country. A great deal of this was due to the fact that new techniques had produced new responsibilities at a time at which no code had yet arisen to take care of these responsibilities.
There was, however, a phase which was of greater technical than moral significance. By this, I mean that a great many of the disastrous consequences and phases of the earlier part of the industrial revolution were not so much due to any moral obtuseness or iniquity on the part of those concerned, as to certain technical features which were inherent in the early means of industrialization, and which the later
history of technical development has thrust more or less into the background.
These technical determinants of the direction which the early industrial revolution took lay in the very nature of early steam power and its transmission. The steam engine used fuel very uneconomically by modern standards, although this is not as important as it might seem, considering the fact that the early engines had none of the more modern type with which to compete. However, among themselves, they were much more economical to run on a large scale than on a small one. In contrast with the prime mover, the textile machine, whether it be loom or spindle, is a comparatively light machine, and uses little power. It was therefore economically necessary to assemble these machines in large factories, where many looms and spindles could be run from one steam engine.
At that time the only available means of transmission of power were mechanical. The first among these was the line of shafting, supplemented by the belt and the pulley. Even as late as the time of my own childhood, the typical picture of a factory was that of a great shed with long lines of shafts suspended from the rafters, and pulleys connected by belts to the individual machines. This sort of factory still exists, although in very many cases it has given way to the modern arrangement where the machines are driven individually by electric motors. Indeed, this second picture is the typical one at the present time. The trade of the millwright has taken on a totally new form.
Here there is an important fact relevant to the whole history of invention. It was exactly these millwrights and other new craftsmen of the machine age who were to develop the inventions that are at the foundation of our patent system. Now, the mechanical connection of machines involves difficulties that are quite serious, and not easy to cover by any simple mathematical formulation. In the first place, long lines of shafting either have to be well aligned, or
to employ ingenious modes of connection, such as universal joints or parallel couplings, which allow for a certain amount of freedom. In the second place, the long lines of bearings needed for such shafts are very high in their power consumption. In the individual machine, the rotating and reciprocating parts are subject to similar demands of rigidity, and to similar demands that the number of bearings must be reduced as much as possible for the sake of low power consumption and simple manufacture. These prescriptions are not easily filled on the basis of general formulas, and they offer an excellent opportunity for ingenuity and inventive skill of the old-fashioned artisan sort.
It is in view of this fact that the change-over in engineering between mechanical connections and electrical connections has had so great an effect. The electrical motor is a mode of distributing power which is very convenient to construct in small sizes, so that each machine may have its own motor. The transmission losses in the wiring of a factory are relatively low, and the efficiency of the motor itself is relatively high. The connection of the motor with its wiring is not necessarily rigid, nor does it consist of many parts. There are still motives of traffic and convenience which may induce us to continue the custom of mounting the different machines of an industrial process in a single factory; but the need of connecting all the machines to a single source of power is no longer a serious reason for geographical proximity. In other words, we are now in a position to return to cottage industry, in places where it would otherwise be suitable.
I do not wish to insist that the difficulties of mechanical transmission were the only cause of the shed factories and of the demoralization they produced. Indeed, the factory system started before the machine system, as a means of introducing discipline into the highly undisciplined home industry of the individual workers, and of keeping up standards of production.
It is true, however, that these non-mechanical factories were very soon superseded by mechanical ones, and that probably the worst social effects of urban crowding and of rural depopulation took place in the machine factory. Furthermore, if the fractional-horsepower motor had been available from the start and could have increased the unit of production of a cottage worker, it is highly probable that a large part of the organization and discipline needed for successful large-scale production could have been superimposed on such home industries as spinning and weaving.
If it should be so desired, a single piece of machinery may now contain several motors, each introducing power at the proper place. This relieves the designer of much of the need for the ingenuity in mechanical design which he would otherwise have been compelled to use. In an electrical design, the mere problem of the connection of the parts seldom involves much difficulty that does not lend itself to easy mathematical formulation and solution. The inventor of linkages has been superseded by the computer of circuits. This is an example of the way in which the art of invention is conditioned by the existing means.
In the third quarter of the last century, when the electric motor was first used in industry, it was at first supposed to be nothing more than an alternative device for carrying out existing industrial techniques. It was probably not foreseen that its final effect would be to give rise to a new concept of the factory.
That other great electrical invention, the vacuum tube, has had a similar history. Before the invention of the vacuum tube, it took many separate mechanisms to regulate systems of great power. Indeed, most of the regulatory means themselves employed considerable power. There were exceptions to this, but only in specific fields, such as the steering of ships.
As late as 1915, I crossed the ocean on one of the old ships of the American Line. It belonged to the
transitional period when ships still carried sails, as well as a pointed bow to carry a bowsprit. In a well-deck not far aft of the main superstructure, there was a formidable engine, consisting of four or five six-foot wheels with hand-spokes. These wheels were supposed to control the ship in the event that its automatic steering engine broke down. In a storm, it would have taken ten men or more, exerting their full strength, to keep that great ship on its course. This was not the usual method of control of the ship, but an emergency replacement, or as sailors call it, a "jury steering wheel." For normal control, the ship carried a steering engine which translated the relatively small forces of the quartermaster at the wheel into the movement of the massive rudder. Thus even on a purely mechanical basis, some progress had been made toward the solution of the problem of amplification of forces or torques. Nevertheless, at that time, this solution of the amplification problem did not range over extreme differences between the levels of input and of output, nor was it embodied in a convenient universal type of apparatus.
The most flexible universal apparatus for amplifying small energy-levels into high energy-levels is the vacuum tube, or electron valve. The history of this is interesting, though it is too complex for us to discuss here. It is however amusing to reflect that the invention of the electron valve originated in Edison's greatest scientific discovery, and perhaps the only one which he did not capitalize into an invention. He observed that when an electrode was placed inside an electric lamp, and was taken as electrically positive with respect to the filament, then a current would flow if the filament were heated, but not otherwise. Through a series of inventions by other people, this led to a more effective way than any known before of controlling a large current by a small voltage. This is the basis of the modern radio industry, but it is also an
industrial tool which is spreading widely into new fields. It is thus no longer necessary to control a process at high energy-levels by a mechanism in which the important details of control are carried out at these levels. It is quite possible to form a certain pattern of behavior response at levels much lower even than those found in usual radio sets, and then to employ a series of amplifying tubes to control by this apparatus a machine as heavy as a steel-rolling mill. The work of discriminating and of forming the pattern of behavior for this is done under conditions in which the power losses are insignificant, and yet the final employment of this discriminatory process is at arbitrarily high levels of power.
It will be seen that this is an invention which alters the fundamental conditions of industry, quite as vitally as the transmission and subdivision of power through the use of the small electric motor. The study of the pattern of behavior is transferred to a special part of the instrument, in which power-economy is of very little importance. We have thus deprived of much of their importance the dodges and devices previously used to insure that a mechanical linkage should consist of the fewest possible elements, as well as the devices used to minimize friction and lost motion. The design of machines involving such parts has been transferred from the domain of the skilled shopworker to that of the research-laboratory man; and in this he has all the available tools of circuit theory to replace a mechanical ingenuity of the old sort. Invention in the old sense has been supplanted by the intelligent employment of certain laws of nature. The step from the laws of nature to their employment has been reduced by a hundred times.
I have previously said that when an invention is made, a considerable period generally elapses before its full implications are understood. It was long before people became aware of the full impact of the airplane
on international relations and on the conditions of human life. The effect of atomic energy on mankind and the future is yet to be assessed, although many observers insist that it is merely a new weapon like all older weapons.
The case of the vacuum tube was similar. In the beginning, it was regarded merely as an extra tool to supplement the already existing techniques of telephonic communication. The electrical engineers first mistook its real importance to such an extent that for years vacuum tubes were relegated simply to a particular part of the communication network. This part was connected with other parts consisting only of the traditional so-called inactive circuit elements: the resistances, the capacitances, and the inductances. Only since the war have engineers felt free enough in their employment of vacuum tubes to insert them where necessary, in the same way they had previously inserted passive elements of these three kinds.
Though the vacuum tube received its debut in the communications industry, the boundaries and extent of this industry were not fully understood for a long period. There were sporadic uses of the vacuum tube and of its sister invention, the photoelectric cell, for scanning the products of industry; as, for example, for regulating the thickness of a web coming out of a paper machine, or for inspecting the color of a can of
pineapples. These uses did not as yet form a reasoned new technique, nor were they associated in the engineering mind with the vacuum tube's other function, communication.
All this changed in the war. One of the few things gained from the great conflict was the rapid development of invention, under the stimulus of necessity and the unlimited employment of money; and above all, the new blood called in to industrial research. At the beginning of the war, our greatest need was to keep England from being knocked out by an overwhelming air attack. Accordingly, the anti-aircraft cannon was one of the first objects of our scientific war effort, especially when combined with the airplane-detecting device of radar, or ultra-high-frequency Hertzian waves.
The technique of radar used the same modalities as the existing technique of radio, besides inventing new ones of its own. It was thus natural to consider radar as a branch of communication theory. Besides finding airplanes by radar, it was necessary to shoot them down. This involves the problem of fire control. The speed of the airplane has made it necessary to compute the elements of the trajectory of the anti-aircraft missile by machine, and to give the predicting machine itself communication functions which had previously been assigned to human beings. Thus the problem of anti-aircraft fire control made a new generation of engineers familiar with the notion of a communication addressed to a machine rather than to a person.
In our chapter on language, we have already mentioned another field in which for a considerable time this notion had been familiar to a limited group of engineers: the field of the automatic hydroelectric power station. During the period immediately preceding World War II, other uses were found for the vacuum tube, coupled directly to the machine rather than to the human agent. Among these were more general
applications to computing machines. The concept of the large-scale computing machine as developed by Vannevar Bush among others was originally a purely mechanical one. The integration was done by rolling disks engaging one another in a frictional manner; and the interchange of outputs and inputs between these disks was the task of a classical train of shafts and gears.

The mother idea of these first computing machines is much older than the work of Vannevar Bush. In certain respects it goes back to the work of Babbage early in the last century. Babbage had a surprisingly modern idea for a computing machine, but his mechanical means fell far behind his ambitions. The first difficulty he met, and with which he could not cope, was that a long train of gears requires considerable power to run it, so that its output of power and torque very soon becomes too small to actuate the remaining parts of the apparatus. Bush saw this difficulty and overcame it in a very ingenious way. Besides the electrical amplifiers depending on vacuum tubes and on similar devices, there are certain mechanical torque-amplifiers which are familiar, for example, to everyone acquainted with ships and the unloading of cargo. The stevedore raises the cargo-slings by taking a purchase on his load around the drum of a donkey-engine or cargo-hoist. In this way, the tension which he exerts mechanically is increased by a factor which grows extremely rapidly with the angle of contact between his rope and the rotating drum. Thus one man is able to control the lifting of a load of many tons. This device is fundamentally a force- or torque-amplifier. By an ingenious bit of design, Bush inserted such mechanical amplifiers between the stages of his computing machine, and thus was able to do effectively the sort of thing which had only been a dream for Babbage.
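The rope-and-drum amplification described here obeys the classical capstan relation; the formula is standard rope-friction mechanics rather than anything stated in the text, and the friction coefficient and turn count in the numerical illustration are assumptions only:

$$T_{\text{load}} = T_{\text{hold}}\,e^{\mu\theta}$$

With a friction coefficient $\mu \approx 0.3$ and three full turns of rope ($\theta = 6\pi$), the factor is $e^{0.3 \cdot 6\pi} \approx 285$: a holding pull of ten kilograms controls a load of nearly three tons, which is just the stevedore's trick.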
In the early days of Vannevar Bush's work, before there were any high-speed automatic controls in factories, I had become interested in the problem of the partial differential equation. Bush's work had concerned the ordinary differential equation, in which the independent variable was the time, and which duplicated in its time course the course of the phenomena it was analyzing, although possibly at a different rate. In the partial differential equation, the quantities which take the place of the time are spread out in space, and I suggested to Bush that, in view of the technique of television scanning which was then developing at high speed, we would ourselves have to consider such a scanning technique to represent the many variables of, let us say, space, against the one variable of time. A machine so designed would have to work at an extremely high speed, which to my mind put mechanical processes out of the question and threw us back on electronic processes. In such a machine, moreover, all data would have to be written, read, and erased with a speed compatible with that of the other operations of the machine; and in addition to including an arithmetical mechanism, it would need a logical mechanism as well, and would have to be able to handle problems of programming on a purely logical and automatic basis.

The notion of programming in the factory had already become familiar through the work of Taylor and the Gilbreths on time study, and was ready to be transferred to the machine. This offered considerable difficulty of detail, but no great difficulty of principle. I was thus convinced as far back as 1940 that the automatic factory was on the horizon, and I said as much to Vannevar Bush. The consequent development of automatization, both before and after the publication of the first edition of this book, has convinced me that I was right in my judgment, and that this development would be one of the great factors in conditioning the social and technical life of the age to come, the keynote of the second industrial revolution.
In one of its earlier phases, the Bush Differential Analyzer performed all the principal amplification functions. It used electricity only to give power to the motors running the machine as a whole. This state of computing-mechanisms was intermediate and transitory. It very soon became clear that amplifiers of an electric nature, connected by wires rather than by shafts, were both less expensive and more flexible than mechanical amplifiers and connections. Accordingly, the later forms of Bush's machine made use of vacuum-tube devices. This has been continued in all their successors, whether they were what are now called analogy machines, which work primarily by the measurement of physical quantities, or digital machines, which work primarily by counting and arithmetical operations.

The development of these computing machines has been very rapid since the war. For a large range of computational work, they have shown themselves much faster and more accurate than the human computer. Their speed has long since reached such a level that any intermediate human intervention in their work is out of the question. Thus they offer the same need to replace human capacities by machine capacities as those which we found in the anti-aircraft computer. The parts of the machine must speak to one another through an appropriate language, without speaking to any person or listening to any person, except in the terminal and initial stages of the process. Here again we have an element which has contributed to the general acceptance of the extension to machines of the idea of communication.

In this conversation between the parts of a machine, it is often necessary to take cognizance of what the machine has already said. Here there enters the principle of feedback, which we have already discussed, and which is older than its exemplification in the ship's steering engine, and is at least as old, in fact, as the governor which regulates the speed of Watt's steam engine.
This governor keeps the engine from running wild when its load is removed. If it starts to run wild, the balls of the governor fly upward from centrifugal action, and in their upward flight they move a lever which partly cuts off the admission of steam. Thus the tendency to speed up produces a partly compensatory tendency to slow down. This method of regulation received a thorough mathematical analysis at the hands of Clerk Maxwell in 1868.
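The "partly compensatory" behavior of the governor is negative proportional feedback. A minimal sketch in Python, with invented figures for inertia, gain, and load (this is not a model of any historical engine):

```python
# A toy Watt governor: steam admission is throttled in proportion
# to the excess of shaft speed over the set point.
SET_POINT = 10.0   # desired shaft speed (arbitrary units)
GAIN = 2.0         # how sharply the flyball lever cuts off steam
INERTIA = 5.0      # flywheel inertia
DT = 0.01          # integration step

speed, load = 10.0, 4.0              # start in equilibrium under load
for step in range(4000):
    if step == 2000:
        load = 0.0                   # the load is suddenly removed
    steam = max(0.0, 4.0 - GAIN * (speed - SET_POINT))
    speed += DT * (steam - load) / INERTIA
print(round(speed, 2))  # ~12.0: a bounded rise, not a runaway
```

The speed settles two units high rather than returning exactly to the set point: proportional control compensates only partly, just as the passage says. Maxwell's analysis concerns when such a loop, once lags are added, begins to hunt instead of settling.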
We have so far given examples in which the feedback process takes a primarily mechanical form. However, a series of operations of the same structure can be carried out through electrical and even vacuum-tube means. These means promise to be the future standard method of designing control apparatus.

There has long been a tendency to render factories and machines automatic. Except for some special purpose, one would no longer think of producing screws by the use of the ordinary lathe, in which a mechanic must watch the progress of his cutter and regulate it by hand. The production of screws in quantity without serious human intervention is now the normal task of the ordinary screw machine. Although this makes no special use of the process of feedback nor of the vacuum tube, it accomplishes a somewhat similar end. What the feedback and the vacuum tube have made possible is not the sporadic design of individual automatic mechanisms, but a general policy for the construction of automatic mechanisms of the most varied type. In this they have been reinforced by our new theoretical treatment of communication, which takes full cognizance of the possibilities of communication between machine and machine. It is this conjunction of circumstances which now renders possible the new automatic age.

The existing state of industrial techniques includes the whole of the results of the first industrial revolution, together with many inventions which we now see to be precursors of the second industrial revolution. What the precise boundary between these two revolutions may be, it is still too early to say. In its potential significance, the vacuum tube certainly belongs to an industrial revolution different from that of the age of power; and yet it is only at present that the true significance of the invention of the vacuum tube has been sufficiently realized to allow us to attribute the present age to a new and second industrial revolution.

Up to now we have been talking about the existing state of affairs. We have not covered more than a small part of the aspects of the previous industrial revolution. We have not mentioned the airplane, nor the bulldozer, together with the other mechanical tools of construction, nor the automobile, nor even one-tenth of those factors which have converted modern life to something totally unlike the life of any other period. It is fair to say, however, that except for a considerable number of isolated examples, the industrial revolution up to the present has displaced man and the beast as a source of power, without making any great impression
on other human functions. The best that a pick-and-shovel worker can do to make a living at the present time is to act as a sort of gleaner after the bulldozer. In all important respects, the man who has nothing but his physical power to sell has nothing to sell which it is worth anyone's money to buy.

Let us now go on to a picture of a more completely automatic age. Let us consider what, for example, the automobile factory of the future will be like; and in particular the assembly line, which is the part of the automobile factory that employs the most labor. In the first place, the sequence of operations will be controlled by something like a modern high-speed computing machine. In this book and elsewhere, I have often said that the high-speed computing machine is primarily a logical machine, which confronts different propositions with one another and draws some of their consequences. It is possible to translate the whole of mathematics into the performance of a sequence of purely logical tasks. If this representation of mathematics is embodied in the machine, the machine will be a computing machine in the ordinary sense. However, such a computing machine, besides accomplishing ordinary mathematical tasks, will be able to undertake the logical task of channeling a series of orders concerning mathematical operations. Therefore, as present high-speed computing machines in fact do, it will contain at least one large assembly which is purely logical.

The instructions to such a machine, and here too I am speaking of present practice, are given by what we have called a taping. The orders given the machine may be fed into it by a taping which is completely predetermined. It is also possible that the actual contingencies met in the performance of the machine may be handed over as a basis of further regulation to a new control tape constructed by the machine itself, or to a modification of the old one. I have already explained how I think such processes are related to learning.
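The distinction between a predetermined taping and one revised by the machine itself can be put in a few lines of modern code. This is a sketch only; the operation names and the crude revision rule are invented for illustration:

```python
# A control "taping": a predetermined sequence of orders, each an
# operation with a set point, read off one by one by the control loop.
tape = [("drill", 4.0), ("rivet", 2.5), ("inspect", 0.0)]

def run(tape, readings):
    """Execute the tape; fold the contingencies actually met back
    into a revised tape, the crude sort of learning described above."""
    revised = []
    for (op, set_point), measured in zip(tape, readings):
        print(f"{op}: ordered {set_point}, measured {measured}")
        revised.append((op, set_point + 0.5 * (measured - set_point)))
    return revised

new_tape = run(tape, readings=[4.2, 2.5, 0.1])  # the modified taping
```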
It may be thought that the present great expense of computing machines bars them from use in industrial processes; and furthermore that the delicacy of the work needed in their construction and the variability of their functions preclude the use of the methods of mass production in constructing them. Neither of these charges is correct. In the first place, the enormous computing machines now used for the highest level of mathematical work cost something of the order of hundreds of thousands of dollars. Even this price would not be forbidding for the control machine of a really large factory, but it is not the relevant price. The present computing machines are developing so rapidly that practically every one constructed is a new model. In other words, a large part of these apparently exorbitant prices goes into new work of design, and into new parts, which are produced by a very high quality of labor under the most expensive circumstances. If one of these computing machines were therefore established in price and model, and put to use in quantities of tens or twenties, it is very doubtful whether its price would be higher than tens of thousands of dollars. A similar machine of smaller capacity, not suited for the most difficult computational problems, but nevertheless quite adequate for factory control, would probably cost no more than a few thousand dollars in any sort of moderate-scale production.

Now let us consider the problem of the mass production of computing machines. If the only opportunity for mass production were the mass production of completed machines, it is quite clear that for a considerable period the best we could hope for would be a moderate-scale production. However, in each machine the parts are largely repetitive in very considerable numbers. This is true whether we consider the memory apparatus, the logical apparatus, or the arithmetical
subassembly. Thus the production of a few dozen machines only represents a potential mass production of the parts, and is accompanied by the same economic advantages.

It may still seem that the delicacy of the machines must mean that each job demands a special new model. This is also false. Given even a rough similarity in the type of mathematical and logical operations demanded of the mathematical and logical units of the machine, the over-all performance is regulated by the taping, or at any rate by the original taping. The taping of such a machine is a highly skilled task for a professional man of a very specialized type; but it is largely or entirely a once-for-all job, and need only be partly repeated when the machine is modified for a new industrial setup. Thus the cost of such a skilled technician will be distributed over a tremendous output, and will not really be a significant factor in the use of the machine.

The computing machine represents the center of the automatic factory, but it will never be the whole factory. On the one hand, it receives its detailed instructions from elements of the nature of sense organs, such as photoelectric cells, condensers for the reading of the thickness of a web of paper, thermometers, hydrogen-ion-concentration meters, and the general run of apparatus now built by instrument companies for the manual control of industrial processes. These instruments are already built to report electrically at remote stations. All they need to enable them to introduce their information into an automatic high-speed computer is a reading apparatus which will translate position or scale into a pattern of consecutive digits. Such apparatus already exists, and offers no great difficulty, either of principle or of constructional detail. The sense-organ problem is not new, and it is already effectively solved.
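The reading apparatus that translates "position or scale into a pattern of consecutive digits" is what is now called an analog-to-digital converter. A minimal sketch, assuming an invented 0-10 volt instrument range and an arbitrary twelve-digit binary resolution:

```python
def to_digits(reading, lo=0.0, hi=10.0, bits=12):
    """Quantize an instrument reading into a binary code word."""
    levels = 2 ** bits
    fraction = min(max((reading - lo) / (hi - lo), 0.0), 1.0)
    code = min(int(fraction * levels), levels - 1)
    return format(code, f"0{bits}b")   # the pattern of consecutive digits

print(to_digits(7.31))  # '101110110010'
```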
Besides these sense organs, the control system must contain effectors, or components which act on the outer world. Some of these are of a type already familiar, such as valve-turning motors, electric clutches, and the like. Some of them will have to be invented, to duplicate more nearly the functions of the human hand as supplemented by the human eye. It is altogether possible, in the machining of automobile frames, to leave on certain metal lugs machined into smooth surfaces as points of reference. The tool, whether it be a drill or riveter or whatever else we want, may be led to the approximate neighborhood of these surfaces by a photoelectric mechanism, actuated for example by spots of paint. The final positioning may bring the tool up against the reference surfaces, so as to establish a firm contact, but not a destructively firm one. This is only one way of doing the job. Any competent engineer can think of a dozen more.

Of course, we assume that the instruments which act as sense organs record not only the original state of the work, but also the result of all the previous processes. Thus the machine may carry out feedback operations, either those of the simple type now so thoroughly understood, or those involving more complicated processes of discrimination, regulated by the central control as a logical or mathematical system. In other words, the all-over system will correspond to the complete animal with sense organs, effectors, and proprioceptors, and not, as in the ultra-rapid computing machine, to an isolated brain, dependent for its experiences and for its effectiveness on our intervention.

The speed with which these new devices are likely to come into industrial use will vary greatly with the different industries. Automatic machines, which may not be precisely like those described here, but which perform roughly the same functions, have already come into extensive use in continuous-process industries like canneries, steel-rolling mills, and especially wire and tin-plate factories. They are also familiar in paper
factories, which likewise produce a continuous output. Another place where they are indispensable is in that sort of factory which is too dangerous for any considerable number of workers to risk their lives in its control, and in which an emergency is likely to be so serious and costly that its possibilities should have been considered in advance, rather than left to the excited judgment of somebody on the spot. If a policy can be thought out in advance, it can be committed to a taping which will regulate the conduct to be followed in accordance with the readings of the instruments. In other words, such factories should be under a regime rather like that of the interlocking signals and switches of the railroad signal-tower. This regime is already followed in oil-cracking factories, in many other chemical works, and in the handling of the sort of dangerous materials found in the exploitation of atomic energy.

We have already mentioned the assembly line as a place for applying the same sorts of technique. In the assembly line, as in the chemical factory or the continuous-process paper mill, it is necessary to exert a certain statistical control on the quality of the product. This control depends on a sampling process. These sampling processes have now been developed by Wald and others into a technique called sequential analysis, in which the sampling is no longer taken in a lump, but is a continuous process going along with the production. That which can be done by a technique so standardized that it can be put in the hands of a statistical computer who does not understand the logic behind it, may also be executed by a computing machine. In other words, except again at the highest levels, the machine takes care of the routine statistical controls, as well as of the production process.
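Wald's sequential analysis is mechanical enough to state directly: after every item a running log-likelihood ratio is compared with two fixed thresholds, and inspection stops the moment either is crossed. A minimal sketch for monitoring a defect rate; the rates and error levels are illustrative assumptions, not figures from the text:

```python
from math import log

def sprt(items, p0=0.01, p1=0.05, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test on a stream of 0/1
    defect flags: decide between defect rates p0 (acceptable) and
    p1 (rejectable) as production goes along, not on a lump sample."""
    upper = log((1 - beta) / alpha)    # crossing it condemns the lot
    lower = log(beta / (1 - alpha))    # crossing it passes the lot
    llr, n = 0.0, 0
    for n, defective in enumerate(items, start=1):
        llr += log(p1 / p0) if defective else log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "reject", n
        if llr <= lower:
            return "accept", n
    return "undecided", n

print(sprt([0] * 80))  # ('accept', 72): a clean run stops early
```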
In general, factories have an accounting procedure which is independent of production; but insofar as the data which occur in cost accounting come from the machine or assembly line, they may be fed directly into a computing machine. Other data may be fed in from time to time by human operators, but the bulk of the clerical work can be handled mechanically, leaving only the extraordinary details, such as outside correspondence, for human beings. But even a large part of the outside correspondence may be received from the correspondents on punched cards, or transferred to punched cards by extremely low-grade labor. From this stage on, everything may go by machine. This mechanization also may apply to a not inappreciable part of the library and filing facilities of an industrial plant. In other words, …

There will, of course, be trades into which the new industrial revolution will not penetrate, either because the new control machines are not economical in industries on so small a scale as not to be able to carry the considerable capital costs involved, or because their work is so varied that a new taping will be necessary for almost every job. I cannot see automatic machinery of the judgment-replacing type coming into use in the corner grocery, or in the corner garage, although I can very well see it employed by the wholesale grocer and the automobile manufacturer. The farm laborer too, although he is beginning to be pressed by automatic machinery, is protected from the full pressure of it because of the ground he has to cover, the variability of the crops he must till, and the special conditions of weather and the like that he must meet. Even here, the large-scale or plantation farmer is becoming increasingly dependent on cotton-picking and weed-burning machinery, as the wheat farmer has long been dependent on the McCormick reaper.
Where such machines may be used, some use of machinery of judgment is not inconceivable.

A war would change all this overnight. If we should engage in a war with a major power like Russia, which would make serious demands on the infantry, and consequently on our manpower, we may be hard put to keep up our industrial production. Under these circumstances, the matter of replacing human production by other modes may well be a life-or-death matter to the nation. We are already as far along in the process of developing a unified system of automatic control machines as we were in the development of radar in 1939. Just as the emergency of the Battle of Britain made it necessary to attack the radar problem in a massive manner, and to hurry up the natural development of the field by what may have been decades, so too the needs of labor replacement are likely to act on us in a similar way in the case of another war. Personnel such as the skilled radio amateurs, mathematicians, and physicists, who were so rapidly turned into competent electrical engineers for the purposes of radar design, is still available for the similar task of automatic-machine design. There is a new and skilled generation coming up, which they have trained.

Under these circumstances, the period of about two years which it took for radar to get onto the battlefield with a high degree of effectiveness is scarcely likely to be exceeded by the period of evolution of the automatic factory. At the end of such a war, the "know-how" needed to construct such factories will be common. There will even be a considerable backlog of equipment manufactured for the government, which is
likely to be on sale or available to the industrialists. Thus a new war will almost inevitably see the automatic age in full swing within less than five years.

I have spoken of the actuality and the imminence of this new possibility. … It may also produce cultural results as trivial and wasteful as the greater part of those so far obtained from the radio and the movies. Be that as it may, the intermediate period of the introduction of the new means, especially if it comes in the fulminating manner to be expected from a new war, will lead to an immediate transitional period of disastrous confusion. We have a good deal of experience as to how the industrialists regard a new industrial potential. Their whole propaganda is to the effect that it must not be considered as the business of the government, but must be left open to whatever entrepreneurs wish to invest money in it. We also know that they have very few inhibitions when it comes to taking all the profit out of an industry that there is to be taken, and then letting the public pick up the pieces. This is the history of the lumber and mining industries, and is part of what we have called in another chapter the traditional American philosophy of progress.

Under these circumstances, industry will be flooded with the new tools to the extent that they appear to yield immediate profits, irrespective of what long-time damage they can do. We shall see a process parallel to the way in which the use of atomic energy for bombs has been allowed to compromise the very necessary potentialities of the long-time use of atomic power to replace our oil and coal supplies, which are within
centuries, if not decades, of utter exhaustion. Note well that atomic bombs do not compete with power companies.

Let us remember that the automatic machine, whatever we think of any feelings it may have or may not have, is the precise economic equivalent of slave labor. … This depression will ruin many industries, possibly even the industries which have taken advantage of the new potentialities. However, there is nothing in the industrial tradition which forbids an industrialist to make a sure and quick profit, and to get out before the crash touches him personally. Thus …

There are, however, hopeful signs on the horizon. Since the publication of the first edition of this book, I have participated in two big meetings with representatives of business management, and I have been delighted to find an awareness, on the part of a great many of those present, of the social dangers of our new technology and of the social obligations of those responsible for management: to see that the new modalities are used for the benefit of man, for increasing his leisure and enriching his spiritual life, rather than merely for profits and the worship of the machine as a new brazen calf. There are many dangers still ahead, but the roots of good will are there, and I do not feel as thoroughly pessimistic as I did at the time of the publication of the first edition of this book.
X. SOME COMMUNICATION MACHINES AND THEIR FUTURE

I devoted the last chapter to the problem of the industrial and social impact of certain control machines which are already beginning to show important possibilities for the replacement of human labor. However, there are a variety of problems concerning automata which have nothing whatever to do with our factory system, but serve either to illustrate and throw light on the possibilities of communicative mechanisms in general, or semi-medical purposes, for the prosthesis and replacement of human functions which have been lost or weakened in certain unfortunate individuals.

The first machine which we shall discuss was designed for theoretical purposes, as an illustration of an earlier piece of work which had been done by me on paper some years ago, together with my colleagues, Dr. Arturo Rosenblueth and Dr. Julian Bigelow. In this work we conjectured that the mechanism of voluntary activity was of a feedback nature, and accordingly we sought in human voluntary activity for the characteristics of breakdown which feedback mechanisms exhibit when they are overloaded. The simplest type of breakdown exhibits itself as an oscillation in a goal-seeking process which appears only when that process is actively invoked. This corresponds rather closely to the human phenomenon known as intention tremor, in which, for example, when the patient reaches for a glass of water, his hand swings wider and wider, and he cannot lift up the glass.
There is another type of human tremor which is in some ways diametrically opposite to intention tremor. It is known as Parkinsonianism, and is familiar to all of us as the shaking palsy of old men. Here the patient displays the tremor even at rest; and, in fact, if the disease is not too greatly marked, only at rest. When he attempts to accomplish a definite purpose, this tremor subsides to such an extent that the victim of an early stage of Parkinsonianism can even be a successful eye surgeon.

The three of us associated this Parkinsonian tremor with an aspect of feedback slightly different from the feedback associated with the accomplishment of purpose. In order to accomplish a purpose successfully, the various joints which are not directly associated with the purposive movement must be kept in such a condition of mild tonus or tension that the final purposive contraction of the muscles is properly backed by feedback. It can be shown mathematically that in both cases of tremor the feedback is excessively large. Now, when we consider the feedback which is important in Parkinsonianism, it turns out that the voluntary feedback which regulates the main motion is in the opposite direction to the postural feedback, as far as the motion of the parts regulated by the postural feedback is concerned. Therefore, the existence of a purpose tends to cut down the excessive amplification of postural feedback, and may very well bring it below the oscillation level.

These things were very well known to us theoretically, but until recently we had not gone to the trouble of making a working model of them. However, it
became desirable for us to construct a demonstration apparatus which would act according to our theories. Accordingly, Professor J. B. Wiesner of the Electronics Laboratory of the Massachusetts Institute of Technology discussed with me the possibility of constructing a tropism machine, or machine with a simple fixed built-in purpose, with parts sufficiently adjustable to show the main phenomena of voluntary feedback and of what we have just called postural feedback, and their breakdown. At our suggestion, Mr. Henry Singleton took up the problem of building such a machine, and carried it to a brilliant and successful conclusion. This machine has two principal modes of action, in one of which it is positively photo-tropic and searches for light, and in the other of which it is negatively photo-tropic and runs away from light. We called the machine in its two respective functions the Moth and the Bedbug.

The machine consists of a little three-wheeled cart with a propelling motor on the rear axle. The front wheel is a caster steered by a tiller. The cart carries a pair of forwardly directed photocells, one of which takes in the left quadrant, while the other takes in the right quadrant. These cells form the opposite arms of a bridge. The output of the bridge, which is reversible, is put through an adjustable amplifier. After this it goes to a positioning motor which regulates the position of one contact of a potentiometer. The other contact is also regulated by a positioning motor, which moves the tiller as well. The output of the potentiometer, which represents the difference between the positions of the two positioning motors, leads through a second adjustable amplifier to the second positioning motor, thus regulating the tiller. According to the direction of the output of the bridge, this instrument will be steered either toward the quadrant with the more intense light or away from it. In either case, it automatically tends to balance itself.
There is thus a feedback dependent on the source of light, proceeding from the light to the photoelectric cells, and thence to the rudder control system, by which the machine finally regulates the direction of its own motion and changes the angle of incidence of the light. This feedback tends to accomplish the purpose of either positive or negative photo-tropism. It is the analogue of a voluntary feedback, for in man we consider that a voluntary action is essentially a choice among tropisms. When this feedback is overloaded by increasing the amplification, the little cart, "the moth" or "the bedbug" according to the direction of its tropism, will seek the light or avoid it in an oscillatory manner, in which the oscillations grow ever larger. This is a close analogue to the phenomenon of intention tremor, which is associated with injury to the cerebellum.

The positioning mechanism for the rudder contains a second feedback which may be considered as postural. This feedback runs from the potentiometer to the second motor and back to the potentiometer, and its zero point is regulated by the output of the first feedback. If this is overloaded, the rudder goes into a second sort of tremor. This second tremor appears in the absence of light: that is, when the machine is not given a purpose. Theoretically, this is due to the fact that, as far as the second mechanism is concerned, the action of the first mechanism is antagonistic to its feedback and tends to decrease its amount. This phenomenon in man is what we have described as Parkinsonianism.
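The way oscillation sets in as the amplification is raised can be seen in a toy discrete-time model of the steering loop. This is a sketch only; the one-step lag and the gain values are invented, not a description of Singleton's circuit:

```python
# Toy steering loop: the tiller correction is proportional to the
# bearing error toward the light, but takes effect one step late,
# as any real positioning motor must.
def run(gain, steps=8):
    error, correction = 1.0, 0.0
    trace = []
    for _ in range(steps):
        error -= correction          # last step's correction acts now
        correction = gain * error    # the amplifier sets the next one
        trace.append(round(error, 2))
    return trace

print(run(gain=0.5))  # 1.0, 0.5, 0.25, ...: the error dies away
print(run(gain=2.5))  # 1.0, -1.5, 2.25, ...: swings grow ever larger
```

Below the critical gain the cart steadily homes on the light; above it, each over-correction breeds a larger one, the mechanical counterpart of intention tremor. A second loop of the same form around the potentiometer gives the "postural" tremor.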
I have recently received a letter from Dr. Grey Walter of the Burden Neurological Institute at Bristol, England, in which he expresses interest in "the moth" or "bedbug," and in which he tells me of a similar mechanism of his own, which differs from mine in having a determined but variable purpose. In his own language, "We have included features other than inverse feedback which give to it an exploratory and ethical attitude to the universe as well as a purely tropistic one." The possibility of such a change in behavior pattern is discussed in the chapter of this book concerning learning, and this discussion is directly relevant to the Walter machine, although at present I do not know just what means he uses to secure such a type of behavior.

The moth and Dr. Walter's further development of a tropism machine seem to be at first sight exercises in virtuosity, or at most mechanical commentaries to a philosophical text. Nevertheless, they have a certain definite usefulness. The United States Army Medical Corps has taken photographs of "the moth" to compare with photographs of actual cases of nervous tremor, so that they are thus of assistance in the instruction of army neurologists.

There is a second class of machines with which we have also been concerned, which has a much more direct and immediately important medical value. These machines may be used to make up for the losses of the maimed and of the sensorily deficient, as well as to give new and potentially dangerous powers to the already powerful. The help of the machine may extend to the construction of better artificial limbs; to instruments to help the blind to read pages of ordinary text by translating the visual pattern into auditory terms; and to other similar aids to make them aware of approaching dangers and to give them freedom of locomotion. In particular, we may use the machine to aid the totally deaf.

Aids of this last class are probably the easiest to construct: partly because the technique of the telephone is the best studied and most familiar technique of communication; partly because the deprivation of hearing is overwhelmingly a deprivation of one thing, free participation in human conversation; and partly because the useful information carried by speech can be compressed into such a narrow compass that it is not beyond the carrying power of the sense of touch.

Some time ago, Professor Wiesner told me that he was interested in the possibility of constructing an aid
for the totally deaf, and that he would like to hear my views on the subject. I gave my views, and it turned out that we were of much the same opinion. We were aware of the work that had already been done on visible speech at the Bell Telephone Laboratories, and of its relation to their earlier work on the Vocoder. We knew that the Vocoder work gave us a measure of the amount of information which it is necessary to transmit for the intelligibility of speech, more favorable than that of any previous method. We felt, however, that visible speech had two disadvantages: namely, that it did not seem to be easy to produce in a portable form, and that it made too heavy demands on the sense of vision, which is relatively more important for the deaf person than for the rest of us. A rough estimate showed that a transfer to the sense of touch of the principle used in the visible-speech instrument was possible, and this we decided should be the basis of our apparatus.

We found out very soon after starting that the investigators at the Bell Laboratories had also considered the possibility of a tactile reception of sound, and had included it in their patent application. They were kind enough to tell us that they had done no experimental work on it, and that they left us free to go ahead on our researches. Accordingly, we put the design and development of this apparatus into the hands of Mr. Leon Levine, a graduate student in the Electronics Laboratory. We foresaw that the problem of training would be a large part of the work necessary to bring our device into actual use, and here we had the benefit of the counsel of Dr. Alexander Bavelas of our Department of Psychology.

The problem of interpreting speech through another sense than that of hearing, such as the sense of touch, may be given the following interpretation from the point of view of language. As we have said, we may roughly distinguish three stages of language, and two intermediate translations, between the outside world
and the subjective receipt of information. The first stage consists in the acoustic symbols, taken physically as vibrations in the air; the second or phonetic stage consists in the various phenomena in the inner ear and the associated part of the nervous system; the third or semantic stage represents the transfer of these symbols into an experience of meaning. In the case of the deaf person, the first and the third stages are still present, but the second stage is missing. However, it is perfectly conceivable that we may replace the second stage by one by-passing the sense of hearing and proceeding, for example, through the sense of touch. Here the translation between the first stage and the new second stage is performed, not by a physical-nervous apparatus which is born into us, but by an artificial, humanly constructed system. The translation between the new second stage and the third stage is not directly accessible to our inspection, but represents the formation of a new system of habits and responses, such as those we develop when we learn to drive a car.

The present status of our apparatus is this: the transition between the first and the new second stage is well under control, although there are certain technical difficulties still to be overcome. We are making studies of the learning process, that is, of the transition between the second and third stages; and in our opinion they seem extremely promising. The best result that we can show as yet is that, with a learned vocabulary of twelve simple words, a run of eighty random repetitions was made with only six errors.

In our work, we had to keep certain facts always in mind. First among these, as we have said, is the fact that hearing is not only a sense of communication, but a sense of communication which receives its chief use in establishing a rapport with other individuals. It is also a sense corresponding to certain communicative activities on our part: namely, those of speech. Other uses of hearing are important, such as the reception of
the sounds of nature and the appreciation of music, but they are not so important that we should consider a person as socially deaf if he shared in the ordinary interpersonal communication by speech, and in no other use of hearing. In other words, hearing has the property that if we were deprived of all its uses except that of speech-communication with other people, we should still be suffering under a minimal handicap.

For the purpose of sensory prosthesis, we must consider the entire speech process as a unit. How essential this is is immediately observed when we consider the speech of deaf-mutes. With most deaf-mutes, a training in lip-reading is neither impossible nor excessively difficult, to the extent that such persons can achieve a quite tolerable proficiency in receiving speech-messages from others. On the other hand, with very few exceptions, and these the result of the best and most recent training, the vast majority of deaf-mutes, though they can learn how to use their lips and mouths to produce sound, do so with a grotesque and harsh intonation, which represents a highly inefficient form of sending a message. The difficulty lies in the fact that for these people the act of conversation has been broken into two entirely separate parts.

We may simulate the situation for the normal person very easily if we give him a telephone-communication system with another person, in which his own speech is not transmitted by the telephone to his own ears. It is very easy to construct such dead-microphone transmission systems, and they have actually been considered by the telephone companies, only to be rejected because of the frightful sense of frustration they cause, especially the frustration of not knowing how much of one's own voice gets onto the line. People using a system of this sort are always forced to yell at the top of their voices, to be sure that they have missed no opportunity to get the message accepted by the line.
We now come back to ordinary speech. We see that the processes of speech and hearing in the normal person have never been separated, and that the very learning of speech has been conditioned by the fact that each individual hears himself speak. For the best results it is not enough that the individual hear himself speak at widely separated occasions, and that he fill in the gaps between these by memory. A good quality of speech can only be attained when it is subject to a continuous monitoring and self-criticism. Any aid to the totally deaf must take advantage of this fact; and although it may indeed appeal to another sense, such as that of touch, rather than to the missing sense of hearing, it must resemble the electric hearing-aids of the present day in being portable and continuously worn.

The further philosophy of prosthesis for hearing depends on the amount of information effectively used in hearing. The crudest evaluation of this amount involves the estimate of the maximum that can be communicated over a sound range of 10,000 cycles and an amplitude range of some 80 decibels. This load of communication, however, while it marks the maximum of what the ear could conceivably do, is much too great to represent the effective information given by speech in practice. In the first place, speech of telephone quality does not involve the transmission of more than 3,000 cycles; and the amplitude range is certainly not more than from 5 to 10 decibels; but even here, while we have not exaggerated what is transmitted to the ear, we are still grossly exaggerating what is used by the ear and brain to reconstitute recognizable speech.
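Rough numbers can be attached to this "crudest evaluation" with the Shannon-Hartley capacity formula, a yardstick the passage itself does not invoke, so the figures below are an editorial illustration only:

```python
from math import log2

def capacity(bandwidth_hz, range_db):
    snr = 10 ** (range_db / 10)          # 80 dB -> a 10**8 power ratio
    return bandwidth_hz * log2(1 + snr)  # bits per second

print(round(capacity(10_000, 80)))  # ~265754: the ear's conceivable maximum
print(round(capacity(3_000, 10)))   # ~10378: telephone-quality speech
```

Even the smaller figure still overstates what the ear and brain actually use; the Vocoder result quoted next cuts the requirement by a further factor of ten to a hundred.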
We have said that the best work done on this problem of estimation is the Vocoder work of the Bell Telephone Laboratories. It may be used to show that, if human speech is properly divided into not more than five bands, and if these are rectified so that only their form-envelopes or outer shapes are perceived and are used to modulate quite arbitrary sounds within their frequency range, then, if these sounds are finally added up, the original speech is recognizable as speech, and almost recognizable as the speech of a particular individual. Nevertheless, the amount of possible information transmitted, used or unused, has been cut to not more than a tenth or a hundredth of the original potential information present.

When we distinguish between used and unused information in speech, we distinguish between the maximum coding capacity of speech as received by the ear, and the maximum capacity that penetrates through the cascade network of successive stages consisting of the ear followed by the brain. The first is only relevant to the transmission of speech through the air and through intermediate instruments like the telephone, followed by the ear itself, but not by whatever apparatus in the brain is used in the understanding of speech. The second refers to the transmitting power of the entire complex: air, telephone, ear, brain. Of course, there may be finer shades of inflection which do not get through the over-all narrow-band transmission system of which we are speaking, and it is hard to evaluate the amount of lost information carried by these; but it seems to be relatively small. This is the idea behind the Vocoder. The earlier engineering estimates of information were defective in that they ignored the terminating element of the chain from the air to the brain.

In appealing to the other senses of a deaf person, we must realize that, apart from sight, all the others are inferior to hearing, and transmit less information per unit time. The only way we can make an inferior sense like touch work with maximum efficiency is to send through it not the full information that we get through hearing, but an edited portion of that hearing suitable for the understanding of speech. In other words, we replace part of the function that the cortex normally performs after the reception of sound by a filtering of our information
before it goes through the tactile receptors. We thus transfer part of the function of the cortex of the brain to an artificial external cortex. The precise way we do this in the apparatus we are considering is by separating the frequency bands of speech as in the Vocoder, and then by transmitting these different rectified bands to spatially distant tactile regions, after they have been used to modulate vibrations of frequencies easily perceived by the skin. For example, five bands may be sent respectively to the thumb and four fingers of one hand. This gives us the main ideas of the apparatus needed for the reception of intelligible speech through sound vibrations transformed electrically into touch.
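In present-day terms the apparatus is a band-split, rectify, envelope, re-modulate pipeline. A sketch with NumPy/SciPy, in which the band edges, carrier frequencies, and filter orders are all invented for illustration and not taken from the Levine apparatus:

```python
import numpy as np
from scipy import signal

FS = 16_000                                   # sampling rate, Hz
EDGES = [200, 700, 1200, 1900, 2800, 4000]    # five speech bands (assumed)
CARRIERS = [30, 60, 90, 120, 150]             # Hz: vibrations the skin feels

def hearing_glove(speech):
    """Map speech onto five envelope-modulated tactile channels,
    one per finger: separate a band, rectify it, keep only its
    outer shape, then use that shape to modulate a slow carrier."""
    t = np.arange(len(speech)) / FS
    lp = signal.butter(2, 20, btype="lowpass", fs=FS, output="sos")
    channels = []
    for lo, hi, carrier in zip(EDGES, EDGES[1:], CARRIERS):
        bp = signal.butter(4, [lo, hi], btype="bandpass", fs=FS, output="sos")
        band = signal.sosfilt(bp, speech)
        envelope = signal.sosfilt(lp, np.abs(band))   # the form-envelope
        channels.append(envelope * np.sin(2 * np.pi * carrier * t))
    return channels

# channels = hearing_glove(np.random.randn(FS))  # one second of test input
```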
We have gone far enough already to know that the patterns of a considerable number of words are sufficiently distinct from one another, and sufficiently consistent among a number of speakers, to be recognized without any great amount of speech training. From this point on, the chief direction of investigation must be that of the more thorough training of deaf-mutes in the recognition and the reproduction of sounds. On the engineering end, we shall have considerable problems concerning the portability of the apparatus and the reduction of its energy demands, without any substantial loss of performance. These matters are all still sub judice. I do not wish to establish false and in particular premature hopes on the part of the afflicted and their friends, but I think it is safe to say that the prospect of success is far from hopeless.

Since the publication of the first edition of this book, new special devices for elucidating points in the theory of communication have been developed by other workers. I have already mentioned in an earlier chapter the homeostats of Dr. Ashby and the somewhat similar machines of Dr. Grey Walter. Here let me mention some earlier machines of Dr. Walter, somewhat similar to my "moth" or "bug," but which were built for a different purpose. For these phototropic machines, each element carries a light, so that it can stimulate the others. Thus a number of them put into operation at the same time show certain groupings and mutual reactions which would be interpreted by most animal psychologists as social behavior if they were found encased in flesh and blood instead of brass and steel. It is the beginning of a new science of mechanical behavior, even though almost all of it lies in the future.

Here at M.I.T., circumstances have made it difficult to carry work on the hearing glove much further during the last two years, although the possibility of its development still exists. Meanwhile the theory, although not the detail of the device, has led to improvements in apparatus to allow the blind to get themselves through a maze of streets and buildings. This research is largely the work of Dr. Clifford M. Witcher, himself congenitally blind, who is an outstanding authority and technician in optics, electrical engineering, and the other fields necessary to this work.

A prosthetic device which looks hopeful, but has not yet been subjected to any real development or final criticism, is an artificial lung in which the activation of the breathing motor will depend on signals, electrical or mechanical, from the weakened but not destroyed breathing muscles of the patient. In this case, the normal feedback in the medulla and brain stem of the healthy person will be used even in the paralytic to supply the control of his breathing. Thus it is hoped that the so-called iron lung may no longer be a prison in which the patient forgets how to breathe, but will be an exerciser for keeping his residual faculties of breathing active, and even possibly of building them up to a point where he can breathe for himself and emerge from the machinery enclosing him.

Up to the present, we have been discussing machines which, as far as the general public is concerned, seem either to share the characteristic detachment from
immediate human concerns of theoretical science, or to be definitely beneficent aids to the maimed. We now come to another class of machines which possess some very sinister possibilities. Curiously enough, this class contains the automatic chess-playing machine.

Some time ago I suggested a way in which one might use the modern computing machine to play at least a passable game of chess. In this work, I am following up a line of thought which has a considerable history behind it. Poe discussed a fraudulent chess-playing machine due to Maelzel, and exposed it, showing that it was worked by a legless cripple inside. However, the machine I have in mind is a genuine one, and takes advantage of recent progress in computing machines. It is easy to make a machine that will play merely legal chess of a very poor brand; it is hopeless to try to make a machine to play perfect chess, for such a machine would require too many combinations. Professor John von Neumann of the Institute for Advanced Study at Princeton has commented on this difficulty. However, it is neither easy nor hopeless to make a machine which we can guarantee to do the best that can be done for a limited number of moves ahead, say two, and which will then leave itself in the position that is the most favorable in accordance with some more or less easy method of evaluation.

The present ultra-rapid computing machines may be set up to act as chess-playing machines, though a better machine might be made at an exorbitant price if we chose to put the work into it. The speed of these modern computing machines is enough so that they can evaluate every possibility for two moves ahead in the legal playing-time of a single move. The number of combinations increases roughly in geometrical progression. Thus the difference between playing out all possibilities for two moves and for three moves is enormous. To play out a game, something like fifty moves, is hopeless in any reasonable time. Yet for beings living
long enough, as von Neumann has shown, it would be possible; and a game played perfectly on each side would lead, as a foregone conclusion, either always to a win for White, or always to a win for Black, or most probably always to a draw.

Mr. Claude Shannon of the Bell Telephone Laboratories has suggested a machine along the same lines as the two-move machine I had contemplated, but considerably improved. To begin with, his evaluation of the final position after two moves would make allowances for the control of the board, for the mutual protection of the pieces, and so on, as well as for the number of pieces, check, and checkmate. Then too, if at the end of two moves the game should be unstable, through the existence of check, or of an important piece in a position to be taken, or of a fork, the mechanical player would automatically play a move or two further ahead until stability should be reached. How much this would slow the game, lengthening each move beyond the legal limit, I do not know; and I am not convinced that we can go very far in this direction without getting into time trouble at our present speeds. I am willing to accept Shannon's conjecture that such a machine would play chess of a high amateur level, and even possibly of a master level. Its game would be stiff and rather uninteresting, but much safer than that of any human player. As Shannon points out, it is possible to put enough chance into its operation to prevent its constant defeat in a purely systematic way by a given rigid sequence of plays. This chance or uncertainty may be built into the evaluation of terminal positions after two moves.
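The two-move machine is, in present-day vocabulary, a fixed-depth negamax search over an evaluation function. A minimal sketch using the python-chess library as a modern stand-in, scoring by bare material count, the crudest of Shannon's evaluation terms (board control, piece protection, and checkmate scoring are omitted):

```python
import chess  # the python-chess package, a modern convenience

VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
          chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def evaluate(board):
    """Material balance from the side-to-move's point of view."""
    return sum(VALUES[p.piece_type] * (1 if p.color == board.turn else -1)
               for p in board.piece_map().values())

def negamax(board, depth):
    if depth == 0 or board.is_game_over():
        return evaluate(board)
    best = -float("inf")
    for move in list(board.legal_moves):
        board.push(move)
        best = max(best, -negamax(board, depth - 1))
        board.pop()
    return best

def machine_move(board, depth=2):   # "the best ... moves ahead, say two"
    best_move, best_score = None, -float("inf")
    for move in list(board.legal_moves):
        board.push(move)
        score = -negamax(board, depth - 1)
        board.pop()
        if score > best_score:
            best_move, best_score = move, score
    return best_move

print(machine_move(chess.Board()))  # a materially neutral opening move
```

Shannon's refinement of searching on past unstable positions corresponds to extending the depth whenever a capture or check is pending, what is now called a quiescence search.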
The machine would play gambits and possibly end games like a human player, from the store of standard gambits and end games. A better machine would store on a tape every game it had ever played, and would supplement the processes which we have already indicated by a search through all past games to find something apropos: in short, by the power of learning. Though we have seen that machines can be built to learn, the technique of building and employing these machines is still very imperfect. The time is not yet ripe for the design of a chess-playing machine on learning principles, although it probably does not lie very far in the future. A chess-playing machine which learns might show a great range of performance, dependent on the quality of the players against whom it had been pitted. The best way to make a master machine would probably be to pit it against a wide variety of good chess players. On the other hand, a well-contrived machine might be more or less ruined by the injudicious choice of its opponents. A horse is also ruined if the wrong riders are allowed to spoil it.

In the learning machine, it is well to distinguish what the machine can learn and what it cannot. A machine may be built either with a statistical preference for a certain sort of behavior, which nevertheless admits the possibility of other behavior; or else certain features of its behavior may be rigidly and unalterably determined. We shall call the first sort of determination preference, and the second sort constraint. For example, if the rules of legal chess are not built into a chess-playing machine as constraints, and if the machine is given the power to learn, it may change without notice from a chess-playing machine into a machine doing a totally different task. On the other hand, a chess-playing machine with the rules built in as constraints may still be a learning machine as to tactics and policies.

The reader may wonder why we are interested in chess-playing machines at all. Are they not merely another harmless little vanity by which experts in design seek to show off their proficiency to a world which they hope will gasp and wonder at their accomplishments? As an honest man, I cannot deny that a certain element
of ostentatious narcissism is present in me, at least. However, as you will soon see, it is not the only element active here, nor is it the one which is of the greatest importance to the non-professional reader.

Mr. Shannon has presented some reasons why his researches may be of more importance than the mere design of a curiosity, interesting only to those who are playing a game. Among these possibilities, he suggests that such a machine may be the first step in the construction of a machine to evaluate military situations and to determine the best move at any specific stage. Let no man think that he is talking lightly. The great book of von Neumann and Morgenstern on the theory of games has made a profound impression on the world, and not least in Washington. When Mr. Shannon speaks of the development of military tactics, he is not talking moonshine, but is discussing a most imminent and dangerous contingency.

In the well-known Paris journal Le Monde for December 28, 1948, a certain Dominican friar, Père Dubarle, has written a very penetrating review of my book Cybernetics. I shall quote a suggestion of his which carries out some of the dire implications of the chess-playing machine grown up and encased in a suit of armor:

One of the most fascinating prospects thus opened is that of the rational conduct of human affairs, and in particular of those which interest communities and seem to present a certain statistical regularity, such as the human phenomena of the development of opinion. Can't one imagine a machine to collect this or that type of information, as for example information on production and the market, and then to determine, as a function of the average psychology of human beings and of the quantities which it is possible to measure in a determined instance, what the most probable development of the situation might be? Can't one even conceive a State apparatus covering all systems of political decisions, either under a regime of many states distributed over the earth,
or under the apparently much more simple regime of a human government of this planet? At present nothing prevents our thinking of this. We may dream of the time when the machine à gouverner may come to supply, whether for good or evil, the present obvious inadequacy of the brain when the latter is concerned with the customary machinery of politics. At all events, human realities do not admit a sharp and certain determination, as numerical data of computation do. They only admit the determination of their probable values. A machine to treat these processes, and the problems which they put, must therefore undertake the sort of probabilistic, rather than deterministic thought, such as is exhibited for example in modern computing machines. This makes its task more complicated, but does not render it impossible. The prediction machine which determines the efficacy of anti-aircraft fire is an example of this. Theoretically, time prediction is not impossible; neither is the determination of the most favorable decision, at least within certain limits. The possibility of playing machines such as the chess-playing machine is considered to establish this. For the human processes which constitute the object of government may be assimilated to games in the sense in which von Neumann has studied them mathematically. Even though these games have an incomplete set of rules, there are other games with a very large number of players, where the data are extremely complex. The machines à gouverner will define the State as the best-informed player at each particular level; and the State is the only supreme co-ordinator of all partial decisions. These are enormous privileges; if they are acquired scientifically, they will permit the State under all circumstances to beat every player of a human game other than itself by offering this dilemma: either immediate ruin, or planned co-operation. This will be the consequence of the game itself without outside violence. The lovers of the best of worlds have something indeed to dream of! Despite all this, and perhaps fortunately, the machine à gouverner is not ready for a very near tomorrow. For outside of the very serious problems which the volume of information to be collected and to be treated rapidly
still put, the problems of the stability of prediction remain beyond what we can seriously dream of controlling. For human processes are assimilable to games with incompletely defined rules, and above all, with the rules themselves functions of the time. The variation of the rules depends both on the effective detail of the situations engendered by the game itself, and on the system of psychological reactions of the players in the face of the results obtained at each instant. It may even be more rapid than these. A very good example of this seems to be given by what happened to the Gallup Poll in the 1948 election. All this not only tends to complicate the degree of the factors which influence prediction, but perhaps to make radically sterile the mechanical manipulation of human situations. As far as one can judge, only two conditions here can guarantee stabilization in the mathematical sense of the term. These are, on the one hand, a sufficient ignorance on the part of the mass of the players exploited by a skilled player, who moreover may plan a method of paralyzing the consciousness of the masses; or on the other, sufficient good-will to allow one, for the sake of the stability of the game, to refer his decisions to one or a few players of the game who have arbitrary privileges. This is a hard lesson of cold mathematics, but it throws a certain light on the adventure of our century: hesitation between an indefinite turbulence of human affairs and the rise of a prodigious Leviathan. In comparison with this, Hobbes' Leviathan was nothing but a pleasant joke. We are running the risk nowadays of a great World State, where deliberate and conscious primitive injustice may be the only possible condition for the statistical happiness of the masses: a world worse than hell for every clear mind. Perhaps it would not be a bad idea for the teams at present creating cybernetics to add to their cadre of technicians, who have come from all horizons of science, some serious anthropologists, and perhaps a philosopher who has some curiosity as to world matters. The machine à gouverner of Père Dubarle is not frightening because of any danger that it may achieve autonomous control over humanity. It is far too crude
The great weakness of the machine - the weakness that saves us so far from being dominated by it - is that it cannot yet take into account the vast range of probability that characterizes the human situation. The dominance of the machine presupposes a society in the last stages of increasing entropy, where probability is negligible and where the statistical differences among individuals are nil. Fortunately we have not yet reached such a state. But even without the state machine of Père Dubarle we are already developing new concepts of war, of economic conflict, and of propaganda on the basis of von Neumann's Theory of Games, which is itself a communicational theory, as the developments of the 1950s have already shown. This theory of games, as I have said in an earlier chapter, contributes to the theory of language, but there are in existence government agencies bent on applying it to military and quasi-military aggressive and defensive purposes. The theory of games is, in its essence, based on an arrangement of players or coalitions of players each of whom is bent on developing a strategy for accomplishing its purposes, assuming that its antagonists, as well as itself, are each engaging in the best policy for victory. This great game is already being carried on mechanistically, and on a colossal scale. While the philosophy behind it is probably not acceptable to our present opponents, the Communists, there are strong
http://docplayer.net/57504-The-human-use-of-hul-ian-beings.html
Antimony
A human-readable,
human-writable,
model definition language
v2.6, July, 2016
Table of contents
- Introduction
- What’s New
- Species and Reactions
- Modules
- Constant and variable symbols
- Compartments
- Events
- Signals
- Assignment Rules
- Rate Rules
- Display Names
- Units
- DNA Strands
- Interactions
- Function Definitions
- Other files
- Importing and Exporting Antimony Models
- Appendix: Converting between SBML and Antimony
Introduction
Since the advent of SBML (the Systems Biology Markup Language), computer models of biological systems have been easy to transfer between different labs and different computer programs without loss of specificity. But SBML was not designed to be readable or writable by humans, only by computer programs, so other programs have sprung up to allow users to more easily create the models they need.
Many of these programs are GUI-based, and allow drag-and-drop editing of species and reactions, such as JDesigner and TinkerCell. A few, like Jarnac, take a text-based approach, and allow the creation of models in a text editor. This has the advantage of being faster, more readily cross-platform, and readable by others without translation. Antimony (so named because the chemical symbol of the element is ‘Sb’) was designed as a successor to Jarnac’s model definition language, with some new features that mesh with newer elements of SBML, some new features we feel will be generally applicable, and some new features that we hope will facilitate the creation of genetic networks in particular. A programming library ‘libAntimony’ was developed in tandem with the language to allow computer translation of Antimony-formatted models into SBML and other formats used by other computer programs.
The basic features of Antimony include the ability to:
- Simply define species, reactions, compartments, events, and other elements of a biological model.
- Package and re-use models as modules with defined or implied interfaces, and
- Create ‘DNA strands’ whose elements can pass reaction rates to downstream elements, and inherit and modify reaction rates from upstream elements.
What’s New
In the 2.5 release of Antimony, translation of Antimony concepts to and from the Hierarchical Model Composition package was developed further to be much more robust, and a new test system was added to ensure that Antimony’s ‘flattening’ routine (which exports plain SBML) matches libSBML’s flattening routine.
In the 2.4 release of Antimony, use of the Hierarchical Model Composition package constructs in the SBML translation became standard, due to the package being fully accepted by the SBML community.
In the 2.2/2.3 release of Antimony, units, conversion factors, and deletions were added.
In the 2.1 version of Antimony, the ‘import‘ handling became much more robust, and it became additionally possible to export hierarchical models using the Hierarchical Model Composition package constructs for SBML level 3.
In the 2.0 version of Antimony, it became possible to export models as CellML. This requires the use of the CellML API, which is now available as an SDK. Hierarchical models are exported using CellML’s hierarchy, translated to accommodate their ‘black box’ requirements.
Species and Reactions
The simplest Antimony file may simply have a list of reactions containing species, along with some initializations. Reactions are written as two lists of species, separated by a ‘->‘, and followed by a semicolon:
S1 + E -> ES;
Optionally, you may provide a reaction rate for the reaction by including a mathematical expression after the semicolon, followed by another semicolon:
S1 + E -> ES; k1*k2*S1*E - k2*ES;
You may also give the reaction a name by prepending the name followed by a colon:
J0: S1 + E -> ES; k1*k2*S1*E - k2*ES;
The same effect can be achieved by setting the reaction rate separately, by assigning the reaction rate to the reaction name with an ‘=‘:
J0: S1 + E -> ES; J0 = k1*k2*S1*E - k2*ES;
You may even define them in the opposite order; they are all ways of saying the same thing.
If you want, you can define a reaction to be irreversible by using ‘=>‘ instead of ‘->‘:
J0: S1 + E => ES;
However, if you additionally provide a reaction rate, that rate is not checked to ensure that it is compatible with an irreversible reaction.
At this point, Antimony will make several assumptions about your model. It will assume (and require) that all symbols that appear in the reaction itself are species. Any symbol that appears elsewhere that is not used or defined as a species is ‘undefined‘; ‘undefined‘ symbols may later be declared or used as species or as ‘formulas‘, Antimony’s term for constants and packaged equations like SBML’s assignment rules. In the above example, k1 and k2 are (thus far) undefined symbols, which may be assigned straightforwardly:
J0: S1 + E -> ES; k1*k2*S1*E - k2*ES; k1 = 3; k2 = 1.4;
More complicated expressions are also allowed, as are the creation of symbols which exist only to simplify or clarify other expressions:
pH = 7; k3 = -log10(pH);
The initial concentrations of species are defined in exactly the same way as formulas, and may be just as complex (or simple):
S1 = 2; E = 3; ES = S1 + E;
Order for any of the above (and in general in Antimony) does not matter at all: you may use a symbol before defining it, or define it before using it. As long as you do not use the same symbol in an incompatible context (such as using the same name as a reaction and a species), your resulting model will still be valid. Antimony files written by libAntimony will adhere to a standard format of defining symbols, but this is not required.
Modules
Antimony input files may define several different models, and may use previously-defined models as parts of newly-defined models. Each different model is known as a ‘module‘, and is minimally defined by putting the keyword ‘model‘ (or ‘module‘, if you like) and the name you want to give the module at the beginning of the model definitions you wish to encapsulate, and putting the keyword ‘end‘ at the end:
model example S + E -> ES; end
After this module is defined, it can be used as a part of another model (this is the one time that order matters in Antimony). To import a module into another module, simply use the name of the module, followed by parentheses:
model example S + E -> ES; end model example2 example(); end
This is usually not very helpful in and of itself; you’ll likely want to give the submodule a name so you can refer to the things inside it. To do this, prepend a name followed by a colon:
model example2 A: example(); end
Now, you can modify or define elements in the submodule by referring to symbols in the submodule by name, prepended with the name you’ve given the module, followed by a ‘.‘:
model example2 A: example(); A.S = 3; end
This results in a model with a single reaction (A.S + A.E -> A.ES) and a single initial condition (A.S = 3).
You may also import multiple copies of modules, and modules that themselves contain submodules:
model example3 A: example(); B: example(); C: example2(); end
This would result in a model with three reactions and a single initial condition.
A.S + A.E -> A.ES B.S + B.E -> B.ES C.A.S + C.A.E -> C.A.ES C.A.S = 3;
You can also use the species defined in submodules in new reactions:
model example4 A: example(); A.S -> ; kdeg*A.S; end
When combining multiple submodules, you can also ‘attach’ them to each other by declaring that a species in one submodule is the same species as is found in a different submodule by using the ‘is‘ keyword (“A.S is B.S”). For example, let’s say that we have a species which is known to bind reversibly to two different species. You could set this up as the following:
model side_reaction J0: S + E -> SE; k1*k2*S*E - k2*ES; S = 5; E = 3; SE = E+S; k1 = 1.2; k2 = 0.4; end model full_reaction A: side_reaction(); B: side_reaction(); A.S is B.S; end
If you wanted, you could give the identical species a new name to more easily use it in the ‘full_reaction‘ module:
model full_reaction var species S; A: side_reaction(); B: side_reaction(); A.S is S; B.S is S; end
In this system, ‘S‘ is involved in two reversible reactions with exactly the same reaction kinetics and initial concentrations. Let’s now say the reaction rate of the second side-reaction takes the same form, but that the kinetics are twice as fast, and the starting conditions are different:
model full_reaction var species S; A: side_reaction(); A.S is S; B: side_reaction(); B.S is S; B.k1 = 2.4; B.k2 = 0.8; B.E = 10; end
Note that since we defined the initial concentration of ‘SE‘ as ‘S + E‘, B.SE will now have a different initial concentration, since B.E has been changed.
Finally, we add a third side reaction, one in which S binds irreversibly, and where the complex it forms degrades. We’ll need a new reaction rate, and a whole new reaction as well:
model full_reaction var species S; A: side_reaction(); A.S is S; B: side_reaction(); B.S is S; B.k1 = 2.4; B.k2 = 0.8; B.E = 10; C: side_reaction(); C.S is S; C.J0 = C.k1*C.k2*S*C.E; J3: C.SE -> ; C.SE*k3; k3 = 0.02; end
Note that defining the reaction rate of C.J0 used the symbol ‘S‘; exactly the same result would be obtained if we had used ‘C.S‘ or even ‘A.S‘ or ‘B.S‘. Antimony knows that those symbols all refer to the same species, and will give them all the same name in subsequent output.
For convenience and style, modules may define an interface where some symbols in the module are more easily renamed. To do this, first enclose a list of the symbols to export in parentheses after the name of the model when defining it:
model side_reaction(S, k1) J0: S + E -> SE; k1*k2*S*E - k2*ES; S = 5; E = 3; SE = E+S; k1 = 1.2; k2 = 0.4; end
Then when you use that module as a submodule, you can provide a list of new symbols in parentheses:
A: side_reaction(spec2, k2);
is equivalent to writing:
A: side_reaction(); A.S is spec2; A.k1 is k2;
One thing to be aware of when using this method: Since wrapping definitions in a defined model is optional, all ‘bare’ declarations are defined to be in a default module with the name ‘__main‘. If there are no unwrapped definitions, ‘__main‘ will still exist, but will be empty.
As a final note: use of the ‘is‘ keyword is not restricted to elements inside submodules. As a result, if you wish to change the name of an element (if, for example, you want the reactions to look simpler in Antimony, but wish to have a more descriptive name in the exported SBML), you may use ‘is‘ as well:
A -> B; A is ABA; B is ABA8OH;
is equivalent to writing:
ABA -> ABA8OH;
Module conversion factors
Occasionally, the unit system of a submodel will not match the unit system of the containing model, for one or more model elements. In this case, you can use conversion factor constructs to bring the submodule in line with the containing model.
If time is different in the submodel (affecting reactions, rate rules, delay, and ‘time‘), use the ‘timeconv‘ keyword when declaring the submodel:
A1: submodel(), timeconv=60;
This construct means that one unit of time in the submodel multiplied by the time conversion factor should equal one unit of time in the parent model.
Reaction extent may also be different in the submodel when compared to the parent model, and may be converted with the ‘extentconv‘ keyword:
A1: submodel(), extentconv=1000;
This construct means that one unit of reaction extent in the submodel multiplied by the extent conversion factor should equal one unit of reaction extent in the parent model.
Both time and extent conversion factors may be numbers (as above) or they may be references to constant parameters. They may also both be used at once:
A1: submodel(), timeconv=tconv, extentconv=xconv;
Individual components of submodels may also be given conversion factors, when the ‘is‘ keyword is used. The following two constructs are equivalent ways of applying conversion factor ‘cf‘ to the synchronized variables ‘x‘ and ‘A1.y‘:
A1.y * cf is x; A1.y is x / cf;
When flattened, all of these conversion factors will be incorporated into the mathematics.
Submodel deletions
Sometimes, an element of a submodel has to be removed entirely for the model to make sense as a whole. A degradation reaction might need to be removed, for example, or a now-superfluous species. To delete an element of a submodel, use the ‘delete‘ keyword:
delete A1.S1;
In this case, ‘S1‘ will be removed from submodel A1, as will any reactions S1 participated in, plus any mathematical formulas that had ‘S1‘ in them.
Similarly, sometimes it is necessary to clear assignments and rules to a variable. To accomplish this, simply declare a new assignment or rule for the variable, but leave it blank:
A1.S1 = ; A1.S2 := ; A1.S3' = ;
This will remove the appropriate initial assignment, assignment rule, or rate rule (respectively) from the submodel.
Constant and variable symbols
Some models have ‘boundary species’ in their reactions, or species whose concentrations do not change as a result of participating in a reaction. To declare that a species is a boundary species, use the ‘const‘ keyword:
const S1;
While you’re declaring it, you may want to be more specific by using the ‘species‘ keyword:
const species S1;
If a symbol appears as a participant in a reaction, Antimony will recognize that it is a species automatically, so the use of the keyword ‘species‘ is not required. If, however, you have a species which never appears in a reaction, you will need to use the ‘species‘ keyword.
If you have several species that are all constant, you may declare this all in one line:
const species S1, S2, S3;
While species are variable by default, you may also declare them so explicitly with the ‘var‘ keyword:
var species S4, S5, S6;
Alternatively, you may declare a species to be a boundary species by prepending a ‘$‘ in front of it:
S1 + $E -> ES;
This would set the level of ‘E‘ to be constant. You can use this symbol in declaration lists as well:
species S1, $S2, $S3, S4, S5, $S6;
This declares six species, three of which are variable (by default) and three of which are constant.
Likewise, formulas are constant by default. They may be initialized with an equals sign, with either a simple or a complex formula:
k1 = 5; k2 = 2*S1;
You may also explicitly declare whether they are constant or variable:
const k1; var k2;
and be more specific and declare that both are formulas:
const formula k1; var formula k2;
Variables defined with an equals sign are assigned those values at the start of the simulation. In SBML terms, they use the ‘Initial Assignment’ values. If the formula is to vary during the course of the simulation, use the Assignment Rule (or Rate Rule) syntax, described later.
You can also mix-and-match your declarations however best suits what you want to convey:
species S1, S2, S3, S4; formula k1, k2, k3, k4; const S1, S4, k1, k3; var S2, S3, k2, k4;
Antimony is a pure model definition language, meaning that all statements in the language serve to build a static model of a dynamic biological system. Unlike Jarnac, sequential programming techniques such as re-using a variable for a new purpose will not work:
pH = 7; k1 = -log10(pH); pH = 8.2; k2 = -log10(pH);
In a sequential programming language, the above would result in different values being stored in k1 and k2. (This is how Jarnac works, for those familiar with that language/simulation environment.) In a pure model definition language like Antimony, ‘pH‘, ‘k1‘, ‘k2‘, and even the formula ‘-log10(pH)‘ are static symbols that are being defined by Antimony statements, and not processed in any way. A simulator that requests the mathematical expression for k1 will receive the string ‘-log10(pH)‘; the same string it will receive for k2. A request for the mathematical expression for pH will receive the string “8.2”, since that’s the last definition found in the file. As such, k1 and k2 will end up being identical.
As a side note, we considered having libAntimony store a warning when presented with an input file such as the example above with a later definition overwriting an earlier definition. However, there was no way with our current interface to let the user know that a warning had been saved, and it seemed like there could be a number of cases where the user might legitimately want to override an earlier definition (such as when using submodules, as we’ll get to in a bit). So for now, the above is valid Antimony input that just so happens to produce exactly the same output as:
pH = 8.2; k1 = -log10(pH); k2 = -log10(pH);
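As a quick sanity check of this behavior (a minimal sketch, assuming the tellurium package used in the Signals examples below is installed), loading the overriding version shows that k1 and k2 indeed come out identical:
import tellurium as te  # assumed available, as in the Signals examples below

r = te.loada("""
pH = 7;   k1 = -log10(pH);
pH = 8.2; k2 = -log10(pH);
""")
# pH keeps its last definition, and both parameters share the same
# expression, so k1 and k2 are expected to print the same value here.
print(r['pH'], r['k1'], r['k2'])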
Compartments
A compartment is a demarcated region of space that contains species and has a particular volume. In Antimony, you may ignore compartments altogether, and all species are assumed to be members of a default compartment with the imaginative name ‘default_compartment‘ with a constant volume of 1. You may define other compartments by using the ‘compartment‘ keyword:
compartment comp1;
Compartments may also be variable or constant, and defined as such with ‘var‘ and ‘const‘:
const compartment comp1; var compartment comp2;
The volume of a compartment may be set with an ‘=‘ in the same manner as species and reaction rates:
comp1 = 5; comp2 = 3*comp1;
To declare that something is in a compartment, the ‘in‘ keyword is used, either during declaration:
compartment comp1 in comp2; const species S1 in comp2; S2 in comp2;
or during assignment for reactions:
J0 in comp1: x -> y; k1*x; y -> z; k2*y in comp2;
or submodules:
M0 in comp2: submod(); submod2(y) in comp3;
or other variables:
S1 in comp2 = 5;
Here are Antimony’s rules for determining which compartment something is in:
- If the symbol has been declared to be in a compartment, it is in that compartment.
- If not, if the symbol is in a DNA strand (see the next section) which has been declared to be in a compartment, it is in that compartment. If the symbol is in multiple DNA strands with conflicting compartments, it is in the compartment of the last declared DNA strand that has a declared compartment in the model.
- If not, if the symbol is a member of a reaction with a declared compartment, it is in that compartment. If the symbol is a member of multiple reactions with conflicting compartments, it is in the compartment of the last declared reaction that has a declared compartment.
- If not, if the symbol is a member of a submodule with a declared compartment, it is in that compartment. If the symbol is a member of multiple submodules with conflicting compartments, it is in the compartment of the last declared submodule that has a declared compartment.
- If not, the symbol is in the compartment ‘default_compartment‘, and is treated as having no declared compartment for the purposes of determining the compartments of other symbols.
Note that declaring that one compartment is ‘in‘ a second compartment does not change the compartment of the symbols in the first compartment:
compartment c1, c2; species s1 in c1, s2 in c1; c1 in c2;
yields:
symbol compartment
s1 c1
s2 c1
c1 c2
c2 default_compartment
Compartments may not be circular: “c1 in c2; c2 in c3; c3 in c1” is illegal.
Events
Events are discontinuities in model simulations that change the definitions of one or more symbols at the moment when certain conditions apply. The condition is expressed as a boolean formula, and the definition changes are expressed as assignments, using the keyword ‘at‘ and the following syntax:
at (boolean condition): variable1=formula1, variable2=formula2 [etc];
such as:
at (x>5): y=3, x=r+2;
You may also give the event a name by prepending a name followed by a colon:
E1: at(x>=5): y=3, x=r+2;
(You may also claim an event is ‘in‘ a compartment just like everything else (‘E1 in comp1:‘). This declaration will never change the compartment of anything else.)
In addition, there are a number of concepts in SBML events that can now be encoded in Antimony. If event assignments are to occur after a delay, this can be encoded by using the ‘after‘ keyword:
E1: at 2 after (x>5): y=3, x=r+2;
This means to wait two time units after x transitions from less than five to more than five, then change y to 3 and x to r+2. The delay may also itself be a formula:
E1: at 2*z/y after (x>5): y=3, x=r+2;
For delayed events (and to a certain extent with simultaneous events, discussed below), one needs to know what values to use when performing event assignments: the values from the time the event was triggered, or the values from the time the event assignments are being executed? By default (in Antimony, as in SBML Level 2) the first holds true: event assignments are to use values from the moment the event is triggered. To change this, the keyword ‘fromTrigger‘ is used:
E1: at 2*z/y after (x>5), fromTrigger=false: y=3, x=r+2;
You may also declare ‘fromTrigger=true‘ to explicitly declare what is the default.
New complications can arise when event assignments from multiple events are to execute at the same time: which event assignments are to be executed first? By default, there is no defined answer to this question: as long as both sets of assignments are executed, either may be executed first. However, if the model depends on a particular order of execution, events may be given priorities, using the priority keyword:
E1: at ((x>5) && (z>4)), priority=1: y=3, x=r+2; E2: at ((x>5) && (q>7)), priority=0: y=5, x=r+6;
In situations where z>4, q>7, and x>5, and then x increases, both E1 and E2 will trigger at the same time. Since both modify the same values, it makes a difference in which order they are executed; in this case, whichever happens last takes precedence. By giving the events priorities (higher priorities execute first) the result of this situation is deterministic: E2 will execute last, and y will equal 5 and not 3.
Another question is whether, if at the beginning of the simulation the trigger condition is ‘true‘, it should be considered to have just transitioned to being true or not. The default is no, meaning that no event may trigger at time 0. You may override this default by using the ‘t0‘ keyword:
E1: at (x>5), t0=false: y=3, x=r+2;
In this situation, the value at t0 is considered to be false, meaning it can immediately transition to true if x is greater than 5, triggering the event. You may explicitly state the default by using ‘t0 = true‘.
Finally, a different class of events is often modeled in some situations where the trigger condition must persist in being true for the entire time between when the event is triggered and when it is executed. By default, this is not the case for Antimony events, and, once triggered, all events will execute. To change the class of your event, use the keyword ‘persistent‘:
E1: at 3 after (x>5), persistent=true: y=3, x=r+2;
For this model, x must be greater than 5 for three seconds before executing its event assignments: if x dips below 5 during that time, the event will not fire. To explicitly declare the default situation, use ‘persistent=false‘.
The ability to change the default priority, t0, and persistent characteristics of events was introduced in SBML Level 3, so if you translate your model to SBML Level 2, it will lose the ability to define functionality other than the default. For more details about the interpretation of these event classifications, see the SBML Level 3 specification.
Assignment Rules
In some models, species and/or variables change in a manner not described by a reaction. When a variable receives a new value at every point in the model, this can be expressed in an assignment rule, which in Antimony is formulated with a ‘:=‘ as:
Ptot := P1 + P2 + PE;
In this example, ‘Ptot‘ will continually be updated to reflect the total amount of ‘P‘ present in the model.
Each symbol (species or formula) may have only one assignment rule associated with it. If an Antimony file defines more than one rule, only the last will be saved.
When species are used as the target of an assignment rule, they are defined to be ‘boundary species’ and thus ‘const‘. Antimony doesn’t have a separate syntax for boundary species whose concentrations never change vs. boundary species whose concentrations change due to assignment rules (or rate rules, below). SBML distinguishes between boundary species that may change and boundary species that may not, but in Antimony, all boundary species may change as the result of being in an Assignment Rule or Rate Rule.
Signals
Signals can be generated by combining assignment rules with events.
Step Input
The simplest signal is a step input. The following code implements a step that occurs at time = 20 with a magnitude of f. An event is used to set a trigger variable alpha, which is used to initiate the step input in an assignment expression.
import tellurium as te import roadrunner r = te.loada(""" $Xo -> S1; k1*Xo; S1 -> $X1; k2*S1; k1 = 0.2; k2 = 0.45; alpha = 0; f = 2 Xo := alpha*f at time > 20: alpha = 1 """) m = r.simulate (0, 100, 300, ['time', 'Xo', 'S1']) r.plot()
Ramp
The following code starts a ramp at 20 time units by setting the p1 variable to one. This variable is used to activate a ramp function.
import tellurium as te import roadrunner r = te.loada(""" $Xo -> S1; k1*Xo; S1 -> $X1; k2*S1; k1 = 0.2; k2 = 0.45; p1 = 0; Xo := p1*(time - 20) at time > 20: p1 = 1 """) m = r.simulate (0, 100, 200, ['time', 'Xo', 'S1']) r.plot()
Ramp then Stop
The following code starts a ramp at 20 time units by setting the p1 variable to one, and then stops the ramp 20 time units later. At that point a new term is switched on which subtracts the ramp slope, resulting in a horizontal line.
import tellurium as te import roadrunner r = te.loada(""" $Xo -> S1; k1*Xo; S1 -> $X1; k2*S1; k1 = 0.2; k2 = 0.45; p1 = 0; p2 = 0 Xo := p1*(time - 20) - p2*(time - 40) at time > 20: p1 = 1 at time > 40: p2 = 1 """) m = r.simulate (0, 100, 200, ['time', 'Xo', 'S1']) r.plot()
Pulse
The following code starts a pulse at 20 time units by setting the p1 variable to one, and then stops the pulse 20 time units later by setting p2 equal to zero.
import tellurium as te import roadrunner r = te.loada(""" $Xo -> S1; k1*Xo; S1 -> $X1; k2*S1; k1 = 0.2; k2 = 0.45; p1 = 0; p2 = 1 Xo := p1*p2 at time > 20: p1 = 1 at time > 40: p2 = 0 """) m = r.simulate (0, 100, 200, ['time', 'Xo', 'S1']) r.plot()
Sinusoidal Input
The following code starts a sinusoidal input at 20 time units by setting the p1 variable to one.
import tellurium as te import roadrunner r = te.loada(""" $Xo -> S1; k1*Xo; S1 -> $X1; k2*S1; k1 = 0.2; k2 = 0.45; p1 = 0; Xo := p1*(sin (time) + 1) at time > 20: p1 = 1 """) m = r.simulate (0, 100, 200, ['time', 'Xo', 'S1']) r.plot()
Rate Rules
Rate rules define the change in a symbol’s value over time instead of defining its new value. In this sense, they are similar to reaction rate kinetics, but without an explicit stoichiometry of change. These may be modeled in Antimony by appending an apostrophe to the name of the symbol, and using an equals sign to define the rate:
S1' = V1*(1 - S1)/(K1 + (1 - S1)) - V2*S1/(K2 + S1)
Note that unlike initializations and assignment rules, formulas in rate rules may be self-referential, either directly or indirectly.
Any symbol may have only one rate rule or assignment rule associated with it. Should Antimony find more than one, only the last will be saved.
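For instance, the rate rule above can be simulated directly (a minimal sketch, assuming tellurium as in the Signals examples; the parameter values are invented for the example):
import tellurium as te  # assumed available, as in the Signals examples above

r = te.loada("""
V1 = 1;   V2 = 0.5;  // hypothetical parameter values
K1 = 0.3; K2 = 0.3;
S1 = 0.1;
S1' = V1*(1 - S1)/(K1 + (1 - S1)) - V2*S1/(K2 + S1)
""")
m = r.simulate(0, 50, 100, ['time', 'S1'])
r.plot()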
Display Names
When some tools visualize models, they make a distinction between the ‘id‘ of an element, which must be unique to the model and which must conform to certain naming conventions, and the ‘name’ of an element, which does not have to be unique and which has much less stringent naming requirements. In Antimony, it is the id of elements which is used everywhere. However, you may also set the ‘display name’ of an element by using the ‘is‘ keyword and putting the name in quotes:
A.k1 is "reaction rate k1"; S34 is "Ethyl Alcohol";
Comments
Comments may be written on a single line following ‘//’, or in blocks between ‘/*’ and ‘*/’:
/* The following initializations were taken from the literature */ X=3; //Taken from Galdziki, et al. Y=4; //Taken from Rutherford, et al.
Comments are not translated to SBML or CellML, and will be lost if round-tripped through those languages.
Units
As of version 2.4 of Antimony, units may now be created and translated to SBML (but not CellML, yet). Units may be created with the ‘unit’ keyword, and attached to a value by writing the unit expression after the number, e.g. ‘x = 40 mole / (liter * second)’.
Antimony does not calculate any derived units: in the above example, ‘x’ is fully defined in terms of moles per liter per second, but it is not annotated as such.
As with many things in Antimony, you may use a unit before defining it: ‘x = 10 ml‘ will create a parameter x and a unit ‘ml‘.
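A minimal sketch of defining and using a unit (this assumes the ‘unit’ keyword syntax described above; the names are invented for illustration):
import tellurium as te  # assumed available

r = te.loada("""
unit ml = 1e-3 litre;  // define 'ml' as one thousandth of a litre
x = 10 ml;             // 'x' is a parameter carrying the unit 'ml'
""")
print(r.getCurrentSBML())  # the exported SBML should include the 'ml' unit definition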
DNA Strands
A new concept in Antimony that has not been modeled explicitly in previous model definition languages such as SBML is the idea of having DNA strands where downstream elements can inherit reaction rates from upstream elements. DNA strands are declared by connecting symbols with ‘--‘:
--P1--G1--stop--P2--G2--
You can also give the strand a name:
dna1: --P1--G1--
By default, the reaction rate or formula associated with an element of a DNA strand is equal to the reaction rate or formula of the element upstream of it in the strand. Thus, if P1 is a promoter and G1 is a gene, in the model:
dna1: --P1--G1-- P1 = S1*k; G1: -> prot1;
the reaction rate of G1 will be “S1*k”.
It is also possible to modulate the inherited reaction rate. To do this, we use ellipses (‘…’) as shorthand for ‘the formula for the element upstream of me’. Let’s add a ribosome binding site that increases the rate of production of protein by a factor of three, and say that the promoter actually increases the rate of protein production by S1*k instead of setting it to S1*k:
dna1: --P1--RBS1--G1-- P1 = S1*k + ...; RBS1 = ...*3; G1: -> prot1;
Since in this model, nothing is upstream of P1, the upstream rate is set to zero, so the final reaction rate of G1 is equal to “(S1*k + 0)*3”.
Valid elements of DNA strands include formulas (operators), reactions (genes), and other DNA strands. Let’s wrap our model so far in a submodule, and then use the strand in a new strand:
model strand1() dna1: --P1--RBS1--G1-- P1 = S1*k + ...; RBS1 = ...*3; G1: -> prot1; end model fullstrand() A: strand1(); fulldna: P2--A.dna1 P2 = S2*k2; end
In the model ‘fullstrand‘, the reaction that produces A.prot1 is equal to “((A.S1*A.k+(S2*k2))*3)”.
Operators and genes may be duplicated and appear in multiple strands:
dna1: --P1--RBS1--G1-- dna2: P2--dna1 dna3: P2--RBS2--G1
Strands, however, count as unique constructs, and may only appear as singletons or within a single other strand (and may not, of course, exist in a loop, being contained in a strand that it itself contains).
If the reaction rate or formula for any duplicated symbol is left at the default or if it contains ellipses explicitly (‘…’), it will be equal to the sum of all reaction rates in all the strands in which it appears. If we further define our above model:
dna1: --P1--RBS1--G1-- dna2: P2--dna1 dna3: P2--RBS2--G1 P1 = ...+0.3; P2 = ...+1.2; RBS1 = ...*0.8; RBS2 = ...*1.1; G1: -> prot1;
The reaction rate for the production of ‘prot1‘ will be equal to “(((0+1.2)+0.3)*0.8) + (((0+1.2)*1.1))”.
If you set the reaction rate of G1 without using an ellipsis, but include it in multiple strands, its reaction rate will be a multiple of the number of strands it is a part of. For example, if you set the reaction rate of G1 above to “k1*S1”, and include it in two strands, the net reaction rate will be “k1*S1 + k1*S1”.
The purpose of prepending or postfixing a ‘--‘ to a strand is to declare that the strand in question is designed to have DNA attached to it at that end. If exactly one DNA strand is defined with an upstream ‘--‘ in its definition in a submodule, the name of that module may be used as a proxy for that strand when attaching something upstream of it, and vice versa with a defined downstream ‘--‘ in its definition:
model twostrands --P1--RBS1--G1 P2--RBS2--G2-- end model long A: twostrands(); P3--A A--G3 end
The module ‘long‘ will have two strands: “P3--A.P1--A.RBS1--A.G1” and “A.P2--A.RBS2--A.G2--G3”.
Submodule strands intended to be used in the middle of other strands should be defined with ‘--‘ both upstream and downstream of the strand in question:
model oneexported --P1--RBS1--G1-- P2--RBS2--G2 end model full A: oneexported(); P2--A--stop end
If multiple strands are defined with upstream or downstream “--” marks, it is illegal to use the name of the module containing them as proxy.
Interactions
Some species act as activators or repressors of reactions that they do not actively participate in. Typical models do not bother mentioning this explicitly, as it will show up in the reaction rates. However, for visualization purposes and/or for cases where the reaction rates might not be known explicitly, you may declare these interactions using the same format as reactions, using different symbols instead of “->”: for activations, use “-o”; for inhibitions, use “-|”, and for unknown interactions or for interactions which sometimes activate and sometimes inhibit, use “-(“:
J0: S1 + E -> SE; i1: S2 -| J0; i2: S3 -o J0; i3: S4 -( J0;
If a reaction rate is given for the reaction in question, that reaction must include the species listed as interacting with that reaction. This, then, is legal:
J0: S1 + E -> SE; k1*S1*E/S2 i1: S2 -| J0;
because the species S2 is present in the formula “k1*S1*E/S2”. If the concentration of an inhibitory species increases, it should decrease the reaction rate of the reaction it inhibits, and vice versa for activating species. The current version of libAntimony (v2.4) does not check this, but future versions may add the check.
When the reaction rate is not known, species from interactions will be added to the SBML ‘listOfModifiers’ for the reaction in question. Normally, the kinetic law is parsed by libAntimony and any species there are added to the list of modifiers automatically, but if there is no kinetic law to parse, this is how to add species to that list.
Function Definitions
You may define your own functions, and then use them in subsequent expressions. For example, the following defines a quadratic function:
function quadratic(x, a, b, c) a*x^2 + b*x + c end
And then use it in a later equation:
S3 = quadratic(s1, k1, k2, k3);
This would effectively define S3 to have the equation “k1*s1^2 + k2*s1 + k3”.
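A quick way to check such a function definition (a minimal sketch, assuming tellurium; the values are invented):
import tellurium as te  # assumed available

r = te.loada("""
function quadratic(x, a, b, c)
  a*x^2 + b*x + c
end

s1 = 2; k1 = 1; k2 = 3; k3 = 5;
S3 = quadratic(s1, k1, k2, k3);
""")
print(r['S3'])  # expected: 1*2^2 + 3*2 + 5 = 15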
Other files
More than one file may be used to define the models you need; a file may be brought into another with the ‘import’ keyword (for example, ‘import "file2.txt"‘).
Remember that imported files act like they were cut and pasted into the main file. As such, any bare declarations in the main file and in the imported files will all contribute to the default ‘__main‘ module. Most SBML files will not contribute to this module, unless the name of the model in the file is ‘__main‘ (for example, if it was created by the antimony converter).
By default, libantimony will examine the ‘import‘ text to determine whether it is a relative or absolute filename, and, if relative, will prepend the directory of the working file to the import text before attempting to load the file. If it cannot find it there, it is possible to tell the libantimony API to look in different directories for files loaded from import statements.
However, if the working directory contains a ‘.antimony‘ file, or if one of the named directories contains a ‘.antimony‘ file, import statements can be subverted. Each line of this file must contain three tab-delimited strings: the name of the file which contains an import statement, the text of the import statement, and the filename where the program should look for the file. Thus, if a file “file1.txt” contains the line ‘import "file2.txt"‘, and a .antimony file is discovered with the line:
file1.txt file2.txt antimony/import/file2.txt
the library will attempt to load ‘antimony/import/file2.txt‘ instead of looking for ‘file2.txt‘ directly. For creating files in-memory or when reading antimony models from strings, the first string may either be left out, or you may use the keyword “”:
file2.txt antimony/import/file2.txt
The first and third entries may be relative filenames: the directory of the .antimony file itself will be added internally when determining the file’s actual location. The second entry must be exactly as it appears in the first file’s ‘import‘ directive, between the quotation marks.
Importing and Exporting Antimony Models
Once you have created an Antimony file, you can convert it to SBML or CellML using ‘sbtranslate’ or the ‘QTAntimony’ visual editor (both distributed with Antimony). This will convert each of the models defined in the Antimony text file into a separate SBML model, including the overall ‘__main‘ module (if it contains anything). These files can then be used for simulation or visualization in other programs.
QTAntimony can be used to edit and translate Antimony, SBML, and CellML models. Any file in those three formats can be opened, and from the ‘View’ menu, you can turn on or off the SBML and CellML tabs. Select the tabs to translate and view the working model in those different formats.
The SBML tabs can additionally be configured to use the ‘Hierarchical Model Composition’ package constructs. Select ‘Edit/Flatten SBML tab(s)’ or hit control-F to toggle between this version and the old ‘flattened’ version of SBML. (To enable this feature if you compile Antimony yourself, you will need the latest versions of libSBML with the SBML ‘comp’ package enabled, and to select ‘WITH_COMP_SBML’ from the CMake menu.)
As there were now several different file formats available for translation, the old command-line translators still exist (antimony2sbml; sbml2antimony), but have been supplanted by the new ‘sbtranslate’ executable. Instructions for use are available by running sbtranslate from the command line, but in brief: any number of files to translate may be added to the command line, and the desired output format is given with the ‘-o‘ flag: ‘-o antimony‘, ‘-o sbml‘, ‘-o cellml‘, or ‘-o sbml-comp‘ (the last to output files with the SBML ‘comp‘ package constructs).
Examples:
sbtranslate.exe model1.txt model2.txt -o sbml
will create one flattened SBML file for the main model in the two Antimony files in the working directory. Each file will be of the format “[prefix].xml”, where [prefix] is the original filename with ‘.txt‘ removed (if present).
sbtranslate.exe oscli.xml ffn.xml -o antimony
will output two files in the working directory: ‘oscli.txt‘ and ‘ffn.txt‘ (in the antimony format).
sbtranslate.exe model1.txt -o sbml-comp
will output ‘model1.xml‘ in the working directory, containing all models in the ‘model1.txt‘ file, using the SBML ‘comp‘ package.
Appendix: Converting between SBML and Antimony
For reference, here are some of the differences you will see when converting models between SBML and Antimony:
- Local parameters in SBML reactions become global parameters in Antimony, with the reaction name prepended. If a different symbol already has the new name, a number is appended to the variable name so it will be unique. These do not get converted back to local parameters when converting Antimony back to SBML.
- Algebraic rules in SBML disappear in Antimony.
- Any element with both a value (or an initial amount/concentration for species) and an initial assignment in SBML will have only the initial assignment in Antimony.
- Stoichiometry math in SBML disappears in Antimony.
- All ‘constant=true‘ species in SBML are set ‘const‘ in Antimony, even if that same species is set boundary=false.
- All ‘boundary=true‘ species in SBML are set ‘const‘ in Antimony, even if that same species is set constant=false.
- Boundary (‘const’) species in Antimony are set boundary=true and constant=false in SBML.
- Variable (‘var’) species in Antimony are set boundary=false and constant=false in SBML.
- Modules in Antimony are flattened in SBML (unless you use the ‘comp‘ option).
- DNA strands in Antimony disappear in SBML.
- DNA elements in Antimony no longer retain the ellipses syntax in SBML, but the effective reaction rates and assignment rules should be accurate, even for elements appearing in multiple DNA strands. These reaction rates and assignment rules will be the sum of the rate at all duplicate elements within the DNA strands.
- Any symbol with the MathML csymbol ‘time‘ in SBML becomes ‘time‘ in Antimony.
- Any formula with the symbol ‘time‘ in it in Antimony will become the MathML csymbol ‘time‘ in SBML.
- The MathML csymbol ‘delay‘ in SBML disappears in Antimony.
- Any SBML Level 2 Version 1 function with the MathML csymbol ‘time‘ in it will become a local variable with the name ‘time_ref‘ in Antimony. This ‘time_ref‘ is added to the function’s interface (as the last in the list of symbols), and any uses of the function are modified to use ‘time‘ in the call. In other words, a function “function(x, y): x+y*time” becomes “function(x, y, time_ref): x + y*time_ref”, and formulas that use “function(A, B)” become “function(A, B, time)”.
- A variety of Antimony keywords, if found in SBML models as IDs, are renamed to add an appended ‘_‘. So the ID ‘compartment‘ becomes ‘compartment_‘, ‘model‘ becomes ‘model_‘, etc.
http://tellurium.analogmachine.org/documentation/antimony-documentation/
Library: XML
Package: DOM
Header: Poco/DOM/Text.h
Description.
Inheritance
Direct Base Classes: CharacterData
All Base Classes: AbstractNode, CharacterData, DOMObject, EventTarget, Node
Known Derived Classes: CDATASection
Member Summary
Member Functions: copyNode, innerText, nodeName, nodeType, splitText
Inherited Functions: addEventListener, appendChild, appendData, attributes, autoRelease, bubbleEvent, captureEvent, childNodes, cloneNode, copyNode, data, deleteData, dispatchAttrModified, dispatchCharacterDataModified, dispatchEvent, dispatchNodeInserted, dispatchNodeInsertedIntoDocument, dispatchNodeRemoved, dispatchNodeRemovedFromDocument, dispatchSubtreeModified, duplicate, events, eventsSuspended, firstChild, getData, getNodeByPath, getNodeByPathNS, getNodeValue, hasAttributes, hasChildNodes, innerText, insertBefore, insertData, isSupported, lastChild, length, localName, namespaceURI, nextSibling, nodeName, nodeType, nodeValue, normalize, ownerDocument, parentNode, prefix, previousSibling, release, removeChild, removeEventListener, replaceChild, replaceData, setData, setNodeValue, setOwnerDocument
http://pocoproject.org/docs/Poco.XML.Text.html
At the rate data is growing today, it's not surprising that cloud storage is also growing in popularity. The fastest-growing data is archive data, which is ideal for cloud storage given a number of factors, including cost, frequency of access, protection, and availability. But not all cloud storage is the same. One provider may focus primarily on cost, while another focuses on availability or performance. No one architecture has a singular focus, but the degrees to which an architecture implements a given characteristic defines its market and appropriate use models.
It's difficult to talk about architectures without the perspective of utility. By this I mean measuring an architecture along a variety of characteristics, including cost, performance, remote access, and so on. Therefore, I first define a set of criteria by which cloud storage models are measured, and then explore some of the interesting implementations within cloud storage architectures.
First, let's discuss a general cloud storage architecture to set the context for the later exploration of unique architectural features.
General architecture
Cloud storage architectures are primarily about delivery of storage on demand in a highly scalable and multi-tenant way. Generically (see Figure 1), cloud storage architectures consist of a front end that exports an API to access the storage. In traditional storage systems, this API is the SCSI protocol; but in the cloud, these protocols are evolving. There, you can find Web service front ends, file-based front ends, and even more traditional front ends (such as Internet SCSI, or iSCSI). Behind the front end is a layer of middleware that I call the storage logic. This layer implements a variety of features, such as replication and data reduction, over the traditional data-placement algorithms (with consideration for geographic placement). Finally, the back end implements the physical storage for data. This may be an internal protocol that implements specific features or a traditional back end to the physical disks.
Figure 1. Generic cloud storage architecture
From Figure 1, you can see some of the characteristics of current cloud storage architectures. Note that no characteristic is exclusive to a particular layer; they serve as a guide to the specific topics this article addresses. These characteristics are defined in Table 1.
Table 1. Cloud storage characteristics
Manageability
One key focus of cloud storage is cost. If a client can buy and manage storage locally compared to leasing it in the cloud, the cloud storage market disappears. But cost can be divided into two high-level categories: the cost of the physical storage ecosystem itself and the cost of managing it. The management cost is hidden but represents a long-term component of the overall cost. For this reason, cloud storage must be self-managing to a large extent. The ability to introduce new storage where the system automatically self-configures to accommodate it and the ability to find and self-heal in the presence of errors are critical. Concepts such as autonomic computing will have a key role in cloud storage architectures in the future.
Access method
One of the most striking differences between cloud storage and traditional storage is the means by which it's accessed (see Figure 2). Most providers implement multiple access methods, but Web service APIs are common. Many of the APIs are implemented based on REST principles, which imply an object-based scheme developed on top of HTTP (using HTTP as a transport). REST APIs are stateless and therefore simple and efficient to provide. Many cloud storage providers implement REST APIs, including Amazon Simple Storage Service (Amazon S3), Windows Azure™, and Mezeo Cloud Storage Platform.
One problem with Web service APIs is that they require integration with an application to take advantage of the cloud storage. Therefore, common access methods are also used with cloud storage to provide immediate integration. For example, file-based protocols such as NFS/Common Internet File System (CIFS) or FTP are used, as are block-based protocols such as iSCSI. Cloud storage providers such as Six Degrees, Zetta, and Cleversafe provide these access methods.
Although the protocols mentioned above are the most common, other protocols are suitable for cloud storage. One of the most interesting is Web-based Distributed Authoring and Versioning (WebDAV). WebDAV is also based on HTTP and enables the Web as a readable and writable resource. Providers of WebDAV include Zetta and Cleversafe in addition to others.
Figure 2. Cloud storage access methods
You can also find solutions that support multi-protocol access. For example, IBM® Smart Business Storage Cloud enables both file-based (NFS and CIFS) and SAN-based protocols from the same storage-virtualization infrastructure.
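To make the REST style of access concrete, here is a minimal sketch of an object PUT and GET over plain HTTP; the endpoint, bucket, and token are hypothetical placeholders rather than any particular provider's API:
import requests  # generic HTTP client; the service below is hypothetical

ENDPOINT = "https://storage.example.com"        # placeholder storage service
HEADERS = {"Authorization": "Bearer MY_TOKEN"}  # placeholder credentials

# Store an object: in a REST scheme, the URL path is the object's key.
resp = requests.put(f"{ENDPOINT}/mybucket/archive/report.txt",
                    data=b"archived contents", headers=HEADERS)
resp.raise_for_status()

# Retrieve it again; REST calls are stateless, so no session setup is needed.
resp = requests.get(f"{ENDPOINT}/mybucket/archive/report.txt", headers=HEADERS)
print(resp.content)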
Performance
There are many aspects to performance, but the ability to move data between a user and a remote cloud storage provider represents the largest challenge to cloud storage. The problem, which is also the workhorse of the Internet, is TCP. TCP controls the flow of data based on packet acknowledgements from the peer endpoint. Packet loss, or late arrival, enables congestion control, which further limits performance to avoid more global networking issues. TCP is ideal for moving small amounts of data through the global Internet but is less suitable for larger data movement, with increasing round-trip time (RTT).
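The TCP limitation can be made concrete with the standard window/RTT bound: a single stream cannot exceed roughly one receive window per round trip, whatever the link capacity (back-of-envelope arithmetic, assuming a classic 64 KB window without scaling):
# Rough upper bound on single-stream TCP throughput: window / RTT.
window_bytes = 64 * 1024       # assumed classic 64 KB receive window
for rtt_ms in (10, 100, 300):  # LAN, intercontinental, satellite-like RTTs
    throughput = window_bytes / (rtt_ms / 1000)
    print(f"RTT {rtt_ms:4d} ms -> at most {throughput / 1e6:.2f} MB/s")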
Amazon, through Aspera Software, solves this problem by removing TCP from the equation. A new protocol called the Fast and Secure Protocol (FASP™) was developed to accelerate bulk data movement in the face of large RTT and severe packet loss. The key is the use of UDP, the partner transport protocol to TCP. UDP leaves congestion management to the host, pushing this aspect into the application layer of FASP (see Figure 3).
Figure 3. The Fast and Secure Protocol from Aspera Software
Using standard (non-accelerated) NICs, FASP efficiently uses the bandwidth available to the application and removes the fundamental bottlenecks of conventional bulk data-transfer schemes. The Resources section provides some interesting statistics on FASP performance over traditional WAN, intercontinental transfers, and lossy satellite links.
Multi-tenancy
One key characteristic of cloud storage architectures is multi-tenancy: the storage is shared by many users (multiple "tenants"). Multi-tenancy applies to many layers of the cloud storage stack, from the application layer, where the storage namespace is segregated among users, to the storage layer, where physical storage can be segregated for particular users or classes of users. Multi-tenancy even applies to the networking infrastructure that connects users to storage, to permit quality of service and the carving of bandwidth for particular users.
Scalability
You can look at scalability in a number of ways, but it is the on-demand view of cloud storage that makes it most appealing. The ability to scale storage (both up and down) means lower cost for the user and increased complexity for the cloud storage provider.
Scalability must be provided not only for the storage itself (functionality scaling) but also the bandwidth to the storage (load scaling). Another key feature of cloud storage is geographic distribution of data (geographic scalability), allowing the data to be nearest the users over a set of cloud storage data centers (via migration). For read-only data, replication and distribution are also possible (as is done using content delivery networks). This is shown in Figure 4.
Figure 4. Scalability of cloud storage
Internally, a cloud storage infrastructure must be able to scale. Servers and storage must be capable of resizing without impact to users. As discussed in the Manageability section, autonomic computing is a requirement for cloud storage architectures.
Availability
Once a cloud storage provider has a user's data, it must be able to provide that data back to the user upon request. Given network outages, user errors, and other circumstances, this can be difficult to provide in a reliable and deterministic way.
There are some interesting and novel schemes to address availability, such as information dispersal. Cleversafe, a company that provides private cloud storage (discussed later), uses the Information Dispersal Algorithm (IDA) to enable greater availability of data in the face of physical failures and network outages. IDA, which was first created for telecommunication systems by Michael Rabin, is an algorithm that allows data to be sliced with Reed-Solomon codes for purposes of data reconstruction in the face of missing data. Further, IDA allows you to configure the number of data slices, such that a given data object could be carved into four slices with one tolerated failure or 20 slices with eight tolerated failures. Similar to RAID, IDA permits the reconstruction of data from a subset of the original data, with some amount of overhead for error codes (dependent on the number of tolerated failures). This is shown in Figure 5.
Figure 5. Cleversafe's approach to extreme data availability
With the ability to slice data and attach Cauchy Reed-Solomon correction codes, the slices can then be distributed to geographically disparate sites for storage. For a number of slices (p) and a number of tolerated failures (m), the resulting expansion factor is p/(p - m), an overhead of p/(p - m) - 1. So, in the case of Figure 5, with p = 4 and m = 1, each object occupies 4/3 of its original size, an overhead to the storage system of 33%.
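As a quick check of that arithmetic, the following sketch computes the expansion factor and overhead for the two configurations mentioned above; it simply evaluates the p/(p - m) relationship.

public class IdaOverhead {
    public static void main(String[] args) {
        // Each row is { slices p, tolerated failures m }.
        int[][] configs = { { 4, 1 }, { 20, 8 } };
        for (int[] c : configs) {
            int p = c[0], m = c[1];
            double expansion = (double) p / (p - m); // total stored / original
            double overhead = expansion - 1.0;       // extra space consumed
            System.out.printf("p=%d, m=%d: expansion %.2fx, overhead %.0f%%%n",
                    p, m, expansion, overhead * 100);
        }
    }
}

For p = 4 and m = 1 this prints a 1.33x expansion (33% overhead); for p = 20 and m = 8 the expansion is 1.67x (67% overhead), still well below the 100% overhead of a single full replica.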
The downside of IDA is that it is processing intensive without hardware acceleration. Replication is another useful technique and is implemented by a variety of cloud storage providers. Although replication introduces a large amount of overhead (100% for each full copy), it's simple and efficient to provide.
Control
Customers' ability to control and manage how their data is stored, and the costs associated with it, is important. Numerous cloud storage providers implement controls that give users greater control over their costs.
Amazon implements Reduced Redundancy Storage (RRS) to provide users with a means of minimizing overall storage costs. Data is replicated within the Amazon S3 infrastructure, but with RRS the data is replicated fewer times, with a correspondingly higher possibility of data loss. This is ideal for data that can be recreated or that has copies elsewhere.
Efficiency
Storage efficiency is an important characteristic of cloud storage infrastructures, particularly given their focus on overall cost. The next section speaks to cost specifically; this characteristic concerns the efficient use of the available resources rather than their cost.
To make a storage system more efficient, more data must be stored within the same physical footprint. A common solution is data reduction, whereby the source data is reduced to require less physical space. Two means to achieve this are compression (the reduction of data by encoding it in a different representation) and de-duplication (the removal of any identical copies of data that may exist). Although both methods are useful, compression involves processing (re-encoding the data on the way into and out of the infrastructure), whereas de-duplication involves calculating signatures of data to search for duplicates.
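A minimal sketch of the de-duplication idea follows: each incoming block is addressed by a cryptographic digest of its contents, and a block whose digest has already been seen is stored only once. The in-memory block store and the choice of SHA-1 are illustrative assumptions, not a description of any particular product.

import java.security.MessageDigest;
import java.util.HashMap;
import java.util.Map;

public class DedupSketch {
    // Maps a content digest (as a hex string) to the stored block.
    private final Map<String, byte[]> store = new HashMap<String, byte[]>();

    // Returns the digest used as the block's address; identical blocks
    // are stored only once.
    public String write(byte[] block) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-1");
        byte[] hash = md.digest(block);
        StringBuilder hex = new StringBuilder();
        for (byte b : hash) {
            hex.append(String.format("%02x", b));
        }
        String key = hex.toString();
        if (!store.containsKey(key)) {
            store.put(key, block.clone()); // first copy: actually store it
        }
        return key;                        // duplicates cost only the key
    }

    public static void main(String[] args) throws Exception {
        DedupSketch s = new DedupSketch();
        byte[] block = "the same block twice".getBytes();
        String k1 = s.write(block);
        String k2 = s.write(block);         // duplicate write
        System.out.println(k1.equals(k2));  // true: same address
        System.out.println(s.store.size()); // 1: one physical copy
    }
}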
Cost
One of the most notable characteristics of cloud storage is the ability to reduce cost through its use. This includes the cost of purchasing storage, the cost of powering it, the cost of repairing it (when drives fail), as well as the cost of managing the storage. When viewing cloud storage from this perspective (including SLAs and increasing storage efficiency), cloud storage can be beneficial in certain use models.
An interesting peek inside a cloud storage solution is provided by a company called Backblaze (see Resources for details). Backblaze set out to build inexpensive storage for a cloud storage offering. A Backblaze POD (shelf of storage) packs 67TB into a 4U enclosure for under US$8,000 (roughly US$0.12 per gigabyte). This package consists of a 4U enclosure, a motherboard, 4GB of DRAM, four SATA controllers, 45 1.5TB SATA hard disks, and two power supplies. On the motherboard, Backblaze runs Linux® (with JFS as the file system), with HTTPS and Apache Tomcat over GbE NICs as the front end. Backblaze's software includes de-duplication, encryption, and RAID6 for data protection. Backblaze's description of its POD (which shows you in detail how to build your own) demonstrates the extent to which companies can cut the cost of storage, making cloud storage a viable and cost-efficient option.
Cloud storage models
Thus far, I've talked primarily about cloud storage providers, but there are models for cloud storage that allow users to maintain control over their data. Cloud storage has evolved into three categories, one of which merges the other two in a cost-efficient and secure option.
Much of this article has discussed public cloud storage providers, which present storage infrastructure as a leasable commodity (both in terms of long- or short-term storage and the networking bandwidth used within the infrastructure). Private clouds use the concepts of public cloud storage but in a form that can be securely embedded behind a user's firewall. Finally, hybrid cloud storage permits the two models to merge, allowing policies to define which data must be maintained privately and which can be secured within public clouds (see Figure 6).
Figure 6. Cloud storage models
Examples of public cloud storage providers include Amazon (which offers storage as a service). Examples of private cloud storage providers include IBM, Parascale, and Cleversafe (which build software and/or hardware for internal clouds). Finally, hybrid cloud providers include Egnyte, among others.
Going farther
Cloud storage is an interesting evolution in storage models that redefines the ways that we construct, access, and manage storage within an enterprise. Although cloud storage is predominantly a consumer technology today, it is quickly evolving toward enterprise quality. Hybrid models of clouds will enable enterprises to maintain their confidential data within a local data center, while relegating less confidential data to the cloud for cost savings and geographic protection. Check out Resources for links to information on cloud storage providers and unique technologies.
Resources
Learn
- Manageability is one of the most important aspects of a cloud storage infrastructure. For cost efficiency, the cloud storage infrastructure must be self-managing and implement autonomic computing principles. Read more about autonomic computing at IBM Research.
- The REST API is a popular method for accessing cloud storage infrastructures.
- Although not as common as REST, the WebDAV specification is also used as an efficient cloud storage interface. Egnyte Cloud File Server implements WebDAV as an interface to its cloud storage infrastructure.
- The IBM Smart Business Storage Cloud is an interesting perspective on cloud storage for the enterprise. IBM's storage cloud offers high-performance on-demand storage for enterprise settings.
- Access methods are one of the important aspects of cloud storage, as they determine how users will integrate their client-side systems to the cloud storage infrastructure. Examples of providers that implement file-based APIs include Six Degrees and Zetta. Examples of providers that implement iSCSI-based interfaces include Cleversafe and the Cloud Drive.
- Aspera Software created a new protocol to assist in bulk transfer over the Internet given the shortcomings of TCP in this application. You can learn more about the Fast and Secure Protocol in A Faster Way to the Cloud.
- Backblaze decided to build its own inexpensive cloud storage and has made the design and software open to you. Learn more about Backblaze and its innovative storage solution in Petabytes on a budget: How to build cheap cloud storage.
Get products and technologies
- Download IBM product evaluation versions and get your hands on application development tools and middleware products from Information Management and DB2®, Lotus®, Rational®, Tivoli®, and WebSphere®.
Discuss
- Check out developerWorks blogs and get involved in the developerWorks community.
http://www.ibm.com/developerworks/cloud/library/cl-cloudstorage/
QActionEvent Class Reference
The QActionEvent class provides an event that is generated when a QAction is added, removed, or changed. More...
#include <QActionEvent>
Detailed Description
The QActionEvent class provides an event that is generated when a QAction is added, removed, or changed.
Actions can be added to widgets using QWidget::addAction(); the actions associated with a widget can be retrieved with QWidget::actions().
http://qt-project.org/doc/qt-4.8/qactionevent.html
The Microsoft Windows Presentation Foundation (WPF), formerly code-named "Avalon", provides the foundation for building applications and high-fidelity experiences in Longhorn, blending together application UI, documents, and media content while exploiting the full power of your computer. WPF is the graphical subsystem of .NET Framework 3.0, formerly called WinFX. XAML is the markup language used to write WPF applications. WPF gives developers and designers a unified programming model for building rich Windows smart-client user experiences that incorporate UI, media, and documents. It provides a reliable programming model for building applications, with a clear separation between the user interface and the business logic.
Extensible Application Markup Language (XAML) is a declarative XML-based language from Microsoft used to define objects and their properties, relationships, and interactions. The acronym originally stood for Extensible Avalon Markup Language, where Avalon was the code name for Windows Presentation Foundation (WPF); it now stands for Extensible Application Markup Language. XAML files are XML files that generally have the .xaml extension. XAML is used to define UI objects, events, and much else in Windows Presentation Foundation, and in Windows Workflow Foundation the workflow is also defined using XAML. In addition, Visual Studio, SharePoint Designer, Microsoft Expression, and XAMLPad can be used for XAML file manipulation.
Windows Presentation Foundation has many features that enrich your next-generation applications. WPF is intended to be the next-generation graphics API for Windows applications on the desktop, and applications written in WPF are of visibly higher visual quality. The areas enriched in WPF include graphical services, deployment, interoperability, media services, data binding, user interface, annotations, imaging, effects, text, input, and accessibility.
After setting up your infrastructure for WPF application development, open Visual Studio 2005 and create a new project. The WPF project templates appear under .NET Framework 3.0; select the WinFX Web Browser Application project template. Your new project will be created with two .xaml files, along with the other files associated with the project.
Add a root Page element to Default.xaml, configured as your needs dictate. Here the title bar of the browser is set to "My First WPF Application", the width of the browser is 800 device-independent pixels, the height is 600 device-independent pixels, and the title of the page is "My First WPF Application – Home Page".
If you find the following namespaces under your Default.xaml file,

xmlns="http://schemas.microsoft.com/winfx/avalon/2005"
xmlns:x="http://schemas.microsoft.com/winfx/xaml/2005"

then replace them with the following namespaces:

xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
Now add a new WinFX page and name it DisplayPage.xaml. This DisplayPage.xaml file will eventually be used to bind display data to the UI. In the Default.xaml file, add the following XAML tags to display text with a hyperlink. The hyperlink navigates to your next XAML file.
<!-- Add Links to Other Pages Here -->
<StackPanel>
<TextBlock>
<Hyperlink NavigateUri="DisplayPage.xaml">
Data DisplayPage
</Hyperlink>
</TextBlock>
</StackPanel>
The text will display in your browser in the format above. You can then position the text anywhere in your browser with display formatting; the following XAML tags will help you format the display of your text.
<!-- Basic Display Formatting Below Text-->
<Grid.ColumnDefinitions>
<ColumnDefinition Width="Auto" />
<ColumnDefinition Width="Auto" />
</Grid.ColumnDefinitions>
<Grid.RowDefinitions>
<RowDefinition Height="*" />
<RowDefinition Height="Auto" />
</Grid.RowDefinitions>
<Grid.Resources>
<Style TargetType="{x:Type TextBlock}">
<Setter Property="FontFamily" Value="Lucida Console" />
<Setter Property="FontSize" Value="30" />
</Style>
</Grid.Resources>
Now add an image file to your solution and see how it can be displayed. In this sample I add an image called tiger.PNG. In the Solution Explorer tree, right-click the image and select Properties from the context menu. Change the Build Action from Resource to Content, and set Copy to Output Directory to Copy Always.
And place the following XAML tags in your DisplayPage.xaml
file.
<Viewbox>
<Canvas Width="600" Height="600">
<Canvas>
<!-- bounding image Path -->
<Image Source="tiger.PNG" />
</Canvas>
</Canvas>
</Viewbox>
These are the simple steps to get started with Windows Presentation Foundation. Hopefully this will help beginners with WPF.

"WPF/E" (codename) Community Technology Preview for Windows (Feb 2007)
"WPF/E" (codename) Community Technology Preview Sample Pack (Feb 2007)
"WPF/E" (codename) Software Development Kit (SDK) Community Technology Preview (Feb 2007)
Microsoft Expression Web Free Trial
Microsoft® Expression Web Quickstart Guide
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
http://www.codeproject.com/Articles/18039/Step-into-the-new-Microsoft-Windows-Presentation-F
java.lang.Object
  ucb.junit.textui

public class textui

public static final int Silent
  Silent: prints nothing.
public static final int Summary
  Summary: prints the total test time and the numbers of tests run and failed.
public static final int TestNames
  TestNames: as for Summary, plus the names of failing test methods.
public static final int Messages
  Messages: as for TestNames, plus a descriptive message noting each error and its location in the test routines.

public textui()

public static int runClasses(int verbosity, Class<?>... classes)
  Runs the tests in CLASSES at the given VERBOSITY.
public static int runClasses(Class<?>... classes)
  Equivalent to runClasses(Messages, CLASSES).
public static int runClasses(int verbosity, List<Class<?>> classes)
  Equivalent to runClasses(VERBOSITY, CLASSES), but with the class list stored in a list rather than an array.
public static int runClasses(List<Class<?>> classes)
  Equivalent to runClasses(Messages, CLASSES).
public static void main(String... args)
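Putting the runner to work might look like the sketch below. The test class is invented for illustration, the runner is assumed to accept JUnit 4-style test classes, and the return value is assumed to count failing tests, consistent with the summary output described above.

import ucb.junit.textui;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class AllTests {
    // A hypothetical test class for the runner to execute.
    public static class ArithTest {
        @Test public void addition() { assertEquals(4, 2 + 2); }
    }

    public static void main(String[] args) {
        // Run at the Messages verbosity: failing test names plus messages.
        int failures = textui.runClasses(textui.Messages, ArithTest.class);
        System.exit(failures == 0 ? 0 : 1);
    }
}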
http://www.eecs.berkeley.edu/~hilfingr/software/doc/ucb-package/ucb/junit/textui.html
by Lane Friesen,
JavaBoutique.
Examine the cart at.
It uses JDK 1.1, and thus requires Explorer 4 or 5, or versions 4.5
and higher of Netscape. Try pressing Reset when the cart comes up.
On Explorer, the cart will even survive a Shift-Reset. You can
press Reset fast and repeatedly, and you probably won't crash the
cart.
Let's look at how it's done. There are two methods; the first involves applet reloading. JAVA applets are in an active state for as long as the HTML page in which they are loaded is on the browser screen. When a jump is made to another page, the applets that were loaded on the previous page become inactive.
Unless there is a shortage of memory, they are not unloaded.
Whenever the browser returns to an applet's page, then the
particular applet for that page is not usually reloaded, but rather
it is changed from an inactive state back again to an active state.
Variables that were set in a previous active stage are remembered
during times of inactivity, and can be accessed when an applet
again becomes active.
Now, what happens if the same applet (same name, same codebase)
is loaded by two separate pages? An applet is identified only by
its name and codebase. Thus, if a page loads an applet with the
same name as one that was loaded by a previous page, and if this
involves a relative (in contrast to an absolute) call to the same
codebase, then the applet for the previous page becomes active
again - just as if it had been called by the previous page. A new
instance of the applet is not usually loaded. This feature is used
by the shopping cart to store information between HTML pages. It
works much better than cookies.
An applet, however, contains program code as well as variables.
Thus, since it is a program that is being re-activated, persistent
processing can be carried out on the persistent memory. A program
can thus be written that appears to live on from page to page: all
that is necessary is to reload the same code. When the user leaves the applet's page, the exit alters the page's applet from an active to an inactive state; when the user then enters a different page that reloads the same applet through a relative call, the entry changes the applet back to an active state in which it uses its previous memory. The new page, as
part of the applet's life cycle, calls the applet's init() and
start() methods; apart from that, applet program function
picks up at the point where it left off on the previous
page. If one desires to keep certain variables, then it is
enough to declare them as static, and make sure they are linked to
static entities - the garbage collector will then leave them
alone.
If an applet with the same name but a different codebase is
loaded, then, for security reasons, it is assigned a new address
space. The same shopping cart code, even when it is run
simultaneously on separate commercial sites from the same browser,
can therefore develop and maintain distinct shopping carts for the
separate merchants. JAVA security, which we exploit here to our
advantage, takes care of all the details!
If an HTML document in a subdirectory of a site wishes to
participate in the persistence, then it is sufficient to add the
line CODEBASE=../ to the APPLET tag. This technique can be extended
throughout a directory and subdirectory structure, so that
persistence spreads freely throughout any given site.
Alternatively, it is often possible to access the applet
absolutely, as in CODEBASE=
as long as the applet and the HTML pages are located on the
same server. This absolute call is then treated as
relative. The boilerplate code now becomes independent of
directories and sub-directories.
Applet download time is of course decreased greatly when all
classes are placed into a single uncompressed .zip file. It is my
understanding that earlier versions of Explorer 4 can find .zip
files confusing; the solution is to place the unzipped classes in
the same directory as the .zip file. Most browsers simply ignore
these extra files, and they may in fact no longer be necessary.
Since Active X, as implemented in Microsoft's Dynamic HTML, is rooted in the JAVA language, it turns out that it can also generate persistence. The said applet has in fact been successfully ported to an Active X object: the boilerplate code of the previous section may be altered so that it is an <OBJECT> that is loaded, rather than an <APPLET>. With a few minor adjustments, the functionality is the same.
The applet or object that is continually being reloaded does not
need to be entirely identical from page to page. In particular, it
may have different parameters, as expressed in varying
<PARAM> tags under the <APPLET> tag. Clever use of
freedom with parameters can enable massive changes in the classes
that are instantiated from one page to the next, generating genuine
alterations in the code itself. The fact that init() and start()
are called when a page is entered, and stop() is called when the
page is left, can be used to do some very useful things: the
garbage collector is always there to get rid of discarded elements.
The program can vary in ways that would never be possible if it
were not being continually re-activated. Alternatively, boolean
variables can be set or cleared in an abstract class, or in some
other non-instantiated region, which also determine ways in which
the said applet or Active X object re-incarnates itself.
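As a concrete illustration of this parameter-driven variation, the hypothetical applet below reads a PARAM tag in init() and chooses its behavior accordingly; the parameter name and values are invented for the example.

import java.applet.Applet;

public class ShapeShifter extends Applet {
    public void init() {
        // Each page supplies its own value, e.g.
        // <PARAM NAME="mode" VALUE="checkout">.
        String mode = getParameter("mode");
        if ("checkout".equals(mode)) {
            // instantiate the checkout-related classes on this page...
        } else {
            // ...and the browsing-related classes everywhere else.
        }
    }
}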
It is not necessary to load an applet or object into every page
of a site in order to gain persistence. It is enough to load the
said applet or object into those pages in which access to the
accumulated information is necessary. Browser mechanisms and the
functionality of JAVA make sure that the information is maintained
(for as long as memory permits) until it is needed. The information
will persist even through extended access to pages or sites which
know nothing about the applet or object. The applet simply sleeps,
in an inactive state, until reloaded through a relative call by
some HTML page that desires access to its program code or
memory.
Of course, if the applet is inactive, and if memory is needed
for other JAVA programs, then the virtual machine will eventually
overwrite the inactive program. There is room on the web,
therefore, for only a limited number of programs that use this
technique. Any particular intranet of course would have its own
separate limitation.
To protect the functionality of the Open Source shopping cart, I
have taken out a patent on the techniques used to develop
persistence. As long as developers change the source code of the
Open Source shopping cart in such a way that the program
remains a shopping cart with an
associated database cache, then they have
permission to use these patented techniques. This should protect
the bandwidth, and I retain rights to any use of these techniques
elsewhere.
Persistence of memory throughout other pages or sites is not a
security issue because an applet or Active X object is resurrected
(or made accessible to other programs) only if it resides at the
same codebase, or at some relative offset.
Let us now look at the second method for developing persistence
between HTML pages. Although it uses abstract classes, we will see
that this limitation can be bypassed. In nature, we observe objects
like robins and crows, but never things such as "birds." Robins and
crows are birds, but birds as such do not exist; they are an
abstraction. Similarly, abstract classes in JAVA are a tool for
dealing with things that are common to various objects (in nature,
for example, commonalities would be wings or beaks)-- they are not
themselves instantiated. What we are going to do, in general terms,
is take an abstract class which is an abstraction of
nothing, and inject code into it, just as a virus
inserts its DNA into a cell and takes over the machinery for its
own purposes. An infected cell turns out copies of the virus; the
abstract class, in our case, simply chooses to "live forever." It
is all done, as you will see, with completely standard JAVA coding.
The principle will then be extended through standard coding, to
classes which are not abstract.
If one looks at abstract classes in JAVA, one notices that they are tied very closely to the JAVA virtual machine. If they could be instantiated, then certainly they would persist; programs could then be remembered along with data, even when the applet was inactive. However, abstract classes cannot be instantiated.
Suppose that applet code is composed solely of pointers to
static variables defined in some abstract class (sample code is
presented further on). Suppose further that static components for a
frame (buttons, etc.), as well as the static name of the frame
itself, are also defined in the abstract class. Then, suppose that
init() or start() in the applet calls a static function in the
abstract class which instantiates the static frame and assigns the
static components to it. It then sets a flag in the abstract class
that tells it not to do this again. One now has a kind of
pseudo-constructor for the abstract class. The resulting frame and
its components, are tied through the abstract class, to the JAVA
virtual machine itself, and they persist from HTML page to HTML
page, even when the applet is not being loaded.
The applet that originally launched both the frame and the abstract
class may move back and forth from an active to an inactive state,
as the user browses from one HTML page to another. This makes no
difference to the frame itself, for it is no longer dependent upon
the applet.
Now, how do we enable events within this "terminate and stay
resident" frame so that it will carry out actions when "off site?"
It is done in the following manner (sample code follows):
Events must be caught in the frame itself, but all variables and
interrupt methods are contained as static variables and static
methods in the abstract class. The frame event handlers simply make
jumps to these static variables and methods. Through a use of these
two separate persistence techniques-- loading a program repeatedly,
and forming a pseudo-constructor for an abstract class-- we have
now created a program that lives beyond the page in which it was
launched. It also remembers data that is collected from page to
page for as long as a page accesses the launching applet through a
relative call, and carries out event handling within the frame
itself, on applet data, when off-site.
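A sketch of that event-delegation pattern, with invented class and method names, might look like the following; the frame merely forwards each event to static state and methods held by the abstract class, so the behavior survives even when no applet is active.

import java.awt.*;
import java.awt.event.*;

class CartFrame extends Frame implements ActionListener {
    CartFrame() {
        add(Cart.addButton);              // static component from the abstract class
        Cart.addButton.addActionListener(this);
        pack();
    }
    public void actionPerformed(ActionEvent e) {
        Cart.addItem();                   // jump to the static interrupt method
    }
}

abstract class Cart {
    static Button addButton = new Button("Add to cart");
    static int itemCount = 0;
    static void addItem() { itemCount++; }
}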
The JAVA program has now broken away from the launching page,
and it "lives forever." It is interesting to reduce the browser to
a partial window, and to place the shopping cart frame on the
desktop beside it. Browse from page to page, and watch the frame:
it doesn't flicker, and the buttons continue to work. JAVA in the
frame is fully enabled at all times.
There appears to be a small restriction. If you are running JAVA
off-site (if you force the cart to do something by pressing a
button), and if the resulting event-handling uses "new" to create a
variable, and if at the same time the browser is
working very hard at carrying out programming in a page, then it is
theoretically possible to crash the browser. In plain language,
suppose that the shopping cart frame breaks free of the launching
site and you go to some other point in cyberspace, such as the Mars
Lander site, starting some big VRML program. Then, as it is
in the middle of calculations, and things are
changing on the screen, if you bring up the persistent shopping
cart frame and begin pushing buttons, you can probably provoke a
crash. However, it takes a good deal of careful manual
dexterity.
The solution here is simple: Avoid the use of "new" in any JAVA
programming that is triggered when off-site. I have tested this,
and it appears to make it impossible for even the most obnoxious
person to crash the cart. This step has not been taken in the
current code because it can involve the use of global static
variables, and these make further program development very
difficult. Also, the slight potential instability off-site does not
appear to be a problem at all for this shopping cart in every-day
usage.
The following code will place a frame on a web page, and then
save its position as it is moved, so that it appears at the altered
position when a new page is loaded. This demonstrates
persistence.
import java.awt.*;
import java.applet.*;

public class Applet1 extends Applet{
    public void init(){
        // Recreate the frame only if it does not already exist,
        // placing it at the position remembered in Constants.
        if(Constants.frame == null){
            Constants.frame = new Frame();
            Constants.frame.reshape(Constants.framex.intValue(),
                Constants.framey.intValue(), 300, 300);
        }
        Constants.frame.show();
    }
    public void stop(){
        // Record the frame's current position before the page is left,
        // then release the frame itself.
        Constants.framex = new Integer(Constants.frame.bounds().x);
        Constants.framey = new Integer(Constants.frame.bounds().y);
        Constants.frame.dispose();
        Constants.frame = null;
    }
}

abstract class Constants{
    // Static fields tied to the virtual machine persist across pages.
    static Frame frame;
    static Integer framex = new Integer(225);
    static Integer framey = new Integer(150);
}
Compiling this code will generate the two files Applet1.class
and Constants.class.
Place the following code in HTML Page 1:
<HTML>
<HEAD>
</HEAD>
<BODY>
<applet code=Applet1.class name=Applet1 width=1 height=1> </applet>
<A HREF="Page2.htm">Page2.htm</A>
</BODY>
</HTML>
Place the following code in HTML Page 2:
<HTML>
<HEAD>
<TITLE></TITLE>
</HEAD>
<BODY>
<applet code=Applet1.class name=Applet1 width=1 height=1 id=Applet1> </applet>
<A HREF="Page1.htm">Page1.htm</A>
</BODY>
</HTML>
When the applet is loaded, method init() is called. This creates
a frame, and places it at the (x,y) position defined by variables
framex and framey. A look at class Constants tells us that these
are initialized at 225 and 150 respectively. Suppose that while
Page1 is up, the user moves the frame. Method stop() is called
automatically when Page1 is left by the user, and this determines
the (x,y) position to which the frame was moved by the user, and
remembers it by changing variables framex and framey. When the page
is reloaded, method init() is called again. It might seem that the frame should be placed again at coordinates (225,150), as defined in class Constants; however, the values of variables framex
and framey, when a reload occurs, are those set by method stop()
when the previous page was left. The initialization in
class Constants is ignored by a reload. Why? The program
is not being restarted; it is simply being re-activated. That is
the essence of persistence as generated by applet reloading.
There will be stability problems with the frame in some
browsers, for it is constantly being destroyed and recreated again
(our later examples remove this problem). This is associated with
the Abstract Windows Toolkit, and is not linked to persistence. The
said instability is present even when the applet is not being
continually reloaded, and is caused by the fact that the applet
must interact with a peer, which does the actual work of
constructing the frame. The said instability is removed in three
ways: the first is to adjust the order of frame operations so as to
minimize pressure on the peer. The second is to follow every frame
operation with a call to pack(). This appears to place a modal stop
on further execution until the peer has completed its task. The
third is to introduce delay loops at critical points. For instance,
one might define the method pause(200), in which pause(time) is
defined as follows:
void pause(int time){
    long startTime = System.currentTimeMillis();
    while(System.currentTimeMillis() < startTime + time){
        // Busy-wait: time slicing creates a pause that gives the peer
        // an opportunity to finish.
    }
}
Notice in the applet code that the frame name is defined in the
abstract class Constants. This is absolutely essential for
stability. Notice also that the class Constants is never
instantiated. You can see already, in this example, that we have
started to manipulate an abstract class.
In contrast to the applet example, which uses the Abstract
Windows Toolkit, there appears to be no stability problem at all in
the Active X implementation of persistence, which uses Windows
Foundation Classes. This demonstrates graphically that persistence
can be used to generate very dependable code.
import com.ms.wfc.html.*;
import com.ms.wfc.ui.*;

public class Class1 extends DhDocument{
    public Class1() {
        // Recreate the form only if it does not already exist,
        // placing it at the position remembered in Constants.
        if(Constants.form == null){
            Constants.form = new Form();
            Constants.form.setBounds(Constants.formx.intValue(),
                Constants.formy.intValue(), 300, 300);
            Constants.form.setVisible(true);
        }
        Constants.form.bringToFront();
    }
    public void dispose() {
        // Record the form's position before the page is left.
        Constants.formx = new Integer(Constants.form.getBounds().x);
        Constants.formy = new Integer(Constants.form.getBounds().y);
        Constants.form.dispose();
        Constants.form = null;
        super.dispose();
    }
}

abstract class Constants{
    static Form form;
    static Integer formx = new Integer(225);
    static Integer formy = new Integer(150);
}
This code is compiled with Microsoft Visual J++ version 6 and is used to form an Active X object that will be loaded by the following two HTML pages. The first, or something similar to it, is generated automatically by the Visual J++ compiler and saved as Page1.htm. Notice that we have inserted, into the automatically generated code, a hyperlink to Page2.htm.
<HTML>
<HEAD>
<TITLE></TITLE>
</HEAD>
<BODY>
<OBJECT classid="java:com.ms.wfc.html.DhModule"
height=0 width=0 ... VIEWASTEXT id=OBJECT1>
<PARAM NAME=__CODECLASS VALUE=Class1>
<PARAM NAME=CABBASE VALUE=Project3.CAB>
</OBJECT>
<A HREF="Page2.htm">Page2.htm</A>
</BODY>
</HTML>
Page2.htm is produced by copying the contents of Page1.htm into an empty HTML page, naming the new page Page2.htm, and altering the copied hyperlink so that it becomes a return link to Page1.htm:
<HTML>
<HEAD>
<TITLE></TITLE>
</HEAD>
<BODY>
<OBJECT classid="java:com.ms.wfc.html.DhModule"
height=0 width=0 ... VIEWASTEXT id=OBJECT1>
<PARAM NAME=__CODECLASS VALUE=Class1>
<PARAM NAME=CABBASE VALUE=Project3.CAB>
</OBJECT>
<A HREF="Page1.htm">Page1.htm</A>
</BODY>
</HTML>
The previous two examples can be tested in the following manner:
1) Place all relevant files into the same directory.
2) Load Page 1 into a browser.
3) Use the mouse to move the frame (or, in the case of Active X, the form) to a different location.
4) Use the browser to jump to Page 2. Notice that the frame does not come up in the original location but rather retains its new position.
5) Use the browser to jump to an unrelated page, and then reload either Page 1 or Page 2. You will notice that the frame returns and maintains its latest position.
6) Repeat steps 3 to 5 as often as you wish.
http://www.linuxtoday.com/developer/2000070200204OSSW
= Compatibility Policy =
NB: This is a work in progress.
[[PageOutline(2-4,, inline)]]
== Motivation ==
The Twisted project has a small development team, and we cannot afford to provide anything but critical bug-fix support for multiple version branches of Twisted. However, we all want Twisted to provide a positive experience during development, deployment, and usage. Therefore we need to provide the most trouble-free upgrade process possible, so that Twisted application developers will not shy away from upgrades that include necessary bugfixes and feature enhancements.
Twisted is used by a wide variety of applications, many of which are proprietary or otherwise inaccessible to the Twisted development team. Each of these applications is developed against a particular version of Twisted. Python does not provide us with a strict way to partition "public" and "private" objects (methods, classes, modules), so it is unfortunately quite likely that many of those applications are using arbitrary parts of Twisted. Our compatibility strategy needs to take this into account, and be comprehensive across our entire codebase. Exceptions can be made for modules aggressively marked "unstable" or "experimental", but even experimental modules will start being used in production code if they have been around for long enough.
The purpose of this document is to lay out rules for Twisted application developers who wish to weather the changes when Twisted upgrades, and procedures for Twisted engine developers - both contributors and core team members - to follow when they want to make changes to Twisted itself which may be incompatible.
== Defining Compatibility ==
The word "compatibility" is itself difficult to define. While comprehensive compatibility is good, ''total'' compatibility is neither feasible nor desirable. Total compatibility requires that nothing ever change, since any change to Python code is detectable by a sufficiently determined program. There is some folk knowledge around which kind of changes "obviously" won't break other programs, but this knowledge is spotty and inconsistent. Rather than attempt to disallow specific kinds of changes, here we will lay out a list of changes which are considered compatible.
Throughout this document, "compatible" changes are those which meet these specific criteria. Although a change may be broadly considered backward compatible, as long as it does not meet this official standard, it will be officially deemed "incompatible" and put through the process for incompatible changes.
=== Non-Incompatibilities ===
* test changes. No code in a test package should be imported by a non-test package within Twisted, so there's no chance anything could access these objects by going through the public API.
* "private" changes. Code is considered "private" if the user would have to type a leading underscore to access it. In other words, a function, module, method, attribute or class whose name begins with an underscore may be arbitrarily changed, ''unless'':
* if a "public" entry point returns a "private" object, that "private" object must preserve its "public" attributes. For example:
{{{
#!py
class _y:
def z(self): return 1
def _q(self): return 2
def x(): return _y()
}}}
In this example, `_y` can no longer be arbitrarily changed: 'z' is now a public method, thanks to 'x' exposing it. However, '_q' can still be arbitrarily changed or removed.
* a "private" class is exposed by way of a "public" subclass. For example,
{{{
#!py
class _a:
def b(self): return 1
def c(self): return 2
class d(_a): pass
}}}
In this example, `_a` is effectively public, since 'b' and 'c' are both exposed via `d`.
* Source
* The most basic thing that can happen between Twisted versions, of course, is that the code may change. That means that no application may ever rely on, for example, the value of any {{{func_code}}} object's {{{co_code}}} attribute remaining stable, or the checksum of a .py file remaining stable.
* Docstrings may also change at any time. No application code may expect any Twisted class, module, or method's {{{__doc__}}} attribute to remain the same.
* Attributes: New code may also be added. No application may ever rely on the output of the 'dir' function on any object remaining stable, nor on any object's {{{__all__}}} attribute, nor on any object's {{{__dict__}}} not having new keys added to it. These may happen in any maintenance or bugfix release, no matter how minor.
* Pickling: Even though Python objects can be pickled and unpickled without explicit support for this, whether a particular pickled object can be unpickled after any particular change to the implementation of that object is less certain. Because of this, no application may depend on any object defined by Twisted to provide pickle compatibility between any release unless the object explicitly documents this as a feature it has.
==== Fixing Gross Violation of Specifications ====
If Twisted documents an object as complying with a published specification, and there are inputs which can cause Twisted to behave in obvious violation of that specification, then changes may be made to correct the behavior in the face of those inputs. If application code must support multiple versions of Twisted, and work around violations of such specifications, then it ''must'' test for the presence of such a bug before compensating for it.
For example, Twisted supplies a DOM implementation in {{{twisted.web.microdom}}}. If an issue were discovered where parsing the string {{{
http://twistedmatrix.com/trac/wiki/CompatibilityPolicy?format=txt
TIFF and LibTiff Mailing List Archive
April 2005
On Mon, 11 Apr 2005, katrina maramba wrote:
>
> Can anybody tell me what the TIFF_MAPPED macro means? It's comment
> says "file is mapped into memory". What does this mean? When can I
> have a 'memory mapped file'?
Libtiff normally memory-maps input files on platforms where it makes sense. You can reinforce your desire for memory-mapped input by adding 'M' to the TIFFOpen() option string, or prevent it by adding 'm' to the option string.
You can gain a bit more control over what libtiff does by using
TIFFClientOpen() to define the I/O functions it should use. One of
these functions is to memory map the file.
Please note that libtiff memory maps the entire file. It does not
attempt to do any "windowing" on the memory mapping. This means if
the input TIFF file is larger than the available process address
space, the mapping will fail.
/*
* Setup initial directory.
*/
switch (mode[0]) {
case 'r':
tif->tif_nextdiroff = tif->tif_header.tiff_diroff;
/*
* Try to use a memory-mapped file if the client
* has not explicitly suppressed usage with the
* 'm' flag in the open mode (see above).
*/
if ((tif->tif_flags & TIFF_MAPPED) &&
!TIFFMapFileContents(tif, (tdata_t*) &tif->tif_base, &tif->tif_size))
tif->tif_flags &= ~TIFF_MAPPED;
if (TIFFReadDirectory(tif)) {
tif->tif_rawcc = -1;
tif->tif_flags |= TIFF_BUFFERSETUP;
return (tif);
}
break;
Bob
======================================
Bob Friesenhahn
bfriesen@simple.dallas.tx.us,
GraphicsMagick Maintainer,
http://www.asmail.be/msg0054787562.html
Java
Java Util Package - Utility Package of Java
Java Util Package - Utility Package of Java
Java Utility package is one of the most commonly used packages in the java
program. The Utility Package of Java consist
Lang and Util Base Libraries
Lang and Util Base Libraries
The Base libraries provides us the fundamental features and functionality of
the Java platform.
Lang and Util Packages
Lang...-performance threading utilities. Some example are blocking queues and
thread pools
EJB - EJB
2.0 and EJB 3.0
EJB 2.0
It is very complex, difficult to learn/use...EJB What is the difference between EJB 2.0 and EJB 3.0?
GIVE ME... EntityBeans for to access the database.But ejb3.0 we are using the JPA(Java
Iterate java collection
with the reference of other collection which supports iterator() method
Example...
Collection is the top level interface of the Collection framework.
Iterator interface has methods for traversing over the elements
of the collection
example | Java
Programming | Java Beginners Examples
| Applet Tutorials...
largest collection of tutorials on various programming languages. These days...
applications, mobile applications, batch processing applications. Java is used
ejb - EJB
EJB 3.0 is simplified Specification but still very powerful. Now... the javax.ejb.EnterpriseBean interface by EJB class. The EJB bean class is now pure java class... of 3.0 version. Hi friend,
EJB :
The Enterprise JavaBeans.
EJB Example - EJB
EJB Example Hi,
My Question is about enterprise java beans, is EJB stateful session bean work as web service? if yes can you please explain with small example.
Thanks
sravanth Hi Friend,
Please visit
Design patterns in ejb - EJB
development best practices, and a collection of EJB tips and strategies...Design patterns in ejb Hi any any one send some links or material on design patterns in EJB iam toally new to this concept(i.e design patterns)
Java AWT Package Examples
Swing Example
Java util...Site Map - All java tutorials
In this section of sitemap we have listed all the important sections of java tutorials.
Select the topics you want
default values |
C Array Declaration |
C Array copy example
Java Tutorial... Threading
Tutorial | Java 5 Tutorials
|
EJB Tutorial |
Jboss 3.0... | Site
Map | Business Software
Services India
Tutorial Section
util packages in java
util packages in java write a java program to display present date and after 25days what will be the date?
import java.util.*;
import java.text.*;
class FindDate{
public static void main(String[] args
Java List Iterator Example
Java List Iterator Example
In Java Collection framework every classes provides... to start the collection. By
using this iterator objects you can access each element in a collection.
An example given below
At fist make an object and add some
Java
Code example
Java Programming
Java
Beginners...
JDBC
Tutorial
EJB
Tutorials
Java Server... Site Map
We have organized our site map for easy access.
You can
Logic Iterate.
the Particular rows text value.
for example:
print("<logic:iterate id="xid...Logic Iterate. logic iterate Struts
I have one doubt about logic:iterate.
I use text field inside of logic:iterate , there is possible
garbage collection in java
.style1 {
color: #FFFFFF;
}
Garbage Collection in Java
In java...;
C:\unique>java GarbageCollector
Garbage Collection... are destroyed for later reallocation of their memory.
In java this is done automatically
garbage collection - Java Beginners
.
For read more information :... then it will be automatically deleted by the garbage collector.
It save the some amount of memory.It identifies the objects that are no longer used by a program.
A Java
EJB - EJB
sessions, for the very reason that it is resource hungry. For eg, if there are 100... on Java Beans
Thanks
EJB, Enterprise java bean- Why EJB (Enterprise Java Beans)?
Why EJB (Enterprise Java Beans)?
Enterprise Java Beans or EJB for short is the
server-side component architecture for the Java 2 Platform
Iterate java Arraylist
().
Example of Java Arraylist Iterate
import java.util.*;
public class...
Iterator is an interface in the collection framework.
ArrayList is a collection class and implements the List
Inteface.
All the elements
Java Collection : NavigableSet Example
Java Collection : NavigableSet Example
In this tutorial we are describing NavigableSet of collection Framework
NavigableSet :
NavigableSet interface...(). Here we are defining some of
NavigableSet methods -
ceiling(E e) : This method
DEVELOPING & DEPLOYING A PACKAGED EJB
' is very important in Enterprise Java. Compiling and Deploying such packaged servlets...;java weblogic.ejbc sqlejbpack.jar sqlejbpack1.jar
After some time, we will get...
DEVELOPING & DEPLOYING A PACKAGED EJB (WEBLOGIC-7) & ACCESSING
Chapter 5. EJB transactions
present special problems for transaction processing. For example, an update
to the first EJB may be successful, but the update to the second one may...
Chapter 5. EJB transactionsPrev Part I. Exam
collection
collection As we know array holds the similar kind of elements, then in collection how toArray() method will convert the collection having different objects as elements to an array in java
ejb
ejb what is ejb
ejb is entity java bean
Collection of Large Number of Java Sample Programs and Tutorials
;
Iterate
Collection
In this Example you will learn you will learn how to iterate
collection in java. A running...Collection of Large Number
of Java Sample Programs and Tutorials
Java
EJB Books
for the experienced Java developer or manager, Professional EJB provides a truly... with this powerful component
standard. While some titles on EJB are long... development best practices, and a collection of EJB tips and strategies, and other
How about this site?
Java services What is Java WebServices?
If you are living in Dallas, I'd like to introduce you this site, this home security company seems not very big, but the servers of it are really good.
Dallas Alarm systems
Collection
Collection i need a collections examples
Please visit the following links:
Collection : Iterator Example
Collection : Iterator Example
In this section we will discuss Iterator with example.
Iterator :
Iterator interface is a member of the Java Collection... element that
was returned by next .
Example :
package collection Example Update Method - Java Beginners
Java Example Update Method I wants simple java example for overriding update method in applet .
please give me that example
java - EJB
java Can anyone tell me about the workflow of EJB.I have developed an application by an example that contains a session bean and a CMP but not able....
Diffrent EJB & Java Bean - EJB
Diffrent EJB & Java Bean What is Different between EJB & Java Bean... model that adds a number of features to the Java programming language. Some...-based software components that are built to comply with Java's EJB specification
Java util package Examples
Java Collection : TreeSet Example
Java Collection : TreeSet Example
This tutorial contains description of TreeSet with example.
TreeSet :
TreeSet class is defined in java.util... constructor - TreeSet(), TreeSet(Collection
c), TreeSet(Comparator c
HashSet Example
the methods to add, remove and iterate the values of
collection. A HashSet... CollectionTest
Collection Example!
Collection data: Blue White Green...
HashSet Example
Collection of Large Number of Java Interview Questions!
Interview Questions - Large Number of Java Interview Questions
Here you... in Job Interview its very important to
prepare well. The easy way to prepare... usually asked in the Job
Interviews.
Huge collection
Java Collection
Java Collection What is the Collection interface
Prepared Statement With Batch Update
with
BatchUpdate and we are going to provide an example
that performs batch update... simultaneously by using the some java methods like: addBatch
and executeUpdate. ... is a collection of multiple update statements that provides the
facility for submitting
JDBC: Update Records Example
JDBC: Update Records Example
In this section, you will learn how to update records of the table using JDBC
API.
Update Records : Update record is most...("Update Records Example...");
Connection con = null;
Statement
Java Collection API - Java Tutorials
Java Collection API
Collection was added to Java with J2SE 1.2 release. Collection framework is
provided in 'java.util.package'.
All collections....
Example :
In the below example, the method "add" of Collection EJB - EJB
java EJB Please,
Tell me the directory structure of EJB applications
and how to deploy ejb on glassfish server,
java collection - Development process
java collection - creating and comparing lists using Java How.... Comparing and Creating new list in Java Java Example code for creating... Example code for creating and comparing lists using Java.
import java.util.
Java Collection-TreeSet
Java Collection-TreeSet What is TreeSet in Java Collection?
Example:
import java.util.Iterator;
import java.util.TreeSet;
public... the elements of treeset using Iterator and display the elements.
Example
EJB 3.0 Tutorials
a Java Object that receives the JMS messages to call EJB client
Reuse... in EJB
3.0 is POJO (Plain Old Java Object). It is a Java object that doesn't...
A Java
Persistence Example
In the Book
Java collection LinkedHashSet
Java collection LinkedHashSet How can we use LinkedHashSet in java collection?
import java.util.Iterator;
import...
Description:- The above example demonstrates you the Set interface. Since Set
EJB with NetBeans
EJB with NetBeans I am very new in Ejb and with very few knowledge... some example code like this code or many shopping cart examples. Can any body... no idea about Ejb, I am not getting how to run the code which includes a simple
Java collection HashSet
Java collection HashSet How can we use HashSet in java program?
The hashSet class is used to create a collection and store it in a hash table. Each collection refer to a unique value.
import java.util.Collections
Java Collection : WeakHashMap
Java Collection : WeakHashMap
In this tutorial, we are going to discuss one of concept (WeakHashMap ) of
Collection framework.
WeakHashMap... if key is presented.
Example :
package collection;
import java.util.Map
Collection in java
Collection in java What are the meaning of the letters E,T,K and V that come in the collection concept
How to Save Your Site from Google Penguin Update
site from Google penguin update all are not realistic enough to change your web... quickly make the difference as to save your site from Google penguin update... reputation and in the post Google Penguin Update scenario chances are very low
Java Collection : Hashtable
Java Collection : Hashtable
In this tutorial, we are going to discuss one of concept (Hashtable ) of
Collection framework.
Hashtable :
Hashtable... internally.
Example :
package collection;
import
iBatis Update -Updating data of a table
;
Add, Update and Delete are very common... executing an update statement is very
simple. For updating you have to add SQL...;
</sqlMapConfig>
iBatis Update Query
Here in our example we
java - EJB
java execution process of ejb
to update the information
update the information sir, i am working on library mgt project. front end is core java and backend is ms access.i want to open,update... the following link:
Java collection HashSet and TreeSet
Java collection HashSet and TreeSet How can we used HashSet and TreeSet in the both Example?
import java.util.HashSet;
import... except set1 element [A, B]
Description:- The above example demonstrates you
MDB - EJB
for more information.... in an EJB server - all the Swing code you've supplied is not MDB, its regular JMS MessageListeners / consumers as its not using MDBs or EJB.
import javax.swing.
Collection framework tutorial
framework in Java. I want many examples of Java collection framework. Tell me the best tutorial website for learning Java Collection framework.
Thanks
Hi,
Following urls are best for learning Java Collection framework:
Collections
doubt in ejb3 - EJB
(name="example")
EntityManager em;
public static final... = Persistence.createEntityManagerFactory("example");
// this.em= emf.createEntityManager();
// this.emf = Persistence.createEntityManagerFactory("example");
// this.em
Chapter 9. EJB-QL
.
The following example returns a collection of the CMP fields of type
Unlike a Java variable, an EJB QL identifier IS NOT case sensitive....
For example, matching finder method declaration and EJB QL
Java Example Codes and Tutorials
, collection many Java Applets example with running code. ...Java Tutorials - Java Example Codes and Tutorials
Java is great programming... Wide Web.
Java is an object-oriented language, and this is very
similar to C
Collections in Java
of classes and interface and used by Java professionals. Some collections in Java that are defined in Java collection framework are: Vectors, ArrayList, HashMap... in the collection.
Java Collection framework can be used to store phone
update a JTable - Java Beginners
://
Thanks...update a JTable how to update a JTable with Mysql data through user interface Hi friend,
Please implement it like the following code
ejb - EJB
ejb hi
i am making a program in java script with sateful... of loan,2-amount 3-year.
these field are in java script when user select types... response.
java script code :
JavaScript Loan Calculator
EJB - Java Interview Questions
state should not be retained i.e. the EJB container
destroys a stateless... invocation among all the clients to perform the generic task.
Some points... in Stateless session beans
For more information on EJB visit to :
http
Hibernate Collection Mapping
a persistent object
passed the collection to another persistent object.
Example...
of collection in Hibernate. To create an example I am using List. A list stores...;Hibernate Collection Mapping Example Using XML ");
Session session
Collection to Array
; a collection into a array. In this example we creating an object of ArrayList,
adding...; interface is a member of the Java Collection
Framework and extends Collection...
Collection to Array
Java collection
Java collection What is relation between map and set
Java Collection
Java Collection What are Vector, Hashtable, LinkedList and Enumeration
Java collection
Java collection What are differences between Enumeration, ArrayList, Hashtable and Collections
Custom Collection Implementations
the Java built-in
Collection Interfaces implementations. Apart from these, some times programmer
need to implement their own collections classes. The Java... Custom Collection Implementations
update a JTable - Java Beginners
update a JTable I have tried your advice on how to update a JTable with MySQL data through a user interface but it is resulting in some errors
here is the code
/*
import java.sql.*;
import javax.swing.*;
import java.awt.
java collection
java collection how do I use a treeset to print unique words
Garbage Collection
Garbage Collection
In the example we are describing how the garbage
collector works.
Description of program:
In the example given below, first Garbage Collection
Java Garbage Collection
The Java language is the most widely used programming
language to rely... memory.
Read example at:
http:/ - EJB
java hi friends,
i am creating one file, i want to store this file in my working directory.how can i get real path or working directory path in ejb?
regards,
srinivas
|
http://www.roseindia.net/tutorialhelp/comment/14682
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Is it correct? Two same named classes in different unnamed namespaces get same typeid
Discussion in 'C++' started by Q77
- Govindan
- Sep 8, 2003
|
http://www.thecodingforums.com/threads/is-it-correct-two-same-named-classes-in-different-unnamed-namespaceget-same-typeid.752229/
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
An Approach to Web Services Non-Functional Requirements Using WSDL Annotations
By Anshuk Pal Chaudhuri, K M Senthil Kumar and Vineet Singh
01 Apr 2006 | TheServerSide.com
1. Introduction
Web Services play a key role in implementing Service Oriented Architecture (SOA). The notion of describing the service independent of the technology in which it has been implemented has been robustly captured in the Web Services Definition Language (WSDL) Specification. WSDL clearly specifies the service location along with the operations that need to be invoked, the parameters that need to be passed, the data type of each parameter, and the various message formats and types. WSDL describes both Document and RPC Style web services as explicitly as possible.
However, in most of the SOAP engines implemented by various vendors, the WSDL for a service is generated automatically when the service is deployed. This gives the developer minimal freedom to customize the WSDL during deployment of the service. For RPC services the amount of information given by the WSDL is more than sufficient for the client to consume the service, but the Basic Profile published by WS-I recommends the use of document style web services for maximum interoperability between service providers and service consumers. This resulted in defining robust message exchanges with the help of XML schema. The XML schema that the input and output messages are based on is embedded into the WSDL, so that the WSDL contains the complete information needed to facilitate dynamic discovery and invocation of the service. Hence the WSDL generated by the SOAP engines needs to be customized to include the schema information.
Also in the recent past, web services have been used in the e-commerce world, dealing with critical data which need to be secured. Since then the free-for-all services concept has been dwindling, with emphasis on proper access control for the service. Services do not include the security feature intrinsically; the feature is provided as a wrapper over the existing services, sometimes completely invisible to the service itself. This helps in re-using the existing services while still keeping the service robust. In scenarios like these, the WSDL description of the service needs to be modified to include the security mechanism involved in invoking the service, so that a client trying to consume the service can pass the necessary security credentials. This raises the issue of customizing the WSDL to carry this information in a standard way to facilitate the dynamic discovery and invocation of the service.
In the coming sections we address these issues and explain a standards-based manner of customizing WSDL. We also analyze various SOAP engines with respect to their support for re-hosting and consuming customized WSDL for proxy generation, describe the interoperability issues and best practices for customizing WSDL, and close with conclusions and the scope of future work.
2. Customizing WSDL
Customizing WSDL documents can result in regenerating the Web services binding file and, in some cases, writing a wrapper program. One reason to add more custom information to the dynamically generated WSDL is custom schema requirements, as well as non-functional requirements to some extent. In different enterprise applications it is also important to embed this extra information in the client stubs. In this context, the inclusion of a custom schema and of security information in a WSDL file shows how exactly the WSDL can be used to invoke a service.
2.1. Schema Inclusion
As web services have been widely accepted in the industry, from small scale pilots to large enterprise level applications, the way information about the service is provided has become the most important point that developers need to address at service deployment time. This ensures that the service can be seamlessly invoked by a client without much knowledge of the service and the underlying technology. As all the information about the service is communicated through the WSDL, it has the option of embedding a schema in it. Using this embedded schema, any SOAP engine tool that uses WSDL as input to generate the proxy classes can create the data holders and create a valid request for the invocation of the service. The types element encloses data type definitions that are relevant for the exchanged messages [1]. A typical WSDL with types definitions looks like the one given in Listing 1.
<wsdl:definitions ...>
  <wsdl:types>
    <xs:schema ...>
      <xs:element ...>
        <xs:complexType>
          <xs:sequence>
            <xs:element ... />
          </xs:sequence>
        </xs:complexType>
      </xs:element>
      [...........................]
    </xs:schema>
  </wsdl:types>
  [.............]
</wsdl:definitions>
Listing 1: WSDL containing the Schema inside the types element
In Listing 1 the entire schema is embedded between the types tags. The information between the types tags is the detail of the data that is exchanged with the service. The important point to notice here is that, unlike RPC based services where the input and output data is simple, a document style web service can have a complex structure of input and output data. Not all SOAP engines (Apache Axis, for example) create a types definition for document style web services. In such cases a static WSDL has to be deployed with the schema embedded in it. This allows a user to generate the proxy classes and value objects for the web service, enabling dynamic invocation.
2.2. Security Information Inclusion
In the last few years, Web services have gone from being an overly-hyped technology to one that many organizations are using productively. Even though service providers were using industry standards like SOAP and WSDL, additional information concerning the security process was needed in order to allow service providers to secure the service and provide a standard way for service requestors to send security related data. So in April 2002, WS-Security became an industry wide standard for exchanging security credentials between service provider and consumer.
The WS-Security Specification states: "WS-Security describes enhancements to SOAP messaging to provide quality of protection through message integrity, message confidentiality, and single message authentication. These mechanisms can be used to accommodate a wide variety of security models and encryption technologies."
2.2.1. Issue of exposing security related information to the consumers
While WS-Security concentrates on the standards required for the security credentials exchanged as part of the SOAP message, there is no standard mechanism by which security related information about the service can be communicated to the consumers. WSDL is the standard for communicating any information about a service to the consumers; hence the WSDL of a service needs to be customized to capture the security details while adhering to the WSDL specifications.
No standard mechanism exists for communicating the security related information of a web service to all of its consumers, which raises the question of how to inform a consumer about the security credentials needed to invoke the service.
Assuming the WSDL does carry security related information, the next question that needs to be answered is how to generate client side proxies to access the service.
Normally, for consuming a web service, consumers prefer to use the proxy generation tools of the various SOAP engines to generate the proxies from the WSDL. However, these tools do not provide any mechanism to embed security related information in the proxy classes, even from a customized WSDL.
2.2.2. Using SOAP Headers in WSDL 1.1
The issue of communicating security related information to various clients can be solved to some extent by exposing the required security credentials in the WSDL in a standard way.
According to the WS-Security standard, security related information is embedded inside the SOAP Header (an optional element in the SOAP message). The WSDL 1.1 specification also provides a mechanism to embed SOAP Header related information, including Web Services Security (wsse) headers; Listing 2 gives a snapshot of the same. So if the WSDL 1.1 specification provides a way of inserting the SOAP Header information, then there should be a standard way of adding the security related information too.
<soap:header message="qname" part="nmtoken" use="literal|encoded"
             encodingStyle="uri-list"? namespace="uri"?>
  <soap:headerfault message="qname" part="nmtoken" use="literal|encoded"
                    encodingStyle="uri-list"? namespace="uri"?/>*
</soap:header>
Listing 2: WSDL Specification Snippet containing SOAP Headers
The "Customizing WSDL" tool mentioned in this paper is primarily with the objective of inserting security related information in the WSDL (w.r.t. WSDL specifications 1.1).
In the next section the above mentioned tool, as well as the WSDL customization process, is discussed in detail.
2.2.3. Customizing WSDL with security related information
The WSDL file for a particular web service can have multiple bindings, every binding can have multiple operations, and each operation can have a different security mechanism.
In order to run the complete customization process, the tool takes the following inputs:
- WSDL File which needs to be customized
- The binding for which the security information is provided
- Operation for which the security information has to be provided
- Type of Security Tokens - (User Name Password or Binary Tokens)
Depending on these parameters, the tool is able to modify the existing WSDL, add the security related information appropriately, and create a new WSDL out of it. The tool uses the open source WSDL4J implementation for parsing the existing WSDL file. It checks for the corresponding bindings and operations in the WSDL file, as mentioned in the inputs, and makes the required changes.
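The tool itself builds on WSDL4J in Java; purely as an illustration of the kind of edit involved, here is a minimal Python sketch using ElementTree that appends a SOAP header declaration to an operation's input element. The file name, operation name, and message/part values are assumptions for illustration, not the tool's actual code.

import xml.etree.ElementTree as ET

WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"
SOAP_NS = "http://schemas.xmlsoap.org/wsdl/soap/"

tree = ET.parse("service.wsdl")  # hypothetical input WSDL
root = tree.getroot()

# Walk the operations and patch the one to secure
# (operation, message, and part names here are assumptions)
for op in root.iter("{%s}operation" % WSDL_NS):
    if op.get("name") == "CalculateAddition":
        inp = op.find("{%s}input" % WSDL_NS)
        if inp is not None:
            header = ET.SubElement(inp, "{%s}header" % SOAP_NS)
            header.set("message", "tns:SecurityHeader")
            header.set("part", "UserNameToken")
            header.set("use", "literal")

# ElementTree generates its own namespace prefixes on write
tree.write("service-custom.wsdl")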
Figure 1 shows a WSDL file containing two operations for a particular binding, where the CalculateAddition operation is secured by a username/password, but this information is missing from the given WSDL.
Figure 1: WSDL of the service
Figure 2 shows the WSDL modified by our tool after adding the security details.
Figure 2: CalculateAddition Operation expects a WS-Security Username Token in the header of the incoming SOAP request
2.2.4. Tool Functionality
The GUI of the tool, which allows the user to do WSDL customization, is shown in Figure 3. Presently, the tool is packaged with Axis 1.2.1. The steps involved in using it are very simple. First, the user uploads the WSDL file. Vendor-specific WSDL information is also required; although WSDL is an open standard, this vendor specific information is needed due to some interoperability issues (see Section 3.2.2).
On uploading the WSDL, all the information related to bindings and operations is displayed. Depending on the requirement, the schema or the security related information is then provided.
Figure 3: User Interface of the TOOL which allows the user to upload and customize the dynamic generated WSDL
3. SOAP Engines support for Custom WSDL
3.1. Re-hosting the WSDL
After the existing WSDL file has been customized, there is a proper way of re-hosting the new modified WSDL.
Generally, any SOAP engine dynamically generates the WSDL of a Web Service based on the contents of its deployment descriptor file. Most SOAP engines provide a way to publish a static, customized WSDL. If the Web Service implementation is changed, one should also manually change the static WSDL file to reflect the changes made to the Web Service. Different SOAP engines have different implementations of re-hosting the WSDL file; the following sections discuss these in detail.
3.1.1. Re-hosting Implementation in AXIS 1.2.1
In order to publish a new static WSDL file in Axis on Tomcat 5.0.28, the deployment descriptor of the Web Service needs to be modified. An element wsdlFile needs to be inserted for the particular Web Service, containing the location of the static WSDL file, as shown in Listing 3. The WSDL file can be anywhere on disk, but it is recommended to keep it inside the respective directory or sub-directories of the classes folder. Then the server needs to be restarted. The change in the deployment descriptor file (server-config.wsdd) is shown in Listing 3.
<service name="RPCService" provider="java:RPC">
  <wsdlFile>CHANGEDWSDL.wsdl</wsdlFile>
  <parameter name="allowedMethods" value="*"/>
  <parameter name="className" value="RPCServices.Service"/>
</service>
Listing 3: CHANGEDWSDL.wsdl is the custom WSDL of the service
Similarly, the "Customizing Tool" mentioned in Section 2 also allows the user to re host the WSDL in axis. In fact, the tool makes required change in the deployment descriptor file and advises the user to restart the server in order to make the changes effective.
Figure 4 shows the user interface where one is asked to re-host the WSDL on Axis 1.2.1.
To date, the tool can only re-host the WSDL file on Axis. Future versions will re-host the customized WSDL on other vendor specific SOAP engines.
Figure 4: User Interface of the TOOL which allows the user to re host the customized static WSDL on Axis
3.1.2. Re-hosting Implementation in Websphere Application Server 5.1.2
In order to publish the new WSDL file on Websphere Application Server 5.1.2, the deployment descriptor file webservices.xml needs to be updated. The value of the wsdl-file element needs to be modified. Generally, it refers to the WSDL file under WEB-INF/wsdl (the default), which can be replaced with the new WSDL file. The purpose of re-publishing the WSDL file is to provide consumers with a description of the Web service, including the URL identifying the location of the service.
The WSDL files for each Web services-enabled module are published to the file system location one has specified. One can provide these WSDL files to consumers that want to invoke the Web services.
The WSDL file can be re-published in three different ways in Websphere:
- Using the Administrative Console
- Using the wsadmin command tool
- Publish it through a URI
Listing 4 depicts the changed deployment descriptor in Websphere Application Server using the URI Publish way.
<webservices id="WebServices_1132306198577">
  <webservice-description>
    <webservice-description-name>WEBSERVICEService</webservice-description-name>
    <wsdl-file>WEB-INF/wsdl/CHANGEDWSDL.wsdl</wsdl-file>
    <jaxrpc-mapping-file>WEB-INF/WEBSERVICE_mapping.xml</jaxrpc-mapping-file>
    [.........]
  </webservice-description>
</webservices>
Listing 4: Inclusion of Custom WSDL in Websphere Application Server
3.1.3. Re-hosting Implementation in Weblogic Application Server 8.1.2 SP4
In order to publish the new WSDL file on Weblogic Application Server 8.1.2 SP4, the web.xml file inside WEB-INF needs to be updated. (WEB-INF also contains the proprietary deployment descriptor file web-services.xml.) The <mime-mapping> element needs to be added to the web.xml file. The new WSDL can be placed inside the top-most directory of the Web Application, just above the WEB-INF directory. If the new WSDL is CHANGEDWEBLOGICEWEBSERVICEUSERNAMETOKEN.wsdl, then the entry in web.xml would be as shown in Listing 5.
<mime-mapping>
  <extension>wsdl</extension>
  <mime-type>text/xml</mime-type>
</mime-mapping>
Listing 5: Inclusion of Custom WSDL in Weblogic Application Server
3.2. Proxy Generation from custom WSDL
The earlier sections explained customizing WSDL files and publishing them on their respective application servers and SOAP engines.
Apart from this, the proxy generation tools of various vendors were also examined to check what kind of interfaces or client stubs were generated from the customized WSDL. The objective was to check whether the generated client stubs contain any information about the security details embedded inside the WSDL. Were the tools agile enough to include the SOAP header information in the stubs? The following section talks about that.
3.2.1. Proxy Generations
An existing web service was provided with the security feature. The web service requires a username token from its consumers to function properly. The security feature was implemented using Web Services Security for Java (wss4j), an open source implementation of WS-Security. In all the following cases, the security features of the web service were implemented using vendor specific handlers.
The tool used in Axis to generate the client stubs is called WSDL2Java. It is a command line utility that generates the necessary classes and interfaces with the WSDL as input. Strikingly, the generated client stubs do not contain any information about the SOAP header, its message content, or the element required to be embedded. The client stubs do include one extra class, SecurityFault, which was mentioned in the WSDL inside the header of the output element of the operation element.
But that does not say anything about what tokens or security credentials need to be passed by the client to access the service.
In Websphere Application Server a similar tool, also called WSDL2Java, does the client proxy generation. It too is a command line utility. It generated two interfaces defining the different methods but, apart from that, the interfaces did not give any information about the tokens/security credentials. In Weblogic 8.1, the tool used to generate the client stubs is called wsdl2service. It is an ant task, as shown in Listing 6.
<project name="buildWebservice" default="generate-from-WSDL">
  <target name="generate-from-WSDL">
    <wsdl2service wsdl="CHANGEDWSDL.wsdl"
                  destDir="wsdl/security"
                  packageName="caseStudy.security"/>
  </target>
</project>
Listing 6: Ant Script to generate the proxies from the WSDL
The ant task wsdl2service creates the necessary client stubs. In this particular case, it creates only a single interface, which contains information about the security tokens as well, as shown in Listing 7.
public java.lang.String calculateAddition(int iOne, int iTwo,
    weblogic.xml.schema.binding.util.runtime.SOAPElementHolder UserNameToken)
    throws java.rmi.RemoteException;
Listing 7: Client Stubs Generated by Ant Script
Apart from the stubs being generated, wsdl2service also generates the deployment descriptor file web-services.xml, which contains information about the security tokens, or rather the SOAP header information, as shown in Listing 8.
<param xmlns:param="..."> [...] </param>
Listing 8: Deployment Descriptor containing information about the SOAP headers
3.2.2. Interoperability Issue
In all these proxy generation tools there is an interoperability issue that needs to be addressed. In the case of the Axis and Websphere SOAP engines, the proxy generation tool parses the WSDL, resolves the wsse:SecurityHeader (via the namespace URI bound to xmlns:wsse), and generates the stubs. Listing 9 depicts the binding input of the WSDL, which mentions the security header.
<wsdl:input ...>
  <wsdlsoap:body ... />
  <wsdlsoap:header ...>
  </wsdlsoap:header>
</wsdl:input>
Listing 9: WSDL Binding Input exposing the Security Header Information
Likewise, if the same WSDL is given as input to generate the client stubs using the Weblogic tool wsdl2service, it will throw an error saying the SecurityHeader message is not declared. According to the WSDL 1.1 specification this should not be the case, because a WSDL that is valid for generating the client stubs for one SOAP engine should also work for the others.
So in this context, in Weblogic specifically, we have to declare the SecurityHeader message in the WSDL too.
That answers the question of why one needed to mention the WSDL vendor type while uploading the WSDL. Thus, the conclusion that can be drawn is that an interoperability issue exists which needs to be sorted out. On the other hand, the proxy generation tool provided by Weblogic is more robust, and agile enough to understand the customized WSDL: it lets the end user know that a security token must be passed in the client as a proper parameter to access the web service.
4. Best Practices to Customize the WSDL
It can be clearly seen that there is an interoperability issue in the way the WSDL is customized: the same customized WSDL does not generate the client stubs properly across all the respective proprietary proxy generation tools. The best way to deal with this issue is to create a generic customized WSDL such that the client stubs are generated correctly irrespective of the proprietary proxy generation tool.
The issues that have to be kept in mind while creating the custom WSDL, so that the interoperability issue does not arise, are as follows:
- Create the SecurityHeader message
- Create the part with name and element equal to the Security Token that needs to be embedded in the WSDL
- The prefix of all the relevant Security related namespaces should be properly declared. In this scenario, the prefix "wsse" is mainly required.
- The SecurityFault Message needs to be declared such that it can be used in the SOAP Header Fault
The points above are the basic issues; once they are sorted out, the customized WSDL can be generated easily, irrespective of the SOAP engine and the proxy generation tool.
5. Conclusions
Presently, the tool we have described customizes an existing WSDL file, embedding various custom information in a proper, standard way. The tool can be used as a plug-in in different enterprise web applications, and it can be utilized even more effectively as outlined in the Future Works section.
6. Future Works
A number of improvements are planned for the future; there are several important areas in which the tool can be improved and a better version implemented.
- The tool can be used at the client side to generate the client stubs which would contain the soap header related information (just as "wsdl2service" of Weblogic does)
In the above sections we have discussed in detail how Non-Functional Requirements (NFRs), mainly security, can be embedded in a WSDL while adhering to the WSDL 1.1 specification.
At the same time, WSDL 2.0 specification allows the Feature Component to carry NFR information like reliability, security, correlation, routing, etc.
The tool can be made more robust by adding a mechanism to migrate a customized WSDL (presently adhering to the WSDL 1.1 standard) to the WSDL 2.0 specification. Any enterprise application that wants to migrate its WSDL 1.1 descriptions to WSDL 2.0 could then use the tool as a plug-in and achieve this very easily.
7. References
[1] Web Services Definition Language (WSDL) 1.1,
[2] Web Services Definition Language (WSDL) 2.0,
[3] What's New in WSDL 2.0,
[4] Implementing WS-Security,
[5] Web services programming tips and tricks: Using SOAP headers with JAX-RPC,
[6] WS-Security Spec Nearing Completion,
About the Authors
Senthil Kumar K M works as a Technical Specialist with Web Services Centre of Excellence in SETLabs, Bangalore. His current focus is on Syndeo - Web services Management Framework. He has published papers in international conferences like IEEE International Conference of Web services and spoken at various industry forums exclusively on Web services. His current research interests are Web services security, Web services interoperability and Web services Transaction Models. Senthil holds a B.E. (Hons) degree in Computer Science and M.Sc (Hons) degree in Mathematics from the Birla Institute of Technology and Science, Pilani, India. He can be reached at Senthil_KM@infosys.com.
Anshuk Pal Chaudhuri holds a B.Tech degree from Kolkata. He has developed and deployed Web services on various J2EE compliant servers. His current focus is on Syndeo - Web services Management Framework. His current research interests include WS-Security and different open source products. He is also working on different binding frameworks for XML. He can be reached at Anshuk_PalChaudhuri@infosys.com.
Vineet Singh is a Software Engineer with Web Services Center of Excellence in SETLabs, Bangalore. His current focus is on Service Oriented Architecture, Web services over CRM. He has also implemented Web services on different application servers and worked on Syndeo - Web services bootstrap framework. He has been working on prevention and detection of XML based denial of service attack on Web services. He can be reached at vineet_singh01@infosys.com.
|
http://www.theserverside.com/news/1365336/An-Approach-to-Web-Services-Non-Functional-Requirements-Using-WSDL-Annotations
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
{-# OPTIONS_GHC -XNoImplicitPrelude #-}
{-# OPTIONS_GHC -fno-warn-missing-signatures #-}

-- Recoverable fragments of the GHC.Conc.Sync source page; the STM part of
-- the module's export list reads:
--   atomically     -- :: STM a -> IO a
--   retry          -- :: STM a
--   orElse         -- :: STM a -> STM a -> STM a
--   throwSTM       -- :: Exception e => e -> STM a
--   catchSTM       -- :: Exception e => STM a -> (e -> STM a) -> STM a
--   alwaysSucceeds -- :: STM a -> STM ()
--   always         -- :: STM Bool -> STM ()

real_handler :: SomeException -> IO ()
real_handler se@(SomeException ex) =
  -- ignore thread GC and killThread exceptions:
  case cast ex of
    Just BlockedIndefinitelyOnMVar -> return ()
    _ -> case cast ex of
           Just BlockedIndefinitelyOnSTM -> return ()
           _ -> case cast ex of
                  Just ThreadKilled -> return ()
                  _ -> case cast ex of
                         -- report all others:
                         Just StackOverflow -> reportStackOverflow
                         _ -> reportError se

instance MonadPlus STM where
  mzero = retry
  mplus = orElse

atomically :: STM a -> IO a
atomically (STM m) = IO (\s -> (atomically# m) s)

-- | alwaysSucceeds adds a new invariant that must be true when passed
-- to alwaysSucceeds, at the end of the current transaction, and at
-- the end of every subsequent transaction. If it fails at any
-- of those points then the transaction violating it is aborted
-- and the exception raised by the invariant is propagated.
alwaysSucceeds :: STM a -> STM ()
|
http://hackage.haskell.org/package/base-4.3.1.0/docs/src/GHC-Conc-Sync.html
|
CC-MAIN-2014-52
|
en
|
refinedweb
|
Created on 2021-04-08 11:09 by larry, last changed 2021-04-11 03:00 by gvanrossum. This issue is now closed.
The implementation of the | operator for TypeVar objects is as follows:
def __or__(self, right):
    return Union[self, right]

def __ror__(self, right):
    return Union[self, right]
I think the implementation of __ror__ is ever-so-slightly inaccurate. Shouldn't it be this?
def __ror__(self, left):
    return Union[left, self]
I assume this wouldn't affect runtime behavior, as unions are sets and are presumably unordered. The only observable difference should be in the repr() (and only then if both are non-None), as this reverses the elements. The repr for Union does preserve the order of the elements it contains, so it's visible to the user there.
Yes.
--Guido (mobile)
New changeset 9045919bfa820379a66ea67219f79ef6d9ecab49 by Jelle Zijlstra in branch 'master':
bpo-43772: Fix TypeVar.__ror__ (GH-25339)
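For anyone reading along, a quick way to see what the fix changes is to compare the reprs. This snippet is illustrative, not from the issue, and assumes Python 3.10+ (where TypeVar supports the | operator):

from typing import TypeVar

T = TypeVar("T")

# __or__ keeps left-to-right order: prints typing.Union[~T, int]
print(T | int)

# __ror__ is the reversed-operand hook; with the fix, int | T prints
# typing.Union[int, ~T] -- before the fix it printed typing.Union[~T, int]
print(int | T)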
|
https://bugs.python.org/issue43772
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
Hi
I'm making a tiny plugin that grabs an environment variable from the OS and sets it as a texture path in the preferences: Files -> Paths -> File Assets -> Paths
The plugin prints that the path has been set but the path is not listed in prefs.
import c4d
import os

def PluginMessage(id, data):
    if id == c4d.C4DPL_PROGRAM_STARTED:
        # The call here was truncated in the original post; from the console
        # output later in the thread it sets and prints the texture paths:
        c4d.SetGlobalTexturePaths([['S:/_3D_Central/Maxon/tex/GSG_EMC_Redshift', True]])
        print(c4d.GetGlobalTexturePaths())
        c4d.EventAdd()
When I run the code as a script in the editor, it works and the path shows in the list in prefs
import c4d
import os

def main():
    # Truncated in the original post; presumably the same calls as the plugin:
    c4d.SetGlobalTexturePaths([['S:/_3D_Central/Maxon/tex/GSG_EMC_Redshift', True]])
    print(c4d.GetGlobalTexturePaths())

if __name__=='__main__':
    main()
Any help appreciated
Regards
Bonsak
Hi @bonsak, I was not able to reproduce your issue, here on R20.026 SP1 both codes are working fine.
In which version of C4D are you?
That's weird.
I'm on R20.030 RB257898 with win 10.0.17134 and nvidia 391.35
The problem is that even though the variable is defined...
c4d.GetGlobalTexturePaths()
[['S:\\_3D_Central\\Maxon\\tex\\GSG_EMC_Redshift', True]]
...it does not work. C4D will only find textures in that directory if the path is actually visible in the list in prefs.
Regards
Bonsak
I did some more testing and the variable is actually not getting set from the plugin. It only works when I run the script version manually after startup.
The strange thing is that when the plugin is executed it prints the result of c4d.GetGlobalTexturePaths(), and that does return the path, but when I run c4d.GetGlobalTexturePaths() in the python console after startup, once the plugin has finished running, it returns an empty list "[]". And the list is empty in prefs.
Why is that?
Hi @bonsak, I'm afraid you have another script which overrides it.
Here on a fresh R20.030 RB257898 it's working nicely.
Moreover, please consider adding your path only if it is not already present, and please do not erase all paths (the user may already have some set up).
You can find an example which handles this properly in GitHub: globaltexturepaths.py.
Cheers,
Maxime.
Good for you
If I implement the script you linked to in my plugin like this:
import c4d
from c4d import storage
import os

def PluginMessage(id, data):
    if id == c4d.C4DPL_PROGRAM_STARTED:
        desktopPath = storage.GeGetC4DPath(c4d.C4D_PATH_DESKTOP)
        homePath = storage.GeGetC4DPath(c4d.C4D_PATH_HOME)
        # Gets global texture paths
        paths = c4d.GetGlobalTexturePaths()
        # Checks if the paths already exist in the global paths
        desktopPathFound = False
        homePathFound = False
        for path, enabled in paths:
            if os.path.normpath(path) == os.path.normpath(desktopPath):
                desktopPathFound = True
            if os.path.normpath(path) == os.path.normpath(homePath):  # was homePathFound in the post, a typo
                homePathFound = True
        # If paths are not found then add them to the global paths
        if not desktopPathFound:
            paths.append([desktopPath, True])
        if not homePathFound:
            paths.append([homePath, True])
        paths.append(['S:/_3D_Central/Maxon/tex', True])
        # Sets global texture paths
        c4d.SetGlobalTexturePaths(paths)
        # Prints global texture paths
        print(c4d.GetGlobalTexturePaths())
it does print that all three paths are set, but none of them are visible in the prefs.
I need to do this from a plugin as it will be used to set render slaves' texture environments on the fly from job to job.
I just tried the plugin on some other machines in the studio and it doesn't show newly set paths in prefs there either.
Do you have any other plugins installed? Please remove them to test.
As I said it's most likely you get another plugin which overrides the path.
Omg! I had an old version of the plugin defined in the Plugins list in prefs that set the paths to [].
Blush Deluxe!
Sorry for wasting your time. Works perfectly fine.
|
https://plugincafe.maxon.net/topic/11156/setting-texture-paths-with-plugin-on-startup/?
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
When function1 has finished successfully, then run function2. But this needs to be inside function2, not as an if statement outside of the functions. How do I check for the successful finish of function1?
EDIT
import time

def function1():
    print('This is function1!')
    time.sleep(5)
    print('Function1 is successful')

function1()

def function2():
    if function1 is success:  # this part I don't get (How to write this line?)
        print('This is something about function1')
    else:
        pass

function2()
Answer
Use a global variable. Set it in function1, and check it in function2.
function1_success = False

def function1():
    global function1_success
    # do stuff ...
    if success:
        function1_success = True

def function2():
    global function1_success
    if function1_success:
        # do stuff ...
        pass
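A global works, but returning a value avoids the shared state; as a minimal alternative sketch (not part of the original answer):

def function1():
    # do stuff ...
    return True  # report success to the caller

def function2(function1_succeeded):
    if function1_succeeded:
        # do stuff ...
        pass

function2(function1())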
|
https://www.tutorialguruji.com/python/how-to-call-functions-after-previous-function-is-completed-in-python/
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
In this blog, we will learn how to implement the transition effect in OpenCV.
Let’s Code Transition Effect in OpenCV!
Steps for Transition Effect in OpenCV
- Load two images, which will take part in the transition effect.
- Image manipulation (If you want a certain effect)
- Transition between the two images using addWeighted() OpenCV function.
Transition Effect Algorithm
Import Packages
For Implementing the Transition effect in OpenCV Python, we will have to import the Computer Vision package.
import cv2 as cv
The second package is NumPy for working with image matrix and creation of nD arrays.
For the transition effect, NumPy is very important as it is used in the creation of the rotation matrix and also for arithmetic and logical operations on image matrices.
import numpy as np
We also need to import the time package, as it helps pace the transition effect.
import time
Load Images
Read images in OpenCV using imread() function. Read two images that will take part in the transition effect.
img1 = cv.imread('./img1.jpg')
img2 = cv.imread('./img2.jpg')
Blending Logic
There is a function in OpenCV that helps blend two images on basis of percentage.
addWeighted() function in OpenCV takes five parameters:
- First Image
- Alpha value (Opacity for the first image)
- Second Image
- Beta Value (Opacity for the second image)
- Gamma Value (Weight for every pixel after blend, 0 for normal output)
The alpha value represents the opacity value of the first image.
The beta value represents the opacity value of the second image.
The gamma value represents the weight added to every pixel after the blend.
Maths behind the function
We have to make the percentage such that it follows the rule of:
alpha + beta = 1
If we choose an alpha value of 0.7, i.e. 70%, the beta value should then be 0.3, i.e. 30%:
0.7 + 0.3 = 1.0
Creating Transition Effect
np.linspace() is a NumPy function for generating linearly spaced numbers between two numbers.
np.linspace(0, 10, 5) will generate 5 evenly spaced numbers between 0 and 10: 0, 2.5, 5, 7.5, 10.
We will use linspace function in the loop to generate different values for alpha and beta for the opacity of the images.
for i in np.linspace(0, 1, 100):
    alpha = i
    beta = 1 - alpha
    output = cv.addWeighted(img1, alpha, img2, beta, 0)
Alpha is assigned the value of i, which changes in every iteration.
The beta value will also change with each iteration as beta depends on the value of alpha. Beta = 1 – alpha.
But, the alpha and beta values will always sum to 1.
The addWeighted() function then takes the two images, alpha, beta, and gamma values to generate a new blend image.
This process continues till the loop ends or we forcefully end the process with an 'ESC' keypress.
Simple Transition Effect Source code
import cv2 as cv
import numpy as np
import time

while True:
    img1 = cv.imread('./img1.jpg')
    img2 = cv.imread('./img2.jpg')
    for i in np.linspace(0, 1, 100):
        alpha = i
        beta = 1 - alpha
        output = cv.addWeighted(img1, alpha, img2, beta, 0)
        cv.imshow('Transition Effect ', output)
        time.sleep(0.02)
        if cv.waitKey(1) == 27:  # ESC breaks the inner loop
            break

cv.destroyAllWindows()
Create Trackbar for Transition Effect in OpenCV
Earlier we were dependent on the loop to see the transition effect.
We can also create a trackbar that will control the alpha value and on that basis, the transition will be applied.
You can change the range of the trackbar values in order to get a smoother or faster transition.
Change sleep time as well when you change the range of the trackbar.
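The trackbar code itself is not shown in the article; a rough sketch of how it might look (assuming the same img1.jpg and img2.jpg files, of equal size) is:

import cv2 as cv

img1 = cv.imread('./img1.jpg')
img2 = cv.imread('./img2.jpg')

cv.namedWindow('Transition Effect')
# Trackbar range 0-100 maps to alpha 0.0-1.0; widen the range for finer steps
cv.createTrackbar('alpha', 'Transition Effect', 0, 100, lambda v: None)

while True:
    alpha = cv.getTrackbarPos('alpha', 'Transition Effect') / 100.0
    beta = 1 - alpha
    output = cv.addWeighted(img1, alpha, img2, beta, 0)
    cv.imshow('Transition Effect', output)
    if cv.waitKey(20) == 27:  # ESC to quit
        break

cv.destroyAllWindows()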
Transition Effect OpenCV Documentation
Learn more about the transition and blending of OpenCV functions from official OpenCV Documentation.
|
https://hackthedeveloper.com/transition-effect-opencv-python/
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
Trying to send mail from an Amazon EC2 server with Java code, but getting an exception like:
Exception in thread "main" Status Code: 403, AWS Request ID: 3e9319ec-bc62-11e1-b2ea-6bde1b4f192c, AWS Error Code: AccessDenied, AWS Error Message: User: arn:aws:iam::696355342546:user/brandzter is not authorized to perform: ses:SendEmail
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:500)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:262)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:166)
at com.amazonaws.services.simpleemail.AmazonSimpleEmailServiceClient.invoke(AmazonSimpleEmailServiceClient.java:447)
at com.amazonaws.services.simpleemail.AmazonSimpleEmailServiceClient.sendEmail(AmazonSimpleEmailServiceClient.java:242)
at brandzter.util.SESExample.SendMail(SESExample.java:46)
at brandzter.util.SESExample.<init>(SESExample.java:31)
at brandzter.util.SESExample.main(SESExample.java:52)
There is no problem with the credentials, so I don't know why I am not able to send mail and why I am getting this error.
The error message means the IAM user is not authorized to perform the ses:SendEmail action. You can add a group to IAM that is allowed just the SendEmail action, with the following policy:
{
  "Statement": [
    {
      "Action": [
        "ses:SendEmail"
      ],
      "Effect": "Allow",
      "Resource": [
        "*"
      ]
    }
  ]
}
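Once a policy like this is attached to the user (or to a group containing the user), the SendEmail call should be authorized. For reference, a minimal sketch of the same call in Python with boto3; the question uses the Java SDK, and the region and addresses below are assumptions:

import boto3

# Assumes credentials for the IAM user are configured via env vars,
# ~/.aws/credentials, or an instance role, and that the addresses are
# verified if the account is still in the SES sandbox.
ses = boto3.client("ses", region_name="us-east-1")

response = ses.send_email(
    Source="sender@example.com",
    Destination={"ToAddresses": ["recipient@example.com"]},
    Message={
        "Subject": {"Data": "Test from EC2"},
        "Body": {"Text": {"Data": "Hello from SES"}},
    },
)
print(response["MessageId"])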
|
https://www.edureka.co/community/31829/unable-to-send-email-from-amazon-ec2-server-in-java
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
This post helps you find the receiving threshold values of various propagation models for a certain communication distance. By default, ns2 uses a distance of 1.5m (the antenna is placed 1.5m above the node ground). If someone needs to calculate the factor when the antenna is placed 10m above the ground, the default value changes. NS2 has an inbuilt mechanism to calculate the threshold for a certain communication range using the threshold.cc file in ~ns-2.35/indep-utils/propagation/. The file has no OTcl linkages; it is a conventional C++ file that can be compiled using g++. Before compiling the file, make these changes in threshold.cc:
1. Change #include <iostream.h> to #include <iostream>
2. Include the following two lines:
   #include <string.h>
   using namespace std;
Once the changes are made, compile the file using the command
g++ -o threshold threshold.cc
(The above
It's all about Network Simulations, Internet of Things, Sensor Networks, Programming, etc.
|
https://www.nsnam.com/2014/03/
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
libpfm_intel_snbep_unc_qpi — support for Intel Sandy Bridge-EP QPI uncore PMU
Synopsis
#include <perfmon/pfmlib.h> PMU name: snbep_unc_qpi0, snbep_unc_qpi1 PMU desc: Intel Sandy Bridge-EP QPI uncore PMU
Description
The library supports the Intel Sandy Bridge-EP QPI uncore PMU. This PMU model only exists on Sandy Bridge model 45. There are two QPI PMUs per processor socket.
Modifiers
The following modifiers are supported on Intel Sandy Bridge QPI uncore PMU:
- i
Invert the meaning of the event. The counter will now count QPI cycles in which the number of occurrences of the event is greater or equal to the threshold. This is an integer modifier with values in the range [0:255].
|
https://dashdash.io/3/libpfm_intel_snbep_unc_qpi
|
CC-MAIN-2021-43
|
en
|
refinedweb
|