I am writing a program that utilises a large number of files. Does Python feature an inbuilt class for file paths, or does one have to be implemented by the user (like below)?

class FilePath:
    def __init__(self, path):
        self.shazam(path)

    def shazam(self, path):
        """ something happens, path is formatted, etc """
        self.formatted_path = foobar

Python has several cross-platform modules for dealing with the file system, paths, and the operating system. The os module specifically has an os.sep character. os.path.join() is OS-aware and will use the correct separator when joining paths together. Additionally, os.path.normpath() will take any path and convert the separators to whatever the native OS supports.
https://codedump.io/share/J37fOIo1Gwgw/1/python-dedicated-class-for-file-names
CC-MAIN-2018-26
en
refinedweb
IndiGo's President Aditya Ghosh today said the government's proposal to cap airfares on regional routes at Rs 2,500 is a holistic move to expand the market, on a day when the shares of the airline's parent company InterGlobe Aviation soared 17 per cent on debut trade. (PTI)

The stock's performance even "beat my own expectations", he said on the sidelines of the listing ceremony of the InterGlobe shares. In afternoon trade, shares of InterGlobe rose nearly 17 per cent to Rs 892.95 on BSE. "It's fair on the part of the government to seek something in return (to help develop a larger market). In isolation, the capping of ticket prices and the 2 per cent cess on air tickets may not look fair, but on the whole it will help develop better air connectivity," Ghosh said. In the draft aviation policy, the government has proposed a cap of Rs 2,500 on airfares on regional routes, besides a 2 per cent levy on all air tickets, to boost air connectivity.

Ghosh said the new regional connectivity plan would help open up huge opportunities for regional airlines, but was quick to add that his airline has no immediate plan to enter the sector, as its present plane types do not suit it. "Entering the regional aviation space does not make sense for us now as our current planes are not economically suited for that," he said. He also added that IndiGo has no plan to expand its international operations as of now.

Noting that the airline, launched on August 6, 2006, has grown so fast in scale of operations, complexity and delivery, Ghosh said, "the risk is not about demand, which is already there aplenty. But the risk is about execution and meeting those demands." On the new proposed aviation policy, he said he would prefer "a policy and regulatory environment that is fair to everyone in the industry and not just a few. What we need is a fair policy environment that gives a level playing field to all stakeholders, not one where some get pitched against the rest and vice versa."

When asked whether the company's net worth has returned to positive territory, Ghosh answered in the affirmative but refused to share the number, saying listing-related regulatory requirements do not allow sharing of specific financial information. While filing for the IPO, the company had said its net worth had turned Rs 439 crore negative as of June 30, the day it filed for the IPO, after the management and promoters took out a huge Rs 1,500 crore in dividends.

When asked whether he is a relieved man today, with the stock not doing an encore of the Cafe Coffee Day stock (which tanked 18 per cent on its listing day) given the negative sentiment across the markets, he said, "for the past few months, we hardly slept for 3-4 hours. We were asked those questions by regulators, analysts and i-bankers, which we thought never existed. But this huge success to our IPO and listing also increases the pressure and responsibility on us, as we have to now get back to the public every three months with all the facts," Ghosh, who has been with the airline from day one, said.
However, he was quick to add that "IndiGo has a battle-ready management and will overcome every challenge as in the past, and will deliver much more than what we have been delivering so far." He also described the IPO process as the toughest point in his days at the airline.

Today, IndiGo has 98 planes in service and 430 more on order, the largest order for any airline in the world. At the peak of the global crisis, it had ordered 180 Airbus planes, which it followed up with another order of 250 Airbus planes in August last year. Of the 98 planes in operation, as many as 75 are on operating lease, a business model which has helped it lower costs.

The airline's counter opened at a 17 per cent premium to the issue price of Rs 765 today on the NSE, and touched a high of Rs 898, a sharp gain of 17.38 per cent, taking its market valuation to over Rs 31,702.4 crore. The company's IPO, the biggest in nearly three years and the first from the sector since 2005, had elicited a robust response, as the issue was over-subscribed 6.15 times last month. This was the biggest IPO in the market since Bharti Infratel's over Rs 4,000-crore public offer in December 2012. InterGlobe has raised Rs 3,008.5 crore at an issue price of Rs 765 per share from its recently concluded, over-subscribed initial public offer. Qualified institutional buyers lapped up the issue 17.80 times, while the portion for non-institutional investors saw 3.57 times subscription. The category set aside for retail investors was subscribed 92 per cent.
https://www.financialexpress.com/economy/proposed-airfare-cap-on-regional-routes-a-holistic-move-indigo-president-aditya-ghosh/163989/
CC-MAIN-2018-26
en
refinedweb
Artifact d9f3f38a645379ab75eff8ab4925fa2d678944c7:
- File tclreadline.n

.TH tclreadline n "@PATCHLEVEL_STR@" "Johannes Zellner"
.\" FILE: "/home/joze/src/tclreadline/tclreadline.n.in"
.\" LAST MODIFICATION: "Mit, 10 Jan 2001 06:29:33 +0100 (joze)"
.\" (C) 1998 - 2001 by Johannes Zellner, <johannes@zellner.org>
.\" $Id$
.\" ---
.\" tclreadline -- gnu readline for tcl
.\"
.\" Copyright (c) 1998 - 2001, Johannes Zellner <johannes@zellner.org>
.\" This software is copyright under the BSD license.
.\"
.PP
The following list gives all commands which are currently implemented in the shared lib (e.g. libtclreadline@TCLREADLINE_VERSION@.so).
.TP 5
\fB::tclreadline::readline bell\fP
Ring the terminal bell, obeying the setting of bell-style -- audible or visible.
.TP 5
\fB::tclreadline::readline text\fP
Return the current input.
.TP 5
\fB::tclreadline::readline update\fP
Redraw the current input line.
.TP 5
\fB::tclreadline::Loop\fP [\fIhistoryfile\fP]
Enter the tclreadline main loop. Loop uses a default prompt proc ::tclreadline::Loop, if it is not already defined. So: If you define your own proc ::tclreadline::prompt1 before entering ::tclreadline::Loop, this proc is called each time the prompt is to be displayed. Example:
.CS
package require tclreadline
namespace eval tclreadline {
    proc prompt1 {} {
        return "[clock format [clock seconds]]> "
    }
}
::tclreadline::Loop
.CE
.\" .SH "EXAMPLES"
.\" .SH "ENVIRONMENT VARIABLES"
.SH "VARIABLES"
.TP 5
\fBtclreadline::version\fP
(read-only) holds the version string "@TCLREADLINE_VERSION@".
.TP 5
\fBtclreadline::patchLevel\fP
(read-only) holds the patch level string "@PATCHLEVEL_STR@".
.TP 5
\fBtclreadline::library\fP
(read-only) holds the library string "@TCLREADLINE_LIBRARY@".
.TP 5
\fBtclreadline::license\fP
(read-only) holds a BSD license statement.
.SH "DEBIAN PACKAGE"
David Engel <dlengel@home.com>, <david@debian.org>
.SH "DISCLAIMER"
\fBtclreadline\fP comes with a BSD type license. The read-only variable tclreadline::license holds the complete license statement.
http://chiselapp.com/user/rkeene/repository/tclreadline/artifact/d9f3f38a645379ab
CC-MAIN-2018-26
en
refinedweb
/**
 * JavaDocRule
 * @author Pavel Vlasov
 * @version $Revision: 1.2 $
 */
public class JavaDocRuleFixTestCase {

    /**
     * Field documentation
     */
    public int z;

    /**
     * Method
     * documentation
     */
    public void doIt() {

    }
}
http://kickjava.com/src/org/hammurapi/inspectors/testcases/fixes/JavaDocRuleFixTestCase.java.htm
CC-MAIN-2018-26
en
refinedweb
__SETFPUCW(3)              Linux Programmer's Manual              __SETFPUCW(3)

NAME
       __setfpucw - set FPU control word on i386 architecture (obsolete)

SYNOPSIS
       #include <i386/fpu_control.h>

       void __setfpucw(unsigned short control_word);

DESCRIPTION
       __setfpucw() transfers control_word to the registers of the FPU
       (floating-point unit) on the i386 architecture. This was used to
       control floating-point precision, rounding and floating-point
       exceptions.

CONFORMING TO
       This function was a nonstandard GNU extension.

NOTES
       As of glibc 2.1 this function does not exist anymore. There are new
       functions from C99, with prototypes in <fenv.h>, to control FPU
       rounding modes, like fegetround(3), fesetround(3), and the floating-
       point environment, like fegetenv(3), feholdexcept(3), fesetenv(3),
       feupdateenv(3), and FPU exception handling, like feclearexcept(3),
       fegetexcept(3).

EXAMPLE
       __setfpucw(0x1372)

       Set FPU control word on the i386 architecture to
         - extended precision
         - rounding to nearest
         - exceptions on overflow, zero divide and NaN

SEE ALSO
       feclearexcept(3)

       <fpu_control.h>

COLOPHON
       This page is part of release 4.16 of the Linux man-pages project. A
       description of the project, information about reporting bugs, and the
       latest version of this page, can be found at
       https://www.kernel.org/doc/man-pages/.

Linux                             2017-09-15                     __SETFPUCW(3)
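On current systems, the <fenv.h> interfaces listed above replace __setfpucw(). Here is a minimal C99 sketch of my own (not taken from this page); on strict compilers, access to the floating-point environment may additionally require #pragma STDC FENV_ACCESS ON, and glibc needs -lm at link time:

#include <fenv.h>
#include <stdio.h>

int main(void)
{
    /* Rounding control, formerly done by poking the FPU control word: */
    fesetround(FE_TONEAREST);          /* round to nearest */

    feclearexcept(FE_ALL_EXCEPT);      /* start with a clean slate */

    volatile double x = 1.0, y = 3.0;
    double z = x / y;                  /* 1/3 is inexact in binary */

    if (fetestexcept(FE_INEXACT))
        printf("FE_INEXACT was raised; z = %.17g\n", z);
    return 0;
}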
http://man7.org/linux/man-pages/man3/__setfpucw.3.html
CC-MAIN-2018-26
en
refinedweb
Send a message to a channel

#include <sys/neutrino.h>

int MsgSendvs( int coid,
               const iov_t* siov,
               int sparts,
               void* rmsg,
               int rbytes );

int MsgSendvs_r( int coid,
                 const iov_t* siov,
                 int sparts,
                 void* rmsg,
                 int rbytes );

Library: libc. Use the -l c option to qcc to link against this library. This library is usually included automatically.

The MsgSendvs() and MsgSendvs_r() functions are identical except in the way they indicate errors. MsgSendvs() is a cancellation point for the ThreadCancel() kernel call; MsgSendvsnc() isn't. That cancellation behavior is the only difference between MsgSendvs() and MsgSendvsnc().

See also: MsgSendvsnc(), name_open(), TimerTimeout(), and the Message Passing chapter of Getting Started with QNX Neutrino.
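Here is a minimal usage sketch under stated assumptions: the two-part message layout and the send_header_and_payload() wrapper are hypothetical, and the connection ID coid is assumed to have been obtained earlier (for example, via ConnectAttach() or name_open()); it is not an example from this reference page:

#include <sys/neutrino.h>
#include <stdio.h>

/* Gather a fixed header and a payload into two IOV parts and
 * send them as one message with MsgSendvs(); the reply is
 * received into a single contiguous buffer. */
int send_header_and_payload(int coid)
{
    struct { int type; int len; } hdr = { 1, 6 };
    char payload[] = "hello";
    iov_t siov[2];
    char reply[64];

    SETIOV(&siov[0], &hdr, sizeof hdr);        /* part 1: header */
    SETIOV(&siov[1], payload, sizeof payload); /* part 2: data   */

    /* Blocks until the server replies (SEND- then REPLY-blocked). */
    if (MsgSendvs(coid, siov, 2, reply, sizeof reply) == -1) {
        perror("MsgSendvs");
        return -1;
    }
    return 0;
}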
https://www.qnx.com/developers/docs/6.4.1/neutrino/lib_ref/m/msgsendvs.html
CC-MAIN-2018-26
en
refinedweb
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading.Tasks;

namespace pigLatin
{
    class PigLatin
    {
        static void Main(string[] args)
        {
            string english = Convert.ToString("txtEnglsih.Text");
            string Vowels = "AEIOUaeiou";
            string firstLetter = null;
            string restOftheWord;
            string piglatin;

            Console.WriteLine("");
            foreach (char word in english)
            {
                char c = word;
                english = firstLetter.Remove(c);
                restOftheWord = firstLetter + "ay";
                // if consonant
                if (firstLetter != Vowels)
                {
                    piglatin = restOftheWord + firstLetter + "AY";
                }
                else
                {
                    piglatin = word + "AY";
                }
                Console.ReadLine();
            }
        }
    }
}

One error is bugging me. In the text file I wrote down the word "Cat" to read it in pig latin; it went through the program, and an error popped up on line 29 saying unassigned local variable "firstLetter", so I assigned it null. Then, when I started debugging, a yellow line was marked on line 29 and a small bar popped up on the side saying a NullReferenceException was unhandled. Any help would be great. Thanks a lot.
https://www.dreamincode.net/forums/topic/292919-error/page__st__15
CC-MAIN-2018-26
en
refinedweb
#include <FFPlanNode.h>

FFPlanNode class reference. Definition at line 6 of file FFPlanNode.h; the constructor is defined at line 8 of FFPlanNode.h.

postStart(): Called by start() after the doStart(), allows superclasses to complete initialization. For robustness to future change, subclasses should be sure to call the superclass implementation. Definition at line 6 of file FFPlanNode.cc.

Protected members are defined at lines 15-22 of FFPlanNode.h and at lines 16-37 of FFPlanNode.cc; they are referenced by plan(), launchFF(), postStart(), doEvent(), and getResult().
http://tekkotsu.org/dox/classFFPlanNode.html
CC-MAIN-2018-26
en
refinedweb
Dealing with Data

In this chapter you'll learn about the following:

- Rules for naming C++ variables
- C++'s built-in integer types: unsigned long, long, unsigned int, int, unsigned short, short, char, unsigned char, signed char, and bool
- The climits file, which represents system limits for various integer types
- Numeric constants of various integer types
- Using the const qualifier to create symbolic constants
- C++'s built-in floating-point types: float, double, and long double
- The cfloat file, which represents system limits for various floating-point types
- Numeric constants of various floating-point types
- C++'s arithmetic operators
- Automatic type conversions
- Forced type conversions (type casts)

The essence of object-oriented programming (OOP) is designing and extending your own data types. Designing your own data types represents an effort to make a type match the data. If you do this properly, you'll find it much simpler to work with the data later. But before you can create your own types, you must know and understand the types that are built in to C++, because those types will be your building blocks.

The built-in C++ types come in two groups: fundamental types and compound types. In this chapter you'll meet the fundamental types, which represent integers and floating-point numbers. That might sound like just two types; however, C++ recognizes that no one integer type and no one floating-point type match all programming requirements, so it offers several variants on these two data themes. Chapter 4, "Compound Types," follows up by covering several types that are built on the basic types; these additional compound types include arrays, strings, pointers, and structures.

Of course, a program also needs a means to identify stored data. In this chapter you'll examine one method for doing so: using variables. Then you'll look at how to do arithmetic in C++. Finally, you'll see how C++ converts values from one type to another.

Simple Variables

Programs typically need to store information: perhaps the current price of IBM stock, the average humidity in New York City in August, the most common letter in the U.S. Constitution and its relative frequency, or the number of available Elvis impersonators. To store an item of information in a computer, the program must keep track of three fundamental properties:

- Where the information is stored
- What value is kept there
- What kind of information is stored

The strategy the examples in this book have used so far is to declare a variable. The type used in the declaration describes the kind of information, and the variable name represents the value symbolically. For example, suppose Chief Lab Assistant Igor uses the following statements:

int braincount;
braincount = 5;

These statements tell the program that it is storing an integer and that the name braincount represents the integer's value, 5 in this case. In essence, the program locates a chunk of memory large enough to hold an integer, notes the location, assigns the label braincount to the location, and copies the value 5 into the location. These statements don't tell you (or Igor) where in memory the value is stored, but the program does keep track of that information, too. Indeed, you can use the & operator to retrieve braincount's address in memory. You'll learn about that operator in the next chapter, when you investigate a second strategy for identifying data: using pointers.
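Here is a minimal sketch of that preview (my own, not one of the book's numbered listings); the address it prints is implementation dependent and varies from run to run:

// address.cpp -- peeking at where braincount lives
#include <iostream>
int main()
{
    using namespace std;
    int braincount;
    braincount = 5;
    cout << "braincount holds " << braincount << endl;
    cout << "and is stored at address " << &braincount << endl;
    return 0;
}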
Names for Variables

C++ encourages you to use meaningful names for variables. If a variable represents the cost of a trip, you should call it cost_of_trip or costOfTrip, not just x or cot. You do have to follow a few simple C++ naming rules:

- The only characters you can use in names are alphabetic characters, numeric digits, and the underscore (_) character.
- The first character in a name cannot be a numeric digit.
- Uppercase characters are considered distinct from lowercase characters.
- You can't use a C++ keyword for a name.
- Names beginning with two underscore characters or with an underscore character followed by an uppercase letter are reserved for use by the implementation, that is, the compiler and the resources it uses.
- Names beginning with a single underscore character are reserved for use as global identifiers by the implementation.
- C++ places no limits on the length of a name, and all characters in a name are significant.

The next-to-last point is a bit different from the preceding points because using a name such as __time_stop or _Donut doesn't produce a compiler error; instead, it leads to undefined behavior. In other words, there's no telling what the result will be. The reason there is no compiler error is that the names are not illegal but rather are reserved for the implementation to use. The bit about global names refers to where the names are declared; Chapter 4 touches on that topic.

Here are some valid and invalid C++ names:

int poodle;    // valid
int Poodle;    // valid and distinct from poodle
int POODLE;    // valid and even more distinct
Int terrier;   // invalid -- has to be int, not Int
int my_stars3; // valid
int _Mystars3; // valid but reserved -- starts with underscore
int 4ever;     // invalid because starts with a digit
int double;    // invalid -- double is a C++ keyword
int begin;     // valid -- begin is a Pascal keyword
int __fools;   // valid but reserved -- starts with two underscores
int the_very_best_variable_i_can_be_version_112; // valid
int honky-tonk; // invalid -- no hyphens allowed

If you want to form a name from two or more words, the usual practice is to separate the words with an underscore character, as in my_onions, or to capitalize the initial character of each word after the first, as in myEyeTooth. (C veterans tend to use the underscore method in the C tradition, whereas Pascalians prefer the capitalization approach.) Either form makes it easier to see the individual words and to distinguish between, say, carDrip and cardRip, or boat_sport and boats_port.

Real-World Note: Variable Names

Schemes for naming variables, like schemes for naming functions, provide fertile ground for fervid discussion. Indeed, this topic produces some of the most strident disagreements in programming. Again, as with function names, the C++ compiler doesn't care about your variable names as long as they are within legal limits, but a consistent, precise personal naming convention will serve you well. As in function naming, capitalization is a key issue in variable naming (see the sidebar "Naming Conventions" in Chapter 2, "Setting Out to C++"), but many programmers may insert an additional level of information in a variable name: a prefix that describes the variable's type or contents.
For instance, the integer myWeight might be named nMyWeight; here, the n prefix is used to represent an integer value, which is useful when you are reading code and the definition of the variable isn't immediately at hand. Alternatively, this variable might be named intMyWeight, which is more precise and legible, although it does include a couple extra letters (anathema to many programmers). Other prefixes are commonly used in like fashion: str or sz might be used to represent a null-terminated string of characters, b might represent a Boolean value, p a pointer, and c a single character. As you progress into the world of C++, you will find many examples of the prefix naming style (including the handsome m_lpctstr prefix: a class member value that contains a long pointer to a constant, null-terminated string of characters), as well as other, more bizarre and possibly counterintuitive styles that you may or may not adopt as your own. As in all the stylistic, subjective parts of C++, consistency and precision are best. You should use variable names to fit your own needs, preferences, and personal style. (Or, if required, choose names that fit the needs, preferences, and personal style of your employer.)

Integer Types

Integers are numbers with no fractional part, such as 2, 98, 5286, and 0. There are lots of integers, assuming that you consider an infinite number to be a lot, so no finite amount of computer memory can represent all possible integers. Thus, a language can represent only a subset of all integers. Some languages, such as standard Pascal, offer just one integer type (one type fits all!), but C++ provides several choices. This gives you the option of choosing the integer type that best meets a program's particular requirements. This concern with matching type to data presages the designed data types of OOP.

The various C++ integer types differ in the amount of memory they use to hold an integer. A larger block of memory can represent a larger range in integer values. Also, some types (signed types) can represent both positive and negative values, whereas others (unsigned types) can't represent negative values. The usual term for describing the amount of memory used for an integer is width. The more memory a value uses, the wider it is. C++'s basic integer types, in order of increasing width, are char, short, int, and long. Each comes in both signed and unsigned versions. That gives you a choice of eight different integer types! Let's look at these integer types in more detail. Because the char type has some special properties (it's most often used to represent characters instead of numbers), this chapter covers the other types first.

The short, int, and long Integer Types

Computer memory consists of units called bits. (See the "Bits and Bytes" sidebar, later in this chapter.) By using different numbers of bits to store values, the C++ types short, int, and long can represent up to three different integer widths. It would be convenient if each type were always some particular width for all systems; for example, if short were always 16 bits, int were always 32 bits, and so on. But life is not that simple: no one choice is suitable for all computer designs. C++ offers a flexible standard with some guaranteed minimum sizes, which it takes from C. Here's what you get:

- A short integer is at least 16 bits wide.
- An int integer is at least as big as short.
- A long integer is at least 32 bits wide and at least as big as int.

Bits and Bytes

The fundamental unit of computer memory is the bit.
Think of a bit as an electronic switch that you can set to either off or on. Off represents the value 0, and on represents the value 1. An 8-bit chunk of memory can be set to 256 different combinations. The number 256 comes from the fact that each bit has two possible settings, making the total number of combinations for 8 bits 2 x 2 x 2 x 2 x 2 x 2 x 2 x 2, or 256. Thus, an 8-bit unit can represent, say, the values 0 through 255 or the values -128 through 127. Each additional bit doubles the number of combinations. This means you can set a 16-bit unit to 65,536 different values and a 32-bit unit to 4,294,967,296 different values.

A byte usually means an 8-bit unit of memory. Byte in this sense is the unit of measurement that describes the amount of memory in a computer, with a kilobyte equal to 1,024 bytes and a megabyte equal to 1,024 kilobytes. However, C++ defines byte differently. The C++ byte consists of at least enough adjacent bits to accommodate the basic character set for the implementation. That is, the number of possible values must equal or exceed the number of distinct characters. In the United States, the basic character sets are usually the ASCII and EBCDIC sets, each of which can be accommodated by 8 bits, so the C++ byte is typically 8 bits on systems using those character sets. However, international programming can require much larger character sets, such as Unicode, so some implementations may use a 16-bit byte or even a 32-bit byte.

Many systems currently use the minimum guarantee, making short 16 bits and long 32 bits. This still leaves several choices open for int. It could be 16, 24, or 32 bits in width and meet the standard. Typically, int is 16 bits (the same as short) for older IBM PC implementations and 32 bits (the same as long) for Windows 98, Windows NT, Windows XP, Macintosh OS X, VAX, and many other minicomputer implementations. Some implementations give you a choice of how to handle int. (What does your implementation use? The next example shows you how to determine the limits for your system without your having to open a manual.) The differences between implementations for type widths can cause problems when you move a C++ program from one environment to another. But a little care, as discussed later in this chapter, can minimize those problems.

You use these type names to declare variables just as you would use int:

short score;     // creates a type short integer variable
int temperature; // creates a type int integer variable
long position;   // creates a type long integer variable

Actually, short is short for short int and long is short for long int, but hardly anyone uses the longer forms. The three types, int, short, and long, are signed types, meaning each splits its range approximately equally between positive and negative values. For example, a 16-bit int might run from -32,768 to +32,767.

If you want to know how your system's integers size up, you can use C++ tools to investigate type sizes with a program. First, the sizeof operator returns the size, in bytes, of a type or a variable. (An operator is a built-in language element that operates on one or more items to produce a value. For example, the addition operator, represented by +, adds two values.) Note that the meaning of byte is implementation dependent, so a 2-byte int could be 16 bits on one system and 32 bits on another. Second, the climits header file (or, for older implementations, the limits.h header file) contains information about integer type limits.
In particular, it defines symbolic names to represent different limits. For example, it defines INT_MAX as the largest possible int value and CHAR_BIT as the number of bits in a byte. Listing 3.1 demonstrates how to use these facilities. The program also illustrates initialization, which is the use of a declaration statement to assign a value to a variable.

Listing 3.1 limits.cpp

// limits.cpp -- some integer limits
#include <iostream>
#include <climits>            // use limits.h for older systems
int main()
{
    using namespace std;
    int n_int = INT_MAX;      // initialize n_int to max int value
    short n_short = SHRT_MAX; // symbols defined in limits.h file
    long n_long = LONG_MAX;

    // sizeof operator yields size of type or of variable
    cout << "int is " << sizeof (int) << " bytes." << endl;
    cout << "short is " << sizeof n_short << " bytes." << endl;
    cout << "long is " << sizeof n_long << " bytes." << endl << endl;

    cout << "Maximum values:" << endl;
    cout << "int: " << n_int << endl;
    cout << "short: " << n_short << endl;
    cout << "long: " << n_long << endl << endl;

    cout << "Minimum int value = " << INT_MIN << endl;
    cout << "Bits per byte = " << CHAR_BIT << endl;
    return 0;
}

Compatibility Note

The climits header file is the C++ version of the ANSI C limits.h header file. Some earlier C++ platforms have neither header file available. If you're using such a system, you must limit yourself to experiencing this example in spirit only.

Here is the output from the program in Listing 3.1, using Microsoft Visual C++ 7.1:

int is 4 bytes.
short is 2 bytes.
long is 4 bytes.

Maximum values:
int: 2147483647
short: 32767
long: 2147483647

Minimum int value = -2147483648
Bits per byte = 8

Here is the output for a second system, running Borland C++ 3.1 for DOS:

int is 2 bytes.
short is 2 bytes.
long is 4 bytes.

Maximum values:
int: 32767
short: 32767
long: 2147483647

Minimum int value = -32768
Bits per byte = 8

Program Notes

The following sections look at the chief programming features for this program.

The sizeof Operator and the climits Header File

The sizeof operator reports that int is 4 bytes on the base system, which uses an 8-bit byte. You can apply the sizeof operator to a type name or to a variable name. When you use the sizeof operator with a type name, such as int, you enclose the name in parentheses. But when you use the operator with the name of the variable, such as n_short, parentheses are optional:

cout << "int is " << sizeof (int) << " bytes.\n";
cout << "short is " << sizeof n_short << " bytes.\n";

The climits header file defines symbolic constants (see the sidebar "Symbolic Constants the Preprocessor Way," later in this chapter) to represent type limits. As mentioned previously, INT_MAX represents the largest value type int can hold; this turned out to be 32,767 for our DOS system. The compiler manufacturer provides a climits file that reflects the values appropriate to that compiler. For example, the climits file for Windows XP, which uses a 32-bit int, defines INT_MAX to represent 2,147,483,647. Table 3.1 summarizes the symbolic constants defined in the climits file; some pertain to types you have not yet learned.

Table 3.1 Symbolic Constants from climits

Symbolic Constant    Represents
CHAR_BIT             Number of bits in a char
CHAR_MAX             Maximum char value
CHAR_MIN             Minimum char value
SCHAR_MAX            Maximum signed char value
SCHAR_MIN            Minimum signed char value
UCHAR_MAX            Maximum unsigned char value
SHRT_MAX             Maximum short value
SHRT_MIN             Minimum short value
USHRT_MAX            Maximum unsigned short value
INT_MAX              Maximum int value
INT_MIN              Minimum int value
UINT_MAX             Maximum unsigned int value
LONG_MAX             Maximum long value
LONG_MIN             Minimum long value
ULONG_MAX            Maximum unsigned long value

Initialization

Initialization combines assignment with declaration. For example, the statement

int n_int = INT_MAX;

declares the n_int variable and sets it to the largest possible type int value. You can also use regular constants to initialize values. You can initialize a variable to another variable, provided that the other variable has been defined first.
You can even initialize a variable to an expression, provided that all the values in the expression are known when program execution reaches the declaration:

int uncles = 5;                  // initialize uncles to 5
int aunts = uncles;              // initialize aunts to 5
int chairs = aunts + uncles + 4; // initialize chairs to 14

Moving the uncles declaration to the end of this list of statements would invalidate the other two initializations, because then the value of uncles wouldn't be known at the time the program tries to initialize the other variables.

The initialization syntax shown previously comes from C; C++ has a second initialization syntax that is not shared with C:

int owls = 101;  // traditional C initialization
int wrens(432);  // alternative C++ syntax, set wrens to 432

Remember

If you don't initialize a variable that is defined inside a function, the variable's value is undefined. That means the value is whatever happened to be sitting at that memory location prior to the creation of the variable. If you know what the initial value of a variable should be, initialize it. True, separating the declaring of a variable from assigning it a value can create momentary suspense:

short year;  // what could it be?
year = 1492; // oh

But initializing the variable when you declare it protects you from forgetting to assign the value later.

Symbolic Constants the Preprocessor Way

The climits file contains lines similar to the following:

#define INT_MAX 32767

Recall that the C++ compilation process first passes the source code through a preprocessor. Here #define, like #include, is a preprocessor directive. What this particular directive tells the preprocessor is this: Look through the program for instances of INT_MAX and replace each occurrence with 32767. So the #define directive works like a global search-and-replace command in a text editor or word processor. The altered program is compiled after these replacements occur. The preprocessor looks for independent tokens (separate words) and skips embedded words. That is, the preprocessor doesn't replace PINT_MAXIM with P32767IM. You can use #define to define your own symbolic constants, too. (See Listing 3.2.) However, the #define directive is a C relic. C++ has a better way of creating symbolic constants (using the const keyword, discussed in a later section), so you won't be using #define much. But some header files, particularly those designed to be used with both C and C++, do use it.
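As a quick preview of that better way, here is a minimal sketch of my own (not one of the book's numbered listings) using const; the const keyword itself is covered in a later section:

// constpreview.cpp -- a typed, scoped symbolic constant
#include <iostream>
int main()
{
    using namespace std;
    const int Months = 12;  // unlike a #define macro, Months has a type
    cout << "A year has " << Months << " months." << endl;
    return 0;
}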
Unsigned Types

Each of the three integer types you just learned about comes in an unsigned variety that can't hold negative values. This has the advantage of increasing the largest value the variable can hold. For example, if short represents the range -32,768 to +32,767, the unsigned version can represent the range 0 to 65,535. Of course, you should use unsigned types only for quantities that are never negative, such as populations, bean counts, and happy face manifestations. To create unsigned versions of the basic integer types, you just use the keyword unsigned to modify the declarations:

unsigned short change;   // unsigned short type
unsigned int rovert;     // unsigned int type
unsigned quarterback;    // also unsigned int
unsigned long gone;      // unsigned long type

Note that unsigned by itself is short for unsigned int.

Listing 3.2 illustrates the use of unsigned types. It also shows what might happen if your program tries to go beyond the limits for integer types. Finally, it gives you one last look at the preprocessor #define statement.

Listing 3.2 exceed.cpp

// exceed.cpp -- exceeding some integer limits
#include <iostream>
#define ZERO 0        // makes ZERO symbol for 0 value
#include <climits>    // defines INT_MAX as largest int value
int main()
{
    using namespace std;
    short sam = SHRT_MAX;     // initialize a variable to max value
    unsigned short sue = sam; // okay if variable sam already defined

    cout << "Sam has " << sam << " dollars and Sue has " << sue;
    cout << " dollars deposited." << endl
         << "Add $1 to each account." << endl << "Now ";
    sam = sam + 1;
    sue = sue + 1;
    cout << "Sam has " << sam << " dollars and Sue has " << sue;
    cout << " dollars deposited.\nPoor Sam!" << endl;

    sam = ZERO;
    sue = ZERO;
    cout << "Sam has " << sam << " dollars and Sue has " << sue;
    cout << " dollars deposited." << endl;
    cout << "Take $1 from each account." << endl << "Now ";
    sam = sam - 1;
    sue = sue - 1;
    cout << "Sam has " << sam << " dollars and Sue has " << sue;
    cout << " dollars deposited." << endl << "Lucky Sue!" << endl;
    return 0;
}

Compatibility Note

Listing 3.2, like Listing 3.1, uses the climits file; older compilers might need to use limits.h, and some very old compilers might not have either file available.

Here's the output from the program in Listing 3.2:

Sam has 32767 dollars and Sue has 32767 dollars deposited.
Add $1 to each account.
Now Sam has -32768 dollars and Sue has 32768 dollars deposited.
Poor Sam!
Sam has 0 dollars and Sue has 0 dollars deposited.
Take $1 from each account.
Now Sam has -1 dollars and Sue has 65535 dollars deposited.
Lucky Sue!

The program sets a short variable (sam) and an unsigned short variable (sue) to the largest short value, which is 32,767 on our system. Then, it adds 1 to each value. This causes no problems for sue because the new value is still much less than the maximum value for an unsigned integer. But sam goes from 32,767 to -32,768! Similarly, subtracting 1 from 0 creates no problems for sam, but it makes the unsigned variable sue go from 0 to 65,535. As you can see, these integers behave much like an odometer. If you go past the limit, the values just start over at the other end of the range. (See Figure 3.1.) C++ guarantees that unsigned types behave in this fashion. However, C++ doesn't guarantee that signed integer types can exceed their limits (overflow and underflow) without complaint, but that is the most common behavior on current implementations.

Figure 3.1 Typical overflow behavior for integers.

Beyond long

C99 has added a couple new types that most likely will be part of the next edition of the C++ Standard. Indeed, many C++ compilers already support them. The types are long long and unsigned long long. Both are guaranteed to be at least 64 bits and to be at least as wide as the long and unsigned long types.

Choosing an Integer Type

With the richness of C++ integer types, which should you use? Generally, int is set to the most "natural" integer size for the target computer. Natural size refers to the integer form that the computer handles most efficiently. If there is no compelling reason to choose another type, you should use int.

Now look at reasons why you might use another type. If a variable represents something that is never negative, such as the number of words in a document, you can use an unsigned type; that way the variable can represent higher values. If you know that the variable might have to represent integer values too great for a 16-bit integer, you should use long. This is true even if int is 32 bits on your system.
That way, if you transfer your program to a system with a 16-bit int, your program won't embarrass you by suddenly failing to work properly. (See Figure 3.2.)

Figure 3.2 For portability, use long for big integers.

Using short can conserve memory if short is smaller than int. Most typically, this is important only if you have a large array of integers. (An array is a data structure that stores several values of the same type sequentially in memory.) If it is important to conserve space, you should use short instead of int, even if the two are the same size. Suppose, for example, that you move your program from a 16-bit int DOS PC system to a 32-bit int Windows XP system. That doubles the amount of memory needed to hold an int array, but it doesn't affect the requirements for a short array. Remember, a bit saved is a bit earned. If you need only a single byte, you can use char. We'll examine that possibility soon.

Integer Constants

An integer constant is one you write out explicitly, such as 212 or 1776. C++, like C, lets you write integers in three different number bases: base 10 (the public favorite), base 8 (the old Unix favorite), and base 16 (the hardware hacker's favorite). Appendix A, "Number Bases," describes these bases; here we'll look at the C++ representations. C++ uses the first digit or two to identify the base of a number constant. If the first digit is in the range 1 through 9, the number is base 10 (decimal); thus 93 is base 10. If the first digit is 0 and the second digit is in the range 1 through 7, the number is base 8 (octal); thus 042 is octal and equal to 34 decimal. If the first two characters are 0x or 0X, the number is base 16 (hexadecimal); thus 0x42 is hex and equal to 66 decimal. For hexadecimal values, the characters a-f and A-F represent the hexadecimal digits corresponding to the values 10-15: 0xF is 15 and 0xA5 is 165 (10 sixteens plus 5 ones). Listing 3.3 is tailor-made to show the three bases.

Listing 3.3 hexoct1.cpp

// hexoct1.cpp -- shows hex and octal constants
#include <iostream>
int main()
{
    using namespace std;
    int chest = 42;   // decimal integer constant
    int waist = 0x42; // hexadecimal integer constant
    int inseam = 042; // octal integer constant

    cout << "Monsieur cuts a striking figure!\n";
    cout << "chest = " << chest << "\n";
    cout << "waist = " << waist << "\n";
    cout << "inseam = " << inseam << "\n";
    return 0;
}

By default, cout displays integers in decimal form, regardless of how they are written in a program, as the following output shows:

Monsieur cuts a striking figure!
chest = 42    (42 in decimal)
waist = 66    (0x42 in hex)
inseam = 34   (042 in octal)

Keep in mind that these notations are merely notational conveniences. For example, if you read that the CGA video memory segment is B000 in hexadecimal, you don't have to convert the value to base 10 (45,056) before using it in your program. Instead, you can simply use 0xB000. But whether you write the value ten as 10, 012, or 0xA, it's stored the same way in the computer: as a binary (base 2) value.

By the way, if you want to display a value in hexadecimal or octal form, you can use some special features of cout. Recall that the iostream header file provides the endl manipulator to give cout the message to start a new line. Similarly, it provides the dec, hex, and oct manipulators to give cout the messages to display integers in decimal, hexadecimal, and octal formats, respectively. Listing 3.4 uses hex and oct to display the decimal value 42 in three formats.
(Decimal is the default format, and each format stays in effect until you change it.)

Listing 3.4 hexoct2.cpp

// hexoct2.cpp -- display values in hex and octal
#include <iostream>
int main()
{
    using namespace std;
    int chest = 42;
    int waist = 42;
    int inseam = 42;

    cout << "Monsieur cuts a striking figure!" << endl;
    cout << "chest = " << chest << " (decimal)" << endl;
    cout << hex;  // manipulator for changing number base
    cout << "waist = " << waist << " (hexadecimal)" << endl;
    cout << oct;  // manipulator for changing number base
    cout << "inseam = " << inseam << " (octal)" << endl;
    return 0;
}

Here's the program output for Listing 3.4:

Monsieur cuts a striking figure!
chest = 42 (decimal)
waist = 2a (hexadecimal)
inseam = 52 (octal)

Note that code like

cout << hex;

doesn't display anything onscreen. Instead, it changes the way cout displays integers. Thus, the manipulator hex is really a message to cout that tells it how to behave. Also note that because the identifier hex is part of the std namespace and the program uses that namespace, this program can't use hex as the name of a variable. However, if you omitted the using directive and instead used std::cout, std::endl, std::hex, and std::oct, you could still use plain hex as the name for a variable.

How C++ Decides What Type a Constant Is

A program's declarations tell the C++ compiler the type of a particular integer variable. But what about constants? That is, suppose you represent a number with a constant in a program:

cout << "Year = " << 1492 << "\n";

Does the program store 1492 as an int, a long, or some other integer type? The answer is that C++ stores integer constants as type int unless there is a reason to do otherwise. Two such reasons are if you use a special suffix to indicate a particular type or if a value is too large to be an int.

First, look at the suffixes. These are letters placed at the end of a numeric constant to indicate the type. An l or L suffix on an integer means the integer is a type long constant, a u or U suffix indicates an unsigned int constant, and ul (in any combination of orders and uppercase and lowercase) indicates a type unsigned long constant. (Because a lowercase l can look much like the digit 1, you should use the uppercase L for suffixes.) For example, on a system using a 16-bit int and a 32-bit long, the number 22022 is stored in 16 bits as an int, and the number 22022L is stored in 32 bits as a long. Similarly, 22022LU and 22022UL are unsigned long.

Next, look at size. C++ has slightly different rules for decimal integers than it has for hexadecimal and octal integers. (Here decimal means base 10, just as hexadecimal means base 16; the term decimal does not necessarily imply a decimal point.) A decimal integer without a suffix is represented by the smallest of the following types that can hold it: int, long, or unsigned long. On a computer system using a 16-bit int and a 32-bit long, 20000 is represented as type int, 40000 is represented as long, and 3000000000 is represented as unsigned long. A hexadecimal or octal integer without a suffix is represented by the smallest of the following types that can hold it: int, unsigned int, long, or unsigned long. The same computer system that represents 40000 as long represents the hexadecimal equivalent 0x9C40 as an unsigned int. That's because hexadecimal is frequently used to express memory addresses, which intrinsically are unsigned. So unsigned int is more appropriate than long for a 16-bit address.
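You can watch the suffixes at work with a small sketch of my own (not one of the book's numbered listings); the byte counts it reports depend on your implementation:

// suffix.cpp -- suffixes select a constant's type
#include <iostream>
int main()
{
    using namespace std;
    cout << "22022   occupies " << sizeof 22022 << " bytes (int)\n";
    cout << "22022L  occupies " << sizeof 22022L << " bytes (long)\n";
    cout << "22022UL occupies " << sizeof 22022UL << " bytes (unsigned long)\n";
    return 0;
}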
The char Type: Characters and Small Integers

It's time to turn to the final integer type: char. As you probably suspect from its name, the char type is designed to store characters, such as letters and numeric digits. Now, whereas storing numbers is no big deal for computers, storing letters is another matter. Programming languages take the easy way out by using number codes for letters. Thus, the char type is another integer type. It's guaranteed to be large enough to represent the entire range of basic symbols (all the letters, digits, punctuation, and the like) for the target computer system. In practice, most systems support fewer than 256 kinds of characters, so a single byte can represent the whole range. Therefore, although char is most often used to handle characters, you can also use it as an integer type that is typically smaller than short.

The most common symbol set in the United States is the ASCII character set, described in Appendix C, "The ASCII Character Set." A numeric code (the ASCII code) represents each character in the set. For example, 65 is the code for the character A, and 77 is the code for the character M. For convenience, this book assumes ASCII code in its examples. However, a C++ implementation uses whatever code is native to its host system; for example, EBCDIC (pronounced "eb-se-dik") on an IBM mainframe. Neither ASCII nor EBCDIC serve international needs that well, and C++ supports a wide-character type that can hold a larger range of values, such as are used by the international Unicode character set. You'll learn about this wchar_t type later in this chapter.

Try the char type in Listing 3.5.

Listing 3.5 chartype.cpp

// chartype.cpp -- the char type
#include <iostream>
int main()
{
    using namespace std;
    char ch;  // declare a char variable

    cout << "Enter a character: " << endl;
    cin >> ch;
    cout << "Holla! ";
    cout << "Thank you for the " << ch << " character." << endl;
    return 0;
}

Here's the output from the program in Listing 3.5:

Enter a character:
M
Holla! Thank you for the M character.

The interesting thing is that you type an M, not the corresponding character code, 77. Also, the program prints an M, not 77. Yet if you peer into memory, you find that 77 is the value stored in the ch variable. The magic, such as it is, lies not in the char type but in cin and cout. These worthy facilities make conversions on your behalf. On input, cin converts the keystroke input M to the value 77. On output, cout converts the value 77 to the displayed character M; cin and cout are guided by the type of variable. If you place the same value 77 into an int variable, cout displays it as 77. (That is, cout displays two 7 characters.) Listing 3.6 illustrates this point. It also shows how to write a character constant in C++: Enclose the character within two single quotation marks, as in 'M'. (Note that the example doesn't use double quotation marks. C++ uses single quotation marks for a character and double quotation marks for a string. The cout object can handle either, but, as Chapter 4 discusses, the two are quite different from one another.) Finally, the program introduces a cout feature, the cout.put() function, which displays a single character.
Listing 3.6 morechar.cpp

// morechar.cpp -- the char type and int type contrasted
#include <iostream>
int main()
{
    using namespace std;
    char ch = 'M';  // assign ASCII code for M to ch
    int i = ch;     // store same code in an int

    cout << "The ASCII code for " << ch << " is " << i << endl;
    cout << "Add one to the character code:" << endl;
    ch = ch + 1;    // change character code in ch
    i = ch;         // save new character code in i
    cout << "The ASCII code for " << ch << " is " << i << endl;

    // using the cout.put() member function to display a char
    cout << "Displaying char ch using cout.put(ch): ";
    cout.put(ch);

    // using cout.put() to display a char constant
    cout.put('!');
    cout << endl << "Done" << endl;
    return 0;
}

Here is the output from the program in Listing 3.6:

The ASCII code for M is 77
Add one to the character code:
The ASCII code for N is 78
Displaying char ch using cout.put(ch): N!
Done

Program Notes

In the program in Listing 3.6, the notation 'M' represents the numeric code for the M character, so initializing the char variable ch to 'M' sets ch to the value 77. The program then assigns the identical value to the int variable i, so both ch and i have the value 77. Next, cout displays ch as M and i as 77. As previously stated, a value's type guides cout as it chooses how to display that value; just another example of smart objects.

Because ch is really an integer, you can apply integer operations to it, such as adding 1. This changes the value of ch to 78. The program then resets i to the new value. (Equivalently, you can simply add 1 to i.) Again, cout displays the char version of that value as a character and the int version as a number. The fact that C++ represents characters as integers is a genuine convenience that makes it easy to manipulate character values. You don't have to use awkward conversion functions to convert characters to ASCII and back. Finally, the program uses the cout.put() function to display both ch and a character constant.

A Member Function: cout.put()

Just what is cout.put(), and why does it have a period in its name? The cout.put() function is your first example of an important C++ OOP concept, the member function. Remember that a class defines how to represent data and how to manipulate it. A member function belongs to a class and describes a method for manipulating class data. The ostream class, for example, has a put() member function that is designed to output characters. You can use a member function only with a particular object of that class, such as the cout object, in this case. To use a class member function with an object such as cout, you use a period to combine the object name (cout) with the function name (put()). The period is called the membership operator. The notation cout.put() means to use the class member function put() with the class object cout. You'll learn about this in greater detail when you reach classes in Chapter 10, "Objects and Classes." Now, the only classes you have are the istream and ostream classes, and you can experiment with their member functions to get more comfortable with the concept.

The cout.put() member function provides an alternative to using the << operator to display a character. At this point you might wonder why there is any need for cout.put(). Much of the answer is historical. Before Release 2.0 of C++, cout would display character variables as characters but display character constants, such as 'M' and 'N', as numbers. The problem was that earlier versions of C++, like C, stored character constants as type int.
That is, the code 77 for 'M' would be stored in a 16-bit or 32-bit unit. Meanwhile, char variables typically occupied 8 bits. A statement like

char c = 'M';

copied 8 bits (the important 8 bits) from the constant 'M' to the variable c. Unfortunately, this meant that, to cout, 'M' and c looked quite different from one another, even though both held the same value. So a statement like

cout << '$';

would print the ASCII code for the $ character rather than simply display $. But

cout.put('$');

would print the character, as desired. Now, after Release 2.0, C++ stores single-character constants as type char, not type int. Therefore, cout now correctly handles character constants.

The cin object has a couple different ways of reading characters from input. You can explore these by using a program that uses a loop to read several characters, so we'll return to this topic when we cover loops in Chapter 5, "Loops and Relational Expressions."

char Constants

You have several options for writing character constants in C++. The simplest choice for ordinary characters, such as letters, punctuation, and digits, is to enclose the character in single quotation marks. This notation stands for the numeric code for the character. For example, an ASCII system has the following correspondences:

'A' is 65, the ASCII code for A
'a' is 97, the ASCII code for a
'5' is 53, the ASCII code for the digit 5
' ' is 32, the ASCII code for the space character
'!' is 33, the ASCII code for the exclamation point

Using this notation is better than using the numeric codes explicitly. It's clearer, and it doesn't assume a particular code. If a system uses EBCDIC, then 65 is not the code for A, but 'A' still represents the character.

There are some characters that you can't enter into a program directly from the keyboard. For example, you can't make the newline character part of a string by pressing the Enter key; instead, the program editor interprets that keystroke as a request for it to start a new line in your source code file. Other characters have difficulties because the C++ language imbues them with special significance. For example, the double quotation mark character delimits strings, so you can't just stick one in the middle of a string. C++ has special notations, called escape sequences, for several of these characters, as shown in Table 3.2. For example, \a represents the alert character, which beeps your terminal's speaker or rings its bell. The escape sequence \n represents a newline. And \" represents the double quotation mark as an ordinary character instead of a string delimiter. You can use these notations in strings or in character constants, as in the following examples:

char alarm = '\a';
cout << alarm << "Don't do that again!\a\n";
cout << "Ben \"Buggsie\" Hacker\nwas here!\n";

Table 3.2 C++ Escape Sequence Codes

Character Name     ASCII Symbol    C++ Code
newline            NL (LF)         \n
horizontal tab     HT              \t
vertical tab       VT              \v
backspace          BS              \b
carriage return    CR              \r
alert              BEL             \a
backslash          \               \\
question mark      ?               \?
single quote       '               \'
double quote       "               \"

The last line produces the following output:

Ben "Buggsie" Hacker
was here!

Note that you treat an escape sequence, such as \n, just as a regular character, such as Q. That is, you enclose it in single quotes to create a character constant and don't use single quotes when including it as part of a string. The newline character provides an alternative to endl for inserting new lines into output. You can use the newline character in character constant notation ('\n') or as a character in a string ("\n").
All three of the following move the screen cursor to the beginning of the next line:

cout << endl;  // using the endl manipulator
cout << '\n';  // using a character constant
cout << "\n";  // using a string

You can embed the newline character in a longer string; this is often more convenient than using endl. For example, the following two cout statements produce the same output:

cout << endl << endl << "What next?" << endl << "Enter a number:" << endl;
cout << "\n\nWhat next?\nEnter a number:\n";

When you're displaying a number, endl is a bit easier to type than "\n" or '\n', but, when you're displaying a string, ending the string with a newline character requires less typing:

cout << x << endl;  // easier than cout << x << "\n";
cout << "Dr. X.\n"; // easier than cout << "Dr. X." << endl;

Finally, you can use escape sequences based on the octal or hexadecimal codes for a character. For example, Ctrl+Z has an ASCII code of 26, which is 032 in octal and 0x1a in hexadecimal. You can represent this character with either of the following escape sequences: \032 or \x1a. You can make character constants out of these by enclosing them in single quotes, as in '\032', and you can use them as parts of a string, as in "hi\x1a there".

TIP

When you have a choice between using a numeric escape sequence or a symbolic escape sequence, as in \x8 versus \b, use the symbolic code. The numeric representation is tied to a particular code, such as ASCII, but the symbolic representation works with all codes and is more readable.

Listing 3.7 demonstrates a few escape sequences. It uses the alert character to get your attention, the newline character to advance the cursor (one small step for a cursor, one giant step for cursorkind), and the backspace character to back the cursor one space to the left. (Houdini once painted a picture of the Hudson River using only escape sequences; he was, of course, a great escape artist.)

Listing 3.7 bondini.cpp

// bondini.cpp -- using escape sequences
#include <iostream>
int main()
{
    using namespace std;
    cout << "\aOperation \"HyperHype\" is now activated!\n";
    cout << "Enter your agent code:________\b\b\b\b\b\b\b\b";
    long code;
    cin >> code;
    cout << "\aYou entered " << code << "...\n";
    cout << "\aCode verified! Proceed with Plan Z3!\n";
    return 0;
}

Compatibility Note

Some C++ systems based on pre-ANSI C compilers don't recognize \a. You can substitute \007 for \a on systems that use the ASCII character code. Some systems might behave differently, displaying the \b as a small rectangle rather than backspacing, for example, or perhaps erasing while backspacing, or perhaps ignoring \a.

When you start the program in Listing 3.7, it puts the following text onscreen:

Operation "HyperHype" is now activated!
Enter your agent code:________

After printing the underscore characters, the program uses the backspace character to back up the cursor to the first underscore. You can then enter your secret code and continue. Here's a complete run:

Operation "HyperHype" is now activated!
Enter your agent code:42007007
You entered 42007007...
Code verified! Proceed with Plan Z3!

Universal Character Names

C++ implementations support a basic source character set, that is, the set of characters you can use to write source code. It consists of the letters (uppercase and lowercase) and digits found on a standard U.S. keyboard, the symbols, such as { and =, used in the C language, and a scattering of other characters, such as newline and space characters.
Then there is a basic execution character set (that is, characters that can be produced by the execution of a program), which adds a few more characters, such as backspace and alert. The C++ Standard also allows an implementation to offer extended source character sets and extended execution character sets. Furthermore, those additional characters that qualify as letters can be used as part of the name of an identifier. Thus, a German implementation might allow you to use umlauted vowels and a French implementation might allow accented vowels. C++ has a mechanism for representing such international characters that is independent of any particular keyboard: the use of universal character names.

Using universal character names is similar to using escape sequences. A universal character name begins either with \u or \U. The \u form is followed by 4 hexadecimal digits, and the \U form by 8 hexadecimal digits. These digits represent the ISO 10646 code for the character. (ISO 10646 is an international standard under development that provides numeric codes for a wide range of characters. See "Unicode and ISO 10646," later in this chapter.) If your implementation supports extended characters, you can use universal character names in identifiers, as character constants, and in strings. For example, consider the following code:

int k\u00F6rper;
cout << "Let them eat g\u00E2teau.\n";

The ISO 10646 code for ö is 00F6, and the code for â is 00E2. Thus, this C++ code would set the variable name to körper and display the following output:

Let them eat gâteau.

If your system doesn't support ISO 10646, it might display some other character for â or perhaps simply display the word g\u00E2teau.

Unicode and ISO 10646

Unicode provides a solution to the representation of various character sets by providing standard numeric codes for a great number of characters and symbols, grouping them by type. For example, the ASCII code is incorporated as a subset of Unicode, so U.S. Latin characters such as A and Z have the same representation under both systems. But Unicode also incorporates other Latin characters, such as those used in European languages; characters from other alphabets, including Greek, Cyrillic, Hebrew, Arabic, Thai, and Bengali; and ideographs, such as those used for Chinese and Japanese. So far Unicode represents more than 96,000 symbols and 49 scripts, and it is still under development. If you want to know more, you can check the Unicode Consortium's website, at www.unicode.org.

The International Organization for Standardization (ISO) established a working group to develop ISO 10646, also a standard for coding multilingual text. The ISO 10646 group and the Unicode group have worked together since 1991 to keep their standards synchronized with one another.

signed char and unsigned char

Unlike int, char is not signed by default. Nor is it unsigned by default. The choice is left to the C++ implementation in order to allow the compiler developer to best fit the type to the hardware properties. If it is vital to you that char has a particular behavior, you can use signed char or unsigned char explicitly as types:

char fodo;          // may be signed, may be unsigned
unsigned char bar;  // definitely unsigned
signed char snark;  // definitely signed

These distinctions are particularly important if you use char as a numeric type. The unsigned char type typically represents the range 0 to 255, and signed char typically represents the range -128 to 127. For example, suppose you want to use a char variable to hold values as large as 200.
That works on some systems but fails on others. You can, however, successfully use unsigned char for that purpose on any system. On the other hand, if you use a char variable to hold a standard ASCII character, it doesn't really matter whether char is signed or unsigned, so you can simply use char.

For When You Need More: wchar_t

Programs might have to handle character sets that don't fit within the confines of a single 8-bit byte (for example, the Japanese kanji system). C++ handles this in a couple of ways. First, if a large set of characters is the basic character set for an implementation, a compiler vendor can define char as a 16-bit byte or larger. Second, an implementation can support both a small basic character set and a larger extended character set. The usual 8-bit char can represent the basic character set, and another type, called wchar_t (for wide character type), can represent the extended character set. The wchar_t type is an integer type with sufficient space to represent the largest extended character set used on the system. This type has the same size and sign properties as one of the other integer types, which is called the underlying type. The choice of underlying type depends on the implementation, so it could be unsigned short on one system and int on another.

The cin and cout family consider input and output as consisting of streams of chars, so they are not suitable for handling the wchar_t type. The latest version of the iostream header file provides parallel facilities in the form of wcin and wcout for handling wchar_t streams. Also, you can indicate a wide-character constant or string by preceding it with an L. The following code stores a wchar_t version of the letter P in the variable bob and displays a wchar_t version of the word tall:

wchar_t bob = L'P';         // a wide-character constant
wcout << L"tall" << endl;   // outputting a wide-character string

On a system with a 2-byte wchar_t, this code stores each character in a 2-byte unit of memory. This book doesn't use the wide-character type, but you should be aware of it, particularly if you become involved in international programming or in using Unicode or ISO 10646.

The bool Type

The ANSI/ISO C++ Standard has added a new type (new to C++, that is), called bool. It's named in honor of the English mathematician George Boole, who developed a mathematical representation of the laws of logic. In computing, a Boolean variable is one whose value can be either true or false. In the past, C++, like C, did not have a Boolean type. Instead, as you'll see in greater detail in Chapters 5 and 6, "Branching Statements and Logical Operators," C++ interprets nonzero values as true and zero values as false. Now, however, you can use the bool type to represent true and false, and the predefined literals true and false represent those values. That is, you can make statements like the following:

bool isready = true;

The literals true and false can be converted to type int by promotion, with true converting to 1 and false to 0:

int ans = true;       // ans assigned 1
int promise = false;  // promise assigned 0

Also, any numeric or pointer value can be converted implicitly (that is, without an explicit type cast) to a bool value. Any nonzero value converts to true, whereas a zero value converts to false:

bool start = -100;    // start assigned true
bool stop = 0;        // stop assigned false

After the book introduces if statements (in Chapter 6), the bool type will become a common feature in the examples.
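Pulling the last two sections together, here is a small standalone sketch (not from the book) that exercises unsigned char for values beyond 127 and the bool conversion rules just described:

#include <iostream>
int main()
{
    using namespace std;
    unsigned char big = 200;   // portable: unsigned char typically holds 0 to 255
    cout << int(big) << endl;  // cast so cout prints the number, not a character

    bool start = -100;         // any nonzero value converts to true
    bool stop = 0;             // zero converts to false
    int ans = true;            // true promotes to 1
    cout << start << " " << stop << " " << ans << endl;   // prints: 1 0 1
    return 0;
}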
http://www.informit.com/articles/article.aspx?p=352319&amp;seqNum=4
CC-MAIN-2018-26
en
refinedweb
TTYNAME(3)                Library Functions Manual                TTYNAME(3)

NAME
     ttyname, ttyname_r, isatty, ttyslot — get name of associated terminal
     (tty) from file descriptor

SYNOPSIS
     #include <unistd.h>

     char *
     ttyname(int fd);

     int
     ttyname_r(int fd, char *name, size_t namesize);

     int
     isatty(int fd);

     #include <stdlib.h>

     int
     ttyslot(void);

RETURN VALUES
     The ttyname() function returns the NUL-terminated name if the device
     is found and isatty() is true; otherwise a null pointer is returned
     and errno is set to indicate the error. The ttyname_r() function
     returns zero if successful; otherwise an error number is returned.

ERRORS
     The ttyname(), ttyname_r(), and isatty() functions will fail if:

     [EBADF]      fd is not a valid file descriptor.

     [ENOTTY]     fd does not refer to a terminal device.

     The ttyname_r() function will also fail if:

     [ERANGE]     The buffer of size namesize is too small to hold the
                  terminal name.

HISTORY
     The isatty(), ttyname(), and ttyslot() functions appeared in Version 7
     AT&T UNIX. The ttyname_r() function appeared in the POSIX Threads
     Extension (1003.1c-1995).

CAVEATS
     The ttyname() function leaves its result in an internal static object
     and returns a pointer to that object. Subsequent calls to ttyname()
     will modify the same object.
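A short example of these functions in use (a minimal sketch; error handling beyond the null and return-code checks is omitted):

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    if (isatty(STDIN_FILENO)) {
        /* ttyname() returns a pointer to a static buffer. */
        char *name = ttyname(STDIN_FILENO);
        printf("stdin is a terminal: %s\n", name ? name : "unknown");

        /* ttyname_r() writes into a caller-supplied buffer instead. */
        char buf[64];
        if (ttyname_r(STDIN_FILENO, buf, sizeof(buf)) == 0)
            printf("ttyname_r agrees: %s\n", buf);
    } else {
        printf("stdin is not a terminal\n");
    }
    return 0;
}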
https://man.openbsd.org/ttyslot.3
CC-MAIN-2018-26
en
refinedweb
PMCONNECTLOGGER(3)       Library Functions Manual       PMCONNECTLOGGER(3)

NAME
       __pmConnectLogger - connect to a performance metrics logger control
       port

C SYNOPSIS
       #include "pmapi.h"
       #include "libpcp.h"

       int __pmConnectLogger(const char *hostname, int pid);

SEE ALSO
       pmcd(1), pmlc(1), pmlogger(1), PMAPI(3) and __pmControlLog(3).

DIAGNOSTICS
       PM_ERR_PERMISSION
              no permission to connect to the specified pmlogger(1)
              instance

       -ECONNREFUSED
              the designated pmlogger(1) instance does not exist

       -EADDRINUSE
              the requested control port is already in use
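A minimal usage sketch; the host name and pmlogger process ID below are placeholders, and a non-negative return is assumed to indicate success, since all of the documented diagnostics are negative codes:

#include "pmapi.h"
#include "libpcp.h"
#include <stdio.h>

int main(void)
{
    /* Assumes a pmlogger instance with process ID 1234 on localhost. */
    int sts = __pmConnectLogger("localhost", 1234);
    if (sts < 0) {
        /* pmErrStr() renders PCP and negative-errno codes as text. */
        fprintf(stderr, "__pmConnectLogger: %s\n", pmErrStr(sts));
        return 1;
    }
    printf("connected to pmlogger control port\n");
    return 0;
}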
http://man7.org/linux/man-pages/man3/__pmconnectlogger.3.html
CC-MAIN-2018-26
en
refinedweb
KTRACE(2)                MidnightBSD System Calls Manual                KTRACE(2)

NAME
     ktrace — process tracing

LIBRARY
     Standard C Library (libc, −lc)

SYNOPSIS
     #include <sys/param.h>
     #include <sys/time.h>
     #include <sys/uio.h>
     #include <sys/ktrace.h>

     int
     ktrace(const char *tracefile, int ops, int trpoints, int pid);

DESCRIPTION
     The ktrace() system call enables or disables tracing of one or more
     processes. Users may only trace their own processes. Only the
     super-user can trace setuid or setgid programs.

     The tracefile argument gives the pathname of the file to be used for
     tracing. The file must exist and be a regular file writable by the
     calling process.

     The trpoints argument specifies the trace points of interest. The
     defined trace points are:

     KTRFAC_SYSCALL      trace system calls
     KTRFAC_SYSRET       trace return values from system calls
     KTRFAC_NAMEI        trace name lookup operations
     KTRFAC_GENIO        trace all I/O
     KTRFAC_PSIG         trace posted signals
     KTRFAC_CSW          trace context switches

     Each tracing event outputs a record composed of a generic header
     followed by a trace point specific structure. The generic header is:

     struct ktr_header {
     };

RETURN VALUES
     The ktrace() function returns the value 0 if successful; otherwise the
     value −1 is returned and the global variable errno is set to indicate
     the error.

ERRORS
     The ktrace() system call will fail if:

     [ENOSYS]     The kernel was not compiled with ktrace support.

     A thread may be unable to log one or more tracing events due to a
     temporary shortage of resources.

MidnightBSD 0.3                  June 4, 1993                  MidnightBSD 0.3
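A usage sketch, assuming the KTROP_SET/KTROP_CLEAR operations and the KTRFAC_SYSCALL/KTRFAC_SYSRET trace points from <sys/ktrace.h>; the trace file name is arbitrary:

#include <sys/param.h>
#include <sys/time.h>
#include <sys/uio.h>
#include <sys/ktrace.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* The trace file must already exist, so create it first. */
    int fd = open("ktrace.out", O_WRONLY | O_CREAT | O_TRUNC, 0600);
    if (fd == -1) {
        perror("open");
        return 1;
    }
    close(fd);

    /* Enable tracing of system calls and their returns for this process. */
    if (ktrace("ktrace.out", KTROP_SET,
               KTRFAC_SYSCALL | KTRFAC_SYSRET, getpid()) == -1) {
        perror("ktrace");
        return 1;
    }

    getpid();   /* any system call now generates trace records */

    /* Disable the trace points again. */
    ktrace("ktrace.out", KTROP_CLEAR,
           KTRFAC_SYSCALL | KTRFAC_SYSRET, getpid());
    return 0;
}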
http://www.midnightbsd.org/documentation/man/ktrace.2.html
CC-MAIN-2015-48
en
refinedweb
in reply to Re: Perl vs. Python for prime numbers
in thread Perl vs. Python for prime numbers

We could save typing and electrons:

use Math::Prime::Util qw/forprimes/;
forprimes { say } 1000;    # optionally takes range a,b

Some Python ways to do this, all of which are *much* faster than the OP code when we want anything more than tiny values like 1000. There are probably even better ways.

Using sympy. Much slower than the Perl module:

from sympy import sieve
for i in sieve.primerange(2,1000):
    print i

using gmpy2 (only ~2x slower than the Perl module):

import gmpy2
n = 2
while n <= 1000:
    print n
    n = gmpy2.next_prime(n)

Or some Python by hand that is very fast:

from math import sqrt, ceil
def rwh_primes(n):
    # ...ll-primes-below-n-in-python/3035188#3035188
    """ Input n>=6, Returns a list of primes, 2 <= p < n """
    correction = (n%6>1)
    n = {0:n,1:n-1,2:n+4,3:n+3,4:n+2,5:n+1}[n%6]
    sieve = [True] * (n/3)
    sieve[0] = False
    for i in xrange(int(n**0.5)/3+1):
        if sieve[i]:
            k = 3*i+1|1
            sieve[((k*k)/3)::2*k] = [False]*((n/6-(k*k)/6-1)/k+1)
            sieve[(k*k+4*k-2*k*(i&1))/3::2*k] = [False]*((n/6-(k*k+4*k-2*k*(i&1))/6-1)/k+1)
    sieve[n/3-correction] = False
    # If you want the count: return 2 + sum(sieve)
    return [2,3] + [3*i+1|1 for i in xrange(1,n/3-correction) if sieve[i]]

for i in rwh_primes(1000): print i
http://www.perlmonks.org/index.pl?node_id=1040264
CC-MAIN-2015-48
en
refinedweb
Hello,

I enabled initramfs under General Setup in the kernel configuration. I
pointed the initramfs source to a cpio archive containing a simple init
program:

#include <stdio.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    printf("Hello world!\n");
    sleep(999999999);
}

?

Regards,
Daniel

On Wed, Feb 27, 2008 at 04:08:27PM +0100, Daniel Janzon wrote:
> ?

Not that I know of. I played with this a bit, and I think the reason
there's no output is that the kernel isn't finding a /dev/console.

Here's what I found out (much of which you probably already know):

- You need to run cpio -c
- init needs to be /init, and statically linked
- You need /dev/console (although I made one and I still get
  "Warning: unable to open an initial console.")

My cpio directory looks like this:

$ ls -Rl .
.:
total 536
drwxrwxr-x 2 jdike jdike   4096 2008-03-07 12:49 dev
-rwxrwxr-x 1 jdike jdike 537826 2008-03-07 12:58 init

./dev:
total 0
crw-r--r-- 1 root root 5, 1 2008-03-07 12:49 console

I removed the sleep from your init, and the thing seemed to exit, as UML
panicked with "Attempted to kill init!", so I think it ran. There's just
going to be no output until you figure out how to give it a console.

Jeff

--
Work email - jdike at linux dot intel dot com
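Putting those three points together, building the image could look something like this (a sketch assuming gcc and GNU cpio, where -c is equivalent to -H newc, the archive format the kernel expects; the device numbers match the listing above):

# build a static /init and a minimal /dev/console, then pack the archive
mkdir -p initramfs/dev
gcc -static -o initramfs/init init.c
sudo mknod initramfs/dev/console c 5 1
(cd initramfs && find . | cpio -o -H newc) > initramfs.cpio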
http://sourceforge.net/p/user-mode-linux/mailman/message/18691776/
CC-MAIN-2015-48
en
refinedweb
11 May 1998: This document is a submission to the W3C. Please see Acknowledged W3C Submissions regarding its disposition. This document discusses XML based mechanisms for distributed object communication on the Web. It is not intended to be a final specification. Rather, this document is a report on a proof of concept implementation and is meant to motivate discussion within the W3C. Discovered design principles of a successful architecture are outlined. Many open issues are noted. This is a report on a work in progress. This document is a NOTE made available by the W3 Consortium for discussion only. This indicates no endorsement of its content, nor that the Consortium has had any editorial control in its preparation, nor that the Consortium has, is, or will be allocating any resources to the issues addressed by the NOTE. The list of current W3C technical reports is available at. As of the time of this writing [1998/04] the necessary technological foundation exists to create a unified distributed computing model for the Web encompassing both document publishing and distributed software object communication. For lack of a better term, this model is referred to here as "WebComputing." Applications designed for the WebComputing environment exhibit a mix of features from both the Web publishing and the traditional distributed objects paradigms, blended into a unified model. The goal of this model is to extend the current Web application model such that the benefits of distributed object computing systems such as the OMG's CORBA and Microsoft's COM+ can be realized in a Web native fashion. The objective is to have a system which is less complicated than the above mentioned distributed computing systems and which is more powerful than HTML forms and CGI. The Web is beginning to be used as a platform for a new generation of distributed applications. There is a growing need for the Web's architecture to adopt some of the features of traditional distributed computing systems such as the OMG's CORBA and Microsoft's COM+ while still maintaining the Web's current benefits. This document reports on research into integrating the two paradigms of Web publishing and distributed object computing into a unified model based on Web technologies. Although the Web is nominally a document transfer system, it has long had mechanisms for distributed application communication. The functionality of HTML forms, HTTP POSTs, and server extension mechanisms such as CGI provide an "upstream" and flexible communication path from the client browser to the Web server. Taken as a whole these mechanisms amount to a simple and successful two-way client/server communication channel which has contributed to the successful adoption of Web technology. Web applications which use this channel for client to server communication are growing more complicated. Unfortunately, the current HTML form POSTing system lacks a foundation for application specific security, scalability, and object interoperability. The current Web application "architecture" is reaching design limitations. This document describes a design which extends the model to include some features of other distributed computing systems in order to address these and other issues. Distributed object computing systems such as COM+ (and its ancestors OLE, OLE2, COM, DCOM et alia) and CORBA are the results of extensive development and research. The concept of inter-object message "brokering" plays a central role in these types of systems. 
Message brokering, often referred to as remote procedure calls (RPC), is at the heart of systems whose labels include "client/server", "n-tier applications" and "distributed object computing". The industry knowledge related to message brokering issues has evolved to where the current generation of systems have very similar feature sets. CORBA represents an effort to codify such technology in an open standard. The model presented in this document represents a similar effort. The WebBroker system differs from CORBA in that the former is based on Web standards. In contrast, CORBA's Internet Inter-ORB Protocol (IIOP) is based on Internet standards. WebBroker is based on HTTP, XML, and URIs and so is termed "Web-native". WebBroker tries to blend and simplify the best features of COM+ and CORBA. The current WebBroker implementation represents the lowest common denominator of interoperability between these systems. Without a commonly agreed upon software component object model, Web applications will suffer from the same architecture incompatibilities which separate CORBA and COM+. Note that there seems to be a desire to unify the models and the politically neutral territory of HTTP/XML may be the best place to realize this goal. The specific issues addressed in this document are concerned with both the communication between and the description of software application components on the Web. The term "on the Web" is intended to imply that only HTTP is used for message transport and that URIs are used to address the individual software objects, similar to the CGI model. The novel aspect of this proposal is the use of XML for two purposes: as the format of the serialized method messages between software component objects, referred to as marshalling, and as the format of documents which characterize the objects and the messages which can pass between them, the latter corresponding to type information residing in a CORBA interface repository or the Microsoft type libraries. Whereas URIs are usually used to address (possibly dynamically generated) documents, the same mechanism can be used to address software component objects on a host which has an HTTP server. The argument can be made that this is simply an evolution of CGI and HTML forms. Note that CGI is simply an interface between a Web server and other processes within a host; it does not describe how the software objects within a host appear on the Web. The WebBroker typing documents are concerned only with the objects' network interfaces and do not constrain the internals of the hosts architecture. This document presents XML 1.0 DTDs for the above outlined purposes and a proof of concept implementation. DataChannel is developing a test-bed code base for research and development in this area. Much like Jigsaw, the W3C's HTTP server, the software is written in Java and is freely available as source code. The WebBroker can actually be used with Jigsaw since the latter supports the Java servlets interface and the former is a servlet which embeds within an HTTP server. The code is available at DataChannel's WebBroker site. Although this implementation is written in Java for reasons of portability, the design of the WebBroker architecture makes no assumptions about programming language or platform. There are many styles of inter-component communication in distributed object systems. The style used by WebBroker is "interface" based. Interfaces are a concept common to COM+, CORBA, and Java. 
The concept of interfaces has been implemented since the early days of object oriented programming, although more recent systems have refined the concept and represented it more explicitly in syntax. Note that although the serialized method calls are referred to as "messages" in this document, this does not relate to "message oriented middleware". The goal of interface-based distributed object communication is to enable a software object on one machine to make method calls on a software object (potentially) located at another machine without the programmer or the objects having to directly deal with the fact that the communicating objects are possibly on separate machines. The core of this technique is that by typing the target object of the method call to an interface as opposed to a specific class, the method call can actually be made on an intermediary located on the same machine. This intermediary, called a proxy in WebBroker, implements the same interface as the real target of the message. The proxy is simply a helper which relays the message to the intended target. Symmetrically, there is a helper object on the destination host which receives the networked message intended for the target object. This object is termed the skeleton in WebBroker. In general, the situation is that object A runs on machine X and needs to call a method on object B which runs on machine Y; the proxy on machine X and the skeleton on machine Y relay the call between them.

Interface-based communication, coupled with an axiomatic set of intrinsic (or primitive) data-types and their representation on the wire, allows objects to communicate in a language independent fashion. Hence WebBroker needs a DTD to define the message contents and a DTD for describing the surfaces (or rather interfaces) on the objects. These two DTDs are the main deliverables of this work. Auxiliary DTDs are defined for such things as data encodings within an XML document and primitive data typing.

Performance does not seem to be fundamentally crippled by the nature of this new type of system. Xerox's ILU effort has already combined an HTTP server with a broker (see [ILU]). The effort reported no performance problems. DataChannel has done a comparative study of various network transports and syntaxes and has found no fundamental problems with the HTTP/XML combination.

Simply using XML and HTTP to recast traditional distributed object programming in a Web native fashion has value unto itself, but there are benefits beyond a simple syntactical and transport translation. This section explains some of these benefits. From the developers perspective, WebBroker unifies the Web browsing and distributed object paradigms. Previous systems required an application to drop out of the Web paradigm if ORB-like functionality was needed (e.g. Netscape 4, which shipped with an IIOP ORB, and Microsoft IE4 and the DCOM protocol). By formulating an ORB-like mechanism in terms of Web standards, the two models are unified, which eases the developer learning curve. This also reduces the amount of code. The transport mechanism is already provided by HTTP; no need for another wire protocol. Likewise, this reduces the number of parsers needed on the client. In general, less code means fewer software errors, lighter clients, and greater interoperability.
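To make the proxy/skeleton description concrete, here is a minimal sketch in Java (the language of the proof of concept); the interface, class names, and XML shape are hypothetical illustrations, not code from the actual implementation:

// Hypothetical interface shared by client and server code.
interface Calculator {
    int add(int a, int b);
}

// Client-side proxy: implements the same interface as the real target,
// but relays each call over the Web instead of computing locally.
class CalculatorProxy implements Calculator {
    private final String targetURI;   // URI addressing the remote object

    CalculatorProxy(String targetURI) {
        this.targetURI = targetURI;
    }

    public int add(int a, int b) {
        // Serialize the call as an XML object method message ...
        String request = "<objectMethodRequest>"
                       + "<int>" + a + "</int>"
                       + "<int>" + b + "</int>"
                       + "</objectMethodRequest>";
        // ... POST it to the target URI and parse the response.
        String response = httpPost(targetURI, request);
        return parseIntResult(response);
    }

    // HTTP transport and XML parsing elided; any HTTP client and XML
    // processor already present on the host would serve.
    private String httpPost(String uri, String xml) { return "<int>0</int>"; }
    private int parseIntResult(String xml) { return 0; }
}

Because the caller holds a Calculator reference, it cannot tell whether it is talking to the proxy or to a local implementation, which is the whole point of interface-based brokering.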
Although some proxy and firewall administrators balk at the ideas proposed in this document, this recasting of distributed object communication into HTTP POSTs (or possibly another HTTP verb name, such as INVOKE, as discussed later) of XML documents actually provides a better foundation for secured firewalls than POSTed HTML forms (i.e. mime-type application/x-www-form-urlencoded). With POSTed forms, there is no way to constrain the contents of the message entity being POSTed. The design presented in this paper is the foundation for greater security as it enables the firewall to more precisely filter POSTed documents in an application specific manner.

One of the principles of this design is that if a client needs asynchronous notification then this should be accomplished via the HTTP protocol. This implies an HTTP daemon on the client. By unifying the Web client browser with a small (code of less than 2K in size has been realized) HTTP daemon, notification can be realized without undesirable timed polling, a bandwidth wasting technique which is beginning to appear more often. See [DocumationEast] for more detail.

Simply recasting distributed computing binary file formats to XML document types is beneficial. The extensible, hierarchical nature of XML increases interoperability. Consider such constructs as Microsoft's type libraries (TypeLibs). These are privately formatted binary documents which are accessible through the Win32 TypeLib APIs. They are simply declarative "documents" which describe software components. By recasting these to XML documents (one of the XML deliverables in progress) they are open and extensible. This way there is no need for a specialized TypeLib parser on lightweight clients; the XML processor is sufficient. TypeLibs can be interlinked. This can be recast to linked XML documents.

Another candidate for syntactical translation is the Interface Definition Language (IDL). IDL documents look like procedural source code but are purely declarative documents devoid of procedural code. They are made to look like C code (with extra "attributes") because they are meant to represent interfaces to software objects. Interfaces are essentially collections of methods. Declarative documents on the Web are to be expressed in XML syntax. So IDL files and TypeLibs can both be expressed as XML documents. Indeed they are one and the same. The CORBA Interface Repository and the COM+ type information in the Registry both become a collection of interlinked XML documents available on a Web server. No longer is there an unnecessary distinction between interface description and interface repository.

Recasting interface repositories as collections of documents is one example of where the WebBroker system is actually superior to other available systems on the Web. COM+ and CORBA were designed before the days of the Web. The Web has introduced high latency and cache concerns. The TypeLib APIs and the CORBA Interface Repository APIs provide very granular information about software components through sundry methods in the APIs. The WebBroker system recasts this information as XML documents. The benefit is that only one round trip to the Repository serving host is required to fetch a document, compared to multiple calls in the case of the CORBA Interface Repository APIs. In general, the hierarchical nature of XML allows for a redesign of traditionally "chatty" services which do not adapt well to the Web environment.
Predicating the system on HTTP, URIs, and XML tightly constrains the solution set, thereby increasing interoperability. Consider the current state of IIOP. IIOP is a specific implementation derived from the General Inter-ORB Protocol (GIOP) which is based on Internet standards (e.g. IP addresses and NDR). GIOP is abstract enough that it could be used to derive, say, "Web-IOP." (The current version of the WebBroker effort is the result of an attempt to find the lowest common denominator between COM+, CORBA, and Java, so it does not precisely comply with GIOP. Any attempt to create a Web native inter-ORB protocol should be checked against GIOP.) The authors believe that Web-IOP would increase interoperability over IIOP because, given Web-IOP's constraints of HTTP, XML, and URIs, there are fewer permutations which satisfy the constraints than with IIOP.

Two brief notes on the terminology used in this document. Microsoft's system is referred to as COM+. COM+ is not currently deployed. DCOM and COM are currently deployed in Windows NT and Windows 95. COM+ is an evolution of COM/DCOM which is in flux as of this writing. It has been specified though. It has been designed to more closely model some of the features of CORBA and Java and so is more object oriented than COM/DCOM. WebBroker was designed with COM+ and not COM/DCOM in mind, and that is why the former term is used in this document. Because of the close similarities of COM+ and CORBA and Java, a cleaner abstraction can be derived. For example, COM+ introduces user defined exceptions to Microsoft's model. This is already in CORBA. Exceptions can be reported over the wire, so this affects the WebBroker specification.

The other terminology point relates to the terms "proxy" and "skeleton". This represents a mix of terms from DCOM and CORBA. Both systems use the term "stub" but with conflicting definitions, so it is not used here because of possible confusion. In WebBroker a proxy is analogous to an HTTP firewall proxy: it acts as a front for the real object. Proxies run on the client machine. Skeletons run on the server. DCOM uses the term stub where WebBroker uses skeleton.

The following are among the design requirements: HTTP is the only transport used in the WebBroker system. Note that this work is distinct from efforts which simply put a Web server and an ORB (CORBA or COM+) on the same host. A Web brokering system as defined here uses no transport protocol other than HTTP. In general distributed computing situations, the client may need to "hear" notification of asynchronous events. The client must have a network listening mechanism, a "listener." Although the argument has been made that protocols such as SMTP can be used for purposes of notification, using client side HTTP listeners is a better design choice. Low end machines, such as PalmPilots, have constraints on memory, both persistent and volatile. Most WebComputing clients will have software to function as an HTTP client. This implies that HTTP protocol issues such as parsing and writing HTTP headers will need to be handled by the "talker" part of the client. The same header processing capabilities are required of an HTTP listener. Using a completely separate protocol, such as SMTP, for such things as notification would impose unnecessary stress on the code working set and persistent memory system.
Therefore, using the same HTTP protocol to "talk" and "listen" is a better design choice in terms of reducing the number of software flaws, increasing interoperability, and lowering the bar for minimal clients. HTTP is also a protocol which many Web coders are already familiar with.

Some firewall administrators are uncomfortable with the similarity between the concepts presented in this document and "HTTP tunnelling". The idea has been presented that a new HTTP method would be helpful. Although it is computationally impossible to completely block covert communication, there is some value in having a separate HTTP method (or verb, depending on your terminology) for use in distributed object communication. This is often referred to as the "INVOKE" method. For those who are concerned about covert communication over HTTP, a Web brokering system would actually be an improvement over the current HTTP form POSTing situation. By having applications use the HTTP INVOKE method and not HTTP POST, firewalls and proxies can route HTTP GETs and POSTs quickly while controllably scrutinizing distributed object communications which use the HTTP INVOKE method in a more computationally intense fashion, i.e. filtering the documents at the firewall. Note that even with an INVOKE method, the HTTP protocol still remains stateless. WebBroker attempts to define possibly stateful services running on top of HTTP. Currently high security sites need products which detect HTTP tunneling on POSTs. The XML DTDs proposed in this submission actually allow more control, not less, because the markup tags explicitly delineate data structures in the XML documents. In previous systems, the structure of the bytes in the data packet was externally specified.

The only addressing mechanism is URIs. No COM+ OXIDs, OIDs, IPIDs. The same holds for CORBA. Note that URIs can be "urn:" prefixed UUIDs, which are currently used in DCOM and CORBA. This will ease migration.

Using XML for syntax further reduces the amount of novel code. Lightweight Web clients will likely have an XML processor. Besides not being Web native, adopting DCOM or CORBA syntax would simply increase the amount of code needed in light clients. By assuming XML, issues such as byte ordering and data formatting are already decided, therefore increasing interoperability. XML is also the foundation for data typing. XML-Data, WebSGML notations, and SGML data attributes are various mechanisms which have been worked out to express data types. Indeed, one of the deliverables of this effort is a simple mechanism for data typing which is based on XML 1.0. XML 1.0 notation attributes are used for data encoding declaration. The same primitive data types as defined in the XML-Data Note are used by WebBroker. Hooks (namespace declaration and corresponding colonized attribute names) are provided for XML-Data compatibility. Full XML-Data is not actually used since, as of this date, there is no public implementation of an XML-Data processor. The WebBroker system is upward compatible with XML-Data.

Both CORBA and COM+ recognize the interoperability value of a small set of primitive, or intrinsic, data types. WebBroker uses a set which is common to COM+, CORBA, Java, and XML-Data. The only exception is URIs. URIs are the only addressing mechanism used in WebBroker. COM+ and CORBA predate URIs and use other constructs for addressing.

This section explains some of the design decisions which were made. XML's "entity" facility is highly leveraged.
Related type description structures such as InterfaceDefs and ExceptionDefs are referenced as entities. This allows the instance syntax of the XML documents, and therefore the XML application layer code, to be abstracted from the document addressing mechanism. Entities can be declared using either a system ID or a public ID. System IDs are URLs. Public IDs can be other "ID"s. Different implementations can be based on a simple file system or more complex mechanisms. For example, Java's code space is simply implemented on top of a file system. This enables the quick adoption of legacy systems such as CORBA Interface Repositories or Microsoft TypeLibs.

The current version of WebBroker represents the very lowest common denominator in terms of features in existing systems and in terms of XML technology. There is no dependency on future or current work such as XLink, XPointer, or XML-Data, except data typing. There are two deliverables: the DTDs and a proof of concept implementation. A layered approach has been taken in the design of these DTDs. The DTDs have been designed to each handle a separate aspect of the problem. The lowest DTD, PrimativeDataTypeNotations, declares primitive data type notations (in a fashion compatible with XML-Data but not dependent on an XML-Data processor). Another DTD, AnonymousData, defines how to data-type XML elements. A third, ObjectMethodMessages, defines document types which are the serialized method calls and returns between objects. Other DTDs are defined for describing software component interfaces. The latter part is an adaptation of work by McCool and Prescod (see [McCoolAndPrescod]).

The issue of XML verbosity is proving to be a non-issue. Nonetheless, for maximum terseness each element type name in each DTD can be reduced to a single Unicode character. This has proven to be of little performance benefit. Any performance benefit is arguably overshadowed by the corresponding lack of comprehension on the part of humans reading such documents. DataChannel has produced these terse variations for completeness, but does not suggest their use at this time.

The goal of these declarations is to express in XML documents the serialized method calls and returns between software component objects. The higher level issue of interface definitions is outside the scope of these declarations. See the InterfaceDef DTD for such issues. This version is not final; it will need to be modified in order to reflect details of COM+ as it is further refined. See the WebBroker site ([WebBrokerSite]) for the latest information. These XML definitions are designed to define two types of XML documents: one for serialized object method calls and the other for serialized object method returns, which are collectively termed "object method messages." Documents complying with this DTD are expected to declare the document element type as either objectMethodRequest or objectMethodResponse. For example:

<!DOCTYPE objectMethodRequest PUBLIC "-//DataChannel//DTD ObjectMethodMessages V1.0//EN" "" >

Another example:

<!DOCTYPE objectMethodResponse SYSTEM "" >

Note that for network and parse efficiency, all the following element and attribute names can be mapped to single character tokens. This is not done here for the sake of human readability. The terse analog of this DTD is available at [TerseAnonymousData]. There is no facility for "structs" or "complex" or "composite" structures. Complex data types (corresponding to, for example, C struct) are not addressed in this level of the "stack".
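As an illustration of the intended shape of such messages, here is a hypothetical serialized call of an add(int, int) method; apart from objectMethodRequest itself, the child element names and the URI are illustrative assumptions rather than quotations from the DTD:

<?xml version="1.0"?>
<objectMethodRequest>
  <!-- the target object is addressed purely by URI -->
  <ObjectRef>http://example.org/broker/calculator</ObjectRef>
  <!-- anonymous parameters, in the order the interface declares them -->
  <int>2</int>
  <int>3</int>
</objectMethodRequest>

Note how the parameters carry no names: as described below, the proxy and skeleton are expected to know the sequencing from the method's definition.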
These declarations address only how to serialize data into an XML document. The construction of complex data types from a sequence of primitives is the responsibility of the marshalling code. Such issues are addressed in the InterfaceDef DTD. This DTD deals only with serialization of data. It is assumed that the proxies and skeletons on both ends of the connection know how to marshal and unmarshal the primitive data types into (possibly) higher level constructs. This is analogous to DCE RPC, where Request and Response PDUs do not have named data structures.

Note that in the ObjectRef element there is no explicit support for "reference counting." Reference counting does not scale well on the Web and complicates the client and protocol. The Microsoft Transaction Server (MTS) employs a more scalable model where "state" is separated from implementation, allowing an object to have control over its own "liveness" without being controlled by reference counts in other processes. Not only does the lack of reference counting make the server more scalable, it also keeps the clients and protocol simple.

A later version of this DTD may have the ability to name the data being marshaled. This would be useful for situations where parameters are optional and so need to be named to be correctly identified. For this version, correctly sequencing data into a document is the responsibility of the proxy and skeleton. Correctly behaving proxies and skeletons can be algorithmically generated from the InterfaceDef documents.

The AnonymousData DTD defines structure for assigning data typing attributes to character data in element content, a simple way to represent data types in XML 1.0 documents. AnonymousData is used as a helper DTD for the ObjectMethodMessages DTD and for the interface typing DTDs (InterfaceDef et alia). The data-types defined in XML-Data (see [XMLData]) are used. XML-Data's schema facilities are not used. Instead, the XML 1.0 DTD facilities are used as much as possible. Data typing is not achievable in straight XML 1.0. For maximum flexibility, AnonymousData allows any sequence of data-types to be expressed. It is the responsibility of higher level DTDs to constrain these definitions to a particular sequence of data typed elements. This DTD has only primitive data-types and arrays, no complex data-types.

This DTD is named AnonymousData because the element type names are the same as the data type, i.e. the data is anonymous; it is not "named." CORBA/IIOP and COM+ do not name data as it goes across the wire. Rather, their data marshallers simply "know" where data element boundaries are supposed to occur in the byte stream; the knowledge of data type sequencing is in the method definitions. Denoting the method ID in the serialized method message is sufficient to allow a lookup on the interface definition for the proper method, which defines the proper data typed information sequencing. The parallel with CORBA and COM+ is maintained in order to maximize possible interaction. XML is character based and structurally self-describing. The boundaries of the data elements can be represented as XML open and close tags. This is useful for variable sized data. This design also allows for quick adoption of existing systems (e.g. CORBA and DCOM). A simple XML 1.0 processor can be used with a small amount of code in the application layer to perform data typing.
Of course, integrating the processor and data typer is more efficient, but at least this way the system is easier to reproduce with current standard technologies. The AnonymousData DTD has been designed to handle the most common data-types which are expressed in the most widely deployed distributed object systems. The following table maps data-type schemas between these various systems. The first two columns are taken from [ComToJavaMap]. The CORBA mapping is taken from [ComToCorbaMap]. The other columns are added for clarity.

The following Ole Automation types have not yet been implemented: CY, DATE, VARIANT, SAFEARRAY. They might be eventually, but were not necessary during the proof of concept development. The following Ole Automation types are not expressed: IDispatch, IUnknown. Rather, they are expressed as COM+ object references, not DCOM structures. DCOM structures can be expressed in a DTD on a higher level than this low level primitive data-typing DTD.

Some types may be null. Null is indicated by an empty element of the appropriate type, e.g.:

<string />

Arrays appear as follows:

<intArray length="2"><int>432908</int><int>0</int></intArray>

OR

<intArray />

OR

<intArray></intArray>

The first is a normal int array. The second is null occurring in the place of an intArray. The third is an intArray of length zero. As per normal XML 1.0 markup minimization, the dt:dt attribute is not explicitly included because it has a default value declared in the DTD. It was decided not to have an ENTREF attribute which could be used to signify &null;. There are some data types which cannot evaluate to null:

boolean is always an empty element which always has a non-defaulted value attribute. booleans are never null after interpretation. They may appear in the form:

<boolean value="true" />
<boolean value="true"></boolean>

char can never be null, so an empty element means it should be interpreted as the character whose code is zero.

The length attribute is currently required on all array element types. As an aside, this boils down to the same issue as chunked streams and the HTTP Content-Length header. Declaring the length is nice (easier parser memory allocation on the read side because the length is known at the start of the array read), but mandatorily having to calculate it can be expensive in terms of memory for small machines; the entire array needs to be held in memory to determine the value which needs to be assigned to the length attribute in the open tag. Perhaps this could be made optional but strongly recommended. Having an explicit length attribute for strings may seem unnatural to anyone experienced with SGML, but it helps (dumb) data marshallers and reduces the amount of code which needs to be written in order to Web enable legacy code.

For network and parse efficiency, all the data typing element and attribute names can be mapped to single character name tokens. For the sake of readability this is not done in AnonymousData, but the technique is explicitly mapped out in TerseAnonymousData.

The InterfaceDef DTD (and its related DTDs ModuleDef, ExceptionDef, and TypeDef) is used to define software component interfaces and the messages which can pass between them. This functionality is less well defined than ObjectMethodMessages. InterfaceDef documents correspond to both IDL files and interface repository information in other systems. InterfaceDef documents can be used to generate proxy and skeleton implementation code.
A design goal of these related XML document types is to correspond to existing interface definitions and the APIs for accessing those definitions (e.g. COM TypeLibs accessed through APIs such as ITypeLib, ITypeInfo, and ITypeComp, and also the CORBA Interface Repositories and their access APIs). The novelty and value in the WebBroker system is the use of XML for syntax and the Web conscious design. Because the Web can have long, slow connections between hosts, it is desirable to minimize the number of client to server round trips. Traditionally in LAN-based distributed computing it is acceptable (even encouraged) to have very high granularity APIs which fetch small pieces of information from the server. On the Web it is desirable to pack as much related information as possible in a single round trip. This must be balanced against a modular design. The balance point is also constrained by the nature of XML mechanics.

InterfaceDef documents can be compared to Microsoft TypeLibs. A TypeLib is a binary document which corresponds to an XLink group link to (and embedding of) sundry related type definition documents, similar to how an HTML page may have links to its graphics and other related sub parts. In the WebBroker system, a client machine will download a desired "TypeLib" root document, keep the connection open, read the "TypeLib" to discover what other documents need to be downloaded, and after doing so it will then create the in-memory representation of the "TypeLib". (Technically that amounts to multiple round trips, but an HTTP persistent connection makes that inexpensive.) Perhaps Microsoft could even expose these documents through the existing Win32 TypeLib APIs. Another benefit of this design is that by packing up the type information into a collection of XML documents, only the relevant parts need be downloaded, resulting in efficient federating as opposed to bringing down large type repositories. In this way the interface repository (or NT Registry type information) is (controllably) exposed to the Web. This Web conscious design demonstrates the value of XML as the syntax of the Web's APIs.

The concept of Exceptions has been assumed for reporting faults, even for the COM environment. Microsoft is heading in that direction, so there is no need to propagate non-object oriented HRESULTs. The Microsoft Java Virtual Machine already expresses HRESULT error codes as instances of the class com.ms.com.COMException. The COMException is what is transmitted over the Web by WebBroker.

One of the design goals of this effort is to enable a simple client and a simple protocol. For client-side simplicity, one design option is to allow method parameters to only be marshaled in. The same information that would be fetched via out-marshalled parameters can be obtained by defining a struct which contains all the desired information and using that as the return type. The only information marshaled out would be the return value (or an exception). This way the client-side proxy does not have to map data structures which are marshaled both in and out.

There is a direct mapping between CORBA's meta object facility, WebBroker, and COM+. ParameterDefs do not occur as independent document roots. Rather, they occur as elements which are contained in an interfaceDef. Applying the "lowest common denominator" design goal means that there are no such things as AttributeDefs, because Java does not support them well.
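To suggest what such an interface description might look like, here is a hypothetical fragment; every element name, attribute, and identifier below is an illustrative stand-in, since the actual DTD is not reproduced in this note:

<interfaceDef name="Calculator" id="urn:uuid:...">
  <methodDef name="add">
    <!-- parameterDefs occur inside the interfaceDef, not as roots -->
    <parameterDef name="a" type="int"/>
    <parameterDef name="b" type="int"/>
    <returnType type="int"/>
    <!-- related definitions referenced as entities, per the design above -->
    <exceptionRef>&OverflowExceptionDef;</exceptionRef>
  </methodDef>
</interfaceDef>

A proxy/skeleton generator would walk such a document to emit the marshalling code that writes and reads the corresponding object method messages.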
In general, many element names defined in these documents are of the form "XxxxDef" and "XxxxRef", where XxxxDef is the definition of some structure and XxxxRef is a reference to a XxxxDef element with the same name prefix: e.g. ExceptionDef and ExceptionRef. In a situation such as DCOM where the included defs are actually in a legacy format such as TypeLibs, you might want to use a notation declaration for transitional purposes, say:

<!NOTATION MSTYPELIB PUBLIC "Microsoft TypeLib v2" >

This has not been done here. TypeLibs, or some subset of their information, should be algorithmically migratable to the WebBroker system. See WG4_N1958 for a precedent of this style of identification. Note that the XML namespace facility is not relevant to this issue. XML namespaces and software code module namespaces are completely unrelated, and one cannot be used to express the other.

A proof of concept was implemented as a Java servlet. Servlets are currently deployable in many Web servers. HTTP POST was used since there is no INVOKE method in HTTP 1.1. Note that many servlet engines are flaky. JRun is the best implementation that DataChannel has experimented with. It is freely downloadable from Live Software Inc. A number of open issues have been identified.

This section outlines identified potential deliverables of any standards effort. Recent efforts have separated specification of architecture from specification of implementation details (binary API interfaces). For example, in the W3C there is the XML Working Group (architecture) and the DOM Working Group (API). In CORBA, there is the IIOP spec (architecture) for how the protocol looks on the wire and there is the binary interface (API) specification which defines how an ORB and an application relate to each other within a machine. The same patterns could be used for a WebBroker effort. An effort could be made to specify the network protocol and another effort to specify the binary interface between the broker and an application. For example, consider the issue of client side callbacks. The protocol architecture spec should say something like "each object on a host is required to have a unique ID scoped to the host address." The implementation spec should say something like "For simple clients, the interface IUuidGenerator is specified (with the following IDL interface ...and the following semantics ...and the following name scope for locating the service on the system...) to be used by client objects to generate unique IDs. For more sophisticated clients the following interface ISubURITreeNameNegotiator is specified (...various details...) for use in negotiating sub URL trees which the interface implementor shall guarantee will prevent host-wide URL namespace collisions. This interface enables the client application to negotiate the URL subtree it is to be notified on.". In general, this specification would be concerned with defining the software environment in Web browsers and Web servers.

The first deliverable of a Working Group is a process document defining the context of the effort. Goals and non-goals need to be specified. For example, full interoperability of existing COM+ and CORBA components would probably not be a goal. Yet a Web broker system should allow a disciplined programmer to use a Web brokering system to communicate between a COM+ server and a CORBA server using a lowest common denominator of functionality. Issues such as implementability and dependencies would need to be defined.
For example, research has shown that the only hard requirement not yet satisfied by a W3C standard, or a standard endorsed by the W3C, is primitive data typing. This is the only as yet identified unsatisfied dependency. Desired and unacceptable features would need to be surveyed. The first non-process oriented deliverable should be a level 0 specification analogous to the work in the DOM Working Group. Indeed, one requirement of the effort should probably be that client side software should be implementable on top of the DOM (even if that would not be the most efficient possible implementation). Level 0 should have no optional features. This should define the minimal foundation for a Web native distributed object communication architecture. A target environment would be defined along with the interfaces to the required services.
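For a sense of how the servlet-hosted proof of concept mentioned above could receive such messages, here is a hypothetical skeleton endpoint; the class name and XML handling are illustrative, not taken from the freely available source:

import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

// Hypothetical skeleton endpoint: receives POSTed objectMethodRequest
// documents and writes back an objectMethodResponse.
public class BrokerServlet extends HttpServlet {
    public void doPost(HttpServletRequest req, HttpServletResponse res)
            throws ServletException, IOException {
        // Read the POSTed XML document.
        BufferedReader in = req.getReader();
        StringBuffer body = new StringBuffer();
        String line;
        while ((line = in.readLine()) != null)
            body.append(line);

        // A real skeleton would parse the body with an XML processor,
        // look up the target object by URI, and dispatch on the method ID.
        res.setContentType("text/xml");
        PrintWriter out = res.getWriter();
        out.println("<objectMethodResponse><int>0</int></objectMethodResponse>");
    }
}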
http://www.w3.org/TR/1998/NOTE-webbroker-19980511/
CC-MAIN-2015-48
en
refinedweb
Hey, I'm reading a C++ tutorial from cprogramming.com right now, the lesson about structures. I see an example of pointers in structures:

Code:
#include <iostream>

using namespace std;

struct xampl
{
    int x;
};

int main()
{
    xampl structure;
    xampl *ptr;

    structure.x = 12;
    ptr = &structure;
    cout << ptr->x;
    cin.get();
}

And I'm confused about that part:

Code:
cout << ptr->x;

What do we use that for, if I can just write

Code:
cout << structure.x;

instead of the pointer?
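For context, the arrow syntax starts to pay off once you only have a pointer to work with - for example, when a structure is handed to a function by its address. A small sketch (the doubleIt function is made up, not from the tutorial):

Code:
#include <iostream>

using namespace std;

struct xampl
{
    int x;
};

// The function receives only a pointer, so ptr->x is the
// only way to reach the member.
void doubleIt(xampl *ptr)
{
    ptr->x *= 2;
}

int main()
{
    xampl structure;
    structure.x = 12;
    doubleIt(&structure);         // no copy made; the original is modified
    cout << structure.x << endl;  // prints 24
}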
http://cboard.cprogramming.com/cplusplus-programming/148610-i-feel-like-stupid-pointers.html
CC-MAIN-2015-48
en
refinedweb
Re: Localizing Large Numbers of Variables in a DynamicModule

<< With[{args = {x1, x2, x3, x4, x5}},
     DynamicModule[args,
       foo @@ args;
       goo @@ args;
       ...
     ]
   ] >>

Yes, this allows the arguments to be referred to within the body of the "With", but the question really came in two parts, the first relating to the ability to externally group these variables and the second, to make these variables available to all functions during the evaluation of the DynamicModule. It is the second part that seems to be the challenge. As the following examples show, in some specialized cases these local variables can be made available without explicit passing but, in general, DynamicModule's scoping is not sufficiently "Block-like" for this to occur. In particular, the scoping seems to be a "Module-Block hybrid" that depends on whether or not a Dynamic subsequently wraps around a declared local variable somewhere in the DynamicModule's initial evaluation.

The rest of this post illustrates the comments above and has some observations about DynamicModule's scoping and updating behaviour, particularly in relation to segregated code. From this, the basic conclusions I've drawn are as follows:

* DynamicModule local variables can sometimes (but unreliably) be Block-like in relation to their local variables in the initial execution, Module-like in terms of generating new variables and DynamicModule-like in their subsequent behaviour in the Front-end.
* Mixing Block & DynamicModule can lead to unexpected behaviour, especially with any variable shadowing.
* Despite the resulting code-bloat and inconvenience, passing all dynamic variables to nested functions appears safer, mainly due to more predictable scoping and predictable updating.
* Even with passing all DynamicModule variables there are some subtleties that distinguish it from "With", "Module" and "Block".
* Much flexibility would be gained if we had "Meta Scoping" - the ability to functionally add scoped variables to DynamicModule code blocks. This would not only eliminate the aforementioned code-bloat but assist with engineering and controlling Dynamic updating, maintain Syntax Coloring and finally also allow more flexible and interactive "Dynamic Code" generation.

Just to recap the initial question and its motivation. The DynamicModule construct is very useful for creating interfaces whose dynamic variables are made spatially local. Most of the examples of DynamicModule I've observed, however, seem to include the body of the code (lexically) within the DynamicModule definition (i.e. directly between the "[" and "]"). This simplifies initialization, scoping and intended Dynamic updating. In designing more complex interfaces, however, this lexical placement quickly becomes unwieldy and one way of managing (and parallelizing) this complexity is to instead place function calls within the body of the DynamicModule. These functions could, for example, be thought of as corresponding to different components in an overarching interface. This process, however, significantly affects the initialization, scoping and intended updating of these DynamicModule variables, and one of the points of the following is to see what these effects are and how they can be managed.
Hence in some of these "toy examples" the demonstrated idioms may appear unnecessary or unmotivated, but it needs to be remembered that the point here is to consider the usefulness of the idioms for large numbers of variables and functions, and where any global variables can be considered to be in the "Private`" namespace of an associated package. Firstly, here is the idiom I'm after:

VARS = {x1, x2, x3, x4};
f[] := "Interface Component A that accesses some or all of VARS";
g[] := "An Interface Component B that accesses some or all of VARS";
DynamicModule[
 VARS,
 OverArchingInterface[
  f[],
  g[]
  ]
 ] (* NOT VALID *)

So essentially I want Block's "temporary globalization" to apply to DynamicModule's local variables so that other evaluated functions (f and g) have access to these variables even if they are not explicitly passed. The "With idiom" of insertion without evaluation, as initially suggested in the reply, certainly allows one to define all the arguments in the one place but, in general (as will be shown), this does not produce Block-like scoping. One might wonder why one would want this Block-like behaviour, since all the arguments can be passed at once (and in some situations that is good programming practice - especially when not done in a package namespace - while the extra overhead of passing a variable space is negligible). For example,

ARGS = {x1, x2};
f1[args_] := With[{x1 = args[[1]], x2 = args[[2]]},
   {Slider2D[Dynamic[{x1, x2}]], {Dynamic@x1, Dynamic@x2}}];
With[{args = ARGS}, DynamicModule[args, f1[args]]]

The point, though, is when VARS contains a large number of variables that are required by many functions, often deeply placed. In these cases the variables need to eventually be extracted from within the nested function and hence, in a sense, the unwieldiness has just been deferred. At first, the Block-like behaviour appears to already be present:

ARGS = {x1, x2};
f2[] := {Slider2D[Dynamic[{x1, x2}]], {Dynamic@x1, Dynamic@x2}};
With[{args = ARGS}, DynamicModule[args, f2[]]]

That is, both x1 and x2 were not passed explicitly but remain completely local to the DynamicModule. That is, if you copy and paste this output, the 2D sliders are not linked via global variables x1, x2. This, however, does not work in general. Since this is a scoping issue of DynamicModule, it is easier to observe without the "With" clutter (which also adds complications in any initializations). Take the following:

Clear@x1;
g1[] := x1;
f3[] := Dynamic@x1;
DynamicModule[{x1}, {g1[], f3[]}]
---> {x1$$, FE`x1$$14}

In the evaluation of DynamicModule's body, the locally declared variable is cast as local if it is wrapped in a Dynamic. Otherwise it is cast as a global variable. Initializing with some values can make this clearer:

Clear@x1;
g1[] := x1;
x1 = 1;
f4[] := Dynamic@x1;
DynamicModule[{x1 = 0}, {g1[], f4[]}]
--> {1, 0}

That is, within g1 the local variable is cast as global, whereas within f4 the Dynamic wrapper ensures that x1 is cast to the local DynamicModule variable. Hence, in order to ensure that a variable is cast as the local DynamicModule variable, it must be wrapped in a Dynamic wrapper and evaluated with the initial DynamicModule evaluation. Take the following:

f5[] := {Slider[Dynamic@x], {Dynamic@x}};
DynamicModule[{x}, {f5[], Dynamic[f5[]]}]

The output shows two sliders with their respective linked variables. The first behaves as desired, with x being cast to the locally declared DynamicModule variable.
The second slider, however, while initially showing the same functionality, is actually using a global value of x without any casting taking place. This can be confirmed by copying and pasting the output to another part of the notebook and observing that the second sliders are coupled. This is because casting doesn't take place in the Dynamic[f5[]] case: the f5[] expression is not immediately evaluated due to Dynamic's HoldAll attribute. In this case, it can be avoided by pushing the first Dynamic "deeper" in order to ensure that the local variables are seen in the initial evaluation, thus enabling casting.

f6[] := Dynamic[{Slider[Dynamic@x], {Dynamic@x}}];
DynamicModule[{x}, f6[]]

A more fundamental problem, however, is that since the scope of a local variable depends on its initial evaluation, this is not something a programmer will either want or sometimes be able to trace. In particular, it can depend on the logical control flow, which can lead to unexpected results. For example, in the following the intention is for z to be cast as a local DynamicModule variable.

g2[] := {SetterBar[Dynamic@z, {"z1", "z2", "z3"}], Dynamic@z};
f7[] := Dynamic[If[x, "nothing", g2[]]];
DynamicModule[{x = True, z}, {Checkbox[Dynamic@x], f7[]}]

At first, the output may appear to work as intended; however, note that the Dynamic@z term in g2 is not seen in the initial evaluation of the DynamicModule. This means that the variable z remains global, not being cast to its local DynamicModule namesake. This can be observed by placing another copy of the output into another cell, unchecking both checkboxes and observing that the two SetterBars are linked via this global variable. This seems to pretty much be the death knell for any hopes for Block-like behaviour from DynamicModule. One can try to co-opt Block's behaviour directly, but as the following examples show (apart from some unexpected Block variable interaction), this doesn't provide a solution either:

g2[] := {SetterBar[Dynamic@z, {"z1", "z2", "z3"}], Dynamic@z};
f7[] := Dynamic[If[x, "nothing", g2[]]];
Block[{x, z}, DynamicModule[{x = True, z}, {Checkbox[Dynamic@x], f7[]}]]

nor does

g2[] := {SetterBar[Dynamic@z, {"z1", "z2", "z3"}], Dynamic@z};
f7[] := Dynamic[If[x, "nothing", g2[]]];
DynamicModule[{x = True, z}, Block[{x = False, z}, {Checkbox[Dynamic@x], f7[]}]]

Curiously, in the last example, the inner Block initializations do not override the outer DynamicModule's. There are other possibilities that may simulate this behaviour - the use of PreRead to pre-parse the list of arguments and the use of Interpretation - but all seem to get stuck on Mathematica's lexical scoping.

Therefore it seems that trying to create a Block-like variable space is not really viable under the current DynamicModule scoping, since such scoping depends on the control flow in the evaluation of the body of the DynamicModule. Hence, unfortunately, it appears that one is forced to adopt the potentially unwieldy process of passing all relevant arguments to subfunctions in order to avoid this automatic casting. Ultimately, then, the issue here seems to be Mathematica's exclusive use of lexical scoping rather than what I'll coin "meta scoping".
Adding "meta scoping" to DynamicModule (with Refresh integration) - the ability to functionally assign variable scoping to a function (say via its Attribute property) - would, I think, do several things - it would allow greater flexibity in organizing and managing argument spaces, the maintainenance of syntax coloring, the production of more readable code and the ability to interactively fashion dynamic interfaces. Even with performing normal argument passing however, there are several points worth noting in relation to DynamicModule local variable behaviour. Some of these are related to finding workarounds to the earlier examples and hence are added here for completeness The position of a wrapping Dynamic can DynamicModule's scope and lead to unintended behaviour. Imagine in the following f represents - say an updateable sub-inteface f8[x_] := {Slider[Dynamic@x], {Dynamic@x}}; DynamicModule[{x}, Dynamic[f8[x]]] Moving the right-hand slider produces an error about not being able to assign to a raw object - in this case the current local value of the DynamicModule x. This is due to to, during a dynamical update of f8, x's lastest value of 0 being inserted into Dynamic[x]. Further movement of this slider results in an assignment being attempted to this number instead of x. As before, this can be resolved by placing placing the Dynamic more deeply. f8[x_] := Dynamic[ {Slider[Dynamic@x], {Dynamic@x}}]; DynamicModule[{x}, f8[x]] Another point in modularizing code for dynamic interfaces is that initialising DynamicModule local variables can lead to premature evaluations. For example, take the following DynamicModule[{x = True}, {Checkbox[Dynamic@x], Dynamic@x}] to be broken down idiomatically as follows" f9[x_] := {Checkbox[Dynamic@x], Dynamic@x}; DynamicModule[{x = True}, f9[x]] Clicking the CheckBox creates an error as the box attempts to change the protected "True" Symbol. One straightforward way of dealing with this is to create an Initialization wrapper that performs any initializations after the interface has been created (N.B. this collects all the initializations in one spot - ControlObjects can perform their own initializations if distributing these in the code is not a problem) f9[x_] := {Checkbox[Dynamic@x], Dynamic@x}; Initializations[x_] := {x = True}; DynamicModule[{x}, With[{pre = f9[x]}, Initializations[x]; pre]] Even without initializations within the DynamicModule variable declarations, a premature evaluation can result. This can be managed, with the normal HoldAll Attribute setting.For example, recall the earlier example, that demonstrated how logical dependencies in the DynamicModule's body can affect the scope of its local variables. To avoid this with variable passing one might at a first pass try the following g2a[z_] := {SetterBar[Dynamic@z, {"z1", "z2", "z3"}], Dynamic@z}; f7a[x_, z_] := Dynamic[If[x, "nothing", g2a[z]]]; DynamicModule[{x, z}, {Checkbox[Dynamic@x], With[{pre = f7a[x, z]}, x = True; pre]}] Now de-select the checkbox to reveal the "z-setter" and then try and set z to z1. You'll get that familiar complaint about the setting of raw objects. This is because this value of z is being inserted into the setter bar. 
This can be delayed by giving g2a the HoldAll attribute:

g2a[z_] := {SetterBar[Dynamic@z, {"z1", "z2", "z3"}], Dynamic@z};
SetAttributes[g2a, HoldAll];
f7a[x_, z_] := Dynamic[If[x, "nothing", g2a[z]]];
DynamicModule[{x, z}, {Checkbox[Dynamic@x], With[{pre = f7a[x, z]}, x = True; pre]}]

And the output works with all variables being suitably localized (which again can be checked by the decoupling of any pasted copies).

Ron

Szabolcs Horvát <szhorvat at gmail.com> wrote:

I really don't understand what you are trying to achieve here ... but perhaps you will find some inspiration in this:

With[{args = {x1,x2,x3,x4,x5}},
 DynamicModule[args,
  foo@@args;
  goo@@args;
  ...
  ]
 ]

With[] will just substitute {x1,x2,x3,x4,x5} for every instance of args before it evaluates its body.

Szabolcs
http://forums.wolfram.com/mathgroup/archive/2008/Feb/msg00674.html
CC-MAIN-2015-48
en
refinedweb
> On 5/22/2009 9:13 AM mohsen rahmanian apparently wrote:
>> When I use gnuplot-py on windows xp with python 2.5.1 and run demo.py I get this:
>> IOError: [Errno 22] Invalid argument

On 5/22/2009 5:34 PM Brian Connell apparently wrote:
> I saw this problem when a long sequence of gnuplot commands was being passed into another function as a parameter. The IOError was complaining that the parameter was not valid for some reason.

Well, out of curiosity, I started from scratch on Win XP:
- download Gnuplot.py version 1.8 from sourceforge
- install under Python 2.5.4 (setup.py install)
- change gp_win32.py to point to my copy of pgnuplot.exe (note the name!)
- run demo.py

No problems at all.
Cheers,
Alan Isaac

On 5/22/2009 9:13 AM mohsen rahmanian apparently wrote:
> When I use gnuplot-py on windows xp with python 2.5.1 and run demo.py
> I get this:
> IOError: [Errno 22] Invalid argument

Brian Connell recently ran into this. Hopefully he will post his solution.
Alan Isaac

Dear users,
When I use gnuplot-py on windows xp with python 2.5.1 and run demo.py I get this:

gnuplot> set terminal windows
gnuplot> set title "A simple example"
gnuplot> set data style linespoints
gnuplot> plot "c:\\docume~1\\ef634~1.pas\\locals~1\\temp\\tmpm5ee2g.gnuplot" notitle
Please press return to continue...

Traceback (most recent call last):
  File "C:\Program Files\Paive-1.0\Lib\site-packages\Gnuplot\demo.py", line 110, in <module>
    demo()
  File "C:\Program Files\Paive-1.0\Lib\site-packages\Gnuplot\demo.py", line 36, in demo
    g.reset()
  File "C:\Program Files\Paive-1.0\lib\site-packages\Gnuplot\_Gnuplot.py", line 366, in reset
    self('reset')
  File "C:\Program Files\Paive-1.0\lib\site-packages\Gnuplot\_Gnuplot.py", line 210, in __call__
    self.gnuplot(s)
  File "C:\Program Files\Paive-1.0\lib\site-packages\Gnuplot\gp_win32.py", line 130, in __call__
    self.write(s + '\n')
IOError: [Errno 22] Invalid argument

What is my problem? Thanks a lot
--
Best Regards
Rahmanian

On 5/20/2009 6:26 PM Brian Connell apparently wrote:
> the persist option is not supported on Windoze.

Aha.

> I have printed out the script variable passed in, but my
> untrained eye can not see a problem.

Put it in a file and load it in gnuplot itself to see how gnuplot responds. I'm afraid I'm not currently using Gnuplot.py, so I cannot be much more help.
Cheers,
Alan Isaac

You should not need any of your path tweaks. It looks like Gnuplot imports fine. Start up an interactive interpreter. At the interpreter prompt, enter:

import Gnuplot

What happens?
Cheers,
Alan Isaac

Been trying awhile, read as much as I can (all FAQs, README's, etc), and need help. Basically I cannot get the 'import Gnuplot' statement to work in a python script. Details as follows.

Working in Windows XP Professional. Installed gnuplot.py 1.8, by unzipping to a directory, and from that directory running:

python setup.py install

When running the setup, got a warning about "licence" being deprecated in favor of "license"; but everything seems to have installed/copied OK. Have the following directory in place now (with the proper capital "G" which was a problem for some):

C:\Program Files\Python26\Lib\site-packages\Gnuplot

Edited the gp_win32.py file to point to where pgnuplot.exe is located on my PC. Ran all the demo scripts, and got all the demo graphs properly. Assuming gnuplot and numpy are installed correctly. Added the site-packages directory to my PATH.
My problem is that when I run a python script that tries to "import Gnuplot" it gets an error. For instance my gpxplot.py script prints out this after trying to import Gnuplot:

gnuplot.py is not found
Exception AttributeError: "GnuplotProcess instance has no attribute 'gnuplot'" in <bound method GnuplotProcess.__del__ of <Gnuplot.gp_win32.GnuplotProcess instance at 0x0127C170>> ignored
Exception AttributeError: "Gnuplot instance has no attribute 'gnuplot'" in <bound method Gnuplot.__del__ of <Gnuplot._Gnuplot.Gnuplot instance at 0x0127C148>> ignored

Did some additional reading about python installs, and found that often a *.pth file is required to specify the path to the directory which contains the module, so following the conventions of some other packages I have installed, I created a Gnuplot.pth file in the site-packages directory with a single line as follows (since the paths seem to be relative to the directory the .pth file is in):

Gnupath

Still no luck. The "import Gnuplot" still fails. Anyone got any ideas? Even if it is next steps for debug, which I suspect needs to be around the area of how python locates modules in files on disk (I am not a Python expert) ... Here is the gpxplot.py script; the failing import is in the plot_in_gnuplot function. Any help appreciated. ... Brian
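A minimal interactive check along these lines (a sketch for this Python 2-era setup; nothing here is specific to gnuplot-py beyond the import itself) can show which paths Python searches and which package actually gets loaded:

# Run in the interactive interpreter:
import sys
print sys.path            # the site-packages directory should be listed here

import Gnuplot            # raises ImportError if the package cannot be found
print Gnuplot.__file__    # shows exactly which Gnuplot/__init__.py was loaded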
http://sourceforge.net/p/gnuplot-py/mailman/gnuplot-py-users/?viewmonth=200905
CC-MAIN-2015-48
en
refinedweb
When designing a Scala library, you can partition the services offered by the library into traits. This gives your users more flexibility: they can mix into each class only the services they need from your library. For example, in ScalaTest you create a basic suite of tests by mixing in any of several core Suite traits:

class MySuite extends Suite        // a basic suite of test methods
class MySuite extends FunSuite     // a basic suite of test functions
class MySpec extends Spec          // a BDD-style suite of tests
class MySpec extends FeatureSpec   // a higher level BDD-style suite
class MySuite extends FixtureSuite // a suite of tests that take a fixture parameter

You can also mix in any of several other traits that add additional, or modify existing, behavior of the core trait:

class MySuite extends FunSuite with ShouldMatchers with EasyMockSugar with PrivateMethodTester

Although splitting a library's behavior into composable parts with traits gives users flexibility, one downside is that users may end up repeatedly mixing together the same traits, resulting in code duplication. Users can easily eliminate this duplication, however, by creating a convenience trait that mixes together the behavior they prefer, and then mixing that convenience trait into their classes instead. For example:

trait ProjectSpec extends WordSpec with ShouldMatchers with EasyMockSugar

class OneSpec extends ProjectSpec
class TwoSpec extends ProjectSpec
class RedSpec extends ProjectSpec with PrivateMethodTester
class BlueSpec extends ProjectSpec

Besides just mixing together the things they need, users can add value inside their convenience traits. For example, projects often require many tests that share a common fixture, such as a connection to a database containing a clean set of test data. This could be addressed simply by creating yet another trait that mixes in ProjectSpec and adds the needed pre- and post-test behavior:

trait DBProjectSpec extends ProjectSpec with BeforeAndAfterEach {

  override def beforeEach() {
    // initialize the database and open the connection
  }

  override def afterEach() {
    // clean up the database and close the connection
  }
}

Now test classes that need a database can simply mix in DBProjectSpec:

class MySpec extends DBProjectSpec

Although the ease of behavior composition afforded by traits is very useful, it has some downsides. One downside is that name conflicts are difficult to resolve. A user can't, for example, mix together two traits that contain methods with signatures that cause an overload conflict. Still another downside is that it is slightly awkward to experiment with the services offered by a trait in the Scala interpreter, because before the trait's services can be accessed, it must be mixed into some class or object. Both of these downsides can be addressed by making it easy to import the members of a trait as an alternative to mixing them in. Scala has two features that make it easy to offer users this choice between mixins and imports. First, Scala allows users to import the members of any object. Second, Scala allows traits to have a companion object---a singleton object that has the same name as its companion trait. You implement the selfless trait pattern simply by providing a companion object for a trait that itself mixes in the trait. Here's a simple example of the selfless trait pattern:

trait Friendly {
  def greet() { println("hi there") }
}

object Friendly extends Friendly

Trait Friendly in this example has one method, greet.
It also has a companion object, named Friendly, which mixes in trait Friendly. Given this friendly design, client programmers of this library can access the services of Friendly either via mixin composition, like this:

object MixinExample extends Application with Friendly {
  greet()
}

Or by importing the members of the Friendly companion object, like this:

import Friendly._

object ImportExample extends Application {
  greet()
}

Although the external behavior of MixinExample is the same as ImportExample, when MixinExample invokes greet it is calling greet on itself (i.e., on this), but when ImportExample invokes greet, it is calling greet on the Friendly singleton object. This is why being able to fall back on an import allows users to resolve name conflicts. For example, a user would not be able to mix the following Functional trait into the same class as Friendly:

trait Functional {
  def greet: String = "hi there"
}

Because Friendly and Functional's greet methods have the same signature but different return types, they will not overload if mixed into the same class:

object Problem extends Application with Friendly with Functional // Won't compile

By contrast, the offending method can be renamed on import, like this:

import Friendly.{greet => sayHi}

object Solved extends Application with Functional {
  sayHi()
  println(greet)
}

A good real-world example is trait ShouldMatchers from ScalaTest, which follows the selfless trait pattern. I expect the most common way ShouldMatchers will be used is by mixing it into test classes, often by means of a convenience trait. Here's an example:

import org.scalatest.WordSpec
import scala.collection.mutable.Stack
import org.scalatest.matchers.ShouldMatchers

class StackSpec extends WordSpec with ShouldMatchers {

  "A Stack" should {

    "pop values in last-in-first-out order" in {
      val stack = new Stack[Int]
      stack.push(1)
      stack.push(2)
      stack.pop() should be === 2
      stack.pop() should be === 1
    }

    "throw NoSuchElementException if an empty stack is popped" in {
      val emptyStack = new Stack[String]
      evaluating { emptyStack.pop() } should produce [NoSuchElementException]
    }
  }
}

Occasionally, however, users may want to experiment with the ShouldMatchers syntax in the Scala interpreter, which they can do with an import:

scala> import org.scalatest.matchers.ShouldMatchers._
import org.scalatest.matchers.ShouldMatchers._

scala> Map("hi" -> "there") should (contain key ("hi") and not contain value ("dude"))

scala> List(1, 2, 3) should have length 2
org.scalatest.TestFailedException: List(1, 2, 3) did not have length 2
        at org.scalatest.matchers.Matchers$class.newTestFailedException(Matchers.scala:148)
        at org.scalatest.matchers.ShouldMatchers$.newTestFailedException(ShouldMatchers.scala:2318)
        at org.scalatest.matchers.Matchers$ResultOfHaveWordForSeq.length(Matchers.scala:2891)
        at .<init>(<console>:7)
        at .<clinit>(<console>)
        ...

The reason this is called the selfless trait pattern is that it generally only makes sense to do this with traits that don't declare a self type. (A self type is a more specific type for this that restricts what the trait can be mixed into.) One other way in which traits that follow this pattern are "selfless" is that rather than forcing users to mix in their services, they instead give users a choice: either mixing in the trait or importing the members of its companion object.
If you are designing a Scala library that has a trait that doesn't declare a self type, consider implementing the selfless trait pattern by creating a companion object that mixes in the trait.

Have a question or opinion about the selfless trait pattern? Discuss this article in the Articles Forum topic, Scala's Selfless Trait Pattern. The ScalaTest examples shown in this article are taken from ScalaTest 1.0.
http://www.artima.com/scalazine/articles/selfless_trait_patternP.html
CC-MAIN-2015-48
en
refinedweb
Details

- Type: Improvement
- Status: Closed
- Priority: Trivial
- Resolution: Fixed
- Affects Version/s: None
- Fix Version/s: None
- Component/s: Java - Compiler
- Labels: None
- Patch Info: Patch Available

Description

Generated Java code for Thrift structs should have doc strings from the .thrift file. The information is all there, it's just not made part of the generated code. In the default Java generator, the doc strings would go on the field declarations. In the bean-style generator, the doc string would go on both the generated getters and setters.

Issue Links

- is depended upon by THRIFT-147 Ruby generated classes should include class doc strings - Closed

Activity

Forgive me, but where's the partial implementation? I see javadoc for structs, interfaces, and service methods. This issue is supposed to be specifically about getting doc strings on the struct fields.

You're right. Sorry for not reading the title carefully enough.

This patch adds docstrings as described in the issue.

This is a much better version that factors out some common logic to t_oop_generator.

Please use braces with all if statements. Please don't put "using namespace" in a header file. Please use a separate line for each argument in the definition of generate_docstring_comment. I think the changes to ThriftTest are unnecessary because of DocTest.thrift. Do you think it makes sense to copy the full docstring for each getter and setter?

This version handles all your comments. As far as whether we should add the docstring to both getter and setter, what would the alternative be? Choosing one or the other is pretty arbitrary. There's not really much cost to adding it to both.

I made this (semi-)independent change:;a=commitdiff;h=2e045a6

I'm planning to commit it on its own. This allowed me to make this change to v3:;a=commitdiff;h=0350c04

The result is this diff, which I am planning on committing as a separate diff from the first:;a=treediff;h=0350c04;hp=2e045a6

Thoughts?

This looks about right to me. I say commit it.

Already partially implemented in r665256.
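As a concrete illustration (a hypothetical example, not taken from the patches above), a commented field in the Thrift IDL:

/** Unique numeric identifier for the user. */
1: i64 id

would be expected to produce, in the default generator, a Javadoc comment on the field declaration, and in the bean-style generator, the same comment on both accessors:

/** Unique numeric identifier for the user. */
private long id;

/** Unique numeric identifier for the user. */
public long getId() { return this.id; }

/** Unique numeric identifier for the user. */
public void setId(long id) { this.id = id; }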
https://issues.apache.org/jira/browse/THRIFT-179
CC-MAIN-2015-48
en
refinedweb
"Serge E. Hallyn" <serge@hallyn.com> writes:> Quoting Eric W. Beiderman (ebiederm@xmission.com):>> From: Eric W. Biederman <ebiederm@xmission.com>>> >> Signed-off-by: Eric W. Biederman <ebiederm@xmission.com>>> --->> fs/attr.c | 8 ++++---->> fs/exec.c | 10 +++++----->> fs/fcntl.c | 6 +++--->> fs/ioprio.c | 4 ++-->> fs/locks.c | 2 +->> fs/namei.c | 8 ++++---->> include/linux/quotaops.h | 4 ++-->> 7 files changed, 21 insertions(+), 21 deletions(-)>> >> @@ -2120,7 +2120,7 @@ void do_coredump(long signr, int exit_code, struct pt_regs *regs)>> if (__get_dumpable(cprm.mm_flags) == 2) {>> /* Setuid core dump mode */>> flag = O_EXCL; /* Stop rewrite attacks */>> - cred->fsuid = 0; /* Dump root private */>> + cred->fsuid = GLOBAL_ROOT_UID; /* Dump root private */>> Sorry, one more - can this be the per-ns root uid? The coredumps should> be ok to belong to privileged users in the namespace right?I'm not certain it was clear when you were looking at this thatthis is about dumping core from suid applications, not normalapplications. Looking at the code in commoncap and commit_creds it looks like it is abug that we don't call set_dumpable(new, suid_dumpable) in common capwhen we use file capabilities. I might be wrong but I think we escapethe test in commit_creds in that case.Having thought about it we can make this per namespace but not inthis patch.Things that I see as missing.- We likely need to make the suid_dumpable sysctl per namespace. There is a prctl so it is already per process.- We would need to capture the user_namespace at mm creation time, during exec, so we know which root user we could use. By it's nature we know an mm can't escape a user namespace so the user namespace an mm is created in will have a root user we can dump core as.I was wondering if we could relax this to a uid captured at mm creationtime (and certainly we can capture the root user), but there are enoughweird cases I don't think it is possible to safely allow anything morerelaxed that the root of the user_namespace that created the mm.I don't believe we can't use the user_namespace of current because theapplication may have been suid and then cloned a new user namespacekeeping the mm or perhaps just the uid/euid split.So in short it is doable but a little tricky so it doesn't belong ina patch with a bunch of boring and safe conversions.Eric
https://lkml.org/lkml/2012/4/20/543
CC-MAIN-2015-48
en
refinedweb
Configuration manager for freshbooks-cli

freshbooks-cli is a command-line interface to the FreshBooks API. freshbooks-cli-config implements the config subcommand for freshbooks-cli.

--key, -k [String] - A configuration key to operate on. If --value is not set, the current value will be written to STDOUT. Keys are namespaced and delimited by ':'.
--value, -v [String] - Save a new value to the specified key.
--file, -f [Path] - Explicitly specify the configFile to operate on. If not set, $HOME/.freshbooks will be used.
--edit, -e - Manually edit configuration with $EDITOR
--help, -h - Display this message

# Set the Freshbooks API base url
$ freshbooks-config -k api:url -v ""

# Set the Freshbooks API version
$ freshbooks-config -k api:version -v 2.1

# Print the current Freshbooks API version to STDOUT
$ freshbooks-config -k api:version

# Edit configuration using a text editor
$ freshbooks-config --edit

You can (and should!) use this interface to manage user configuration within your own freshbooks-cli plugins.

config = require 'freshbooks-cli-config'

# Reading the global defaults
config.defaults

# Getting the current freshbooks_config file path
config.configFile()

# Retrieving the nconf object
nconf = config.getConf()

# Retrieving a value
nconf.get 'namespace:key'

The nconf object returned using configFile() has already loaded the freshbooks_config file, env overrides, and the global defaults (in that order of precedence). The default freshbooks_config file is ~/.freshbooks, which can be changed by setting the environment variable freshbooks_config=PATH

The test suite is implemented with nodeunit and nixt. To rebuild & run the tests:

$ git clone
$ cd freshbooks-cli-config
$ npm install
$ grunt test

You can use grunt watch to automatically rebuild and run the test suite when files are changed. Use npm link from the project directory to tell freshbooks-cli to use your modified freshbooks-cli-config during development. To contribute back, fork the repo and open a pull request with your changes.
https://www.npmjs.com/package/freshbooks-cli-config
CC-MAIN-2015-48
en
refinedweb
player_audio_seq_item Struct Reference

Player audio sequence item.

#include <player_interfaces.h>

Collaboration diagram for player_audio_seq_item:

Detailed Description

Player audio sequence item. This describes a single sequence element; the link field is used for chord-type playback when a series of notes are to be played together. Set link to true for all but the last of the notes that are to be played together.

The documentation for this struct was generated from the following file: player_interfaces.h
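As a rough usage sketch (an illustration only: the C typedef name player_audio_seq_item_t and the array-of-items layout are assumptions, since this page documents only the link field):

#include <player_interfaces.h>

/* Mark all but the last note of a chord so they are played together. */
void mark_chord(player_audio_seq_item_t *items, int n)
{
    int i;
    for (i = 0; i < n; i++)
        items[i].link = (i < n - 1);  /* true for every note except the last */
}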
http://playerstage.sourceforge.net/doc/Player-cvs/player/structplayer__audio__seq__item.html
CC-MAIN-2015-48
en
refinedweb
Since its birth, the Java programming language has become very popular, covering many technical fields such as the Internet, Android applications, back-end applications, and big data. Therefore, the performance analysis and tuning of Java applications is also a very important topic. The performance of Java applications is directly related to the access carrying capacity of many large e-commerce websites and the data processing capacity of big data systems, and its performance analysis and tuning can often save a lot of hardware cost.

5.1 JVM Basics

5.1.1 JVM Introduction

JVM is the abbreviation of Java Virtual Machine, which is realized by simulating various computer functions on an actual computer. With the Java virtual machine, the Java programming language lets Java applications run on different operating system platforms without being recompiled. The Java programming language shields information specific to the underlying operating system platform by using the Java virtual machine to ensure the platform compatibility of compiled applications, so that a Java application can be deployed and run on different operating systems by compiling it into object code (bytecode) that runs on the Java virtual machine.

In essence, the Java virtual machine can be regarded as a program and process running on the operating system. After starting, the Java virtual machine begins to execute the instructions saved in bytecode files. Its internal structure is shown in figure 5-1-1.

Figure 5-1-1

In JDK 1.8 (Java 8) and later versions, some small changes have taken place in the internal composition of the JVM, as shown in figure 5-1-2.

Figure 5-1-2

5.1.2 Class Loader

The class loader is responsible for loading compiled .class bytecode files into memory so that the JVM can instantiate or otherwise use the loaded classes. The class loader supports dynamic loading at runtime. Dynamic loading can save memory space and allows classes to be loaded flexibly; when loading classes over the network, the separation of classes can be realized through the separation of namespaces, which enhances the security of the whole system.

Class loaders are divided into the following types:

- Bootstrap class loader: the bootstrap class loader is the bottom loader, responsible for loading all Java bytecode files in the rt.jar file of the JDK. As shown in figure 5-1-3, the rt.jar file is generally located in the JRE directory of the JDK; it stores the core bytecode files of the Java language itself. Java's own core bytecode files are generally loaded by the bootstrap class loader.

Figure 5-1-3

- Extension class loader: it is responsible for loading the jar packages of extension functions into memory. Generally responsible for loading the bytecode files in the lib/ext directory of the JRE, or in the location specified by the -Djava.ext.dirs system property.

- System class loader: responsible for loading into memory the bytecode class libraries in the directories specified by the system classpath (java -classpath or the -Djava.class.path property). Usually, Java programs written by programmers themselves are also loaded by this loader.

The process of loading classes by the class loader is shown in figure 5-1-4, which also describes the whole life cycle of a class bytecode file.

Figure 5-1-4

The detailed description of the class loader loading process is shown in Table 5-1.

Table 5-1 Detailed description of the class loader loading process
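A small sketch (illustrative only; the exact loader names printed vary by JDK version) shows this hierarchy from application code:

public class LoaderDemo {
    public static void main(String[] args) {
        // Walk up the parent chain starting from the loader of this class.
        ClassLoader cl = LoaderDemo.class.getClassLoader();
        while (cl != null) {
            System.out.println(cl);   // system loader, then extension loader
            cl = cl.getParent();
        }
        // Core classes such as String are loaded by the bootstrap loader,
        // which is implemented natively and is therefore reported as null.
        System.out.println(String.class.getClassLoader());
    }
}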
5.1.3 Java Virtual Machine Stack and Native Method Stack

The Java virtual machine stack is the memory model of Java method execution. It is thread private and directly related to threads; each time a new thread is created, the JVM assigns a corresponding Java stack to that thread. The Java stack memory areas of different threads cannot directly access each other, which guarantees thread safety during concurrent operation.

Every time a method is called, the Java virtual machine stack generates a stack frame for the method. When the method is called, the stack frame is pushed (commonly called pushing onto the stack), and when the method returns, the stack frame is popped and discarded (commonly called popping off the stack). The stack frame stores local variables, the operand stack, dynamic links, intermediate operation results, method return values and other information. The process of each method being called and completed corresponds to the process of a stack frame being pushed into and popped out of the virtual machine stack. The life cycle of the virtual machine stack is the same as that of the thread, and the local variables stored in the stack frames end with the end of the thread.

The native method stack is similar to the Java virtual machine stack; it mainly stores the status and information of native method calls, and exists to facilitate the JVM calling native methods through the native method interface.

Common stack related exceptions are as follows:

- StackOverflowError: commonly known as stack overflow. This error occurs when the stack depth exceeds the stack size allocated to the thread by the JVM. When a method is called recursively and cannot exit, a stack overflow error easily occurs.

- OutOfMemoryError: the detailed error message is generally "Exception in thread "main" java.lang.OutOfMemoryError: unable to create new native thread". The memory size of the Java virtual machine stack is allowed to expand dynamically; when a thread requests stack space but the memory has run out and can no longer be expanded, an OutOfMemoryError is thrown.
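A minimal sketch that triggers the first error - unbounded recursion keeps pushing stack frames until the thread's stack (sized by the -Xss option) is exhausted:

public class StackOverflowDemo {
    static int depth = 0;

    static void recurse() {
        depth++;
        recurse();               // no exit condition: each call adds a stack frame
    }

    public static void main(String[] args) {
        try {
            recurse();
        } catch (StackOverflowError e) {
            System.out.println("Stack overflowed at depth " + depth);
        }
    }
}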
5.1.4 Method Area and Metadata Area

The method area is what we often call the permanent generation area. It stores Java class information, the constant pool, static variables and other data. The memory occupied by the method area is shared by threads in the JVM.

In JDK 1.8 and later versions, the method area has been removed and replaced by a metadata area in native memory: the metadata information of classes is stored directly in native memory managed by the JVM. It should be noted that this native memory is not part of the virtual machine runtime data area, nor is it a memory area defined in the Java virtual machine specification. The constant pool, static variables and other data are stored in the Java heap. The main purpose of this change is to reduce full GCs caused by loading too many classes.

5.1.5 Heap Area

Java is an object-oriented development language, and the JVM heap is the memory area that actually stores Java object instances; it is shared by all threads. Therefore, Java programs need to solve synchronization and thread safety problems when instantiating objects and performing other operations on the heap.

The Java heap can be subdivided into the young generation area and the old generation area. The young generation can be further subdivided into the Eden space area, the From Survivor space area and the To Survivor space area, as shown in figure 5-1-5.

Figure 5-1-5

The heap is the memory area where GC garbage collection occurs most frequently, so it is also a key area for JVM performance tuning.

The internal structure of the Java heap is described in table 5-2.

Table 5-2 Description of the internal structure of the Java heap

The direction indicated by the arrows in figure 5-1-5 above represents how data moves during generational garbage collection of the JVM heap. Objects are saved in the Eden space area just after being created, and long-lived objects are transferred to the old generation through the survivor spaces. For some large objects (which require a large contiguous memory space to be allocated), they enter the old generation area directly. This usually happens when the memory in the survivor areas is insufficient.

In JDK 1.7 and earlier versions, the composition of the shared memory area of the JVM is shown in figure 5-1-6.

Figure 5-1-6

In JDK 1.8 and later versions, the composition of the shared memory area of the JVM is shown in figure 5-1-7.

Figure 5-1-7

5.1.6 Program Counter

The program counter is an indicator that records the location of the bytecode instruction being executed by a thread. The .class bytecode files loaded into JVM memory are interpreted and executed by the bytecode interpreter, which reads bytecode instructions in order. After each instruction is read, it is converted into the corresponding operation, and branch, loop, condition judgment and other flow control is carried out according to these operations.

Because a program is generally executed by multiple threads, and the JVM's multiple threads execute by rotating CPU time slices (that is, threads switch and execute in turn, with CPU execution time distributed by a time-slice algorithm), a thread may be suspended during execution because its time slice is exhausted, while another thread obtains the time slice and starts execution. When the suspended thread gets a CPU time slice again, if it wants to continue execution from where it was suspended, it must know where it last executed. In the JVM, the program counter is used to record the execution position of a thread's bytecode instructions. Therefore, the program counter is thread private and thread isolated; each thread has its own program counter at runtime.

In addition, if a native method is executed, the value of the program counter is null. This is because a native method is executed by Java directly calling a native C/C++ library through JNI (the Java Native Interface); a method implemented in C/C++ naturally cannot produce corresponding .class bytecode (it executes as C/C++ code), so the Java program counter has no value at this time.

5.1.7 Garbage Collection

The Java language is different from many other programming languages in that memory reclamation during program execution does not require the developer to manually reclaim and release memory in code; the JVM reclaims memory automatically. During memory reclamation, object instances that are no longer used are removed from memory to free up more memory space. This process is often referred to as the JVM garbage collection mechanism.
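To observe these collections in practice, GC logging can be enabled at JVM startup (an illustrative command line; app.jar is a placeholder):

java -verbose:gc -jar app.jar     # classic flag: prints a line per GC event
java -Xlog:gc* -jar app.jar       # JDK 9+ unified logging equivalent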
Garbage collection is generally called GC; young generation garbage collection is generally called minor GC, and old generation garbage collection is generally called major GC or full GC.

Garbage collection is so important because it is usually accompanied by the suspension of the application. Generally, when garbage collection occurs, except for the threads required by the GC itself, all other threads enter a waiting state until GC execution is completed. Therefore, the main goal of GC tuning is to reduce the pause time of applications.

Common JVM garbage collection algorithms include the root search algorithm, the mark-sweep algorithm, the copying algorithm and the mark-compact algorithm.

1. Root search algorithm

The root search algorithm lets the garbage collection thread view all the reference relationships of the application as a graph, and starts from a set of nodes called GC roots (in English: "a garbage collection root is an object that is accessible from outside the heap", that is, an object that can be accessed from outside the heap). Starting from such a node, it continues to find the nodes that node references; when all reachable nodes have been found, the remaining nodes are considered to be unreferenced, that is, useless nodes, and garbage collection is then performed on these nodes. As shown in figure 5-1-8, the nodes with darker colors (instance object 6, instance object 7 and instance object 8) are nodes that can be garbage collected, because these nodes are no longer referenced.

Figure 5-1-8

In the introduction on the IBM website (diagnostics.memory.analyzer.doc/gcroots.html), the objects that can serve as GC root nodes in the JVM include:

Class: A class that was loaded by the bootstrap loader, or the system class loader. For example, this category includes all classes in the rt.jar file (part of the Java™ runtime environment), such as those in the java.util.* package.

JNI local: A local variable in native code, for example user-defined JNI code or JVM internal code.

JNI global: A global variable in native code, for example user-defined JNI code or JVM internal code.

Thread block: An object that was referenced from an active thread block.

Thread: A running thread.

Busy monitor: Everything that called the wait() or notify() methods, or that is synchronized, for example by calling the synchronized(Object) method or by entering a synchronized method. If the method was static, the root is a class, otherwise it is an object.

Java local: A local variable. For example, input parameters, or locally created objects of methods that are still in the stack of a thread.

Native stack: Input or output parameters in native code, for example user-defined JNI code or JVM internal code. Many methods have native parts, and the objects that are handled as method parameters become garbage collection roots. For example, parameters used for file, network, I/O, or reflection operations.

Finalizer: An object that is in a queue, waiting for a finalizer to run.

Unfinalized: An object that has a finalize method, but was not finalized, and is not yet on the finalizer queue.

Unreachable: An object that is unreachable from any other root, but was marked as a root by Memory Analyzer so that the object can be included in an analysis. Unreachable objects are often the result of optimizations in the garbage collection algorithm. For example, an object might be a candidate for garbage collection, but be so small that the garbage collection process would be too expensive.
In this case, the object might not be garbage collected, and might remain as an unreachable object. By default, unreachable objects are excluded when Memory Analyzer parses the heap dump. These objects are therefore not shown in the histogram, dominator tree, or query results. You can change this behavior by clicking File > Preferences… > IBM Diagnostic Tools for Java – Memory Analyzer, then selecting the Keep unreachable objects check box.

The explanation given on the website is shown in figure 5-1-9.

Figure 5-1-9

Finally, we can summarize the GC roots as follows:

(1) Instance objects referenced in the JVM virtual machine stack.
(2) Objects referenced by static attributes in the method area (only for JVMs before JDK 1.8; since there is no method area after JDK 1.8, static attributes are stored directly in the heap).
(3) Objects referenced by static constants in the method area (only for JVMs before JDK 1.8; since there is no method area after JDK 1.8, static constants are stored directly in the heap).
(4) Objects referenced in the native method stack (mostly used in JNI interface calls).
(5) Objects held by the JVM itself, such as the bootstrap class loader, the system class loader, etc.

The other GC algorithms mentioned below basically all build on the concept of the root search algorithm.

2. Mark-sweep algorithm

The mark-sweep algorithm scans from the GC roots to mark the surviving object nodes. After marking, it scans the whole memory area for unmarked objects and reclaims them directly. Since the mark-sweep algorithm does not move or defragment the surviving objects after marking, it easily causes memory fragmentation. However, because only non-surviving objects are processed, when there are many surviving objects and few non-surviving objects, the performance of the mark-sweep algorithm is very high.

Figure 5-1-10

3. Copying algorithm

The copying algorithm scans from the root set and copies the surviving objects into an idle area. After the active area has been scanned, all the memory in the active area is reclaimed at once, and the original active area becomes the idle area, as shown in figure 5-1-11. The copying algorithm divides memory into two sections; all dynamically allocated instance objects can only be allocated in one section (which then becomes the active section), while the other section stays idle. This operation is repeated on every GC, so one area is always idle.

Figure 5-1-11

4. Mark-compact algorithm

The mark-compact algorithm marks and clears objects in the same way as the mark-sweep algorithm, but after reclaiming the space occupied by non-surviving objects, it moves all surviving objects to the free space at the left end and updates the corresponding memory node pointers, as shown in figure 5-1-12. The mark-compact algorithm builds on the mark-sweep algorithm and additionally moves and compacts the surviving objects. Although the performance cost is higher, it solves the problem of memory fragmentation. If the memory fragmentation problem is not solved, then once a large object instance needs to be created, the JVM may not be able to allocate a contiguous large memory block for it, resulting in a full GC. In garbage collection, full GC should be avoided as much as possible, because once a full GC occurs, the application will generally pause for a long time to wait for the full GC to complete.

Figure 5-1-12
In order to optimize the performance of garbage collection, the JVM uses generational collection. It mainly adopts the copying algorithm for young generation collection (minor GC), while old generation collection (major GC / full GC) mostly adopts the mark-compact algorithm. When optimizing garbage collection, the most important thing is to reduce the number of old generation garbage collections, because old generation garbage collection takes a long time, its performance cost is very high, and it has a great impact on the operation of the application.

5.1.8 Parallelism and Concurrency

Parallelism and concurrency are often mentioned in concurrent program development. The difference between parallelism and concurrency in garbage collection is as follows:

- Parallelism: the JVM starts multiple garbage collection threads to work in parallel, but at this time, the user threads (the working threads of the application) need to remain in the waiting state.

- Concurrency: the user threads (the working threads of the application) and the garbage collection threads execute at the same time (but not necessarily in parallel; they may execute alternately). The user threads can continue to run while a garbage collection thread runs on another CPU core, without interfering with each other.

To be continued. Author: Zhang Yongqing, please indicate the source when reprinting: from Blog Garden. This article is excerpted from Software Performance Test Analysis and Tuning Practice.
https://developpaper.com/software-performance-test-analysis-and-tuning-practice-performance-analysis-and-tuning-of-java-applications-excerpts-from-manuscripts/
CC-MAIN-2022-33
en
refinedweb
_rotr — rotate an unsigned integer to the right

Synopsis:

#include <stdlib.h>
unsigned int _rotr( unsigned int value, unsigned int shift );

Description:

The _rotr() function rotates the unsigned integer, value, to the right by the number of bits specified in shift. If you port an application using _rotr() between a 16-bit and a 32-bit environment, you'll get different results because of the difference in the size of integers.

Returns:

The rotated value.

Example:

#include <stdio.h>
#include <stdlib.h>

unsigned int mask = 0x1230;

void main()
{
    mask = _rotr( mask, 4 );
    printf( "%04X\n", mask );
}

produces the output:

0123

Classification: WATCOM

See Also: _lrotl(), _lrotr(), _rotl()
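Where portability across integer sizes matters, a width-explicit rotate can be written directly (a sketch, not part of the Watcom library):

#include <stdint.h>

/* Rotate a 32-bit value right by n bits; n is masked to the range 0..31
   so that a shift count of 32 (undefined behaviour in C) is never produced. */
static uint32_t rotr32( uint32_t value, unsigned n )
{
    n &= 31;
    return( ( value >> n ) | ( value << ( (32 - n) & 31 ) ) );
}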
https://users.pja.edu.pl/~jms/qnx/help/watcom/clibref/src/_rotr.html
CC-MAIN-2022-33
en
refinedweb
How to Add Push Notifications to a Flutter App Using Firebase Cloud Messaging

This tutorial will only deal with configuration for the Android platform.

First, what are push notifications? Push notifications are a sort of pop-up messaging medium that alerts app users to what's going on in the app. They are also an important way to amplify user engagement in your app. For example, say a user forgets about the app once they have installed it. Then you can use push notifications as a mechanism to regain and retain their interest. Push notifications also help drive traffic to your app.

Firebase Cloud Messaging is a service offered by Firebase which lets you send these notifications to your users. You can set up various configurations to send different notifications to different audiences based on time and routine. Because of all these benefits, we are going to use it to send notifications to our Flutter app.

Step 1: Create a Flutter Project

First, we are going to create a Flutter project. For that, we must have the Flutter SDK installed in our system. You can find simple steps for Flutter installation in the official documentation. After you've successfully installed Flutter, you can simply run the following command in your desired directory to set up a complete Flutter project:

flutter create pushNotification

After you've set up the project, navigate inside the project directory. Execute the following command in the terminal to run the project in either an available emulator or an actual device:

flutter run

After a successful build, you will get the following result in the emulator screen:

Step 2: Integrate Firebase Configuration with Flutter

In this step, we are going to integrate Firebase services with our Flutter project. But first, we need to create a Firebase project. The setup guidelines are also provided in the official Firebase documentation for Flutter. To create a Firebase project, we need to log in to Firebase and navigate to the console. There we can simply click on 'Add a project' to get our project started. Then a window will appear asking to input the project's name. Here, I've kept the project name as FlutterPushNotification, as shown in the screenshot below:

We can continue to the next step when the project has been created. After the project has been set up, we will get a project console as shown in the screenshot below:

Here, we are going to set up Firebase for the Android platform. So we need to click on the Android icon displayed in the above screenshot. This will lead us to the interface to register Firebase with our Flutter app project.

Step 3: Register Firebase to Your Android App

As the registration process is platform-specific, we are going to register our app for the Android platform. After clicking on the Android icon, we will be directed to an interface asking for the Android package name. In order to add the package name of our Flutter project, we need to locate it first. The package name will be available in the ./android/app/build.gradle file of your Flutter project. You will see something like this:

com.example.pushNotification

We just need to copy it and paste it to the Android package name input field as shown in the screenshot below:

After that, we can simply click on the 'Register app' button. This will lead us to the interface where we can get the google-services.json file which will link our Flutter app to Firebase Google services. We need to download the file and move it to the ./android/app directory of our Flutter project.
The instructions are also shown in the screenshot below:

Step 4: Add Firebase Configurations to Native Files in your Flutter Project

Now, in order to enable Firebase services in our Android app, we need to add the google-services plugin to our Gradle files. First, in our root-level (project-level) Gradle file (android/build.gradle), we need to add rules to include the Google Services Gradle plugin. We need to check if the following configurations are available or not:

buildscript {
  repositories {
    // Check that you have the following line (if not, add it):
    google()  // Google's Maven repository
  }
  dependencies {
    ...
    // Add this line
    classpath 'com.google.gms:google-services:4.3.4'
  }
}

allprojects {
  ...
  repositories {
    // Check that you have the following line (if not, add it):
    google()  // Google's Maven repository
    ...
  }
}

If not, we need to add the configurations as shown in the code snippet above. Now, in our module (app-level) Gradle file (android/app/build.gradle), we need to apply the Google Services Gradle plugin. For that, we need to add the following line to the ./android/app/build.gradle file of our project:

// Add the following line:
apply plugin: 'com.google.gms.google-services'  // Google Services plugin

android {
  // ...
}

Now, we need to run the following command so that some automatic configurations can be made:

flutter packages get

With that, we have successfully integrated Firebase configurations with our Flutter project.

Step 5: Integrate Firebase Messaging with Flutter

First, we need to add the firebase-messaging dependency to the ./android/app/build.gradle file. In the file, we need to add the following dependencies:

dependencies {
  implementation "org.jetbrains.kotlin:kotlin-stdlib-jdk7:$kotlin_version"
  implementation 'com.google.firebase:firebase-messaging:20.1.0'
}

Next, we need to add an action and a category as an intent-filter within the activity tag in the ./android/app/src/main/AndroidManifest.xml file:

<intent-filter>
  <action android:name="FLUTTER_NOTIFICATION_CLICK" />
  <category android:name="android.intent.category.DEFAULT" />
</intent-filter>

Now, we need to create a Java file called Application.java in the path /android/app/src/main/java/<app-organization-path>. Then, we need to add the code from the following code snippet inside it:

package io.flutter.plugins.pushNotification;

import io.flutter.app.FlutterApplication;
import io.flutter.plugin.common.PluginRegistry;
import io.flutter.plugin.common.PluginRegistry.PluginRegistrantCallback;
import io.flutter.plugins.GeneratedPluginRegistrant;
import io.flutter.plugins.firebasemessaging.FirebaseMessagingPlugin;
import io.flutter.plugins.firebasemessaging.FlutterFirebaseMessagingService;

public class Application extends FlutterApplication implements PluginRegistrantCallback {
  @Override
  public void onCreate() {
    super.onCreate();
    FlutterFirebaseMessagingService.setPluginRegistrant(this);
  }

  @Override
  public void registerWith(PluginRegistry registry) {
    FirebaseMessagingPlugin.registerWith(registry.registrarFor("io.flutter.plugins.firebasemessaging.FirebaseMessagingPlugin"));
  }
}

Now, we need to assign this Application class to the application tag of the AndroidManifest.xml file as shown in the code snippet below:

<application android:name=".Application" ...>

This completes our setup of the Firebase messaging plugin in the native Android code. Now, we'll move on to the Flutter project.

Step 6: Install the Firebase Messaging Package

Here, we are going to use the firebase_messaging package, which you can find here.
For that, we need to add the plugin to the dependencies option of the pubspec.yaml file. We need to add the following line of code to the dependencies option:

firebase_messaging: ^7.0.3

Step 7: Implement a Simple UI Screen

Now, inside the MyHomePage stateful widget class of the main.dart file, we need to initialize the FirebaseMessaging instance and some constants as shown in the code snippet below:

// Requires: import 'package:firebase_messaging/firebase_messaging.dart';
String messageTitle = "Empty";
String notificationAlert = "alert";
FirebaseMessaging _firebaseMessaging = FirebaseMessaging();

The messageTitle variable will receive the notification message title, and notificationAlert will be assigned the action that's been completed once the notification comes up. Now, we need to apply these variables to the build function inside the Scaffold widget body as shown in the code snippet below:

Widget build(BuildContext context) {
  return Scaffold(
    appBar: AppBar(
      title: Text(widget.title),
    ),
    body: Center(
      child: Column(
        mainAxisAlignment: MainAxisAlignment.center,
        children: <Widget>[
          Text(
            notificationAlert,
          ),
          Text(
            messageTitle,
            style: Theme.of(context).textTheme.headline4,
          ),
        ],
      ),
    ),
  );
}

Next, we need to run the Flutter application by executing the following command in the project terminal:

flutter run

We will get the result you see in the image below:

For now, the notification title is empty, and the alert is also as defined. We need to assign a proper value to them once we receive the notification message. So we need to configure the code to receive the notification and use the notification message to display it on the screen. For that, we need to add the code from the following code snippet in the initState function:

@override
void initState() {
  // TODO: implement initState
  super.initState();
  _firebaseMessaging.configure(
    onMessage: (message) async {
      setState(() {
        messageTitle = message["notification"]["title"];
        notificationAlert = "New Notification Alert";
      });
    },
    onResume: (message) async {
      setState(() {
        messageTitle = message["data"]["title"];
        notificationAlert = "Application opened from Notification";
      });
    },
  );
}

Here, we have used the configure method provided by the _firebaseMessaging instance, which in turn provides the onMessage and onResume callbacks. These callbacks provide the notification message as a parameter. The message response will hold the notification object as a map object. The onMessage function triggers when the notification is received while we are running the app. The onResume function triggers when we receive the notification alert in the device notification bar and open the app through the push notification itself. In this case, the app can be running in the background or not running at all.

Now we are all equipped with the Flutter app. We just need to configure a message in Firebase Cloud Messaging and send it to the device.

Step 8: Create a Message from the Firebase Cloud Messaging Console

First, we need to go back to the Cloud Messaging console on the Firebase site as shown in the image below:

Here, we can see the 'Send your first message' option in the window, as we have not configured any messages before. We need to click on it, which will lead us to the following window:

Here, we can enter the title, text, image, and name of the notification. The title we set here will be provided as the title in the message object in the callbacks we set up before in the Flutter project. After setting the required fields, we can click on 'Next', which will lead us to the following window:

Here, we need to provide our target app and click on 'Next'.
For Scheduling we can keep the default option: Next, the Conversion window will appear which we can keep as default as well, and then click on the ‘Next’ button. Lastly, a window where we need to enter the custom data will appear in which we can set the title and click_action. This click action event is triggered whenever we click on the notification that appears in the notification bar of the device. After clicking on the notification message from the notification bar, the app will open and the onResume callback will trigger, setting title as assigned in the custom data in the screenshot below: Now, we are ready to send the first notification message to the device. First, let’s try it with the device running in the emulator. As we click on the ‘Review’ button and send the message, we will get the following result in the Cloud Messaging console as well as the emulator: Here, we can see that the title and the notification alert on the emulator screen are updated as soon as we send a message from the console. We can be sure that the onMessage callback was triggered in the app after receiving the notification message. Now let’s try with the app running in the background. As we send the message from the console, we will get the result as shown in the demo below: Here, as soon as we send the message, we receive a push notification in the notification bar of the device. Then, as we drag down the notification bar, we can see the notification message title and text. And, by clicking on the notification message, we can launch the application and display the custom data on the screen. This ensures that our onResume callback was triggered. And we’re done! We have successfully added a push notification feature in our Flutter application using Firebase Cloud Messaging. Conclusion Push notifications are essential in any app. They can be used to alert users to what’s going on in the app, and can help drive users’ interest back to the app. Additionally, Firebase Cloud Messaging makes sending notification alerts much simpler and easier. In this tutorial, we started by configuring the Firebase app and then moved on to the setup and implementation of the Firebase messaging configuration in the Flutter app. Lastly, we were able to send notification alerts to the app using Firebase Cloud Messaging. The tutorial was meant to be simple and easy to understand. Hope that it helps you add push notification to your Flutter apps. Want to see some examples of how you can implement all this? Check out these powerful Flutter templates.
https://envo.app/how-to-add-push-notifications-to-a-flutter-app-using-firebase-cloud-messaging/
CC-MAIN-2022-33
en
refinedweb
On 4/10/06, Steven Bethard <steven.bethard at gmail.com> wrote: > On 4/10/06, Guido van Rossum <guido at python.org> wrote: > > Are there other proto-PEPs being worked on? I would appreciate if the > > authors would send me a note (or reply here) with the URL and the > > status. > > This is the Backwards Incompatibility PEP discussed earlier. I've > submitted it for a PEP number, but haven't heard back yet: > > > I like this! I hope it can be checked in soon. > This is potentially a Python 2.6 PEP, but it has some optional > extensions for Python 3000 and may be relevant to the > adaptation/overloading/interfaces discussion. It proposes a make > statement such that: > make <callable> <name> <tuple>: > <block> > would be translated into the assignment: > <name> = <callable>("<name>", <tuple>, <namespace>) > much in the same way that the class statement works. I've posted it > to comp.lang.python and had generally positive feedback. I've > submitted it for a PEP number, but I haven't heard back yet: > > > I don't like this. It's been proposed many times before and it always ends up being stretched until it breaks. Also, I don't like the property declaration use case; IMO defining explicit access method and explicitly defining a property makes more sense. In particular it bugs me that the proposed syntax indents the access methods and places them in their own scope, while in fact they become (unnamed) methods. Also, I expect that the requirement that the accessor methods have fixed names will make debugging harder, since now the function name in the traceback doesn't tell you which property was being accessed. I expect that the PEP will go forward despite my passive aggressive negativism; there are possible rebuttals for all of my objections. But I don't have to like it. I wish the community efforts for Python 3000 were focused more on practical things like the effects of making all strings unicode, designing a bytes datatype, a new I/O stack, and the view objects to be returned by keys() etc. These things need thorough design as well as serious prototyping efforts in the next half year. -- --Guido van Rossum (home page:)
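For readers unfamiliar with the class-statement translation referenced in the proposal above, Python already exposes the same pattern directly through the three-argument form of type(); a minimal illustrative sketch (the class name and attributes here are made up):

class Point:
    x = 0
    y = 0

# ... is roughly equivalent to:
Point = type("Point", (object,), {"x": 0, "y": 0})

# The proposed make statement generalizes this translation:
#   make <callable> <name> <tuple>: <block>
# becomes
#   <name> = <callable>("<name>", <tuple>, <namespace>)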
https://mail.python.org/pipermail/python-3000/2006-April/000704.html
CC-MAIN-2022-33
en
refinedweb
Answer: The program in Python is as follows:

import random
secretNum = random.randint(1,10)
userNum = int(input("Take a guess: "))
while(userNum != secretNum):
    print("Incorrect Guess")
    userNum = int(input("Take a guess: "))
print("You guessed right")

Explanation: This imports the random module. This generates a secret number. This prompts the user to take a guess. This loop is repeated until the user guesses right. This is executed when the user guesses right.

Explanation: A parameter is basically a variable which is defined by a method/function. When a method/function is called, the parameter receives the value. In any method/function, a parameter works as the recipient. The value we pass to a method/function when it is invoked is called an argument. In any method/function, an argument works as the agent. Example:

// here p1 and p2 are parameters in the method
fun test(p1, p2) {
    return p1 + p2;
}

// here 2 and 6 are arguments in the method call
test(2, 6)
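For comparison, the same parameter/argument distinction in Python (the language of the first answer); the function name and values here are illustrative:

def add(p1, p2):  # p1 and p2 are parameters
    return p1 + p2

result = add(2, 6)  # 2 and 6 are arguments
print(result)  # prints 8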
https://answer-ya.com/questions/351209-in-three-to-five-sentences-describe-how-good-e-mail-work-habits.html
CC-MAIN-2022-33
en
refinedweb
20 Python Gem Libraries Buried In the Installation Waiting To Be Found Get to know Python's standard libraries like never before Introduction Most people think Python's mass dominance is due to its powerful packages like NumPy, Pandas, Sklearn, XGBoost, etc. These are third-party packages written by professional developers, often with the help of other, faster programming languages like C, Java, or C++. So, one of the feeble arguments haters might throw against Python is that it won't be as popular once you strip away all the glory these third-party packages bring. I am here to say otherwise and show that standard Python is already powerful enough to give a serious run for any language's money. I bring to your attention 20 lightweight packages that come built-in with your Python installation and are only a single line away from being unleashed. 1️. contextlib Handling external sources like database connections, open files, or anything that requires manual open/close operations can become a giant pain in the neck. Context managers solve this issue elegantly. Context managers are a defining feature of Python, not found in many other languages, and highly sought after. You've probably seen the with keyword used with the open function, and you might not know that you can create functions that work as context managers. Below, you can see a context manager that serves as a timer: Wrapping a function written with special syntax under a contextmanager decorator from contextlib converts it to a manager you can use with the with keyword. You can read more about custom context managers in my separate article. 2️. functools Want more powerful, shorter, and multi-functional functions? Then, functools has got you covered. This built-in library contains many methods and decorators you can wrap around existing ones to add additional features. One of them is partial, which can be used to clone functions while preserving some of their arguments with custom values. Below, we are copying read_csv from Pandas so that we won't have to repeat passing the same arguments to read some particular CSV files: Another one of my favorites is a caching decorator. Once wrapped, cache remembers every output that maps to inputs so that the results are instantly available when the same arguments are passed to the function. The streamlit library takes great advantage of such a function. 3️. itertools If you ever find yourself in a situation where you are writing nested loops or complicated functions to iterate through more than one iterable, check if there is already a function in the itertools library. Maybe you don't have to reinvent the wheel – Python thought of your every need. Below are some handy iteration functions from the library: 4️. glob For users who love Unix-style pattern matching, the glob library should feel right at home: glob contains all the relevant functions to work with multiple files simultaneously without headaches (or using a mouse). 5️. pathlib The Python os module, to put it nicely, sucks… Fortunately, core Python developers heard the cries of millions and introduced the pathlib library in Python 3.4. It brings a convenient object-oriented approach to system paths. It also tries very hard to solve all the issues related to the (put in the adjective) Windows path system: 6️. sqlite3 To the delight of data scientists and engineers, Python comes with built-in support for databases and SQL through the sqlite3 package.
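As a minimal illustration (the database file, table, and row here are made-up examples), the whole round trip fits in a few lines:

import sqlite3

conn = sqlite3.connect("example.db")  # creates the file if it doesn't exist
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("INSERT INTO users (name) VALUES (?)", ("Alice",))  # parameterized insert
conn.commit()
print(cur.execute("SELECT * FROM users").fetchall())
conn.close()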
As the sketch shows, you just hook up to any database (or create one) using a connection object and fire away SQL queries. The package performs obediently. 7️. hashlib Python has spawned deep, deep roots in the sphere of cybersecurity, not just in AI and ML. An example of this is the hashlib library that contains your most common (and secure) cryptographic hash functions like SHA256, SHA512, and so on. 8️. secrets I love mystery novels. Have you ever read The Hound of the Baskervilles? It is fantastic, go read it. While it might be immense fun to implement your own message encoding functions, they won't probably be up to the same standards as the battle-tested functions in the secrets library. There, you will find everything you need to generate random numbers and characters for the hairiest of passwords, security tokens, and related secrets: 9️. argparse Are you good at the command line? Then, you are one of the few. Also, you will love the argparse library. You can make your static Python scripts accept user input through CLI keyword arguments. The library is rich in functionality, enough to create complex CLI applications for your script or even a package. I highly recommend checking out the RealPython article for a comprehensive overview of the library. 10. random There are no coincidences in this world — Oogway. Maybe that's why scientists use pseudorandomness, since pure randomness doesn't exist. Anyway, the random module in Python should be more than enough to simulate basic chance events: 1️1. pickle Just as dataset sizes are getting larger and larger, so are our needs to store them faster and more efficiently. One of the alternatives to flat CSV files that comes natively with your Python installation is the pickle file format. In fact, it is 80 times faster than CSVs at IO and occupies less memory. Here is an example that pickles a dataset and loads it back: 💻 Comparison article by Dario Radecic: link 1️2. shutil The shutil library, standing for shell utilities, is a module for advanced file operations. With shutil, you can copy, move, delete, archive, or do any file operation that you would typically perform in the file explorer or on the terminal: 13. statistics Who even needs NumPy or SciPy when there is the statistics module? (Actually, everyone does – I just wanted to write a dramatic sentence). This module can come in handy to perform standard statistical computations on plain Python lists. There is no need to install third-party packages if all you need is a simple calculation. 14. gc Python really pulls out all the stops. It comes with everything — from package managers right up to garbage collectors. Yeah, you heard (/read) it right. The gc module is your interface to Python's garbage collector, which runs enabled by default. In lower-level languages, this irksome task is left to the developer, who has to allocate and release chunks of memory required in the program manually. The collect function returns the number of unreachable objects found and cleaned within the namespace. In simple terms, the function releases the memory slot of unused objects. You can read more about memory management of Python below. 💻 Memory management in Python — link 15. pprint Some outputs coming from certain operations are just too horrific to look at. Do your eyes a favor and use the pprint package for intelligent indentations and pretty outputs: For even more complex outputs and custom printing options, you can create printer objects with pprint and use them multiple times over. Details are in the docs.
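A quick illustrative sketch of both usage patterns (the nested data here is a made-up example):

from pprint import pprint, PrettyPrinter

data = {"users": [{"name": "Alice", "roles": ["admin", "dev"]},
                  {"name": "Bob", "roles": ["dev"]}]}

pprint(data, width=40)  # one-off pretty print

printer = PrettyPrinter(indent=2, width=40)  # reusable printer object
printer.pprint(data)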
16. pydoc Code is more often read than written — Guido van Rossum. Guess what? I love documentation and writing it for my own code (don't be surprised; I am a bit OCD). Hate it or love it — documenting your code is a necessary evil. It becomes especially important for larger projects. In such cases, you can use the pydoc CLI library to automatically generate docs in the browser or save them to HTML using the docstrings of your classes and functions. It can serve as a preliminary overview tool before deploying your docs to other services like Read the Docs. 17. calendar What the HECK was going on during this September? Apparently, there were 19 days in September 1752 in the UK. Where did 3, 4, … 13 go? Well, it is all about the giant mess of switching from the Julian calendar to the Gregorian, which the UK was very stubborn about till the 1750s. You can watch it here. This was the case only in the UK. The rest of the world had sense and was following the correct course of time, as can be seen using the calendar module: Python takes time seriously. 18. webbrowser Imagine jumping straight to StackOverflow from your Jupyter Notebook or your Python script. Why would you even do that? Well, because you CAN… with the webbrowser module. 19. logging One of the signs that you are looking at a seasoned developer is the lack of print statements in their code. They use the vanilla logging module instead. This module lets you log messages with different priorities and custom formatted timestamps. Here is the one I use daily: 💻 Excellent tutorial on logging in Python: Real Python 20. concurrent.futures I have left something juicy for the end. This library is about executing operations concurrently, as in multithreading. Below, I send 100 GET requests to a URL and get back the responses. The process is slow and tedious, as the interpreter waits until each request comes back, and that's what you get when you use loops. A much smarter approach is to use concurrency and use all the cores on your machine. The concurrent.futures package enables you to do this. Here is the basic syntax: The runtime decreased 12 times, as concurrency allowed sending multiple requests simultaneously using all the cores. You can read more about concurrency in the below tutorial. 💻 Demo tutorial: Article by Dario Radecic Conclusion There is no need to overcomplicate things. If you don't need them, there is no need to saturate your virtual environment with heavy packages. Having a few built-in packages up your sleeve might just be enough. Remember, "Simple is better than complex" — the Zen of Python. Reach out to me on LinkedIn or Twitter for a friendly chat about all things data. Or you can just read another story from me. How about these:
https://ramseyelbasheer.io/2022/08/04/20-python-gem-libraries-buried-in-the-installation-waiting-to-be-found/
CC-MAIN-2022-33
en
refinedweb
MongoEngine - Fields A MongoEngine document class has one or more attributes. Each attribute is an object of a Field class. BaseField is the base class for all field types. The BaseField class constructor has the following arguments −

BaseField(db_field, required, default, unique, primary_key)

The db_field represents the name of the database field. The required parameter decides whether a value for this field is required; the default is false. The default parameter contains the default value of this field. The unique parameter is false by default. Set it to true if you want this field to have a unique value for each document. The primary_key parameter defaults to false. True makes this field the primary key. There are a number of Field classes derived from BaseField. Numeric Fields IntField (32-bit integer), LongField (64-bit integer), and FloatField (floating point number) field constructors have min_value and max_value parameters. There is also a DecimalField class. The value of this field's object is a float whose precision can be specified. The following arguments are defined for the DecimalField class −

DecimalField(min_value, max_value, force_string, precision, rounding)

Text Fields A StringField object can store any Unicode value. You can specify the min_length and max_length of the string in the constructor. A URLField object is a StringField with the capability to validate input as a URL. EmailField validates the string as a valid email representation.

StringField(max_length, min_length)
URLField(url_regex)
EmailField(domain_whitelist, allow_utf8_user, allow_ip_domain)

The domain_whitelist argument contains a list of domains to accept as valid even if they would otherwise fail validation. If set to True, the allow_utf8_user parameter allows the string to contain UTF-8 characters in the user part of the email. The allow_ip_domain parameter is false by default, but if true, the domain part can be a valid IPv4 or IPv6 address. The following example uses numeric and string fields −

from mongoengine import *
connect('studentDB')

class Student(Document):
    studentid = StringField(required=True)
    name = StringField()
    age = IntField(min_value=6, max_value=20)
    percent = DecimalField(precision=2)
    email = EmailField()

s1 = Student()
s1.studentid = '001'
s1.name = 'Mohan Lal'
s1.age = 20
s1.percent = 75
s1.email = 'mohanlal@gmail.com'
s1.save()

When the above code is executed, the student collection shows a document as below − ListField This type of field wraps any standard field, thus allowing multiple objects to be used as a list object in a database.
This field can be used with ReferenceField to implement one-to-many relationships. The student document class from the above example is modified as below −

from mongoengine import *
connect('studentDB')

class Student(Document):
    studentid = StringField(required=True)
    name = StringField(max_length=50)
    subjects = ListField(StringField())

s1 = Student()
s1.studentid = 'A001'
s1.name = 'Mohan Lal'
s1.subjects = ['phy', 'che', 'maths']
s1.save()

The document added is shown in JSON format as follows −

{ "_id":{"$oid":"5ea6a1f4d8d48409f9640319"}, "studentid":"A001", "name":"Mohan Lal", "subjects":["phy","che","maths"] }

DictField An object of the DictField class stores a Python dictionary object, and this is also how it will be stored in the corresponding database field. In place of ListField in the above example, we change its type to DictField.

from mongoengine import *
connect('studentDB')

class Student(Document):
    studentid = StringField(required=True)
    name = StringField(max_length=50)
    subjects = DictField()

s1 = Student()
s1.studentid = 'A001'
s1.name = 'Mohan Lal'
s1.subjects['phy'] = 60
s1.subjects['che'] = 70
s1.subjects['maths'] = 80
s1.save()

The document in the database appears as follows −

{ "_id":{"$oid":"5ea6cfbe1788374c81ccaacb"}, "studentid":"A001", "name":"Mohan Lal", "subjects":{"phy":{"$numberInt":"60"}, "che":{"$numberInt":"70"}, "maths":{"$numberInt":"80"} } }

ReferenceField A MongoDB document can store a reference to another document using this type of field. This way, we can implement a join as in an RDBMS. A ReferenceField constructor takes the name of the other document class as a parameter.

class doc1(Document):
    field1 = StringField()

class doc2(Document):
    field1 = StringField()
    field2 = ReferenceField(doc1)

In the following example, the studentDB database contains two document classes, Student and Teacher. A document of the Student class contains a reference to an object of the Teacher class.

from mongoengine import *
connect('studentDB')

class Teacher(Document):
    tid = StringField(required=True)
    name = StringField()

class Student(Document):
    sid = StringField(required=True)
    name = StringField()
    tid = ReferenceField(Teacher)

t1 = Teacher()
t1.tid = 'T1'
t1.name = 'Murthy'
t1.save()

s1 = Student()
s1.sid = 'S1'
s1.name = 'Mohan'
s1.tid = t1
s1.save()

Run the above code and verify the result in the Compass GUI. Two collections corresponding to the two document classes are created in the studentDB database. The teacher document added is as follows −

{ "_id":{"$oid":"5ead627463976ea5159f3081"}, "tid":"T1", "name":"Murthy" }

The student document shows the contents as below −

{ "_id":{"$oid":"5ead627463976ea5159f3082"}, "sid":"S1", "name":"Mohan", "tid":{"$oid":"5ead627463976ea5159f3081"} }

Note that the ReferenceField in the Student document stores the _id of the corresponding Teacher document. The reference is stored as an ObjectId and is automatically dereferenced when the corresponding Teacher object is accessed. To add a reference to the document being defined, use 'self' instead of another document class as the argument to ReferenceField. It may be noted that use of ReferenceField may cause poor performance as far as retrieval of documents is concerned. The ReferenceField constructor also has an optional argument, reverse_delete_rule. Its value determines what is to be done if the referred document is deleted. The possible values are as follows − DO_NOTHING (0) - don't do anything (default). NULLIFY (1) - Updates the reference to null. CASCADE (2) - Deletes the documents associated with the reference. DENY (3) - Prevent the deletion of the reference object.
PULL (4) - Pull the reference from a ListField of references You can implement a one-to-many relationship using a list of references. Assuming that a student document has to be related to one or more teacher documents, the Student class must have a ListField of ReferenceField instances.

from mongoengine import *
connect('studentDB')

class Teacher(Document):
    tid = StringField(required=True)
    name = StringField()

class Student(Document):
    sid = StringField(required=True)
    name = StringField()
    tid = ListField(ReferenceField(Teacher))

t1 = Teacher()
t1.tid = 'T1'
t1.name = 'Murthy'
t1.save()

t2 = Teacher()
t2.tid = 'T2'
t2.name = 'Saxena'
t2.save()

s1 = Student()
s1.sid = 'S1'
s1.name = 'Mohan'
s1.tid = [t1, t2]
s1.save()

On verifying the result of the above code in Compass, you will find the student document holding references to two teacher documents −

Teacher collection
{ "_id":{"$oid":"5eaebcb61ae527e0db6d15e4"}, "tid":"T1","name":"Murthy" }
{ "_id":{"$oid":"5eaebcb61ae527e0db6d15e5"}, "tid":"T2","name":"Saxena" }

Student collection
{ "_id":{"$oid":"5eaebcb61ae527e0db6d15e6"}, "sid":"S1","name":"Mohan", "tid":[{"$oid":"5eaebcb61ae527e0db6d15e4"},{"$oid":"5eaebcb61ae527e0db6d15e5"}] }

DateTimeField An instance of the DateTimeField class allows data in date format in the MongoDB database. MongoEngine looks for the python-dateutil library for parsing dates in the appropriate format. If it is not available in the current installation, dates are parsed using the built-in time module's time.strptime() function. The default value of a field of this type is the current datetime. DynamicField Data of different and varying types can be handled by this field. This type of field is internally used by the DynamicDocument class. ImageField This type of field corresponds to a field in the document that can store an image file. The constructor of this class can accept size and thumbnail_size parameters (both in terms of pixel size).
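As a closing illustration, here is a minimal sketch that combines a DateTimeField and a DynamicField in one document (the class and field names are made-up examples):

from datetime import datetime
from mongoengine import *

connect('studentDB')

class Enrollment(Document):
    sid = StringField(required=True)
    enrolled_on = DateTimeField(default=datetime.utcnow)  # stored as a date in MongoDB
    extra = DynamicField()  # accepts data of varying types

e1 = Enrollment(sid='S1', extra={'scholarship': True})
e1.save()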
https://www.tutorialspoint.com/mongoengine/mongoengine_fields.htm
CC-MAIN-2022-33
en
refinedweb
How to manage zip file Hi. I have to manage a zip file (not gzip, just zip). Can someone suggest a way to do it (on both Linux and Windows)? qCompress and qUncompress do not work. Regards. The only answer is zlib, you just need to decide the flavour - QuaZIP (handles only .zip, wrapper around zlib) - KArchive (supports multiple compression formats, for zip it's a wrapper on zlib) - zlib, if you don't mind C; then you can use the reference library for zip files directly I personally use KArchive because it's already conveniently wrapped in a Qt style and it can rely on the support of KDE Hi. I'm trying to use QuaZIP. I have built the quazip package without problems. When I try to run JlCompress::extractDir("a.zip", "."); the program crashes. .pro

windows {
    INCLUDEPATH += C:/Users/Denis/git/ControlloAccessi/quazip-0.7.2
    INCLUDEPATH += C:/Users/Denis/git/ControlloAccessi/zlib128-dll/include
    LIBS += C:/Users/Denis/git/ControlloAccessi/quazip-0.7.2/quazip/release/quazip.lib
}

main.cpp

#include "quazip/quazip.h"
#include "QFile"
#include "QDebug"
#include <quazip/JlCompress.h>

MainWindow::MainWindow(QWidget *parent) : QMainWindow(parent), ui(new Ui::MainWindow)
{
    ui->setupUi(this);
    JlCompress::extractDir("a.zip", "b");
}

@mrdebug Did you try to debug to see what happens? Is it SIGSEGV or something else? Any error messages? @mrdebug said in How to manage zip file: When I try to run You are linking to the release version of quazip so make sure you run the release version of your app. change LIBS += C:/Users/Denis/git/ControlloAccessi/quazip-0.7.2/quazip/release/quazip.lib into

CONFIG(debug, debug|release) {
    LIBS += -L"C:/Users/Denis/git/ControlloAccessi/quazip-0.7.2/quazip/debug"
    LIBS += -lquazipd
} else {
    LIBS += -L"C:/Users/Denis/git/ControlloAccessi/quazip-0.7.2/quazip/release"
    LIBS += -lquazip
}

Also make sure you make both Quazip.dll and zlib.dll (if you did not compile it as static) available at runtime Sorry. I had missed the required dlls. Is there a way to unzip a zip stream from a QByteArray without storing it to a file first? yes, with create a QDataStream operating on the QByteArray and then pass QDataStream::device() to that constructor. On the other hand, I'm not sure what you are trying to do but you might not need QuaZip at all, have a look at and below Something like this? QDataStream BufferIn(&Source, QIODevice::ReadOnly); QuaZipFile quaZip(BufferIn.device()); qDebug() << quaZip.open(QIODevice::ReadOnly); returns false... QuaZipFile represents a file inside a zip file, not a zip file in itself. the constructor you are calling is this one: use QuaZip instead. P.S. QDataStream BufferIn(&Source, QIODevice::ReadOnly); is the same as QDataStream BufferIn(Source); After many tries I can't unzip something without using a file. These lines of code QDataStream BufferIn(&QBABufferOut, QIODevice::ReadOnly); QuaZip quaZip(BufferIn.device()); does not seem to work. - SGaist Lifetime Qt Champion last edited by Hi, What do you have in QBABufferOut ? Are you sure it's opened correctly ? Why do you need BufferIn for? And most important, what do you mean by does not seem to work? That's too vague to help you. Please have a look at this sequence:

QuaZipFile quaZip(QBABufferOut); // (represents a zip archive in RAM)
if (quaZip.open(QIODevice::ReadOnly)) { // QIODevice::ReadOnly or something else
    qDebug() << "ok";
} else
    qDebug() << "error"; // always error!!!!!!!!!!!!
You have to open the QuaZip for decompression first:

QBuffer storageBuff(&QBABufferOut);
QuaZip zip(&storageBuff);
if (!zip.open(QuaZip::mdUnzip))
    qDebug() << "error";
QuaZipFile file(&zip);
for (bool f = zip.goToFirstFile(); f; f = zip.goToNextFile()) {
    QuaZipFileInfo fileInfo;
    file.getFileInfo(&fileInfo);
    qDebug() << fileInfo.name;
    file.open(QIODevice::ReadOnly);
    qDebug() << "Content: " << file.readAll().toBase64();
    file.close();
}
zip.close();

P.S. // (represents a zip archive in RAM) No, it doesn't. As mentioned before, QuaZipFile represents a file inside a zip archive, not the archive itself. It works perfectly. Many thanks. @VRonin said in How to manage zip file: QuaZIP Is there a license associated with the use of this? As I have a need for compressing large amounts of binary data that I need to transmit to remote clients. @SPlatten said in How to manage zip file: Is there a license associated with the use of this? Thank you. @SPlatten if both ends are in your software I'd suggest against using a 3rd-party library. qCompress and qUncompress work just fine in that regard. My usual approach is to supply a header (using QDataStream) with a checksum and whatever else is needed, then stream to it the output from the compression routine. Works sufficiently well, eliminates the need for another dependency. - Ramkumar Mohan last edited by Hi, I'm working with zip files.
https://forum.qt.io/topic/74306/how-to-manage-zip-file
CC-MAIN-2022-33
en
refinedweb
While FusionAuth is fundamentally a single-tenant solution, we do support multiple tenants within a single-tenant instance. In this post I'll outline a few of the common use cases we solve with our tenancy feature. White labeled Identity We have several clients that, like us, are also software companies. With these clients, it is very common for them to be selling Software as a Service (SaaS) solutions. This means they have many clients using a single instance of their platform. Let's assume Acme Corp. sells a marketing communication platform that provides commerce, customer relationship management (CRM) and user management to small companies. Joe uses two different websites, funnymugs.com and chucknorrisjokes.com. Both of these websites buy their software from Acme Corp. and Acme Corp. provides a single identity backend that stores a single user object for Joe. Joe will be very (unpleasantly) surprised if he changes his password on chucknorrisjokes.com and magically his password is updated on funnymugs.com. This diagram illustrates why this unexpected password change occurs when Acme Corp. is storing single user objects. This would be a poor user experience and not ideal for Acme Corp. While both users are technically Joe, he is not aware of this nuance in the way that Acme Corp. built their platform. In most cases we want a user to be considered unique by their email address. You can think of this the same way that your Gmail address works. You have a single Google account, and you can use that set of credentials to gain access to Gmail, Blogger, YouTube, Google Play, (ahem..) Google+, Google Analytics and more. Each of these applications is considered an authenticated resource, and Google simply grants you permission to each of them based on your credentials. This is how FusionAuth views the world as well: we support one-to-many Applications in FusionAuth, which represent different authenticated resources. A single user can register or be granted permissions to multiple Applications. This is also where our single sign-on comes into play. You log in once and then you can access each Application without the need to log into each one separately. However, as you just saw with Acme Corp., when the platform is opaque to the end user and there is only a single identity for a single email address, surprising side-effects start to occur. In this case, what Acme Corp. needs is a way to partition each of their clients into their own namespace. This is one of the main reasons we built FusionAuth Tenants. A FusionAuth tenant is simply a namespace where Applications, Groups and Users exist. Now Acme Corp. can allow Joe to create his account with the same unique email address joe@example.com in multiple tenants. They become separate unique identities, one per tenant. Joe can then manage his permissions, passwords, and user details discretely inside each tenant (i.e. each client of Acme Corp.). The second diagram illustrates the new layout of Acme Corp. using multiple tenants. We still strongly believe that a single-tenant solution is the most secure option for our clients, so while we are still a single-tenant solution, we do allow our clients to build multi-tenant solutions to better suit their requirements. Dev, Stage and Prod For this use case, we don't have multiple clients, but instead we have a single production environment using FusionAuth. In addition to production, we need separate environments for development, build and QA.
One option is to stand up a separate instance of FusionAuth for each of these environments. This ensures that the development environment doesn't impact the build environment, which doesn't impact the QA environment, and so on. Most SaaS identity products don't solve this problem directly (or easily). Instead, they force you to sign up for multiple accounts, one for each environment. That approach works, but now you have multiple accounts that may or may not have a subscription fee associated with each of them. Plus, each account has separate configuration, credentials, and URLs that need to be maintained separately. And if you need to set up an additional environment, it could take quite a bit of work to get it configured properly. Leveraging tenants in this scenario is a big win because it allows a single instance of FusionAuth to service multiple environments, which reduces complexity, infrastructure costs, maintenance and more. If you want, each developer can have their own tenant so they can each develop, create and delete users without affecting the rest of their team. Here is a specific and common scenario: a customer has completed their integration, written all of their code, written all of their tests and is ready to move into production. If this same customer now wants to use tenants only for build, test and QA, they can do so without any code change. This is possible because of how we authenticate an API request for a particular tenant. While there is more than one way to specify a tenant id on the API request, the simplest is to create an API key and assign it to a tenant. This way, none of your API requests change, none of your code changes, you simply load your API key from an environment variable or inject it based on your runtime mode. Locking an API key to a tenant means that only Users, Groups and Applications in that tenant will be visible to that API key. To provide you with an example of how an API request can be scoped to a tenant, consider the following code. I have integrated some code to retrieve a user by email address in FusionAuth. I'm using the API key 5EU_q5unGCCYv6w_FipDBFevXhAxbRGaRYoxK-nP6t0, which is assigned to tenant funnymugs.com. As you can see, my API call finds Joe successfully.

FusionAuthClient client = new FusionAuthClient("5EU_q5unGCCYv6w_FipDBFevXhAxbRGaRYoxK-nP6t0", "");
ClientResponse<UserResponse, Errors> response = client.retrieveUserByLoginId("joe@example.com");
// API response is 200, success
assertEquals(response.status, 200);

Next, I update my API key to BwLzGhDTYtswDq9hK-ajohectZjFpMvmLeDT1mfiM54, which is assigned to tenant chucknorrisjokes.com, and that tenant doesn't contain a user with the email address joe@example.com. By changing the API key, I have scoped every FusionAuth API call to a different tenant.

FusionAuthClient client = new FusionAuthClient("BwLzGhDTYtswDq9hK-ajohectZjFpMvmLeDT1mfiM54", "");
ClientResponse<UserResponse, Errors> response = client.retrieveUserByLoginId("joe@example.com");
// API response is 404, not found.
assertEquals(response.status, 404);

If you use tenants in this way, you still get the best of both worlds. When you log into the FusionAuth UI, you can manage all users across all tenants. Plus, you get full visibility and reporting across your entire system. These are just some of the use-cases that the FusionAuth Tenants feature helps solve. Tenants are also great for taking multiple legacy backends and unifying the identities over time, with very little risk.
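To make the environment-switching idea concrete, here is a hypothetical sketch in Python (the environment variable names and base URL are assumptions; FusionAuth API keys are sent in the Authorization header):

import os
import requests

API_KEY = os.environ["FUSIONAUTH_API_KEY"]  # tenant-scoped key injected per environment
BASE_URL = os.environ.get("FUSIONAUTH_URL", "http://localhost:9011")

# The same request code runs in dev, QA, and prod; only the injected key changes.
response = requests.get(
    f"{BASE_URL}/api/user",
    params={"email": "joe@example.com"},
    headers={"Authorization": API_KEY},
)
print(response.status_code)  # 200 if the user exists in this key's tenant, 404 otherwise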
If you need help implementing multi-tenancy in your application, contact us directly.
https://fusionauth.io/blog/2018/09/24/multi-tenancy-in-a-single-tenant-architecture
CC-MAIN-2022-33
en
refinedweb
I am trying to host a Streamlit app on Azure Compute Instance resource. It appears that accessing the instance is possible through https://{instanceName}-{internalPort}.northeurope.instances.azureml.ms (with an Azure-provided security layer in between). To smoketest this I created a simple Flask app and verified I could access it: I was able to access my dummy app on https://[REDACTED]-5000.northeurope.instances.azureml.ms/, since it was running on port 5000 internally. Attempt 1: Basic Configuration Now I want to serve my Streamlit app. Initially I wanted to eliminate error sources and simply check if wires are connected correctly, and my app is simply: import streamlit as st st.title("Hello, World!") Running this streamlit app ( streamlit run sl_app.py) gives: 2022-03-28 11:49:38.932 Trying to detect encoding from a tiny portion of (13) byte(s). 2022-03-28 11:49:38.933 ascii passed initial chaos probing. Mean measured chaos is 0.000000 % 2022-03-28 11:49:38.933 ascii is most likely the one. Stopping the process. You can now view your Streamlit app in your browser. Network URL: http://[REDACTED]:8501 External URL: http://[REDACTED]:8501 Trying to access this through https://[REDACTED]-8501.northeurope.instances.azureml.ms/ I can access the app, but the “Please wait…” indicator appears indefinitely: Attempt 2: Updated Streamlit Config Inspired by App is not loading when running remotely Symptom #2 I created a Streamlit config.toml with a reconfiguring of server/browser access points, and ended up with the following: [browser] serverAddress = "[REDACTED]-8501.northeurope.instances.azureml.ms" serverPort = 80 gatherUsageStats = false [server] port = 8501 headless = true enableCORS = false enableXsrfProtection = false enableWebsocketCompression = false Running the app now gives: You can now view your Streamlit app in your browser. URL: http://[REDACTED]-8501.northeurope.instances.azureml.ms:80 However, I still get the infinite Please wait-indicator. Diving a little bit deeper reveals something related to a wss stream? Whatever that is? I suspect that what I’m seeing is due to the fact that Azure automatically pipes my request from http:// to https://, and this for some reason rejects the stream component that Streamlit uses? Note: Various IP addresses and hostnames are REDACTED for the sake of security
https://discuss.streamlit.io/t/hosting-streamlit-on-azure-compure-instance/23618
CC-MAIN-2022-33
en
refinedweb
A managed runtime aimed at running games distributed over multiple nodes Project description Epic Server EpicMan Server is a SANS-IO Async application server that runs your code over multiple 'Nodes'. This has the added effect of not being limited to a single core for running your code and mitigating some of the effects of the GIL in the cpython (and others) implementation. While this is an Async Server, this is NOT AsyncIO compatible and brings its own abstractions for File, Network and locking in order to meet the latency requirements imposed by its primary use case (a game engine that runs atop this framework) *** NOTE: this is a Technology preview/Alpha release *** Only a single instance is likely to work at this time, or 2 node clusters with caveats Internal and External APIs are in heavy flux Features - Pluggable IO engines - Automatic Persistence of game features - Transparent Inter-process messaging abstraction - python3 codebase - Uses async/await (but is not AsyncIO compatible) - pypy3 compatible for versions supporting the python3.8 spec Getting Started The Full documentation is available here. The examples below should give you a quick feel for management of a cluster and what programming for epic server looks like. Requirements - Python 3.6 or newer - Linux 5.x or newer (io_uring) - liburing-dev (debian) - liburing1 (debian) - 64bit CPU if using lmdb backend - Kernel headers may be required to compile the python io_uring module Examples Starting a single or multiple node cluster is relatively simple as shown below. All that is required is a simple script to bootstrap the Initial Entities. In this example we use a simple test script called 'cluster.py' that is looked up on PYTHONPATH. If using a script in your current directory, then prepending PYTHONPATH=. to the epic-server commands will ensure that this script can be found correctly.

$ python3 -m venv venv
$ venv/bin/pip install epicman-server
$ venv/bin/epic-server -vvv -l '[::1]:3030'
$ venv/bin/epic-server-start -vvv -b '[::1]:3030' cluster:start

Programming for epicman.server The following is taken from 'cluster:start' from the above example and simply sends a value, has it incremented, then passes the incremented value on to the next entity to confirm that the RPC-like interface works

from epicman.objects import EntityProxy, Entity, remote_spawn, remote_call
from epicman.syscalls import THREAD_SLEEP
from epicman.logging import log

import sys

TOTAL_COUNT = 1000

class _Test(Entity):
    @remote_call
    async def test(self, val):
        return val + 1

# this is a temporary work around pickle limitations and
# issues pickle has with things that are renamed
Test = EntityProxy(_Test)

async def start():
    count = 0
    for i in range(TOTAL_COUNT):
        count = await Test[i].test(count)
    log.info('Value: {count}', count=count)
    sys.exit()
https://pypi.org/project/epicserver/
CC-MAIN-2022-33
en
refinedweb
Add a Web Dashboard to a React Application - 2 minutes to read This article assumes that you implement a client-server architecture. An ASP.NET Core or an ASP.NET MVC application serves as the backend (server side). The client (frontend) application includes all the necessary styles, scripts and HTML templates. Note that client scripts, libraries on the server side, and devexpress npm packages should have matching version numbers. This topic describes how to add the DashboardControl component to a React application and display the Web Dashboard. Prerequisites - Make sure you have Node.js 6+ and npm 5.2+ installed on your machine. - If you are not familiar with the basic concepts and patterns of React, please review the fundamentals before you continue: reactjs.org Create a React Application In the command prompt, create a React application:

npx create-react-app dashboard-react-app

Navigate to the created folder after the project is created:

cd dashboard-react-app

Install the Dashboard Package Install the following npm packages:

npm install devexpress-dashboard@22.1.4 devexpress-dashboard-react@22.1.4 @devexpress/analytics-core@22.1.4 devextreme@22.1.4 devextreme-react@22.1.4 --save

The devexpress-dashboard npm package references devextreme and @devexpress/analytics-core as peer dependencies. The peer dependency packages should be installed manually. Modify App Content Modify the App.js file as shown below to display a dashboard component on the page:

import React from 'react';
import './App.css';
import { DashboardControl } from 'devexpress-dashboard-react';

function App() {
  return (
    <div style={{ position: 'absolute', top: '0px', left: '0px', right: '0px', bottom: '0px' }}>
      <DashboardControl style={{ height: '100%' }} endpoint="https://demos.devexpress.com/services/dashboard/api">
      </DashboardControl>
    </div>
  );
}

export default App;

The DashboardControlOptions.endpoint property specifies the URL used to send data requests to a server. The value should consist of a base URL where the Web Dashboard's server side is hosted and a route prefix - a value that is set in the MVC / .NET Core MapDashboardRoute properties. Add Global Styles Replace the index.css file content with the following global styles:

@import url("../node_modules/devexpress-dashboard/dist/css/dx-dashboard.light.css");

Run the Application Run the application.

npm start

Open http://localhost:3000 in your browser to see the result. The Web Dashboard displays the dashboard stored on the preconfigured server (the endpoint specified above). To configure your own server, follow the instructions below:
https://docs.devexpress.com/Dashboard/400683/web-dashboard/dashboard-component-for-react/add-web-dashboard-to-a-react-application
CC-MAIN-2022-33
en
refinedweb
This one borks on missing: package com.sun.net.ssl package javax.xml.namespace package javax.xml.rpc I do not want to bring in non-libre stuff from Sun, as that's hopeless to mirror and get sources for. The last two seem to exist in libre form, but the former will be trickier. It's not vital to any of our current packages. Now. Marking as REMIND, so that it shows up on the orphaned ebuild search in the Gentoo Java Wiki.
https://bugs.gentoo.org/show_bug.cgi?id=45990
CC-MAIN-2022-33
en
refinedweb
If I have to execute an external program (for example, in the frontend that I created for convert, from ImageMagick) I have 2 possibilities: I can call one of the functions in the exec* family, or I can call the system function: so, which, in your opinion, is the best? The only difference that I know of (but I may be in error) between these 2 functions is that the former doesn't return (so you have to create a child process), but the latter returns. Is this the only difference?
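A minimal sketch of the two approaches in Python (the command is an illustrative ImageMagick invocation; note that os.fork is Unix-only):

import os

# system(): runs the command in a subshell and returns its exit status
status = os.system("convert input.png output.jpg")

# exec*: replaces the current process image, so it never returns;
# to keep the parent alive you fork a child first
pid = os.fork()
if pid == 0:  # child process
    os.execvp("convert", ["convert", "input.png", "output.jpg"])
else:  # parent waits for the child to finish
    os.waitpid(pid, 0)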
https://grokbase.com/t/python/python-list/06122eh7x3/running-external-programs-what-is-the-best-way
CC-MAIN-2022-33
en
refinedweb
MUD, object-oriented MUD, object-oriented, aka MOO, is an actively used programming language created in 1993. The MOO programming language is a relatively simple programming language used to support the MOO Server. It is dynamically typed and uses a prototype-based object-oriented system, with syntax roughly derived from the Algol school of programming languages.. Read more on Wikipedia... - MUD, object-oriented ranks in the top 5% of languages - the MUD, object-oriented wikipedia page - MUD, object-oriented first appeared in 1993 - file extensions for MUD, object-oriented include moo - See also: scheme, smalltalk, self, c, ada, muf, lpc, pike, linden-scripting-language Example code from the Hello World Collection:

"Hello World in MOO";
player.location:announce_all("Hello, world!");

Example code from Linguist:

@program toy:wind
this.wound = this.wound + 2;
player:tell("You wind up the ", this.name,".");
player.location:announce(player.name, " winds up the ", this.name,".");
.

Example code from Wikipedia:

@program toy:wind
if (this.location == player)
  if (this.wound < this.maximum)
    this.wound = this.wound + 2;
    player:tell("You wind up the ", this.name,".");
    player.location:announce(player.name, " winds up the ", this.name,".");
    if (this.wound >= this.maximum)
      player:tell("The knob comes to a stop while winding.");
    endif
  else
    player:tell("The ",this.name," is already fully wound.");
  endif
else
  player:tell("You have to be holding the ", this.name,".");
endif
.

Last updated July 22nd, 2019
https://codelani.com/languages/moo.html
CC-MAIN-2019-35
en
refinedweb
Data Structures have been a boon to the programming world as they simplify programming to a great extent. Stack class in Java is a part of Collection framework that simplifies various operations like push, pop, etc. In this article we explore this concept in detail. Following pointers will be explored in this article: - What is a Stack Class in Java? - Methods in Java Stack Class - Java Stack Operations Let's get started. What is a Stack Class in Java? A stack is a data structure which follows LIFO (Last In First Out). Java Stack Class falls under the basic Collection Hierarchy Framework in which you can perform the basic operations such as push, pop, etc. We know that Java collection framework includes interfaces and classes. Now, let's have a clear view of how stack class in Java is arranged in the Java collections framework hierarchy. In the above hierarchy, the blue box refers to the different interfaces and the yellow box defines the class. A stack in Java extends the vector class which further implements List interface. Whenever you create a Stack, initially it does not contain any item, i.e, the Stack is empty. Moving ahead, let's see the different methods of Java Stack Class. Methods of Stack Class in Java In Java, there are mainly 5 methods of Stack Class. Following are the methods that are at our disposal when we use the stack class in Java. Let us understand each of these methods with a programmatic example:

package Edureka;
import java.io.*;
import java.util.*;

public class StackMethods {
    // add or push element on the top of the stack
    static void push_method(Stack st, int n) {
        st.push(new Integer(n));
        System.out.println("push(" + n + ")");
        System.out.println("Current Stack: " + st);
    }

    // Display element on the top of the stack
    static void peek_method(Stack<Integer> st) {
        Integer element = (Integer) st.peek();
        System.out.println("Element on stack top : " + element);
    }

    // Searches element in the stack
    static void search_method(Stack st, int element) {
        Integer pos = (Integer) st.search(element);
        if (pos == -1)
            System.out.println("Element not found");
        else
            System.out.println("Element is found at position " + pos);
    }

    // Removes element from the top of the stack
    static void pop_method(Stack st) {
        System.out.print("pop = ");
        Integer n = (Integer) st.pop();
        System.out.println(n);
        System.out.println("Remaining stack: " + st);
    }

    public static void main(String args[]) {
        Stack st = new Stack();
        System.out.println("Empty stack: " + st);
        push_method(st, 4);
        push_method(st, 8);
        push_method(st, 9);
        peek_method(st);
        search_method(st, 2);
        search_method(st, 4);
        pop_method(st);
        pop_method(st);
        pop_method(st);
        try {
            pop_method(st);
        } catch (EmptyStackException e) {
            System.out.println("empty stack");
        }
    }
}

Output:

Empty stack: []
push(4)
Current Stack: [4]
push(8)
Current Stack: [4, 8]
push(9)
Current Stack: [4, 8, 9]
Element on stack top: 9
Element not found
Element is found at position 3
pop = 9
Remaining stack: [4, 8]
pop = 8
Remaining stack: [4]
pop = 4
Remaining stack: []
pop = empty stack

Explanation: In the above Java program, I have first printed an empty stack and added a few elements using the Push method. Once the elements are present in the stack, I have displayed the elements on the top of the stack using the Peek method. After that, I have performed searching using the Search method and finally removed the elements in the Java Stack class using the Pop method.
Moving ahead with Java Stack Class, let's have a look at various operations you can perform while implementing stack class in Java. Java Stack Operations: Size of the stack:

package Edureka;
import java.util.EmptyStackException;
import java.util.Stack;

public class StackOperations {
    public static void main(String[] args) {
        Stack stack = new Stack();
        stack.push("1");
        stack.push("2");
        stack.push("3");
        // Check if the Stack is empty
        System.out.println("Is the Java Stack empty? " + stack.isEmpty());
        // Find the size of Stack
        System.out.println("Size of Stack : " + stack.size());
    }
}

Output:

Is the Java Stack empty? false
Size of Stack : 3

Iterate Elements of a Java Stack: - Iterate over a Stack using iterator() - Iterate over a Stack using Java 8 forEach() - Iterate over a Stack using listIterator() from Top to Bottom Let's begin to iterate elements by using iterator().

package Edureka;
import java.util.EmptyStackException;
import java.util.Iterator;
import java.util.Stack;

public class StackOperations {
    public static void main(String[] args) {
        Stack stack = new Stack();
        stack.push("1");
        stack.push("2");
        stack.push("3");
        Iterator iterator = stack.iterator();
        while (iterator.hasNext()) {
            Object value = iterator.next();
            System.out.println(value);
        }
    }
}

1
2
3

Similarly, you can perform the iteration by other methods. Refer to the below code for more understanding:

package demo;
import java.util.EmptyStackException;
import java.util.Iterator;
import java.util.ListIterator;
import java.util.Stack;

public class JavaOperators {
    public static void main(String[] args) {
        Stack stack = new Stack();
        stack.push("1");
        stack.push("2");
        stack.push("3");
        System.out.println("Iterate a stack using forEach() Method:");
        stack.forEach(n -> {
            System.out.println(n);
        });
        ListIterator<String> ListIterator = stack.listIterator(stack.size());
        System.out.println("Iterate over a Stack using listIterator() from Top to Bottom:");
        while (ListIterator.hasPrevious()) {
            String str = ListIterator.previous();
            System.out.println(str);
        }
    }
}

Output:

Iterate a stack using forEach() Method:
1
2
3
Iterate over a Stack using listIterator() from Top to Bottom:
3
2
1

Explanation: In the above code, you can see the iteration using the forEach() method and then reversing the same using listIterator() from the top to the bottom of the stack. This is the end of the "Stack Class in Java" blog. I hope you guys are clear with the Java collections framework, its hierarchy, along with the Java Stack class example codes. Do read my next blog on Java Interview Questions where I have listed the top 75 interview questions and answers which will help set you apart in the interview process. Got a question for us? Please mention it in the comments section of this "Stack class in Java" blog and we will get back to you as soon as possible.
https://www.edureka.co/blog/stack-class-in-java/
CC-MAIN-2019-35
en
refinedweb
No idea why this happened. Unity went to a black screen, possibly something to do with a video card driver problem; regardless, it stopped responding and I had to close it to restart it. Upon restart, I noticed all my sprite images were missing, only with box colliders, and I went to the inspector. It says "the associated script cannot be loaded, please fix any compile errors and assign a valid script". I then went to inspect the code but I have done nothing to change anything. So I simply rebuilt them and then I got a bunch of errors all related to the Unity library itself, such as "the type or namespace 'UnityEngine' could not be found" and "the type or namespace 'MonoBehaviour' could not be found, are you missing a using directive or an assembly reference?" I also found my 2D Toolkit plugin is missing, except one list on the menu bar with one option "Setup for javascript"; there used to be a lot more than this, and re-importing doesn't fix the problem. What happened here? I think this might be a bug with Unity... are you sure that you are in the right project? @rednax20 Positively Answer by Dracorat · Apr 08, 2013 at 08:37 PM Steps you should take: A) Reboot completely. B) If that doesn't work, try re-installing Unity and rebooting again. C) If that doesn't work, create a new project, add an object and attach one of the scripts from the other project. C-1) If it works, tears might be involved because it's likely something was corrupted on the other one. Work slowly through all associated objects and carefully make sure that the settings are correct. C-2) If it doesn't work, check that the script is valid. Use a default built-in one and see if it works. If it does but yours doesn't, there's a problem with your script. C-3) If a default built-in script won't work, you probably have a very bad hardware-level problem. Try the project on another computer. If it works there, get your PC serviced. D) If it still won't work anywhere, including the built-in scripts, let us know because that would be something ridiculous. I reinstalled Unity and it works, thanks Damn, this problem just happened AGAIN, now I start to doubt if that's a problem with my 2D Toolkit plugin... reinstalling every time is annoying... I had to delete and reimport the character controllers to work again in a new project. How my last project got messed up: I downloaded the hoverscript from... It worked until I went to modify the maximum x and y in the mouselook script. Changing those values didn't make any difference. After restarting, I saw all those scripts under the character controller showing compilation errors. That behaviour didn't make any sense to me as I am sure I did not modify anything using the monodev. Then I thought maybe changing those maximum x and maximum y values caused the problem (had nothing else to believe in :D) so I looked it up on google, replaced the whole thing with the built-in one, and tears were involved in the decision of switching to cryengine... Because every single script in my project somehow got infected with HIV and had compilation errors.. Answer by dylanfries · Jul 16, 2013 at 01:34 AM I had this problem and fixed it by fixing the compile bugs.
Specifically, error CS0070: "The event 'tk2dUIItem.OnClickUIItem' can only appear on the left hand side of += or -= when used outside of the type 'tk2dUIItem'", which resulted from trying to call

item.OnClickUIItem() -= FunctionName;

rather than

item.OnClickUIItem -= FunctionName;

as referenced here from unikronsoftware. As they mention, it's Unity choking on one of your script errors and causing everything to fail to compile in a bad way. It's worth a look anyway before doing a complete reinstall.

Answer by Cornotiberious · Jul 16, 2013 at 01:57 AM

I've had this problem. For me, one of my scripts was blank, and that was causing trouble. It probably isn't the issue, but take a look through your scripts to see if one might be blank.

Answer by polmonroig · Apr 21, 2014 at 10:32 PM

To solve this error I verified the script folder location and changed it to the one where the script was originally.

Answer by Tekksin · May 12, 2014 at 02:19 PM

What worked for me was going to my latest script. I accidentally wrote "if(player.something({ something = true; }". Did you notice the error? A backwards ending parenthesis. I didn't even notice. At that point I got an error similar to what was mentioned in this post. Just make sure there are no errors in your scripts, so that Unity doesn't have a reason to complain. Once I fixed this very minor error, my Unity was reloaded and all the scripts and variables were still intact. Check your old scripts before you do this (almost) insane delete and re-import.
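To illustrate the CS0070 fix described above, here is a minimal C# sketch of subscribing to and unsubscribing from an event; the MyButton type and handler name are hypothetical and not part of the 2D Toolkit API:

using System;

public class MyButton
{
    // Outside its declaring type, an event may only appear
    // on the left-hand side of += or -=.
    public event Action OnClick;

    public void SimulateClick() => OnClick?.Invoke();
}

public static class Program
{
    static void HandleClick() => Console.WriteLine("clicked");

    public static void Main()
    {
        var button = new MyButton();
        button.OnClick += HandleClick;   // subscribe: no parentheses
        button.SimulateClick();
        button.OnClick -= HandleClick;   // unsubscribe: no parentheses
        // button.OnClick() -= HandleClick; // CS0070: invoking the event from outside is illegal
    }
}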
https://answers.unity.com/questions/434324/unity-crashed-then-alll-scripts-cant-be-loaded.html
CC-MAIN-2019-35
en
refinedweb
Often times, the need to verify the conditions present in our program arises. The assert keyword in Java allows the users to verify or test the assumptions made during the program. This article will introduce you to Assertion In Java. The following pointers will be covered in this article:

- Declaring Assertion In Java
- Enable Assertions
- Disable Assertions
- Where To Use Assertion And Not?
- Sample Program For Assertion In Java

So let us get started with this article.

Declaring Assertion In Java

The assert statement is used alongside a Boolean expression and can be declared as follows:

assert expression;

Another way to declare the assertion is as follows:

assert expression1 : expression2;

Example

import java.util.Scanner;
public class Test {
    public static void main(String args[]) {
        int value = 18;
        assert value >= 20 : " Eligible";
        System.out.println("Value: " + value);
    }
}

Output

Value: 18

The output after enabling assertions will be as follows:

Exception in thread "main" java.lang.AssertionError: Eligible

Moving on with this Assertion In Java article,

Enable Assertions

It must be noted that assertions are disabled by default. The syntax for enabling the assertion statement is as follows:

java -ea Test

Another method for enabling assertions:

java -enableassertions Test

Moving on, let us see how to disable assertions.

Disable Assertions

The assertion statements can be disabled as follows:

java -da Test

Another method for disabling assertions:

java -disableassertions Test

Reasons For Using Assertions

There are various reasons as to why a user might want to use assertions:

- Making sure that assumptions defined in the comments are right.
- To ensure that a supposedly unreachable switch case is in fact not reached.
- To check the state of the object.

Moving on with this Assertion In Java article,

Where To Use Assertion And Not?

Where To Use Assertions?

- Conditional cases and conditions at the beginning of a method.
- Arguments to private methods.

Where Not To Use Assertions?

- Checking arguments in the public methods that are provided by the user should not be done using assertions.
- Assertions should not be used on command line arguments.
- Replacing error messages should not be done using assertions.

Moving on to the final bit of this Assertion In Java article,

Sample Program For Assertion In Java

import java.util.Scanner;
public class Test {
    public static void main(String args[]) {
        Scanner scanner = new Scanner(System.in);
        System.out.print("Enter the ID ");
        int value = scanner.nextInt();
        assert value >= 15 : " Invalid";
        System.out.println("Value " + value);
    }
}

Output

Enter the ID
Exception in thread "main" java.lang.AssertionError: Invalid

To make sure that the assumptions made during the program are correct, assertions prove to be an important keyword. Thus we have come to an end of this article on "Assertion In Java".
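To make the "arguments to private methods" guideline above concrete, here is a small sketch of my own (the class and method names are made up for illustration): public input gets a real check, while the private method documents its internal assumption with an assert that is only active under -ea.

public class Account {
    private int balance = 100;

    public void withdraw(int amount) {
        // Public API: validate user input with a real check, not an assert
        if (amount <= 0) {
            throw new IllegalArgumentException("amount must be positive");
        }
        applyDebit(amount);
    }

    private void applyDebit(int amount) {
        // Private method: the caller has already validated, so an assert
        // documents and tests the internal assumption
        assert amount > 0 : "internal error: non-positive debit " + amount;
        balance -= amount;
    }

    public static void main(String[] args) {
        new Account().withdraw(30);
        System.out.println("Done");
    }
}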
https://www.edureka.co/blog/assertion-in-java/
CC-MAIN-2019-35
en
refinedweb
Dealing with callbacks as props in React

Nikita Mostovoy

TL;DR

- Don't mix JSX and business logic in one place; keep your code simple and understandable.
- For small optimizations, you can cache a function in class properties for classes or use the useCallback hook for function components. In this case, pure components won't be re-rendered every time their parent gets re-rendered. Callback caching is especially effective for avoiding excess updating cycles when you pass functions as a prop to PureComponents.
- Don't forget that an event handler receives a synthetic event, not the original event. If you exit from the current function scope, you won't get access to the synthetic event's fields. If you want to use fields outside the function scope, you can cache the fields you need.

Part 1. Event handlers, caching and code readability

React has quite a convenient way to add event handlers for DOM elements. This is one of the first basic things that beginners face.

class MyComponent extends Component {
  render() {
    return <button onClick={() => console.log('Hello world!')}>Click me</button>;
  }
}

It's quite easy, isn't it? When you see this code, it isn't complicated to understand what will happen when a user clicks the button. But what should we do if the amount of code in event handlers keeps growing? Let's assume we want to load the list of developers, filter them (user.team === 'search-team') and sort them by age when the button is clicked:

class MyComponent extends Component {
  constructor(props) {
    super(props);
    this.state = { users: [] };
  }

  render() {
    return (
      <div>
        <ul>
          {this.state.users.map(user => (
            <li>{user.name}</li>
          ))}
        </ul>
        <button
          onClick={() => {
            console.log('Hello world!');
            window
              .fetch('/usersList')
              .then(result => result.json())
              .then(data => {
                const users = data
                  .filter(user => user.team === 'search-team')
                  .sort((a, b) => {
                    if (a.age > b.age) {
                      return 1;
                    }
                    if (a.age < b.age) {
                      return -1;
                    }
                    return 0;
                  });
                this.setState({
                  users: users,
                });
              });
          }}
        >
          Load users
        </button>
      </div>
    );
  }
}

This code is too complicated. The business-logic part is mixed with JSX elements. The simplest way to avoid this is to move the function to class properties:

class MyComponent extends Component {
  fetchUsers() {
    // Move business-logic code here
  }

  render() {
    return (
      <div>
        <ul>
          {this.state.users.map(user => (
            <li>{user.name}</li>
          ))}
        </ul>
        <button onClick={() => this.fetchUsers()}>Load users</button>
      </div>
    );
  }
}

We moved the business logic from JSX code to a separate field in our class. The business-logic code needs access to this, so we made the callback as:

onClick={() => this.fetchUsers()}

Besides that, we can declare the fetchUsers class field as an arrow function:

class MyComponent extends Component {
  fetchUsers = () => {
    // Move business-logic code here
  };

  render() {
    return (
      <div>
        <ul>
          {this.state.users.map(user => (
            <li>{user.name}</li>
          ))}
        </ul>
        <button onClick={this.fetchUsers}>Load users</button>
      </div>
    );
  }
}

It allows us to declare the callback as

onClick={this.fetchUsers}

What is the difference between them? When we declare the callback as onClick={this.fetchUsers}, every render call will pass the same onClick reference to the button. In contrast, when we use onClick={() => this.fetchUsers()}, each render call will create a new function () => this.fetchUsers() and pass it to the button's onClick prop. It means that nextProps.onClick and props.onClick won't be equal, and even if we use a PureComponent instead of button it will be re-rendered.
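Since PureComponent's decision rule comes up repeatedly below, here is a rough sketch of my own (not code from the article) of what its built-in shouldComponentUpdate roughly does for props: a shallow, reference-level comparison.

class ManuallyPureButton extends React.Component {
  shouldComponentUpdate(nextProps) {
    // Re-render only if some prop reference actually changed;
    // a freshly created inline callback always fails this check.
    const keys = Object.keys({ ...this.props, ...nextProps });
    return keys.some(key => this.props[key] !== nextProps[key]);
  }

  render() {
    return <button onClick={this.props.onClick}>{this.props.children}</button>;
  }
}

A parent that passes onClick={() => this.fetchUsers()} defeats this check on every render, while onClick={this.fetchUsers} keeps the reference stable.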
Which negative effects can we receive during the development? In the vast majority of cases, we won't catch any visual performance issues, as Virtual DOM doesn't get any changes and nothing is re-rendered physically. However, if we render big lists of components, we can catch lags on a big amount of data. Why it's important to understand how functions are passed to the prop? You can often find on Twitter or StackOverflow such advice: "If you have issues with performance in React application, try to change inheritance in problem places from Component to PureComponent, or define shouldComponentUpdate to get rid of excess updating cycles". If we define a component as a PureComponent, it means, that it has already the shouldComponentUpdate function, which implements shallowEqual between its props and nextProps. If we set up new references as props to PureComponent in updating lifecycle, we'll lose all PureComponent advantages and optimizations. Let's watch an example. We implement Input component, that will show a counter representing the number of its updates class Input extends PureComponent { renderedCount = 0; render() { this.renderedCount++; return ( <div> <input onChange={this.props.onChange} /> <p>Input component was rerendered {this.renderedCount} times</p> </div> ); } } Now we create two components, which will render the Input component: class A extends Component { state = { value: '' }; onChange = e => { this.setState({ value: e.target.value }); }; render() { return ( <div> <Input onChange={this.onChange} /> <p>The value is: {this.state.value} </p> </div> ); } } Second: class B extends Component { state = { value: '' }; onChange(e) { this.setState({ value: e.target.value }); } render() { return ( <div> <Input onChange={e => this.onChange(e)} /> <p>The value is: {this.state.value} </p> </div> ); } } You can try the example here: This example shows how we can lose all the advantages of PureComponents if we set the new references to the PureComponent every time in the render. Part 2. Using event handlers in function components The new React hooks mechanism was announced in the new version of React@16.8 (). It allows implementing full-featured function components, with full lifecycle built with hooks. You are able to change almost all class components to functions using this feature. (but it isn't necessary) Let's rewrite Input Component from classes to functions. Input should store the information about how many times it was re-rendered. With classes, we are able to use instance field via this keyword. But with functions, we can't declare a variable with this. React provides useRef hook which we can use to store the reference to the HtmlElement in DOM tree. Moreover useRef is handy to store any mutable data like instance fields in classes: import React, { useRef } from 'react'; export default function Input({ onChange }) { const componentRerenderedTimes = useRef(0); componentRerenderedTimes.current++; return ( <> <input onChange={onChange} /> <p>Input component was rerendered {componentRerenderedTimes.current} times</p> </> ); } We created the component, but it isn't still PureComponent. We can add a library, that gives us a HOC to wrap component with PureComponent, but it's better to use the memo function, which has been already presented in React. 
It works faster and more effective: import React, { useRef, memo } from 'react'; export default memo(function Input({ onChange }) { const componentRerenderedTimes = useRef(0); componentRerenderedTimes.current++; return ( <> <input onChange={onChange} /> <p>Input component was rerendered {componentRerenderedTimes.current} times</p> </> ); }); Our Input component is ready. Now we'll rewrite A and B components. We can rewrite the B component easily: import React, { useState } from 'react'; function B() { const [value, setValue] = useState(''); return ( <div> <Input onChange={e => setValue(e.target.value)} /> <p>The value is: {value} </p> </div> ); } We have used useState hook, which works with the component state. It receives the initial value of the state and returns the array with 2 items: the current state and the function to set the new state. You can call several useState hooks in the component, each of them will be responsible for its own part of the instance state. How can we cache a callback? We aren't able to move it from component code, as it would be common for all different component instances. For such kind of issues React has special hooks for caching and memoization. The handiest hook for us is useCallback So, A component is: import React, { useState, useCallback } from 'react'; function A() { const [value, setValue] = useState(''); const onChange = useCallback(e => setValue(e.target.value), []); return ( <div> <Input onChange={onChange} /> <p>The value is: {value} </p> </div> ); } We cached function so that Input component won't be re-rendered every time when its parent re-renders. How does useCallback work? This hook returns the memoized version of the function. (that meant the reference won't be changed on every render call). Beside the function which will be memoized, this hook receives a second argument. In our case, it was an empty array. The second argument allows passing to the hook the list of dependencies. If at least one of this fields gets changed, the hook will return a new version of the function with the new reference to enforce the correct work of your component. The difference between inline callback and memoized callback you can see here: Why array of dependencies is needed? Let's suppose, we have to cache a function, which depends on some value via the closure: import React, { useCallback } from 'react'; import ReactDOM from 'react-dom'; import './styles.css'; function App({ a, text }) { const onClick = useCallback(e => alert(a), [ /*a*/ ]); return <button onClick={onClick}>{text}</button>; } const rootElement = document.getElementById('root'); ReactDOM.render(<App text={'Click me'} a={1} />, rootElement); The component App depends on a prop. If we execute the example, everything will work correctly. However as we add to the end re-render, the behavior of our component will be incorrect: setTimeout(() => ReactDOM.render(<App text={'Next A'} a={2} />, rootElement), 5000); When timeout executes, click the button will show 1 instead of 2. It works so because we cached the function from the previous render, which made closure with previous a variable. The important thing here is when the parent gets re-rendered React will make a new props object instead of mutating existing one. If we uncomment /*a*/ our code will work correctly. When component re-renders the second time, React hook will check if data from deps have been changed and will return the new function (with a new reference). 
You can try out this example here:

React has a number of functions which allow memoizing data: useRef, useCallback and useMemo. The last one is similar to useCallback, but it is handy for memoizing data instead of functions. useRef is good both for caching references to DOM elements and for working as an instance field. At first glance, the useRef hook could be used to cache functions. It's similar to an instance field which stores methods. However, it isn't convenient to use for function memoization. If our memoized function uses closures and the value changes between renders, the function will keep working with the first one (that was cached). It means we would have to change references to the memoized function manually, or just use the useCallback hook. - here is the example with the right useCallback usage and the wrong useRef one.

Part 3. Synthetic events

We've already looked at how to use event handlers and how to work with closures in callbacks, but React also has differences in the event objects inside event handlers. Take a look at the Input component: it works synchronously. However, in some cases you would like to implement debounce or throttling patterns. The debounce pattern is quite convenient for search fields: you trigger the search only when the user has stopped typing. Let's create a component which will call setState after a delay:

function Search() {
  const [query, setQuery] = useState('');
  const onChange = e => {
    setTimeout(() => {
      setQuery(e.target.value);
    }, 500);
  };
  return (
    <>
      <input onChange={onChange} />
      <p>{query}</p>
    </>
  );
}

This won't work. React proxies events, and after the synchronous callback React cleans up the event object in order to reuse it as an optimization. So our onChange callback receives a synthetic event that will be cleaned. If we want to use e.target.value later, we have to cache it before the asynchronous code section:

function Search() {
  const [query, setQuery] = useState('');
  const onChange = e => {
    const value = e.target.value;
    setTimeout(() => {
      setQuery(value);
    }, 500);
  };
  return (
    <>
      <input onChange={onChange} />
      <p>{query}</p>
    </>
  );
}

Example: If you have to cache the whole event instance, you can call event.persist(). This function removes your synthetic event instance from the React event pool. But in my own work, I've never faced such a necessity.

Conclusion:

React event handlers are pretty convenient as they
- implement subscription and unsubscription automatically
- simplify our code readability

Although there are some points which you should remember:
- Callback redefinition in props
- Synthetic events

Callback redefinition usually doesn't have a big influence on visual performance, as the DOM isn't changed. But if you face performance issues and are now changing components to Pure or memo, pay attention to callback memoization or you'll lose any profit from PureComponents. You can use instance fields for class components or the useCallback hook for function components.
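Since useMemo is only mentioned in passing above, here is a tiny sketch of my own (not from the article) showing what it is for: caching a computed value between renders until a dependency changes.

import React, { useMemo } from 'react';

function UserList({ users, query }) {
  // Recompute the filtered array only when `users` or `query` changes,
  // so child components receiving it keep a stable reference otherwise.
  const visibleUsers = useMemo(
    () => users.filter(user => user.name.includes(query)),
    [users, query]
  );

  return (
    <ul>
      {visibleUsers.map(user => (
        <li key={user.name}>{user.name}</li>
      ))}
    </ul>
  );
}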
https://dev.to/xnimorz/dealing-with-event-handlers-in-react-540j
CC-MAIN-2019-35
en
refinedweb
Managing offline map areas

Map areas can be prepared ahead of time to make going offline faster and easier for the field worker. This guide describes how to use the ArcGIS API for Python to create preplanned offline map areas for use in the ArcGIS Runtime SDKs and, in the future, with apps like Collector. To learn about the general concept and steps in taking a map offline, refer here. To understand the data requirements needed to take a map offline, refer here.

Creating offline map areas

With the ArcGIS API for Python, you can conveniently manage offline areas from the WebMap object. The offline_areas property of the WebMap object gives you access to the OfflineAreaManager object, with which you can create(), list() and update() these offline packages.

from arcgis.gis import GIS
from arcgis.mapping import WebMap
gis = GIS("", "arcgis_python", "P@ssword123")

Let us use a fire first responder web map and take it offline.

wm_item = gis.content.get('7f88050bf48749c6b9d82634f04b6362')
wm_item

fire_webmap = WebMap(wm_item)

You can create offline areas for a specified extent or a bookmark. You can additionally specify any layers that you need to ignore, a destination folder to store these packages, and a min and max scale to which the packages need to be cached.

List the bookmarks in the web map:

for bookmark in fire_webmap.definition.bookmarks:
    print(bookmark.name)

Socal
NorCal

Create offline areas for the bookmark "Southern California", and while we are at it, let us limit the scales for which the packages need to be created. As one of the parameters, you can specify a name, title and description for the "offline map area" item that gets created during this process.

offline_item_properties = {'title': 'Offline area for Southern California',
                           'tags': ['Python', 'automation', 'fires'],
                           'snippet': 'Area created for first responders'}
socal_offline_item = fire_webmap.offline_areas.create(area=fire_webmap.definition.bookmarks[1].name,
                                                      item_properties=offline_item_properties,
                                                      min_scale=147914000,
                                                      max_scale=73957000)

This operation can take a while as the server is packaging the contents of the web map for offline use. To view the status, you can optionally turn on verbosity using the env module as shown below:

from arcgis import env
env.verbose = True

socal_offline_item

The type of the item we just created is Map Area. Read along to see how you can list the packages created for this map area.

socal_offline_item.type
'Map Area'

socal_offline_item.related_items('Area2Package', 'forward')
[<Item title:"WTL_usa-1b8fefd666444ae199df560e6df50eee" type:Tile Package owner:demo_deldev>,
 <Item title:"World_Topo_Map-dbf7e376acec4b70b802d8fa0f3bac93" type:Tile Package owner:demo_deldev>,
 <Item title:"VectorTileServe-29fe800f37f640abbfd342b8ad679a9b" type:Vector Tile Package owner:demo_deldev>]

These items are meant for use in the offline applications described above. However, if needed, you can call the download() method on these Items and download their data to disk using the Python API (a short sketch follows at the end of this section).

fire_webmap.offline_areas.list()
[<Item title:"Offline area for Southern California" type:Map Area owner:demo_deldev>]

Updating offline areas

Keeping offline areas up to date is an important task. You can accomplish this by calling the update() method off the offline_areas property of the WebMap object. This method accepts a list of Map Area items as input. To update all the offline areas created for a web map, call the method without any input parameters.
Below is an example of how the progress is relayed back to you when you turn on verbosity in the env module.

# update all offline areas for the fire web map
fire_webmap.offline_areas.update()

Submitted.
Executing...
Start Time: Wednesday, April 11, 2018 11:15:13 PM
Running script RefreshMapAreaPackage...
pending - pending - pending - pending - pending - pending - pending - complete
pending - pending - pending - pending - pending - pending - pending - pending - complete
pending - pending - pending - pending - complete

[{'itemId': '9b49ec838579464f97e85f7613b3aa1c', 'source': '', 'state': 'updated'},
 {'itemId': 'c7549a1973cb487bac4d3f4099bf2940', 'source': '', 'state': 'updated'},
 {'itemId': '46619dfde09c4eb09bb4b4c99fea8805', 'source': '', 'state': 'updated'}]

Now your field users are all set with the latest packages for use in a disconnected setting.
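To round this out, here are two small illustrative sketches of my own (not from the guide). The first shows the download() call mentioned earlier, saving each package behind the map area to disk (the save path is a placeholder); the second shows a targeted refresh, since update() accepts a list of Map Area items:

# download the tile / vector tile packages behind the offline area
for package in socal_offline_item.related_items('Area2Package', 'forward'):
    local_path = package.download(save_path='/tmp/offline_packages')
    print(package.title, '->', local_path)

# refresh only the Southern California area instead of all areas
result = fire_webmap.offline_areas.update([socal_offline_item])
print(result)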
https://developers.arcgis.com/python/guide/managing-offline-map-areas/
CC-MAIN-2019-35
en
refinedweb
On 04.07.2018 at 07:26, Egor Skriptunoff <egor.skriptunoff@gmail.com> wrote:

Hi!

Let me start the (n+1)-th thread about the "global by default" feature of Lua. My suggestion is to replace the "global by default" approach with "nothing by default" by introducing special syntax for accessing global variables:

$print("Hello world")    -- ok
print("Hello world")     -- syntax error: referencing undeclared variable "print"
_G.print("Hello world")  -- syntax error: referencing undeclared variable "_G"

"$" is not a part of an identifier. "$" is a lexeme (for example, there could be a space between "$" and the global variable name). "$" is absolutely the same as "_ENV." but is more comfortable to use.

Why do we need this?

1) The "$globalvar" notation introduces two separate namespaces: one for globals and one for locals/upvalues. As a result of this separation, most variable name typos will be detected at compile time. Although global name typos will not be detected ($math.sin -> $moth.sin), most identifiers in properly-written Lua programs are locals. Currently, writing Lua programs without a "global-nil-alarmer" (such as "strict.lua") is practically impossible (because typos in local variable names are silently converted to globals).

2) "$print" contains more symbols than "print", and you have to press SHIFT on your keyboard to type "$", but this is not a drawback. Instead, there is a point in it: users would have to pay attention to slow global access. More finger moves = more CPU time to access. So, we are adding "ergonomical motivation" to cache globals in Lua programs :-)

3) Polluting the global namespace is already discouraged in Lua, so introducing the "$" prefix for globals is the next step in that direction. It is a polite way to say: "use globals only when they are really needed".

4) The "$globalvar" notation will solve the problem with globals occasionally shadowed out by locals or upvalues:

local function sort_desc(table)
   table.sort(table, function(a, b) return a>b end)  -- very popular newcomers' mistake
end

5) The "$globalvar" notation will make the difference between "monkey-patch-friendly code" and "monkey-patch-unfriendly code" easier to see.

BTW, "_ENV" could be removed from Lua as "$" could do the same (with some modifications in the parser to interpret "$" either as "_ENV." or as "_ENV"):

$print()           -- instead of print()
$.print()          -- instead of print()
local env = $      -- instead of local env = _ENV
$[key] = value     -- instead of _ENV[key] = value
function sndbx($)  -- instead of function sndbx(_ENV)

What are your thoughts on this?

--
Egor
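For context, the "global-nil-alarmer" mentioned in point 1 refers to the strict.lua approach; a simplified sketch of that idea (my own illustration, not the actual strict.lua source) catches accidental global reads and writes at run time with a metatable on _G:

-- Declare globals explicitly; any other global access raises an error.
local declared = {}

function global(name, value)
  declared[name] = true
  rawset(_G, name, value)
end

setmetatable(_G, {
  __newindex = function(_, name, value)
    if not declared[name] then
      error("assignment to undeclared global '" .. name .. "'", 2)
    end
    rawset(_G, name, value)
  end,
  __index = function(_, name)
    error("reference to undeclared global '" .. name .. "'", 2)
  end,
})

global("answer", 42)
print(answer)    -- ok
-- print(answr)  -- would raise: reference to undeclared global 'answr'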
http://lua-users.org/lists/lua-l/2018-07/msg00032.html
CC-MAIN-2019-35
en
refinedweb
The text below is based on my 2016 notes on Python, uvloop and Go after a colleague came back from Europython conference. It shows that on python side, even though uvloop lightly wraps libuv that is known to be fast, the performance drops significantly right after pure python code starts to be added into request handler, and the end result is that python server becomes more than an order of magnitude slower compared to go version. However in 2018 we still hope that python world could be improved for implementing modern networked servers via bringing in combination of Cython and coroutine/stackless-based approaches such as gevent and pygolang. All the pieces are there, but scattered. We just need to have the focal point where they all could be tightly integrated together. It is good to hear that Python world is trying to catch. As I was using Python for many years and also was playing with Go in recent times and know both, on my side I'd like to provide some feedback. But before we begin, let me show you something: Above are benchmarks for HTTP servers taken from uvloop blogpost for which HTTP handlers were modified to do some simple text processing instead of doing only 100% I/O. The graph shows that once we start adding python code - even very simple one - to server when handling requests, the performance is killed in Python case. Now here is the explanation: Modern web-servers are built around so-called reactors. These are specialized event-loops which communicate with kernel in efficient ways (solving c10k problem) using things like epoll() on Linux, kqueue() on FreeBSD etc. Reactors are basically loops when you subscribe for events on file-descriptors, and receive notifications via getting corresponding callbacks. Then every CPU is tried to be loaded which leads to something like M:N schemes where M is threads spawned for every CPU and N is many connections which corresponding thread handle via callback-events. libuv is a C library which wraps OS-specific mechanisms for organizing reactors in uniform way. uvloop wraps libuv via Cython and expose its service to Python. Go has builtin network scheduler in its runtime which is similar to what libuv is doing. Now original MagicStack benchmarks (python, go) actually only receive request and send back the payload. Since those two paths are all I/O and are handled 99% by only reactors, what those benchmarks really measure is how well underlying reactor libraries perform. I'm sure libuv should be a good library as well as Go runtime is well done and tuned, and MagicStack benchmarks confirm that. However any real server has some control logic and things to do in its HTTP handlers and that is not there in MagicStack benchmarks. And once we are starting to add code for handling requests it becomes not only I/O even for cases when people tend to think performance should be I/O bound. In Python case executing pure-python code is known to be slow. The fact that in original MagicStack benchmarks performance dropped on the floor while they were using HTTP parser written in Python justifies it -- performance recovered only when they actually switched to using C library to parse HTTP requests. For python the trend is: whenever we need performance we need to move that code to C and wrap it. But experience also shows that not all code can be so well localized and slowness often remains scattered throughout whole python codebase. 
(I'm not considering here cases when we move everything to C and wrap only something like top-level main(), because then there is nothing left on the Python side.)

So let's simulate, at least in part, being a real web server and doing some work in the HTTP handler. For an example workload I chose to analyze the characters of the request path and see how close they are to the names of several of our websites. Here is the workload for the Python case (full source):

def handle(self, request, response):
    parsed_url = httptools.parse_url(self._current_url)
    xc = XClassifier()
    for char in parsed_url.path:
        xc.nextChar(char)
    resp = (b'%s: %d\n%s: %d\n%s: %d\n%s: %d\ntotal: %d\n'
            % (navytux, xc.nnavytux, nexedi, xc.nnexedi,
               lab, xc.nlab, erp5, xc.nerp5, xc.ntotal))
    response.write(resp)
    if not self._current_parser.should_keep_alive():
        self._transport.close()
    self._current_parser = None
    self._current_request = None

navytux = b'navytux.spb.ru'
nexedi = b''
lab = b'lab.nexedi.com'
erp5 = b''

# whether character ch is close to string s.
# character is close to a string if it is close to any of characters in it
# character is close to a character if their distance <= 1
def isclose(ch, s):
    for ch2 in s:
        if abs(ch - ch2) <= 1:
            return True
    return False

class XClassifier:
    def __init__(self):
        self.nnavytux = 0
        self.nnexedi = 0
        self.nlab = 0
        self.nerp5 = 0
        self.ntotal = 0

    def nextChar(self, ch):
        if isclose(ch, navytux):
            self.nnavytux += 1
        if isclose(ch, nexedi):
            self.nnexedi += 1
        if isclose(ch, lab):
            self.nlab += 1
        if isclose(ch, erp5):
            self.nerp5 += 1
        self.ntotal += 1

and the same for Go (full source):

func handler(w http.ResponseWriter, r *http.Request) {
    xc := NewXClassifier()
    path := r.URL.Path
    for i := range path {
        xc.nextChar(path[i])
    }
    fmt.Fprintf(w, "%s: %d\n%s: %d\n%s: %d\n%s: %d\ntotal: %d\n",
        navytux, xc.nnavytux, nexedi, xc.nnexedi,
        lab, xc.nlab, erp5, xc.nerp5, xc.ntotal)
}

const (
    navytux = "navytux.spb.ru"
    nexedi  = ""
    lab     = "lab.nexedi.com"
    erp5    = ""
)

func abs8(v int8) int8 {
    if v >= 0 {
        return v
    }
    return -v
}

// whether character ch is close to string s.
// character is close to a string if it is close to any of characters in it
// character is close to a character if their distance <= 1
func isclose(ch byte, s string) bool {
    for i := 0; i < len(s); i++ {
        ch2 := s[i]
        if abs8(int8(ch-ch2)) <= 1 {
            return true
        }
    }
    return false
}

type XClassifier struct {
    nnavytux int
    nnexedi  int
    nlab     int
    nerp5    int
    ntotal   int
}

func NewXClassifier() *XClassifier {
    return &XClassifier{}
}

func (xc *XClassifier) nextChar(ch byte) {
    if isclose(ch, navytux) {
        xc.nnavytux += 1
    }
    if isclose(ch, nexedi) {
        xc.nnexedi += 1
    }
    if isclose(ch, lab) {
        xc.nlab += 1
    }
    if isclose(ch, erp5) {
        xc.nerp5 += 1
    }
    xc.ntotal += 1
}

For every request we create a classifier object, and then for the characters in the request path do several method/function calls and lookups in the website-name strings. In the end the classification statistic is returned to the client:

$ curl -v
* Trying 127.0.0.1...
* Connected to 127.0.0.1 (127.0.0.1) port 25000 (#0)
> GET /helloworld HTTP/1.1
> Host: 127.0.0.1:25000
> User-Agent: curl/7.50.1
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: text/plain
< Content-Length: 83
<
navytux.spb.ru: 5
: 10
lab.nexedi.com: 10
: 10
total: 11
* Connection #0 to host 127.0.0.1 left intact

This is not a big workload, rather a small one, and it is of doubtful usefulness, but it shows what starts to happen performance-wise when there is some non-trivial work in the handler.
About performance: benchmarking was done via e.g.

wrk -t 1 -c 40 -d 10

on the same machine (my old Core2-Duo notebook) with output like:

Running 10s test @
  1 threads and 40 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.20ms    2.16ms  31.72ms   90.58%
    Req/Sec    21.72k     1.24k   27.43k    89.00%
  216080 requests in 10.00s, 41.21MB read
Requests/sec: 21601.66
Transfer/sec:     4.12MB

For N(input characters) > 11, "helloworld" was simply repeated on the request path the needed number of times. For those interested, the actual benchmark runs and numbers are here.

At this point I'd like to ask you to look once again at the picture in the beginning of the post: when there is only 1 input character (only "/" in the path), Python and Go perform close to each other, though Python is already slower. For 11 input characters ("/helloworld" in the path) the difference is ~2.8x. For 101 input characters the difference is ~14x: Python becomes more than an order of magnitude slower.

So to me this once again shows that even with libuv integration, for any real use case, as long as there is Python code in the request handler, performance will be much slower compared to Go. On the other hand, the Go case shows that it performs rather well, usually without wrapping anything, as most things can be and actually are written in Go itself. For example, 90% of the Go runtime, and in particular the libuv analog - Go's network scheduler - is implemented in Go itself, which shows the language can be used to get good, close-to-native speed while working in a high-level-enough language similar to Python.

I'd like to also add that the MagicStack benchmarks are not reasonable as they set GOMAXPROCS=1. In simple words this means that while Go's support for multicore machines is very good, it is artificially limited to using only 1 CPU on the system. In other words, Go is artificially constrained to behave as if it had something like a GIL in the Python world. Without the above GOMAXPROCS=1 setting, Go by default uses all available CPUs, and for cases when there is not much contention between handlers, performance usually scales close to linearly. I'd like to remind that we are already using 8-CPU machines at Vifib, and Python practically cannot use more than 1 CPU in a single process because of the GIL.

I want to also add some words about the Python approach to concurrency and building parallel servers. To me, with asyncio / async/await they are just creating a different world for no reason, as every part of the software has to be adapted to async & co stuff, and there remains a, yes, somewhat mitigated, callback spaghetti. On the other hand, in Go it is still the same serial world, and we add channels via which you can connect goroutines. Each goroutine runs serially but can send data via channels just like via a pipe. To me it is a significantly better and more uniform approach, even in terms of human thinking, so in this case Go adds not only performance but also productivity. And I can tell this with confidence, because I was into reactors/asynchronous programming for a while, even implementing my own reactors sometimes.
Thus what makes sense is to implement state machine at lowest-possible level and then give programmers the feeling of that they have a lot of serial processes and adequate communication primitives. The reactor is itself a state machine. Go does it at runtime and hides providing to user serial goroutines and channels while other parties just throw the complexity of "asynchronousity" to developers. For me as a developer, what would be a better approach for python is to actually integrate libuv deeply in its runtime and give developers green (= very cheap) threads and communication primitives. The sad thing about it is that stackless (2) was doing things in this direction for years (it started around beginning of 2000's - the same time when first GIL removal patches started to appear), but despite this approach was with us for a long time there is seemingly almost zero chances for it be merged into CPython and people reinvent "cute" things (async/await asyncio) and throw twisted complexity of programming to developers. In 2016, I would have said that low performance and lack of adequate approach for concurrent programming sadly makes Python to be a not so appropriate language to implement loaded webservers today. However, the existence of cython combined with a clean concurrency model based on technologies such as gevent and pygolang could change the situation if both can be tightly integrated into cython's static cdef code rather than being all scattered as it is today.
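As an aside to the goroutines-and-channels argument above, here is a minimal Go sketch of my own illustrating the model: each goroutine is plain serial code, and channels carry the data between them.

package main

import "fmt"

// worker reads jobs from one channel and sends results to another,
// written as plain serial code even though many run concurrently.
func worker(jobs <-chan int, results chan<- int) {
    for n := range jobs {
        results <- n * n
    }
}

func main() {
    jobs := make(chan int)
    results := make(chan int)

    for i := 0; i < 4; i++ { // a few concurrent workers
        go worker(jobs, results)
    }

    go func() {
        for n := 1; n <= 8; n++ {
            jobs <- n
        }
        close(jobs)
    }()

    for i := 0; i < 8; i++ {
        fmt.Println(<-results)
    }
}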
https://erp5.nexedi.com/blog/NXD-Document.Blog.UVLoop.Python.Benchmark
CC-MAIN-2019-35
en
refinedweb
A new Flutter package: carousel_hero.

Add it to your pubspec.yaml dependencies:

carousel_hero: ^0.0.1

You can install packages from the command line with Flutter:

$ flutter pub get

Alternatively, your editor might support flutter pub get. Check the docs for your editor to learn more. Now in your Dart code, you can use:

import 'package:carousel_hero/carousel_hero.dart';

We analyzed this package on Aug 16, 2019, and provided a score, details, and suggestions below. Analysis was completed with status completed.

Detected platforms: Flutter. References Flutter, and has no conflicting libraries.

Format lib/carousel_hero.dart: run flutter format to format lib/carousel_hero.dart.
https://pub.dev/packages/carousel_hero
CC-MAIN-2019-35
en
refinedweb
This post walks through securing an Azure Service Fabric cluster with Azure Active Directory via the Azure portal. So we will discuss:

- Setup of your cluster certificate
- Setup of Azure AD
- Azure cluster creation
- Testing your Admin and Read-only user login

Setting up your cluster certificate

My purpose for using Azure Active Directory (AAD) was to set up an admin user and a read-only user so they could access the Service Fabric Explorer with those permissions. You still need to have a cluster certificate to secure the cluster though.

1. Open PowerShell ISE as an Administrator.
2. In the PowerShell command window, log in to your Azure subscription using 'Login-AzureRMAccount'. When you do this, in the command window you will see the subscriptionID. You need to copy the subscriptionID, because you will need that in the PowerShell script to create the Azure AD application. Also copy the tenantID value.
3. Run the following PS script. This script creates a new resource group for your key vault, a key vault, a self-signed certificate and a secret key. You will need to record the information that appears in the PS command prompt output window after the successful execution of this script. Fill in the variables with your own values. Note that it is usually best just to save the script code below to a PS file first.

#-----You have to change the variable values-------#
# This script will:
# 1. Create a new resource group for your key vaults
# 2. Create a new key vault
# 3. Create, export and import (to your certificate store) a self-signed certificate
# 4. Create a new secret and put the cert in key vault
# 5. Output the values you will need to supply to your cluster for cluster cert security.
#    Make sure you copy these values before closing the PowerShell window
#--------------------------------------------------#
#Name of the Key Vault service
$KeyVaultName = "<your-vault-name>"
#Resource group for the Key-Vault service.
$ResourceGroup = "<vault-resource-group-name>" #Set the Subscription $subscriptionId = "<your-Azure-subscription-ID>" #Azure data center locations (East US", "West US" etc) $Location = "<region>" #Password for the certificate $Password = "<certificate-password>" #DNS name for the certificate $CertDNSName = "<name-of-your-certificate>" #Name of the secret in key vault $KeyVaultSecretName = "<secret-name-for-cert-in-vault>" #Path to directory on local disk in which the certificate is stored $CertFileFullPath = "C:\<local-directory-to-place-your-exported-cert>\$CertDNSName.pfx" #If more than one under your account Select-AzureRmSubscription -SubscriptionId $subscriptionId #Verify Current Subscription Get-AzureRmSubscription -SubscriptionId $subscriptionId #Creates the a new resource group and Key-Vault New-AzureRmResourceGroup -Name $ResourceGroup -Location $Location New-AzureRmKeyVault -VaultName $KeyVaultName -ResourceGroupName $ResourceGroup -Location $Location -sku standard -EnabledForDeployment #Converts the plain text password into a secure string $SecurePassword = ConvertTo-SecureString -String $Password -AsPlainText -Force #Creates a new selfsigned cert and exports a pfx cert to a directory on disk $NewCert = New-SelfSignedCertificate -CertStoreLocation Cert:\CurrentUser\My -DnsName $CertDNSName Export-PfxCertificate -FilePath $CertFileFullPath -Password $SecurePassword -Cert $NewCert Import-PfxCertificate -FilePath $CertFileFullPath -Password $SecurePassword -CertStoreLocation Cert:\LocalMachine\My #Reads the content of the certificate and converts it into a json format $Bytes = [System.IO.File]::ReadAllBytes($CertFileFullPath) $Base64 = [System.Convert]::ToBase64String($Bytes) $JSONBlob = @{ data = $Base64 dataType = 'pfx' password = $Password } | ConvertTo-Json $ContentBytes = [System.Text.Encoding]::UTF8.GetBytes($JSONBlob) $Content = [System.Convert]::ToBase64String($ContentBytes) #Converts the json content a secure string $SecretValue = ConvertTo-SecureString -String $Content -AsPlainText -Force #Creates a new secret in Azure Key Vault $NewSecret = Set-AzureKeyVaultSecret -VaultName $KeyVaultName -Name $KeyVaultSecretName -SecretValue $SecretValue -Verbose #Writes out the information you need for creating a secure cluster Write-Host Write-Host "Resource Id: "$(Get-AzureRmKeyVault -VaultName $KeyVaultName).ResourceId Write-Host "Secret URL : "$NewSecret.Id Write-Host "Thumbprint : "$NewCert.Thumbprint The information you need to record will appear similar to this: Resource Id: /subscriptions/<your-subscriptionID>/resourceGroups/<your-resource-group>/providers/Microsoft.KeyVault/vaults/<your-vault-name> Secret URL : https://<your-vault-name>.vault.azure.net:443/secrets/<secret>/<generated-guid> Thumbprint : <certificate-thumbprint> Setting up Azure Active Directory - To secure the cluster with Azure AD, you will need to decide which AD directory in your subscription you will be using. In this example, we will use the 'default' directory. In step 2 above, you should have recorded the 'tenantID'. This is the ID associated with your default Active directory. NOTE: If you have more than one directory (or tenant) in your subscription, you are going to have to make sure you get the right tenantID from your AAD administrator. The first piece of script you need to save to a file named Common.ps1 is: <# .VERSION 1.0.3 .SYNOPSIS Common script, do not call it directly. 
#> if($headers){ Exit } Try { $FilePath = Join-Path $PSScriptRoot "Microsoft.IdentityModel.Clients.ActiveDirectory.dll" Add-Type -Path $FilePath } Catch { Write-Warning $_.Exception.Message } function GetRESTHeaders() { # Use common client $clientId = "1950a258-227b-4e31-a9cf-717495945fc2" $redirectUrl = "urn:ietf:wg:oauth:2.0:oob" $authenticationContext = New-Object Microsoft.IdentityModel.Clients.ActiveDirectory.AuthenticationContext -ArgumentList $authString, $FALSE $accessToken = $authenticationContext.AcquireToken($resourceUrl, $clientId, $redirectUrl, [Microsoft.IdentityModel.Clients.ActiveDirectory.PromptBehavior]::RefreshSession).AccessToken $headers = New-Object "System.Collections.Generic.Dictionary[[String],[String]]" $headers.Add("Authorization", $accessToken) return $headers } function CallGraphAPI($uri, $headers, $body) { $json = $body | ConvertTo-Json -Depth 4 -Compress return (Invoke-RestMethod $uri -Method Post -Headers $headers -Body $json -ContentType "application/json") } function AssertNotNull($obj, $msg){ if($obj -eq $null -or $obj.Length -eq 0){ Write-Warning $msg Exit } } # Regional settings switch ($Location) { "china" { $resourceUrl = "" $authString = "" + $TenantId } default { $resourceUrl = "" $authString = "" + $TenantId } } $headers = GetRESTHeaders if ($ClusterName) { $WebApplicationName = $ClusterName + "_Cluster" $WebApplicationUri = "" $NativeClientApplicationName = $ClusterName + "_Client" } You do not need to execute this script, it will be called by the next script, so make sure you have Common.ps1 in the same folder as the next script. 2. Create a new script file and paste in the code below. Name this file SetupApplications.ps1. Note that you will need to record some of the output from the execute of this script (explained below) for later use. <# .VERSION 1.0.3 .SYNOPSIS Setup applications in a Service Fabric cluster Azure Active Directory tenant. .PREREQUISITE 1. An Azure Active Directory tenant. 2. A Global Admin user within tenant. .PARAMETER TenantId ID of tenant hosting Service Fabric cluster. .PARAMETER WebApplicationName Name of web application representing Service Fabric cluster. .PARAMETER WebApplicationUri App ID URI of web application. .PARAMETER WebApplicationReplyUrl Reply URL of web application. Format: https://<Domain name of cluster>:<Service Fabric Http gateway port> .PARAMETER NativeClientApplicationName Name of native client application representing client. .PARAMETER ClusterName A friendly Service Fabric cluster name. Application settings generated from cluster name: WebApplicationName = ClusterName + "_Cluster", NativeClientApplicationName = ClusterName + "_Client" .PARAMETER Location Used to set metadata for specific region: china. Ignore it in global environment. .EXAMPLE . Scripts\SetupApplications.ps1 -TenantId '4f812c74-978b-4b0e-acf5-06ffca635c0e' -ClusterName 'MyCluster' -WebApplicationReplyUrl '' Setup tenant with default settings generated from a friendly cluster name. .EXAMPLE . Scripts\SetupApplications.ps1 -TenantId '4f812c74-978b-4b0e-acf5-06ffca635c0e' -WebApplicationName 'SFWeb' -WebApplicationUri '' -WebApplicationReplyUrl '' -NativeClientApplicationName 'SFnative' Setup tenant with explicit application settings. .EXAMPLE . 
$ConfigObj = Scripts\SetupApplications.ps1 -TenantId '4f812c74-978b-4b0e-acf5-06ffca635c0e' -ClusterName 'MyCluster' -WebApplicationReplyUrl '' Setup and save the setup result into a temporary variable to pass into SetupUser.ps1 #> Param ( [Parameter(ParameterSetName='Customize',Mandatory=$true)] [Parameter(ParameterSetName='Prefix',Mandatory=$true)] [String] $TenantId, [Parameter(ParameterSetName='Customize')] [String] $WebApplicationName, [Parameter(ParameterSetName='Customize')] [String] $WebApplicationUri, [Parameter(ParameterSetName='Customize',Mandatory=$true)] [Parameter(ParameterSetName='Prefix',Mandatory=$true)] [String] $WebApplicationReplyUrl, [Parameter(ParameterSetName='Customize')] [String] $NativeClientApplicationName, [Parameter(ParameterSetName='Prefix',Mandatory=$true)] [String] $ClusterName, [Parameter(ParameterSetName='Prefix')] [Parameter(ParameterSetName='Customize')] [ValidateSet('china')] [String] $Location ) Write-Host 'TenantId = ' $TenantId . "$PSScriptRoot\Common.ps1" $graphAPIFormat = $resourceUrl + "/" + $TenantId + "/{0}?api-version=1.5" $ConfigObj = @{} $ConfigObj.TenantId = $TenantId $appRole = @{ allowedMemberTypes = @("User") description = "ReadOnly roles have limited query access" displayName = "ReadOnly" id = [guid]::NewGuid() isEnabled = "true" value = "User" }, @{ allowedMemberTypes = @("User") description = "Admins can manage roles and perform all task actions" displayName = "Admin" id = [guid]::NewGuid() isEnabled = "true" value = "Admin" } $requiredResourceAccess = @(@{ resourceAppId = "00000002-0000-0000-c000-000000000000" resourceAccess = @(@{ id = "311a71cc-e848-46a1-bdf8-97ff7156d8e6" type= "Scope" }) }) if (!$WebApplicationName) { $WebApplicationName = "ServiceFabricCluster" } if (!$WebApplicationUri) { $WebApplicationUri = "" } if (!$NativeClientApplicationName) { $NativeClientApplicationName = "ServiceFabricClusterNativeClient" } #Create Web Application $uri = [string]::Format($graphAPIFormat, "applications") $webApp = @{ displayName = $WebApplicationName identifierUris = @($WebApplicationUri) homepage = $WebApplicationReplyUrl #Not functionally needed. Set by default to avoid AAD portal UI displaying error replyUrls = @($WebApplicationReplyUrl) appRoles = $appRole } switch ($Location) { "china" { $oauth2Permissions = @(@{ adminConsentDescription = "Allow the application to access " + $WebApplicationName + " on behalf of the signed-in user." adminConsentDisplayName = "Access " + $WebApplicationName id = [guid]::NewGuid() isEnabled = $true type = "User" userConsentDescription = "Allow the application to access " + $WebApplicationName + " on your behalf." 
userConsentDisplayName = "Access " + $WebApplicationName value = "user_impersonation" }) $webApp.oauth2Permissions = $oauth2Permissions } } $webApp = CallGraphAPI $uri $headers $webApp AssertNotNull $webApp 'Web Application Creation Failed' $ConfigObj.WebAppId = $webApp.appId Write-Host 'Web Application Created:' $webApp.appId #Service Principal $uri = [string]::Format($graphAPIFormat, "servicePrincipals") $servicePrincipal = @{ accountEnabled = "true" appId = $webApp.appId displayName = $webApp.displayName appRoleAssignmentRequired = "true" } $servicePrincipal = CallGraphAPI $uri $headers $servicePrincipal $ConfigObj.ServicePrincipalId = $servicePrincipal.objectId #Create Native Client Application $uri = [string]::Format($graphAPIFormat, "applications") $nativeAppResourceAccess = $requiredResourceAccess += @{ resourceAppId = $webApp.appId resourceAccess = @(@{ id = $webApp.oauth2Permissions[0].id type= "Scope" }) } $nativeApp = @{ publicClient = "true" displayName = $NativeClientApplicationName replyUrls = @("urn:ietf:wg:oauth:2.0:oob") requiredResourceAccess = $nativeAppResourceAccess } $nativeApp = CallGraphAPI $uri $headers $nativeApp AssertNotNull $nativeApp 'Native Client Application Creation Failed' Write-Host 'Native Client Application Created:' $nativeApp.appId $ConfigObj.NativeClientAppId = $nativeApp.appId #Service Principal $uri = [string]::Format($graphAPIFormat, "servicePrincipals") $servicePrincipal = @{ accountEnabled = "true" appId = $nativeApp.appId displayName = $nativeApp.displayName } $servicePrincipal = CallGraphAPI $uri $headers $servicePrincipal #OAuth2PermissionGrant #AAD service principal $uri = [string]::Format($graphAPIFormat, "servicePrincipals") + '&$filter=appId eq ''00000002-0000-0000-c000-000000000000''' $AADServicePrincipalId = (Invoke-RestMethod $uri -Headers $headers).value.objectId $uri = [string]::Format($graphAPIFormat, "oauth2PermissionGrants") $oauth2PermissionGrants = @{ clientId = $servicePrincipal.objectId consentType = "AllPrincipals" resourceId = $AADServicePrincipalId scope = "User.Read" $oauth2PermissionGrants = @{ clientId = $servicePrincipal.objectId consentType = "AllPrincipals" resourceId = $ConfigObj.ServicePrincipalId scope = "user_impersonation" $ConfigObj #ARM template Write-Host Write-Host '-----ARM template-----' Write-Host '"azureActiveDirectory": {' Write-Host (" `"tenantId`":`"{0}`"," -f $ConfigObj.TenantId) Write-Host (" `"clusterApplication`":`"{0}`"," -f $ConfigObj.WebAppId) Write-Host (" `"clientApplication`":`"{0}`"" -f $ConfigObj.NativeClientAppId) Write-Host "}," 3. Execute the following command from the PS command prompt window: .\SetupApplications.ps1 -TenantId '<your-tenantID>' -ClusterName '<your-cluster-name>.<region>.cloudapp.azure.com' -WebApplicationReplyUrl 'https://<your-cluster-name>.<region>.cloudapp.azure.com:19080/Explorer/index.html' The ClusterName is used to prefix the AAD applications created by the script. It does not need to match the actual cluster name exactly as it is only intended to make it easier for you to map AAD artifacts to the Service Fabric cluster that they're being used with. This can be a bit confusing because you haven't created your cluster yet. But, if you know what name you plan to give your cluster, you can use it here. The WebApplicationReplyUrl is the default endpoint that AAD returns to your users after completing the sign-in process. 
You should set this to the Service Fabric Explorer endpoint for your cluster, which by default is: https://<cluster_domain>:19080/Explorer For a full list of AAD helper scripts, you can find more of these at. Record the information at the bottom of the command prompt window. You will need this information when you deploy your cluster from the portal. The information will look similar to what you see below. "azureActiveDirectory": { "tenantId":"<Your-AAD-tenantID>", "clusterApplication":"1xxxxxxxx-x68e-490a-89c8-2894e4b8686a", "clientApplication":"xxxxxxx-7825-4e1e-a586-f0ff8d9e679e" NOTE: You may receive a Warning that you have a missing assembly. You can ignore this warning. 4. After you run the script in step 6, log in to the classic Azure portal at. For now you need to use the classic Azure portal because the production portal Azure Active Directory features are still in preview. 5. Find your Azure Active Directory in the list and click on it. 6. Add 2 new users to your directory. Name them whatever you want just as long as you know which one is Admin and which one would be the read-only user. Make sure to record the password that is initially generated, because the first time you try to log in to the portal as this user, you will be asked to change the password. 7. Within your AAD, click on the Applications menu. In the Show drop-down box, pick Applications My Company Owns and then click on the check button over to the right to do a search. 8. You should see two applications listed. One will be for Native client applications and the other for Web Applications. Click on the application name for the web application type. Since we will be doing our connectivity test connecting to the Service Fabric Explorer web UI, this is the application we need to set the user permissions on. 9. Click on the Users menu. 10. Click on the user name that should be the administrator and then select the Assign button at the bottom of the portal Window. 11. In the Assign Users dialog box, pick Admin from the dropdown box and select the check button. 12. Repeat 10 and 11 but this time, for the read-only user select Read-only from the Assign Users drop-down. This step completes what you will need to do in the classic portal, so you can close the classic portal window. 13. You now have all the information you need to create your cluster in the portal. Log in to the Azure Portal at. Creating your Service Fabric cluster - Create a new resource group and then within the resource group start the process of adding a new Service Fabric Cluster to the resource group. As you are stepping through creating the cluster, you will find 4 core blades with information you need to provide: - Basic - unique name for the cluster, operating system, username/password for RDP access etc. - Cluster configuration - node type count, node configuration, diagnostics etc. - Security - This is where we want to focus in the next step…. NOTE: If you want more details about creating your Service Fabric cluster via the portal, go to ~ the procedures at this link uses certificates for Admin and Read-only user setup, not AAD. 2. In the cluster Security blade, make sure the Security mode is set to Secure. It is by default. 3. The output from the first PS script you executed will contain the information you need for the Primary certificate information. After you enter your recorded information, make sure to select the Configure advanced settings checkbox. 4. 
By selecting the Configure advanced settings checkbox, the blade will expand (vertically) and you can scroll down in the blade to find the area where you need to enter the Active Directory information. The information you recorded when you executed SetupApplications.ps1 will be used here.

5. Select the Ok button in the Security blade.
6. Complete the Summary blade after the portal validates your settings.

Testing your Admin and Read-only user access

1. Once the cluster has completed the creation process, make sure you log out of the Azure portal. This assures that when you attempt to log in as the Admin or Read-only user, you will not accidentally log in to the portal as the subscription administrator.
2. Log in to the portal as either the Admin or Read-only user. You will need to change the temporary password you were provided earlier, and then the login will complete.
3. Open up a new browser window and log in to https://<yourfullclustername>:19080/Explorer/. Test the Explorer functionality.

Hope this helps you in your work with Azure Service Fabric!
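As an extra verification step not covered in the original post, you could also test the AAD sign-in from PowerShell with the Service Fabric SDK's Connect-ServiceFabricCluster cmdlet; the endpoint and thumbprint below are placeholders:

# Prompts for AAD credentials; sign in as the Admin or Read-only user.
Connect-ServiceFabricCluster -ConnectionEndpoint '<your-cluster-name>.<region>.cloudapp.azure.com:19000' `
    -AzureActiveDirectory `
    -ServerCertThumbprint '<certificate-thumbprint>'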
https://blogs.msdn.microsoft.com/ncdevguy/2017/01/09/securing-an-azure-service-fabric-cluster-with-azure-active-directory-via-the-azure-portal-2/
CC-MAIN-2019-35
en
refinedweb
On Friday, 4 July 2014 at 19:01 +0200, Yann Droneaud wrote:
> On Friday, 4 July 2014 at 18:24 +0200, Yann Droneaud wrote:
> > Hi,
> >
> > I'm a bit puzzled regarding the behavior of make: I don't understand why
> > a make function is executed twice when stored in a variable which is
> > expanded as part of a recipe.
> >
> > See the following Makefile fragment, as info.mk:
> >
> > ifeq ($(V),1)
> > Q :=
> > else
> > Q ?= @
> > endif
> >
> > all:
> > 	$(Q) : $(info 1)
> > 	$(Q)$(RECIPE)
> > 	$(Q) : $(info 3)
> >
> > When make is invoked with
> >
> > $ ./make -f ./info.mk
> > 1
> > 3
> >
> > Now, let's put some shell in the RECIPE variable:
> >
> > $ make -f ./info.mk RECIPE='echo 2'
> > 1
> > 3
> > 2
> >
> > Having 2 printed after 3 is OK here: the $(info ...) in the recipe were
> > expanded before each command was fed to the shell for execution.
> >
> > Now, try to get 2 printed between 1 and 3:
> >
> > $ ./make -f ./info.mk RECIPE=': $(info 2)'
> > 1
> > 2
> > 3
> > 2
> >
> > This way, 2 is printed twice. The second time seems to happen just
> > before passing to the shell the line where the variable is expanded.
> > That can be checked with
> >
> > $ ./make -f ./info.mk V=1 RECIPE=': $(info 2)'
> > 1
> > 2
> > 3
> > :
> > 2
> > :
> > :
> >
> > So the second 2 (!) seems to be reported after recipe line
> >
> > $(Q) : $(info 1)
> >
> > is executed by the shell, but before this recipe line
> >
> > $(Q)$(RECIPE)
> >
> > is executed by the shell.
> >
> > As I haven't done my homework, I don't know if it's an expected,
> > documented behavior. But it's a pity it behaves this way.
> >
> > Is there any workaround to have 2 printed only once?
> >
> > BTW, it seems to happen only with $(info ...), as it would be very bad
> > if other functions having side effects would be called twice. See for
> > example:
>
> I've found a workaround, but such a workaround is so counter-intuitive that it
> demonstrates there's really something wrong about having $(info ...) in
> a variable which is expanded as part of a recipe. See this new makefile
> fragment, info-empty.mk, which uses $(if ...):
>
> ifeq ($(V),1)
> Q :=
> else
> Q ?= @
> endif
>
> empty :=
>
> all:
> 	$(Q) : $(info 1)
> 	$(Q)$(if $(empty),$(RECIPE))
> 	$(Q) : $(info 3)
>
> $(if ...) is said to not expand one of its arguments, so my variable
> RECIPE containing $(info ...) should not be expanded, as $(empty) is
> always false.
>
> Let's try:
>
> $ ./make -f ./info-empty.mk RECIPE=': $(info 2)'
> 1
> 3
> 2
>
> And with V=1:
>
> $ ./make -f ./info-empty.mk V=1 RECIPE=': $(info 2)'
> 1
> 3
> :
> 2
> :
>
> $(info 2) is only evaluated once ... but it should never be evaluated, as
> far as I understand how make proceeds with variable expansion and function
> execution.
>
> What do you think of this ?

If the variable is defined as a simply defined one ('simple' flavor) with the := operator on the make command line, the results are a bit weird:

$ ./make -f ./info.mk V=1 RECIPE:=': $(info 2)'
2
1
3
:
:
:

$ ./make -f ./info-empty.mk V=1 RECIPE:=': $(info 2)'
2
1
3
:
:

But they make sense: as a simply expanded variable, RECIPE should get its value at the time it's defined, so having 2 printed as soon as make starts processing seems correct and logical.

On the other hand, having the variable RECIPE defined in the process environment (so as a simply expanded variable) prior to make invocation, the behavior is different.
$ RECIPE=': $(info 2)' ./make -f ./info.mk V=1 1 2 3 : : : $ RECIPE=': $(info 2)' ./make -f ./info-empty.mk V=1 1 3 : : It's somewhat what I've expected in the first place, but regarding the other results, I'm not convinced to have a correct behavor: RECIPE has simple "flavor", so it shouldn't be recursively expanded when referenced the recipe. I'm a bit lost ... Regards. -- Yann Droneaud OPTEYA
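A minimal sketch of the workaround the RECIPE:=... runs above suggest: give RECIPE the simple flavor inside the makefile itself, so $(info 2) is expanded exactly once, at parse time (recipe lines are tab-indented):

ifeq ($(V),1)
Q :=
else
Q ?= @
endif

# With ':=' the right-hand side is expanded immediately, so "2" is
# printed here, once, while the makefile is being read.
RECIPE := : $(info 2)

all:
	$(Q) : $(info 1)
	$(Q)$(RECIPE)
	$(Q) : $(info 3)

As in the RECIPE:=... command-line runs above, the output is 2, then 1 and 3, and no function ends up being expanded twice.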
https://lists.gnu.org/archive/html/help-make/2014-07/msg00003.html
CC-MAIN-2019-35
en
refinedweb
Writing code is tricky enough. You shouldn’t have to spend hours improving its readability or worrying about unnecessary typos causing build errors. Ranorex 6.0 now makes a ton of new code editor enhancements available, which will help you quickly write clean and easily maintainable test scripts. Here are 7 of the most fantastic time-saving features:

1. Code templates
We all love the custom code templates in Ranorex Studio. Using the tab key, you can now access multiple predefined templates, such as the for/for-each loop. Icing on the cake for all of us coders!

2. Context-specific actions
Improve your code structure with these amazing new context-specific actions. Simply move newly created classes into specific files, or right-click on the edit pencil to check for null or undefined variables. These are just a few examples – give it a try!

3. Refactoring
Wouldn’t it be great if you could replace complex code fragments with small, easily readable methods? The extract method enables you to group your fragments into methods. You can then give them a clear name that explains their purpose (see the sketch at the end of this post).

4. CamelCase search functionality
Find what you’re looking for faster with the CamelCase search functionality! CamelCase identifies the segments of compound words and uses the capital letters to list potential search results.

5. Auto insertion of using
Start saving time when using namespaces! Type in a class using the auto-complete functionality. Ranorex will then automatically add the specific using directive of the needed namespace.

6. Introduction of new methods
And yet another feature that will save you time: when calling an unknown method in code, you can now easily implement it with the context-specific action ‘introduce method…’.

7. Switch on enum
This little feature comes in quite handy and enables you to write code faster. When typing a “switch” statement where the condition is an enum, the cases are automatically prefilled.

These and many more fantastic features are available with Ranorex 6.0. Update your Ranorex Studio now (yes, it’s free!) and start coding!
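To make the extract-method idea concrete, here is a hypothetical before/after sketch in C#; the class and method names are invented purely for illustration:

using System;

// Before: validation logic buried inline in a longer method.
class RegistrationBefore
{
    public void RegisterUser(string email)
    {
        if (string.IsNullOrWhiteSpace(email) || !email.Contains("@"))
        {
            throw new ArgumentException("Invalid email", nameof(email));
        }
        // ... rest of the registration flow ...
    }
}

// After "extract method": the fragment becomes a small, clearly named method.
class RegistrationAfter
{
    public void RegisterUser(string email)
    {
        ValidateEmail(email);
        // ... rest of the registration flow ...
    }

    private static void ValidateEmail(string email)
    {
        if (string.IsNullOrWhiteSpace(email) || !email.Contains("@"))
        {
            throw new ArgumentException("Invalid email", nameof(email));
        }
    }
}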
https://www.ranorex.com/blog/code-editor-features/
CC-MAIN-2019-35
en
refinedweb
The Java language offers several loops. Loops are basically used to execute a set of statements repeatedly until a particular condition is satisfied. Here, I will tell you about the ‘while’ loop in Java. The topics included in this article are mentioned below:
- What is a while loop in Java?
- What is an infinite while loop?
Let’s begin!

What is a while loop in Java?
The Java while loop is used to iterate a part of the program again and again. If the number of iterations is not fixed, then you can use a while loop. A pictorial representation of how a while loop works:
In the above diagram, when the execution begins and the condition returns false, the control jumps out to the next statement after the while loop. On the other hand, if the condition returns true, the statement inside the while loop is executed.

Moving on with this article on While Loop in Java, let’s have a look at the syntax:

Syntax:
while (condition) {
    // code block to be executed
}

Now that I have shown you the syntax, here is an example:

Practical Implementation:
class Example {
    public static void main(String args[]) {
        int i = 10;
        while (i > 1) {
            System.out.println(i);
            i--;
        }
    }
}

Output:
10
9
8
7
6
5
4
3
2

Next, let’s take a look at another example:

Another example of While Loop in Java:
// Java While Loop example
package Loops;

import java.util.Scanner;

public class WhileLoop {
    private static Scanner sc;

    public static void main(String[] args) {
        int number, sum = 0;
        sc = new Scanner(System.in);
        System.out.println("\n Please Enter any integer Value below 10: ");
        number = sc.nextInt();
        while (number <= 10) {
            sum = sum + number;
            number++;
        }
        System.out.format(" Sum of the Numbers From the While Loop is: %d ", sum);
    }
}

Output:
Please Enter any integer Value below 10:
7
Sum of the Numbers From the While Loop is: 34

The example illustrated above is a bit more complex than the previous one, so let me explain it step by step. In this Java while loop example, the program asks the user to enter any integer value below 10. Next, the while loop and the condition inside it ensure that the given number is less than or equal to 10.

Now, the user-entered value = 7 and I have initialized sum = 0. This is how the iteration works (concentrate on the while loop written in the code):

First Iteration:
sum = sum + number
sum = 0 + 7 ==> 7
Now, the number will be incremented by 1 (number++).

Second Iteration:
After the first iteration, the values of both number and sum have changed: number = 8 and sum = 7.
sum = sum + number
sum = 7 + 8 ==> 15
Again, the number will be incremented by 1 (number++).

Third Iteration:
After the second iteration, the values of both number and sum have changed: number = 9 and sum = 15.
sum = sum + number
sum = 15 + 9 ==> 24
Following the same pattern, the number will be incremented by 1 (number++) again.

Fourth Iteration:
After the third iteration of the Java while loop, the values of both number and sum have changed: number = 10 and sum = 24.
sum = sum + number
sum = 24 + 10 ==> 34
Finally, the number will be incremented by 1 (number++) for the last time. Here, number = 11, so the condition in the while loop fails. In the end, the System.out.format statement prints the output as you can see above!

Moving further, one thing that you need to keep in mind is that you should use an increment or decrement statement inside the while loop so that the loop variable changes on each iteration and, at some point, the condition returns false. This way you can end the execution of the while loop. Otherwise, the loop would execute indefinitely. In such cases, where the loop executes indefinitely, you’ll encounter the concept of the infinite while loop in Java, which is our next topic of discussion!

Infinite while loop in Java
The moment you pass ‘true’ into the while loop, the infinite while loop is initiated.

Syntax:
while (true) {
    statement(s);
}

Practical Demonstration
Let me show you an example of an infinite while loop in Java:

class Example {
    public static void main(String args[]) {
        int i = 10;
        while (i > 1) {
            System.out.println(i);
            i++;
        }
    }
}

It’s an infinite while loop, hence it won’t end. This is because the condition in the code says i > 1, which will always be true as we are incrementing the value of i inside the while loop.

With this, I have reached the end of this blog. I really hope the above-shared content added value to your Java knowledge. Let us keep exploring the Java world together. Stay tuned! Got a question for us? Please mention it in the comments section of this “While loop in Java” blog and we will get back to you as soon as possible.
https://www.edureka.co/blog/java-while-loop/
CC-MAIN-2019-35
en
refinedweb
// This is a work-in-process procedural map generator for an idea of mine.
// For now, I am generating the co-ordinates of the map and then displaying them.
// With the 'room' structure in place I should be able to create larger and more detailed packages for the rooms which form the 'building blocks' of the map.

#include <iostream>
#include <conio.h>
#include <fstream>
#include <windows.h>
#include <stdlib.h>

#define PAUSE _getch() // This line of code will ask the user for input, and upon getting a character (which is then technically entered into no existing variable) continues running. Useful for debugging.

using namespace std;

struct room
{
    double id;
    int x;
    int y;
    char displaysymbol;
};

int mapxsize;
int mapysize;

room * map; // We designate a dynamic array of type 'room'.

room roomgenerator (int roomx, int roomy);
void mapdisplay (room *map);
int randomnumber (int range_min, int range_max);

int main()
{
    cout << "NP map generator\n\n";
    cout << "Please enter the intended x- and y-sizes for the map.\nThe maximum size is 20x20 tiles.\n\n";

    do // The program checks whether the suggested x-size is within the allowed limits.
    {
        cout << "Size (x): ";
        cin >> mapxsize;
        if (mapxsize > 20 || mapxsize < 1)
        {
            cout << "Please type a valid value between 0 and 21!\n";
        }
    } while (mapxsize > 20 || mapxsize < 1);

    do // The program checks whether the suggested y-size is within the allowed limits.
    {
        cout << "Size (y): ";
        cin >> mapysize;
        if (mapysize > 20 || mapysize < 1)
        {
            cout << "Please type a valid value between 0 and 21!\n";
        }
    } while (mapysize > 20 || mapysize < 1);

    map = new (nothrow) room [mapxsize*100+mapysize]; // I create a dynamic memory unit here. Note that it is not completely efficient with space. This is because of the way I allocate the rooms to their own numbers.
    if (map == 0)
    {
        cout << "Errorcode MG01: Failure assigning memory.";
        PAUSE; // Pause before exiting, so the error message can actually be read.
        exit (EXIT_FAILURE);
    }

    cout << "\n";

    int roomx;
    int roomy;
    for (roomx=1; roomx<=mapxsize; roomx++) // I run the roomgenerator once for every room to be generated.
    {
        for (roomy=1; roomy<=mapysize; roomy++)
        {
            roomgenerator (roomx,roomy);
        }
    }

    cout << "\n\nMapdisplay will now run.";
    PAUSE;
    system("cls"); // Sloppy code to 'clear' the console. I will later replace it, but for practice and debugging purposes, it is alright.

    mapdisplay (map);
    PAUSE;
    delete[] map; // Arrays allocated with new[] must be released with delete[].
    return 0;
}

room roomgenerator (int roomx, int roomy)
{
    map [(roomx-1)*100+(roomy-1)].id = roomx*100+roomy; // In the 'map' array, we generate a room with a number that is 100*x + y. For the map with coordinates x = 10 and y = 15, for instance, we would get number '1015'. This is also its slot in the array.
    map [(roomx-1)*100+(roomy-1)].x = roomx;
    map [(roomx-1)*100+(roomy-1)].y = roomy;
    map [(roomx-1)*100+(roomy-1)].displaysymbol = '+';
    cout << "Room number " << map [(roomx-1)*100+(roomy-1)].id << "\n";
    cout << "x = " << map [(roomx-1)*100+(roomy-1)].x << " y = " << map [(roomx-1)*100+(roomy-1)].y << "\n\n";
    return map [(roomx-1)*100+(roomy-1)];
}

void mapdisplay (room *map) // This code will be expanded along with the variables contained in the 'room' structure and then ported into a different part of the final program.
/* The mapdisplay displays the character assigned to every room. For now, that's just plusses. */
{
    cout << "This is the grid of coordinates:\n\n\n\n";
    for (int roomx = 1; roomx<=mapxsize; roomx++)
    {
        cout << "\n";
        for (int roomy = 1; roomy<=mapysize; roomy++)
        {
            cout << map [(roomx-1)*100+(roomy-1)].displaysymbol;
        }
    }
}

int randomnumber (int range_min, int range_max)
{
    int result = 0;
    cout << "Hello! I am a function that is not called yet because I do absolutely nothing. I serve as a placeholder until my beginning programmer learns how to actually generate a random number! Then I will be used for all sorts of things!";
    cout << "\nI am so excited! What if I will be used to generate random types of rooms and contents and all other kinds of information?! And what if the number I return can be modified for all sorts of different functions?! Wow!";
    return result;
}
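Since the placeholder invites it, here is one possible way the function could later be filled in — a sketch using the classic <cstdlib> rand() approach, which assumes srand() is called once in main() and that range_min <= range_max:

#include <cstdlib>
#include <ctime>

int randomnumber (int range_min, int range_max)
{
    // rand() % span yields a value in [0, span); shifting by range_min
    // moves it into [range_min, range_max]. (A slight modulo bias exists,
    // but it is harmless for a map generator like this.)
    int span = range_max - range_min + 1;
    return range_min + rand() % span;
}

// In main(), before the first call: srand((unsigned)time(0));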
http://www.cplusplus.com/forum/beginner/105759/
CC-MAIN-2017-04
en
refinedweb
alabaster 0.5.0

A configurable sidebar-enabled Sphinx theme

This theme is a modified "Kr" Sphinx theme from @kennethreitz (especially as used in his [Requests]() project), which was itself originally based on @mitsuhiko's theme used for [Flask]() & related projects. A live example of what this theme looks like can be seen on e.g. [paramiko.org]().

Features (compared to Kenneth's original theme):

* Easy ability to install/use as a Python package (tip o' the hat to [Dave & Eric's sphinx_rtd_theme]() for showing the way);
* Style tweaks, such as better code-block alignment, Gittip and Github button placement, page source link moved to footer, etc;
* Additional customization hooks, such as header/link/etc colors;
* Improved documentation for all customizations (pre-existing & new).

To use:

1. `pip install alabaster` (or equivalent command)
1. Enable the 'alabaster' theme + mini-extension in your `conf.py`:

    ```python
    import alabaster

    html_theme_path = [alabaster.get_path()]
    extensions = ['alabaster']
    html_theme = 'alabaster'
    html_sidebars = {
        '**': [
            'about.html',
            'navigation.html',
            'searchbox.html',
            'donate.html',
        ]
    }
    ```

    * Modify the call to `abspath` if your `_themes` folder doesn't live right next to your `conf.py`.
    * Feel free to adjust `html_sidebars` as desired - the theme is designed assuming you'll have `about.html` activated, but otherwise it doesn't care much.
    * See [the Sphinx docs]() for details on how this setting behaves.
    * Alabaster provides `about.html` (logo, github button + blurb), `donate.html` (Gittip blurb/button) and `navigation.html` (a more flexible version of the builtin `localtoc`/`globaltoc` templates); the others listed come from Sphinx itself.
1. If you're using either of the image-related options outlined below (logo or touch-icon), you'll also want to tell Sphinx where to get your images from. If so, add a line like this (changing the path if necessary; see [the Sphinx docs]()):

    ```python
    html_static_path = ['_static']
    ```

1. Add one more section to `conf.py` setting one or more theme options, like in this example (*note*: snippet doesn't include all possible options, see following list!):

    ```python
    html_theme_options = {
        'logo': 'logo.png',
        'github_user': 'bitprophet',
        'github_repo': 'alabaster',
    }
    ```

The available theme options (which are all optional) are as follows:

**Variables and feature toggles**

* `logo`: Relative path (from `$PROJECT/_static/`) to a logo image, which will appear in the upper left corner above the name of the project.
    * If `logo` is not set, your `project` name setting (from the top level Sphinx config) will be used in a text header instead. This preserves a link back to your homepage from inner doc pages.
* `logo_name`: Set to `true` to insert your site's `project` name under the logo image as text. Useful if your logo doesn't include the project name itself. Defaults to `false`.
* `logo_text_align`: Which CSS `text-align` value to use for logo text (if there is any.)
* `description`: Text blurb about your project, to appear under the logo.
* `github_user`, `github_repo`: Used by `github_button` and `github_banner` (see below); does nothing if both of those are set to `false`.
* `github_button`: `true` or `false` (default: `true`) - whether to link to your Github.
    * If `true`, requires that you set `github_user` and `github_repo`.
    * See also these other related options, which behave as described in [Github Buttons' README]():
        * `github_button_type`: Defaults to `watch`.
        * `github_button_count`: Defaults to `true`.
* `github_banner`: `true` or `false` (default: `false`) - whether to apply a 'Fork me on Github' banner in the top right corner of the page.
    * If `true`, requires that you set `github_user` and `github_repo`.
* `travis_button`: `true`, `false` or a Github-style `"account/repo"` string - used to display a Travis-CI build status button in the sidebar. If `true`, uses your `github_(user|repo)` settings; defaults to `false`.
* `gittip_user`: Set to your [Gittip]() username if you want a Gittip 'Donate' section in your sidebar.
* `analytics_id`: Set to your [Google Analytics]() ID (e.g. `UA-#######-##`) to enable tracking.
* `touch_icon`: Path to an image (as with `logo`, relative to `$PROJECT/_static/`) to be used for an iOS application icon, for when pages are saved to an iOS device's home screen via Safari.
* `extra_nav_links`: Dictionary mapping link names to link targets; these will be added in a UL below the main sidebar navigation (provided you've enabled `navigation.html`.) Useful for static links outside your Sphinx doctree.
* `sidebar_includehidden`: Boolean determining whether the TOC sidebar should include hidden Sphinx toctree elements. Defaults to `true` so you can use `:hidden:` in your index page's root toctree & avoid having 2x copies of your navigation on your landing page.

**Style colors**

These should be fully qualified CSS color specifiers such as `#004B6B` or `#444`. The first few items in the list are "global" colors used as defaults for many of the others; update these to make sweeping changes to the colorscheme. The more granular settings can be used to override as needed.

* `gray_1`: Dark gray.
* `gray_2`: Light gray.
* `gray_3`: Medium gray.
* `body_text`: Main content text.
* `footer_text`: Footer text (includes links.)
* `link`: Non-hovered body links.
* `link_hover`: Body links, hovered.
* `sidebar_header`: Sidebar headers. Defaults to `gray_1`.
* `sidebar_text`: Sidebar paragraph text.
* `sidebar_link`: Sidebar links (there is no hover variant.) Applies to both header & text links. Defaults to `gray_1`.
* `sidebar_link_underscore`: Sidebar links' underline (technically a bottom-border.)
* `sidebar_search_button`: Background color of the search field's 'Go' button.
* `sidebar_list`: Foreground color of sidebar list bullets & unlinked text.
* `sidebar_hr`: Color of sidebar horizontal rule dividers. Defaults to `gray_3`.
* `anchor`: Foreground color of section anchor links (the 'paragraph' symbol that shows up when you mouseover page section headers.)
* `anchor_hover_fg`: Foreground color of section anchor links (as above) when moused over. Defaults to `gray_1`.
* `anchor_hover_bg`: Background color of above.
* `note_bg`: Background of `.. note::` blocks. Defaults to `gray_2`.
* `note_border`: Border of same.
* `footnote_bg`: Background of footnote blocks.
* `footnote_border`: Border of same. Defaults to `gray_2`.
* `pre_bg`: Background of preformatted text blocks (including code snippets.) Defaults to `gray_2`.
* `narrow_sidebar_bg`: Background of 'sidebar' when narrow window forces it to the bottom of the page.
* `narrow_sidebar_fg`: Text color of same.
* `narrow_sidebar_link`: Link color of same. Defaults to `gray_3`.

## Additional info / background

* [Fabric #419]() contains a lot of general exposition & thoughts as I developed Alabaster, specifically with a mind towards using it on two nearly identical 'sister' sites (single-version www 'info' site & versioned API docs site).
* Alabaster includes/requires a tiny Sphinx extension on top of the theme itself; this is just so we can inject dynamic metadata (like Alabaster's own version number) into template contexts. It doesn't add any additional directives or the like, at least not yet.

- Author: Jeff Forcier
- Categories
- Package Index Owner: bitprophet
- DOAP record: alabaster-0.5.0.xml
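As a worked example of the color options listed above, a fuller `conf.py` theme section might look like the following; the hex values here are arbitrary illustrations, not the theme's defaults:

```python
import alabaster

html_theme_path = [alabaster.get_path()]
extensions = ['alabaster']
html_theme = 'alabaster'

# Set the three "global" grays once, then fine-tune a couple of the
# granular settings on top of them.
html_theme_options = {
    'gray_1': '#222222',   # dark gray: sidebar headers/links, anchors
    'gray_2': '#eeeeee',   # light gray: note/footnote/pre backgrounds
    'gray_3': '#aaaaaa',   # medium gray: rules, narrow-sidebar links
    'link': '#004B6B',
    'link_hover': '#6D4100',
    'pre_bg': '#f6f6f6',   # override the gray_2 default for code blocks
}
```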
https://pypi.python.org/pypi/alabaster/0.5.0
CC-MAIN-2017-04
en
refinedweb
#include <math.h>
double a, b;
long c;

/* Floating-point operands: ceil() rounds the quotient up. */
c = (long) ceil( a / b );

#include <math.h>
long a, b, c;

/* Integer operands: cast to double first so the division keeps its
   fractional part, then round up. */
c = (long) ceil( (double)a/(double)b );

long a, b, c;

/* Pure integer ceiling with no floating point at all; this form assumes
   a and b are positive. */
c = (a - 1) / b + 1;
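These fragments all compute the ceiling of a quotient; a small self-contained check (with sample values chosen arbitrarily) shows the last two agree:

#include <math.h>
#include <stdio.h>

int main(void)
{
    long a = 7, b = 2, c;

    /* Floating-point version: cast the operands before dividing. */
    c = (long) ceil((double)a / (double)b);
    printf("ceil via math.h : %ld\n", c);   /* prints 4 */

    /* Pure integer version; valid for positive a and b. */
    c = (a - 1) / b + 1;
    printf("integer ceiling : %ld\n", c);   /* prints 4 */

    return 0;
}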
http://www.devx.com/tips/Tip/13968
CC-MAIN-2017-04
en
refinedweb
This action might not be possible to undo. Are you sure you want to continue? Luck in Entrepreneurship and Venture Capital: Evidence from Serial Entrepreneurs Paul Gompers, Anna Kovner, Josh Lerner, and David Scharfstein* July 2006 Abstract. * Harvard University. Gompers, Lerner, and Scharfstein are also affiliates of the National Bureau of Economic Research. Helpful comments were provided by Yael Hochberg, Steve Kaplan, Roni Michaely, and Avri Ravid. We thank for Tim Dore for exceptional research assistance on this project. Harvard Business School’s Division of Research and the National Science Foundation provided financial assistance. All errors and omissions are our own. 1. Introduction What makes entrepreneurs successful? Is it skill or luck? Knight (1921. p. ???) argues that an important component of entrepreneurship is the willingness of the “venturesome to ‘assume’ or ‘insure’ the doubtful and timid by guaranteeing to the latter a specified income in return for an assignment of the actual results.” In this view, luck is a big determinant of entrepreneurial success. According to Kihlstrom and Laffont (1979), luck is the only determinant of entrepreneurial success: in their model entrepreneurs are simply less risk averse individuals who are willing to guarantee workers’ wages and bear residual risk. Schumpeter (1934, p. 137) argues just the opposite, claiming that “the entrepreneur is never the risk bearer,” but rather an innovator, one who discovers new production processes, finds new markets, creates new types of organizations, or introduces new products. Entrepreneurial success, in this view, flows from innovative skill. Only suppliers of capital bear risk. In this paper, we empirically reject the Kihlstrom and Laffont hypothesis that entrepreneurs are just efficient risk bearers in favor of the view, emphasized by Schumpeter, that skill is an important component of entrepreneurship. At the same time, we present evidence that suppliers of capital are not just efficient risk bearers in the entrepreneurial process, as Schumpeter suggests, but rather bring their own set of capabilities to identifying skilled entrepreneurs and helping them build their businesses. Our approach to identifying skill in entrepreneurship is to examine the performance of venture-capital backed serial entrepreneurs. We try to answer the following simple question: Are successful entrepreneurs more likely to succeed in their next ventures than first-time entrepreneurs and entrepreneurs who previously failed? Our answer is yes. Our empirical model indicates that entrepreneurs who succeeded in a prior venture (i.e., started a company that went public) have a 30% chance of succeeding in their next venture. By contrast, first-time entrepreneurs have only an 18% chance of succeeding and entrepreneurs who previously failed have a 20% chance of succeeding. This performance persistence suggests that a component of success in entrepreneurship is attributable to skill. While it may be better to be lucky than smart, the evidence presented here indicates that being smart has value too. We also find evidence in support of the entrepreneurial skill hypothesis by examining the behavior and performance of the venture capital firms. As has been shown by Sorensen (2004), Kaplan and Schoar (2005), Gompers, Kovner, Lerner and Scharfstein (2006), and Hochberg, Ljungqvist, and Lu (2006), companies that are funded by more experienced (top-tier) venture capital firms are more likely to succeed. 
This could be because top-tier venture capital firms are better able to identify high quality companies and entrepreneurs. Alternatively, this performance differential could be because top-tier venture capital firms add more value – e.g., by helping new ventures make customer contacts, fill key management positions, or set business strategy. However, we find that there is only a performance differential when venture capital firms invest in companies started by first-time entrepreneurs or those who previously failed. If a company is started by an entrepreneur with a track record of success, then the company is no more likely to succeed if it is funded by a top-tier venture capital firm or one in the lower tier. Thus, it seems, prior success is a public signal of quality. It also implies that previously successful entrepreneurs derive no benefits from the value-added services of 2 more experienced venture capital firms; successful entrepreneurs apparently know what they’re doing. Another piece of evidence in support of the entrepreneurial skill hypothesis is that when previously successful entrepreneurs raise funding for their next venture they are able to do so at an earlier age and at an earlier stage in the company’s development. Presumably, this is the case because venture capital firms perceive a successful track record as evidence of skill, not just luck. Taken together, these findings also support the view that suppliers of capital are not just efficient risk-bearers, but rather help to put capital in the right hands and ensure that it is used effectively. The evidence for this goes beyond the finding–documented here and by others–that more experienced venture capital firms have higher success rates on their investments. More experienced venture capital firms only have higher success rates when they invest in unproven entrepreneurs, a fact which highlights the role suppliers of venture capital play in identifying skilled entrepreneurs and helping them to succeed. Finally, we study the value consequences of serial entrepreneurship. We start by examining the pre-money valuations of new ventures. More experienced venture capital firms invest at higher valuations, which is consistent with our finding that they also invest in firms with higher success rates. However, we do not find that serial entrepreneurs (whether successful or not) are able to benefit from their higher success rates by selling equity at higher prices. Given this fact, it should come as no surprise that the average investment multiple (exit valuation divided by pre-money valuation) is higher for companies of previously successful serial entrepreneurs. We also find that 3 fund returns are higher for venture capital firms that tend to invest a larger share of their portfolio in serial entrepreneurs. Our findings are consistent with there being an imperfectly competitive venture capital market in which prices do not get bid up to the point where excess returns from investing in serial entrepreneurs are eliminated. Our findings are related to a number of other studies in the entrepreneurship literature. Several study the effect of experience on performance. Consistent with our findings, Eesley and Roberts (2006a) use data from a survey of alumni from the Massachusetts Institute of Technology to show that entrepreneurial experience increases the likelihood of success (as measured by firm revenues). 
Our finding that serial entrepreneurs are more likely to succeed is also consistent with the observations of Kaplan and Stromberg (2003), who study the contractual terms of venture capital financings. They find that serial entrepreneurs receive more favorable control provisions than first time entrepreneurs, including more favorable board control, vesting, liquidation rights, and more up-front capital. Presumably this is because their higher success rates makes it less important for venture capitalists to protect themselves with tighter control provisions. Chatterji (2005) shows that industry experience also increases the likelihood of success. In the medical device industry, startups founded by former employees of other medical device companies perform better than other startups. The value of industry experience is also emphasized by Bhide (2000), who shows that a substantial fraction of the Inc. 500 got their ideas for their new company while working for their prior employer. See also Carroll and Mosakowski (1987), Honig and Davidson (2000) and Reuber, Dyke and Fischer (1990). 4 Finally, a number of papers have examined the characteristics of serial entrepreneurs. Eesley and Roberts (2006b) find that entrepreneurs are more likely to start another venture if they started their first venture when they were younger, were not married, and funding their first company with venture capital. Bengtsson (2005) finds that failed serial entrepreneurs are more likely than successful serial entrepreneurs to get funding from the same venture capital firm that financed their first ventures. He argues that these initial venture capitalists are better able to judge whether the venture failed because of bad luck or the limitations of the entrepreneur. The rest of the paper is organized as follows. Section 2 describes the construction of the data set and summarizes the data. Our main findings are presented in Section 3. We conclude in Section 4. 2. Data The core data for the analysis come from Dow Jones’ Venture Source (previously called Venture One), described in more detail in Gompers, Lerner, and Scharfstein (2005). Venture Source, established in 1987, collects data on firms that have obtained venture capital financing. Firms that have received early-stage financing exclusively from individual investors, federally chartered Small Business Investment Companies, and corporate development groups are not included in the database. The companies are initially identified from a wide variety of sources, including trade publications, company Web pages, and telephone contacts with venture investors. Venture Source then collects information about the businesses through interviews with venture capitalists and entrepreneurs. The data collected include the identity of the key founders (the crucial information used here) as well 5 as the industry, strategy, employment, financial history, and revenues of the company. Data on the firms are updated and validated through monthly contacts with investors and companies. When considering and controlling for the role of the venture capital investor, we consider only observations in which the venture capital investor serves on the board of the company. We do not consider the influence of other venture investors who do not serve on the board of directors. 
Our analysis focuses on data covering investments from 1975 to 2000, dropping information prior to 1975 due to data quality concerns.1 In keeping with industry estimates of a maturation period of three to five years for venture capital financed companies, we drop companies receiving their first venture capital investment after 2000 so that the outcome data can be meaningfully interpreted. Results were qualitatively similar when we ran the analyses looking only at data through 1998 in order to be conservative about exit periods. For the purposes of this analysis, we examine the founders (henceforth referred to as “entrepreneurs”) that joined firms listed in the Venture Source database during the period from 1986 to mid-2000. Typically, the database reports the previous affiliation and title (at the previous employer) of these entrepreneurs, as well as the date they joined the firm. In some cases, however, Venture Source did not collect this information. In these cases, we attempt to find this information by examining contemporaneous news stories in LEXISNEXIS, securities filings, and web sites of surviving firms. We believe this data collection procedure may introduce a bias in favor of having more information on successful firms, but it is not apparent to us that it affects our analysis. Comment [JL1]: I assume this is what you mean—otherwise, this paragraph is inconsistent with what follows… Gompers and Lerner (2004) discuss the coverage and selection issues in Venture Economics and Venture Source data prior to 1975. 1 6. Table 1 reports the number and fraction of serial entrepreneurs in our sample in each year. Several patterns are worth highlighting. First, the number of entrepreneurs in the sample increased slowly from 1984 through 1994. Afterwards, as the Internet and technology boom took off in the mid-1990s, the number of entrepreneurs grew very rapidly. Second, with the general growth of the industry through this period, serial entrepreneurs accounted for an increasing fraction of the sample, growing from about 7% in 1986 to a peak of 13-14% in 1994. There was some decrease in the fraction of serial entrepreneurs after 1994, probably because of the influx of first-time entrepreneurs as part of the Internet boom. The absolute number of serial entrepreneurs actually peaked in 1999. Table 2 documents the distribution of serial entrepreneurs across industries based on the nine industry groupings used in Gompers, Kovner, Lerner, and Scharfstein (2006). The data show a clear concentration of entrepreneurs in the three sectors that are most closely associated with the venture capital industry: Internet and computers; communications and electronics; and biotech and healthcare. These are also the three industries with the highest representation of serial entrepreneurs. The other industries, such as financial services and consumer, are smaller and have a lower percentage of serial entrepreneurs. 7 Table 3 lists the 50 most active venture capital firms in our sample and ranks them according to both the number of serial entrepreneurs they have funded and the fraction of serial entrepreneurs in their portfolios. Given that many successful venture capital firms have an explicit strategy of funding serial entrepreneurs, it is not surprising that these firms have higher rates of serial entrepreneurship than the sample average. This tabulation suggests that the biggest and most experienced venture capital firms are more successful in recruiting serial entrepreneurs. 
Nevertheless, there does appear to be quite a bit of heterogeneity among these firms in their funding of serial entrepreneurs. Some of the variation may stem from the industry composition of their portfolios, the length of time that the groups have been active investors, and the importance they place on funding serial entrepreneurs. In any case, the reliance on serial entrepreneurs of the largest, most experienced, and most successful venture capital firms indicates that we will need to control for venture capital firm characteristics in trying to identify an independent effect of serial entrepreneurship. Table 4 provides summary statistics for the data we use in our regression analysis. We present data for (1) all entrepreneurs in their first ventures; (2) entrepreneurs who have started only one venture; (3) serial entrepreneurs in their first venture; and (4) serial entrepreneurs in their later ventures. The first variable we look at is the success rate within these subgroups of entrepreneurs. We define “success” as going public or filing to go public by December 2003. The findings are similar if we define success to also include firms that were acquired or merged. The overall success rate on first time ventures is 25.3%. Not surprisingly, serial entrepreneurs have an above-average success rate of 36.9% on their first ventures: venture 8 capitalists are more likely to be more enthusiastic about financing a successful entrepreneur than one who has previously failed. It is more interesting that in their subsequent ventures they have a significantly higher success rate (29.0%) than do first time entrepreneurs (25.3%). Serial entrepreneurs have higher success rates, even though on average they receive venture capital funding at an earlier stage in their company's development. While 45% of first-time ventures receive first-round funding at an early stage (meaning they are classified as “startup,” “developing product,” or “beta testing,” and not yet “profitable” or “shipping product”), close to 60% of entrepreneurs receive first-round funding at an early stage when it is their second or later venture. The later ventures of serial entrepreneurs also receive first-round funding when they are younger–21 months as compared to 37 months for first time entrepreneurs. This earlier funding stage is also reflected in lower first-round premoney valuations for serial entrepreneurs–$12.3 million as compared to $16.0 million for first-time entrepreneurs. Controlling for year, serial entrepreneurs appear to be funded by more experienced venture capital firms, both in their first and subsequent ventures.2 The last row of Table 4 reports the ratio of the number of prior investments made by the venture capital firm to the average number of prior investments made by other venture capital firms in the year of the investment. This ratio is consistently greater than one because more experienced (and likely larger) venture capital firms do more deals. The table indicates that venture capital firms that invest in serial entrepreneurs, whether in their first or subsequent ventures, have nearly three times the average experience of the average firm investing in the same year. This is 2 Throughout the paper, we use venture capital experience as a proxy for ability. Recently, other measures of ability have been utilized including centrality of the venture capitalists in the overall venture capital network [Hochberg, Ljungqvist, and Lu (2006), Sorenson and Stuart (2001).] 
9 about 14% greater than the year-adjusted experience of venture capital firms that invest in one-time-only entrepreneurs.3 Given the evidence that more experienced venture capital firms have higher success rates (e.g., Gompers, Kovner, Lerner and Scharfstein, 2006) it will be important for us to control for venture capital experience in our regression, as well as other factors such as company location, which has also been linked to outcomes. 3. Findings A. Success In this section we take a regression approach to exploring the impact of serial entrepreneurs on the success of the companies they start. In the first set of regressions, the unit of analysis is the entrepreneur at the time that the database first records the firm’s venture capital funding. Our basic approach is to estimate logistic regressions where the outcome is whether the firm “succeeds,” i.e. goes public or registers to go public by December 2003. Our results are qualitatively similar if we also include an acquisition as a successful outcome. A main variable of interest in the initial regressions is a dummy variable, LATER VENTURE, which takes the value one if the entrepreneur had previously been a founder of a venture capital backed company. We are also interested in whether the entrepreneur had succeeded in his prior venture, and thus construct a dummy variable, PRIOR SUCCESS, to take account of this possibility. There are a number of controls that must be included in the regression as well. As noted above, we control for venture capitalist’s experience. The simplest measure of Note that venture capital firms that invest in the first ventures of serial entrepreneurs have done fewer deals on an absolute basis. This is because these first deals are early in the sample period. 3 10 experience would be the number of prior companies in which the venture capital firm invested. We take a log transformation of this number to reflect the idea that an additional investment made by a firm that has done relatively few deals is more meaningful than an additional investment by a firm that has done many. However, because of the growth and maturation of the venture capital industry, there would be a time trend in this measure of experience. This is not necessarily a problem; investors in the latter part of the sample do have more experience. Nevertheless, we use a more conservative measure of experience, which adjusts for the average level of experience of other venture capital firms in the relevant year. Thus, our measure of experience for a venture capital investor is the log of one plus the number of prior companies in which the venture capital firm has invested minus the log of one plus the average number of prior investments undertaken by venture capital firms in the year of the investment. Because there are often multiple venture capital firms investing in a firm in the first round, we take experience of the most experienced investor who serves on the board of directors of the firms after the first venture financing round of the company, which we label VC EXPERIENCE.4 The regressions also include dummy variables for the round of the investment. Although we include each company only once (when the company shows up in the database for the first time), about 26% of the observations begin with rounds later than the first round. (In these instances, the firm raised an initial financing round from another investor, such as an individual angel.) 
All of the results are robust to including only 4 We have replicated the analysis using the average experience of investors from the earliest round and employing an entrepreneur-company-VC firm level analysis where each investor from the earliest round was a separate observation. In both cases, the results were qualitatively similar. We do not use the experience of venture capitalists who do not join the firm’s board, since it is standard practice for venture investors with significant equity stakes or involvement with the firm to join the board. Comment [JL2]: This is correct, right? 11 companies where the first observation in the database is the first investment round. We also include dummy variables for the company’s stage of development and logarithm of company age in months. Because success has been tied to location, we include a dummy variable for whether the firm was headquartered in California and one for whether it was headquartered in Massachusetts. We also include year and industry fixed effects. Finally, because there is often more than one entrepreneur per company, there will be multiple observations per company. Thus, robust standard errors of the coefficient estimates are calculated after clustering by company. In later regressions, the unit of analysis will be the company. The first column of Table 5 reports one of the central findings of the paper. The coefficient of LATER VENTURE is positive and statistically significant. At the means of the other variables, entrepreneurs in their second or later ventures have a predicted success rate of 25.0%, while first-time entrepreneurs have a predicted success rate of 20.8%. There are a number of hypotheses as to why the success rate of entrepreneurs in their second or later ventures is higher than the success rate of first-time entrepreneurs. One hypothesis is that there is learning-by-doing in entrepreneurship. The experience of starting a new venture–successful or not–confers on entrepreneurs some benefits (skills, contacts, ideas) that are useful in subsequent ventures. (Such a hypothesis is consistent with Lazear’s (2005) finding that Stanford MBAs who ultimately become entrepreneurs follow more varied career paths than their classmates.) In this view, entrepreneurs can learn to succeed through the experience of having started a company regardless of what its ultimate performance is. Alternatively, the higher average success rate of 12 entrepreneurs in subsequent ventures could reflect a deeper pool of talented and hardworking entrepreneurs. We use the outcome of serial entrepreneurs' prior ventures to distinguish between these hypotheses. To determine whether a pure learning-by-doing effect exists, in the second column of Table 5 we add the dummy variable, PRIOR SUCCESS, which equals 1 if the prior venture of the serial entrepreneur was successful. The estimated coefficient of this variable is positive and statistically significant. Including it also lowers the coefficient of the LATER VENTURE dummy so that it is no longer statistically significant.. The unit of analysis for the first two columns of Table 5 is the entrepreneurcompany level. We also repeat the analysis using only one observation per company, accounting for any potential concerns about the independence of observations. The third column of Table 5 reports the results of a regression in which the unit of analysis is the company, not the entrepreneur-company. 
The key variables are 1) a dummy for whether any of the founders is in their second or later ventures and 2) a dummy for whether any of the founders was successful in a prior venture. Here too a track record of prior success has a bigger effect on future success than does prior experience per se. Companies with a previously successful entrepreneur have a predicted success rate of 26.7%, whereas those with entrepreneurs who failed in prior ventures have an 17.9% success rate, and 13 companies with first-time entrepreneurs have a 14.1% chance of success. The effect of prior success on predicting future success is very large. The regressions also indicate that venture capital firm experience is positively related to success. Using estimates from the third column of Table 5, at the 75th percentile of VC EXPERIENCE and at the means of all the other variables, the predicted success rate is 19.0%, while at the 25th percentile, the predicted success rate is only 13.3%. There are a number of reasons why more experienced venture capital firms may make more successful investments. To consider the importance of the VC firm in determining portfolio company success, we do a similar analysis on two levels. In specification 4 of Table 5, we look at the data on an entrepreneur-company-VC firm level. This allows us to fully consider variation in entrepreneur and VC firm characteristics. To account for concerns about the independence of observations, specification 5 is at the company-VC Firm level. In these specifications, we are using VC EXPERIENCE as an imperfect proxy for the quality of a venture capital firm. If successful entrepreneurs are more likely to get funded by better venture capital firms, we could be getting a positive coefficient of PRIOR SUCCESS because it is a proxy for the unobservable components of venture capital firm quality that is not captured by VC EXPERIENCE. Thus, to control for unobservable characteristics, we estimate the model with venture capital firm fixed effects. This enables us to estimate how well a given venture capital firm does on its investments in serial entrepreneurs relative to its other investments in first-time entrepreneurs. Results in the fourth and fifth columns of Table 5 indicate that with venture capital firm fixed effects the differential between first time entrepreneurs and 14 successful serial entrepreneurs is even larger. The fifth column, which estimates the effects at the company level, generates a predicted success rate for first-time entrepreneurs of 17.7%. The predicted success rate for failed serial entrepreneurs in later ventures is 19.8%, and it is 29.6% for entrepreneurs with successful track records.. But, if an entrepreneur already has a demonstrable track record of success, does a more experienced venture capital firm enhance performance through screening or through monitoring and business building? To answer this question, we add to the basic specification in column 2 and 3 of Table 5 an interaction term between VC EXPERIENCE and PRIOR SUCCESS, as well an interaction term between VC EXPERIENCE and LATER VENTURE. The results are reported in columns 6 and 7 of the table. The coefficient of VC EXPERIENCE×PRIOR SUCCESS is negative and statistically significant (though somewhat more so in column 6). This indicates that venture capital firm experience has a less positive effect on the performance of entrepreneurs with successful track records. 
Indeed, using estimates from column 7, the predicted success rate for previously successful entrepreneurs is 28.1% when funded by more experienced venture capital firms (at the 75th percentile of VC EXPERIENCE) and 27.7% when funded by less experienced venture capital firms (at the 25th percentile of VC EXPERIENCE). Essentially, venture capital firm experience has a minimal effect on the performance of 15 entrepreneurs with good track records. Where venture capital firm experience does matter is in the performance of first-time entrepreneurs and serial entrepreneurs with histories of failure. First-time entrepreneurs have a 17.6% chance of succeeding when funded by more experienced venture capital firms and an 11.7% chance of succeeding when being funded by a less experienced venture capital firm. Likewise, failed entrepreneurs who are funded by more experienced venture capital firms have a 22.1% chance of succeeding as compared to a 14.7% chance of succeeding when they are funded by less experienced venture capital firms. These findings provide support for the view that venture capital firms actively screen and/or monitor their portfolio companies, and that there is some skill in doing so. When an entrepreneur has a proven track record of success–a publicly observable measure of quality–experienced venture capital firms are no better than others at determining whether he will succeed. It is only when there are less clear measures of quality–an entrepreneur is starting a company for the first time, or an entrepreneur has actually failed in his prior venture–where more experienced venture capital firms have an advantage in identifying entrepreneurs who will succeed. To use a sports analogy, all general managers of teams in the National Football League (NFL) probably agree that superstar Patriot quarterback Tom Brady would be a valuable addition to their teams. But, NFL teams were much less optimistic about his prospects in 2000 when the Patriots drafted him in the sixth round. The football equivalent of our finding would be that teams with a more experienced staff (such as the Patriots) are better at identifying 16 diamonds-in-the-rough such as Tom Brady when they are in the draft, but no better at determining their worth once they are proven superstars.5 The results are also consistent with the view that venture capitalists actively monitor their portfolio firms or add value through a variety of means such as executive recruiting and customer contacts. Previously successful entrepreneurs–who presumably need less monitoring and value-added services–do not benefit as much from this sort of venture capital firm monitoring and expertise. By way of contrast, the evidence suggests that first-time entrepreneurs and those with a track record of failure are more likely to benefit from venture capital firm expertise. To continue the football analogy, Tom Brady would benefit less from a high-quality football coach now than he did when he was drafted. Table 6 provides additional supporting evidence for the view that more experienced venture capital firms are better able to identify and/or develop entrepreneurial skill. Here, we analyze the sample of first-time entrepreneurs to determine the factors that lead them to become serial entrepreneurs. The dependent variable is equal to one if the entrepreneur subsequently starts another venture. 
The logistic regression reported in the first column in the table indicates that first-time entrepreneurs funded by more experienced venture capital firms are more likely to become serial entrepreneurs. At the 25th percentile of experience, there is a 4.8% chance that an entrepreneur will start another venture, whereas at the 75th percentile, there is a Massey and Thaler (2006) present strong evidence that high first-round NFL draft picks are overvalued relative to later picks. Although they interpret their findings as evidence of a behavioral bias, it is also possible that less experienced general managers (who run lower quality teams and get to pick early in the first round) have a harder time assessing quality. 5 17 5.7% probability. Though the increase is small on an absolute basis, given the low baseline rates of serial entrepreneurship, the effect is quite big. B. Valuation We now examine how serial entrepreneurship and venture capital firm experience affect company valuation.6 To analyze this question, we use first-round “pre-money” valuation as our valuation measure. Venture Source calculates this as the product of the price paid per share in the round and the shares outstanding prior to the financing round.7 The pre-money valuation is the perceived net present value of the company, and therefore excludes the additional capital raised in the financing. A company’s valuation depends on numerous factors including those we can (imperfectly) observe (e.g., the stage of product development, company age, industry, location, public market valuation levels, entrepreneur’s quality, and venture capital firm’s quality) and those we cannot (e.g., the company’s sales and assets). We are mainly interested in how measures of entrepreneur quality and venture capital firm quality affect pre-money valuation. Table 7 presents the results of regressing the natural log of real pre-money valuation (expressed in millions of year 2000 dollars) on the above observables. Because the data include significant outliers (one valuation exceeds $600 million), we winsorize the dependent variable at the 99th percentile ($131.5 million), which is more than 15 times the median. All the regressions include industry and year fixed effects. We again Comment [JL3]: Or is it that all variables are winsorized? Hsu (2004) shows that entrepreneurs have to pay more (i.e., to accept a lower valuation) to be financed by venture capitalists with better track records. 7 Almost all venture capital financings use convertible preferred stock. This methodology for calculation pre-money valuation implicitly assumes that the value of preferred stock’s liquidation preference is zero. Thus, this common approach to calculating pre-money valuation overstates the true valuation. This bias is unlikely to vary systematically with the variables we are using in our regression analysis. 6 18 consider specifications at the entrepreneur-company level (1, 2 and 6), company level (3 and 7), the entrepreneur-company-VC firm level (4), and the company-VC firm level (5). Before describing our main results, it is worth pointing out that the controls all have the predicted sign. Older firms and those at later stages of product development have higher valuations. In addition, when public market industry valuations are higher, venture capital valuations are also higher. 
The public market industry valuation is calculated as the average market-to-book equity ratio for publicly traded firms in the same industry.8 Finally, firms located in California have slightly higher valuations than those in other states and firms located in Massachusetts have somewhat lower valuations, but these differences are not statistically significant. Of more interest is the finding that venture capital firm experience is positively related to pre-money valuation. The effect, however, is modest. The elasticity is approximately 9.2%. For example, the estimates from column 3 of Table 7 imply that at the 75th percentile of VC EXPERIENCE, the forecasted valuation is $10.49 million, whereas at the 25th, it is $8.92 million. That more experienced firms pay more for new ventures is not surprising, given that they have higher success rates. Because there are unobservable firm characteristics that affect valuation levels (or those that are measured In order to do this we need to link the SIC codes of public companies to the nine industries used in our analysis. Our procedure is to identify the SIC codes of all venture capital-backed firms that went public within a given Venture Economics industry code. Because there are multiple SIC codes associated with each of our nine industries, we construct market-to-book ratios as a weighted average of the market-tobook ratios of the public companies in those SIC codes, where the weights are the relative fractions of firms that went public within our nine industries. For each of the public firms assigned to the industry, we compute the ratio of shareholders’ equity to the market value of the equity at the beginning of the quarter. If multiple classes of common and preferred stock were outstanding, the combined value of all classes is used. In many industries, numerous small firms with significant negative earnings introduce a substantial skewness to the distribution of these ratios. Consequently, we weighted the average by equity market capitalization at the beginning of the quarter. 8 19 with error), it is likely that VC EXPERIENCE serves as a proxy for the characteristics that increase firm value. These characteristics, such as the entrepreneurial quality, might be unobservable to less experienced venture capital firms. Alternatively, characteristics such as sales or assets could be observable to market participants, but unobservable to us given the data we have. If more experienced venture capital firms invest in more mature firms in ways we do not fully capture with our company stage controls, this could explain our finding. The finding that new ventures funded by more experienced venture capital firms invest at higher pre-money valuations needs to be reconciled with Hsu’s (2004) finding that more experienced venture capital firms make offers at lower pre-money valuations. Hsu examines a sample of new ventures that received competing offers from venture capital firms. To the extent that more experienced venture capital firms add more value to new ventures (as is consistent with our findings), they would require larger equity stakes (lower share prices) in exchange for their money and their value-added services. Thus, the offers from top-tier ventures capital firms should imply lower pre-money valuations even though the companies are worth more if funded by them. Because Hsu is looking at within-venture offers, he is controlling for the quality of the venture. 
He is therefore able to isolate the effect of venture capital firm quality on valuations. Because we are looking across ventures, we are picking up the effect identified by Hsu as well as the between-venture differences in quality. This may explain why the estimated effect is small.

Somewhat surprisingly, in the first two columns of Table 7 we find no relationship between pre-money valuation and LATER VENTURE and PRIOR SUCCESS. The same is true when we conduct the analysis at the company level (column 3) and include venture capital firm fixed effects (columns 4 and 5). Given the higher success rates of previously successful entrepreneurs, one would have thought that firms associated with these entrepreneurs would have had higher valuations. Apparently this is not the case, which suggests that venture capital firms are able to buy equity in firms started by previously successful entrepreneurs at a discount.

The last two columns of Table 7 add interactions of VC EXPERIENCE with measures of PRIOR SUCCESS and LATER VENTURE. The coefficient of the interaction term is negative and statistically significant only at the five percent level. This suggests that top-tier venture capital firms are not as eager to pay for prior performance, but the magnitude of the effect is small. This result is consistent with the results of Kaplan and Stromberg (2003), who examine venture capital contractual terms and find that repeat entrepreneurs receive more favorable terms for vesting, board structure, liquidation rights, and the tranching of capital, but do not receive greater equity ownership percentages. It therefore appears that serial entrepreneurs may extract greater value from venture capitalists in the non-price terms of investment.

The overall conclusion that we draw from Table 7 is that despite the higher success rates of entrepreneurs with successful track records, venture capital firms are not paying premiums to invest in their companies. Why successful entrepreneurs appear unable to capture an increasing share of rents is something of a mystery, but it has implications for returns.

C. Returns

In this section, we investigate whether venture capital firms earn higher returns on their investments in serial entrepreneurs. Unfortunately, we do not observe actual rates of return on venture capital investments.[9] What we can observe, with varying degrees of accuracy, is company valuations at the time of exit. Venture Economics and Venture Source provide this information for most IPOs and some acquisitions. For companies missing information in these databases, we search for the valuation at the time of IPO in the SDC Corporate New Issues database. We also search for acquisition values using Factiva. If the firm did not go public by December 2003, we assume that the exit value is zero. We exclude from the analysis firms that went public or were acquired but for which we could not find a valuation.

Our crude measure of return is the ratio of the exit valuation to the pre-money valuation in the first financing round with venture investors, which we refer to as the investment multiple. The investment multiple is likely to be correlated with actual returns, but it does not adjust for two critical elements of return: the time it takes to exit and the dilution that occurs over financing rounds.

Table 8 presents regression results in which the dependent variable is the investment multiple divided by the average investment multiple of firms funded in the same industry and year.
We refer to this variable as the relative investment multiple. When the relative investment multiple is one, the investment multiple on the venture capital firm's investment is equal to the industry-year average. The regressors include the same set of variables we have been considering throughout.

[9] Venture capitalists typically invest in multiple financing rounds. Even if we know that a given venture firm invested in a certain round, it is often unclear what percentage of the equity sold in the financing the venture capitalist received. This information is needed to compute a rate of return.

The first column indicates that the relative investment multiple is greater for firms with serial entrepreneurs, although the effect is not statistically significant. The effect is larger for serial entrepreneurs who previously succeeded, as results reported in the second column indicate. The estimated effect is statistically significant at conventional levels. Finally, venture capital experience is positively related to the relative investment multiple. The estimates from the third column of Table 8 imply that among companies funded by inexperienced venture capital firms, only those with previously successful entrepreneurs do better than the industry-year average investment multiple (79% higher on average). First-time and failed entrepreneurs do significantly worse than the average. By contrast, when companies are funded by top-tier venture capital firms, they perform in general at the industry average and do significantly better if one of the entrepreneurs has a successful track record (107% greater).

The last column of Table 8 looks at relative investment multiples, conditional on the venture succeeding. Prior success and venture capital experience have no appreciable effect on relative investment valuation. This indicates that the higher returns documented in the first four columns of Table 8 come from higher success rates, not greater returns in the IPO.

Finally, we try to connect our deal-level results to venture capital fund internal rates of return. Our source of return data is the 2004 Private Equity Performance Monitor, which presents return data on over 1,700 private equity funds. This information is compiled by Mark O'Hare, who over the past five years has created a database of returns from public sources (e.g., institutional investors who have posted their returns online), Freedom of Information Act requests to public funds, and voluntary disclosures by both general and limited partners. In order to do this mapping, we need to make some assumptions. (For instance, because Mayfield V was raised in 1984 and Mayfield VI in 1988, we attribute all investments made between 1984 and 1987 to Mayfield V.)

Our dependent variable is fund internal rate of return (IRR) measured in percent. (For example, a 60% return gets entered into the data as 60.) The average fund IRR is 13.8%. We include a series of controls, including industry shares in the portfolio of the fund, year dummies for the year the fund was established, and assets under management at the time the fund was raised. Our main independent variables of interest are the portion of a fund's deals that involve serial entrepreneurs and the portion that involve successful serial entrepreneurs. The results in Table 9 are quite strong and demonstrate the impact that serial entrepreneurs can have on portfolio returns.
At the 25th percentile, 6.8% of a venture capital fund's investments are in companies with serial entrepreneurs; at the 75th percentile, the figure is 18.2%. The coefficient of 59.2 in column 2 implies a 7.3% greater IRR for funds that invest in serial entrepreneurs. However, there appears to be no link to the share of the fund invested in previously successful entrepreneurs. The estimated effects of experience are also large. Top-tier firms are predicted to have IRRs of 45.4%, as compared to 14.3% for less experienced venture capital firms.

4. Conclusions

This paper examines the role that skill plays in the success of entrepreneurs and venture capitalists. By examining the experience of serial entrepreneurs and the venture capitalists that fund them, we are able to provide insights into how important and what type of skill each possesses. Our results indicate that skill is an important determinant of success for entrepreneurial startups. Successful serial entrepreneurs are more likely to replicate the success of their past companies than either single-venture entrepreneurs or serial entrepreneurs who failed in their prior venture. More experienced venture capital firms are also shown to have higher success rates on their investments. However, this is isolated to first-time entrepreneurs and those who previously failed. When experienced and inexperienced venture capital firms invest in entrepreneurs with a track record of success, there is no performance differential. This evidence would seem to suggest that prior success is a signal of quality or that venture capital firms add little value to talented, successful entrepreneurs. If prior success were pure luck, we would not see this pattern.

While they are more likely to be successful, serial entrepreneurs are not able to extract all of the value from their superior ability. We find that successful serial entrepreneurs do not achieve higher valuations than do other entrepreneurs.[10] This leads to higher deal returns for venture capitalists who invest in companies started by successful serial entrepreneurs. Investing in serial entrepreneurs also leads to higher rates of return for the funds themselves.

[10] We are unable to determine the value implications of the non-price terms in Kaplan and Stromberg (2003) because we do not have the actual financing documents.

Our paper raises several interesting questions for future research. First, while our paper identifies entrepreneurial skill, it does not distinguish exactly what the critical entrepreneurial skill is. It is possible that entrepreneurial skill is embodied in the networks with customers, suppliers, and other market participants that enhance the outcomes of serial entrepreneurs. It is also possible that the skill is a greater ability to identify markets, set strategy, and correctly analyze various business problems. In future work, we hope to examine the markets that serial entrepreneurs enter and to identify whether future success is confined to the markets where they have operated in the past or whether successful serial entrepreneurs are also more successful in new markets.

While not ruling it out, our results are less consistent with the learning-by-doing work of Eesley and Roberts (2006a,b). A learning-by-doing story would need to explain why there is differential learning between successful and unsuccessful serial entrepreneurs, as well as why more experienced venture capitalists can identify failed serial entrepreneurs who "learned" in their previous venture.
The results in this paper also highlight the role of venture capital skill in identifying talented entrepreneurs and attractive markets. We do not, however, identify whether this ability operates at the individual or the firm level. Similarly, we do not know whether various attributes of the individual general partners or the firms themselves are also associated with greater ability to identify successful investments. In future work, we plan to look at how demographic characteristics of individual general partners and characteristics of venture capital teams affect the success of venture capital investments.

References

Bengtsson, Ola, Investor Attitudes and the Supply of Capital: Are Venture Capitalists in California More Forgiving?, University of Chicago working paper, 2006.

Bengtsson, Ola, Repeat Relationships between Venture Capitalists and Entrepreneurs, University of Chicago working paper, 2005.

Bhide, Amar, The Origin and Evolution of New Businesses. Oxford: Oxford University Press. 2000.

Carroll, Glenn R., and Elaine Mosakowski, The Career Dynamics of Self-Employment, Administrative Science Quarterly 32. 1987. 570-589.

Chatterji, Aaron K., Spawned with a Silver Spoon? Entrepreneurial Performance and Innovation in the Medical Device Industry, Unpublished Working Paper, University of California at Berkeley, 2005.

Eesley, Charles, and Edward Roberts, Cutting Your Teeth: Learning from Rare Experiences, MIT working paper, 2006a.

Eesley, Charles, and Edward Roberts, The Second Time Around?: Serial Entrepreneurs from MIT, MIT working paper, 2006b.

Fischer, Eileen, Rebecca Reuber and Lorraine Dyke, The Impact of Entrepreneurial Teams on the Financing Experiences of Canadian Ventures, Journal of Small Business and Entrepreneurship 7. 1990. 13-22.

Gompers, Paul, Anna Kovner, Josh Lerner, and David Scharfstein, Specialization and Success: Evidence from Venture Capital, Unpublished Working Paper, Harvard University, 2006.

Hochberg, Yael, Alexander Ljungqvist, and Yang Lu, Whom You Know Matters: Venture Capital Networks and Investment Performance, Journal of Finance, forthcoming, 2006.

Honig, Benson, and Per Davidsson, Nascent Entrepreneurship, Social Networks and Organizational Learning. Paper presented at the Competence 2000 conference, Helsinki, Finland.

Holtz-Eakin, Douglas, David Joulfaian and Harvey Rosen, Sticking it Out: Entrepreneurial Survival and Liquidity Constraints, Journal of Political Economy 102. 1994. 53-75.

Hsu, David H., What Do Entrepreneurs Pay for Venture Capital Affiliation? Journal of Finance 59. 2004. 1805-1844.

John, Kose, S. Abraham Ravid, and Jayanthi Sunder, The Role of Termination in Employment Contracts: Theory and Evidence from Film Directors' Careers, New York University working paper, 2001.

Kaplan, Steven N., and Antoinette Schoar, Private Equity Performance: Returns, Persistence and Capital, Journal of Finance 60. 2005. 1791-1823.

Kaplan, Steven N., Berk A. Sensoy, and Per Strömberg, What are Firms? Evolution from Birth to Public Companies, University of Chicago working paper. 2006.

Kaplan, Steven N., and Per Stromberg, Financial Contracting Theory Meets the Real World: Evidence from Venture Capital Contracts, Review of Economic Studies, April 2003. 281-316.

Kihlstrom, Richard, and Jean-Jacques Laffont, A General Equilibrium Entrepreneurial Theory of Firm Formation Based on Risk Aversion, Journal of Political Economy 87. 1979. 719-748.

Knight, Frank H. Risk, Uncertainty and Profit (G. J. Stigler, ed.). Chicago: University of Chicago Press. 1921.
Lazear, Edward P., Entrepreneurship, Journal of Labor Economics 23. 2005. 649-680.

Schumpeter, Joseph A. The Theory of Economic Development. Cambridge, Mass: Harvard University Press. 1934.

Sorensen, Morten, How Smart is Smart Money? A Two-Sided Matching Model of Venture Capital, Unpublished Working Paper, University of Chicago, 2004.

Sorenson, Olav, and Toby Stuart, Syndication Networks and the Spatial Distribution of Venture Capital Investments. American Journal of Sociology 106. 2001. 1546-1586.

Table 1: Frequency of Serial Entrepreneurs by Year

Year | Serial Entrepreneurs | Total Entrepreneurs | Serial Entrepreneurs as a Percent of Total
1980 | 0 | 11 | 0.0
1981 | 0 | 7 | 0.0
1982 | 0 | 11 | 0.0
1983 | 0 | 34 | 0.0
1984 | 2 | 29 | 6.9
1985 | 3 | 42 | 7.1
1986 | 9 | 99 | 9.1
1987 | 9 | 130 | 6.9
1988 | 10 | 209 | 4.8
1989 | 14 | 254 | 5.5
1990 | 35 | 301 | 11.6
1991 | 34 | 337 | 10.1
1992 | 53 | 522 | 10.2
1993 | 65 | 516 | 12.6
1994 | 78 | 574 | 13.6
1995 | 129 | 1,051 | 12.3
1996 | 166 | 1,262 | 13.2
1997 | 141 | 1,205 | 11.7
1998 | 164 | 1,256 | 13.1
1999 | 174 | 1,678 | 10.4
2000 | 38 | 404 | 9.4

Sample includes one observation per entrepreneur-company pair.

Table 2: Frequency of Serial Entrepreneurs by Industry

Industry | Serial Entrepreneurs | Total Entrepreneurs | Serial Entrepreneurs as a Percent of Total
Internet and Computers | 556 | 4,489 | 12.4
Communications and Electronics | 157 | 1,424 | 11.0
Business and Industrial | 2 | 109 | 1.8
Consumer | 29 | 576 | 5.0
Energy | 0 | 19 | 0.0
Biotech and Healthcare | 271 | 1,964 | 13.8
Financial Services | 11 | 163 | 6.7
Business Services | 68 | 827 | 8.2
Other | 30 | 361 | 8.3

Sample includes one observation per entrepreneur-company pair.

Table 3: Frequency of Serial Entrepreneurs by Venture Capital Firm

Firm | Serial Entrepreneurs | Total Entrepreneurs | Percent of Total | Rank by Number | Rank by Percent
Kleiner Perkins Caufield & Byers | 100 | 666 | 15.0 | 1 | 9
New Enterprise Associates | 80 | 702 | 11.4 | 2 | 28
Sequoia Capital | 69 | 432 | 16.0 | 3 | 5
U.S. Venture Partners | 68 | 454 | 15.0 | 4 | 10
Mayfield | 63 | 459 | 13.7 | 5 | 19
Accel Partners | 61 | 418 | 14.6 | 6 | 13
Crosspoint Venture Partners | 60 | 407 | 14.7 | 7 | 11
Institutional Venture Partners | 56 | 385 | 14.5 | 8 | 14
Bessemer Venture Partners | 49 | 340 | 14.4 | 9 | 16
Matrix Partners | 44 | 275 | 16.0 | 10 | 4
Menlo Ventures | 43 | 305 | 14.1 | 11 | 17
Sprout Group | 42 | 315 | 13.3 | 12 | 21
Brentwood Associates | 40 | 265 | 15.1 | 14 | 8
Venrock Associates | 40 | 389 | 10.3 | 13 | 31
Mohr Davidow Ventures | 38 | 251 | 15.1 | 16 | 6
Oak Investment Partners | 38 | 462 | 8.2 | 15 | 39
Domain Associates | 37 | 210 | 17.6 | 17 | 1
Benchmark Capital | 36 | 264 | 13.6 | 19 | 20
Greylock Partners | 36 | 374 | 9.6 | 18 | 34
InterWest Partners | 35 | 312 | 11.2 | 20 | 29
Advent International | 33 | 238 | 13.9 | 21 | 18
Foundation Capital | 31 | 188 | 16.5 | 24 | 2
Enterprise Partners Venture Capital | 31 | 215 | 14.4 | 23 | 15
Canaan Partners | 31 | 252 | 12.3 | 22 | 23
Delphi Ventures | 30 | 185 | 16.2 | 26 | 3
Sigma Partners | 30 | 204 | 14.7 | 25 | 12
Charles River Ventures | 29 | 192 | 15.1 | 27 | 7
Norwest Venture Partners | 27 | 231 | 11.7 | 28 | 25
Austin Ventures | 25 | 270 | 9.3 | 29 | 36
Morgan Stanley Venture Partners | 24 | 191 | 12.6 | 34 | 22
Lightspeed Venture Partners | 24 | 202 | 11.9 | 33 | 24
Sutter Hill Ventures | 24 | 207 | 11.6 | 32 | 26
Battery Ventures | 24 | 242 | 9.9 | 31 | 33
Sevin Rosen Funds | 24 | 254 | 9.4 | 30 | 35
JPMorgan Partners | 23 | 225 | 10.2 | 36 | 32
St. Paul Venture Capital | 23 | 277 | 8.3 | 35 | 38
Alta Partners | 22 | 190 | 11.6 | 37 | 27
Morgenthaler | 20 | 183 | 10.9 | 38 | 30
Trinity Ventures | 18 | 214 | 8.4 | 39 | 37
Warburg Pincus | 16 | 195 | 8.2 | 40 | 40

Sample includes one observation per VC firm-portfolio company. Includes the 40 VC firms with the most total deals in the sample.
Table 4: Summary Statistics

Variable | All First Ventures | Entrepreneurs with One Venture | Serial Entrepreneurs, First Venture | Serial Entrepreneurs, Later Ventures
Success Rate | 0.253 | 0.243 | 0.369*** | 0.290***
Pre-Money Valuation (millions of 2000 $) | 15.95 | 15.78 | 17.75* | 12.30***
Firm in Startup Stage | 0.116 | 0.118 | 0.090** | 0.175***
Firm in Development Stage | 0.294 | 0.294 | 0.293 | 0.377***
Firm in Beta Stage | 0.039 | 0.039 | 0.037 | 0.045
Firm in Shipping Stage | 0.469 | 0.470 | 0.462 | 0.362***
Firm in Profitable Stage | 0.073 | 0.070 | 0.101** | 0.036***
Firm in Re-Start Stage | 0.009 | 0.009 | 0.016 | 0.006
California-Based Company | 0.430 | 0.417 | 0.578*** | 0.591***
Massachusetts-Based Company | 0.119 | 0.119 | 0.122 | 0.119
Age of Firm (in Months) | 36.64 | 36.30 | 40.54** | 20.60***
Previous Deals by VC Firm | 51.35 | 51.76 | 46.70*** | 58.86***
Previous Deals by VC Firm Relative to Year Average | 2.896 | 2.887 | 2.989 | 3.290***

One observation per entrepreneur-company pair. ***, **, * indicate significant difference from the mean value of entrepreneurs with one venture at the 1%, 5% and 10% level, respectively.

Table 5: Venture Success Rates

[Table 5 reports seven probit specifications; the coefficient columns are not reliably recoverable here. Surviving row labels include company stage indicators (Shipping, Profitable, Re-Start, Stage Missing). All specifications control for round number, year, and industry; specifications (4) and (5) add VC firm fixed effects. Log-likelihoods, chi-squared statistics, and p-values (all 0.000) are reported per column; observations range from 3,831 to 19,617.]

The sample consists of 9,932 ventures by 8,808 entrepreneurs covering the years 1975 to 2000. The dependent variable is Success, an indicator variable that takes on the value of one if the portfolio company went public and zero otherwise.
Table 6: Probability of Becoming a Serial Entrepreneur

[Table 6 reports two probit specifications. Row variables: VC FIRM EXPERIENCE, PRIOR SUCCESS, Logarithm of Age of Company, California-Based Company, Massachusetts-Based Company, and company stage indicators (Development, Beta, Shipping, Profitable, Re-Start, Stage Missing), with controls for round number, year, and industry. The coefficient columns are not reliably recoverable here. Log-likelihood -2145.5 in both columns; chi-squared statistics 563.6 and 564.4; p-values 0.000; 8,734 observations in each specification.]

The sample consists of 8,808 initial ventures by entrepreneurs covering the years 1975 to 2000. The dependent variable is Become Serial, an indicator variable that takes the value of one if the entrepreneur begins a second venture and zero otherwise. VC FIRM EXPERIENCE is the difference between the log of the number of investments made by venture capital organization f prior to year t and the average in year t of the number of investments made by all organizations prior to year t. PRIOR SUCCESS is an indicator variable that takes on the value of one if the entrepreneur's first venture-backed company went public or filed to go public by December 2003 and zero otherwise. Standard errors are clustered at the portfolio company level. Robust t-statistics are in parentheses below coefficient estimates. ***, **, * indicate statistical significance at the 1%, 5% and 10% level, respectively.

Table 7: Pre-Money Valuations
[Table 7 reports seven OLS specifications of the natural log of pre-money valuation; the coefficient columns are not reliably recoverable here. Surviving row labels include company stage indicators (Shipping, Profitable, Re-Start, Stage Missing) and the logarithm of a value-weighted industry index. Specifications (1), (2) and (6) are at the entrepreneur-company level, (3) and (7) at the company level, (4) at the entrepreneur-company-VC firm level, and (5) at the company-VC firm level; all include round number, year and industry controls, and (4) and (5) add VC firm fixed effects. R-squared values range from 0.35 to 0.56; observations range from 2,348 to 15,670.]

The sample consists of 6,418 professional venture financings of privately held firms between 1975 and 2000 in the Venture Source database for which the firm was able to determine the valuation of the financing round. The dependent variable is the natural logarithm of Pre-Money Valuation, defined as the product of the price paid per share in the financing round and the shares outstanding prior to the financing round, expressed in millions of current dollars.

Table 8: Venture Returns

[Table 8 reports eight OLS specifications; the coefficient columns are not reliably recoverable here. Surviving row labels include company stage indicators (Shipping, Profitable, Re-Start, Stage Missing), with controls for round number, year, and industry. R-squared values range from 0.01 to 0.1; observations range from 1,554 to 8,897.]

The sample consists of 8,944 ventures for which an IPO valuation was determined or for which there was no IPO. The dependent variable is IPO Exit Return, defined as the ratio of the IPO valuation to the pre-money valuation, relative to the ratio of the IPO valuation to pre-money valuation of all ventures in the same industry in the current year. LATER VENTURE is an indicator variable that takes on the value of one if the entrepreneur had started a previous venture-backed company and zero otherwise. PRIOR SUCCESS is an indicator variable that takes on the value of one if the entrepreneur had started a previous venture-backed company that went public or filed to go public by December 2003 and zero otherwise. Any Entrepreneur in Later Venture is an indicator variable that takes the value of one if any entrepreneur within the company had started a previous venture-backed company and zero otherwise.
Any Entrepreneur in Prior Success is an indicator variable that takes the value of one if any entrepreneur within the company started a previous venture-backed company that went public or filed to go public by December 2003 and zero otherwise. VC Firm Experience is the difference between the log of the number of investments made by venture capital organization f prior to year t and the average in year t of the number of investments made by all organizations prior to year t. Standard errors are clustered at the portfolio company level. Robust t-statistics are in parentheses below coefficient estimates. ***, **, * indicate statistical significance at the 1%, 5% and 10% level, respectively.

Table 9: Fund-Level Returns

Variable | (1) OLS | (2) OLS | (3) OLS | (4) WLS
Share of Portfolio With LATER VENTURE | 54.0521 (1.21) | 59.1963 (1.24) | 36.7272 (0.63) | 310.7474 (1.70)*
Share of Portfolio With PRIOR SUCCESS | -- | -- | 64.4079 (0.72) | -74.5977 (0.31)
VC FIRM EXPERIENCE | 20.0744 (5.02)*** | 19.5406 (4.77)*** | 20.0984 (5.01)*** | 20.1381 (3.50)***
Vintage Year Fixed Effects | yes | yes | yes | yes
Fund Size | yes | yes | yes | yes
Percentage in Each Industry | yes | yes | yes | yes
Mean Round Number of Deals | -- | yes | -- | --
R-squared | 0.45 | 0.46 | 0.46 | 0.59
Observations | 514 | 482 | 514 | 514

The sample consists of 370 VC funds with information from the 2004 Private Equity Performance Monitor. The dependent variable is Fund IRR, defined as the IRR of the fund. Share of Portfolio with Later Venture is the share of the individual VC firm's portfolio in later ventures of serial entrepreneurs over the years of the fund. Share of Portfolio with Prior Success is the share of the individual VC firm's portfolio in later ventures of serial entrepreneurs where the entrepreneur was successful in the previous venture over the years of the fund. VC FIRM EXPERIENCE is the difference between the log of the average number of investments made by venture capital organization f prior to year t for each investment in the fund and the average in year t of the average number of investments made by all organizations prior to year t. Standard errors are clustered at the VC firm level. Robust t-statistics are in parentheses below coefficient estimates. ***, **, * indicate statistical significance at the 1%, 5% and 10% level, respectively.
https://www.scribd.com/document/501271/Serial-Entrepreneur-July-06
CC-MAIN-2017-04
en
refinedweb
Hitchhiker's Guide to the Atmosphere Framework using WebSocket, Long-Polling and Http Streaming

The Atmosphere Framework easily allows writing web applications that transparently support SSE (Server Side Events), JSONP, WebSocket, Long-Polling and Http Streaming. The Atmosphere Framework also hides the complexity of the current asynchronous APIs, which differ from server to server, and makes your application portable among them. More important, it is much easier to write an Atmosphere application than to use the Servlet 3.0 API.

There are several APIs available in Atmosphere to write an asynchronous application: AtmosphereHandler, Meteor, or Jersey's Atmosphere extension. In this blog I will take the famous JQuery PubSub sample to demonstrate those APIs. Note that I will not discuss the JQuery Atmosphere Plugin as it is the same for all APIs. Important: all the code snippets below support WebSocket, SSE, JSONP, Long-Polling and Streaming by default. Only the last section supports WebSocket only.

The JQuery PubSub application is quite simple. You enter a topic to subscribe to, you select a transport to use (WebSocket, JSONP, SSE, Streaming or Long-Polling) or let the plugin decide for you, and then you are ready to publish messages. You can use Redis on the server side to cluster your application among servers. The subscribe operation is done using a GET, the publish using a POST (in the form of message="something"). If the WebSocket transport is used, the message is wrapped as a POST and delivered as a normal HTTP request. Note that this feature is configurable in Atmosphere.

PubSub using AtmosphereHandler

The AtmosphereHandler is a low level API that can be used to write an asynchronous application. An application just has to implement that interface. This API is usually used by other frameworks in order to integrate with Atmosphere (GWT, Jersey, Vaadin, etc.), but it can also be used if you want to write Servlet style code. So, with an AtmosphereHandler, the PubSub implementation will take the form of:

public class AtmosphereHandlerPubSub extends AbstractReflectorAtmosphereHandler {

    @Override
    public void onRequest(AtmosphereResource r) throws IOException {
        HttpServletRequest req = r.getRequest();
        HttpServletResponse res = r.getResponse();
        String method = req.getMethod();

        // Suspend the response.
        if ("GET".equalsIgnoreCase(method)) {
            // Log all events on the console, including WebSocket events.
            r.addEventListener(new WebSocketEventListenerAdapter());

            res.setContentType("text/html;charset=ISO-8859-1");
            Broadcaster b = lookupBroadcaster(req.getPathInfo());
            r.setBroadcaster(b);

            if (req.getHeader(X_ATMOSPHERE_TRANSPORT)
                    .equalsIgnoreCase(LONG_POLLING_TRANSPORT)) {
                req.setAttribute(RESUME_ON_BROADCAST, Boolean.TRUE);
                r.suspend(-1, false);
            } else {
                r.suspend(-1);
            }
        } else if ("POST".equalsIgnoreCase(method)) {
            Broadcaster b = lookupBroadcaster(req.getPathInfo());
            String message = req.getReader().readLine();
            if (message != null && message.indexOf("message") != -1) {
                b.broadcast(message.substring("message=".length()));
            }
        }
    }

    @Override
    public void destroy() {
    }

    // Map the last path segment to a Broadcaster (the pubsub topic).
    Broadcaster lookupBroadcaster(String pathInfo) {
        String[] decodedPath = pathInfo.split("/");
        Broadcaster b = BroadcasterFactory.getDefault()
                .lookup(decodedPath[decodedPath.length - 1], true);
        return b;
    }
}

When a GET is received, we look up a Broadcaster and then suspend the response based on the path info (REST style).
Here we need to make sure we aren't sending padding data (required for WebKit browsers) when long-polling is used. That's the only conditional evaluation required in terms of transport. With the POST we just look up the Broadcaster (which represents a pubsub topic) and broadcast the request's body. That's it.

PubSub using Meteor

The Meteor is another low level API that can be used with existing Servlet applications. As an example, the ADF framework uses Meteor in order to integrate Atmosphere support.

public class MeteorPubSub extends HttpServlet {

    @Override
    public void doGet(HttpServletRequest req, HttpServletResponse res) throws IOException {
        // Create a Meteor
        Meteor m = Meteor.build(req);

        // Log all events on the console, including WebSocket events.
        m.addListener(new WebSocketEventListenerAdapter());

        res.setContentType("text/html;charset=ISO-8859-1");
        Broadcaster b = lookupBroadcaster(req.getPathInfo());
        m.setBroadcaster(b);

        if (req.getHeader(X_ATMOSPHERE_TRANSPORT)
                .equalsIgnoreCase(LONG_POLLING_TRANSPORT)) {
            req.setAttribute(RESUME_ON_BROADCAST, Boolean.TRUE);
            m.suspend(-1, false);
        } else {
            m.suspend(-1);
        }
    }

    public void doPost(HttpServletRequest req, HttpServletResponse res) throws IOException {
        Broadcaster b = lookupBroadcaster(req.getPathInfo());
        String message = req.getReader().readLine();
        if (message != null && message.indexOf("message") != -1) {
            b.broadcast(message.substring("message=".length()));
        }
    }

    // Map the last path segment to a Broadcaster (the pubsub topic).
    Broadcaster lookupBroadcaster(String pathInfo) {
        String[] decodedPath = pathInfo.split("/");
        Broadcaster b = BroadcasterFactory.getDefault()
                .lookup(decodedPath[decodedPath.length - 1], true);
        return b;
    }
}

When a GET is received, we create a Meteor and use that Meteor to suspend the response, again using the path info. For the POST, we do the same as with AtmosphereHandler, e.g. retrieve the Broadcaster and broadcast the message.

PubSub using Jersey's Atmosphere Extension

With the Jersey extension, we can either use annotations or the programmatic API. As simple as:

@Path("/pubsub/{topic}")
@Produces("text/html;charset=ISO-8859-1")
public class JQueryPubSub {

    private @PathParam("topic") Broadcaster topic;

    @GET
    public SuspendResponse<String> subscribe() {
        return new SuspendResponse.SuspendResponseBuilder<String>()
                .broadcaster(topic)
                .outputComments(true)
                .addListener(new EventsLogger())
                .build();
    }

    @POST
    @Broadcast
    public Broadcastable publish(@FormParam("message") String message) {
        return new Broadcastable(message, "", topic);
    }
}

The GET could have been handled using the @Suspend annotation:

@GET
@Suspend(listeners = EventsLogger.class, outputComments = true)
public Broadcastable subscribe() {
    return new Broadcastable(topic);
}

As you can see, it is much simpler than with Meteor and AtmosphereHandler.

PubSub using WebSocket only

If you are planning to write pure WebSocket applications and don't plan to support normal HTTP, you can also write your own WebSocket sub-protocol. It is quite important to note here that only WebSocket will be supported.

public class WebSocketPubSub implements WebSocketProtocol {

    private AtmosphereResource r;

    @Override
    public HttpServletRequest onMessage(WebSocket webSocket, String message) {
        Broadcaster b = lookupBroadcaster(r.getRequest().getPathInfo());
        if (message != null && message.indexOf("message") != -1) {
            b.broadcast(message.substring("message=".length()));
        }
        // Do not dispatch to another container like Jersey.
        return null;
    }

    @Override
    public void onOpen(WebSocket webSocket) {
        // Accept the handshake by suspending the response.
        r = (AtmosphereResource) webSocket.atmosphereResource();
        Broadcaster b = lookupBroadcaster(r.getRequest().getPathInfo());
        r.setBroadcaster(b);
        r.addEventListener(new WebSocketEventListenerAdapter());
        r.suspend(-1);
    }

    @Override
    public void onClose(WebSocket webSocket) {
        webSocket.atmosphereResource().resume();
    }

    // Map the last path segment to a Broadcaster (the pubsub topic).
    Broadcaster lookupBroadcaster(String pathInfo) {
        String[] decodedPath = pathInfo.split("/");
        Broadcaster b = BroadcasterFactory.getDefault()
                .lookup(decodedPath[decodedPath.length - 1], true);
        return b;
    }
}

The important methods here are onOpen (for accepting the handshake) and onMessage, which is where the messages are received.

Conclusion

It is quite important to pick the best API when writing an Atmosphere application, as it can save you a lot of time! You can download all the samples from here. For any questions or to download the Atmosphere Client and Server Framework, go to our main site, use our Google Group forum, follow the team or myself and tweet your questions there!
lookup(decodedPath[decodedPath.length - 1], true); return b; } The important method here is onOpen (for accepting the handshake) and the onMessage, which is were the messages are received. Conclusion It is quite important to pick the best API when writing Atmosphere application as it can save you a lot of time! You can download all samples from here. For any questions or to download Atmosphere Client and Server Framework, go to our main site, use our Google Group forum, follow the team or myself and tweet your questions there! . REST + WebSocket applications? Why not using the Atmosphere Framework The Atmosphere Framework easily allow the creation of REST applications … using WebSocket. This time I will describe a super simple example on how to do it. The Atmosphere Framework supports transparently both WebSocket and Comet transport and brings portability to any Java based application. An application written using Atmosphere can be deployed in any WebServer and Atmosphere will transparently make it work. Atmosphere is also able to transparently select the best transport to use, e.g. WebSocket or Comet. Now let’s write a very simple REST application with Comet support as we normally write: @Path("/pubsub/{topic}") @Produces("text/html;charset=ISO-8859-1") public class JQueryPubSub { private @PathParam("topic") Broadcaster topic; @GET public SuspendResponse<String> subscribe() { return new SuspendResponse.SuspendResponseBuilder<String>() .broadcaster(topic) .outputComments(true) .addListener(new EventsLogger()) .build(); } @POST @Broadcast public Broadcastable publish(@FormParam("message") String message) { return new Broadcastable(message, "", topic); } } Doing GET /pubsub/something will invoked the SuspendResponse. To make the exercise simple, all we do there is suspend the connection (e.g. do not return any response, wait for an event). If you want to make this exercise more difficult, you can always implements the ETag trick! Once the connection is suspended, we need to use a second connection in order to post some data POST /pubsub/something message=I love Comet Executing the POST request will result in the invocation of the publish method. The @Broadcast annotation means the FormParam value will be broadcasted to all suspended connections. WebSocket to rule them all OK so let’s assume we now deploy your application in a WebServer that supports WebSocket like Jetty or GlassFish. Now Atmosphere will auto detect WebSockets are supported and use it when a WebSocket request is done. Now let’s assume we build the client using the Atmosphere JQuery Plug In and execute the GET request using Chrome (which support Websocket).The Atmosphere Javascript library is able to challenge the remote server and discover if the server and client support WebSocket, and use it. In that scenario, suspending the connection will tell Atmosphere to execute the WebSocket handshake. Now the POST will be executed on the same connection, and the public method will be invoked this time. This is not a POST as you see normally with normal HTTP. Since WebSocket is used, only the form param will be send over the wire: message=I love WebSocket All of this occurs without any modification of your REST application. All you need to do to enable WebSocket is to “suspend” the connection when a @GET occurs. Transparent, is it :-) You can download the current version of the sample here. 
Now what Atmosphere is doing under the hood is wrapping the WebSocket message into an HttpServletRequest, so any framework like Jersey, Wicket, etc. works as-is. If you are familiar with Atmosphere, your AtmosphereHandler implementation will get invoked with an instance of HttpServletRequest that contains the WebSocket message, so you can use it as you would normally do with a Comet or normal HTTP request.

For any questions or to download the Atmosphere Client and Server Framework, go to our main site, use our Nabble forum, follow the team or myself and tweet your questions there! You can also check out the code on Github.

Atmosphere

private @PathParam("topic") RedisBroadcaster topic;

@GET
public SuspendResponse<String> subscribe() {
    return new SuspendResponse.SuspendResponseBuilder<String>()
            .broadcaster(topic)
            .outputComments(true)
            .addListener(new EventsLogger())
            .build();
}

@POST
@Broadcast
public Broadcastable publish(@FormParam("message") String message) {
    return new Broadcastable(message, "", topic);
}

Friday's Tricks #4: Improving Websocket/Comet performance using Delayed/Aggregated Server Side Events

This week I will explain how you can significantly improve the performance of your WebSocket/Comet application using delayed and aggregated Server Side Events with the Atmosphere Framework. It is not trivial to broadcast real time server side events using a Comet or WebSocket connection. As an example, if the frequency of your server side events broadcast is high, like many events per second, it is important to pick the best strategy when it is time to write those events back to the client.

Using JQuery, XMPP and Atmosphere to cluster your WebSocket/Comet application

private @PathParam("topic") XMPPBroadcaster topic;

@GET
public SuspendResponse<String> subscribe() {
    return new SuspendResponse.SuspendResponseBuilder<String>()
            .broadcaster(topic)
            .outputComments(true)
            .addListener(new EventsLogger())
            .build();
}

@POST
@Broadcast
public Broadcastable publish(@FormParam("message") String message) {
    return new Broadcastable(message, "", topic);
}

Don't look for more code, there isn't more! How it works:

- When the JQuery Plugin issues a request like GET /pubsub/"A Topic", the above resource is created and an instance of the Broadcaster is looked up for that topic.
http://jfarcand.wordpress.com/category/jquery/
CC-MAIN-2014-15
en
refinedweb
Exception handling for Windows Runtime apps in C# or Visual Basic

Learn how to handle exceptions (or errors) in Windows Runtime apps in C# or Visual Basic. You can handle exceptions as Microsoft .NET exceptions with try-catch blocks in your app code, and you can process app-level exceptions by handling the UnhandledException event.

Prerequisites

If you're new to .NET programming, you might want to read some more about exception handling in the common language runtime (CLR) and the .NET Framework. For more info, see these topics:

- Remarks in the System.Exception reference topic
- Handling and throwing exceptions

Debugging in Visual Studio

Many of the techniques that you probably use to debug your app rely on features of Microsoft Visual Studio. These features include first-chance exception handling, breakpoints, stepping through code, edit and continue, locals, watches, and remote debugging. These features, and techniques that use them, aren't documented extensively in this topic because most of them are common to any Visual Studio development or debugging scenario. For more info, see Debug Windows Runtime apps in Visual Studio.

Getting specific Windows Runtime error and exception info

The Windows Runtime uses a unified system that exposes the errors slightly differently for each language and programming model. .NET programmers generally use the term exception rather than error because, in its programming model, .NET represents the occurrence of an error as an exception object. For an app written in C# or Visual Basic, the system exposes errors as an exception that is either an instance of the System.Exception class, or an exception that's derived from System.Exception and has the inherited HResult and Message properties.

A Windows Runtime error can sometimes be mapped to a standard exception that is more specific, if that error is similar in meaning to a standard exception from the .NET core profile. That's why you might see the error appear as a derived exception class such as InvalidOperationException rather than as a System.Exception, either during debugging or in your handler code. The standard exception mapping is done internally by the .NET runtime, and is based on converting the original Windows Runtime error's HResult numeric value and returning a more specific standard exception to any listeners. (Listeners include first-chance exception reporting for debuggers, or app code that's handling exceptions.) Exceptions that can't be mapped to a standard exception for .NET are left as the System.Exception type. If you need more detail about what the exception means in context, read the Message string while you're debugging or catching exceptions.

Every exception also has an HResult value. In some cases the HResult matches an existing system error code, even if it doesn't get mapped to a .NET standard exception. If you're getting an exception for an error where you know the code but don't know how or whether it's being mapped as a standard exception, see Cross-reference: Standard exceptions and error codes for a table that may help you determine the correct exception type to catch.

Using try-catch

.NET supports a try-catch technique for isolating exceptions in code. (There are slight variations in the keywords that C# and Visual Basic use for this technique; for this discussion we'll use the C# keywords.)
When an exception occurs in a try block, the Windows Runtime searches the associated catch blocks, in the order in which they appear in app code, until it locates one that handles the exception based on its type.

As you develop an app, you might have exception-handling code that isolates exceptions that you'll eventually eliminate from code because they're preventable. A useful technique for these is a try-catch coding style in combination with first-chance exception notification. While you're still fixing code or trying to isolate the causes of errors, put the app code that you suspect is throwing the exception within the try block. Then write a general catch block for the exception. While you're debugging those areas of app code, use first-chance exception notifications, watches, breakpoints, and other techniques in the Microsoft Visual Studio debugger, along with the Exception from the catch block, to figure out what's going wrong. Eventually, after you've addressed the coding error, you might not need that try-catch block anymore.

There are also app code error-handling scenarios that are a result of how some of the Windows Runtime API is designed. For these scenarios, you might still need try-catch blocks even in the production code for your app. For example, some of the Windows Runtime networking APIs and network information APIs are designed so that the API calls throw errors even when the app code uses the APIs correctly. The errors come from external systems, but reach your app as Windows Runtime exceptions that are thrown by the logic of the Windows Runtime API. The errors you get from external systems may be transient or rare, but you may still need to handle them in your app if you can't completely prevent the error from happening at run time.

If you catch the exception as the general System.Exception, that catch block will catch all exceptions, even the ones that are mapped to standard exceptions. That catch block would also catch exceptions you've thrown from your own app code. So if you want to mix the handling of standard exceptions, custom exceptions, and unmapped cases that you can catch only with System.Exception, place the block that catches System.Exception last in your sequence of catch blocks. That way, any more specific exception types are caught first, and the catch (Exception e) statement catches those that don't match your previous filters.

The .NET languages also support the use of the finally clause following try-catch. An advanced scenario where you might use catch and finally together is to obtain and use resources in a try block, deal with exceptional circumstances in a catch block, and release the resources in the finally block. For more info, see try-catch-finally in the C# Reference.

Note: Visual C++ component extensions (C++/CX) code can't use the finally keyword.

Remember, not all exceptions are recoverable. Some exceptions indicate that something is drastically wrong and the app won't be permitted to run any further. Even if you catch these in try-catch code, that might be the last code that your app gets to run before it's terminated by the Windows Runtime or the system. Even while debugging you might not get past such a severe exception as a first-chance exception, because the runtime won't let debugging continue. For more info about the design philosophy (that is, why some exceptions are considered recoverable and some aren't), see Handling and throwing exceptions.
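To make the catch-block ordering concrete, here's a minimal sketch of the pattern. The file path handling, the helper name, and the null-on-failure policy are illustrative choices of our own, not part of any documented API; only StorageFile, FileIO, and the mapping of a missing file to FileNotFoundException come from the platform:

using System;
using System.Diagnostics;
using System.IO;
using System.Threading.Tasks;
using Windows.Storage;

static class FileTextLoader
{
    // Hypothetical helper: returns the file's text, or null if it can't be read.
    internal static async Task<string> TryReadTextAsync(string path)
    {
        try
        {
            // This call can throw even when used correctly, for example if
            // the file was deleted between a directory listing and this call.
            StorageFile file = await StorageFile.GetFileFromPathAsync(path);
            return await FileIO.ReadTextAsync(file);
        }
        catch (FileNotFoundException)
        {
            // A Windows Runtime error that the .NET runtime maps to a more
            // specific standard exception; handle the narrow case first.
            return null;
        }
        catch (Exception e)
        {
            // Placed last: catches unmapped Windows Runtime errors that only
            // surface as System.Exception, plus anything else not filtered above.
            Debug.WriteLine(e.Message);
            return null;
        }
    }
}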
Rethrowing exceptions

For some exceptions, you can't fix the error condition entirely in your app code, but you do have more info about why that exception happens in a particular circumstance. For example, you might be catching a system error that you can associate with some aspect of your app's use of the Windows Runtime API, but the message in the original error is vague or isn't using HResult values that could have explained the circumstances better. If you're able to give more precise info about why that exception is happening, based on your knowledge of your app's logic in context, it's sometimes worthwhile to throw a new, more precise exception from within the catch block; that is, to "rethrow" the exception you caught.

You might then want to catch the new exception at the app level (UnhandledException), so that the app-level logic can record the exception condition (in a log, for example) or otherwise save info about it as part of the app state. This might be useful app-state data for when your app needs to restart. Or you might just want other exception-recording techniques, such as external crash reporting, to see the more precise exception as thrown by your app code. Either way, when you rethrow an exception, anticipate that the app will eventually terminate but that it will do so while reporting the exception conditions from your rethrown exception rather than the original exception you caught.

App-level exception handling

Some exceptions, if they're not caught in your try-catch blocks or if you rethrow them from there, are propagated by the Windows Runtime as app-level exceptions. At this point you can use an app-level exception-handling technique and mark the exception occurrence as Handled in the event's data. If you don't mark the occurrence as Handled, the exception is propagated back to the Windows Runtime and system, and in most cases the app will then be terminated. Still, just before that happens, your handler for Application.UnhandledException can serve as a centralized, app-level exception handler to log errors, save what app state you can, and perform similar operations.

If you don't have a handler for UnhandledException and the UnhandledException event fires, the system terminates the app because the exception that motivated firing the UnhandledException event was unhandled. Similarly, if you do have a handler but never set the Handled property of the event data to true, the system terminates your app, usually just after the handler is invoked. If the UnhandledException event handler sets Handled to true, your app won't necessarily be terminated immediately and your app code can attempt to recover from the error condition by executing the remainder of your UnhandledException handler.

Your app might use an UnhandledException handler to take some action because you consider the source exception to be recoverable. The definition of "recoverable" depends at least partly on your app's logic, features or design. For an unrecoverable exception, your app might still perform logging, cleanup, or other actions before being terminated. But be aware that there are exceptions that will terminate your app before it even reaches your UnhandledException handler. Also, we recommend that you do not routinely set the value of Handled to true for all UnhandledException events and their originating error or exception cases, or routinely consider the circumstances as recoverable.
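Here's a minimal sketch of a handler that follows that advice, setting Handled only behind a deliberate check instead of doing it for every event. The recoverability test is a hypothetical placeholder for your own app-specific policy:

using System;
using System.Diagnostics;
using Windows.UI.Xaml;

sealed partial class App : Application
{
    public App()
    {
        this.InitializeComponent();
        this.UnhandledException += OnAppUnhandledException;
    }

    private void OnAppUnhandledException(object sender, UnhandledExceptionEventArgs e)
    {
        // Record what you can now; the app may still terminate after this.
        Debug.WriteLine("Unhandled: " + e.Message);

        // Hypothetical app-specific policy; don't set Handled to true routinely.
        if (IsRecoverable(e.Exception))
        {
            e.Handled = true;
        }
    }

    private static bool IsRecoverable(Exception ex)
    {
        // Placeholder: treat nothing as recoverable unless your app
        // logic has a specific reason to continue running.
        return false;
    }
}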
Here's why:

- Often the UnhandledException event handler doesn't have enough info about that exception and its context to know whether it's safe to continue and run more app code. Your app code may be in an inconsistent state or could be getting incorrect data results from the system. This could result in subsequent failures that are even harder to recover from or to avoid in the future if the app continues to run.
- The Windows Runtime and the system consider some exceptions to be unrecoverable regardless of the circumstances during which they're thrown, because the runtime execution code itself will be in an inconsistent state following these exceptions. If more unrecoverable exceptions happen, even if you set Handled to true, the app is still terminated. That might happen at a time that's no longer detectable (you might not have a try block that isolates it, or the system terminates the app without raising the UnhandledException event because it's a severe exception).

Tip: Even if you do set Handled to true, it's a good idea to save app state just in case you need it for a future restart.

At the app level, some of the specifics of the original exception may no longer be available after propagation through layers of system and app code. For any exception that really originated from your app's code, using a try-catch block in the app code is a better, more accurate exception-handling technique than using UnhandledException later on. Here are the reasons why:

- The type, message, and stack trace for UnhandledException event data are not guaranteed to exactly match the original exception as it was available to try-catch. Internally, the Windows Runtime does a lot of work to unify the error reporting between systems and your app, but sometimes info just can't be retrieved from the app's context. In particular, in your app-level exception handling you might lose at least one of the possible stack traces that was available with the original exception (either the managed or the native stack, depending on the context and other circumstances).
- The try-catch exception object might have an InnerException in nested exception cases, but the UnhandledException event arguments won't have this.
- The Message property does copy the message of the originally raised exception, but that's not always enough for detailed debugging or run-time recovery.

Even with these limitations, the app-level exception handling technique can still be useful for situations where your app is consuming components or services. An exception at run time due to a missing service or bad code or data from a component might be recoverable, but you can't always wrap the connection points to your dependency in a try-catch block. An UnhandledException event handler is a useful interim technique while you're still refining your code and isolating exceptions that are difficult to reproduce. For some of these exceptions you can eventually wrap a specific API call in a try-catch block. Or you can just fix the originating code if the exceptions turn out to be the result of code-usage mistakes.

It's possible for the UnhandledException event to fire for exceptions that never have any chance of being caught by other app code or wrapped by try-catch blocks. For example, if the XAML framework is performing layout and an exception is raised, this exception won't propagate through any app code because layout happens automatically.
(A layout pass is not the direct result of any API that your app code calls; it's just one of the underlying systems that act when the app starts or resumes and XAML for UI is loaded.) In this case, the UnhandledException event fires, and your app-level handler will be the first point at which any app code is notified about the exception.

Note: When a debugger is attached (for example, while you're actively developing your app in the IDE), the original exceptions are usually caught as first-chance exceptions in the Visual Studio debugger. These include exceptions that you may not be able to wrap with try-catch. For typical debugging techniques combined with certain exception cases like this, your UnhandledException event handler might not be called while you're running your app with a debugger.

Exceptions and asynchronous programming

Asynchronous programming is a key tenet of the Windows Runtime. You probably won't get far in your app's programming without touching at least some of the feature areas that use async programming techniques extensively. For example, the file picker and programmatic file access APIs are almost all async methods. Many media or imaging APIs are also async methods. You can make calls to async methods from within try blocks, but not from within a catch block. Exceptions that come from an async method call thus can't make another async method call as part of their recovery option. In other words, you're allowed only one level of async method calling if you've entered any error condition.

Most of the existing async methods that attempt to populate an object that you'll want later will use an IAsyncOperation return value. If it's a method you wrote yourself in .NET code, the equivalent type is Task<TResult>. The task can report whether it completed successfully or failed. That's usually more info for recovery than is available when you rely on exceptions reaching the app level. At the app level, all the caller has are the HResult and Message, and no connection to the original Task or operation and how or when it reported successful completion. For example, a task can report IsCanceled is true but the exception won't capture that detail.

Exceptions from components and libraries

Components or libraries can throw exceptions when you access their APIs. As with your own code or system code, the exception can sometimes be mapped as a standard exception when you catch it or when you're debugging. If that's the case, you can apply all the guidance you've read so far about exceptions in general. However, you may need to experiment a bit to see which of the exceptions that a component throws are still recoverable in your app, so you can decide how to handle them by means of a catch block or as a case in your UnhandledException event handler.

Your code could encounter a custom exception, depending on which components or libraries you use. A true custom exception can't be mapped to .NET standard exceptions; it always appears as a System.Exception type to your code if you attempt to catch it, see it as first-chance, or handle it at app level. A custom exception has an HResult numeric value and hopefully a useful Message. You might need documentation, headers, or other info sources from the author of the component or library to learn more about what the exception means and how to prevent it.
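A minimal sketch of that situation follows. The component class, its async method, and the custom error code are all hypothetical stand-ins; only the general shape (awaiting inside the try block, catching System.Exception because the custom exception can't be mapped, and reading HResult) reflects the guidance above:

using System;
using System.Diagnostics;
using System.Threading.Tasks;

// Hypothetical stand-in for a third-party component type.
public class WidgetService
{
    public virtual Task SyncAsync() { return Task.FromResult(true); }
}

static class WidgetClient
{
    // Hypothetical custom error code documented by the component's author.
    private const int WidgetOfflineHResult = unchecked((int)0x8ABC0001);

    internal static async Task<bool> TrySyncAsync(WidgetService service)
    {
        try
        {
            // Component APIs are often async; await inside the try block,
            // because recovery code in the catch block can't await.
            await service.SyncAsync();
            return true;
        }
        catch (Exception e)
        {
            // A true custom exception surfaces only as System.Exception;
            // hexadecimal formatting makes the HResult easy to look up.
            Debug.WriteLine("{0} (0x{1:X8})", e.Message, e.HResult);
            if (e.HResult == WidgetOfflineHResult)
            {
                return false; // a documented, recoverable condition
            }
            throw; // anything else is unexpected here
        }
    }
}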
Tip Sometimes converting the HRESULT value from integer to a hexadecimal representation of the numeric error code is helpful for finding references to an HRESULT value in general documentation sources, such as the system error code reference. Even custom components sometimes choose to raise errors using the system-defined codes if their exception is a good match with the meaning of a previously-defined system error or exception. Or see Cross-reference: Standard exceptions and error codes.

If you're writing a Windows Runtime component yourself using a .NET language, how you throw exceptions is one of the many areas about which you'll need to learn the specific programming requirements. In particular, you're not permitted to create custom public System.Exception derivatives. (However, you can use the existing derivatives in the .NET for Windows Runtime apps APIs, or leave the class as private.) And you'll have to think about how other languages see your exceptions, because supporting multiple languages is often the reason for creating your own Windows Runtime component. For more info, see Creating Windows Runtime Components in C# and Visual Basic, especially the "Throwing exceptions" section.

.NET-specific exceptions for the Windows Runtime

For .NET programming, the Windows Runtime supports these exceptions that don't have equivalent standard exceptions in the other languages:

LayoutCycleException

LayoutCycleException is an exception that is thrown specifically by the layout and rendering subsystems of the Windows Runtime while they are composing a XAML-based layout. LayoutCycleException can only occur when your code (or code in a component you're using) calls the UIElement.Measure or UIElement.Arrange method. Those calls are typically made from within the ArrangeOverride and MeasureOverride method implementations of a control, a panel, or another element that performs a custom layout on its child elements.

The layout system has an inherent internal concept of passes over layout, during which the layout adjusts itself asynchronously. The layout logic is based on several inputs and a bottom-up evaluation of all contributing layout elements in a visual tree. In some situations it's possible for an app layout to introduce a loop or condition that causes the layout system to throw LayoutCycleException. In such a loop, the layout system can't arrive at a solution and can't produce the visuals for a finalized layout, because it repeatedly invalidates its candidate layouts. For example, there could be a layout-related property binding or custom property-changed handling between a parent and child that keeps invalidating the inner parts of the layout before it can finish the outer part. A custom container doesn't necessarily have full control over what elements could exist within it, so loops like this aren't completely avoidable either by control authors or by app writers.

Unfortunately there's really no good run-time recovery option for a LayoutCycleException. If you're still in the development phase, and if it's your own code that throws the exception, have a close look at how your layout container changes its own layout-affecting properties as part of its layout overrides. The logic used there could be the cause of the loop.
Otherwise, if contained elements are reporting bad info to an otherwise sound layout logic (the parent-child scenario mentioned previously), you might have to debug the app-specific consumption of that control to find out what's causing a LayoutCycleException in a particular layout situation. There might also be scenarios where you choose not to use a UI that threw a LayoutCycleException and you're able to completely replace the UI with a different XAML composition that doesn't have any layout issues.

If you're writing an app but getting a LayoutCycleException from a Windows Runtime UI class or from a third-party code base, have a look at how your elements are nested and at their object-property relationships. There might be objects at different levels of your visual tree that are attempting to share values through bindings or other techniques, where the values are changing dynamically at run time. These properties and their sources or dependencies could be the source of a looping layout invalidation.

Generally, don't throw a LayoutCycleException from your app code, not even from a layout-method override of a specific Panel or Control implementation. Leave it up to the system to detect the conditions and throw the exception.

Note For more info on how panels work, see Custom panels overview.

XamlParseException

XamlParseException is thrown specifically by the XAML parser component of the Windows Runtime. It means that an object graph couldn't be successfully created from a XAML file or from a XAML string or fragment. If this exception is thrown for a XAML page that the app attempted to load for UI, no elements of that page will be available. The best you might do is attempt to use another page that does parse for your Window.Content property and root visual. If you get this exception from loading smaller units of XAML, such as run-time parse operations from XamlReader.Load, your app can catch this exception and probably recover from it by using other XAML that parses, or by using some other fallback strategy.

By the time you're finalizing the production code for your app, you shouldn't be getting XamlParseException anymore from any XAML that's packaged with your app. During the design phase, Visual Studio can report on many parsing errors before that XAML even loads and before the app code is activated. It reports these errors or exceptions in output windows, or by syntax marking in the XAML text editor, or in other areas of the IDE. The design-time reporting shortens the cycle of fixing errors because you don't need to build and run the app just to load XAML and see errors. The design-time experience can also report on multiple errors in succession rather than failing on only the first exception encountered in a given XAML construct.

A XamlParseException contains several important pieces of info that can help you correct issues even if the IDE doesn't provide a design-time experience for error reporting. The Message property of the exception gives the line number and the column position within the text line where the parsing error occurs. If a specific attribute or element is what failed to be parsed, the name of that attribute or element is in the Message string too.
The named elements or attributes are for the specific error encountered, so if this is a first-chance exception you might only see an exception and related message for the first part of XAML that fails to parse, even though there might be additional errors further in. It's possible that fixing the error and reloading the XAML will reach another error that is lexically further into the same XAML construct. For severe markup errors, such as XAML that's completely malformed or invalid as XML, you might get a message that reports the error at line 0, position 0.

Note If you're converting code from other frameworks or are generally familiar with this exception from having handled it in other frameworks, note that the Windows Runtime implementation is a bit different. There aren't discrete LineNumber and LinePosition properties on the specific exception type. This info is all rolled up into an override of the common exception Message string.

Generally, don't have your Windows Runtime app code throw a XamlParseException that it creates and initiates. The exception should be reserved for use by full-featured XAML parsers such as the built-in parser for XAML, or for related facilities like the platform implementation of the XamlReader.Load API.

Tip Switching your Visual Studio debugging mode to support mixed-mode debugging can be useful because it exposes the native stack. The native stack sometimes has clues about underlying code-definition problems that surface as a XAML issue. The managed stack doesn't always capture the originating class name for resource lookups or walk into the constructor code that fails, especially if the class definition comes from a component. For more info on XAML parse exceptions, see XAML overview or Basic XAML syntax.

ElementNotAvailableException and ElementNotEnabledException

ElementNotAvailableException and ElementNotEnabledException aren't commonly used, so your app code won't typically have to handle these errors. Your app code might create, initialize, and throw these errors, though, specifically if you are implementing a custom automation peer for a custom control class. For more info, see Custom automation peers.

Displaying error info in the UI

In general, don't use the developer-oriented info from a System.Exception or any derivative directly in your app UI. For example, don't show the raw HRESULT code to the user. Although the Message string might contain info that helps you fix code while you're still developing the app, that string won't typically help the user avoid the underlying error whether you catch it or not. If you do catch the error, you might be able to interpret an error message and present a recovery path to the user, but do this in a way that the user can act upon. For example, you can catch the error without rethrowing it, and then offer the user a recovery option in your app's own UI. If your app is validating input that's passed to a service, and the service API throws an error on invalid input, show UI that enables the user to correct the input. Then you can have your code use the service API to try submitting the input again. If you don't have a good try-catch opportunity, at least write a case into your UnhandledException event handler that enables the user to correct the conditions, and that lets your app continue. If you do show UI to users when errors occur, use inline page techniques rather than dialogs. Use dialogs only when the user has to correct an error condition that blocks most of the functionality of your app.
For more info, see Guidelines and checklist for message dialogs.

If your app crashes, don't attempt to display any info to the user that describes the specifics of the crash even if the app lifetime state lets you do so. According to the guidelines for app suspend and resume, let the user see only the start screen on restart after any Windows Runtime app crash. When your app starts again, have it enable the user to resume what he or she was doing. If you used UnhandledException to write state info about your app before it was terminated, that info may be useful when restarting but not during the termination. Describing how to restore app state from recovered app data is beyond the scope of this topic; for more info about that, see Guidelines for app suspend and resume.

Exceptions in the other programming languages

- For an app written in C++/CX, all errors are reported as a Platform::Exception object. Standard exceptions might be mapped to one of the other Platform namespace exceptions. Custom exceptions are declared as COMException objects with a custom HRESULT. You can't create classes based on Platform::Exception; the compiler will block any attempt to do so. For more info about exceptions in C++/CX, see Exceptions (C++/CX).
- For an app written in JavaScript, all errors are represented as a JavaScript error object and can be handled by function(error) definitions. Errors from asynchronous API calls can be processed using the then/done functions.

Handling exceptions in network apps

There are some additional considerations for handling exceptions in apps that use network APIs in the Windows.Networking.Sockets and Windows.Web.Http namespaces. For more info, see Handling exceptions in network apps.

Related topics

- System.Exception
- Cross-reference: Standard exceptions and error codes
- Exceptions (C++/CX)
- Debug Windows Runtime apps in Visual Studio
http://msdn.microsoft.com/en-us/library/windows/apps/dn532194
CC-MAIN-2014-15
en
refinedweb
"Fernando Pérez" <fperez528 at yahoo.com> wrote in message news:9u0jor$2li$1 at peabody.colorado.edu... > Ursus Horibilis wrote: > > > Is there a way to force the Python run time system to ignore > > integer overflows? I was trying to write a 32-bit Linear > > Congruential Pseudo Random Number Generator and got clobbered > > the first time an integer product or sum went over the 32-bit > > limit. > > well, you can always put the relevant parts in a try:..except block, but I > suspect that will kill the speed you need for a rng. Yes, I did put the computation in a try-block and you're right, it killed the speed, but worse, it also didn't store the low-order 32-bits of the computation. > If it's just an academic > algorithm exercise It's not. Such computations are routinely used in algorithms like PRNG's and cryptography. > Out of curiosity, though: I've never written a rng myself. In C, an integer > overflow silently goes to 0. No. In C, an integer overflow silently throws away the high-order bits, leaving the low-order bits just as they would be if you had a larger field. As an illustration, assume we are working with 8-bit integers instead of 32-bit. Then here's what happens with integer overflows: signed char a = 127; /* in binary = 01111111 */ a = a * 2; /* Because of sign, a = (-2) */ a = a * 2; /* Because of sign, a = (-4) */ a = a + 128; /* Sign bit gets inverted, a = 124 */ > Are you relying on this 'feature' in your > algorithm? I'm just curious about to whether asking to ignore an overflow > without triggering an exception is a design feature or a flaw of the way you > are doing things. The algorithm relys on the ability to ignore the fact that overflow has occurred and continue with the computation using the low-order bits. This is not my algorithm; it's as old as computer science, and then some. Here is how we implement one particular 32-bit Linear Congruential Pseudo Random Number Generator in C: unsigned int Lcprng(unsigned int *seed) { *seed = 29 * (*seed) + 13; return (*seed); } How do you do this in Python?
https://mail.python.org/pipermail/python-list/2001-November/079361.html
CC-MAIN-2014-15
en
refinedweb
I had a productive meeting with Dr. Bob Fuller, University of Nebraska, emeritus, yesterday, a long time associate on that First Person Physics proposal to NSF (close, no cigar). He's working on the Karplus legacy, in turn stemming from Piaget. Science teaching went through a more successful transformation to "constructivist" (in the sense of student centered, construct your own model of reality) than USA math teaching managed (talking later 1900s), as the latter was mostly a panic response to Sputnik (so-called SMSG) and it's been a backlash ever since ("back to basics" to the point of near extinction of the subject, in terms of attracting fresh thinking). I'm not sure how it went in the UK, other Anglophone cultures. Others on edu-sig will have more place-based stories of curriculum writing (the evolution thereof) in your respective necks of the woods. Anyway, the physics community has been interested in video games as teaching devices right from the get go, with museum-grade simulators (like the ones pilots train in) representing a kind of high end state of the art (people actually get sick in those, given the realism). Speaking of getting sick, you'll find in my Vilnius slides, other places, a strong emphasis on "grossology" when working with kids. That's a part of kid culture I've always found missing from Squeak, which seems too squeaky clean, not sufficiently demented. For example, if using a system language and defining a function, you'll likely encounter strong type awareness, meaning every type declared *and* in a specific order e.g. f(int x, str y) and g(str y, int x) are quite strict about what they "eat" (function as mouth) and if you send them the wrong args, they will "barf" (has to be OK to say that, or you lose a lot of would-be attenders). The "type awareness" we want to induce is very traditional and follows that time-honored sequence: N, W, Z, Q, R, C. You might not think in quite those terms (namespaces differ) but we're talking natural, whole, integer, rational, real and complex respectively. These are types, and there's an historical narrative explaining the drive to expand to new horizons, starting with simple geometric ratios such as the body diagonal of a cube (math.sqrt(3)) or of the 1 x 2 rectangle (math.sqrt(5)). Given the historical dimension, it's quite appropriate to give these primitive geometric relationships a somewhat neolithic spin i.e. some talk of "cave people". This helps anchor some data points for later, when we get into trigonometry and navigation techniques (over desert, over sea). (gnu math teacher Glenn Stockton, expert in neolithic tool making, including for astronomical purposes) You get these right simple surds (e.g. phi, math.sqrt(2)) out of the gate, with compass and ruler, scribing in sand (on a spherical surface, so only locally Euclidean -- "close enough for folk music" as we say in geography class, zooming in on Greece in Google Earth maybe). Pi, unlike phi, is transcendental, not just irrational. I agree with posters here than Ramanujan is a great source of generators (in the Pythonic sense), plus I like playing that epic song. The complex numbers get added by those in the Italian peninsula, seeking to solve Polynomial Puzzles (Pisa a center for this kind of game playing, lots of betting, not unlike cockfighting). Fractals ala the Mandelbrot pattern, scribed in the complex plane, come later ("phi is the first fractal" -- a mnemonic we use). However, given this is alpha-numeric literacy i.e.
string-oriented as well as numerical, we don't stop with a recap of basic algebra. We need those regular expressions (good for URL parsing) and Unicode studies. Fine if the language arts teachers want to pick up the story at this point, take it away from the algebra teachers. We're talking DOM (Document Object Model), XML... what became of "the outline" in Roman times (structured thinking, rhetoric). I'd like to thank Ian Benson of Sociality / Tizard for confirming my impression that R0ml is correct in his approach, with strong emphasis on Liberal Arts (in healthy doses at OSCONs -- the guy is simply brilliant). 'Godel Escher Bach' is another trailblazing work, in making sure we keep the string games going, don't propagate the misinformation that "number crunching" is all that we're about. Knuth called 'em *semi*-numerical algorithms for a reason. But the question remains, if you *are* committed to keeping regular expressions within math: where to put them? I think the answer is pretty obvious: students need to work as a team to maintain some kind of Django web site, could be exclusively in-house (not public), with time line data, events in math history, adding and morphing over time. Actually parse URLs, triggering real SQL behind the scenes. This is all completely topical, very job market oriented. Yet we're in a constructivist realm, giving imaginations free play and lots of open-ended exploration time. I continue with the "gnu math" and "computer algebra" labeling, adding the Bucky stuff as a "secret sauce" -- spices it up to have something a little questioning of authority, especially in a math learning context, where some adults are accustomed to unchallenged authority. No longer, rest assured. Kirby
https://mail.python.org/pipermail/edu-sig/2009-January/008990.html
CC-MAIN-2014-15
en
refinedweb
Recent changes to feature-requests

Support for CDATA end (]]>) "literal" in string — 2014-02-22T23:54:25Z — Lajos Cseppentő

Sorry, the good link:

Support for CDATA end (]]>) "literal" in string — 2014-02-22T23:53:46Z — Lajos Cseppentő

Hello, I started to use Simple about 2 years ago. I really miss the support for escaping the CDATA end sequence (]]>). However, it can be done, like in the example in the end of this post. I would like to request this feature. Lajos Cseppentő, Hungary

#22 Support @Convert on @Attribute members — 2013-09-11T22:25:08Z — Daniel Demus

Also, unless there is some trickery going on, this should only require putting something like

    if (element == null) {
        if (type.getAnnotation(Attribute.class) == null) {
            throw new ConvertException("Element annotation required for %s", type);
        }
    }

somewhere in the private Convert getConvert(Type type) method in ConverterScanner.java

Support @Convert on @Attribute members — 2013-09-11T22:05:59Z — Daniel Demus

It seems you can only specify a converter on a member decorated with @Element. Seeing as a converter takes an InputNode, the restriction seems arbitrary. Making sure the converter receives a compatible InputNode and can produce compatible output is the responsibility of the Converter implementation.

Add @Documented annotation to Simple annotations — 2013-03-31T15:09:03Z — Eleftherios Kritikos

Please add the @Documented annotation to the annotations in package org.simpleframework.xml.* so that they appear in Javadocs as part of the documentation of the class. It would be nice to easily figure out how the class is serialized by reading the Javadoc.

Customizable Date and Number Transform classes — 2012-10-01T10:59:44Z — Michael Berestov

Could you expand existed Transform classes or add new public ones for custom formats? This is important especially for Date and Number types which often require conversion from a non-standard format. Right now I'm ending up on implementing in my applications the same code again and again:

    import org.simpleframework.xml.transform.Transform;

    import java.text.DateFormat;
    import java.util.Date;

    public class DateFormatTransformer implements Transform<Date> {
        private DateFormat dateFormat;

        public DateFormatTransformer(DateFormat dateFormat) {
            this.dateFormat = dateFormat;
        }

        public Date read(String value) throws Exception {
            return dateFormat.parse(value);
        }

        public String write(Date value) throws Exception {
            return dateFormat.format(value);
        }
    }

and then using them like this:

    DateFormat dateFormat = new SimpleDateFormat("dd/MM/yyyy HH:mm:ss");
    Transform dateTransform = new DateFormatTransformer(dateFormat);
    RegistryMatcher matcher = new RegistryMatcher();
    matcher.bind(Date.class, dateTransform);
    Serializer s = new Persister(matcher);

which is a bit clumsy.

Add parent-annotation — 2011-12-12T09:19:58Z — Ulrik Mikaelsson

I have the same need as in the linked request. Basically I need to, in a child, access the parent-pojo after deserialization.

I did not get the proposed solution working, but irregardless I think it's a usecase common enough to deserve @Parent-annotation, which should resolve to the closest "upstream" @Root-annotated object. It should not affect serialization.

I.E.

    @Root
    class Parent {
        @Element ArrayList<Child> child;
    }

    @Root
    class Child {
        @Parent
        Parent myParent;
    }

myParent should after serialization reference an instance of Parent. Mismatch between expected parent-type and actual parent-type should throw an exception (such as a ClassCastException).

Use name from @Root — 2011-04-20T07:42:35Z — Anonymous

If the @Element does not have a specified name the name of the parameter is used. I suggest that it should first be checked to see if the class of the parameter has a @Root tag with the name specified and if so then use that name. This could possibly be controlled using some flag.

inline for Element — 2011-04-19T08:19:29Z — Anonymous

Fairly often there is a need to "flatten" the generated xml in the sense that you have several Java objects contained in one Java object which should generate a flat xml structure. They should be serialized into the same element due to the structure of the required xml protocol. This is exactly the functionality that is already provided for ElementList.

Example:

    public class Message {
        @Element(inline=true)
        private Header myHeader;
        @Element(inline=true)
        private Product myProduct;
    }

which should be serialized into a structure like

    <message>
        <sender>a</sender>
        <timestamp>1</timestamp>
        <productid>2</productid>
    </message>

We assume that sender and timestamp reside in the Header object and the productid in the Product object.

Since this makes deserialization a bit trickier, as a first step it could be made available for serialization only. The API seems to be pretty prepared for this change already.

It could be argued that a Converter should be used but that pretty much removes the idea of having Simple in the first place, you might as well build the xml in a StringBuffer in that case. In this particular case the Product is also a large number of subclasses meaning that there would have to be a Converter per subclass.

Maybe there is already some sneaky way to achieve this functionality but I have not been able to figure it out.

Namespace prefix handling — 2011-02-23T10:08:18Z — Anonymous

From my understanding the way to add a wanted prefix to an element is to use @Namespace(reference="something"). But that sort of destroys the idea of having a prefix which is supposed to operate as an alias, not having to type the long URLs. When I tried @Namespace(prefix="something") the generated XML contained an empty xmlns declaration as well. My suggestion is that declaring @Namespace(prefix="something") means that only the prefix is added to the element name, no xmlns declaration. If this is not possible due to some additional need to use prefix to set empty xmlns then I suggest that you introduce something like @Namespace(alias="something") that will only prepend the element.
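A rough sketch of what that last request describes, using the stock org.simpleframework.xml annotations (the Envelope class and URI are made up for illustration):

    @Root
    @Namespace(prefix = "env", reference = "http://example.org/envelope")
    public class Envelope {

        @Element
        private String body;
    }

    // Serializing today yields the prefix plus a declaration, roughly:
    //   <env:envelope xmlns:env="http://example.org/envelope">...</env:envelope>
    // The request is a prefix-only mode that emits <env:envelope> alone,
    // with no xmlns declaration attached to the element.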
http://sourceforge.net/p/simple/feature-requests/feed.atom
CC-MAIN-2014-15
en
refinedweb
10 August 2010 06:08 [Source: ICIS news] SINGAPORE (ICIS)--MEGlobal has nominated its September 2010 Asian Contract Price (ACP) for monoethylene glycol (MEG) at $870/tonne (€661/tonne), up by $30/tonne from its August nomination, a company official said on Tuesday. “Our September MEG ACP … reflects the short term supply and demand situation in the Asian market,” said the source. The price is on a cost and freight (CFR) basis. MEG prices in China spiked $70/tonne or nearly 10% in the past month to $875-880/tonne CFR China Main Port (CMP) on Tuesday morning, based on ICIS data, driven by strong crude prices and speculative trading. The other two MEG majors - SABIC and Shell Chemical - have yet to announce their September ACP nominations.
http://www.icis.com/Articles/2010/08/10/9383497/meglobal-hikes-september-meg-acp-nomination-by-30tonne.html
CC-MAIN-2014-15
en
refinedweb
All about Async/Await, System.Threading.Tasks, System.Collections.Concurrent, System.Linq, and more… A common problem users run into when writing parallel applications is the lack of thread-safety support in the .NET collection classes. Users typically need to implement their own synchronization mechanism for achieving the goal of safely reading/writing data to the same shared collection. A largely deprecated solution was to use the synchronized collections introduced in the .NET Framework 1.0, or to use the SyncRoot mechanism exposed through the ICollection interface. However, both of these approaches are not recommended to be used, for a variety of reasons, including less-than-ideal performance, but also because by using them, developers can set themselves up for a multitude of race conditions. See for a discussion of these design issues. Because of all of these reasons, the generic collections introduced in .NET Framework 2.0 do not offer a mechanism of synchronization anymore (i.e. they don't provide a static Synchronized method), forcing developers to do the synchronization manually. Parallel Extensions to the .NET Framework aims to fill this gap by introducing new thread-safe generic collections that don't suffer from the same issues as their antiquated "Synchronized" relatives. ConcurrentStack<T> is one of them.

At the moment, ConcurrentStack<T> is implemented as a LIFO linked list and uses an interlocked compare exchange for the Push and Pop actions. However, it is not recommended that users rely on the internal implementation details.

The main and most used methods of a stack data structure are usually Push, Pop, and Peek. A quick look at ConcurrentStack<T> will show that it provides Push, but not Pop and Peek: instead, it provides TryPop and TryPeek. This is quite intentional. One of the most common patterns when using a Stack<T> is to check the stack's Count, and if it's greater than 0, pop an item from it and use that item, e.g.:

T item;
if (s.Count > 0)
{
    item = s.Pop();
    UseData(item);
}

But in a world where this stack is being accessed by multiple threads concurrently, even if the individual Count and Pop methods were thread-safe, we still run into the issue that the stack could be emptied between a successful non-emptiness check and the attempt to pop. ConcurrentStack<T> takes this into account in the APIs it provides.
To safely attempt to pop an item from the stack, we can instead write the code:

T item;
if (s.TryPop(out item))
{
    UseData(item);
}

In this way, using a ConcurrentStack<T>, a producer/consumer scenario could be implemented like below:

//initialize an empty concurrent stack
ConcurrentStack<string> m_myConcurrentStack = new ConcurrentStack<string>();

//every producer will produce this number of elements
const int COUNT = 10;

public void ConsumeDataFromStack()
{
    //create the producers
    for (int i = 0; i < COUNT; i++)
    {
        Thread currentProducer = new Thread(new ThreadStart(delegate()
        {
            for (int currentIndex = COUNT; currentIndex > 0; currentIndex--)
            {
                m_myConcurrentStack.Push(
                    Thread.CurrentThread.ManagedThreadId.ToString() + "_" + currentIndex.ToString());
            }
        }));
        currentProducer.Start();
    }

    //allow the worker threads to start
    Thread.Sleep(500);

    //consume data
    string currentData = "";
    while (m_myConcurrentStack.TryPop(out currentData))
    {
        //ConsumeData(currentData);
    }
}

PLINQ queries over a concurrent stack

Being an IEnumerable<T>, a concurrent stack can be used as a data source in PLINQ queries as well. Below is a sample of such usage.

volatile bool m_producersEnded = false;
const int COUNT = 10;

public void PLinqOverConcurrentStack()
{
    Thread[] producers = new Thread[COUNT];
    for (int i = 0; i < COUNT; i++)
    {
        Thread currentProducer = new Thread(new ThreadStart(delegate()
        {
            for (int currentIndex = COUNT; currentIndex > 0; currentIndex--)
            {
                m_myConcurrentStack.Push(currentIndex.ToString());
            }
        }));
        producers[i] = currentProducer;
    }

    Thread pLinqConsumer = new Thread(new ThreadStart(delegate()
    {
        while (!m_producersEnded)
        {
            var currentValues = from data in m_myConcurrentStack
                                where data.Contains("9")
                                select data;
            foreach (string currentData in currentValues)
            {
                //consume data
            }
        }
    }));

    //start the consumer
    pLinqConsumer.Start();

    //start the producers
    foreach (Thread producer in producers)
    {
        producer.Start();
    }

    //join the producers and the consumer
    foreach (Thread producer in producers)
    {
        producer.Join();
    }
    m_producersEnded = true;
    pLinqConsumer.Join();
}

How about really mixing it up...

using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using System.Threading.Collections;

class ConcurrentStackWithTaskCreate
{
    const int COUNT = 10;
    static ConcurrentStack<string> _stack;

    public static void ConsumeDataFromStack()
    {
        Action<int> worker = (j) =>
        {
            _stack.Push(Thread.CurrentThread.ManagedThreadId.ToString() + "_" + j);
        };

        Parallel.For(0, COUNT, (i) => Parallel.For(0, COUNT, j => worker(j)));

        Thread.Sleep(500);

        // consume data
        string data;
        while (_stack.TryPop(out data))
        {
            Console.WriteLine(data);
        }
    }

    static void Main()
    {
        _stack = new ConcurrentStack<string>();
        ConsumeDataFromStack();
    }
}

awesome! should we expect ConcurrentQueue soon? ;)

Hi Cristina, Just to clarify, this ConcurrentStack<T> class does not allow simultaneous readers and writers, correct? One cannot iterate over the collection (e.g. evaluate a LINQ query) while another thread is modifying the collection, right? Another question. Since shared state is one of the big concurrency blockers to using LINQ, have you considered making ConcurrentStack<T> an immutable data structure? This way you wouldn't have to do any interlocked exchanges and consumers wouldn't have to worry about reading the collection on one thread and writing to it on another?

Nice article. I hope there are plans to include the TPL in the reference source section of .NET.
Developers could learn a lot from the large quantity of comments in the TPL code :) Kevin

My favorite quote from the post above: "However, it not recommended as the user to rely on the internal implantations details."

PingBack from The June 2008 CTP of Parallel Extensions provides the first look at its 3rd major piece, a set of coordination

Judah, ConcurrentStack<T> allows stack updates from multiple threads. It also allows iterating over it while other threads concurrently update it. However, when we do an enumeration, only a snapshot of the current collection is taken. Please find below an example of such usage. Parallel.ForEach is used in order to enumerate over the current collection.

const int COUNT = 300;

public void EnumeratingOverConcurrentStack()
{
    ConcurrentStack<string> enumStack = new ConcurrentStack<string>();
    int internalCount = 20;

    //create the worker threads
    Thread[] workers = new Thread[COUNT * 2 + 1];
    for (int i = 0; i < COUNT; i++)
    {
        Thread currentWorker = new Thread(new ThreadStart(delegate()
        {
            int currentIndex = COUNT;
            for (int k = 0; k < internalCount; k++)
            {
                m_myConcurrentStack.Push(currentIndex.ToString());
            }
        }));
        workers[i] = currentWorker;
    }

    //the iterator thread takes a snapshot while the workers push
    Thread iterator = new Thread(new ThreadStart(delegate()
    {
        Parallel.ForEach<string>(m_myConcurrentStack, (s) => enumStack.Push(s));
    }));
    workers[COUNT] = iterator;

    //create a second batch of worker threads
    for (int i = 0; i < COUNT; i++)
    {
        Thread currentWorker = new Thread(new ThreadStart(delegate()
        {
            int currentIndex = COUNT;
            for (int k = 0; k < internalCount; k++)
            {
                m_myConcurrentStack.Push(currentIndex.ToString());
            }
        }));
        workers[COUNT + i + 1] = currentWorker;
    }

    //start the worker threads
    foreach (Thread worker in workers)
    {
        worker.Start();
    }

    //join the threads
    foreach (Thread worker in workers)
    {
        worker.Join();
    }

    //we can safely get the count - all the worker threads are finished
    Console.WriteLine("Stack count {0}", m_myConcurrentStack.Count);
    Console.WriteLine("Snapshot Stack count {0}", enumStack.Count);
}

In regard to your question about using immutable data structures, we decided that the current implementation is more useful at the moment; its functionality is closer to that provided by the existing synchronized collections. Hope that this helps.

I have a plea for clarity in the TryXXX patterns of the concurrent collections. In the post you use the out parameter pattern, which of course is a normal idiom of C#...

T item;
if (s.TryPop(out item))
    UseData(item);

Will you please consider adding Nullable<T>-returning TryXXX methods on the types included in System.Threading.Collections. They would support the following signature:

public Nullable<T> TryXXX(...);

To support the following, and IMHO more readable, pattern:

T? item = s.TryPop();
if (item != null)
    UseData(item);

I know for sure that I am not the only one that wishes to have this feature and doesn't want to have to keep creating extension methods on types. Thanks, Anthony

Is it possible to send a ManualResetEvent or set a timeout to the ConcurrentStack<T> methods TryPop or TryPush, so we're able to bail out (for example if there is a lengthy enumeration taking place and the application is being shut down)? I ask this because I guess that an enumeration over the structure locks it. Or does it support simultaneous readers and writers?

Interesting, so when we do iteration, we're actually iterating over a copy! Cool. Well, that is kind of functional/immutable then, isn't it? I still suggest that a completely immutable data structure may be a better fit for PLINQ, but hey, you guys know your stuff. Thanks for the info, that was helpful.

Judah wrote: "Interesting, so when we do iteration, we're actually iterating over a copy! Cool. Well, that is kind of functional/immutable then, isn't it?"
Well it's not really an immutability issue, but more an approach at managing state via isolation, since the underlying collection could change its state after you made your copy.

Patrik, TryPop/Push are non-blocking operations. There's no need to set a timeout, since there's nothing to timeout. If you want blocking, you can use BlockingCollection<T> (which works with both ConcurrentStack<T> and ConcurrentQueue<T>), and its methods do support timeouts.

Chris, the June 2008 CTP also includes a ConcurrentQueue<T> in addition to ConcurrentStack<T> :)
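A rough sketch of that blocking-with-timeout pattern (method shapes here follow the released System.Collections.Concurrent API; the CTP-era System.Threading.Collections version may differ):

// Wrap a ConcurrentStack<T> in a BlockingCollection<T> to layer
// blocking and timeout semantics over the non-blocking stack.
var store = new BlockingCollection<string>(new ConcurrentStack<string>());

string item;
if (store.TryTake(out item, 500)) // wait up to 500 ms, then bail out
{
    // consume item
}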
http://blogs.msdn.com/b/pfxteam/archive/2008/06/18/8614596.aspx?PageIndex=1
CC-MAIN-2014-15
en
refinedweb
#include <Xm/Print.h>

Widget XmPrintSetup(
    Widget video_widget,
    Screen *print_screen,
    String print_shell_name,
    ArgList args,
    Cardinal num_args);

A function that does the appropriate setup and creates a realized XmPrintShell that it returns to the caller. This function hides the Xt details needed to set up a valid print shell hierarchy.

Returns the id of the XmPrintShell widget created on the X Print Server connection, or NULL if an error has occurred.

None.

XmPrintShell(3), XmRedisplayWidget(3), XmPrintToFile(3), XmPrintPopupPDM(3)
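A minimal usage sketch (the video_shell and pscreen variables are assumed to come from an existing application shell and an X Print Server connection; error handling is elided):

#include <Xm/Print.h>

/* video_shell: the application's on-screen shell widget.
   pscreen:     a Screen* obtained on the X Print Server connection. */
Widget print_shell = XmPrintSetup(video_shell, pscreen, "Print", NULL, 0);
if (print_shell != NULL) {
    /* Create the widgets to be printed under print_shell,
       then render them on the print connection. */
    XmRedisplayWidget(print_shell);
}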
http://www.makelinux.net/man/3/X/XmPrintSetup
CC-MAIN-2014-15
en
refinedweb
Line reads Analying the bug in the Simple Dot Com Game Should read Analyzing the bug in the Simple Dot Com Game Reads: "Do you believe than a technical book..." it should read: "Do you believe that a technical book..." to increase the chance that the content gets coded coded into more should read to increase the chance that the content gets coded into more "Note: there is no Java 2 pre-OS X Mac operating systems" should be: "Note: there is no Java 2 on pre-OS X Mac operating systems" headfirstjava.com is listed for downloading the code - but the site doesn't seem to be related to the book - did they lose the URL? Note from the Author or Editor:page xxvii: last sentence, replace headfirstjava.com with: headfirstlabs.com JButton button = new JButton("Roll should read, JButton button = new JButton("Roll 'em!"); The next line String[] choices = { "1", "2", "3", "4", should be: String[] choices = { "1", "2", "3", "4", "5"}; missing semi-colon after the statement "Button c = new Button("Shoot me")" if ((x < 3) & (name.equals("Dirk")) { should be if ((x < 3) & (name.equals("Dirk"))) { Reads: you'll learn about all the Java types in chapter 4 Should read: you'll learn about all the Java types in chapter 3 int x = 4; // assign 3 to x now reads int x = 4; // assign 4 to x x = x + 1; // or we'd loop forever now reads: x = x - 1; // or we'd loop forever System.out.println("Value of x is " + x); 2nd quotation mark " is pointing in the wrong direction. The line with x = x + 1; should be brought about 2 characters forward. (Indentation problem) The command specified under "Given the output:" : % java Test now reads % java DooBee Lines 9, 10 & 15 2nd quotation mark " is pointing in the wrong direction. The third word in the wordListTwo array reads: "valued-added" should read: "value-added" The first few lines of the example reads as follows: "what we have here is a... should say (to be consistent with the code on page 14): "what we need is a..." "Declaraing" now reads: Declaring System,out,print("an"); should be: System.out.print("an"); The Pool Puzzle snippets have System.out.print("a "); System.out.print("n "); I.e., there appear to be spaces after the letters. it should be: System.out.print("a"); System.out.print("n"); Also, the snippet System,out,print("an"); is not syntactically correct since it uses commas rather than periods between "System", "out" and "print" The comments state that the code would run forever without a line added to the program, however the line added in the answer would not prevent infinite looping: class Exercise1b { public static void main(String[] args) { int x=1; while (x<10) { x=x-1; //THIS IS THE ADDED LINE if (x>3) { System.out.println("big x"); } } } } The added line now reads x = x + 1; 1st amoeba: "Amoeba" is misspelled as "Ameoba" in "Ameoba rotation point..." 2nd amoeba "amoeba" is spelled as "ameba" (ameba is an acceptable spelling, just does not match the rest of the page) The sentence says: "...he added an attribute that all Ameboas would have". should be "Amoebas". First sentence reads: "So objects have instance and variables and methods,..." now reads: "So objects have instance variables and methods,..." 
In the two following code snippets "System.out.print" now reads "System.out.println" void playTopHat() { System.out.print("ding ding da-ding"); } void playSnare() { System.out.print("bang bang da-bang"); } additionally, to create the specified output "bang bang da-bang" now reads "bang bang ba-bang" The code is given correctly in the exercise solutions on page 44. The 7th item in answers says "I have behavior" it should say "I have behaviors" The 8th item in answers says "Objects use me" The actual puzzle says "I am located in objects" "compiler always errors" Should be "compiler always errs" The last sentence of the last paragraph is cut off. It reads: "Don't worry, by the end of the book you'll have most of" The continuation now reads: "these memorized". theJVM's development team should be the JVM's development team ... allocation issues, you're Big Concern should be ... allocation issues, your Big Concern Text reads: Create a new int array with a length of 7 now reads: Create a new Dog array with a length of 7 in class BooksTestDrive, the next to last statement is System.out.println(myBooks[x].Author); now reads: System.out.println(myBooks[x].author); Currently - If a method takes an parameter ... now reads - If a method takes a parameter ... In Java, You don't... should be In Java, you don't... So parameters are ALWAYS initialized, because they compiler... should be: So parameters are ALWAYS initialized, because the compiler... "(although it doesn't care about the size of the variable, so all the extra zeroes on the left end don't matter." Should be : "(although it doesn't care about the size of the variable, so all the extra zeroes on the left end don't matter)." "it will be same for two references to a single object." should be: "it will be the same for two references to a single object." The solution on page 91 used many "attendees" names that are not consistent with the Exercise on page 87. For example, the following words: instance variables args encapsulation should be changed to: instance variable argument encapsulate The sentence on page 72..."A method uses parameters, A caller passes arguments" contradicts the question on "Who am I" where it asks "A method can have many of these______". From my understanding I thought it was parameters but you have the answer as arguments. Note from the Author or Editor:page 89: add "parameter" to the list of attendees. page 93: the existing answer "argument" should be: "arguments or parameters" Ouch! you sunk Go2.com should be: Ouch! you sunk AskMe.com In the pseudocode for the method, the following statement: "COMPUTE a random number between 0 and 4..." should be changed to: "COMPUTE a random number between 0 and 5..." in order to be consistent with the statment on page 106. "COMPUTE a random number between 0 and 5 that will be the starting location cell position" should be changed to: "COMPUTE a random number between 0 and 4 that will be the starting location cell position" Since the array begins at 0 and ends at 6, if you fill a series of three cells, beginning in cell 5, your final cell will be in cell 7, which does not exist in the array. This also means that the errata for page 105 (included below), is in error. On page 108, the code for the SimpleDotComGame says that if the dot com is killed, you should "System.out.println(numOfGuesses + " guesses")" On page 111, where there is an example of the code running, after printing "kill", the output is "You took 6 guesses". The code never prints "You took". 
In the box Converting a String to an int, the line that reads: Sting num = "2"; should read String num = "2"; myList.remove(s); should be: myList.remove(0); "if (locationCells.isEmpty())" should read: "if (locationCells.isEmpty()) {" Current: for (int z = 0; z < a.size(); z++) { now reads: for (int z = 0; z < al.size(); z++) { "public class ArrayList6 {" now reads: "class ArrayList6 {" to be consistent with the solution on page 159. Alternatively, "class ArrayList6 {" on page 159 now reads: "public class ArrayList6 {" to be consistent with the Code Magnets exercise on page 157. The sentence: "Does it makes sense to say X IS-A type Y?" Should be: "Does it make sense to say X IS-A type Y?" the word "makes" should be changed to "make" In the third line of the TestBoats class definition: Current: _____________ b1 - new Boat (); Should be: _____________ b1 = new Boat (); The two last lines matching candidate code and output are out of place (lower than they should be). Note from the Author or Editor:The brackets and arrows for the "mixed messages" should be returned to the alignment in the first edition. "so Canine, for example, could implement an abstract class from Animal" should be: "so Canine, for example, could implement an abstract METHOD from Animal" % java AnimalListUser should be % java AnimalTestDrive This is just a simple formatting error. On page 204 in the second paragraph under "Every class in Java extends class Object" there is a large space. It seems that a new line was inserted before the 'rn' of 'return'. simple typo: "That't OK." should be "That's OK." In the Object diagram: change method booleanequals() to: boolean equals() Simple typo: "(Object don't truly forget" should be "(Objects don't truly forget" The text "The Dog object in the array can't..." should read "The Dog object in the ArrayList can't..." -a Snowboard obect- should be -a Snowboard object- if (d instanceof Dog) { Dog d = (Dog) o; } should read: if (o instanceof Dog) { Dog d = (Dog) o; } The instanceof comparison should use 'o' not 'd'. varaiable, now reads variable, The answer on page 230 says "I can look different to different people" and the exercise question on 227 says "I can appear different to different people". The paragraph just above the illustrated drawing of the stack: "barf() declares and creates a new Duck reference variable 'b'..." That variable 'b' should be changed to 'd' "object's it holds references to?" should read: "objects it holds references to?" "whose" should be "who is" Death-by-Dhocolate Brown ought to be Death-by-Chocolate Brown "default accessn chapter 16 and appendx B" should be "default access in chapter 16 and appendix B" in the sentence "And what do you think that supeclass constructor does?" should be "And what do you think that superclass constructor does?" The footnote states: "... (you'll see that on page 22)." now reads: "... (you'll see that on page 252)." "Look at the Stack series on page 17 again," now reads "Look at the Stack series on page 248 again" The text just below the Stack figures in item 2 and item 4: Item 2: "go() plops on top of the Stack. 'x' and 'y' are..." This is incorrect. 'x' and 'y' should be changed to 'x' and 'z' Item 4 also needs to change 'x' and 'y' to 'x' and 'z' "trying to make an new instance of the class," should read: "trying to make a new instance of the class," "Assign a value to a final instance variable must be either at the time it is declared, or in the contructor." 
Should read: "Assigning a value to a final instance variable must be done either at the time it is declared, or in the contructor." You can't guarnatee should be: You can't guarantee And we have to this now. should be: And we have to do this now. ".....send something to the another machine." Should be ".....send something to another machine". "Jimmy Hendrix" should read "Jimi Hendrix" "Musical Insrument Digital Interface" now reads "Musical Instrument Digital Interface" In the letter from the compiler to Geeky he says "... be sure to catch any problems before all hell breaks lose." Lose should be loose. They're knows as (big surprise here)... shoul be: They're known as (big surprise here)... 4 lines from bottom of first paragraph: you're now reads you've In the Stack figure, the laundry method should be changed to doLaundry method. The first sentence reads: "Remember from page three, we looked at how MIDI data holds..." MIDI data is discussed on the third page of the chapter, not the third page of the book. Therefore, the sentence should be changed to: "Remember from page 299, we looked at how MIDI data holds..." The MIDI sound examples' process never terminates, so after you run it, you don't get your command-line back. This is because the Sequencer starts a separate thread (we haven't gotten to the threads chapter yet, so don't worry about it at this point), and it keeps it running *even after your main method completes*. For now, think of it as almost like a separate program being launched -- so that even when your initial program (the one that starts with main()) returns, that *other* program, the one running the Sequencer, is still running. For now, we'll add two things at the end to close it down: Instead of: player.start(); } catch (Exception ex) { ... } Insert the following lines between player.start() and the catch block: Thread.sleep(1000 * 2); // inserts a pause to give the sound a chance to play player.close(); // closes the sequencer System.exit(0); // quits the Java application So... this is what it should look like now: player.start(); Thread.sleep(1000 * 2); // new player.close(); // new System.exit(0); // new } catch (Exception ex) { // everything else... ============= "public class MiniMusicCmdLine" now reads: "public class MiniMiniMusicCmdLine" to match the instantiation of the object two lines down. Alternatively, "MiniMiniMusicCmdLine mini = new MiniMiniMusicCmdLine();" now reads: "MiniMusicCmdLine mini = new MiniMusicCmdLine();" to match the class definition (MiniMusicCmdLine). Most the code you write should be: Most of the code you write If you want MouseEvents, implement the MouseEvent interface. should be If you want MouseEvents, implement the MouseListener interface. "If your class wants to to know..." should be changed to "If your class wants to know..." interrface should be spelled interface. There is a line missing from the method: Method reads: public void go() { JFrame frame = new JFrame(); button = new JButton("click me"); button.addActionListener(this); frame.getContentPane().add(button); frame.setSize(300, 300); frame.setVisible(true); } Now reads: public void go() { JFrame frame = new JFrame(); button = new JButton("click me"); button.addActionListener(this); frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE); // Missing frame.getContentPane().add(button); frame.setSize(300, 300); frame.setVisible(true); } "Add buttons, menus, radion buttons, etc." 
"radion" now reads "radio" Under the section "Display a JPEG", in the paragraph explained the drawImage method, the second sentence begins: "This says 3 pixels from the left edge of the panel and 2 pixels from the top edge of the panel.." should be changed to: "This says 3 pixels from the left edge of the panel and 4 pixels from the top edge of the panel.." because the parameter list of the drawImage is : (image,3,4,this) all references to Graphics2d should be Graphics2D (capital D). Using Graphics2d will give you a 'cannot resolve symbol' error. It says: Given the pictures on page 17 ... it now reads: Given the pictures on page 351 ... At the bottom of the page, in the paintComponent method, the comment refered to an incorrect page number. The second comment that said: //See page 13 for the code should be changed to //See page 349 for the code it says: label.setLabel("That Hurt!"); It now reads: label.setText("That Hurt!"); Change: "... and the server will have to be certain that all instances of that entity bean in sync ..." To: "... and the server will have to be certain that all instances of that entity bean are in sync ..." The bullet points are numbered incorrectly; They are: 1 1 1 1 They should be: 1 2 3 4 "It builds directoy on Version Two." should read "It builds directly on Version Two." There appears to be an extra space between public and void in the method line: "public void setUpGui() {" should read: "public void setUpGui() {" "layed out" should be "laid out" Change: "... primary key type (<prim-key-class>, but I don't see ..." To: "... primary key type (<prim-key-class>), but I don't see ..." "FlowLayout places components lleft to right" should read: "FlowLayout places components left to right" track.add(makeEvent(192,9,1,0,15)); should be 16 instead of 15. The bullet point after C should be D. In the book it is A; thus the sequence is : A B C A E The code on this page uses GameCharacter which is mentioned on page 413, however there isn't any code for this class. Additionally, the 2nd comment following the code on page 425 says, "// and then save the Dogs exactly as they are now", which is not the object type being used. System.out.println(one.getPower() + "," + one.getType() + "," + one.getWeapons()); System.out.println(two.getPower() + "," + two.getType() + "," + two.getWeapons()); System.out.println(three.getPower() + "," + three.getType() + "," + three.getWeapons()); should read: System.out.println(oneAgain.getPower() + "," + oneAgain.getType() + "," + oneAgain.getWeapons()); System.out.println(twoAgain.getPower() + "," + twoAgain.getType() + "," + twoAgain.getWeapons()); System.out.println(threeAgain.getPower() + "," + threeAgain.getType() + "," + threeAgain.getWeapons()); "FileInputStream" now reads: "FileOutputStream" "triggered when use chooses...." should be: "triggered when user chooses...." File myFile = new File("MyText.txt); Should read: File myFile = new File("MyText.txt"); "Quiz Card Reader code" should read: "Quiz Card Player code" and "public class QuizCardReader" should read: "public class QuizCardPlayer" to match paragraph 2, item 2, on page 428 ("2) QuizCardPlayer, a playback engine . .." and to match grapic with text on lower right-hand corner of page 428, and to match code outline on page 435. in the first block of handwritten text, the word viewing is spelt incorrectly: ... See if they're currently veiwing a question or answer should be: ... 
See if they're currently viewing a question or answer the last line of the classes' method uses a deprecated method on the JButton nextButton : nextButton.disable() now reads nextButton.setEnabled(false); that ships with you Java development kit. should read that ships with your Java development kit. What's the first foreign country due south of Detroit Michigan? should be ....due north of Detroit Michigan Code fragment: Printwriter writer = new PrintWriter(chatSocket.getOutputStream); should be Printwriter writer = new PrintWriter(chatSocket.getOutputStream()); "PrintWriter acts as it's own bridge" should read: "PrintWriter acts as its own bridge" "The accept() method blocks (just sits there) while its waiting" should read: "The accept() method blocks (just sits there) while it's waiting" "We want something to run continusouly..." should be "We want something to run continuously..." "myThread .start();" should read: "myThread.start();" It works either way, but to be consistent with the way you have used the dot notation throughout this book, the extra space is distracting. The second line of the very first paragraph refered to an incorrect page number: "...each time we ran it? Look back at page 28..." should be changed to: "...each time we ran it? Look back at page 478..." class ThreadTester { should be: class ThreadTestDrive { and Runnable theJob = new HelloThread(); should be: Runnable theJob = new MyRunnable(); Missing a closing "}" So, if you don't lock the back account should read So, if you don't lock the bank account "mport java.io.*;" should read: "import java.io.*;" before Ryan has a chance to wakes up should be before Ryan has a chance to wake up The side description of the first example says "Now you have to specify the PATH to get the actual class file". it should be "Now you have to specify the PATH to get the actual source (.java) file". "JVM will see that, and immediately look inside it's" should read: "JVM will see that, and immediately look inside its" To make a Java Web Start app, you to .jnlp should read To make a Java Web Start app, you the .jnlp <homepage href="index.html/> should read: <homepage href="index.html"/> "Your client object ge to" should read: "Your client object gets to" "Remember, must be able to see......" should be: "Remember, rmic must be able to see..." "rmiregistery using the static" should read: "rmiregistry using the static" The description for the step is the same for steps 3 and 5. The one for step 5 should be different. 'thing' is coming back from the server as a reuslt of should read 'thing' is coming back from the server as a result of extra space between y and ou "response. Imagine a reasonably complext HTML page, and now" should read: "response. Imagine a reasonably complex HTML page, and now" "intereted" should be "interested" "do you have anything implements..." should be "do you have anything that implements..." "Here's the serialized object the Scientific Calculator service registered with me." Suggested improvement: "Here's the serialized object that the Scientific Calculator service registered with me." Instead of a diagram of the ServiceServer and ServiceServerImpl, there are two diagrams for ServiceServer. 
In the definition of the method getGuiPanel() two lines have been truncated: The line incorrectly ends with an opening square bracket [ -- it should be an opening brace { That is: public static void main(String [] args) [ should be: public static void main(String [] args) { Per the JDK_1.4.2/docs/api, LinkedHashSet does not implement iteration in order of most recently accessed; © 2014, O’Reilly Media, Inc. (707) 827-7019 (800) 889-8969 All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
http://www.oreilly.com/catalog/errata.csp?isbn=9780596004651
CC-MAIN-2014-15
en
refinedweb
The goal for this project was to allow a user to input the name and age of a person, as many times as they want, until they type "quit". After they type quit, the program is to print the name and age of the youngest person, and the oldest person. So far, I have a (probably not good) way for the user to input all the information, and it moves to another method when the user types quit. I found a way to get all the ages to sort, but that's where I'm stuck. When I just print the array (which you see in my code) to check how it sorts, it prints something like 90 zeros first, then the ten or so real ages in order. I'm having a pretty hard time explaining this, but hopefully someone can understand. Here is my code; any help would be appreciated.

import java.util.*;

public class age {

    public static void main(String[] args) {
        Scanner keyboard = new Scanner(System.in);
        int[] personAge = new int[100];
        String[] personName = new String[100];
        int x = 0;
        char responseChar;
        responseChar = 'a';

        while (responseChar != 'q') {
            System.out.println("Please enter a name: ");
            personName[x] = keyboard.nextLine();
            responseChar = personName[x].charAt(0);
            if (responseChar == 'q') {
                sortAges(personAge);
            } else {
                System.out.println("Please enter their age: ");
                personAge[x] = keyboard.nextInt();
                keyboard.nextLine();
                ++x;
            }
        }
    }

    public static void sortAges(int[] personAge) {
        int sortAge[] = personAge;
        Arrays.sort(sortAge);
        System.out.println(Arrays.toString(sortAge));
    }
}
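One common way to solve this, sketched here as a suggestion rather than a fix to the code above, is to skip sorting entirely and track the youngest and oldest entry while reading input; the zeros appear because the unused slots of the 100-element array get sorted too. (The class name AgeTracker is made up.)

import java.util.Scanner;

public class AgeTracker {
    public static void main(String[] args) {
        Scanner keyboard = new Scanner(System.in);
        String youngestName = null;
        String oldestName = null;
        int youngestAge = Integer.MAX_VALUE;
        int oldestAge = Integer.MIN_VALUE;

        while (true) {
            System.out.println("Please enter a name: ");
            String name = keyboard.nextLine();
            if (name.equalsIgnoreCase("quit")) {
                break;
            }
            System.out.println("Please enter their age: ");
            int age = keyboard.nextInt();
            keyboard.nextLine(); // consume the newline left behind by nextInt()

            // Update the running minimum and maximum as we go
            if (age < youngestAge) {
                youngestAge = age;
                youngestName = name;
            }
            if (age > oldestAge) {
                oldestAge = age;
                oldestName = name;
            }
        }

        if (youngestName != null) {
            System.out.println("Youngest: " + youngestName + ", age " + youngestAge);
            System.out.println("Oldest: " + oldestName + ", age " + oldestAge);
        }
    }
}

Because only the running minimum and maximum are kept, there is no array full of unused zeros to sort.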
http://www.javaprogrammingforums.com/file-i-o-other-i-o-streams/1100-more-java-homework-help.html
CC-MAIN-2014-15
en
refinedweb
Savable in Node Based MultiViewElement By Geertjan on Jan 07, 2013 Some time ago, Timon Veenstra from Ordina wrote about how to create MultiViewElements for Nodes. A follow up question is... how to enable the Save Action for MultiViewElements that have been created for Nodes? The constructor of a MultiViewElement always receives a Lookup. But how to add a Saveable object to that Lookup so that the Save Action can become enabled? My solution is weird, but the only thing that I can think of: put the InstanceContent into the Node's Lookup, i.e., add InstanceContent to InstanceContent: public class CustomerNode extends BeanNode<Customer> implements Serializable { public CustomerNode(Customer bean) throws IntrospectionException { this(bean, new InstanceContent()); } private CustomerNode(Customer bean, InstanceContent ic) throws IntrospectionException { super(bean, Children.LEAF, new AbstractLookup(ic)); ic.add(ic); setDisplayName(bean.getName()); } @Override public Action getPreferredAction() { return new AbstractAction("Edit") { @Override public void actionPerformed(ActionEvent e) { TopComponent tc = MultiViews.createMultiView("application/x-customernode", CustomerNode.this); tc.open(); tc.requestActive(); } }; } } Now, in your MultiViewElement, you can get the InstanceContent from the Lookup that you receive in the constructor. That Lookup is defined by the Lookup in the Node. Therefore, because there's an InstanceContent in the Node Lookup (as you can see above), the InstanceContent is now in the MultiViewElement: @MultiViewElement.Registration( displayName = "General", mimeType = "application/x-customernode", persistenceType = TopComponent.PERSISTENCE_NEVER, preferredID = "CustomerNodeGeneralElement", position = 100) public class GeneralPanel extends JPanel implements MultiViewElement { private final Lookup lookup; private final InstanceContent ic; public GeneralPanel(Lookup lookup) { this.lookup = lookup; this.ic = lookup.lookup(InstanceContent.class); ... ... ... And now, anywhere in the MultiViewElement, you can add/remove the Savable to/from the InstanceContent whenever needed. That will trigger the Save Action to enable/disable itself.
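To round out the picture, the Savable half might look something like the sketch below. It assumes the standard AbstractSavable from org.netbeans.spi.actions; CustomerSavable and its modified() hook are invented names for illustration, not code from the original post:

import java.io.IOException;
import org.netbeans.spi.actions.AbstractSavable;
import org.openide.util.lookup.InstanceContent;

class CustomerSavable extends AbstractSavable {

    private final InstanceContent ic;

    CustomerSavable(InstanceContent ic) {
        this.ic = ic;
    }

    void modified() {
        ic.add(this);   // puts the Savable into the MultiViewElement's Lookup
        register();     // enables the global Save action
    }

    @Override
    protected void handleSave() throws IOException {
        // persist the customer here...
        ic.remove(this);
        unregister();
    }

    @Override
    protected String findDisplayName() {
        return "Customer";
    }

    // AbstractSavable requires equals/hashCode; this simple version
    // treats all instances of this class as the same savable item
    @Override
    public boolean equals(Object obj) {
        return obj instanceof CustomerSavable;
    }

    @Override
    public int hashCode() {
        return getClass().hashCode();
    }
}

Calling modified() from the element's change listeners is enough for the Save action to light up; once handleSave() completes and the Savable is removed and unregistered, the action disables itself again.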
https://blogs.oracle.com/geertjan/date/201301?page=1
CC-MAIN-2014-15
en
refinedweb
This is a long post, but I tried to keep it clean and concise. Please don't just skip over it because it has a lot of stuff - I really need some help. I want to get a project off on the right foot but lack the experience to be sure I'm doing it as efficiently [0] as possible.

I'm creating a set of classes to implement an API [1]. It looks something like below, with the exception that I'm writing this from home and am not posting the several thousand lines of code. Suffice it to say that the program works alright, but I'm looking for a way to organize it for clean future expansion:

FileRetriever.py:

class DataSource:
    def __init__(self):
        self.containers = []
        for container in remoteSource():
            self.containers.append(Container(container))

class Container:
    def __init__(self, container):
        self.files = []
        for subfile in container:
            self.files.append(DataStore(subfile))

class DataStore:
    def __init__(self, subfile):
        self.param1 = someTransform(subfile)
        self.param2 = someOtherTransform(subfile)
        self.param3 = YetAnotherTransform(subfile)

    def classMethodOne(self):
        pass
    ...
    def classMethodTwenty(self):
        pass

Now, the problem is that I plan to subclass the heck out of each of these classes, with overloading appropriate to the type of data source being represented. For example, a DataSource that retrieves images from a POP3 mailbox might be defined like:

POP3Retriever.py:

import FileRetriever

class POP3DataSource(FileRetriever.DataSource):
    def __init__(self):
        self.containers = []
        for message in getPop3MessageList():
            self.containers.append(Container(message))

class POP3Container(FileRetriever.Container):
    def __init__(self, message):
        self.files = []
        for attachment in message:
            self.files.append(DataStore(attachment))

I've only been heavily using Python for about a year and haven't leaned too heavily on inheritance yet, so I want to do this the right way.

First, a question on file layout. I've thought about several ways to classify these modules:

1) Stick each set of classes in a file in the same directory. That is, FileRetriever.py, POP3Retriever.py, POP3TiffFile.py, etc. are all in the same place.

2) Create a tree like:

+ FileRetriever
  +-- __init__.py
  +-- DataSource.py
  +-- Container.py
  +-- DataStore.py
  +-- POP3Retriever
  |   +-- __init__.py
  |   +-- DataSource.py
  |   +-- Container.py
  |   +-- DataStore.py
  |   +-- POP3TiffFile
  |   |   +-- __init__.py
  |   |   +-- DataStore.py
  |   +-- POP3ZipArchive
  |       +-- __init__.py
  |       +-- Container.py
  +-- SFTPRetriever
      +-- __init__.py
      ...
...

3) Just kidding. I only have two ideas.

The first layout has the advantage that it's simple and involves a minimum of files, but has annoying quirks such as if I define a DataSource subclass before a Container subclass, then that DataSource will use the parent's Container class since the local one hasn't been defined yet when the local DataSource definition is being read. The second layout has more files to deal with, but (hopefully?) avoids that dependency on defining things in a particular order.

Second, what's a good way to name each of the classes? Again, I see two main possibilities:

1) Name all of the DataSource classes "DataSource", and explicitly name the parent class:

class DataSource(FileRetriever.DataSource):

2) Name all of the DataSource classes with some variation:

class POP3ZipArchive(POP3Retriever):

The first seems preferable, in that whenever a client program wants to use one of the leaf classes, it will always be named DataSource.
However, that seems like a whole lotta namespace confusion that could come back to bite me if I didn't do it right ("What do you mean I accidentally inherited CarrierPigeonDataSource and nuked all of the files our customer uploaded?!?"). I ask all of this because the project is still relatively young and malleable, and this is something that will have to be maintained and expanded for years to come. I want to take the time now to build a solid foundation, but I don't have enough experience with Python to have a good grasp on recommended styles. [0] "Efficient" being hereby defined as "easy for me to understand when I revisit the code six months from now". [1] We receive files from our customers via many means - fax, email, ftp, you name it. I'm developing delivery method agnostic tools to manipulate those files and flatly refuse to write n tools to handle n methods. -- Kirk Strauser The Strauser Group Open. Solutions. Simple.
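For what it's worth, one way to sidestep the define-Container-before-DataSource quirk in layout 1, sketched here under the poster's hypothetical names rather than taken from the thread, is to have DataSource find its Container class through a class attribute that subclasses override:

# FileRetriever.py
class Container:
    def __init__(self, container):
        self.files = []

class DataSource:
    # Subclasses override this to point at their own Container class
    container_class = Container

    def __init__(self, raw_containers):
        self.containers = [self.container_class(c) for c in raw_containers]

# POP3Retriever.py
import FileRetriever

class POP3Container(FileRetriever.Container):
    pass

class POP3DataSource(FileRetriever.DataSource):
    container_class = POP3Container

Because container_class is looked up at instantiation time rather than at class-definition time, the order of the class statements within a module stops mattering.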
https://mail.python.org/pipermail/python-list/2004-June/256008.html
CC-MAIN-2014-15
en
refinedweb
On Tue, 2010-02-02 at 13:39 -0500, Jon Masters wrote:
> On Tue, 2010-02-02 at 19:36 +0100, Patrick McHardy wrote:
> > Jon
> > ct_net pointer pointing to &init_net. I'll take care of this
> > tommorrow.
>
> Ok. I'll leave this box running with the hack. I think at the very least
> that this specific issue needs to get fixed and in the stable tree, then
> the other bits (per namespace cachep...) are probably a good idea at the
> same time but that's up to you.

FYI, my box has the quick don't free untracked hack *and* per-ns cachep. I don't think the latter has anything specific to do with this (though it needs fixing also), but worth knowing my test is using both.

Back to the podcasts tonight instead of this ;)

Jon.
http://lkml.org/lkml/2010/2/2/325
CC-MAIN-2014-15
en
refinedweb
NAME
namespace.conf - the namespace configuration file

DESCRIPTION
If an instance initialization script (/etc/security/namespace.init) exists, it is run when a new instance directory is set up. The script receives the polyinstantiated directory path and the instance directory path as its arguments.

The /etc/security/namespace.conf file specifies which directories are polyinstantiated, how they are polyinstantiated, how instance directories would be named, and any users for whom polyinstantiation would not be performed.

When someone logs in, the file namespace.conf is scanned. Comments are marked by # characters. Each non comment line represents one polyinstantiated directory. The fields are separated by spaces but can be quoted by " characters; also escape sequences \b, \n, and \t are recognized. The fields are as follows:

polydir instance_prefix method list_of_uids

The polydir field is the directory to polyinstantiate. The special string $HOME is replaced with the user's home directory, and $USER with the username. This field cannot be blank.

The instance_prefix field is the string prefix used, together with a method-specific suffix, to generate the final instance directory path. This directory is created if it did not exist already, and is then bind mounted on the <polydir> to provide an instance of <polydir> based on the <method> column.

EXAMPLES

# Polyinstantiation will not be performed for user root
# and adm for directories /tmp and /var/tmp, whereas home
# directories will be polyinstantiated for all users.
#
# Note that instance directories do not have to reside inside
# the polyinstantiated directory. In the examples below,
# instances of /tmp will be created in /tmp-inst directory,
# whereas instances of /var/tmp and users home directories
# will reside within the directories that are being
# polyinstantiated.
#
/tmp        /tmp-inst/               level    root,adm
/var/tmp    /var/tmp/tmp-inst/       level    root,adm
$HOME       $HOME/$USER.inst/inst-   context

For the <service>s you need polyinstantiation (login for example) put the following line in /etc/pam.d/<service> as the last line for session group:

session required pam_namespace.so [arguments]

This module also depends on pam_selinux.so setting the context.

SEE ALSO
pam_namespace(8), pam.d(5), pam(7)

AUTHORS
The namespace.conf manual page was written by Janak Desai <janak@us.ibm.com>. More features added by Tomas Mraz <tmraz@redhat.com>.
http://manpages.ubuntu.com/manpages/maverick/man5/namespace.conf.5.html
CC-MAIN-2014-15
en
refinedweb
NAME
pmcd - performance metrics collector daemon

SYNOPSIS
pmcd [-f] [-i ipaddress] [-l logfile] [-L bytes] [-n pmnsfile] [-p port[,port ...]] [-q timeout] [-T traceflag] [-t timeout] [-x file]

DESCRIPTION
pmcd is the collector used by the Performance Co-Pilot (see PCPIntro(1)) to gather performance metrics on a system. As a rule, there must be an instance of pmcd running on a system for any performance metrics to be available to the PCP. pmcd accepts connections from client applications running either on the same machine or remotely and provides them with metrics and other related information from the machine that pmcd is executing on. pmcd delegates most of this request servicing to a collection of Performance Metrics Domain Agents (or just agents), where each agent is responsible for a particular group of metrics, known as the domain of the agent.

-i ipaddress
Accept connections only on the given IP address, specified in the standard dotted form (e.g. 100.23.45.6). The -i option may be used multiple times to define a list of IP addresses. Connections made to any other IP addresses the host has will be refused. This can be used to limit connections to one network interface if the host is a network gateway. It is also useful if the host takes over the IP address of another host that has failed.

-n pmnsfile
Normally the standard namespace is loaded; with this option the namespace is loaded from the file pmnsfile instead.

-t timeout
To prevent misbehaving agents from hanging the entire Performance Metrics Collection System (PMCS), pmcd uses timeouts on PDU exchanges.

-T traceflag
By default, event tracing is buffered using a circular buffer that is over-written as new events are recorded. The default buffer size holds the last 20 events, although this number may be over-ridden by using pmstore(1) to modify the metric pmcd.control.tracebufs. Similarly once pmcd is running, the event tracing control may be dynamically modified by storing 1 (enable) or 0 (disable) into the metrics pmcd.control.traceconn, pmcd.control.tracepdu and pmcd.control.tracenobuf. These metrics map to the bit fields associated with the traceflag argument for the -T option. When operating in buffered mode, the event trace buffer will be dumped whenever an agent connection is terminated by pmcd, or when any value is stored into the metric pmcd.control.dumptrace via pmstore(1). In unbuffered mode, every event will be reported when it occurs.

-x file
Before the pmcd logfile can be opened, pmcd may encounter a fatal error which prevents it from starting. By default, the output describing this error is sent to /dev/tty but it may be redirected to file.

If a PDU exchange with an agent times out, the agent has violated the requirement that it delivers metrics with little or no delay. This is deemed a protocol failure and the agent is disconnected from pmcd. Any subsequent requests for information from the agent will fail with a status indicating that there is no agent to provide it.

It is possible to specify host-level access control to pmcd. This allows one to prevent users from certain hosts from accessing the metrics provided by pmcd and is described in more detail in the Section on ACCESS CONTROL below.

CONFIGURATION
pmcd runs as root. The configuration file may contain shell commands to create agents, which will be executed by root. To prevent security breaches the configuration file should be writable only by root. The use of absolute path names is also recommended. The case of the reserved words in the configuration file is unimportant, but elsewhere, the case is preserved. Blank lines and comments are permitted (even encouraged) in the configuration file. A comment begins with a ``#'' character and finishes at the end of the line. A line may be continued by ensuring that the last character on the line is a ``\'' (backslash).

Fields in an agent specification are separated by whitespace characters, however a single agent specification may not be broken across lines unless a \ (backslash) is used to continue the line. Each agent specification must start with a textual label (string) followed by an integer in the range 1 to 510. The label is a tag used to refer to the agent and the integer specifies the domain for which the agent supplies data. This domain identifier corresponds explicitly to the domain field of the performance metric identifiers (PMIDs) for the metrics the agent supplies.

The access control section of the configuration file is optional, but if present it must follow the agent configuration data. The case of reserved words is ignored, but elsewhere case is preserved. Lexical elements in the access control section are separated by whitespace or the special delimiter characters: square brackets (``['' and ``]''), braces (``{'' and ``}''), colon (``:''), semicolon (``;'') and comma (``,''). The special characters are not treated as special in the agent configuration section.

The access control section of the file must start with a line of the form:

[access]

Leading and trailing whitespace may appear around and within the brackets and the case of the access keyword is ignored. No other text may appear on the line except a trailing comment. Following this line, the remainder of the configuration file should contain lines that allow or disallow operations from particular hosts or groups of hosts. There are two kinds of operations that occur via pmcd:

fetch allows retrieval of information from pmcd. This may be information about a metric (e.g. its description or instance domain) as well as its value.

store allows modification of metric values, for those metrics and agents that support it.

Statements may conflict. For example, if the host name <hostname> resolves to the address 129.127.112.2, then the statements

allow <hostname> : fetch;
disallow 129.127.112.2 : all except fetch;

conflict because they both refer to the same host, but disagree as to whether the fetch operation is permitted from that host. Statements containing more specific host specifications override less specific ones according to the level of wildcarding. For example, consider rules of the form:

allow clank : all, maximum 5 connections;
allow * : all except store, maximum 2 connections;

This says that only 2 client connections at a time are permitted for all hosts other than "clank", which is permitted 5. If a client from host "boing" is the first to connect to pmcd, its connection is checked against the second statement (that is the most specific match with a connection limit). As there are no other clients, the connection is accepted and contributes towards the limit for only the second statement above. If the next client connects from "clank", its connection is checked against the limit for the first statement. There are no other connections from "clank", so the connection is accepted. Once this connection is accepted, it counts towards both statements' limits because "clank" matches the host identifier in both statements. Remember that the decision to accept a new connection is made using only the most specific matching access control statement with a connection limit. Now, the connection limit for the second statement has been reached. Any connections from hosts other than "clank" will be refused. If instead, pmcd with no clients saw three successive connections arriving from "boing", the first two would be accepted and the third refused. After that, if a connection was requested from "clank" it would be accepted. It matches the first statement, which is more specific than the second, so the connection limit in the first is used to determine that the client has the right to connect.

Now there are 3 connections contributing to the second statement's connection limit. Even though the connection limit for the second statement has been exceeded, the earlier connections from "boing" are maintained. The connection limit is only checked at the time a client attempts a connection rather than being re-evaluated every time a new client connects to pmcd. This gentle scheme is designed to allow reasonable limits to be imposed on a first come first served basis, with specific exceptions. As illustrated by the example above, a client's connection is honored once it has been accepted. However, pmcd reconfiguration (see the next section) re-evaluates all the connection counts and will cause client connections to be dropped where connection limits have been exceeded.

RECONFIGURING PMCD
If the configuration file has been changed or if an agent is not responding because it has terminated or the PMNS has been changed, pmcd may be reconfigured by sending it a SIGHUP, as in

# pmsignal -a -s HUP pmcd

When pmcd receives a SIGHUP, it checks the configuration file for changes. If the file has been modified, it is reparsed and the contents become the new configuration. If there are errors in the configuration file, the existing configuration is retained and the contents of the file are ignored. Errors are reported in the pmcd log file. It also checks the PMNS file for changes. If the PMNS file has been modified, then it is reloaded. Use of tail(1) on the log file is recommended while reconfiguring pmcd. If the configuration for an agent has changed (any parameter except the agent's label is different), the agent is restarted. Agents whose configurations do not change are not restarted. Any existing agents not present in the new configuration are terminated. Any deceased agents that are still listed are restarted. Sometimes it is necessary to restart an agent that is still running, but malfunctioning. Simply stop the agent (e.g. using SIGTERM from pmsignal(1)), then send pmcd a SIGHUP, which will cause the agent to be restarted.

STARTING AND STOPPING PMCD
Normally, pmcd is started at boot time and stopped at system shutdown; its start/stop script may also be run manually. Starting pmcd when it is already running is the same as stopping it and then starting it again. Sometimes it may be necessary to restart pmcd during another phase of the boot process. Time-consuming parts of the boot process are often put into the background to allow the system to become available sooner (e.g. mounting huge databases). If an agent run by pmcd requires such a task to complete before it can run properly, it is necessary to restart or reconfigure pmcd after the task completes. Consider, for example, the case of mounting a database in the background while booting. If the PMDA which provides the metrics about the database cannot function until the database is mounted and available but pmcd is started before the database is ready, the PMDA will fail (however pmcd will still service requests for metrics from other domains). If the database is initialized by running a shell script, adding a line to the end of the script to reconfigure pmcd (by sending it a SIGHUP) will restart the PMDA (if it exited because it couldn't connect to the database). If the PMDA didn't exit in such a situation it would be necessary to restart pmcd because if the PMDA was still running pmcd would not restart it.

Normally pmcd listens for client connections on TCP/IP port 44321. An alternative port or ports may be given via the -p option or the PMCD_PORT environment variable; in either case the specification is a comma-separated list of one or more numerical port numbers. Should both methods be used or multiple -p options appear on the command line, pmcd will listen on the union of the set of ports specified via all -p options and the PMCD_PORT environment variable. If non-default ports are used with pmcd care should be taken to ensure that PMCD_PORT is also set in the environment of any client application that will connect to pmcd.

In addition to the PCP environment variables described in the PCP ENVIRONMENT section below, the PMCD_PORT variable is also recognised as the TCP/IP port for incoming connections (default 44321).

SEE ALSO
pmdbg(1), pmerr(1), pmgenmap(1), pminfo(1), pmstat(1), pmstore(1), pmval(1), pcp.conf(4), and pcp.env(4).

DIAGNOSTICS
If possible to run pmcd..

CAVEATS
pmcd does not explicitly terminate its children (agents), it only closes their pipes. If an agent never checks for a closed pipe it may not terminate. The configuration file parser will only read lines of less than 1200 characters. This is intended to prevent accidents with binary files. The timeouts controlled by the -t option apply to IPC between pmcd and the PMDAs it spawns. This is independent of settings of the environment variables PMCD_CONNECT_TIMEOUT and PMCD_REQUEST_TIMEOUT (see PCPIntro(1)) which may be used respectively to control timeouts for client applications trying to connect to pmcd and trying to receive information from pmcd.
CC-MAIN-2014-15
en
refinedweb
Opened 3 years ago Closed 3 years ago #7880 closed defect (fixed) "ioerror: invalid mode: Ur" in htfile.py Description Python 2.4 complains about invalid mode Ur in htfile.py, line 209 def _get_users(self, filename): f = open(filename, 'Ur') for line in f: user = line.split(':', 1)[0] Solved it by simply changing mode to 'rU' everywhere in that file. Attachments (0) Change History (2) comment:1 Changed 3 years ago by hasienda - Keywords python syntax added - Status changed from new to assigned comment:2 Changed 3 years ago by hasienda - Resolution set to fixed - Status changed from assigned to closed (In [9344]) AccountManagerPlugin: Correct reversed mode argument in read mode, closes #7880. Note: See TracTickets for help on using tickets. Sure, my fault. Python docs confirms your findings. I've tested extensively before, so his has slipped through, because at least Python2.5 tolerates the reverse order and works with 'Ur' just the same way as 'rU'. So many thanks for the report. Will fix this immediately.
http://trac-hacks.org/ticket/7880
CC-MAIN-2014-15
en
refinedweb
- Nick Lansley Decoding TV Teddy – Part Two: Programming and Audio Output As a purely fun and academic exercise, I’m going attempt to decode the TV Teddy audio track embedded in a TV Teddy video programme, and output the audio as a separate file. I’ll then try and play back both the audio file and the YouTube video in sync to enjoy this particular TV Teddy episode with full dialogue for the first time. I’ll be using the Python 3.7 programming language for this project so I’ve started a new project in my favourite Python development environment (Jetbrains PyCharm) which has a free community edition as well a commercial edition. I’ll also need to download the OpenCV video file library (which can read MP4 format video files) using: pip install opencv-python Next, I capture (using Quicktime on my Mac) the first couple of minutes of video from a TV Teddy show on Youtube. I’ve chosen this one: I’ve renamed the file to make it easier to access and created a folder called ‘media’ in the project, saving the file there as ‘media/TVTeddyCapture.mp4’. To get started I’ve used the working Python OpenCV coding example at: and adapted it thus: import cv2 print('TV Teddy Frame Extractor - CV2 version = ', cv2.__version__) vidcap = cv2.VideoCapture('media/TVTeddyExcerpt.mp4') success, image = vidcap.read() count = 0 success = True while success: success,image = vidcap.read() count += 1 if count % 1000 == 0: print('count of frames so far =', count) cv2.imwrite("media/frame%d.jpg" % count, image) # save frame as JPEG file print('total count of frames =', count) In the above code, I am simply reading the video file and occasionally printing out frames once every count of 1,000 frames. I ran the code for a few seconds and made it save a few frames to make sure it was working. Frame 1000 looks like this: From this frame I note: This video is in 640 x 476 format. Looking at the embedded audio track in an image editor, I found that its centre is 5 pixels in from the left. The top of the grey soundtrack starts at vertical pixel 2, and the bottom ends at vertical pixel 472 (beyond those extremes appear to be distortions at the start and end of the frame, probably caused by the VHS player switching between its rotary heads as it reads the tape). So my plan is to: Read each frame, and do this: Set a loop running from value 16 to value 700 and, at each position (5, 2) through to (5, 472) do this: Read the RGB value of the pixels and add those three values together together, subtracting 127 from each so that the waveform centres on value 0 and not on value 127. The values will be sampled between 0 and 255 so subtracting 127 will make them become sampled between 128 and -127. ‘Silence’ (denoted by grey rather than black or white) will then be recorded at 0 and not at 127. Summing the three R, G, and B samples will help with any subtlety that can still be derived from the change in greyness from sample to sample despite digitisation. I’ll save this value as a 16-bit signed integer. Append the value to the end of an array which later is saved to the WAV file. Normalise the array of audio samples; that is, ‘amplify’ the sample values and re-centre them around the ‘0’ centre line. Save the array of samples in a WAV file requested to playback at 30 fps x (700-16) frame-lines per second = 20.520 KHz sample rate with 16bit sample size. 
Now be warned about a patent: USA patent 5,808,869 “Method and apparatus for nesting secondary signals within a television signal” (and its international equivalents of the same name) owned by Shoot The Moon, who I have just discovered invented the TV Teddy technology in the first place. You may not be able to use this code – and certainly not in any commercial context – without their permission. Shoot The Moon have every right to earn from this patented idea until it expires. If you think you would enjoy creating new content compatible with TV Teddy, or decoding TV Teddy videos to use with other equipment, great! But contact Shoot The Moon via their website and agree some sort of licensing first. The code below is provided purely for your academic interest and self-education in Python programming. import cv2 import cv2 import array import wave # A flag used to find out if the next video frame was read successfully success = True # Audio samples are counted into this variable: currentSampleCount = 0 # This is the output WAV file's sample rate that will be written into its header information wavSampleRate = 20483 # Each WAV file sample is 2 bytes (16-bit) wavSampleSize = 2 # The Wav file will have a single mono audio channel wavChannels = 1 # An array that will store all samples in a 16-bit signed integer format sampleArray = array.array('h') # The top and bottom video lines in the video frame where will measure the greyscale to get samples # increase audioLineModulationStartLine and decrease audioLineModulationEndLine until the loud 60Hz buzz disappears # from the finished audio audioLineModulationStartLine = 16 audioLineModulationEndLine = 460 # The horizontal position in the video frame where we will take the sample - ideally set to be in the centre # of the greyscale audio line audioLineCentrePixelToRead = 5 print('TV Teddy Audio Extractor from YouTube 720p Source - CV2 version = ', cv2.__version__) frameCount = 0 # Open the video file vidcap = cv2.VideoCapture('media/TVTeddyExcerpt.mp4') if vidcap.isOpened(): # Get some info on the file) # Process the file frame by frame for currentFrame in range(0, frameCount): success, image = vidcap.read() if success: # For the current frame, read the grey line and extract a sample from each pixel, ' - ', int(currentFrame * 100 / frameCount), "%") # Close the incoming video file vidcap.release() print('Total count of frames =', frameCount) print('Total count of samples =', currentSampleCount) print('Analysing extracted audio...') # Find the sum of sample sizes and the minimum & maximum sample size sumSampleSize = 0 maxSampleValue = 0 for sampleIndex in range(0, len(sampleArray) - 1): sumSampleSize += sampleArray[sampleIndex] if maxSampleValue < sampleArray[sampleIndex]: maxSampleValue = sampleArray[sampleIndex] # Calculate mean average sample size meanSampleSize = int(sumSampleSize / len(sampleArray)) # Now alter the sample values to become rebalanced around zero based # on the mean sample size, and amplified by multiplying the samples # based on the amplifyValue - a process called 'normalisation' print('Normalising....') maxSampleValue = maxSampleValue - meanSampleSize amplifyValue = 16000 / maxSampleValue # reduce constant value 16000 if Warnings below keep happening for sampleIndex in range(0, len(sampleArray)): # Safety catch shown multiplication make signed integer too big or too small for the array try: sampleArray[sampleIndex] = int((sampleArray[sampleIndex] - meanSampleSize) * amplifyValue) except OverflowError: # sampleArray[sampleIndex] is 
kept at the same value print("Warning: Normalised sample was too large or too small for signed 16-bit array") continue # Write the output WAV file print('Writing WAV file...') f = wave.open('media/output.wav', 'w') f.setparams((wavChannels, wavSampleSize, wavSampleRate, len(sampleArray), "NONE", "Uncompressed")) f.writeframes(sampleArray.tostring()) # Important to convert to string or only half the audio will be written out f.close() print('Completed!') print('Now use an audio application such as Audacity (free) to read the output WAV file and' \ ' increase the pitch by 100% (i.e. double it)') Let’s look and listen to the output! The view from Audacity, the free audio editing application: The waveform is somewhat shifted and not normalised properly around the zero middle, but we’ve certainly got something! Let me save the file as an MP3 now as it’s wastefully large as a WAV file, and take a listen: The sample rate seems just about spot on as the audio is keeping time when played back in sync with the video. But, if you watch the demonstration in Databits’s video, the audio pitch is a lot higher than here. This makes me wonder if that’s something that the tech in TV Teddy is doing? So, using Audacity, I’ll apply a ‘High-Quality Pitch Change’ at +100% (so doubling the pitch) while keeping the same speed. How does it sound now? Near enough! So what’s happened is that, in order to stay within the limited audio bandwidth of this soundtrack, the designers have halved the pitch (but not speed) of the audio during recording, and either the TV Teddy box or TV Teddy receiver inside the bear has taken the audio and doubled the pitch on playback to restore the child-like voice without needing extra audio bandwidth. All clever stuff – and remember this is all done with analogue circuits in the 1990s. The distortions in the audio are going to be from the digitisation of the VHS tape where subtle differences between continuous analogue grey levels are lost in the sampling of each video frame. Also, bear in mind that either the analogue to digital video converter – or Youtube – will take an interlaced video and create a progressive version of each frame. In terms of our audio soundtrack, each ‘odd’ 60th second has been overlaid on the next ‘even’ 60th second part of the audio. Digital compression (including YouTube) will also have played their part in damaging this subtlety, too. The ‘buzzing’ effect on the voice is caused by the jump from frame to frame. I’m guessing that the not-exactly-high-fidelity speaker inside TV Teddy’s body reduces the obviousness of this effect for its intended listeners – or there’s a high-pass filter in the circuit blocking low frequencies before the pitch-doubling takes place. Well, now I feel highly satisfied with that effort! I can certainly try to improve the audio and get the bitrate better (it seems a bit fast so lowering it would help) but we’ve decoded TV Teddy successfully. Footnote: Now, of course, you’re going to ask: Would it be possible to create a video with a TV Teddy compatible soundtrack? The answer is Yes! On to Part Three… #audio #decoding #tvteddy
https://www.nicklansley.com/post/decoding-tv-teddy-part-two-programming-and-audio-output
CC-MAIN-2020-45
en
refinedweb
16 AnswersNew Answer Example findall(regex,"12345")=234 General case: find string w.o first and last. 16 AnswersNew Answer Oma Falk Based on the question: "How can regex find the middle 3 letters of a 5 char-word?" I'm assuming the word can include any non-whitespace alphanumeric character like this one: "12345" The regexes in this thread would work with this example. However, many may also incorrectly work for the following invalid values: " 234 " " 2345" "1234 " "1 3 5" "123 5" The same regexes may not properly extract "234" from 5 char values with leading and/or trailing whitespaces like the following: " 12345" "12345 " " 12345 " I believe the proper regex to cover all positive and negative test cases would be: re.search(r'(?<!\S)\S(\S{3})\S(?!\S)', s) - The \S matches on non whitespace characters. - (?<!\S) is a negative look behind. - (?!\S) is a negative look ahead. To match on all but the first and last characters, replace (\S{3}) with (\S+) The code below runs tests on the RegEx patterns posted in this thread for review with mine posted last: I hope this helps. import re print(re.findall(r".(.{3}).", "12345")) print(re.findall(r".(.{3}).", "hello")) print(re.findall(r".(.{3}).", "12AA5")) same "pattern" used for all three. Oma Falk If you know the length of the string why not just use a slice? 😳 reg = re.compile(r'^(\d|\w)(\d{3}|\w{3})(\d|\w)$') mo = reg.search('12345') print(mo.group(2)) It works for letters or numbers. Not both Code Crasher want to learn regex. It is very mighty. After 3 hours I could find a regex for a String if it is a tictactoe winner. Hope to become a bit quicker. Oma Falk 😉😜👍 Google. reg = r"\B.+\B" "The \B metacharacter is used to find a match, but where it is NOT at the beginning/end of a word." w3Schools.com Regex is such a lovely topic. You could use a lookaround pattern (lookbehind and lookahead), like: import re m=re.match(r"(?<=.).{3}(?=.)", your_5_chr_str).group() (since + and other special characters are greedy, you don't have to specify {3} though.) You could also capture it in a group: m=re.match(r".(.+).",your_5_chr_str).group(1) This is what helped me get to grips with regular expression after the tutorial in Sololearn:- Learn Playing. Play Learning SoloLearn Inc.4 Embarcadero Center, Suite 1455
https://www.sololearn.com/Discuss/2272451/how-can-regex-find-the-middle-three-letters-of-a-5char-word
CC-MAIN-2020-45
en
refinedweb
Heads up! To view this whole video, sign in with your Courses account or enroll in your free 7-day trial. Sign In Enroll Deployment Options8:26 with James Churchill Regardless of your specific development, testing, and deployment workflow, you'll need a way to apply pending migrations to databases hosted in shared environments. Let's review the deployment options that Code First Migrations makes available to us..4 -b deployment-options Finished Project Code Here's how to get the code for the finished project: git clone git@github.com:treehouse-projects/dotnet-ef-migrations.git cd dotnet-ef-migrations Additional Project Ideas Now that you've completed this course, it's a great time to reinforce what you've learned by working on a project. Idea #1: Update the Comic Book Library Manager Console App Update the Comic Book Library Manager console app—from the Entity Framework Basics course—to use Code First Migrations. Here's the link to download the completed Comic Book Library Manager project files. Idea #2: Design Your Own Entity Data Model Design your own entity data model, enable migrations, then practice making model changes and adding migrations. You could purposely start with an overly simplistic model, so that you'll have plenty of model changes to make once you've enabled migrations. Additional Treehouse Entity Framework Courses Treehouse has two more EF courses (one that's available now and another that'll be available soon). Additional Resources Treehouse Community If you ever have a question about EF or get stuck on something, check out the Treehouse Community. You’ll find there other students and a great team of moderators who can help. An easy option for applying, 0:00 any pending migrations to databases hosted in shared environments is to configure 0:01 EF to use the migrate database to latest version, database initializer. 0:06 We're currently setting the database initializer to know, in order to disable 0:13 database initialization in favor of using the code first migrations, 0:18 update database command to update our database. 0:22 Let's start. 0:26 With adding a new database initializer then, we'll add preprocessor directives so 0:27 we can configure a different database initializer per build configuration. 0:32 Allow the new database initializer below our current one. 0:39 Database.SetInitializer (new 0:44 MigrateDatabaseToLatestVersion) Of type <Context,. 0:46 In addition to the context class type, the migrate database to latest version, 0:53 generic class also requires us to specify the code first 0:58 migrations configuration class type, which in our case is Configuration. 1:02 Don't forget to add the using directive, 1:10 for the ComicBookGalleryModel.Migrations namespace. 1:13 Now, let's add our preprocessor directives. 1:21 Right before our first database initializer, let's add an if directive 1:26 that checks for the presence of the debug symbol, #if DEBUG. 1:31 Then, in between the two database initializers, add an else directive. 1:36 #else. 1:42 And, just after the second database initializer, add an end if directive, 1:46 #endif. 1:51 Let's test our new database initializer by downgrading our database to 1:54 the previous migration in running our app using the release build configuration 1:58 to see if the database will be upgraded to the latest migration. 2:03 First, let's downgrade our database, 2:11 update-database -targetmigration AddBioPropertyToArtist. 
2:15 Using the SQL Server Object Explorer, 2:22 we can verify that the comic book average rating 2:27 table that is created by our latest migration isn't in the list of tables. 2:32 Now, let's change our build configuration to release and 2:41 start the application without debugging by pressing Ctrl F5. 2:44 Here's our list of comic books. 2:56 Press Enter twice to continue execution. 2:58 Refresh the Tables folder, And, 3:03 here's our ComicBookAverageRating table. 3:11 If we view the data in the migration history table, 3:15 we can confirm that all three of our migrations have been applied. 3:18 By using the database initializer to migrate the database to the latest 3:26 migration is easy to do and works well with automated workflows, 3:30 it's not always possible to use. 3:34 In some situations, developers don't have direct access to test and 3:37 production environment databases or the servers that they're hosted on. 3:42 Instead, developers have to coordinate any updates to those databases 3:47 through a database administrator or DBA. 3:52 Luckily, there's a workaround. 3:56 We can use the Update-Database command to generate a SQL script which we can 3:58 hand off to our DBA, who can then review and apply it to the database. 4:03 To start, let's downgrade our database to the previous migration. 4:09 I'll press the up arrow key to recall the previously executed command. 4:13 Then, we can run the update database command with the script flag. 4:20 update-database -script. 4:24 When the command completes, it'll open the generated script into a new tab for 4:33 us to review. 4:37 Here's to create table SQL statement to create the ComicBookAverageRating table. 4:43 Our SQL statement, to migrate the AverageRating data from the comic book 4:49 table to the ComicBookAverageRating table. 4:52 To ALTER TABLE statements, one to drop the ComicBookAverageRating column and 5:00 another to add the comic book ID foreign key to the ComicBookAverageRating table. 5:05 Then lastly, 5:13 an answer statement to add this migration to the MigrationHistory table. 5:14 In addition to generating a script to apply the latest migration to 5:19 the database, we can also generate an item potent script 5:23 that can upgrade a database currently at any version. 5:27 To the latest version with an item potent script, 5:31 we can safely executed against any version of the database. 5:34 As a contains logic to determine which migrations have been applied and 5:38 which haven't. 5:42 To generate an item potent script, we specify the source migration 5:44 flag along with the dollar sign initial database migration name. 5:49 Update database -script 5:54 -sourcemigration $initialDatabase. 6:00 In the generated script, we can see that it includes a query to get the current 6:15 migration from the MigrationHistory table. 6:19 And, the conditionals to only apply migrations that 6:25 haven't been previously applied. 6:28 Regardless of the approach that you end up using to apply migrations to databases 6:31 in shared environments, it's important to take the necessary time to review and 6:36 test migrations before they are applied to production databases. 6:41 Even if you are able to recover from a failed migration, the resulting 6:46 application downtime can be costly and a frustrating experience for your users. 6:50 Let's recap this section. 6:55 We made a change to our model and took a closer look at using the Add-Migration 6:58 command to create migrations. 
7:02 We saw how to use the update-database command to downgrade the database and 7:04 an example of modifying a migration. 7:09 We discussed workflows and environments, and learned about the deployment options 7:12 available to us for applying migrations to databases in shared environments. 7:17 Thanks for hanging out with me and 7:22 learning about Entity Framework Code First Migrations. 7:24 Now, it's a great time to reinforce what you've learned 7:27 by using Code First Migrations on a practice project. 7:30 For example, you can take the comic book library manager console application 7:33 from an earlier Treehouse course on EF and update it to use migrations. 7:38 Or, you could design and create your own entity data model and 7:43 add migrations to your project as you evolve your model. 7:47 If you haven't already, 7:51 be sure to check out Treehouse's other courses on entity framework. 7:52 See the teachers notes for links to courses that cover the basics of EF and 7:56 how to use EF within in ASP.NET MVC application. 8:01 There are also other great online and offline EF resources available. 8:05 Again, see the teacher's notes for a list of these resources and 8:09 don't forget if you ever have a question about EF or 8:13 get stuck on something, check out the Treehouse community. 8:17 You'll find, there are other students and 8:20 a great team of moderators who can help, see you next time. 8:22
https://teamtreehouse.com/library/deployment-options
CC-MAIN-2020-45
en
refinedweb
Bug Description Just upgraded from 4.4.4 on 10.04 and received the following error. Configure local and remote Printers The service 'Printer Configuration' does not provide an interface 'KCModule' with keyword 'system- Thanks for submitting this issue and helping to make Kubuntu better! I have tried to reproduce this problem and have not been able to do so. Could you please provide a list of installed kubuntu packages. How did you upgrade? Could you please provide the logs from the upgrade in /var/log/ Thanks again for your support of Kubuntu! I'm seeing this too. I upgraded by doing apt-get update and then apt-get upgrade. The upgrade was accomplished using synaptic after adding the PPA. I've attached the dpkg.log file. If that is not what you want then please advise. Is there a good way to generate the list that you have requested?? Sorry not sure how to approach this. both logs show that KDE 4.5 beta2 was upgraded, not KDE 4.4.4. In 4.4.4, I could not reproduce this problem, however it does occur in 4.4.85. The error shown is misleading, the reason that the factory is reporting an error is because it is trying to load a Python module that isn't packaged in Ubuntu. :-( The real error is (I believe) systemsettings( Traceback (most recent call last): File "/usr/share/ import cupsutils.ppds ImportError: No module named cupsutils.ppds systemsettings( systemsettings( On Debian systems the cupsutils.ppds module is provided by the python-cupsutils package: python-cupsutils: Installed: (none) Candidate: 1.0.0-4lenny1 Version table: 1.0.0-4lenny1 0 700 http:// But this module doesn't appear to be packaged for Ubuntu: root@quad:~# apt-cache search python-cupsutils root@quad:~# root@quad:~# apt-file search cupsutils root@quad:~# Good luck with that! Still broken in Beta 2 My error. Still broken in RC 1. Upgraded to KDE 4.5 RC1 and I'm also still getting this error. I agree with Chris Samuel in comment 6, if you type: python /usr/share/ Traceback (most recent call last): File "/usr/share/ import cupsutils.ppds ImportError: No module named cupsutils.ppds Could the package be installed from debian for now? Downloading and installing python-cupsutils from http:// Basic Server Settings Show printers shared by other systems Share published printers connected to this system Allow remote administration Allow users to cancel any job (not just their own) Save debugging information for troubleshooting Hey everybody. I have just met the same problem as you did. I find RC very nice but problem with my wifi printer can be annoying. So i unmarked beta repository and installed kde printer setup from 4.4.5 release. And it is working like it should have with 4.5rc1. I hope in final release you will deal with it. This problem affects me too. I think https:/ Bug #602343 is indeed a duplicate of this, but cannot be marked as such as there are 2 other bugs marked as duplicates of it and Launchpad doesn't seem to be clever enough to handle that situation and requires someone with more privileges to change them to point here first. Fixed in 4.4.92-0ubuntu2 * Remove 01_system_ relevant for Debian not Ubuntu No, not fixed in 4.4.92-0ubuntu2. I have system- @Philip: You're right. I get the output in bug 611677 now. I may have had two terminals open last night and looked at the wrong one after upgrading, or it may have been something that cleaned out when I shut down last night and powered up this morning, but I did see the same errors. Has this only been pushed out as a 32-bit package ? 
I'm still seeing 4:4.4.92- root@quad:~# apt-cache policy system- system- Installed: 4:4.4.92- Candidate: 4:4.4.92- Version table: *** 4:4.4.92- 500 http:// 100 /var/lib/ 4: 500 http:// 4: 500 http:// 500 http:// Kubuntu PPA have Triaged status. I still have this problem with a fresh installation of the Daily Build LiveCD (32-bit) dated 2 August 2010. I also have that issue :( Still a problem in lucid or maverick with KDE 4.5.1 ? Fixed for me with 4.5.1 fixed for me as well. Lucid w/ 4.5.1. The last update that I had indicated the 4.5.1 has not been released for Maverick yet. Good ;) Thanks I'm using Maverick and kde backports, so I got KDE 4.5.1 today and the bug is fixed. Actually, it got fixed a week or two ago. Looks like it's fixed for me with Lucid (10.04) and KDE 4.5.1 from the Kubuntu PPA. Thanks everyone! I've just updated to KDE 4.7 from kubuntu backports ppa and got this bug. The tail of .xsession-errors file is attached. Yes, I have the same problem on Kubuntu/Natty after upgrading to 4.7 from ppa Alex @Stanislav and @Alex: this bug is closed, can you please file a new bug for your 4.7 issues. (Which I can confirm btw.) Just realized that I may not have been clear enough. In System Settings/ Printer Configuration is where I see this error.
https://bugs.launchpad.net/kubuntu-ppa/+bug/591980
CC-MAIN-2020-45
en
refinedweb
Patterns and Guards Elixir provides pattern matching, which allows us to assert on the shape or extract values from data-structures. Patterns are often augmented with guards, which give developers the ability to perform more complex checks, albeit limited. This page describes the semantics of patterns and guards, where they are all allowed, and how to extend them. Patterns Patterns in Elixir are made of variables, literals, and data-structure specific syntax. One of the most used constructs to perform pattern matching is the match operator ( =): iex> x = 1 1 iex> 1 = x 1 In the example above, x starts without a value and has 1 assigned to it. Then, we compare the value of x to the literal 1, which succeeds as they are both 1. Matching x against 2 would raise: iex> 2 = x ** (MatchError) no match of right hand side value: 1 Patterns are not bidirectional. If you have a variable y that was never assigned to (often called an unbound variable) and you write 1 = y, an error will be raised: iex> 1 = y ** (CompileError) iex:2: undefined function y/0 In other words, patterns are allowed only on the left side of =. The right side of = follows the regular evaluation semantics of the language. Now let's cover the pattern matching rules for each construct and then for each relevant data-types. Variables Variables in patterns are always assigned to: iex> x = 1 1 iex> x = 2 2 iex> x 2 In other words, Elixir supports rebinding. In case you don't want the value of a variable to change, you can use the pin operator ( ^): iex> x = 1 1 iex> ^x = 2 ** (MatchError) no match of right hand side value: 2 If the same variable appears twice in the same pattern, then they must be bound to the same value: iex> {x, x} = {1, 1} {1, 1} iex> {x, x} = {1, 2} ** (MatchError) no match of right hand side value: {1, 2} The underscore variable ( _) has a special meaning as it can never be bound to any value. It is especially useful when you don't care about certain value in a pattern: iex> {_, integer} = {:not_important, 1} {:not_important, 1} iex> integer 1 iex> _ ** (CompileError) iex:3: invalid use of _ Literals (numbers and atoms) Atoms and numbers (integers and floats) can appear in patterns and they are always represented as is. For example, an atom will only match an atom if they are the same atom: iex> :atom = :atom :atom iex> :atom = :another_atom ** (MatchError) no match of right hand side value: :another_atom Similar rule applies to numbers. Finally, note that numbers in patterns perform strict comparison. In other words, integers to do not match floats: iex> 1 = 1.0 ** (MatchError) no match of right hand side value: 1.0 Tuples Tuples may appear in patterns using the curly brackets syntax ( {}). A tuple in a pattern will match only tuples of the same size, where each individual tuple} Lists Lists may appear in patterns using the square brackets syntax ( []). 
A list in a pattern will match only lists of the same size, where each individual list] Opposite to tuples, lists also allow matching on non-empty lists by using the [head | tail] notation, which matches on the head and tail of a list: iex> [head | tail] = [1, 2, 3] [1, 2, 3] iex> head 1 iex> tail [2, 3] Multiple elements may prefix the | tail construct: iex> [first, second | tail] = [1, 2, 3] [1, 2, 3] iex> tail [3] Note [head | tail] does not match empty lists: iex> [head | tail] = [] ** (MatchError) no match of right hand side value: [] Given charlists are represented as a list of integers, one can also perform prefix matches on charlists using the list concatenation operator ( ++): iex> 'hello ' ++ world = 'hello world' 'hello world' iex> world 'world' Which is equivalent to matching on [?h, ?e, ?l, ?l, ?o, ?\s | world]. Suffix matches ( hello ++ ' world') are not valid patterns. Maps Maps may appear in patterns using the percentage sign followed by the curly brackets syntax ( %{}). Opposite to lists and tuples, maps perform a subset match. This means a map pattern will match any other map that has at least all of the keys in the pattern. Here is an example where all keys match: iex> %{name: name} = %{name: "meg"} %{name: "meg"} iex> name "meg" Here is when a subset of the keys match: iex> %{name: name} = %{name: "meg", age: 23} %{age: 23, name: "meg"} iex> name "meg" If a key in the pattern is not available in the map, then they won't match: iex> %{name: name, age: age} = %{name: "meg"} ** (MatchError) no match of right hand side value: %{name: "meg"} Note that the empty map will match all maps, which is a contrast to tuples and lists, where an empty tuple or an empty list will only match empty tuples and empty lists respectively: iex> %{} = %{name: "meg"} %{name: "meg"} Finally, note map keys in patterns must always be literals or previously bound variables matched with the pin operator. Binaries Binaries may appear in patterns using the double less-than/greater-than syntax ( <<>>). A binary in a pattern can match multiple segments at the same, each with different type, size, and unit: iex> <<val::unit(8)-size(2)-integer>> = <<123, 56>> "{8" iex> val 31544 See the documentation for <<>> for a complete definition of pattern matching for binaries. Finally, remember that strings in Elixir are UTF-8 encoded binaries. This means that, similar to charlists, prefix matches on strings are also possible with the binary concatenation operator ( <>): iex> "hello " <> world = "hello world" "hello world" iex> world "world" Suffix matches ( hello <> " world") are not valid patterns. Guards Guards are a way to augment pattern matching with more complex checks. They are allowed in a predefined set of constructs where pattern matching is allowed, such as function definitions, case clauses, and others. Not all expressions are allowed in guard clauses, but only a handful of them. This is a deliberate choice. This way, Elixir (and Erlang) can make sure that nothing bad happens while executing guards and no mutations happen anywhere. It also allows the compiler to optimize the code related to guards efficiently. 
List of allowed functions and operators

You can find the built-in list of guards in the Kernel module. Here is an overview:

- comparison operators (==, !=, ===, !==, >, >=, <, <=)
- strictly boolean operators (and, or, not); the sibling operators &&, ||, and ! are not allowed, as they accept values of any type
- arithmetic unary operators (+, -)
- arithmetic binary operators (+, -, *, /)
- in and not in operators (as long as the right-hand side is a list or a range)
- "type-check" functions (is_list/1, is_number/1, and the like)
- functions that work on built-in datatypes (abs/1, hd/1, map_size/1, and others)

The module Bitwise also includes a handful of Erlang bitwise operations as guards. Macros constructed out of any combination of the above guards are also valid guards - for example, Integer.is_even/1. For more information, see the "Custom patterns and guards expressions" section below.

Why guards

Let's see an example of a guard used in a function clause:

def empty_map?(map) when map_size(map) == 0, do: true
def empty_map?(map) when is_map(map), do: false

Guards start with the when operator, followed by a guard expression. The clause will be executed if and only if the guard expression returns true. Writing the empty_map?/1 function by only using pattern matching would not be possible (as pattern matching on %{} would match any map, not only the empty ones).

Failing guards

A function clause will be executed if and only if its guard expression evaluates to true. If any other value is returned, the function clause will be skipped. In particular, guards have no concept of "truthy" or "falsey". For example, imagine a function that checks that the head of a list is not nil:

def not_nil_head?([head | _]) when head, do: true
def not_nil_head?(_), do: false

not_nil_head?(["some_value", "another_value"])
#=> false

Even though the head of the list is not nil, the first clause for not_nil_head?/1 fails because the expression does not evaluate to true, but to "some_value", therefore triggering the second clause which returns false. To make the guard behave correctly, you must ensure that the guard evaluates to true, like so:

def not_nil_head?([head | _]) when head != nil, do: true
def not_nil_head?(_), do: false

not_nil_head?(["some_value", "another_value"])
#=> true

Errors in guards

In guards, when functions would normally raise exceptions, they cause the guard to fail instead. For example, the tuple_size/1 function only works with tuples. If we use it with anything else, an argument error is raised:

iex> tuple_size("hello")
** (ArgumentError) argument error

However, when used in guards, the corresponding clause will fail to match instead of raising an error:

iex> case "hello" do
...>   something when tuple_size(something) == 2 ->
...>     :worked
...>   _anything_else ->
...>     :failed
...> end
:failed

In many cases, we can take advantage of this. In the code above, we used tuple_size/1 to both check that the given value is a tuple and check its size (instead of using is_tuple(something) and tuple_size(something) == 2). However, if your guard has multiple conditions, such as checking for tuples or maps, it is best to call type-check functions like is_tuple/1 before tuple_size/1, otherwise the whole guard will fail if a tuple is not given. Alternatively, your function clause can use multiple guards as shown in the following section.

Multiple guards in the same clause

There exists an additional way to simplify a chain of or expressions in guards: Elixir supports writing "multiple guards" in the same clause.
The following code:

def is_number_or_nil(term) when is_integer(term) or is_float(term) or is_nil(term),
  do: :maybe_number
def is_number_or_nil(_other),
  do: :something_else

can be alternatively written as:

def is_number_or_nil(term)
    when is_integer(term)
    when is_float(term)
    when is_nil(term) do
  :maybe_number
end

def is_number_or_nil(_other) do
  :something_else
end

If each guard expression always returns a boolean, the two forms are equivalent. However, recall that if any function call in a guard raises an exception, the entire guard fails. To illustrate this, the following clause will never match empty tuples, because map_size/1 raises for anything that is not a map, making the whole combined guard fail:

defp empty?(val) when map_size(val) == 0 or tuple_size(val) == 0, do: true

With multiple guards, each guard fails independently, so the clause matches both empty maps and empty tuples as intended:

defp empty?(val) when map_size(val) == 0
                when tuple_size(val) == 0,
     do: true

Where patterns and guards can be used

In the examples above, we have used the match operator (=) and function clauses to showcase patterns and guards respectively. Here is the list of the built-in constructs in Elixir that support patterns and guards:

- match?/2:

  match?({:ok, value} when value > 0, {:ok, 13})

- function clauses:

  def type(term) when is_integer(term), do: :integer
  def type(term) when is_float(term), do: :float

- case expressions:

  case x do
    1 -> :one
    2 -> :two
    n when is_integer(n) and n > 2 -> :larger_than_two
  end

- anonymous functions (fn/1):

  larger_than_two? = fn
    n when is_integer(n) and n > 2 -> true
    n when is_integer(n) -> false
  end

- for and with support patterns and guards on the left side of <-:

  for x when x >= 0 <- [1, -2, 3, -4], do: x

  with also supports the else keyword, which supports pattern matching and guards.

- try supports patterns and guards on its catch and else clauses.

- receive supports patterns and guards to match on the received messages.

- custom guards can also be defined with defguard/1 and defguardp/1. A custom guard can only be defined based on existing guards.

Note that the match operator (=) does not support guards:

{:ok, binary} = File.read("some/file")

Custom patterns and guards expressions

Only the constructs listed in this page are allowed in patterns and guards. However, we can take advantage of macros to write custom patterns and guards that can simplify our programs or make them more domain-specific. At the end of the day, what matters is that the output of the macros boils down to a combination of the constructs above. For example, the Record module in Elixir provides a series of macros to be used in patterns and guards that allow tuples to have named fields during compilation. For defining your own guards, Elixir even provides conveniences in defguard and defguardp.

Let's look at a quick case study: we want to check whether an argument is an even or an odd integer. With pattern matching this is impossible, because there is an infinite number of integers and we therefore can't pattern match on every single one of them. Therefore we must use guards. We will just focus on checking for even numbers, since checking for the odd ones is almost identical. Such a guard would look like this:

def my_function(number) when is_integer(number) and rem(number, 2) == 0 do
  # do stuff
end

It would be repetitive to write this check every time we need it. Instead, you can use defguard/1 and defguardp/1 to create guard macros. Here's an example:

defmodule MyInteger do
  defguard is_even(term) when is_integer(term) and rem(term, 2) == 0
end

and then:

import MyInteger, only: [is_even: 1]

def my_function(number) when is_even(number) do
  # do stuff
end

While it's possible to create custom guards with macros, it's recommended to define them using defguard/1 and defguardp/1, which perform additional compile-time checks.
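As a usage sketch, the Integer.is_even/1 guard macro mentioned above can be combined with match?/2 once the module is required (assuming an IEx session):

iex> require Integer
Integer
iex> match?(n when Integer.is_even(n), 10)
true
iex> match?(n when Integer.is_even(n), 7)
false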
https://hexdocs.pm/elixir/master/patterns-and-guards.html
CC-MAIN-2020-45
en
refinedweb
Page 1 Introduction Welcome to the Python Language Companion for Starting Out with Programming Logic and Design, 4th Edition, by Tony Gaddis. You can use this guide as a reference for the Python Programming Language as you work through the textbook. Each chapter in this guide corresponds to the same numbered chapter in the textbook. As you work through a chapter in the textbook, you can refer to the corresponding chapter in this guide to see how the chapter's topics are implemented in Python. In this book you will also find Python versions of many of the pseudocode programs that are presented in the textbook. Page 2 Chapter 1 Introduction to Python Installing Python Before you can run Python programs on your computer you will need to install the Python interpreter. You can download the latest version of the Python Windows installer from. The web site also provides downloadable versions of Python for several other operating systems. Note: On the download page you likely will see two current versions of Python. One will be named Python 2.x.x, and the other will be named Python 3.x.x. This booklet is written for the Python 3.x.x version. When you execute the Python Windows installer, it's best to accept all of the default settings by clicking the Next button on each screen. (Answer "Yes" if you are prompted with any Yes/No questions.) As you perform the installation, take note of the directory where Python is being installed. It will be something similar to C:\Python34. (The 34 in the path name represents the Python version. The directory name Python34 would indicate Python version 3.4.) You will need to remember this location after finishing the installation. When the installer is finished, the Python interpreter, the IDLE programming environment, and the Python documentation will be installed on your system. When you click the Start button and look at your All Programs list you should see a program group named something like Python 3.4. The program group will contain the following items: IDLE (Python GUI) – When you click this item the IDLE programming environment will execute. IDLE is an integrated development environment that you can use to create. edit, and execute Python programs. See Appendix A for a brief introduction to IDLE. Python Command Line – Clicking this item launches the Python interpreter in interactive mode. Python Manuals – This item launches a utility program that allows you to browse documentation for the modules in the Python standard library. Python Docs Server – This item opens the Python Manuals in your web browser. The manuals include tutorials, a reference section for the Python standard library, an in- depth reference for the Python language, and information on many advanced topics. Uninstall Python – This item removes Python from your system. If you plan to execute the Python interpreter from a command prompt window, you will probably want to add the Python directory to the existing contents of your system's Path variable. (You saw the name of the Python directory while installing Python. It is something Page 3 similar to C:\Python34.) Doing this will allow your system to find the Python interpreter from any directory when you run it at the command-line. Use the following instructions to edit the Path variable under Windows 8 and Windows 7. Windows 8 In the Right bottom corner of the screen, click on the Search icon and type Control Panel. Click on Control Panel, then click System, then click Advanced system settings. 
Click on the Advanced tab, then click Environment Variables. Under System Variables, find Path, click on it, and then click the Edit button. Add a semicolon to the end of the existing contents and then add the Python directory path. Click the OK buttons until all the dialog boxes are closed, and exit the control panel. Windows 7. Page 4 Interactive Mode Once Python has been installed and set up on your system, you start the interpreter in interactive mode by going to the operating system's command line and typing the following command: python If you are using Windows, you can alternatively click the Start button, then All Programs. You should see a program group named something like Python 3.4. (The "3.4" is the version of Python that is installed.) Inside this program group you should see an item named Python (command line). Clicking this menu item will start the Python interpreter in interactive mode. When the Python interpreter starts in interactive mode, you will see something like the following displayed in a console window: The >>> that you see is a prompt that indicates the interpreter is waiting for you to type a Python statement. Let's try it out. One of the simplest actions that you can perform in Python is to display a message on the screen. For example, the following statement causes the message Python programming is fun! to be displayed: Notice that inside the parentheses that appear after the word print, we have written 'Python programming is fun!'. The quote marks are necessary, but they will not be displayed. They simply mark the beginning and the end of the text that we wish to display. Here is an example of how you would type this statement at the interpreter's prompt: After typing the statement you press the Enter key and the Python interpreter executes the statement, as shown here: After the message is displayed, the >>> prompt appears again, indicating that the interpreter is waiting for you to enter another statement. Let's look at another example. In the following sample session we have entered two statements. Page 5 >>> print('To be or not to be') [Enter] To be or not to be >>> print('That is the question.') [Enter] That is the question. >>> If you incorrectly type a statement in interactive mode, the interpreter will display an error message. This will make interactive mode useful to you while you learn Python. As you learn new parts of the Python language, you can try them out in interactive mode and get immediate feedback from the interpreter. To quit the Python interpreter in interactive mode on a Windows computer, press Ctrl-Z (pressing both keys together) followed by Enter. On a Mac, Linux, or UNIX computer, press Ctrl- D. Although interactive mode is useful for testing code, the statements that you enter in interactive mode are not saved as a program. They are simply executed and their results displayed on the screen. If you want to save a set of Python statements as a program, you save those statements in a file. Then, to execute the program, you use the Python interpreter in script mode. For example, suppose you want to write a Python program that displays the following three lines of text: Nudge nudge Wink wink Know what I mean? 
To write the program you would use a simple text editor like Notepad (which is installed on all Windows computers) to create a file containing the following statements: print('Nudge nudge') print('Wink wink') print('Know what I mean?') Note: It is possible to use a word processor to create a Python program, but you must be sure to save the program as a plain text file. Otherwise the Python interpreter will not be able to read its contents. When you save a Python program, you give it a name that ends with the .py extension, which identifies it as a Python program. For example, you might save the program previously shown Page 6 with the name test.py. To run the program you would go to the directory in which the file is saved and type the following command at the operating system command line: python test.py This starts the Python interpreter in script mode and causes it to execute the statements in the file test.py. When the program finishes executing, the Python interpreter exits. The previous sections described how the Python interpreter can be started in interactive mode or script mode at the operating system command line. As an alternative, you can use a program named IDLE, which is automatically installed when the Python language is installed. (IDLE stands for Integrated DeveLopment Environment.) When you run IDLE, the window shown in Figure 1-1 appears. Notice that the >>> prompt appears in the IDLE window, indicating that the interpreter is running in interactive mode. You can type Python statements at this prompt and see them executed in the IDLE window. IDLE also has a built-in text editor with features specifically designed to help you write Python programs. For example, the IDLE editor "colorizes" code so that key words and other parts of a program are displayed in their own distinct colors. This helps make programs easier to read. In IDLE you can write programs, save them to disk, and execute them. Appendix A provides a quick introduction to IDLE, and leads you through the process of creating, saving, and executing a Python program. Note: Although IDLE is installed with Python, there are several other Python IDEs available. Your instructor might prefer that you use a specific one in class. Page 7 Chapter 2 Input, Processing, and Output Displaying Screen Output A function is a piece of prewritten code that performs an operation. Python has numerous built- in functions that perform various operations. Perhaps the most fundamental built-in function is the print function, which displays output on the screen. Here is an example of a statement that executes the print function: print('Hello world') When programmers execute a function, they say that they are calling the function. When you call the print function, you type the word print, followed by a set of parentheses. Inside the parentheses, you type an argument, which is the data that you want displayed on the screen. In the previous example, the argument is 'Hello world'. The quote marks will not be displayed when the statement executes, however. The quote marks simply specify the beginning and the end of the text that we wish to display. Suppose your instructor tells you to write a program that displays your name and address on the computer screen. Program 2-1 shows an example of such a program, with the output that it will produce when it runs. (The line numbers that appear in a program listing in this book are not part of the program. We use the line numbers in our discussion to refer to parts of the program.) 
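The listing for Program 2-1 is missing at this point; judging from the output below and the note in the next section that its string literals are enclosed in single-quote marks, it would have read (the file name is illustrative):

Program 2-1 (output.py)
1 print('Kate Austen')
2 print('123 Full Circle Drive')
3 print('Asheville, NC 28899')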
Program Output
Kate Austen
123 Full Circle Drive
Asheville, NC 28899

Remember, these line numbers are NOT part of the program! Don't type the line numbers when you are entering program code. All of the programs in this booklet will show line numbers for reference purposes only. It is important to understand that the statements in this program execute in the order that they appear, from the top of the program to the bottom. When you run this program, the first statement will execute, followed by the second statement, and followed by the third statement.

Strings and String Literals

In Python code, string literals must be enclosed in quote marks. As mentioned earlier, the quote marks simply mark where the string data begins and ends. In Python you can enclose string literals in a set of single-quote marks (') or a set of double-quote marks ("). The string literals in Program 2-1 are enclosed in single-quote marks, but the program could also be written as shown here:

print("Kate Austen")
print("123 Full Circle Drive")
print("Asheville, NC 28899")

If you want a string literal to contain either a single-quote or an apostrophe as part of the string, you can enclose the string literal in double-quote marks. For example, Program 2-2 prints two strings that contain apostrophes:

print("Don't fear!")
print("I'm here!")

Program Output
Don't fear!
I'm here!

Likewise, you can use single-quote marks to enclose a string literal that contains double-quotes as part of the string. Program 2-3 shows an example:

print('Your assignment is to read "Hamlet" by tomorrow.')

Program Output
Your assignment is to read "Hamlet" by tomorrow.

Python also allows you to enclose string literals in triple quotes (either """ or '''). Triple-quoted strings can contain both single quotes and double quotes as part of the string. The following statement shows an example:

print('''I'm reading "Hamlet" tonight.''')

I'm reading "Hamlet" tonight.

Triple quotes can also be used to surround multiline strings, which is something that single and double quotes cannot be used for. Here is an example:

print("""One
Two
Three""")

One
Two
Three

Variables

Variables are not declared in Python. Instead, you use an assignment statement to create a variable. Here is an example of an assignment statement:

age = 25

After this statement executes, a variable named age will be created and it will reference the value 25. This concept is shown in Figure 2-1. In the figure, think of the value 25 as being stored somewhere in the computer's memory. The arrow that points from age to the value 25 indicates that the name age references the value. An assignment statement has this general format:

variable = expression

The equal sign (=) is known as the assignment operator. In the general format, variable is the name of a variable and expression is a value, or any piece of code that results in a value. After an assignment statement executes, the variable listed on the left side of the = operator will reference the value given on the right side of the = operator. The code in Program 2-4 demonstrates a variable:

1 # This program demonstrates a variable.
2 room = 503
3 print('I am staying in room number')
4 print(room)

Line 2 creates a variable named room and assigns it the value 503. The statements in lines 3 and 4 display a message. Notice that line 4 displays the value that is referenced by the room variable.

Program Output
I am staying in room number
503

You may choose your own variable names in Python, as long as you do not use any of the Python key words. The key words make up the core of the language and each has a specific purpose. Table 2-1 shows a list of the Python key words.
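Table 2-1 itself is missing here; for the Python 3 versions this booklet targets, the key words are:

False      None       True       and        as
assert     break      class      continue   def
del        elif       else       except     finally
for        from       global     if         import
in         is         lambda     nonlocal   not
or         pass       raise      return     try
while      with       yield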
Additionally, you must follow these rules when naming variables in Python:

- A variable name cannot contain spaces.
- The first character must be one of the letters a through z or A through Z, or an underscore character (_).
- After the first character, you may use the letters a through z or A through Z, the digits 0 through 9, or underscores.
- Uppercase and lowercase characters are distinct. This means the variable name ItemsOrdered is not the same as itemsordered.

If you look back at Program 2-4 you will see that we used the following two statements in lines 3 and 4:

print('I am staying in room number')
print(room)

We used two calls to the print function because we needed to display two pieces of data. Line 3 displays the string literal 'I am staying in room number', and line 4 displays the value referenced by the room variable. This program can be simplified, however, because Python allows us to display multiple items with one call to the print function. We simply have to separate the items with commas, as shown in Program 2-5:

1 room = 503
2 print('I am staying in room number', room)

Program Output
I am staying in room number 503

The statement in line 2 displays two items: a string literal followed by the value referenced by the room variable. Notice that Python automatically printed a space between these two items. When multiple items are printed this way in Python, they will automatically be separated by a space.

Python uses the int data type to store integers, and the float data type to store real numbers. Let's look at how Python determines the data type of a number. Many of the programs that you will see will have numeric data written into their code. For example, the following statement, which appears in Program 2-4, has the number 503 written into it.

room = 503

This statement causes the value 503 to be stored in memory, and it makes the room variable reference it. The following statement shows another example. This statement has the number 2.75 written into it.

dollars = 2.75

This statement causes the value 2.75 to be stored in memory, and it makes the dollars variable reference it. A number that is written into a program's code is called a numeric literal. When the Python interpreter reads a numeric literal in a program's code, it determines its data type according to the following rules:

- A numeric literal that is written as a whole number with no decimal point is considered an int. Examples are 7, 124, and -9.
- A numeric literal that is written with a decimal point is considered a float. Examples are 1.5, 3.14159, and 5.0.

So, the following statement causes the number 503 to be stored in memory as an int:

room = 503

And the following statement causes the number 2.75 to be stored in memory as a float:

dollars = 2.75

In addition to the int and float data types, Python also has a data type named str, which is used for storing strings in memory. The code in Program 2-6 shows how strings can be assigned to variables.

Program Output
Kathryn Marino

In this booklet we will use Python's built-in input function to read input from the keyboard. The input function reads a piece of data that has been entered at the keyboard and returns that piece of data, as a string, back to the program. You normally use the input function in an assignment statement that follows this general format:

variable = input(prompt)

In the general format, prompt is a string that is displayed on the screen. The string's purpose is to instruct the user to enter a value. variable is the name of a variable that will reference the data that was entered on the keyboard. Here is an example of a statement that uses the input function to read data from the keyboard:

name = input('What is your name? ')

The string 'What is your name? ' is displayed on the screen. The program pauses and waits for the user to type something on the keyboard, and then press the Enter key. When the Enter key is pressed, the data that was typed is returned as a string, and assigned to the name variable.
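The booklet demonstrates this with an interactive-mode session along the following lines (reconstructed to match the discussion that continues below):

>>> name = input('What is your name? ') [Enter]
What is your name? Holly [Enter]
>>> print(name) [Enter]
Holly
>>>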
When the first statement was entered, the interpreter displayed the prompt 'What is your name? ', and waited for the user to enter some data. The user entered Holly and pressed the Enter key. As a result, the string 'Holly' was assigned to the name variable. When the second statement was entered, the interpreter displayed the value referenced by the name variable. Program 2-7 shows a complete program that uses the input function to read two strings as input from the keyboard.

The input function always returns the user's input as a string, even if the user enters numeric data. For example, suppose you call the input function and the user types the number 72 and presses the Enter key. The value that is returned from the input function is the string '72'. This can be a problem if you want to use the value in a math operation. Math operations can be performed only on numeric values, not strings. Fortunately, Python has built-in functions that you can use to convert a string to a numeric type. Table 2-2 summarizes two of these functions: int(item), which converts item to an int, and float(item), which converts item to a float.

For example, suppose you are writing a payroll program and you want to get the number of hours that the user has worked. Look at the following code:

string_value = input('How many hours did you work? ')
hours = int(string_value)

The first statement gets the number of hours from the user, and assigns that value as a string to the string_value variable. The second statement calls the int() function, passing string_value as an argument. The value referenced by string_value is converted to an int, and assigned to the hours variable. This example illustrates how the int() function works, but it is inefficient because it creates two variables: one to hold the string that is returned from the input function, and another to hold the integer that is returned from the int() function. The following code shows a better approach. This one statement does all of the work that the previously shown two statements do, and it creates only one variable:

hours = int(input('How many hours did you work? '))

This one statement uses nested function calls. The value that is returned from the input function is passed as an argument to the int() function. After this statement executes, the hours variable will be assigned the value entered at the keyboard, converted to an int. Program 2-8 shows a sample program that uses the input function.

Program 2-8 (input.py)
1 age = int(input('What is your age? '))
2 print('Here is the value that you entered:')
3 print(age)

(This is the Python version of Program 2-2 in your textbook.)

Program Output (with Input Shown in Bold)
What is your age? 28 [Enter]
Here is the value that you entered:
28

You read floating-point numbers as input in a similar fashion. Suppose you want to get the user's hourly pay rate. The following statement prompts the user to enter that value at the keyboard, converts the value to a float, and assigns it to the pay_rate variable:

pay_rate = float(input('What is your hourly pay rate? '))

After this statement executes, the pay_rate variable will be assigned the value entered at the keyboard, converted to a float.

Performing Calculations

Table 2-3 lists the math operators that are provided by the Python language: + (addition), - (subtraction), * (multiplication), / (division), // (integer division), % (remainder), and ** (exponent). Program 2-9 shows an example program that uses several of these operators to calculate values and assign them to variables. (This program is the Python version of pseudocode Program 2-8 in your textbook.) In Python, the order of operations and the use of parentheses as grouping symbols works just as described in the textbook.
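For instance, when averaging three test scores, parentheses are needed to force the additions to happen before the division. A small sketch (the numbers are illustrative):

average = 100 + 90 + 95 / 3      # wrong: assigns 221.66..., division happens first
average = (100 + 90 + 95) / 3    # right: assigns 95.0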
Integer Division

In Python 3 (the version this booklet targets), the / operator always performs real division, even when both operands are integers. For example, look at the following statement:

number = 3 / 2

This statement assigns the value 1.5 to the number variable. If instead you want integer division, which throws away (truncates) the fractional part of the result, use the // operator. For example, we could rewrite the statement as follows:

number = 3 // 2

This statement assigns the value 1 to the number variable, not 1.5. (With a negative result, // rounds toward negative infinity, so -3 // 2 gives -2.)

Documenting a Program with Comments

To write a line comment in Python you simply place the # symbol where you want the comment to begin. The Python interpreter ignores everything from that point to the end of the line. Here is an example:

# The following statement displays a greeting.
print('Hello world')

Chapter 3 Modularizing Programs with Functions

Chapter 3 in your textbook discusses modules as named groups of statements that perform specific tasks in a program. You use modules to break a program down into small, manageable units. In Python, we use functions for this purpose. (In Python, the term "module" has a slightly different meaning. A Python module is a file that contains a set of related program elements, such as functions.) In this chapter we will discuss how to define and call Python functions, use local variables in a function, and pass arguments to a function. We also discuss global variables, and the use of global constants.

To create a function you write its definition. Here is the general format of a function definition in Python:

def function_name():
    statement
    statement
    etc.

The first line is known as the function header. It marks the beginning of the function definition. The function header begins with the key word def, followed by the name of the function, followed by a set of parentheses, followed by a colon. Beginning at the next line is a set of statements known as a block. A block is simply a set of statements that belong together as a group. These statements are performed any time the function is executed. Notice in the general format that all of the statements in the block are indented. This indentation is required because the Python interpreter uses it to tell where the block begins and ends.

Let's look at an example of a function. Keep in mind that this is not a complete program. We will show the entire program in a moment.

def message():
    print('I am Arthur,')
    print('King of the Britons.')

This code defines a function named message. The message function contains a block with two statements. Executing the function will cause these statements to execute.

Calling a Function

A function definition specifies what a function does, but it does not cause the function to execute. To execute a function, you must call it. This is how we would call the message function:

message()

When a function is called, the interpreter jumps to that function and executes the statements in its block. Then, when the end of the block is reached, the interpreter jumps back to the part of the program that called the function, and the program resumes execution at that point. When this happens, we say that the function returns. To fully demonstrate how function calling works, look at Program 3-1.

Program 3-1 (function_demo.py)
1 # This program demonstrates a function.
2 # First, we define a function named message.
3 def message():
4     print('I am Arthur,')
5     print('King of the Britons.')
6
7 # Call the message function.
8 message()

Program Output
I am Arthur,
King of the Britons.

When the Python interpreter reads the def statement in line 3, a function named message is created in memory, containing the block of statements in lines 4 and 5.
(A function definition creates a function, but it does not cause the function to execute.) Next, the interpreter encounters the comment in line 7, which is ignored. Then it executes the statement in line 8, which is a function call. This causes the message function to execute, which prints the two lines of output. Program 3-1 has only one function, but it is possible to define many functions in a program. In fact, it is common for a program to have a main function that is called when the program starts. The main function then calls other functions in the program as they are needed. It is often said that the main function contains a program's mainline logic, which is the overall logic of the program. Program 3-2 shows an example of a program with two functions: main and show_message. This is the Python version of Program 3-1 in your textbook, with some extra Page 20 Program 3-2 (function_demo2.py) 1 # Define the main function. This is the Python version 2 def main(): of Program 3-1 in your 3 print("I have a message for you.") textbook. 4 show_message() 5 print("That's all folks!") 6 7 # Define the show_message function. 8 def show_message(): 9 print("Hello world") 10 11 # Call the main function. 12 main() Program Output I have a message for you. Hello world That's all folks! The main function is defined in lines 2 through 5, and the show_message function is defined in lines 8 through 9. When the program runs, the statement in line 12 calls the main function, which then calls the show_message function in line 4. Indentation in Python In Python, each line in a block must be indented. As shown in Figure 3-1, the last indented line after a function header is the last line in the function's block. When you indent the lines in a block, make sure each line begins with the same number of spaces. Otherwise an error will occur. For example, the following function definition will cause an error because the lines are all indented with different numbers of spaces. Page 21 def my_function(): print('And now for') print('something completely') print('different.') In an editor there are two ways to indent a line: (1) by pressing the Tab key at the beginning of the line, or (2) by using the spacebar to insert spaces at the beginning of the line. You can use either tabs or spaces when indenting the lines in a block, but don't use both. Doing so may confuse the Python interpreter and cause an error. IDLE, as well as most other Python editors, automatically indents the lines in a block. When you type the colon at the end of a function header, all of the lines typed afterward will automatically be indented. After you have typed the last line of the block you press the Backspace key to get out of the automatic indentation. Tip: Python programmers customarily use four spaces to indent the lines in a block. You can use any number of spaces you wish, as long as all the lines in the block are indented by the same amount. Local Variables Anytime you assign a value to a variable inside a function, you create a local variable. A local variable belongs to the function in which it is created, and only statements inside that function can access the variable. (The term local is meant to indicate that the variable can be used only locally, within the function in which it is created.) In Chapter 3 of your textbook you learned that a variable's scope is the part of the program in which the variable may be accessed. A local variable's scope is the function in which the variable is created. 
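For instance, trying to access a function's local variable from outside that function produces an error. A minimal sketch (the names are illustrative, not from the textbook):

def show_number():
    number = 99        # number is local to show_number
    print(number)

show_number()          # displays 99
print(number)          # NameError: name 'number' is not defined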
Because a function’s local variables are hidden from other functions, the other functions may have their own local variables with the same name. For example, look at the Program 3-3. In addition to the main function, this program has two other functions: texas and california. These two functions each have a local variable named birds. Page 22 9 10 # Definition of the texas function. It creates 11 # a local variable named birds. 12 def texas(): 13 birds = 5000 14 print('texas has', birds, 'birds.') 15 16 # Definition of the california function. It also 17 # creates a local variable named birds. 18 def california(): 19 birds = 8000 20 print('california has', birds, 'birds.') 21 22 # Call the main function. 23 main() Program Output texas has 5000 birds. california has 8000 birds. Although there are two separate variables named birds in this program, only one of them is visible at a time because they are in different functions. If you want a function to receive arguments when it is called, you must equip the function with one or more parameter variables. A parameter variable, often simply called a parameter, is a special variable that is assigned the value of an argument when a function is called. Here is an example of a function that has a parameter variable: def double_number(value): result = value * 2 print(result) Page 23 Program 3-4 (pass_integer.py) 1 # Define the main function. 2 def main(): This is the Python version of 3 double_number(4) Program 3-5 in your textbook. 4 5 # Define the double_number function. 6 def double_number(value): 7 result = value * 2 8 print(result) 9 10 # Call the main function. 11 main() Program Output 8 When this program runs, the main function is called in line 11. Inside the main function, line 3 calls the double_number function passing the value 4 as an argument. The double_number function is defined in lines 6 through 8. The function has a parameter variable named value. In line 7 a local variable named result is assigned the value of the math expression value * 2. In line 8 the value of the result variable is displayed. Program Output Enter a number and I will display that number doubled: 20 [Enter] 40 When this program runs, the main function is called in line 12. Inside the main function, line 3 gets a number from the user and assigns it to the number variable. Line 4 calls the double_number function passing the number variable as an argument. Page 24 The double_number function is defined in lines 7 through 9. The function has a parameter variable named value. In line 8 a local variable named result is assigned the value of the math expression value * 2. In line 9 the value of the result variable is displayed. Often it is useful to pass more than one argument to a function. When you define a function, you must have a parameter variable for each argument that you want passed into the function. Program 3-6 shows an example. This is the Python version of pseudocode Program 3-7 in your textbook. Program Output The sum of 12 and 45 is 57 Notice that two parameter variable names, num1 and num2, appear inside the parentheses in the show_sum function header. This is often referred to as a parameter list. Also notice that a comma separates the variable names. The statement in line 6 calls the show_sum function and passes two arguments: 12 and 45. These arguments are passed by position to the corresponding parameter variables in the function. 
In other words, the first argument is passed to the first parameter variable, and the second argument is passed to the second parameter variable. So, this statement causes 12 to be assigned to the num1 parameter and 45 to be assigned to the num2 parameter. Page 25 Making Changes to Parameters When an argument is passed to a function in Python, the function parameter variable will reference the argument's value. However, any changes that are made to the parameter variable will not affect the argument. To demonstrate this look at Program 3-7. Program Output The value is 99 I am changing the value. Now the value is 0 Back in main the value is 99 The main function creates a local variable named value in line 5, assigned the value 99. The statement in line 6 displays The value is 99. The value variable is then passed as an argument to the change_me function in line 7. This means that in the change_me function the arg parameter will also reference the value 99. Global Variables When a variable is created by an assignment statement that is written outside all the functions in a program file, the variable is global A global variable can be accessed by any statement in the program file, including the statements in any function. For example, look at Program 3-8. Page 26 5 # the value of the global variable. 6 def show_value(): 7 print(my_value) 8 9 # Call the show_value function. 10 show_value() Program Output 10 The assignment statement in line 2 creates a variable named my_value. Because this statement is outside any function, it is global. When the show_value function executes, the statement in line 7 prints the value referenced by my_value. An additional step is required if you want a statement in a function to assign a value to a global variable. In the function you must declare the global variable, as shown in Program 3-9. Program Output Enter a number: 22 [Enter] The number you entered is 22 The assignment statement in line 2 creates a global variable named number. Notice that inside the main function, line 5 uses the global key word to declare the number variable. This statement tells the interpreter that the main function intends to assign a value to the global number variable. That's just what happens in line 6. The value entered by the user is assigned to number. Page 27 Global Constants The Python language does not allow you to create true global constants, but you can simulate them with global variables. If you do not declare a global variable with the global key word inside a function, then you cannot change the variable's assignment. Page 28 Chapter 4 Decision Structures and Boolean Logic Relational Operators and the if Statement Python's relational operators, shown in Table 4-1, are exactly the same as those discussed in your textbook. The relational operators are used to create Boolean expressions, and are commonly used with if statements. Here is the general format of the if statement in Python: if condition: statement statement etc. For simplicity, we will refer to the first line as the if clause. The if clause begins with the word if, followed by a condition, which is an expression if statement executes, the condition is tested. If the condition is true, the statements that appear in the block following the if clause are executed. If the condition is false, the statements in the block are skipped. Program 4-1 demonstrates the if statement. This Python program is similar to pseudocode Program 4-1 in your textbook. 
Page 29 Program 4-1 (test_average.py) 1 # This program prompts the user to enter three test 2 # scores. It displays the average of those scores and 3 # and congratulates the user if the average is 95 4 # or greater. 5 This is the Python version of 6 def main(): Program 4-1 in your textbook. 7 # Get the three test scores. 8 test1 = float(input('Enter the score for test 1: ')) 9 test2 = float(input('Enter the score for test 2: ')) 10 test3 = float(input('Enter the score for test 3: ')) 11 12 # Calculate the average test score. 13 average = (test1 + test2 + test3) / 3.0 14 15 # Print the average. 16 print('The average score is', average) 17 18 # If the average is 95 or greater, 19 # congratulate the user. 20 if average >= 95: 21 print('Congratulations!') 22 print('That is a great average!') 23 24 # Call the main function. 25 main() You use the if-else statement in Python to create a dual alternative decision structure. This is the format of the if-else statement: Page 30 if condition: statement statement etc. else: statement statement etc. When this statement executes, the condition is tested. If it is true, the block of indented statements following the if clause is executed, and then control of the program jumps to the statement that follows the if-else statement. If the condition is false, the block of indented statements following the else clause is executed, and then control of the program jumps to the statement that follows the if-else statement. Program 4-2 shows an example. This is the Python version of pseudocode Program 4-2 in your textbook. The program gets the number of hours that the user has worked (line 11) and the user's hourly pay rate (line 12). The if-else statement in lines 15 through 18 determines whether the user has worked more than 40 hours. If so, the calc_pay_with_OT function is called in line 16. Otherwise the calc_regular_pay function is called in line 18. Page 31 30 # Calculate the gross pay. 31 gross_pay = BASE_HOURS * rate + overtime_pay 32 33 # Display the gross pay. 34 print('The gross pay is $', format(gross_pay, '.2f')) 35 36 # The calc_regular_pay function calculates pay with 37 # no overtime. It accepts the hours worked and the hourly 38 # pay rate as arguments. The gross pay is displayed. 39 def calc_regular_pay(hours, rate): 40 # Calculate the gross pay. 41 gross_pay = hours * rate 42 43 # Display the gross pay. 44 print('The gross pay is $', format(gross_pay, '.2f')) 45 46 # Call the main function. 47 main() Comparing Strings name1 = 'Mary' name2 = 'Mark' if name1 == name2: print('The names are the same.') else: print('The names are NOT the same.') The == operator compares name1 and name2 to determine whether they are equal. Because the strings 'Mary' and 'Mark' are not equal, the else clause will display the message 'The names are NOT the same.' Let's look at another example. Assume the month variable references a string. The following code uses the != operator to determine whether the value referenced by month is not equal to 'October'. if month != 'October': print('This is the wrong time for Octoberfest!') Page 32 Program 4-3 is a complete program demonstrating how two strings can be compared. This is the Python version of pseudocode Program 4-3 in your textbook. The program prompts the user to enter a password and then determines whether the string entered is equal to 'prospero'. String comparisons are case sensitive. 
For example, the strings 'saturday' and 'Saturday' are not equal because the “s” is lowercase in the first string, but uppercase in the second string. The following sample session with Program 4-3 shows what happens when the user enters Prospero as the password (with an uppercase P). Page 33 Nested Decision Structures Program 4-4 shows an example of nested decision structures. As noted in your textbook, this type of nested decision structure can also be written as an if-else-if statement, as shown in Program 4-5. Page 34 Program 4-5 (if_elif_else.py) 1 def main(): 2 # Prompt the user to enter the temperature. 3 temp = float(input('What is the temperature outside? ')) 4 5 # Determine the type of weather we're having. 6 if temp < 30: 7 print('Wow. That is cold.') 8 elif temp < 50: 9 print('A little chilly.') 10 elif temp < 80: 11 print('Nice and warm.') 12 else: 13 print('Whew! It is hot!') 14 15 # Call the main function. 16 main() Note: Python does not prove a Case structure, so we can't cover that topic in this booklet. Page 35 Logical Operators Table 4-2 shows Python's logical operators, which work like the ones discussed in your textbook. For example, the following if statement checks the value of x to determine if it is in the range of 20 through 40: The Boolean expression in the if statement will be true only when x is greater than or equal to 20 AND less than or equal to 40. The value in x must be within the range of 20 through 40 for this expression to be true. The following statement determines whether x is outside the range of 20 through 40: Page 36 First, the expression (temperature > 100) is tested and a value of either true or false is the result. Then the not operator is applied to that value. If the expression (temperature > 100) is true, the not operator returns false. If the expression (temperature > 100) is false, the not operator returns true. The previous code is equivalent to asking: “Is the temperature not greater than 100?” Boolean Variables The bool data type allows you to create variables that may reference one of two possible values: True or False. Here are examples of how we assign values to a bool variable: hungry = True sleepy = False Boolean variables are most commonly used as flags that signals when some condition exists in the program. When the flag variable is set to False, it indicates the condition does not exist. When the flag variable is set to True, it means the condition does exist. For example, suppose a salesperson has a quota of $50,000. Assuming sales references the amount that the salesperson has sold, the following code determines whether the quota has been met: As a result of this code, the sales_quota_met variable can be used as a flag to indicate whether the sales quota has been met. Later in the program we might test the flag in the following way: if sales_quota_met: print('You have met your sales quota!') This code displays 'You have met your sales quota!' if the bool variable sales_quota_met is True. Notice that we did not have to use the == operator to explicitly compare the sales_quota_met variable with the value True. This code is equivalent to the following: if sales_quota_met == True: print('You have met your sales quota!') Page 37 Chapter 5 Repetition Structures The while Loop while condition: statement statement etc. We will refer to the first line as the while clause. The while clause begins with the word while, followed by a Boolean condition while loop executes, the condition is tested. 
If the condition is true, the statements that appear in the block following the while clause are executed, and then the loop starts over. If the condition is false, the program exits the loop. Program 5-1 shows an example of the while loop. This is the Python version of pseudocode Program 5-2 in your textbook. Page 38 27 # Call the main function. 28 main() In Python, the for statement is designed to work with a sequence of data items. When the statement executes, it iterates once for each item in the sequence. We will use the for statement in the following general format: We will refer to the first line as the for clause. In the for clause, variable is the name of a variable. Inside the brackets a sequence of values appears, with a comma separating each value. (In Python, a comma-separated sequence of data items that are enclosed in a set of brackets is called a list.) Beginning at the next line is a block of statements that is executed each time the loop iterates. The for statement executes in the following manner: The variable is assigned the first value in the list, and then the statements that appear in the block are executed. Then, variable is assigned the next value in the list, and the statements in the block are executed again. This Page 39 continues until variable has been assigned the last value in the list. Program 5-2 shows a simple example that uses a for loop to display the numbers 1 through 5. Program Output I will display the numbers 1 through 5. 1 2 3 4 5 The first time the for loop iterates, the num variable is assigned the value 1 and then the statement in line 7 executes (displaying the value 1). The next time the loop iterates, num is assigned the value 2, and the print function is called (displaying the value 2). This process continues, as shown in Figure 5-1, until num has been assigned the last value in the list. Because the list contains five values, the loop will iterate five times. Python programmers commonly refer to the variable that is used in the for clause as the target variable because it is the target of an assignment at the beginning of each loop iteration The values that appear in the list do not have to be a consecutively ordered series of numbers. For example, Program 5-3 uses a for loop to display a list of odd numbers. There are five numbers in the list, so the loop iterates five times. Page 40 Figure 5-1 The for loop Program Output I will display the odd numbers 1 through 9. 1 3 5 7 9 Page 41 Program 5-4 shows another example. In this program the for loop iterates over a list of strings. Notice that the list (in line 5) contains the three strings 'Winken', 'Blinken', and 'Nod'. As a result, the loop iterates three times. Program Output Winken Blinken Nod Python provides a built-in function named range that simplifies the process of writing a count- controlled for loop. Here is an example of a for loop that uses the range function: Notice that instead of using a list of values, we call to the range function passing 5 as an argument. In this statement the range function will generate a list of integers in the range of 0 up to (but not including) 5. This code works the same as the following: As you can see, the list contains five numbers, so the loop will iterate five times. Program 5-5 uses the range function with a for loop to display "Hello world" five times. Page 42 7 print('Hello world!') 8 9 # Call the main function. 
10 main() Program Output Hello world Hello world Hello world Hello world Hello world If you pass one argument to the range function, as demonstrated in Program 5-5, that argument is used as the ending limit of the list. If you pass two arguments to the range function, the first argument is used as the starting value of the list and the second argument is used as the ending limit. Here is an example: 1 2 3 4 By default, the range function produces a list of numbers that increase by 1 for each successive number in the list. If you pass a third argument to the range function, that argument is used as step value. Instead of increasing by 1, each successive number in the list will increase by the step value. Program 5-6 shows an example: Program Output 1 3 5 7 9 11 Page 43 In this for statement, three arguments are passed to the range function: The first argument, 1, is the starting value for the list. The second argument, 12, is the ending limit of the list. This means that the last number in the list will be 11. The third argument, 2, is the step value. This means that 2 will be added to each successive number in the list. In a for loop, the purpose of the target variable is to reference each item in a sequence of items as the loop iterates. In many situations it is helpful to use the target variable in a calculation or other task within the body of the loop. For example, suppose you need to write a program that displays the numbers 1 through 10 and their squares, in a table similar to the following: Number Square 1 1 2 4 3 9 4 16 5 25 6 36 7 49 8 64 9 81 10 100 This can be accomplished by writing a for loop that iterates over the values 1 through 10. During the first iteration, the target variable will be assigned the value 1, during the second iteration it will be assigned the value 2, and so forth. Because the target variable will reference the values 1 through 10 during the loop’s execution, you can use it in the calculation inside the loop. Program 5-7 shows how this is done. Page 44 11 # and their squares. 12 for number in range(1, 11): 13 square = number**2 14 print(number, '\t', square) 15 16 # Call the main function. 17 main() Program Output Number Square -------------- 1 1 2 4 3 9 4 16 5 25 6 36 7 49 8 64 9 81 10 100 First, take a closer look at line 7, which displays the table headings: print('Number\tSquare') Notice that inside the string literal the \t escape sequence between the words Number and Square. These are special formatting characters known as the tab escape sequence. The escape sequence works similarly to the word Tab that is used in pseudocode in your textbook. As you can see in the program output, the "\t" characters are not displayed on the screen, but rather cause the output cursor to "tab over." It is useful for aligning output in columns on the screen. The for loop that begins in line 12 uses the range function to produce a list containing the numbers 1 through 10. During the first iteration, number will reference 1, during the second iteration number will reference 2, and so forth, up to 10. Inside the loop, the statement in line 13 raises number to the power of 2 (recall that ** is the exponent operator), and assigns the result to the square variable. The statement in line 14 prints the value referenced by number, tabs over, and then prints the value referenced by square. (Tabbing over with the \t escape sequence causes the numbers to be aligned in two columns in the output.) 
Page 45 Calculating a Running Total Your textbook discusses the common programming task of calculating the sum of a series of values, also known as calculating a running total. Program 5-8 demonstrates how this can be done in Python. The total variable that is declared in line 6 is the accumulator variable. Notice that it is initialized with the value 0. During each loop iteration the user enters a number, and in line 15 this number is added to the value already stored in total. The total variable accumulates the sum of all the numbers entered by the user. This program is the Python version of pseudocode Program 5-18 in your textbook. Page 46 Chapter 6 Value-Returning Functions The functions that you learned about in Chapter 3 of this booklet are simple functions that do not return a value. A function can also be written to return a value to the statement that called the function. We refer to such a function as a value-returning function. Python provides several library functions for working with random numbers. To use any of these functions you first need to write this import statement at the top of your program: import random The first random-number generating function that we will discuss is random.randint. The following statement shows an example of how you might call the random.randint function. The part of the statement that reads random.randint(1, 100) is a call to the function. Notice that two arguments appear inside the parentheses: 1 and 100. These arguments tell the function to give an integer random number in the range of 1 through 100. (The values 1 and 100 are included in the range.) Notice that the call to the random.randint function appears on the right side of an = operator. When the function is called, it will generate a random number in the range of 1 through 100 and then return that number. The number that is returned will be assigned to the number variable. Program 6-1 shows a complete demonstration. This is the Python version of pseudocode Program 6-2 in your textbook. This program uses a for loop that iterates five times. Inside the loop, the statement in line 8 calls the random.randint function to generate a random number in the range of 1 through 100. Page 47 Program 6-2 (random_numbers2.py) 1 # This program displays five random 2 # numbers in the range of 1 through 100. 3 import random This is the Python version 4 of Program 6-2 in your 5 def main(): textbook. 6 for count in range(5): 7 # Get a random number. 8 number = random.randint(1, 100) 9 # Display the number. 10 print(number) 11 12 # Call the main function. 13 main() Program Output 89 7 16 41 12 The standard library contains numerous functions for working with random numbers. In addition to the random.randint function, you might find the random.randrange, random, and random.uniform functions useful. (To use any of these functions you need to write import random at the top of your program.) If you remember how to use the range function (which we discussed in Chapter 5) then you will immediately be comfortable with the random.randrange function. The random.randrange function takes the same arguments as the range function. The difference is that the random.randrange function does not return a list of values. Instead, it returns a randomly selected value from a sequence of values. For example, the following statement assigns a random number in the range of 0 through 9 to the number variable: number = random.randrange(10) The argument, in this case 10, specifies the ending limit of the sequence of values. 
The function will return a randomly-selected number from the sequence of values 0 up to, but not including, the ending limit. The following statement specifies both a starting value and an ending limit for the sequence:

number = random.randrange(5, 10)

Page 48
When this statement executes, a random number in the range of 5 through 9 will be assigned to number. The following statement specifies a starting value, an ending limit, and a step value:

number = random.randrange(0, 101, 10)

In this statement the random.randrange function returns a randomly selected value from the following sequence of numbers:

[0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]

The random.random function returns a random floating-point number in the range of 0.0 up to (but not including) 1.0. You do not pass any arguments to it. Here is an example of how it is called:

number = random.random()

The random.uniform function also returns a random floating-point number, but allows you to specify the range of values to select from. Here is an example, which assigns a random floating-point number in the range of 1.0 through 10.0 to the number variable:

number = random.uniform(1.0, 10.0)

Up to now, all of the functions that you have written have been simple functions that do not return a value. You write a value-returning function in the same way that you write a simple function, with one exception: a value-returning function must have a return statement. Here is the general format of a value-returning function definition in Python:

def function_name():
    statement
    statement
    etc.
    return expression

One of the statements in the function must be a return statement, which takes the following form:

Page 49
return expression

The value of the expression that follows the key word return will be sent back to the part of the program that called the function. This can be any value, variable, or expression that has a value (such as a math expression). For example, here is a function named sum:

def sum(num1, num2):
    result = num1 + num2
    return result

The purpose of this function is to accept two integer values as arguments and return their sum. Let's take a closer look at how it works. The first statement in the function's block assigns the value of num1 + num2 to the result variable. Next, the return statement executes, which causes the function to end execution and sends the value referenced by the result variable back to the part of the program that called the function. Program 6-3 demonstrates the function.

Page 50
11     total = sum(first_age, second_age)
12
13     # Display the total age.
14     print('Together you are', total, 'years old.')
15
16 # The sum function accepts two numeric arguments and
17 # returns the sum of those arguments.
18 def sum(num1, num2):
19     result = num1 + num2
20     return result
21
22 # Call the main function.
23 main()

Returning Strings
So far you've seen examples of functions that return numbers. You can also write functions that return strings. For example, the following function prompts the user to enter his or her name, and then returns the string that the user entered.

def get_name():
    # Get the user's name.
    name = input('Enter your name: ')
    return name

Python allows you to write Boolean functions, which return either True or False. For example, the following function accepts a number as an argument, and returns True if the argument is an even number. Otherwise, it returns False.

Page 51
def is_even(number):
    # Determine whether number is even. If it is,
    # set status to true. Otherwise, set status
    # to false.
    if (number % 2) == 0:
        status = True
    else:
        status = False
    return status

The following code gets a number from the user, and then calls the function to determine whether the number is even or odd:

number = int(input('Enter a number: '))
if is_even(number):
    print('That number is even.')
else:
    print('That number is odd.')

Math Functions
The Python standard library contains several functions that are useful for performing mathematical operations. Table 6-2 lists many of the math functions.
To use these functions you first need to write this import statement at the top of your program:

import math

These functions typically accept one or more values as arguments, perform a mathematical operation using the arguments, and return the result. For example, one of the functions is named math.sqrt. The math.sqrt function accepts an argument and returns the square root of the argument. Here is an example of how it is used:

result = math.sqrt(16)

Page 52
math.exp(x)      Returns e raised to the power x.
math.floor(x)    Returns the largest integer that is less than or equal to x.
math.hypot(x, y) Returns the length of a hypotenuse that extends from (0, 0) to (x, y).
math.log(x)      Returns the natural logarithm of x.
math.log10(x)    Returns the base-10 logarithm of x.
math.radians(x)  Assuming x is an angle in degrees, the function returns the angle converted to radians.
math.sin(x)      Returns the sine of x in radians.
math.sqrt(x)     Returns the square root of x.
math.tan(x)      Returns the tangent of x in radians.

The Python library also defines two variables, math.pi and math.e, which are assigned the mathematical values for pi and e. You can use these variables in equations that require their values. For example, the following statement, which calculates the area of a circle, uses pi. (Notice that we use dot notation to refer to the variable.)

area = math.pi * radius**2

Formatting Numbers
Chapter 6 in your textbook also covers formatting functions, and gives an example for formatting a number to look like a currency amount. In Python 3, we use the built-in format function to format numbers for output. When you call the built-in format function, you pass it two arguments: a numeric value and a format specifier. The format specifier is a string that contains special characters specifying how the numeric value should be formatted. Let's look at an example:

format(12345.6789, '.2f')

The first argument, which is the floating-point number 12345.6789, is the number that we want to format. The second argument, which is the string '.2f', is the format specifier. Here's the meaning of its contents: The .2 specifies the precision. It indicates that we want to round the number to two decimal places. The f specifies that the data type of the number we are formatting is a floating-point number. (If you are formatting an integer, you cannot use f for the type. We will discuss integer formatting momentarily.)

Page 53
The format function returns a string containing the formatted number. The following interactive mode session demonstrates how you use the format function along with the print function to display a formatted number:

>>> print(format(12345.6789, '.2f'))
12345.68

Notice that the number is rounded to two decimal places. The following example shows the same number, rounded to one decimal place:

>>> print(format(12345.6789, '.1f'))
12345.7

Here is an example that prints multiple items, one of which is a formatted number:

>>> print('The value is', format(12345.6789, '.2f'))
The value is 12345.68

Page 54
Inserting Comma Separators
If you want the number to be formatted with comma separators, you can insert a comma into the format specifier, as shown here:

>>> print(format(12345.6789, ',.2f'))
12,345.68

String Functions
You use the len function to get the length of a string. The following code demonstrates:

name = 'Charlemagne'
strlen = len(name)

After this code executes, the strlen variable will be assigned the value 11, which is the number of characters in the string 'Charlemagne'.

Appending Strings
In Python, you do not use a function to append a string to another string. Instead, you use the + operator. When the + operator is used with two strings, it performs string concatenation. This means that it appends one string to another.
For example, look at the following statement:

message = 'This is ' + 'one string.'

After this statement executes, the message variable will reference the string 'This is one string.'

The upper method returns a copy of a string with all of its letters converted to uppercase. Here is an example:

littleName = 'herman'
Page 55
bigName = littleName.upper()

After this code executes, the bigName variable will reference the string "HERMAN". The lower method returns a copy of a string with all of its letters converted to lowercase. Here is an example:

bigName = 'HERMAN'
littleName = bigName.lower()

After this code executes, the littleName variable will reference the string "herman".

In Python you can use the find method to perform a task similar to that of the contains function discussed in your textbook. The find method searches for a substring within a string. The method returns the index of the first occurrence of the substring, if it is found. (The index is the character position number within the string. The first character in a string is at index 0, the second character in a string is at index 1, and so forth.) If the substring is not found, the method returns -1. Here is an example:

position = 'Four score and seven years'.find('score')

After this statement executes, position will be assigned the value 5, which is the index of the first character of 'score' within the string.

String Slicing
A slice is a span of items that are taken from a sequence. When you take a slice from a string, you get a span of characters from within the string. String slices are also called substrings. To get a slice of a string, you write an expression in the following general format:

string[start : end]

In the general format, start is the index of the first character in the slice, and end is the index marking the end of the slice. The expression will return a string containing a copy of the
Page 56
characters from start up to (but not including) end. For example, suppose we have the following:

full_name = 'Patty Lynn Smith'
middle_name = full_name[6:10]

The second statement assigns the string 'Lynn' to the middle_name variable. If you leave out the start index in a slicing expression, Python uses 0 as the starting index. Here is an example:

full_name = 'Patty Lynn Smith'
first_name = full_name[:5]

The second statement assigns the string 'Patty' to first_name. If you leave out the end index in a slicing expression, Python uses the length of the string as the end index. Here is an example:

full_name = 'Patty Lynn Smith'
last_name = full_name[11:]

The second statement assigns the string 'Smith' to last_name. What do you think the following code will assign to the my_string variable?

full_name = 'Patty Lynn Smith'
my_string = full_name[:]

The second statement assigns the entire string 'Patty Lynn Smith' to my_string. The statement is equivalent to:

my_string = full_name[0 : len(full_name)]

The slicing examples we have seen so far get slices of consecutive characters from strings. Slicing expressions can also have a step value, which can cause characters to be skipped in the string. Here is an example of code that uses a slicing expression with a step value:

letters = 'ABCDEFGHIJKLMNOPQRSTUVWXYZ'
print(letters[0:26:2])

The third number inside the brackets is the step value. A step value of 2, as used in this example, causes the slice to contain every second character from the specified range in the string. The code will print the following:

ACEGIKMOQSUWY

Page 57
You can also use negative numbers as indexes in slicing expressions, to reference positions relative to the end of the string. Here is an example:

full_name = 'Patty Lynn Smith'
last_name = full_name[-5:]

Recall that Python adds a negative index to the length of a string to get the position referenced by that index. The second statement in this code assigns the string 'Smith' to the last_name variable. If the end index specifies a position beyond the end of the string, Python will use the length of the string instead. If the start index specifies a position before the beginning of the string, Python will use 0 instead.
If the start index is greater than the end index, the slicing expression will return an empty string. Page 58 Chapter 7 Input Validation Chapter 7 in your textbook discusses the process of input validation in detail. There are no new language features introduced in the chapter, so here we will simply show you a Python version of the pseudocode Program 7-2. This program uses an input validation loop in lines 28 through 30 to validate that the value entered by the user is not negative. Program Output Enter the item's wholesale cost: -.50 [Enter] ERROR: The cost cannot be negative. Page 59 Enter the correct wholesale cost: 0.50 [Enter] The retail price is $ 1.25. Do you have another item? (Enter y for yes): n [Enter] Page 60 Chapter 8 Arrays (Lists) In Python, you create lists instead of arrays. A list is similar to an array, but provides many more capabilities than a traditional array. A list is an object that contains multiple data items. Each item that is stored in a list is called an element. Here is a statement that creates a list of integers: The items that are enclosed in brackets and separated by commas are the values of the list elements. The following is another example: You can use the print function to display an entire list, as shown here: When the print function is called in the second statement, it will display the elements of the list like this: You access each element of a list with a subscript. As discussed in your textbook, the first element's subscript is 0, the second element's subscript is 1, and so forth. The last element's subscript is the array size minus 1. Program 8-1 shows an example of a list being used to hold values entered by the user. This is the Python version of pseudocode Program 8-1 in your textbook. Page 61 11 # Get the hours worked by employee 3. 12 hours[2] = int(input('Enter the hours worked by employee 3: ')) 13 14 # Display the values entered. 15 print('The hours you entered are:') 16 print(hours[0]) 17 print(hours[1]) 18 print(hours[2]) Program Output Enter the hours worked by employee 1: 40 [Enter] Enter the hours worked by employee 2: 20 [Enter] Enter the hours worked by employee 3: 15 [Enter] The hours you entered are: 40 20 15 An error will occur if you use an invalid index with a list. For example, look at the following code: The last time that this loop iterates, the index variable will be assigned the value 4, which is an invalid index for the list. As a result, the statement that calls the print function will cause an error. The len function, that you learned about in Chapter 6 of this language companion, can be used with lists as well as strings. When you pass a list as an argument, the len function returns the number of elements in the list. The previously shown code, which causes an error, can be modified as follows to prevent the error: Page 62 Iterating Over a List with the for Loop You can easily iterate over the contents of a list with the for loop, as shown here: The for statement executes in the following manner: The variable n is assigned a copy of the first value in the list, and then the statements that appear in the block are executed. Then, the variable n is assigned a copy of the next value in the list, and the statements in the block are executed again. This continues until the variable has been assigned the last value in the list. 
If we run this code, it will print:

99
100
101
102

Keep in mind that as the for loop executes, the n variable is assigned a copy of the list elements, and any changes made to the n variable do not affect the list. To demonstrate, look at the following:

1 numbers = [99, 100, 101, 102]
2 for n in numbers:
3     n = 0
4 for n in numbers:
5     print(n)

The statement in line 3 merely reassigns the n variable to a different value (0). It does not change the list element that n referred to before the statement executed. When this code executes, the statement in line 5 will print:

99
100
101
102

Program 8-2 further demonstrates the use of loops with a list. This is the Python version of pseudocode Program 8-3 in your textbook.

Page 63
7
8 # Get the hours for each employee.
9 while index < len(hours):
10     hours[index] = int(input('Enter the hours worked by employee ' +
11                              str(index + 1) + ': '))
12     index += 1
13
14 # Display the values entered.
15 for n in hours:
16     print(n)

Program Output
Enter the hours worked by employee 1: 40 [Enter]
Enter the hours worked by employee 2: 20 [Enter]
Enter the hours worked by employee 3: 15 [Enter]
40
20
15

Page 64
22 else:
23     print('You did not earn 100 on any test.')

Program Output
You earned 100 on test number 4

Program Output
Enter a name to search for in the list: Matt Hoyle [Enter]
That name was found in element 3

Program Output
Enter a name to search for in the list: Terry Thompson [Enter]
That name was not found in the array.

Page 65
Passing a List as an Argument to a Function

def set_to_zero(numbers):
    index = 0
    while index < len(numbers):
        numbers[index] = 0
        index = index + 1

The function's parameter, numbers, is used to refer to a list. When you call this function and pass a list as an argument, the loop sets each element to 0. Here is an example of code that calls the function:

my_list = [1, 2, 3, 4, 5]
set_to_zero(my_list)
print(my_list)

When this code executes, the print function will display:

[0, 0, 0, 0, 0]

Program 8-5 gives a complete demonstration of passing a list to a function. This is the Python version of pseudocode Program 8-13 in your textbook.

Page 66
14 def get_total(value_list):
15     # Create a variable to use as an accumulator.
16     total = 0
17
18     # Calculate the total of the list elements.
19     for num in value_list:
20         total += num
21
22     # Return the total.
23     return total
24
25 # Call the main function.
26 main()

Program Output
The sum of the array elements is 30

Two-Dimensional Arrays
In Python you can create a list of lists, which acts much like a two-dimensional array. For example, suppose we create a list named numbers with two elements, where each element is itself a list. Suppose the first element is the following list:

[1, 2, 3]

The following statement prints the contents of numbers[0], which is the first element:

print(numbers[0])

[1, 2, 3]

The following statement prints the contents of numbers[1], which is the second element:

print(numbers[1])

Page 67
If we execute this statement, the second element, which is also a list, will be displayed. As discussed in your textbook, we normally think of two-dimensional arrays as having rows and columns. We can use this metaphor with two-dimensional lists as well. For example, let's say a two-dimensional list named scores contains three rows and three columns of test scores. By declaring the list with each row (that is, each inner list) written on a separate line, it's easy to see how we can think of the list as a set of rows and columns. When processing the data in a two-dimensional list, we use two subscripts: one for the row and another for the column.
In the scores list, the elements in row 0 are referenced as follows:

scores[0][0]
scores[0][1]
scores[0][2]

The elements in row 1 are referenced as follows:

scores[1][0]
scores[1][1]
scores[1][2]

And the elements in row 2 are referenced as follows:

scores[2][0]
scores[2][1]
scores[2][2]

To access one of the elements in a two-dimensional list, you use both subscripts. For example, the following statement prints the number in scores[0][2]:

print(scores[0][2])

And the following statement assigns the value 95 to the element in row 2, column 1:

scores[2][1] = 95

Page 68
Programs that process two-dimensional lists can do so with nested loops. For example, the following code displays all of the elements in the scores list:

NUM_ROWS = 3
NUM_COLS = 3
row = 0
while row < NUM_ROWS:
    col = 0
    while col < NUM_COLS:
        print(scores[row][col])
        col = col + 1
    row = row + 1

And the following code prompts the user to enter a score, once for each element in the list:

NUM_ROWS = 3
NUM_COLS = 3
row = 0
while row < NUM_ROWS:
    col = 0
    while col < NUM_COLS:
        scores[row][col] = int(input('Enter a score: '))
        col = col + 1
    row = row + 1

Program 8-6 shows a complete example. It creates a list with three rows and four columns, prompts the user for values to store in each element, and then displays the values in each element. This is the Python version of pseudocode Program 8-16 in your textbook.

Page 69
16     row = row + 1
17
18 # Display the values in the list.
19 print('Here are the values you entered.')
20 row = 0
21 while row < ROWS:
22     col = 0
23     while col < COLS:
24         print(values[row][col])
25         col = col + 1
26     row = row + 1

Program Output
Enter a number: 1 [Enter]
Enter a number: 2 [Enter]
Enter a number: 3 [Enter]
Enter a number: 4 [Enter]
Enter a number: 5 [Enter]
Enter a number: 6 [Enter]
Enter a number: 7 [Enter]
Enter a number: 8 [Enter]
Enter a number: 9 [Enter]
Enter a number: 10 [Enter]
Enter a number: 11 [Enter]
Enter a number: 12 [Enter]
Here are the values you entered.
1
2
3
4
5
6
7
8
9
10
11
12

Page 70
Chapter 9 Sorting and Searching
Chapter 9 in your textbook discusses the following sorting algorithms: Bubble Sort, Selection Sort, and Insertion Sort. The Binary Search algorithm is also discussed. The textbook chapter examines these algorithms in detail, and no new language features are introduced. For these reasons we will simply present the Python code for the algorithms in this chapter. For more in-depth coverage of the logic involved, consult the textbook.

Bubble Sort
Program 9-1 is only a partial program. It shows the Python version of pseudocode Program 9-1, which is the Bubble Sort algorithm.

Program 9-1 (bubble_sort.py)
1 # Note: This is not a complete program.
2 #
3 # The bubble_sort function uses the bubble sort algorithm
4 # to sort a list of integers.
5 # Note the following:
6 # (1) We do not have to pass the array size because we
7 #     can use the len function.
8 # (2) We do not have a separate method to swap values.
9 #     The swap is performed inside this method.
10
11 def bubble_sort(arr):
12     # Set max_element to the length of the arr list, minus
13     # one. This is necessary for the outer loop.
14     max_element = len(arr) - 1
15
16     # The outer loop positions max_element at the last element
17     # to compare during each pass through the list. Initially
18     # max_element is the index of the last element in the array.
19     # During each iteration, it is decreased by one.
20     while max_element >= 0:
21         # Set index to 0, necessary for the inner loop.
22         index = 0
23
24         # The inner loop steps through the list, comparing
25         # each element with its neighbor. All of the elements
26         # from index 0 through max_element are involved in the
27         # comparison. If two elements are out of order, they
28         # are swapped.
29         while index <= max_element - 1:
30             # Compare an element with its neighbor.
31             if arr[index] > arr[index + 1]:
Page 71
32                 # Swap the two elements.
33                 temp = arr[index]
34                 arr[index] = arr[index + 1]
35                 arr[index + 1] = temp
36             # Increment index.
37             index = index + 1
38         # Decrement max_element.
39         max_element = max_element - 1

Selection Sort
Program 9-2 is also a partial program. It shows the Python version of the selectionSort pseudocode module that is shown in Program 9-5 in your textbook.

Program 9-2 (selection_sort.py)
1 # Note: This is not a complete program.
2 #
3 # The selection_sort function performs the selection sort
4 # algorithm on a list of integers.
5
6 def selection_sort(arr):
7     # Set start_scan to 0. This is necessary for
8     # the outer loop. It is the starting position
9     # of the scan.
10    start_scan = 0
11
12    # The outer loop iterates once for each element in the
13    # list. The start_scan variable marks the position where
14    # the scan should begin.
15    while start_scan < len(arr) - 1:
16        # Assume the first element in the scannable area
17        # is the smallest value.
18        min_index = start_scan
19        min_value = arr[start_scan]
20
21        # Initialize index for the inner loop.
22        index = start_scan + 1
23
24        # Scan the list, starting at the 2nd element in
25        # the scannable area. We are looking for the smallest
26        # value in the scannable area.
27        while index < len(arr):
28            if arr[index] < min_value:
29                min_value = arr[index]
30                min_index = index
31            # Increment index.
32            index = index + 1
Page 72
33
34        # Swap the element with the smallest value
35        # with the first element in the scannable area.
36        arr[min_index] = arr[start_scan]
37        arr[start_scan] = min_value
38
39        # Increment start_scan.
40        start_scan = start_scan + 1

Insertion Sort
Program 9-3 is also a partial program. It shows the Python version of the insertionSort pseudocode module that is shown in Program 9-6 in your textbook.

Page 73
30     # within the sorted subset.
31     arr[scan] = unsorted_value
32
33     # Increment index.
34     index = index + 1

Binary Search
Program 9-4 is also a partial program. It shows the Python version of the binarySearch pseudocode module that is shown in Program 9-7 in your textbook.

Page 74
33     # if it was not found.
34     return position

Page 75
Chapter 10 Files
Opening a File
You use the open function in Python to open a file. The open function creates a file object and associates it with a file on the disk. Here is the general format of how the open function is used:

file_variable = open(filename, mode)

For example, suppose the file customers.txt contains customer data, and we want to open it for reading. Here is an example of how we would call the open function:

customer_file = open('customers.txt', 'r')

After this statement executes, the file named customers.txt will be opened, and the variable customer_file will reference a file object that we can use to read data from the file. Suppose we want to create a file named sales.txt and write data to it. Here is an example of how we would call the open function:

sales_file = open('sales.txt', 'w')

After this statement executes, the file named sales.txt will be created, and the variable sales_file will reference a file object that we can use to write data to the file.

Page 76
Warning: Remember, when you use the 'w' mode you are creating the file on the disk. If a file with the specified name already exists when the file is opened, the contents of the existing file will be erased.
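This warning has a practical consequence: opening an existing file with the 'w' mode silently destroys its contents. If you want to guard against that, one approach (not covered in your textbook) is to test whether the file already exists before opening it. The following sketch uses the os.path.exists function from Python's standard library; the filename sales.txt is just an illustration:

import os.path

filename = 'sales.txt'
if os.path.exists(filename):
    # The file exists; opening it with 'w' would
    # erase its contents.
    print('Warning:', filename, 'already exists.')
else:
    # Safe to create the file.
    outfile = open(filename, 'w')
    outfile.write('100.0\n')
    outfile.close()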
Once you have opened a file for writing, you use the file object's write method to write data to a file. Here is the general format of how you call the write method:

file_variable.write(string)

In the format, file_variable is a variable that references a file object, and string is a string that will be written to the file. The file must be opened for writing (using the 'w' or 'a' mode) or an error will occur. Let's assume that customer_file references a file object, and the file was opened for writing with the 'w' mode. Here is an example of how we would write the string 'Charles Pace' to the file:

customer_file.write('Charles Pace')

You can also write the contents of a variable to a file, as shown here:

name = 'Charles Pace'
customer_file.write(name)

The second statement writes the value referenced by the name variable to the file associated with customer_file. In this case, it would write the string 'Charles Pace' to the file. (These examples show a string being written to a file, but you can also write numeric values.)

Closing a File
Once a program is finished working with a file, it should close the file. Closing a file disconnects the program from the file. In Python you use the file object's close method to close a file. For example, the following statement closes the file that is associated with customer_file:

customer_file.close()

Once a file is closed, the connection between it and the file object is removed. In order to perform further operations on the file, it must be opened again. Program 10-1 shows a complete Python program that opens an output file, writes data to it, and then closes it. This is the Python version of pseudocode Program 10-1 in your textbook.

Page 77
Program 10-1 (file_write_demo.py)
1 # This program writes three lines of data
2 # to a file.
3 def main():
4     # Open a file named philosophers.txt.
5     outfile = open('philosophers.txt', 'w')
6
7     # Write the names of three philosophers
8     # to the file.
9     outfile.write('John Locke\n')
10     outfile.write('David Hume\n')
11     outfile.write('Edmund Burke\n')
12
13     # Close the file.
14     outfile.close()
15
16 # Call the main function.
17 main()

When this program executes, line 5 creates a file named philosophers.txt on the disk, and lines 9 through 11 write the strings 'John Locke\n', 'David Hume\n', and 'Edmund Burke\n' to the file. Line 14 closes the file. Notice the use of the \n that appears inside the strings that are written to the file in lines 9, 10, and 11. The \n sequence is known as an escape character. An escape character is a special character that is preceded with a backslash (\), appearing inside a string literal. When a string literal that contains escape characters is printed on the screen or written to a file, the escape characters are treated as special commands that are embedded in the string. The \n sequence is the newline escape character. When the \n escape character is printed by the print function, it isn't displayed on the screen. Instead, it causes output to advance to the next line. For example, look at the following statement:

print('One\nTwo\nThree')

When this statement executes, it will display:

One
Two
Three

So why did we include a \n at the end of each item that is written to the file in Program 10-1? Let's take a closer look. The statements in lines 9 through 11 write three strings to the file. Line
Page 78
9 writes the string 'John Locke\n', line 10 writes the string 'David Hume\n', and line 11 writes the string 'Edmund Burke\n'. Line 14 closes the file. After this program runs, the three items shown in Figure 10-1 will be written to the philosophers.txt file.
Notice that each of the strings written to the file end with \n. The \n marks the location where a new line begins in the file. We can see how this works if we open the file in a text editor. For example, Figure 10-2 shows the philosophers.txt file as it appears in Notepad. If a file has been opened for reading (using the 'r' mode) you can use the file object's readline method to read a line from the file. The method returns the line as a string, including the \n. Program 10-2 shows how we can use the readline method to read the contents of the philosophers.txt file, one line at a time. (This is the Python version of pseudocode Program 10-2 in your textbook.) Page 79 6 7 # Read three lines from the file 8 line1 = infile.readline() 9 line2 = infile.readline() 10 line3 = infile.readline() 11 12 # Close the file. 13 infile.close() 14 15 # Print the names that were read. 16 print('Here are the names of three philosophers:') 17 print(line1) 18 print(line2) 19 print(line3) 20 21 # Call the main function. 22 main() Program Output Here are the names of three philosophers: John Locke David Hume Edmund Burke The statement in line 5 opens the philosophers.txt file for reading, using the 'r' mode. It also creates a file object and assigns the object to the infile variable. When a file is opened for reading, a special value known as a read position is internally maintained for that file. A file’s read position marks the location of the next item that will be read from the file. Initially, the read position is set to the beginning of the file. After the statement in line 5 executes, the read position for the philosophers.txt file will be positioned as shown in Figure 10-3. The statement in line 8 calls the infile.readline method to read the first line from the file. The line, which is returned as a string, is assigned to the line1 variable. After this statement executes the line1 variable will be assigned the string 'John Locke\n'. In addition, the file’s read position will be advanced to the next line in the file, as shown in Figure 10-4. Page 80 Figure 10-4 Read position advanced to the next line Then the statement in line 9 reads the next line from the file and assigns it to the line2 variable. After this statement executes the line2 variable will be assigned the string 'David Hume\n'. The file’s read position will be advanced to the next line in the file, as shown in Figure 10-5. Then the statement in line 10 reads the next line from the file and assigns it to the line3 variable. After this statement executes the line3 variable will be assigned the string 'Edmund Burke\n'. After this statement executes, the read position will be advanced to the end of the file, as shown in Figure 10-6. 10-7 shows the line1, line2, and line3 variables and the strings they are assigned after these statements have executed. Page 81 Figure 10-7 The strings referenced by the line1, line2, and line3 variables The statement in line 13 closes the file. The statements in lines 17 through 19 display the contents of the line1, line2, and line3 variables. Note: If the last line in a file is not terminated with a \n, the readline method will return the line without a \n. Sometimes complications are caused by the \n that appears at the end of the strings that are returned from the readline method. For example, did you notice in the sample output of Program 10-2 that a blank line is printed after each line of output? This is because each of the strings that are printed in lines 17 through 19 end with a \n escape sequence. 
When the strings are printed, the \n causes an extra blank line to appear. The \n serves a necessary purpose inside a file: it separates the items that are stored in the file. However, in most cases you want to remove the \n from a string after it is read from a file. Each string in Python has a method named rstrip that removes, or "strips", specific characters from the end of a string. The following code shows an example of how the rstrip method can be used. The first statement assigns the string 'Joanne Manchester\n' to the name variable. (Notice that the string ends with the \n escape sequence.) The second statement calls the name.rstrip('\n') method. The method returns a copy of the name string without the trailing \n. This string is assigned back to the name variable. The result is that the trailing \n is stripped away from the name string. Program 10-3 is another program that reads and displays the contents of the philosophers.txt file. This program uses the rstrip method to strip the \n from the strings that are read from the file before they are displayed on the screen. As a result, the extra blank lines do not appear in the output. Page 82 Program 10-3 (strip_newline.py) 1 # This program reads the contents of the 2 # philosophers.txt file one line at a time. 3 def main(): 4 # Open a file named philosophers.txt. 5 infile = open('philosophers.txt', 'r') 6 7 # Read three lines from the file 8 line1 = infile.readline() 9 line2 = infile.readline() 10 line3 = infile.readline() 11 12 # Strip the \n from each string. 13 line1 = line1.rstrip('\n') 14 line2 = line2.rstrip('\n') 15 line3 = line3.rstrip('\n') 16 17 # Close the file. 18 infile.close() 19 20 # Print the names that were read. 21 print('Here are the names of three philosophers:') 22 print(line1) 23 print(line2) 24 print(line3) 25 26 # Call the main function. 27 main() Program Output Here are the names of three philosophers: John Locke David Hume Edmund Burke Program 10-1 wrote three string literals to a file, and each string literal ended with a \n escape sequence. In most cases, the data items that are written to a file are not string literals, but values in memory that are referenced by variables. This would be the case in a program that prompts the user to enter data, and then writes that data to a file. When a program writes data that has been entered by the user to a file, it is usually necessary to concatenate a \n escape sequence to the data before writing it. This ensures that each piece of data is written to a separate line in the file. Program 10-4 demonstrates how this is done. Page 83 Program 10-4 (write_names.py) 1 # This program gets three names from the user 2 # and writes them to a file. 3 4 def main(): 5 # Get three names. 6 print('Enter the names of three friends.') 7 name1 = input('Friend #1: ') 8 name2 = input('Friend #2: ') 9 name3 = input('Friend #3: ') 10 11 # Open a file named friends.txt. 12 myfile = open('friends.txt', 'w') 13 14 # Write the names to the file. 15 myfile.write(name1 + '\n') 16 myfile.write(name2 + '\n') 17 myfile.write(name3 + '\n') 18 19 # Close the file. 20 myfile.close() 21 print('The names were written to friends.txt.') 22 23 # Call the main function. 24 main() Lines 7 through 9 prompt the user to enter three names, and those names are assigned to the variables name1, name2, and name3. Line 12 opens a file named friends.txt for writing. Then, lines 15 through 17 write the names entered by the user, each with '\n' concatenated to it. 
As a result, each name will have the \n escape sequence added to it when written to the file. Figure 10-8 shows the contents of the file with the names entered by the user in the sample run. Page 84 Appending Data to an Existing File When you use the 'w' mode to open an output file and a file with the specified filename already exists on the disk, the existing file will be erased and a new empty file with the same name will be created. Sometimes you want to preserve an existing file and append new data to its current contents. Appending data to a file means writing new data to the end of the data that already exists in the file. In Python you can use the 'a' mode to open an output file in append mode, which means the following. If the file already exists, it will not be erased. If the file does not exist, it will be created. When data is written to the file, it will be written at the end of the file’s current contents. For example, assume the file friends.txt contains the following names, each in a separate line: Joe Rose Geri The following code opens the file and appends additional data to its existing contents. After this program runs, the file friends.txt will contain the following data: Joe Rose Geri Matt Chris Suze Strings can be written directly to a file with the write method, but numbers must be converted to strings before they can be written. Python has a built-in function named str that converts a value to a string. For example, assuming the variable num is assigned the value 99, the expression str(num) will return the string '99'. Page 85 Program 10-5 shows an example of how you can use the str function to convert a number to a string, and write the resulting string to a file. The statement in line 7 opens the file numbers.txt for writing. Then the statements in lines 10 through 12 prompt the user to enter three numbers, which are assigned to the variables num1, num2, and num3. Take a closer look at the statement in line 15, which writes the value referenced by num1 to the file: outfile.write(str(num1) + '\n') Page 86 The expression str(num1) + '\n' converts the value referenced by num1 to a string and concatenates the \n escape sequence to the string. In the program's sample run, the user entered 22 as the first number, so this expression produces the string '22\n'. As a result, the string '22\n' is written to the file. Lines 16 and 17 perform the similar operations, writing the values referenced by num2 and num3 to the file. After these statements execute, the values shown in Figure 10-9 will be written to the file. Figure 10-10 shows the file viewed in Notepad. When you read numbers from a text file, they are always read as strings. For example, suppose a program uses the following code to read the first line from the numbers.txt file that was created by Program 10-5: The statement in line 2 uses the readline method to read a line from the file. After this statement executes, the value variable will reference the string '22\n'. This can cause a problem if we intend to perform math with the value variable, because you cannot perform math on strings. In such a case you must convert the string to a numeric type. Python provides the built-in function int to convert a string to an integer, and the built-in function float to convert a string to a floating-point number. 
For example, we could modify the code previously shown as follows: Page 87 1 infile = open('numbers.txt', 'r') 2 string_input = infile.readline() 3 value = int(string_input) 4 infile.close() The statement in line 2 reads a line from the file and assigns it to the string_input variable. As a result, string_input will reference the string '22\n'. Then the statement in line 3 uses the int function to convert string_input to an integer, and assigns the result to value. After this statement executes, the value variable will reference the integer 22. (Both the int and float functions ignore the \n that appears at the end of the string that is passed as an argument.) This code demonstrates the steps involved in reading a string from a file with the readline method, and then converting that string to an integer with the int function. In many situations, however, the code can be simplified. A better way is to read the string from the file and convert it in one statement, as shown here: Notice in line 2 that a call to the readline method is used as the argument to the int function. Here's how the code works: the readline method is called, and it returns a string. That string is passed to the int function, which converts it to an integer. The result is assigned to the value variable. Program 10-6 demonstrates how a loop can be used to collect items of data to be stored in a file. This is the Python version of pseudocode Program 10-3 in your textbook. Figure 10-11 shows the contents of the sales.txt file containing the data entered by the user in the sample run. Page 88 10 sales_file = open('sales.txt', 'w') 11 12 # Get the amount of sales for each day and write 13 # it to the file. 14 for count in range(1, num_days + 1): 15 # Get the sales for a day. 16 sales = float(input('Enter the sales for day #' + 17 str(count) + ': ')) 18 19 # Write the sales amount to the file. 20 sales_file.write(str(sales) + '\n') 21 22 # Close the file. 23 sales_file.close() 24 print('Data written to sales.txt.') 25 26 # Call the main function. This is the Python version of 27 main() Program 10-3 in your textbook. Page 89 Detecting the End of a File In Python, the readline method returns an empty string ('') when it has attempted to read beyond the end of a file. This makes it possible to write a while loop that determines when the end of a file has been reached. Program 10-7 demonstrates how this can be done in code. The program reads and displays all of the values in the sales.txt file. (This is the Python version of pseudocode Program 10-3 in your textbook.) Page 90 $ 4000.00 $ 5000.00 In the previous example you saw how the readline method returns an empty string when the end of the file has been reached. The Python language also allows you to write a for loop that automatically reads line in a file without testing for any special condition that signals the end of the file. The loop does not require a priming read operation, and it automatically stops when the end of the file has been reached. When you simply want to read the lines in a file, one after the other, this technique is simpler and more elegant than writing a while loop that explicitly tests for an end of the file condition. Here is the general format of the loop: In the general format, variable is the name of a variable and file_object is a variable that references a file object. The loop will iterate once for each line in the file. 
The first time the loop iterates, variable will reference the first line in the file (as a string), the second time the loop iterates, variable will reference the second line, and so forth. Program 10-8 provides a demonstration. It reads and displays all of the items in the sales.txt file. Page 91 Program Output $ 1000.00 $ 2000.00 $ 3000.00 $ 4000.00 $ 5000.00 Page 92 Chapter 11 Menu-Driven Programs Chapter 11 in your textbook discusses menu-driven programs. A menu-driven program presents a list of operations that the user may select from (the menu), and then performs the operation that the user selected. There are no new language features introduced in the chapter, so here we will simply show you a Python program that is menu-driven. Program 11-1 is the Python version of the pseudocode Program 11-3. Program Output 1. Convert inches to centimeters. 2. Convert feet to meters. This is the Python version of 3. Convert miles to kilometers. Program 11-3 in your textbook. Page 93 Enter the number of inches: 10 [Enter] That is equal to 25.4 centimeters. Program Output 1. Convert inches to centimeters. 2. Convert feet to meters. 3. Convert miles to kilometers. Program Output 1. Convert inches to centimeters. 2. Convert feet to meters. 3. Convert miles to kilometers. Page 94 Chapter 12 Text Processing Chapter 12 in your textbook discusses programming techniques for working with the individual characters in a string. Python allows you to retrieve the individual characters in a string using subscript notation, as described in the book. For example, the following code creates the string 'Hello', and then uses subscript notation to print the first character in the string: greeting = 'Hello' print(greeting[0]) Although you can use subscript notation to retrieve the individual characters in a string, you cannot use it to change the value of a character within a string. This is because strings in Python are immutable, which means that once they are created, they cannot be changed. Because Python strings are immutable, you cannot use an expression in the form string[index] on the left side of an assignment operator. For example, the following code will cause an error: The last statement in this code will cause an error because it attempts to change the value of the first character in the string 'Bill'. Because of string immutability, we will not be able to show a simple Python version of pseudocode Program 12-3. Also, there are no string methods for inserting and deleting characters, so we will not discuss that section in this chapter. Program 12-1 shows the Python version of pseudocode Program 12-1 in the textbook. Page 95 6 print(name[0]) 7 print(name[1]) 8 print(name[2]) 9 print(name[3]) 10 print(name[4]) Program Output J a c o b Program 12-2 is the Python for loop version of pseudocode Program 12-2 in your textbook, and Program 12-3 is the while loop version. Both of these programs use a loop to step through all of the characters in a string. Page 96 Program Output J a c o b Python provides string testing methods that are similar to the character testing library functions shown in Table 12-2 in your textbook. The Python methods that are similar to those functions are shown here, in Table 12-1. The difference between these methods and the character testing functions discussed in the textbook is that the Python functions operate on an entire string. 
For example, the following code determines whether all of the characters in the string referenced by the my_string variable are uppercase: my_string = "ABC" if my_string.isupper(): print('That string is all uppercase.') This code will print the message That string is all uppercase because all of the characters in the string that is assigned to my_string are uppercase. Page 97 These methods can be applied to an individual character in a string, however. Here is an example: my_string = "Abc" if my_string[0].isupper(): print('The first character is uppercase.') This code determines whether the character at subscript 0 in my_string is uppercase (and, it is). Program 12-4 further demonstrates the isupper() method. This program is the Python version of Program 12-4 in your textbook. Program Output Enter a sentence: Mr. Jones will arrive TODAY! [Enter] That string has 7 uppercase letters. Page 98 Chapter 13 Recursion A Python function can call itself recursively, allowing you to design algorithms that recursively solve a problem. Chapter 13 in your textbook describes recursion in detail, discusses problem solving with recursion, and provides several pseudocode examples. Other than the technique of a function recursively calling itself, no new language features are introduced. In this chapter we will present Python versions of two of the pseudocode programs that are shown in the textbook. Both of these programs work exactly as the algorithms are described in the textbook. Program 13-1 is the Python version of pseudocode Program 13-2. Program Output This is a recursive function. This is a recursive function. This is a recursive function. This is a recursive function. This is a recursive function. Next, Program 13-2 is the Python version of pseudocode Program 13-3. This program recursively calculates the factorial of a number. Page 99 7 8 # Get the factorial of the number. 9 fact = factorial(number) 10 11 # Display the factorial. 12 print('The factorial of', number, 'is', fact) 13 14 # The factorial function uses recursion to 15 # calculate the factorial of its argument, 16 # which is assumed to be nonnegative. 17 def factorial(num): 18 if num == 0: 19 return 1 20 else: 21 return num * factorial(num - 1) 22 23 # Call the main function. 24 main() Page 100 Chapter 14 Object-Oriented Programming Python is a powerful object-oriented language. An object is an entity that exists in the computer's memory while the program is running. An object contains data and has the ability to perform operations on its data. An object's data is commonly referred to as the object's fields, and the operations that the object performs are the object's methods. In addition to the many objects that are provided by the Python language, you can create objects of your own design. The first step is to write a class. A class is like a blueprint. It is a declaration that specifies the methods for a particular type of object. When the program needs an object of that type, it creates an instance of the class. (An object is an instance of a class.) class ClassName: Method definitions go here… The first line of a class declaration begins with the keyword class, followed by the name of the class, followed by a colon. The class's method definitions follow this line. Method definitions are written much like regular function definitions. Because they belong to the class, method definitions must be indented. 
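To make the format concrete, here is a minimal class of our own (not an example from your textbook). It has a single method, and the method definition is indented inside the class. The self parameter that appears in the method header is explained in the discussion that follows:

class Greeter:
    # The show_greeting method displays a simple message.
    def show_greeting(self):
        print('Hello from a Greeter object!')

# Create an instance of the class and call its method.
my_greeter = Greeter()
my_greeter.show_greeting()

When this code runs, it displays: Hello from a Greeter object!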
One difference that you will notice between Python class declarations and the pseudocode class declarations in the textbook is that there are no field declarations in a Python class. This is because an object's fields are created by assignment statements that appear inside the class's methods. Another difference that you will notice is the absence of access specifiers such as Private and Public. In Python we hide a field or method by starting its name with two underscores. This is similar to making the field or method private. The following Python program contains a CellPhone class like the one shown in your textbook in Class Listing 14-3. It also has a main function to demonstrate the class, like that shown in pseudocode Program 14-3 in your textbook.

Program 14-1
(This is the Python version of Class Listing 14-3 and Program 14-1 in your textbook.)
1 class CellPhone:
2     def set_manufacturer(self, manufact):
3         self.__manufacturer = manufact
4
5     def set_model_number(self, model):
6         self.__model_number = model
7
8     def set_retail_price(self, retail):
Page 101
9         self.__retail_price = retail
10
11     def get_manufacturer(self):
12         return self.__manufacturer
13
14     def get_model_number(self):
15         return self.__model_number
16
17     def get_retail_price(self):
18         return self.__retail_price
19
20 def main():
21     # Create a CellPhone object. The phone
22     # variable will reference the object.
23     phone = CellPhone()
24
25     # Store values in the object's fields.
26     phone.set_manufacturer("Motorola")
27     phone.set_model_number("M1000")
28     phone.set_retail_price(199.99)
29
30     # Display the values stored in the fields.
31     print(phone.get_manufacturer())
32     print(phone.get_model_number())
33     print(phone.get_retail_price())
34
35 # Call the main function.
36 main()

The CellPhone class declaration begins in line 1. It has the following method definitions: set_manufacturer, set_model_number, set_retail_price, get_manufacturer, get_model_number, and get_retail_price.

Page 102
Notice that each of the methods has a parameter named self. The self parameter is required in every method that a class has. A method operates on a specific object's data attributes. When a method executes, it must have a way of knowing which object's data attributes it is supposed to operate on. That's where the self parameter comes in. When a method is called, Python automatically makes its self parameter reference the specific object that the method is supposed to operate on. Now let's look at the set_manufacturer method in lines 2 through 3. Notice that in addition to the self parameter, it also has a parameter named manufact. The statement in line 3 assigns manufact to self.__manufacturer. What is self.__manufacturer? Let's analyze it: self references the object that the method is operating on, and __manufacturer is the name of a field belonging to that object. So, the statement in line 3 assigns the value of the manufact parameter to a CellPhone object's __manufacturer field. The get_manufacturer method, in lines 11 through 12, returns the value of the object's __manufacturer field. The get_model_number method, in lines 14 through 15, returns the value of the object's __model_number field. The get_retail_price method, in lines 17 through 18, returns the object's __retail_price field. Inside the main function, line 23 creates an instance of the CellPhone class in memory and assigns it to the phone variable. We say that the object is referenced by the phone variable. (Notice that Python does not require the New keyword, as discussed in your textbook.) Lines 26 through 28 call the object's set_manufacturer, set_model_number, and set_retail_price methods, passing arguments to each.

Page 103
In line 26, the argument "Motorola" is being passed into the set_manufacturer method's manufact parameter. In line 27 the argument "M1000" is being passed into the set_model_number method's model parameter. In line 28 the 199.99 argument is being passed into the set_retail_price method's retail parameter.
Lines 31 through 33 call the print function to display the values of the object's fields.

Constructors
In Python, classes can have a method named __init__ which is automatically executed when an instance of the class is created in memory. The __init__ method is commonly known as an initializer method because it initializes the object's data attributes. (The name of the method starts with two underscore characters, followed by the word init, followed by two more underscore characters.) Program 14-2 shows a version of the CellPhone class that has an __init__ method. This is the Python version of Class Listing 14-4 combined with pseudocode Program 14-2 from your textbook.

Program 14-2
(This is the Python version of Class Listing 14-4 and Program 14-2 in your textbook.)
1 class CellPhone:
2     def __init__(self, manufact, model, retail):
3         self.__manufacturer = manufact
4         self.__model_number = model
5         self.__retail_price = retail
6
7     def set_manufacturer(self, manufact):
8         self.__manufacturer = manufact
9
10     def set_model_number(self, model):
11         self.__model_number = model
12
13     def set_retail_price(self, retail):
14         self.__retail_price = retail
15
16     def get_manufacturer(self):
17         return self.__manufacturer
18
19     def get_model_number(self):
20         return self.__model_number
21
22     def get_retail_price(self):
23         return self.__retail_price
Page 104
24
25 def main():
26     # Create a CellPhone object and initialize its
27     # fields with values passed to the __init__ method.
28     phone = CellPhone("Motorola", "M1000", 199.99)

The statement in line 28 creates a CellPhone object in memory and assigns it to the phone variable. Notice that the values "Motorola", "M1000", and 199.99 appear inside the parentheses after the class name. These values are passed as arguments to the class's __init__ method.

Inheritance
The inheritance example discussed in your textbook starts with the GradedActivity class (see Class Listing 14-8), which is used as a superclass. The FinalExam class is then used as a subclass (see Class Listing 14-9). The Python versions of these classes are shown in Program 14-3. This program also has a main function that demonstrates how the inheritance works.

Page 105
Program 14-3 (inheritance_demo.py)
(This is the Python version of Class Listing 14-8, Class Listing 14-9, and Program 14-3 in your textbook. Note that in the get_grade method, grade is a local variable, not a class field; likewise, numeric_score is a local variable in FinalExam's __init__ method. Line 34 calls the inherited set_score method.)
1 class GradedActivity:
2     def set_score(self, s):
3         self.__score = s
4
5     def get_score(self):
6         return self.__score
7
8     def get_grade(self):
9         if self.__score >= 90:
10            grade = 'A'
11        elif self.__score >= 80:
12            grade = 'B'
13        elif self.__score >= 70:
14            grade = 'C'
15        elif self.__score >= 60:
16            grade = 'D'
17        else:
18            grade = 'F'
19        return grade
20
21 class FinalExam(GradedActivity):
22     def __init__(self, questions, missed):
23         # Set the __num_questions and __num_missed fields.
24         self.__num_questions = questions
25         self.__num_missed = missed
26
27         # Calculate the points for each question and
28         # the numeric score for this exam.
29         self.__points_each = 100.0 / questions
30         numeric_score = 100.0 - (missed * self.__points_each)
31
32         # Call the inherited set_score method to
33         # set the numeric score.
34         self.set_score(numeric_score)
35
36     def get_points_each(self):
37         return self.__points_each
38 39 def get_num_missed(self): 40 return self.__num_missed 41 42 def main(): 43 # Prompt the user for the number of questions 44 # on the exam. 45 questions = int(input('Enter the number of questions on the exam: ')) 46 47 # Prompt the user for the number of questions 48 # missed by the student. 49 missed = int(input('Enter the number of questions that the student missed: ')) 50 51 # Create a FinalExam object. 52 exam = FinalExam(questions, missed) 53 54 # Display the test results. 55 print('Each question on the exam counts', exam.get_points_each(), 'points.') 56 print('The exam score is', exam.get_score()) 57 print('The exam grade is', exam.get_grade()) 58 59 # Call the main function. 60 main() Page 106 Program Output Enter the number of questions on the exam: 20 [Enter] Enter the number of questions that the student missed: 3 [Enter] Each question on the exam counts 5.0 points. The exam score is 85.0 The exam grade is B The GradedActivity class is declared in lines 1 through 19. Then the FinalExam class is declared in lines 21 through 40. Notice the first line of the FinalExam class in line 21: class FinalExam(GradedActivity): By writing GradedActivity inside parentheses after the class name, we are indicating that the FinalExam class extends the GradedActivity class. As a result, GradedActivity is the superclass and FinalExam is the subclass. Polymorphism Your textbook presents a polymorphism demonstration that uses the Animal class (Class Listing 14-10) as a superclass, and the Dog class (Class Listing 14-11) and Cat class (Class Listing 14-12) as subclasses of Animal. The Python versions of those classes are shown here. The main function and the show_animal_info functions are the Python equivalent of Program 14-6 in your textbook. Page 107 20 print('Meow') 21 22 # Here is the main function. 23 24 def main(): 25 # Create an animal object, a Dog object, and 26 # a Cat object. 27 my_animal = Animal() 28 my_dog = Dog() 29 my_cat = Cat() 30 31 # Show info about an animal. 32 print('Here is info about an animal.') 33 show_animal_info(my_animal) 34 print() 35 36 # Show info about a dog. 37 print('Here is info about a dog.') 38 show_animal_info(my_dog) 39 print() 40 41 # Show info about a cat. 42 print('Here is info about a cat.') 43 show_animal_info(my_cat) 44 45 # The show_animal_info function accepts an Animal 46 # object as an argument and displays information 47 # about it. 48 49 def show_animal_info(creature): 50 creature.show_species() 51 creature.make_sound() 52 53 # Call the main function. 54 main() Program Output Here is info about an animal. I am just a regular animal. Grrrrrrr Page 108 Here is info about a cat. I am a cat. Meow Page 109 Chapter 15 GUI Applications and Event-Driven Programming Python does not have GUI programming features built into the language itself. However, it comes with a module named Tkinter that allows you to create simple GUI programs. The name "Tkinter" is short for "Tk interface." It is named this because it provides a way for Python programmers to use a GUI library named Tk. Many other programming languages use the Tk library as well. This chapter will give you a brief introduction to GUI programming using Python and Tkinter. We won’t go over all of the features, but we will discuss an adequate number of topics to get you started. Note: There are numerous GUI libraries available for Python. Because the Tkinter module comes with Python, we will use only it in this chapter. 
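Before working through the examples in this chapter, you can quickly confirm that tkinter is available on your system. The following two statements (our own quick check, not a program from your textbook) import the module and display the version of the underlying Tk library:

import tkinter
print(tkinter.TkVersion)

If the import succeeds and a version number such as 8.6 is displayed, the examples that follow should run on your system.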
A GUI program presents a window with various graphical widgets that the user can interact with and/or display data to the user. The tkinter module provides 15 widgets, which are described in Table 15-1. We won't cover all of the Tkinter widgets, but we will demonstrate how to create simple GUI programs that gather input and display data.

Table 15-1 Tkinter widgets

Button - A widget that the user can click to cause an action to take place.
Canvas - A rectangular area that can be used to display graphics.
Checkbutton - A button that may be in either the "on" or "off" position.
Entry - An area in which the user may type a single line of input from the keyboard.
Frame - A container that can hold other widgets.
Label - An area that displays one line of text or an image.
Listbox - A list from which the user may select an item.
Menu - A list of menu choices that are displayed when the user clicks a Menubutton widget.
Menubutton - A menu that is displayed on the screen and may be clicked by the user.
Message - Displays multiple lines of text.
Radiobutton - A widget that can be either selected or deselected. Radiobuttons usually appear in groups and allow the user to select one of several options.
Scale - A widget that allows the user to select a value by moving a slider along a track.
Scrollbar - A widget that can be used with other widgets to provide scrolling ability.
Text - A widget that allows the user to enter multiple lines of text input.
Toplevel - A container, like a Frame, but displayed in its own window.

The simplest GUI program that we can demonstrate is one that displays an empty window. Program 15-1 shows how we can do this using the tkinter module. When the program runs, the window shown in Figure 15-1 is displayed. To exit the program, simply click the standard Windows close button in the upper right corner of the window.

Program 15-1

1  # This program displays an empty window.
2
3  import tkinter
4
5  def main():
6      # Create the main window widget.
7      main_window = tkinter.Tk()
8
9      # Enter the tkinter main loop.
10     tkinter.mainloop()
11
12 # Call the main function.
13 main()

In order for the program to use the tkinter module, it must have an import statement such as the one shown in line 3. Inside the main function, line 7 creates an instance of the tkinter module's Tk class, and assigns it to the main_window variable. This object is the root widget, which is the main window in the program. Line 10 calls the tkinter module's mainloop function. This function runs like an infinite loop until you close the main window.

Most programmers prefer to take an object-oriented approach when writing a GUI program. Rather than writing a function to create the on-screen elements of a program, it is a common practice to write a class with an __init__ method that builds the GUI. When an instance of the class is created, the GUI appears on the screen. To demonstrate, Program 15-2 shows an object-oriented version of our program that displays an empty window. When this program runs it displays the window previously shown in Figure 15-1.

Program 15-2

1  # This program displays an empty window.
2
3  import tkinter
4
5  class MyGUI:
6      def __init__(self):
7          # Create the main window widget.
8          self.main_window = tkinter.Tk()
9
10         # Enter the tkinter main loop.
11         tkinter.mainloop()
12
13 # Create an instance of the MyGUI class.
14 my_gui = MyGUI()

Lines 5 through 11 are the class declaration for the MyGUI class. The class's __init__ method begins in line 6. Line 8 creates the root widget and assigns it to the class field main_window. Line 11 executes the tkinter module's mainloop function. The statement in line 14 creates an instance of the MyGUI class. This causes the class's __init__ method to execute, displaying the empty window on the screen.

You can use a Label widget to display a single line of text in a window. To make a Label widget you create an instance of the tkinter module's Label class. Program 15-3 creates a window containing a Label widget that displays the text "Hello World!" The window is shown in Figure 15-2.

Program 15-3 (hello_world.py)

1  # This program displays a label with text.
2
3  import tkinter
4
5  class MyGUI:
6      def __init__(self):
7          # Create the main window widget.
8          self.main_window = tkinter.Tk()
9
10         # Create a Label widget containing the
11         # text 'Hello World!'
12         self.label = tkinter.Label(self.main_window, \
13             text='Hello World!')
14
15         # Call the Label widget's pack method.
16         self.label.pack()
17
18         # Enter the tkinter main loop.
19         tkinter.mainloop()
20
21 # Create an instance of the MyGUI class.
22 my_gui = MyGUI()

The MyGUI class in this program is very similar to the one you saw previously in Program 15-2. Its __init__ method builds the GUI when an instance of the class is created. Line 8 creates a root widget and assigns it to self.main_window.
The following statement appears in lines 12 and 13:

self.label = tkinter.Label(self.main_window, \
    text='Hello World!')

First, let's explain the \ character at the end of the first line. This is the line continuation character. In Python, when you want to break a long statement into multiple lines, you type the backslash key (\) at the point where you want to break the statement, and then press the Enter key.

Now let's look at what the statement does. This statement creates a Label widget and assigns it to self.label. The first argument inside the parentheses is self.main_window, which is a reference to the root widget. This simply specifies that we want the Label widget to belong to the root widget. The second argument is text='Hello World!'. This specifies the text that we want displayed in the label.

The statement in line 16 calls the Label widget's pack method. The pack method arranges a widget in its proper position, and it makes the widget visible when the main window is displayed. (You call the pack method for each widget in a window.) Line 19 calls the tkinter module's mainloop method, which displays the program's main window, shown in Figure 15-2.

Let's look at another example. Program 15-4 displays a window with two Label widgets, shown in Figure 15-3. Notice that the two Label widgets are displayed with one stacked on top of the other. We can change this layout by specifying an argument to the pack method, as shown in Program 15-5. When the program runs it displays the window shown in Figure 15-4.

In lines 18 and 19 we call each Label widget's pack method passing the argument side='left'. This specifies that the widget should be positioned as far left as possible inside the parent widget. Because the label1 widget was added to the main_window first, it will appear at the leftmost edge. The label2 widget was added next, so it appears next to the label1 widget. As a result, the labels appear side by side. The valid side arguments that you can pass to the pack method are side='top', side='bottom', side='left', and side='right'.

Organizing Widgets with Frames

A frame is a container. It is a widget that can hold other widgets. Frames are useful for organizing and arranging groups of widgets in a window. For example, you can place a set of widgets in one frame and arrange them in a particular way, then place a set of widgets in another frame and arrange them in a different way. Program 15-6 demonstrates this. When the program runs it displays the window shown in Figure 15-5.

Program 15-6

1  # This program creates labels in two different frames.
2
3  import tkinter
4
5  class MyGUI:
6      def __init__(self):
7          # Create the main window widget.
8          self.main_window = tkinter.Tk()
9
10         # Create two frames, one for the top of the
11         # window, and one for the bottom.
12         self.top_frame = tkinter.Frame(self.main_window)
13         self.bottom_frame = tkinter.Frame(self.main_window)
14
15         # Create three Label widgets for the
16         # top frame.
17         self.label1 = tkinter.Label(self.top_frame, \
18             text='Winken')
19         self.label2 = tkinter.Label(self.top_frame, \
20             text='Blinken')
21         self.label3 = tkinter.Label(self.top_frame, \
22             text='Nod')
23
24         # Pack the labels that are in the top frame.
25         # Use the side='top' argument to stack them
26         # one on top of the other.
27         self.label1.pack(side='top')
28         self.label2.pack(side='top')
29         self.label3.pack(side='top')
30
31         # Create three Label widgets for the
32         # bottom frame.
33         self.label4 = tkinter.Label(self.bottom_frame, \
34             text='Winken')
35         self.label5 = tkinter.Label(self.bottom_frame, \
36             text='Blinken')
37         self.label6 = tkinter.Label(self.bottom_frame, \
38             text='Nod')
39
40         # Pack the labels that are in the bottom frame.
41         # Use the side='left' argument to arrange them
42         # horizontally from the left of the frame.
43         self.label4.pack(side='left')
44         self.label5.pack(side='left')
45         self.label6.pack(side='left')
46
47         # Yes, we have to pack the frames too!
48         self.top_frame.pack()
49         self.bottom_frame.pack()
50
51         # Enter the tkinter main loop.
52         tkinter.mainloop()
53
54 # Create an instance of the MyGUI class.
55 my_gui = MyGUI()

The statements in lines 12 and 13,

self.top_frame = tkinter.Frame(self.main_window)
self.bottom_frame = tkinter.Frame(self.main_window)

create two Frame objects. The self.main_window argument that appears inside the parentheses causes the Frames to be added to the main_window widget. Lines 17 through 22 create three Label widgets. Notice that these widgets are added to the self.top_frame widget. Then, lines 27 through 29 call each of the Label widgets' pack method, passing side='top' as an argument.
As shown in Figure 15-6, this causes the three widgets to be stacked one on top of the other inside the Frame. Lines 33 through 38 create three more Label widgets. These Label widgets are added to the self.bottom_frame widget. Then, lines 43 through 45 call each of the Label widgets' pack method, passing side='left' as an argument. As shown in Figure 15-6, this causes the three widgets to appear horizontally inside the Frame.

Lines 48 and 49 call the Frame widgets' pack method, which makes the Frame widgets visible. Line 52 executes the tkinter module's mainloop function.

A Button is a widget that the user can click to cause an action to take place. When you create a Button widget you can specify the text that is to appear on the face of the button, and the name of a callback function. A callback function is a function or method that executes when the user clicks the button.

Note: A callback function is also known as an event handler because it handles the event that occurs when the user clicks the button.

To demonstrate, we will look at Program 15-7. This program displays the window shown in Figure 15-7. When the user clicks the button, the program displays a separate info dialog box, shown in Figure 15-8. We use a function named tkinter.messagebox.showinfo to display the info dialog box. This is the general format of the showinfo function call:

tkinter.messagebox.showinfo(title, message)

In the general format, title is a string that is displayed in the dialog box's title bar, and message is an informational string that is displayed in the main part of the dialog box.

Program 15-7

1  # This program demonstrates a Button widget.
2  # When the user clicks the Button, an
3  # info dialog box is displayed.
4
5  import tkinter
6  import tkinter.messagebox
7  class MyGUI:
8      def __init__(self):
9          # Create the main window widget.
10         self.main_window = tkinter.Tk()
11
12         # Create a Button widget. The text 'Click Me!'
13         # should appear on the face of the Button. The
14         # do_something method should be executed when
15         # the user clicks the Button.
16         self.my_button = tkinter.Button(self.main_window, \
17             text='Click Me!', \
18             command=self.do_something)
19
20         # Pack the Button.
21         self.my_button.pack()
22
23         # Enter the Tkinter main loop.
24         tkinter.mainloop()
25
26     # The do_something method is a callback function
27     # for the Button widget.
28
29     def do_something(self):
30         # Display an info dialog box.
31         tkinter.messagebox.showinfo('Response', \
32             'Thanks for clicking the button.')
33
34 # Create an instance of the MyGUI class.
35 my_gui = MyGUI()

The statement in lines 16 through 18 creates the Button widget. The first argument inside the parentheses is self.main_window, which is the parent widget. The text='Click Me!' argument specifies that the string 'Click Me!' should appear on the face of the button. The command=self.do_something argument specifies the class's do_something method as the callback function. When the user clicks the button, the do_something method will execute.

The do_something method appears in lines 29 through 32. The method simply calls the tkinter.messagebox.showinfo function to display the info box shown in Figure 15-8. To dismiss the dialog box the user can click the OK button.

GUI programs usually have a Quit button (or an Exit button) that closes the program when the user clicks it. To create a Quit button in a Python program you simply create a Button widget that calls the root widget's destroy method as a callback function. Program 15-8 demonstrates how to do this. It is a modified version of Program 15-7, with a second Button widget added as shown in Figure 15-9.

29         # Enter the tkinter main loop.
30         tkinter.mainloop()
31
32     # The do_something method is a callback function
33     # for the Button widget.
34
35     def do_something(self):
36         # Display an info dialog box.
37         tkinter.messagebox.showinfo('Response', \
38             'Thanks for clicking the button.')
39
40 # Create an instance of the MyGUI class.
41 my_gui = MyGUI()

The statements in lines 22 and 23 create the Quit button. Notice that the self.main_window.destroy method is used as the callback function. When the user clicks the button, this method is called and the program ends.

An Entry widget is a rectangular area that the user can type text into. Entry widgets are used to gather input in a GUI program. Typically, a program will have one or more Entry widgets in a window, along with a button that the user clicks to submit the data that he or she has typed into the Entry widgets. The button's callback function retrieves data from the window's Entry widgets and processes it.

You use an Entry widget's get method to retrieve the data that the user has typed into the widget. The get method returns a string, so it will have to be converted to the appropriate data type if the Entry widget is used for numeric input.

To demonstrate we will look at a program that allows the user to enter a distance in kilometers into an Entry widget, and then click a button to see that distance converted to miles. The formula for converting kilometers to miles is:

Miles = Kilometers × 0.6214

Figure 15-10 shows the window that the program displays. To arrange the widgets in the positions shown in the figure, we will organize them in two frames, as shown in Figure 15-11. The label that displays the prompt and the Entry widget will be stored in the top_frame, and their pack methods will be called with the side='left' argument. This will cause them to appear horizontally in the frame. The Convert button and the Quit button will be stored in the bottom_frame, and their pack methods will also be called with the side='left' argument. Program 15-9 shows the code for the program. Figure 15-12 shows what happens when the user enters 1000 into the Entry widget and then clicks the Convert button.

23         # Pack the top frame's widgets.
24         self.prompt_label.pack(side='left')
25         self.kilo_entry.pack(side='left')
26
27         # Create the button widgets for the bottom frame.
28         self.calc_button = tkinter.Button(self.bottom_frame, \
29             text='Convert', \
30             command=self.convert)
31         self.quit_button = tkinter.Button(self.bottom_frame, \
32             text='Quit', \
33             command=self.main_window.destroy)
34         # Pack the buttons.
35         self.calc_button.pack(side='left')
36
37         self.quit_button.pack(side='left')
38
39         # Pack the frames.
40         self.top_frame.pack()
41         self.bottom_frame.pack()
42
43         # Enter the Tkinter main loop.
44         tkinter.mainloop()
45
46     # The convert method is a callback function for
47     # the Calculate button.
48
49     def convert(self):
50         # Get the value entered by the user into the
51         # kilo_entry widget.
52         kilo = float(self.kilo_entry.get())
53
54         # Convert kilometers to miles.
55         miles = kilo * 0.6214
56
57         # Display the results in an info dialog box.
58         tkinter.messagebox.showinfo('Results', \
59             str(kilo) + ' kilometers is equal to ' + \
60             str(miles) + ' miles.')
61
62 # Create an instance of the KiloConverterGUI class.
63 kilo_conv = KiloConverterGUI()

The convert method, shown in lines 49 through 60, is the Convert button's callback function. The statement in line 52 calls the kilo_entry widget's get method to retrieve the data that has been typed into the widget.
The value is converted to a float and then assigned to the kilo variable. The calculation in line 55 performs the conversion and assigns the results to the miles variable. Then, the statement in lines 58 through 60 displays the info dialog box with a message that gives the converted value.

Appendix A: Introduction to IDLE

IDLE is an integrated development environment that combines several development tools into one program, including the following:

- A Python shell running in interactive mode. You can type Python statements at the shell prompt and immediately execute them. You can also run complete Python programs.
- A text editor that color codes Python keywords and other parts of programs.
- A "check module" tool that checks a Python program for syntax errors without running the program.
- Search tools that allow you to find text in one or more files.
- Text formatting tools that help you maintain consistent indentation levels in a Python program.
- A debugger that allows you to single-step through each statement in a Python program and watch the values of variables as the statements execute.
- Several other advanced tools for developers.

The IDLE software is bundled with Python. When you install the Python interpreter, IDLE is automatically installed as well. This appendix provides a quick introduction to IDLE, and describes the basic steps of creating, saving, and executing a Python program.

After Python is installed on your system, a Python program group will appear in your Start menu's program list. One of the items in the program group will be titled IDLE (Python GUI). Click this item to start IDLE and you will see the Python Shell window shown in Figure A-1. Inside this window the Python interpreter is running in interactive mode, and at the top of the window is a menu bar that provides access to all of IDLE's tools.

Figure A-1 IDLE shell window

The >>> prompt indicates that the interpreter is waiting for you to type a Python statement. When you type a statement at the >>> prompt and press the Enter key, the statement is immediately executed. For example, Figure A-2 shows the Python Shell window after three statements have been entered and executed.

When you type the beginning of a multiline statement, such as an if statement or a loop, each subsequent line is automatically indented. Pressing the Enter key on an empty line indicates the end of the multiline statement and causes the interpreter to execute it. Figure A-3 shows the Python Shell window after a for loop has been entered and executed.

Figure A-3 A multiline statement executed by the Python interpreter

To write a new Python program in IDLE you open a new editing window. As shown in Figure A-4 you click File on the menu bar, then click New Window. (Alternatively you can press Ctrl+N.) This opens a text editing window like the one shown in Figure A-5. To open a program that already exists, click File on the menu bar, then Open. Simply browse to the file's location and select it, and it will be opened in an editor window.

Figure A-5 A text editing window

Color Coding

Code that is typed into the editor window, as well as in the Python Shell window, is colorized. Figure A-6 shows an example of the editing window containing colorized Python code.

Tip: You can change IDLE's color settings by clicking Options on the menu bar, then clicking Configure IDLE. Select the Highlighting tab at the top of the dialog box and you can specify colors for each element of a Python program.
Automatic Indentation

The IDLE editor has features that help you to maintain consistent indentation in your Python programs. Perhaps the most helpful of these features is automatic indentation. When you type a line that ends with a colon, such as an if clause, the first line of a loop, or a function header, and then press the Enter key, the editor automatically indents the lines that are entered next. For example, suppose you are typing the code shown in Figure A-7. After you press the Enter key at the end of the first marked line, the editor will automatically indent the lines that you type next. Then, after you press the Enter key at the end of the second marked line, the editor indents again. Pressing the Backspace key at the beginning of an indented line cancels one level of indentation.

Figure A-7 Lines that cause automatic indentation

By default, IDLE indents four spaces for each level of indentation. It is possible to change the number of spaces by clicking Options on the menu bar, then clicking Configure IDLE. Make sure Fonts/Tabs is selected at the top of the dialog box, and you will see a slider bar that allows you to change the number of spaces used for indentation width. However, because four spaces is the standard width for indentation in Python, it is recommended that you keep this setting.

Saving a Program

In the editor window you can save the current program by performing any of these operations from the File menu:

- Save
- Save As
- Save Copy As

The Save and Save As operations work just like they do in any Windows application. The Save Copy As operation works like Save As, but it leaves the original program in the editor window.

Running a Program

Once you have typed a program into the editor, you can run it by pressing the F5 key, or as shown in Figure A-8, by clicking Run on the editor window's menu bar, then Run Module. If the program has not been saved since the last modification was made, you will see the dialog box shown in Figure A-9. Click OK to save the program. When the program runs you will see its output displayed in IDLE's Python Shell window, as shown in Figure A-10.

Figure A-8 The editor window's Run menu
Figure A-10 Output displayed in the Python Shell window

If a program contains a syntax error, when you run the program you will see the dialog box shown in Figure A-11. After you click the OK button the editor will highlight the location of the error in the code. If you want to check the syntax of a program without trying to run it you can click Run on the menu bar, then Check Module. Any syntax errors that are found will be reported.

Other Resources

This appendix has provided an overview for using IDLE to create, save, and execute programs. IDLE provides many more advanced features. To read about additional capabilities, see the official IDLE documentation.
https://www.scribd.com/document/395703225/Python-Language-Companion-1-pdf
CC-MAIN-2019-35
en
refinedweb
PHP localization file for use in Laravel/F3/Kohana PHP framework. Example <?php return array( "boolean_key" => "--- true ", "empty_string_translation" => "", "key_with_description" => "Check it out! This key has a description! (At least in some formats)", "key_with_line-break" => "This translations contains a line-break.", "nested" => array( "deeply" => array( "key" => "Wow, this key is nested even deeper.", ), "key" => "This key is nested inside a namespace.", ), "null_translation" => "", "pluralized_key" => "You have no pluralization.|Only one pluralization found.|Wow, you have %s pluralizations!", "sample_collection" => "--- - first item - second item - third item ", "simple_key" => "Just a simple key with a simple message.", "unverified_key" => "This translation is not yet verified and waits for it. (In some formats we also export this status)", );
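A quick sketch of how such a file is consumed on the Laravel side (assumptions: the array above is saved as resources/lang/en/messages.php inside a Laravel application; trans() and trans_choice() are Laravel's standard translation helpers):

<?php
// Plain keys and nested keys are addressed with dot notation:
echo trans('messages.simple_key');         // "Just a simple key with a simple message."
echo trans('messages.nested.deeply.key');  // "Wow, this key is nested even deeper."

// trans_choice() picks one of the pipe-separated variants according
// to the count and the locale's plural rules:
echo trans_choice('messages.pluralized_key', 5);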
https://phrase.com/docs/guides/formats/php-laravel/
CC-MAIN-2019-35
en
refinedweb
53007/delimiter-on-the-data

I have a file with records as below.

s.no,name,Country
101,Raju,India,IN
102,Reddy,UnitedStates,US

Here my country column has data such as "India,IN", which is a single value that itself contains a comma. Can you let me know how to handle this data when we read the file using a comma delimiter in Spark Scala? I tried split(",") which did not give me the expected output.

For example, the expected output for the first record:

S.no: 101
name: Raju
Country: India,IN

You can use this:

import org.apache.spark.sql.functions.struct

val df = Seq((1,2), (3,4), (5,3)).toDF("a", "b")
val df2 = df.withColumn("NewColumn", struct(df("a"), df("b")))

df2.show()
+---+---+---------+
|a  |b  |NewColumn|
+---+---+---------+
|1  |2  |[1,2]    |
|3  |4  |[3,4]    |
|5  |3  |[5,3]    |
+---+---+---------+

val data = df2.drop("a").drop("b")
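A more direct way to keep "India,IN" intact (a sketch under assumptions: a spark-shell session where spark is the SparkSession, the file is named records.csv, the header row starts with "s.no", and every record has exactly three logical fields) is to split each line a limited number of times, so the trailing field keeps its embedded comma:

// Split into at most 3 tokens; everything after the second comma
// stays together, so country comes back as "India,IN".
val raw = spark.sparkContext.textFile("records.csv")
val parsed = raw
  .filter(line => !line.startsWith("s.no"))   // drop the header row
  .map { line =>
    val Array(sno, name, country) = line.split(",", 3)
    (sno, name, country)
  }

parsed.collect().foreach(println)   // (101,Raju,India,IN) ...

The limit argument to split is what does the work here; a plain split(",") would produce four tokens for these rows.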
https://www.edureka.co/community/53007/delimiter-on-the-data
CC-MAIN-2019-35
en
refinedweb
For my school project 'if this, then that' I had to make something interactive with the use of an Arduino. I chose to create this automatic dice tower. In this project I'll show you how I put it together. The tower itself is simple, it's just a wooden case with an extra tray to catch the dice. On top of the tower are 6 different cases to put the dice in, all closed by a hatch controlled by a servo motor. The six buttons on the tower each control a different hatch. The tray has LEDs underneath it that light up different colours when the buttons are pressed.

Step 1: Things You'll Need

The things I used for this project:
- Arduino Uno
- WS2812B LED strip
- SG90 analog servo
- 6 tactile LED buttons, white
- Four battery clips
- Four 9 volt batteries
- Any set of clear polydice. I used Chessex Borealis purple/white
- Printed wiring board
- Mahogany planks
- Plexiglass; I used one that isn't entirely see-through
- Any clear ballpoint pen
- Any kind of thin MDF wood
- A phone charger

Of course you can switch a lot of the materials to something else of your liking; these are just the specifics for what I used.

Step 2: The Case

Creating the case is a fairly straightforward process.

- For the sides and the back: cut 3 planks of your desired wood, 320 cm tall and 150 wide. To fit it together nicely in the end I cut the sides diagonally.
- For the front: you'll have to create an entrance of your desired height. Keep in mind that you have to fit in the buttons, servos and the Arduino itself, so make sure you don't run out of space.
- For the tray: cut the side pieces 160 cm wide and cut the sides diagonally like the tower. This way you can easily slot it into the tower without necessarily having to attach it. Cut out two slits on each piece: one in the middle for the plexiglass, and one on the bottom so you can slot in the piece of wood with your LED strip.
- For the buttons: drill 6 holes just above the entrance. Make sure they're spaced like you'll space your buttons, because the buttons will be put behind them on the inside.
- For your charger: this is something I completely forgot to do, but I can absolutely recommend you do it. Take out a little bit of wood on the bottom of the back piece; you can feed your LED charger wire through this so your tower doesn't rest on it and wobble.
- Putting it together: lay your pieces in the right order with the right side up. Apply tape diagonally along the seams and a few horizontal pieces for support. Then turn it around and apply generous amounts of glue. Fold them together and apply enough clamps to keep it all together. Dry overnight.

Step 3: The Buttons

For this project I wanted the buttons to resemble the dice that were going to fall out when pressed. Because the buttons light up I thought it would be cool for the dice to be translucent.
I considered a few options here:

3D print them:
Pros:
- Fairly cheap
- If you have knowledge of 3D programs it would be fairly easy to make
Cons:
- If you don't have knowledge of 3D programs, it's not that easy to model a set of dice
- 3D printing takes a while
- 3D printers are not accessible for everyone
- The result is often fairly rough

Resin print them:
Pros:
- Clear and pretty result
- If you have knowledge of 3D programs it would be fairly easy to make
Cons:
- Resin printing takes a while
- Is even less accessible than 3D printing

Buy a set of dice, get the chainsaw:
Pros:
- You have to model nothing yourself
- Great range of colours, glitters, finishes
- Quick and easily obtainable
Cons:
- Fairly expensive
- Cutting them in half is quite the project
- Chance of breaking whilst cutting
- If you're a dice fanatic like me, it hurts a little to buy a set only to destroy it

In the end I went for buying a set of dice because I did not have that much time left and I did like the thought of having a pretty colour.

Cutting them: the nasty part about cutting these in half is that they conduct heat... extremeeely well. So if you go too quick you get a die melted to your saw, which is incredibly inconvenient to get off. What I did was take two pieces of scrap wood, drill a partial hole in them and wedge the dice in between. Then I took a handsaw and just slowly and steadily started to halve them. Once I got through, I took a fairly rough sandpaper block and sanded the underside evenly. This worked for all dice except the d4. It's fairly thin and small and I just didn't do it carefully enough; it ended up melting/breaking away. I replaced it with the d100 for aesthetic reasons, but I can recommend just putting it on the button as a whole.

Assembling the button: because the actual buttons are on the inside of the tower, you'll need something that reaches through the drilled holes and protrudes a little on the outside so you can press it. Because light needs to travel through, it needs to be translucent as well. To do this I simply took the barrel of a ballpoint pen, cut it in six equal pieces long enough to go through the holes and glued the halved dice on the other side. I didn't permanently attach mine to the buttons on the inside, because this way I can take them out when travelling with the tower. (I used the other half of the dice to put next to the hatches of the tower so you know in which one to put the corresponding dice.)

Step 4: The Button/LED/Battery Printplate

Get ready for an unholy amount of wires.

The printplate: I completely forgot to take a clear picture of the printplate with all wires soldered to it, so these two have to do. Cut two pieces of printplate, drill 4 holes at the corners of each and attach your buttons. Put appropriate resistors for both the button and the LED on it (treat the LED on it like you'd treat a separate normal one). Then solder a wire for each button and LED. Attach all the volt wires of the buttons and LEDs together, and add a wire that leads back to the 5V input on the Arduino. Attach all the ground wires of the buttons and LEDs together, then add a wire that leads back to the ground on the Arduino. This is also where I attached my battery clips and male connector to plug into the Arduino.

Step 5: The Servos

To make the little dice compartments:
- Cut out two different pieces of wood that fit perfectly in the width of the tower.
- Glue 3 servos on each, and make sure they can be put opposite each other later (see the first and second picture). Drill a hole where the turning part of the servo fits in.
- Put little dividers between them so you have the dice compartments.
- Cut a piece of wood for the middle, then put each piece of wood on a side.
- Cut 6 hatches (make sure they're not too heavy, the servos aren't that strong). Drill a tiny, tiny hole in the side and put an equally small nail in. Glue that into the moving part of the servo (be sure to use strong glue and let it dry appropriately).
- Tada, you've got your compartments.

Wiring: now put all the ground wires of your servos together, solder them and lead one wire from there into a ground pin on the Arduino. Then put all the power wires of your servos together, solder them and lead one wire from there to the printplate with the buttons and attach it there, so they can take energy from the batteries. Attach all the signal pins into your Arduino.

Step 6: The LEDs

I used 15 LEDs for this project. I divided them into 3 pieces and glued them to a piece of MDF, then slotted them into the dice tray. I attached this to a wire with a USB end so I can just use a phone charger, because these LED strips take a lot of energy to run. You'll need two ground wires for this one: one goes into the charger and the other goes to the Arduino. Also lead the signal wire to the Arduino.

Step 7: Putting It All Together

This will sound so much easier than it actually is. Because I wanted a fairly sleek tower, I absolutely sacrificed a bit of comfort. Putting it in is a lot of fumbling and cursing, and I absolutely recommend a second pair of hands for this. If you want this all to be a bit easier, size up your tower.
Step 8: The Coding #include <servo.h> //include servo library<br>#include <fastled.h> //include fastled library</fastled.h></servo.h> #define LED_PIN 19 //define the LEDSTRIP pins #define NUM_LEDS 24 //define the amount of leds you use, your first one counts as 0 CRGB leds[NUM_LEDS]; const int ledPin1 = 2; //define which pins you'll use for the button LEDS const int ledPin2 = 3; const int ledPin3 = 4; const int ledPin4 = 5; const int ledPin5 = 6; const int ledPin6 = 7; const int buttonPin1 = A0; //define which pins you'll use for your buttons const int buttonPin2 = A1; const int buttonPin3 = A2; const int buttonPin4 = A3; const int buttonPin5 = A4; const int buttonPin6 = A5; int servoPin1 = 8; //define which pins you'll use for your Servo's int servoPin2 = 9; int servoPin3 = 10; int servoPin4 = 11; int servoPin5 = 12; int servoPin6 = 13; int buttonState1 = 0; //create a variable for your buttonstate int buttonState2 = 0; int buttonState3 = 0; int buttonState4 = 0; int buttonState5 = 0; int buttonState6 = 0; Servo Servo1; Servo Servo2; Servo Servo3; Servo Servo4; Servo Servo5; Servo Servo6; void setup() { Serial.begin(9600); FastLED.addLeds<ws2812, led_pin,="" grb="">(leds, NUM_LEDS); </ws2812,> pinMode(ledPin1, OUTPUT); //set led pins as output pinMode(ledPin2, OUTPUT); pinMode(ledPin3, OUTPUT); pinMode(ledPin4, OUTPUT); pinMode(ledPin5, OUTPUT); pinMode(ledPin6, OUTPUT); Servo1.attach(8); //attach your servos Servo2.attach(9); Servo3.attach(10); Servo4.attach(11); Servo5.attach(12); Servo6.attach(13); } void loop() //Servo 1 { Servo1.write(0); //automatically puts the servo to 0 at the begin buttonState1 = digitalRead(buttonPin1); //the state of the button gets saved as buttonstate1 if (buttonState1 == HIGH) { //if you press the button digitalWrite(ledPin1, HIGH); //turns led on Servo1.write(90); //servo turns 90 degrees for(int i=0;i<6;i++){ //leds 0-5 turn your defined colour leds[i].setRGB(131, 7, 247); } for(int i=5;i<11;i++){ //leds 6-10 turn your defined colour leds[i].setRGB(255, 13, 93); } for(int i=11;i<16;i++){ //leds 11-15 turn your defined colour leds[i].setRGB(253, 185, 155); } FastLED.show(); Serial.println("Servo1 ON"); //prints that the servo is on delay(5000); //waits 5 seconds Servo1.write(0); //servo turns back to 0 degrees } else { digitalWrite(ledPin1, LOW); Serial.println("Servo1 OFF"); } //repeat x6 //Servo 2 Servo2.write(0); buttonState2 = digitalRead(buttonPin2); if (buttonState2 == HIGH) { digitalWrite(ledPin2, HIGH); Servo2.write(90); for(int i=0;i<6;i++){ leds[i].setRGB(131, 7, 247); } for(int i=5;i<11;i++){ leds[i].setRGB(255, 255, 93); } for(int i=11;i<16;i++){ leds[i].setRGB(253, 185, 155); } FastLED.show(); Serial.println("Servo2 ON"); delay(5000); Servo2.write(0); } else { digitalWrite(ledPin2, LOW); Serial.println("Servo2 OFF"); } //Servo 3 Servo3.write(0); buttonState3 = digitalRead(buttonPin3); if (buttonState3 == HIGH) { digitalWrite(ledPin3, HIGH); Servo3.write(90); Serial.println("Servo3 ON"); delay(5000); Servo3.write(0); } else { digitalWrite(ledPin3, LOW); Serial.println("Servo3 OFF"); } //Servo 4 Servo4.write (0); buttonState4 = digitalRead(buttonPin4); if (buttonState4 == HIGH) { digitalWrite(ledPin4, HIGH); Servo4.write(90); for(int i=0;i<6;i++){ leds[i].setRGB(131, 7, 247); } for(int i=5;i<11;i++){ leds[i].setRGB(255, 13, 93); } for(int i=11;i<16;i++){ leds[i].setRGB(253, 185, 155); } FastLED.show(); Serial.println("Servo4 ON"); delay(5000); Servo4.write(0); } else { digitalWrite(ledPin4, LOW); Serial.println("Servo4 OFF"); } //Servo 5 
  Servo5.write(0);
  buttonState5 = digitalRead(buttonPin5);
  if (buttonState5 == HIGH) {
    digitalWrite(ledPin5, HIGH);
    Servo5.write(90);
    Serial.println("Servo5 ON");
    delay(5000);
    Servo5.write(0);
  }
  else {
    digitalWrite(ledPin5, LOW);
    Serial.println("Servo5 OFF");
  }

  // Servo 6
  Servo6.write(0);
  buttonState6 = digitalRead(buttonPin6);
  if (buttonState6 == HIGH) {
    digitalWrite(ledPin6, HIGH);
    Servo6.write(90);
    Serial.println("Servo6 ON");
    delay(5000);
    Servo6.write(0);
  }
  else {
    digitalWrite(ledPin6, LOW);
    Serial.println("Servo6 OFF");
  }
}

2 Discussions

2 months ago: this is awesome, I am actually looking for a way to make something like this, but where it is possible to load multiple dice of each kind and drop one at a time when you push the corresponding button.

7 months ago: Nice. I like how the dice are also the buttons.
https://www.instructables.com/id/Arduino-Automatic-Dice-Tower/
CC-MAIN-2019-35
en
refinedweb
A recent set of requirements I've been playing with deals with passwords. This one specifically handles password expiration. Given that I'm working with ASP.NET MVC, I know I can rest assured that there's some great (read awesome) way of implementing a given requirement. This is exactly what happened and I want to show you how to have a clean and beautiful solution to this problem.

So my client's requirement is the following: Passwords should expire in 45 days.

I'm currently using the default ASP.NET membership provider. It gives you a database schema ready to manage users and roles. You just have to use the ASP.NET Configuration Tool to create Roles and Users, decorate your Controllers/Actions with the Authorize attribute and you're good to go most of the time. The default membership provider allows a fast project start, no doubt, but as always there's something that must be done according to the infinitude of possible requirements that change project by project. One of these not contemplated things is a setting in the default provider for handling user password expiration. We have to roll our own code to manage this.

My friend Google told me that some folks have already done some work related to this and as always I borrow some of their code and adapt it to my specific case/technology.

First I created a PasswordExpiredAttribute that derives from/extends the AuthorizeAttribute. Here's its code:

public class PasswordExpiredAttribute : AuthorizeAttribute
{
    private static readonly int PasswordExpiresInDays =
        int.Parse(ConfigurationManager.AppSettings["PasswordExpiresInDays"]);

    public override void OnAuthorization(AuthorizationContext filterContext)
    {
        if (filterContext.HttpContext.User.Identity.IsAuthenticated)
        {
            MembershipUser user = Membership.GetUser();

            TimeSpan ts = DateTime.Today - user.LastPasswordChangedDate;

            if (ts.TotalDays > PasswordExpiresInDays)
            {
                filterContext.HttpContext.Response.Redirect(
                    string.Format("~/{0}/{1}?{2}",
                        MVC.Account.Name,
                        MVC.Account.ActionNames.ChangePassword,
                        "reason=expired"));
            }
        }

        base.OnAuthorization(filterContext);
    }
}

As you see, the code goes inside the OnAuthorization overridable method. I get the PasswordExpiresInDays setting from the Web.config <appSettings> section. This gives an easy way to change the requirement in the future without the need of recompiling the whole app.

<appSettings>
    <add key="PasswordExpiresInDays" value="45" />
</appSettings>

The code explains itself but let's go through it:

1 - If the User is authenticated, let's get his membership data;
2 - A TimeSpan is useful for getting the difference in days between Today and the last time the user changed his password (LastPasswordChangedDate);
3 - Check if the TimeSpan.TotalDays is greater than the PasswordExpiresInDays setting we got from the Web.config file. If true, the user must change his password and we redirect him to the ChangePassword view.

Note 1: I'm using T4MVC to retrieve the Controller and Action names in the code above. You should take a look at it! Really…

Note 2: See that "reason=expired" in the response redirect URL? I'm using this querystring as a route parameter inside the ChangePassword action method to display a message to the user informing him that he's being asked to change the password because it has expired.

/// <summary>
/// This allows the logged on user to change his password.
/// </summary>
public virtual ActionResult ChangePassword(string reason)
{
    var viewModel = new ChangePasswordViewModel();

    if (reason != null)
    {
        ShowMessage(Infrastructure.Notification.MessageType.Warning,
                    Localization.PasswordExpired, true);
    }

    return View(viewModel);
}

By the way: I use the MvcNotification infrastructure by Martijn Boland to display beautiful messages to the user.
OK, getting back to the main point… now it's just a matter of applying the PasswordExpiredAttribute filter to every controller of the app but the AccountController. With ASP.NET MVC 3 it's easy to apply a filter to every controller and action using GlobalFilters. Instead of going controller by controller to add this attribute, we can just register it as a global filter in the Global.asax file:

public static void RegisterGlobalFilters(GlobalFilterCollection filters)
{
    filters.Add(new HandleErrorAttribute());
    filters.Add(new AuthorizeAttribute());
    filters.Add(new PasswordExpiredAttribute());
}

Doing so, the PasswordExpiredAttribute will be executed for every controller and action, but there's a problem with the above approach. Since it's a global filter, it'll be executed even for the AccountController. Remember: we don't want it to be executed for the AccountController… How can we exclude a global filter from a single controller or action?

To achieve this, there's an awesome thing we can do: create an ExcludeFilterAttribute and an ExcludeFilterProvider. WOW, ASP.NET MVC has a Filter Provider that gives us even more power when working with filters. Look here for the complete story: Exclude a Filter by Ori Calvo. I've uploaded the source code files here: ExcludeFilterAttribute.cs and ExcludeFilterProvider.cs

Now it's just a matter of decorating the AccountController with the ExcludeFilter attribute like this:

[ExcludeFilter(typeof(PasswordExpiredAttribute))]
public partial class AccountController : BaseController
{
    ...
}

The ExcludeFilter attribute explicitly tells the ASP.NET MVC runtime to ignore the PasswordExpiredAttribute for the AccountController. With this in place, once the logged in user tries to access any part of the site and his password is expired, he'll be redirected to the ChangePassword view and won't be allowed access to anywhere else in the site until he changes the password. This is great and the requirement is implemented. Of course in software there are multiple ways of doing the same thing. If you know of any better option, please share your knowledge in the comments.

Hope it helps.

Bonus

While working on this requirement I posted a question at StackOverflow regarding the use of Web.config settings as magic strings. I've found a nice way to make the code a little bit cleaner. So, if you want a nice way to access your Web.config app settings as properties with compile time checking and nice error handling, you can do as described here: T4MVC for Web.config <appSettings>

This is a much better/cleaner code IMO (see the AppSettings class that was automatically generated with the T4 template):

public class PasswordExpiredAttribute : AuthorizeAttribute
{
    public override void OnAuthorization(AuthorizationContext filterContext)
    {
        if (filterContext.HttpContext.User.Identity.IsAuthenticated)
        {
            MembershipUser user = Membership.GetUser();

            TimeSpan ts = DateTime.Today - user.LastPasswordChangedDate;

            if (ts.TotalDays > int.Parse(AppSettings.PasswordExpiresInDays))
            {
                filterContext.HttpContext.Response.Redirect(
                    string.Format("~/{0}/{1}?{2}",
                        MVC.SGAccount.Name,
                        MVC.SGAccount.ActionNames.ChangePassword,
                        "reason=expired"));
            }
        }

        base.OnAuthorization(filterContext);
    }
}

References:
- ASP.NET MVC Authentication - Global Authentication and Allow Anonymous by Jon Galloway
- ASP.NET MVC Authentication - Customizing Authentication and Authorization The Right Way by Jon Galloway
- Exclude a Filter by Ori Calvo
- Introducing System.Web.Providers - ASP.NET Universal Providers for Session, Membership, Roles and User Profile on SQL Compact and SQL Azure by Scott Hanselman
- Conditional Filters in ASP.NET MVC 3 by Phil Haack
http://www.leniel.net/2012/05/user-password-expired-filter-aspnet-mvc.html
CC-MAIN-2017-43
en
refinedweb
Ok so I'm trying to create a constructor that reads input from a text file, which I got it to do, and then adds the information to a LinkedList.

public class CourseCatalog<T> {
    private BufferedReader read;
    private LinkedList<Course> catalog;

    public CourseCatalog(String filename) throws FileNotFoundException {
        catalog = new LinkedList<Course>();
        try {
            // Construct the BufferedReader object
            this.read = new BufferedReader(new FileReader(filename));
            String line = null;
            while ((line = read.readLine()) != null) {
                if (line.contains("Course")) {
                    // Process the data
                    System.out.println(line);
                }
            }
        } catch (FileNotFoundException ex) {
            ex.printStackTrace();
        } catch (IOException ex) {
            ex.printStackTrace();
        } finally {
            // Close the BufferedReader
            try {
                if (read != null)
                    read.close();
            } catch (IOException ex) {
                ex.printStackTrace();
            }
        }
    }

The text file contains a bunch of different courses, like this:

Course: MAT 1214
Title: Calculus I
Prerequisite: none

Course: MAT 1224
Title: Calculus II
Prerequisite: MAT 1214

So my code right now just reads the .txt file and outputs what's in it. Anybody have any suggestions or could point me in the right direction of how I would go about adding that information to a LinkedList<Course>? The constructor of my Course class looks like this:

public Course(String dep, int num, String title) {
    department = dep;
    coursenumber = num;
    coursetitle = title;
    prerequisites = new LinkedList<String>();
    subsequents = new LinkedList<String>();
}
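One possible direction (a sketch only; it assumes the Course class offers some way to record a prerequisite, here a hypothetical addPrerequisite method): accumulate the fields of each three-line record while scanning, and build a Course when the record is complete.

String line;
String dep = null, title = null;
int num = 0;
while ((line = read.readLine()) != null) {
    line = line.trim();
    if (line.startsWith("Course:")) {
        // "MAT 1214" -> department "MAT", number 1214
        String[] parts = line.substring(7).trim().split("\\s+");
        dep = parts[0];
        num = Integer.parseInt(parts[1]);
    } else if (line.startsWith("Title:")) {
        title = line.substring(6).trim();
    } else if (line.startsWith("Prerequisite:")) {
        // The prerequisite line closes a record, so build the Course here.
        Course course = new Course(dep, num, title);
        String prereq = line.substring(13).trim();
        if (!prereq.equalsIgnoreCase("none")) {
            course.addPrerequisite(prereq);   // hypothetical helper
        }
        catalog.add(course);
    }
}

This would replace the System.out.println call in the loop above; blank lines between records fall through all three branches and are ignored.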
https://www.daniweb.com/programming/software-development/threads/323163/how-do-you-add-string-input-from-a-text-to-a-linked-list
CC-MAIN-2017-43
en
refinedweb
Friday, July 13, 2007

Always an apprentice?

With the rate at which new software languages and technologies are appearing, it's a tough job for a software engineer to keep up with the latest developments and trends as well as keeping existing skills sharp. Just as you feel you master one language or technology, a new one comes along and your apprenticeship starts over in this new field. One of the best ways that I have found to both learn new languages/skills and to keep existing skills sharp is to practice each skill as much as possible. For programming languages this means regularly writing pieces of code in each language in the toolbox. I try to use each language that I know at least once a month for languages that I have become proficient in and at least once a week for languages that I am learning. Dave Thomas describes this concept in greater detail in his CodeKata series. I find the use of programming puzzles and challenges an excellent way to find concise coding exercises that don't soak up too much time but yet provide opportunities to hone and develop skills, as well as often providing opportunities to learn about new libraries, packages, algorithms etc. Some of the best coding problem / challenge sites that I use are:

Posted by Stephen Doyle at 5:09 PM No comments:

Simple container dump using STL iterator

Quick and dirty printing of container contents in C++ using the STL ostream_iterator...

#include <iostream>
#include <algorithm>
#include <vector>

using namespace std;

int main()
{
    vector<int> v(10);
    generate(v.begin(), v.end(), rand);
    copy(v.begin(), v.end(), ostream_iterator<int>(cout, "\n"));
    return 0;
}

Posted by Stephen Doyle at 4:37 PM No comments: Labels: C++, code snippet, STL

Friday, July 6, 2007

Pillars of Concurrency

Excellent article on decomposing and categorizing concurrency traits by Herb Sutter. The three "pillars" identified in the article:

- Responsiveness and Isolation Via Asynchronous Agents
- Throughput and Scalability Via Concurrent Collections
- Consistency Via Safely Shared Resources

An interesting reference from this article is to an earlier article from Herb illustrating why lock-based programming is hard and insufficient.

Posted by Stephen Doyle at 2:41 PM 1 comment: Labels: Concurrency, Multithreading
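To make the lock-based pitfall concrete, a classic illustration (my sketch, not taken from either article) is two threads acquiring the same two mutexes in opposite order, which can deadlock:

#include <pthread.h>

pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

// Thread 1 locks a then b ...
void* worker1(void*)
{
    pthread_mutex_lock(&a);
    pthread_mutex_lock(&b);
    /* ... critical section ... */
    pthread_mutex_unlock(&b);
    pthread_mutex_unlock(&a);
    return 0;
}

// ... while thread 2 locks b then a. If each grabs its first mutex
// before the other's second lock call, both block forever.
void* worker2(void*)
{
    pthread_mutex_lock(&b);
    pthread_mutex_lock(&a);
    /* ... critical section ... */
    pthread_mutex_unlock(&a);
    pthread_mutex_unlock(&b);
    return 0;
}

A consistent lock ordering avoids this particular trap, but the deeper point of the referenced article is that such conventions don't compose across module boundaries.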
http://stephendoyle.blogspot.com/2007_07_01_archive.html
CC-MAIN-2017-43
en
refinedweb
Error importing pygments.lexers.web on Python 2.5

{{{
$ python
Python 2.5 (r25:51908, May 15 2012, 16:19:31)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-52)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pygments.lexers.web
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'rawunicodeescape' codec can't decode bytes in position 12-22: \Uxxxxxxxx out of range
}}}

Hey, I had the same problem: the import of pygments.lexers.web did not work. I switched to Python version 2.5.4 and the bug is fixed there. I hope this helps.

Yes, this is an issue with one of my last commits. It affects all narrow builds of Python (sys.maxunicode == 65535ish). I'm working on a fix as well as making regexlint recognize this case. Tim

Any updates on this?

The fix is merged to pygments-main here. Georg, I think this can be closed.
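For anyone reproducing this, the narrow/wide build distinction Tim mentions can be checked directly from the interpreter (65535 on narrow UCS-2 builds, 1114111 on wide UCS-4 builds):

import sys

# Narrow builds cannot represent \Uxxxxxxxx escapes above U+FFFF as
# single code points, which is what trips the rawunicodeescape codec here.
print(sys.maxunicode)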
https://bitbucket.org/birkenfeld/pygments-main/issues/778/error-importing-pygmentslexersweb-on
CC-MAIN-2017-43
en
refinedweb
I can make the front-end part of Laravel Echo work with Pusher. In my app.js:

import Echo from "laravel-echo"

window.Echo = new Echo({
    broadcaster: 'pusher',
    key: 'MY_KEY', // I use my own key.
    cluster: 'eu',
    encrypted: true
});

I then rebuild the assets with gulp and listen for the event:

Echo.channel('survey').listen('survey', function(e) {
    console.log('test');
});

If you are sending events through Pusher's console, you need to set the full namespace of the event, for example App\Events\survey. Echo adds the namespace automatically for you. Have a look at the documentation under "namespaces". If you would send the event from Laravel, it would use the complete namespace.
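A related detail from Echo's documented behaviour that helps when testing from the Pusher debug console: a leading dot in the event name tells Echo to use the name verbatim instead of prepending App\Events\. So an event named App\Events\survey triggered from the console can also be matched like this:

Echo.channel('survey')
    .listen('.App\\Events\\survey', function (e) {
        // fires for the fully-qualified event name, no auto-namespacing
        console.log('test', e);
    });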
https://codedump.io/share/4Dw6dEiaEDZi/1/event-broadcating-laravel-echo-fails-receiving-broadcasts
CC-MAIN-2017-43
en
refinedweb
Hi all,

with the alpha version of the article coming up, I suggest that we make a full blown ChemBrowser (tm) that demos much of the functionality of the CDK library... I would like to see a Java application that can switch views (2D, 3D, orbital, table of atoms, spectra), do calculations (NMR), copy molecules from databases and the internet, and all other stuff that is in the libs right now...

Up till now, I've been releasing source code on SourceForge... I am going to change the way I do it a bit...

1. needed jar libs are going to be separated from CDK source code
   -> cdk-required-libs.tar.gz
   -> cdk-source-VERSION.tar.gz
2. binary dist will be added (also without extra jar libs)
   -> cdk-VERSION.tar.gz
3. versioning will keep using dates, like 20011015
4. demos/tests will be separated from the library
   -> cdk-tests-VERSION.tar.gz
   -> cdk-demo-VERSION.tar.gz
   -> cdk-chembrowser-VERSION.tar.gz

I suggest that org.openscience.cdk.test is just for JUnit (?) tests only, and that high-end tests (e.g. FileReaderTest.java) move to org.openscience.cdk.demo. Or something similar, like:

junit: org.openscience.cdk.test.junit
other tests: org.openscience.cdk.test

The ChemBrowser (tm) is another thing... since this is clearly an application, we should move it out of org.openscience.cdk... (one could even say this for the high-end tests/demos like FileReaderTest). I suggest the namespace org.openscience.chembrowser.

Ok, having said that... I hope to make these changes next week, so *flame* me... ;)

Egon

Egon Willighagen wrote:
> Hi all,
>
> with the alpha version of the article coming up, I suggest that we make a
> full blown ChemBrowser (tm) that demos much of the functionality of the CDK
> library...

Good point. I envision a tabbed pane, one panel for each demo.

> I suggest that org.openscience.cdk.test is just for JUnit (?) tests only,
> and that high-end tests (e.g. FileReaderTest.java) move to
> org.openscience.cdk.demo. Or something similar, like:

Yes - and in any case the authors of the non-JUnit tests might ask themselves if they cannot move to JUnit. In some cases that should be possible. Of course, there is a problem with the purely graphical tests :-)

Cheers,
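For context on the JUnit suggestion: a test in the JUnit of that era is just a junit.framework.TestCase subclass, roughly like this (class name and assertion purely illustrative, not actual CDK code):

import junit.framework.TestCase;

public class FileReaderTest extends TestCase {

    public FileReaderTest(String name) {
        super(name);
    }

    public void testRead() {
        // read a known file, then assert on what came back, e.g.:
        int atomCount = 5;   // stand-in for a real reader call
        assertEquals(5, atomCount);
    }
}

Non-graphical checks port over naturally; the GUI demos are the ones with no obvious assertion to make.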
https://sourceforge.net/p/cdk/mailman/message/4917019/
CC-MAIN-2017-43
en
refinedweb
Poser + Genesis DSON Importer module not found

edited December 1969 in Technical Help (nuts n bolts)

Hi! Not sure what I'm doing wrong, here... installed the Genesis to Poser importer package (Great idea, by the way!); go to my DAZ People folder, click on the Genesis entry, and get the following error message:

Traceback (most recent call last):
  File "J:\Runtime Backup 8-4-2011\Runtime\libraries\Character\DAZ People\Genesis.py", line 1, in <module>
    import dson.dzdsonimporter
ImportError: No module named dson.dzdsonimporter

I'm not sure what I did wrong. Can anyone help?

I have the same issue.

So do I: "Can`t open Script for Reading (PEPythonEngine;DoScript (j)". I use both Poser 6 and Poser 8.

I took a look at the Genesis.py and it is nothing more than an import. In Python, this is basically a procedural call. It is saying "Load this module and run it". However, the way it is written is not that flexible. It only states the module rather than the location of the module. The problem is: since at least Poser 7 (I think), there have been multiple configurations you can put Poser into. You can use a Shared Directory, C:\Program Files, and a few others that I do not recall off the top of my head. However, the installers are assuming you chose C:\Program Files. If you have it set up with a Shared Directory, this is not going to find those files. If Poser is looking at Shared Directories as Root, but the files were installed into C:\Program Files... the files, according to Poser, are missing. If I recall right, the Shared directory in Windows Vista/7 should be: C:\Users\Public\Shared Documents\Smith Micro\Poser 2012\Runtime\Python. I cannot test it on this system since I do not have Poser installed, but that is how in the past I fixed issues similar to this.

It clearly states in the info it is for Poser 9 / Poser 2012 only. I'm sorry, but that was clear to me on the About page.

mejed: while this is a parse error, it is not the same issue being experienced by Dakkar. It is trying to read a script that is written for the Poser 9 / Poser 2012 version of Poser Python. What is occurring with Dakkar and the other posters is that Poser cannot FIND the dsonimporter.py file. Two completely different parse issues. :)

I have Poser Pro 2012 and still have the issue. Does the About page say "Poser must be installed to default directory only"?

Add me to the list of people for whom it is not working... I have both PP2012 and Poser 9 installed and the same issue on both, despite installing the correct X64 and X32 packages to the respective Poser installations, which are both SR3.

The docs say SR3 is needed for your version as well. If that helps any.

Just my opinion, I think there was an assumption that a majority of the downloaders would have the configuration set up to have the Runtime. However, keep in mind that I said Runtime and not installed location. You can load Poser into C:\Program Files and the default Runtime can be elsewhere.

Dakkar tried the solution I suggested and it did not help. Question: where is your Scripts folder (the one in the Script menu) stored? That is considered the Default Runtime for your installation.

Not a safe assumption at all. Many users do not install things to the default directory in Windows Vista or 7 in order to avoid permissions/UAC issues and other irritating problems. I think DAZ knows better than this - there's never been a single problem with my non-default DAZ and Poser installations before.
I'm thinking it may be an SR3 issue per Jaderail's suggestion, since my work machine is not normally connected to the internet. I'm downloading it to install just in case it's not already there. I will report if it works!

I am unable to add the importer to my cart to receive it. Help.

Dakkar put SR3 in and reinstalled... now working. I am not sure EXACTLY what they changed (not on my computer that has Poser on it), but the file format makes a difference. I was looking in the folder and saw dson.blahblahblah was referring to addon/dson etc. Definitely something different.

Ok... feel like an idiot. Installed SR3. It works just fine.

Yes, SR3 issue. Thank you!

Umm... this is a different issue altogether. However, exactly WHAT error are you getting? And does anyone know how I might see what service pack is installed? Not as newbie as I sound, just memory problems.

Oh! A line appears in red that says only one is allowed. The cart says none are in it.

Help > About. Or you could just click on Edit, go to Preferences and click on Check for Update.

Add something else to the cart, then both should appear and you can remove the extra item. I had that problem because I had the nerve to add it to my cart and then try to sign in manually. Easy fix.

In the browser of your choice, clear your cookies. In Firefox, this is Firefox > Options > Advanced > Network > Clear Offline Web And User Data. The store is seeing that you have one in storage. Once it is clear, it sees nothing. :)

This works too. :)

Updated to SR3. Now the DSON Starter Essentials Genesis loads, but it's invisible and has no materials. O.o

Thanks! I'll do as suggested! Woo Hoo!

Try reinstalling. That's what Dakkar did and it seemed to help.

Reinstalling the starter pack did not fix it. Reinstall of Service Pack 3 did the trick... Go figure?

Hate it when SRs do not work right the first time. :)

SR3 for POSER! For some reason I was looking for it for DAZ Studio 4.5. D'oh!
http://www.daz3d.com/forums/viewthread/9476/P15/
CC-MAIN-2015-40
en
refinedweb
#include <hallo.h>
* Joerg Schilling [Sun, Oct 29 2006, 02:49:32PM]:
> Eduard Bloch <edi@gmx.de> wrote:
>
> > #include <hallo.h>
> > * Joerg Schilling [Sun, Oct 29 2006, 10:51:58AM]:
> > > Eduard Bloch <blade@master.debian.org> wrote:
> > >
> > > > Let me repeat:
> > > > > In order to investigate on the claim that there is a problem with a missing
> > > > > isnan(), I need the output from the "configure" run _and_ I need the content
> > > >
> > > > It is not missing. It is in libm, which you don't link with. I told you
> > > > that already. I pointed you even to our linker test to detect the issue.
> > >
> > > It is not a problem being an inexperienced programmer like you are....
> >
> > No. It is a problem with people like you who cannot stand that someone
> > may know better and therefore try to invent something by their own and
>
> As long as you accusingly write in the public about things that rather
> blame you, you need to be called a person that poisons OSS projects in the
> public.

Yeah. Sure. Thanks for keeping quotes this time, so it is easy to see who
starts writing accusingly and who pulls down the discussion to a personal
level.

Eduard.

--
* maxx brought weasel his first box of SuSE; in return, he later brought me to Debian
<weasel> .oO( and now he's a DD. everyone makes mistakes.. )
<maxx> you made 2.... you were even the advocate :P
https://lists.debian.org/cdwrite/2006/10/msg00112.html
CC-MAIN-2015-40
en
refinedweb
> 2) something that doesn't use namespaced tags to identify dynamic
> scopes (clashes with #1)

Would be nice. But since I don't like 1, no problem here. ;-)

> 3) something that doesn't use the name taglib

The name does have evil connotations. Do you have a suggestion for a new name?

> > That's pretty much all you have to do to make me happy.
>
> --
> Stefano.

Glen Ezkovich
HardBop Consulting
glen at hard-bop.com

A Proverb for Paranoids:
"If they can get you asking the wrong questions, they don't have to worry about answers."
- Thomas Pynchon, Gravity's Rainbow
http://mail-archives.apache.org/mod_mbox/cocoon-dev/200412.mbox/%3C7AE08032-456E-11D9-AA5C-000393B3DE96@hard-bop.com%3E
CC-MAIN-2015-40
en
refinedweb
..., which use the (%xx) escape mechanism just like URLs, so it can be freely used in regular expressions without doubling.

Strings are concatenated by just putting them one after each other without any operator in between.

.saintmode_threshold can be set to the maximum size of the saintmode list. Setting a value of 0 disables saintmode checking entirely for that backend. The value in the backend declaration overrides the parameter.

Directors

Directors choose from different backends based on health status and a per-director algorithm. There currently exists a round-robin and a random director.

Directors are defined using:

director b2 random {
    .retries = 5;
    {
        /* We can refer to named backends */
        .backend = b1;
        .weight  = 7;
    }
    {
        /* Or define them inline */
        .backend = { .host = "fs2"; }
        .weight  = 3;
    }
}

The random director

The random director takes one per-director option, .retries. This specifies how many tries it will use to find a working backend. The default is the same as the number of backends defined for the director.

There is also a per-backend option: weight, which defines the portion of traffic to send to the particular backend.

The round-robin director

The round-robin director does not take any options.

Backend probes

Backends can be probed to see whether they should be considered healthy or not. The return status can also be checked by using req.backend.healthy.

.window is how many of the latest polls we examine, while .threshold is how many of those must have succeeded for us to consider the backend healthy. .initial is how many of the probes are considered good when Varnish starts - defaults to the same amount as the threshold.

backend www {
    .host = "";
    .port = "http";
    .probe = {
        .url = "/test.jpg";
        .timeout = 0.3 s;
        .window = 8;
        .threshold = 3;
        .initial = 3;
    }
}

It is also possible to specify the raw HTTP request.

backend www {
    .host = "";
    .port = "http";
    .probe = {
        # NB: \r\n automatically inserted after each string!
        .request =
            "GET / HTTP/1.1"
            "Host:"
            "Connection: close";
    }
}

ACLs

An ACL declaration creates and initializes a named access control list, which can later be used to match client addresses; see the acl purge declaration under EXAMPLES below.

Grace

obj.grace = 2m;

Functions

The following built-in functions are available:

purge_url(regex)
    Purge all objects in cache whose URLs match regex.

The subroutine may terminate with one of the following keywords:

error code [reason]
    Return the specified error code to the client and abandon the request.

pass
    Proceed with pass mode.

vcl_hash

Use req.hash += req.http.Cookie or similar to include the Cookie HTTP header in the hash string. The vcl_hash subroutine may terminate with one of the following keywords:

hash
    Proceed.

vcl_hit

Called after a cache lookup if the requested document was found in the cache. The vcl_hit subroutine may terminate with one of the following keywords:

error code [reason]
    Return the specified error code to the client and abandon the request.

pass
    Switch to pass mode. Control will eventually pass to vcl_pass.

deliver
    Deliver the cached object to the client. Control will eventually pass to vcl_deliver.

vcl_fetch

May terminate with one of the following keywords:

error code [reason]
    Return the specified error code to the client and abandon the request.

pass
    Switch to pass mode. Control will eventually pass to vcl_pass.

deliver
    Possibly insert the object into the cache, then deliver it to the client. Control will eventually pass to vcl_deliver.

esi
    ESI-process the document which has just been fetched.

vcl_deliver

Called before a cached object is delivered to the client. The vcl_deliver subroutine may terminate with one of the following keywords:

error code [reason]
    Return the specified error code to the client and abandon the request.
deliver
    Deliver the object to the client.

include "purge.vcl";

# in file "backends.vcl"
sub vcl_recv {
    if (req.http.host ~ "example.com") {
        set req.backend = foo;
    } elsif (req.http.host ~ "example.org") {
        set req.backend = bar;
    }
}

# in file "purge.vcl"
sub vcl_recv {
    if (client.ip ~ admin_network) {
        if (req.http.Cache-Control ~ "no-cache") {
            purge

The following variables are available:

req.http.header
    The corresponding HTTP header.

The following variables are available after the object has been retrieved from cache or from the backend:

obj.proto
    The HTTP protocol version used when the object was retrieved.

obj.status
    The HTTP status code returned by the server.

obj.response
    The HTTP status message returned by the server.

obj.cacheable
    True if the request resulted in a cacheable response. A response is considered cacheable if it is valid (see above), the HTTP status code is 200, 203, 300, 301, 302, 404 or 410 and it has a non-zero time-to-live when Expires and Cache-Control headers are taken into account.

obj.ttl
    The object's remaining time to live, in seconds.

obj.lastuse
    The approximate time elapsed since the object was last requested, in seconds.

obj.hits
    The approximate number of times the object has been delivered. A value of 0 indicates a cache miss.

sub vcl_recv {
    if (req.http.host ~ "^(www.)?example.com$") {
        set req.http.host = "";
    }
}

HTTP headers can be removed entirely using the remove keyword:

sub vcl_fetch {
    # Don't cache cookies
    remove obj.http.Set-Cookie;
}

EXAMPLES

The following code is the equivalent of the default configuration with the backend address set to "backend.example.com" and no backend port specified.

backend default {
    .host = "backend.example.com";
}

        Varnish cache server</a>
      </address>
    </body>
  </html>
"};
    return (deliver);
}

sub vcl_recv {
    if (req.http.host ~ "^(www.)?example.com$") {
        set req.http.host = "";
    }
}

if (obj.http.Set-Cookie) {
    deliver;
}

The following code implements the HTTP PURGE method as used by Squid for object invalidation:

acl purge {
    "localhost";
    "192.0.2.1"/24;
}

AUTHOR
    This manual page was written by Dag-Erling Smørgrav ⟨des@des.no⟩.
http://manpages.ubuntu.com/manpages/lucid/man7/vcl.7.html
CC-MAIN-2015-40
en
refinedweb
Service Enablement of CORBA Applications through Oracle Service Bus

By Ricardo Ferreira-Oracle

With this statement in mind, it is reasonable to think that a good ESB must be able to handle different types of systems and technologies found in legacy systems, no matter whether the system was built last year, five years ago, or even in the last decade. Those systems represent the assets of an organization in terms of its business building blocks, so there is a huge chance that those systems carry a substantial number of business services that could be leveraged by an SOA initiative.

CORBA is a very powerful distributed technology that was pretty popular in the 1990s and the beginning of the 2000s. Many industries that demand a robust infrastructure to handle their business transactions - critical by nature and extremely sensitive in terms of performance and reliability - relied on CORBA as the implementation technology. It is pretty common to find communications companies like internet providers, mobile operators, and prepaid service chains that built their foundation (also known as engineering services) on top of CORBA systems.

This article will show how to enable CORBA systems through OSB, the ESB implementation from Oracle that is part of the SOA Suite. Through the steps shown here, you will be able to leverage existing CORBA systems and expose that business logic (defined as CORBA objects) through different transports, protocols, and contracts, making the reuse of that business logic both possible and viable. This article will not cover any CORBA-specific ORB, to make the techniques available here reproducible in different contexts.

The Interface Definition Language

The definition of any CORBA object is written in a neutral description language called IDL, an acronym for Interface Definition Language. For this example, I will consider that OSB will service-enable a functionality that sends SMS messages, and that this functionality is currently implemented as an object of a CORBA system. The IDL of this object is described below:

module corbaEnablementExample {

    struct Message {
        string content;
        string phoneId;
    };

    interface SMSGateway {
        oneway void send(in Message message);
    };

};

As you can see, this is a very simple object that accepts a message as its main parameter, and the message has attributes that represent the content to be sent as an SMS message and the mobile phone number that will receive the content.

The CORBA Server Application

It does not matter, for the didactics of this article, in which programming language the server part of the CORBA application is implemented. What really matters is in which ORB the CORBA server application registers its implementation stub. To illustrate the example, let's suppose that this CORBA object is implemented in Java.
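The servant class SMSGatewayImpl used by the server bootstrap below can be sketched as follows - a minimal illustration assuming the standard IDL-to-Java mapping, in which the IDL compiler (for example, idlj) generates the SMSGatewayPOA skeleton and the Message class from the IDL above; the method body here is purely illustrative:

package corbaEnablementApplication;

import corbaEnablementExample.Message;
import corbaEnablementExample.SMSGatewayPOA;

// Sketch of the servant: extends the skeleton generated by the IDL
// compiler from the SMSGateway interface shown above.
public class SMSGatewayImpl extends SMSGatewayPOA {

    @Override
    public void send(Message message) {
        // Illustrative only: a real implementation would hand the
        // message off to an SMS center or messaging backbone.
        System.out.println("Sending SMS to " + message.phoneId
                + ": " + message.content);
    }
}

The server application shown next registers an instance of this servant with the ORB and binds it into the naming service.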
package corbaEnablementApplication;

import org.omg.CORBA.ORB;
import org.omg.CORBA.Object;
import org.omg.CosNaming.NameComponent;
import org.omg.CosNaming.NamingContextExt;
import org.omg.CosNaming.NamingContextExtHelper;
import org.omg.PortableServer.POA;
import org.omg.PortableServer.POAHelper;

import corbaEnablementExample.SMSGateway;
import corbaEnablementExample.SMSGatewayHelper;

public class ServerApplication {

    public static void main(String[] args) {
        ORB orb = null;
        Object tmp = null;
        POA rootPOA = null;
        SMSGatewayImpl impl = null;
        Object ref, objRef = null;
        SMSGateway href = null;
        NamingContextExt ncRef = null;
        NameComponent path[] = null;
        try {
            System.setProperty("org.omg.CORBA.ORBInitialHost", "soa.suite.machine");
            System.setProperty("org.omg.CORBA.ORBInitialPort", "8001");
            orb = ORB.init(args, null);
            tmp = orb.resolve_initial_references("RootPOA");
            rootPOA = POAHelper.narrow(tmp);
            rootPOA.the_POAManager().activate();
            impl = new SMSGatewayImpl();
            ref = rootPOA.servant_to_reference(impl);
            href = SMSGatewayHelper.narrow(ref);
            objRef = orb.resolve_initial_references("NameService");
            ncRef = NamingContextExtHelper.narrow(objRef);
            path = ncRef.to_name("sms-gateway");
            ncRef.rebind(path, href);
            System.out.println("----------------------------------------------------");
            System.out.println("  CORBA Server is Running and waiting for Requests  ");
            System.out.println("----------------------------------------------------");
            orb.run();
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }
}

The code listing above shows a CORBA server application that connects to an ORB available on TCP/IP port 8001. After retrieving the POA from the ORB, it gets access to the naming service that will be used to register the object implementation. Finally, the application binds the object implementation under the name "sms-gateway", the name by which the CORBA object will be known to the outside world.

In order to test this CORBA server application, start an ORB on port 8001 and execute the program in one JVM. If you don't have any commercial ORB available, you can use the ORB which comes with the JDK. Just enter the /bin folder of your JDK and type:

orbd -ORBInitialHost soa.suite.machine -ORBInitialPort 8001

To check that this remote object is working properly, you need to write a CORBA client application.
Here is an example of a CORBA client written against the same IDL interface the server was written for:

package corbaEnablementApplication;

import java.util.Random;
import java.util.UUID;

import org.omg.CORBA.ORB;
import org.omg.CORBA.Object;
import org.omg.CosNaming.NamingContextExt;
import org.omg.CosNaming.NamingContextExtHelper;

import corbaEnablementExample.Message;
import corbaEnablementExample.SMSGateway;
import corbaEnablementExample.SMSGatewayHelper;

public class ClientApplication {

    private static final Random random = new Random();

    public static void main(String[] args) {
        ORB orb = null;
        Object objRef = null;
        NamingContextExt ncRef = null;
        SMSGateway smsGateway = null;
        try {
            System.setProperty("org.omg.CORBA.ORBInitialHost", "soa.suite.machine");
            System.setProperty("org.omg.CORBA.ORBInitialPort", "8001");
            orb = ORB.init(args, null);
            objRef = orb.resolve_initial_references("NameService");
            ncRef = NamingContextExtHelper.narrow(objRef);
            smsGateway = SMSGatewayHelper.narrow(ncRef.resolve_str("sms-gateway"));
            Message message = createNewRandomMessage();
            smsGateway.send(message);
        } catch (Exception ex) {
            ex.printStackTrace();
        }
    }

    private static Message createNewRandomMessage() {
        String content = UUID.randomUUID().toString();
        String phoneId = String.valueOf(random.nextLong());
        Message message = new Message(content, phoneId);
        return message;
    }
}

The Business Services Layer

In order for OSB to get access to the remote object, it is necessary to create a mechanism that can translate the IIOP protocol (the protocol used in pure CORBA systems) to a protocol that OSB can understand, which could be RMI/IIOP or pure RMI. To accomplish that, the best way is to implement the wrapper pattern. Write an EJB 3.0 service that encapsulates the CORBA remote object and delegates its service calls to this object. The interface for this EJB 3.0 service could be something simple like this:

package com.oracle.fmw.soa.osb.corba;

public interface SMSGateway {
    public void sendMessage(String content, String phoneId);
}

The implementation of this EJB 3.0 service should perform a job similar to the CORBA client application described previously, but quite different in terms of how it connects to an ORB:

package com.oracle.fmw.soa.osb.corba;

import javax.annotation.PostConstruct;
import javax.annotation.Resource;
import javax.ejb.Remote;
import javax.ejb.Stateless;

import org.omg.CORBA.ORB;
import org.omg.CORBA.Object;
import org.omg.CosNaming.NamingContextExt;
import org.omg.CosNaming.NamingContextExtHelper;

import corbaEnablementExample.SMSGatewayHelper;

@Remote(value = SMSGateway.class)
@Stateless(name = "SMSGateway", mappedName = "SMSGateway")
public class SMSGatewayImpl implements SMSGateway {

    private corbaEnablementExample.SMSGateway smsGateway;

    @Resource(name = "namingService")
    private String namingService;

    @Resource(name = "objectName")
    private String objectName;

    @PostConstruct
    @SuppressWarnings("unused")
    private void retrieveStub() {
        ORB orb = null;
        Object objRef = null;
        NamingContextExt ncRef = null;
        try {
            orb = ORB.init();
            objRef = orb.resolve_initial_references(namingService);
            ncRef = NamingContextExtHelper.narrow(objRef);
            smsGateway = SMSGatewayHelper.narrow(ncRef.resolve_str(objectName));
        } catch (Exception ex) {
            throw new RuntimeException("EJB wrapper failed in the retrieval of the CORBA stub.", ex);
        }
    }

    @Override
    public void sendMessage(String content, String phoneId) {
        smsGateway.send(new Message(content, phoneId));
    }
}

The code is very similar to the CORBA client application shown before, with one important difference: it has no information about which ORB to connect to. In this case, the EJB will reside in a WebLogic JVM. Each WebLogic JVM has an ORB implementation out-of-the-box. So when you write EJB objects that wrap CORBA remote objects, you don't need to worry about which ORB to use. WebLogic is already an ORB.
Note: as explained before, this article will not enter into the details of any commercial ORB available, in defense of the clarity and didactics of the article. But keep in mind that the steps shown here for the stub retrieval can be quite different if you are using another ORB. If you are using Borland VisiBroker, for instance, there is a unique way to access the ORB, which is using a service called "Smart Agent" that dynamically finds other objects in the network. IONA Orbix has another unique way to connect to an ORB, which is by the use of the domain configuration location of the Orbix network.

Create a WebLogic domain and execute one or more WebLogic managed servers, and run the CORBA server application again. Remember that now the CORBA server application should point to the WebLogic port, since the ORB should now be the one available in the WebLogic subsystem. If you check the JNDI tree of the WebLogic JVM, you should see the "sms-gateway" entry there. This means that the remote CORBA object was properly registered in the CosNaming service available in the WebLogic ORB.

Package the EJB 3.0 implementation into a JAR or an EAR and deploy it in the same WebLogic JVM in which the CORBA remote object was registered. Now we have everything in place to start the development of the OSB project. For the purposes of this article, I will assume that the EJB 3.0 object is available under the following JNDI name: "SMSGateway#com.oracle.fmw.soa.osb.corba.SMSGateway".

The OSB Project

On the OSB side, all you have to do is create a business service that points to one or more endpoints of the EJB 3.0 bean that is running on the servers of the WebLogic domain. In order to accomplish that, you will need to teach OSB how to communicate with this foreign WebLogic domain. This is done by creating a JNDI provider in the OSB configuration scheme.

OSB also needs to access the EJB 3.0 interfaces (and any other helper classes) to instantiate client proxies, so you need to package all the EJB 3.0 artifacts (except, of course, the enterprise bean implementation) and deploy them to your OSB project.

Now we have everything in place. It is time to create the business service that will point to the EJB 3.0 wrapper. Create a business service and set its service type to "Transport Typed". Configure the business service protocol as "EJB" and set its endpoint URI to the prefix "ejb:" plus the name of the JNDI provider plus the JNDI name of the EJB 3.0 bean.

Finally, you need to configure the client interface of the EJB 3.0 endpoint on the business service configuration page. Check the "EJB 3.0" checkbox and choose from the drop-down list which interface will be used for message communication. Finish the creation of the business service and save the changes. You can now test your business service using the testing tool available in OSB. After making a request to the business service with the testing tool, you can check the CORBA server application log to see the results of the invocation.
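As an additional check outside the OSB testing tool, the same EJB endpoint can be exercised directly over JNDI. The sketch below is a hedged illustration, not part of the original walkthrough: the t3 URL, the call arguments, and the assumption of a WebLogic client JAR on the classpath are all illustrative:

import java.util.Hashtable;

import javax.naming.Context;
import javax.naming.InitialContext;

import com.oracle.fmw.soa.osb.corba.SMSGateway;

public class JndiSmokeTest {

    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "weblogic.jndi.WLInitialContextFactory");
        // Hypothetical host and port of the WebLogic server hosting
        // the EJB wrapper.
        env.put(Context.PROVIDER_URL, "t3://soa.suite.machine:8001");

        Context ctx = new InitialContext(env);
        SMSGateway gateway = (SMSGateway) ctx.lookup(
                "SMSGateway#com.oracle.fmw.soa.osb.corba.SMSGateway");
        // Illustrative arguments only.
        gateway.sendMessage("hello from JNDI", "5551234567");
        System.out.println("EJB wrapper reachable via JNDI.");
    }
}

If, for example, a JNDI provider named WLSDomainProvider were registered in OSB (the name is again just an assumption), the business service endpoint URI described above would take the form ejb:WLSDomainProvider:SMSGateway#com.oracle.fmw.soa.osb.corba.SMSGateway.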
With the business service in place, you can easily create one or more proxy services to access the remote CORBA object with minimal effort. From the OSB perspective, it is all about routing messages to the business service that you created, making the fact that this business service is a CORBA remote object really irrelevant. No matter what your use case is, you now have the CORBA remote object available in OSB for virtually anything. You can expose it directly using one of the available transports, you can forward messages to it in the middle of your pipeline, you can use it as an enrichment mechanism using service callouts, or you can just use the business service as one of the choices of a dynamic routing. If you choose to expose this business service through a new protocol, you can play with SOAP, REST, HTTP, JMS, Email, Tuxedo, File, and FTP with zero coding. OSB will take care of the protocol translation during message exchanges.

You can download the project artifacts created in this article here.
https://blogs.oracle.com/middlewareplace/tags/soap
CC-MAIN-2015-40
en
refinedweb
Greetings,

The amounts that the trust pays for your son's expenses are not income to you, and the payments to you for administration are income to you. This payment to you is an expense of the trust. The trust will file a tax return (and is required to if it has more than $600 of income). Even if not required, it may be beneficial to at least prepare (and probably file) a return to have a record of the transactions. The distributions to your son may cause some of the trust income to be taxable to your son; but he should receive a Form K-1 to report the amounts to include on his return if he is required to file.

The problem may arise of keeping separate the two types of payments. My suggestion is that you keep a separate account for the amounts distributed for your son's expenses and to repay you for the amounts you spend on him, or have payments made directly from that account for his care and support. That way, there is no question what is and is not your income and how the funds were spent. As both the paid administrator and the caretaker of your son, you can avoid any question of using his money for yourself by keeping separate accounts and preparing a return.

I hope this helps for separating the payments from the trust.
http://www.justanswer.com/tax/109aq-trustee-disabled-sons-account.html
CC-MAIN-2015-40
en
refinedweb
ROUTE(4)                 BSD Programmer's Manual                 ROUTE(4)

NAME
     route - kernel packet forwarding database

SYNOPSIS
     #include <sys/socket.h>
     #include <net/if.h>
     #include <net/route.h>

     int
     socket(PF_ROUTE, SOCK_RAW, int family);

DESCRIPTION
     ... interfaces; each protocol family installs a routing table entry for
     each interface when it is ready for traffic. Normally the protocol
     specifies the route through each interface as a "direct" connection to
     the destination ...

     The family parameter (see SYNOPSIS above) may be AF_UNSPEC, which will
     provide routing information ... position, and delimited by the new
     length entry in the sockaddr.

     An example of a message with four addresses might be an ISO redirect:
     Destination, Netmask, Gateway, and Author of the redirect.

     struct rt_msghdr {
             ...
             int     rtm_flags;   /* flags, incl. kern & message, e.g. DONE */
             int     rtm_addrs;   /* bitmask identifying sockaddrs in msg */
             pid_t   rtm_pid;     /* identify sender */
             int     rtm_seq;     /* for sender to identify action */
             int     rtm_errno;   /* why failed */
             int     rtm_use;     /* from rtentry */
             u_long  ...
     };

     struct if_announcemsghdr {
             u_short ifan_msglen;          /* to skip over non-understood messages */
             u_char  ifan_version;         /* future binary compatibility */
             u_char  ifan_type;            /* message type */
             u_short ifan_index;           /* index for associated ifp */
             char    ifan_name[IFNAMSIZ];  /* if name, e.g. "en0" */
             u_short ifan_what;            /* what type of announcement */
     };

     The RTM_IFINFO message uses a if_msghdr header, the RTM_NEWADDR and
     RTM_DELADDR messages use a ifa_msghdr header, and the RTM ...

SEE ALSO
     socket(2), sysctl(3)

MirOS BSD #10-current                                            April 19
https://www.mirbsd.org/htman/i386/man4/route.htm
CC-MAIN-2015-40
en
refinedweb
SYNOPSIS

#include <sys/varargs.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

char *vsprintf(char *buf, const char *fmt, va_list ap);

INTERFACE LEVEL

Solaris DDI specific (Solaris DDI).

PARAMETERS

buf
    Pointer to a character string.

fmt
    Pointer to a character string.

ap
    Pointer to a variable argument list.

DESCRIPTION

vsprintf() builds a string in buf under the control of the format fmt. The format is a character string with either plain characters, which are simply copied into buf, or conversion specifications, each of which converts zero or more arguments, again copied into buf. The results are unpredictable if there are insufficient arguments for the format; excess arguments are simply ignored. It is the user's responsibility to ensure that enough storage is available for buf.

ap contains the list of arguments used by the conversion specifications in fmt. ap is a variable argument list and must be initialized by calling va_start(9F). va_end(9F) is used to clean up and must be called after each traversal of the list. Multiple traversals of the argument list, each bracketed by va_start(9F) and va_end(9F), are possible.

Each conversion specification is introduced by the character %, followed by a character indicating the type of conversion to be applied:

d, D, o, O, x, X, u
    The integer argument is converted to signed decimal (d, D), unsigned octal (o, O), unsigned hexadecimal (x, X) or unsigned decimal (u), respectively, and copied. The letters abcdef are used for x conversion. The letters ABCDEF are used for X conversion.

c
    The character value of the argument is copied.

b
    This conversion uses two additional arguments. The first is an integer, and is converted according to the base specified in the second argument. The second argument is a character string in the form <base>[<arg> . . . ]. The base supplies the conversion base for the first argument as a binary value; \10 gives octal, \20 gives hexadecimal. Each subsequent <arg> is a sequence of characters, the first of which is the bit number to be tested, and subsequent characters, up to the next bit number or terminating null, supply the name of the bit. A bit number is a binary-valued character in the range 1-32. For each bit set in the first argument, and named in the second argument, the bit names are copied, separated by commas, and bracketed by < and >. Thus, the following function call would generate reg=3<BitTwo,BitOne>\n in buf:

    vsprintf(buf, "reg=%b\n", 3, "\10\2BitTwo\1BitOne")

s
    The argument is taken to be a string (character pointer), and characters from the string are copied until a null character is encountered. If the character pointer is NULL on SPARC, the string <nullstring> is used in its place; on x86, it is undefined.

%
    Copy a %; no argument is converted.

RETURN VALUES

vsprintf() returns its first parameter, buf.

CONTEXT

vsprintf() can be called from user, kernel, or interrupt context.

EXAMPLES

In this example, xxerror() accepts a pointer to a dev_info_t structure dip, an error level level, a format fmt, and a variable number of arguments. The routine uses vsprintf() to format the error message in buf. Note that va_start(9F) and va_end(9F) bracket the call to vsprintf(). instance, level, name, and buf are then passed to cmn_err(9F).

#include <sys/varargs.h>
#include <sys/ddi.h>
#include <sys/sunddi.h>

#define MAX_MSG 256

void
xxerror(dev_info_t *dip, int level, const char *fmt, ...
)
{
        va_list ap;
        int instance;
        char buf[MAX_MSG], *name;

        instance = ddi_get_instance(dip);
        name = ddi_binding_name(dip);

        /* format buf using fmt and arguments contained in ap */
        va_start(ap, fmt);
        vsprintf(buf, fmt, ap);
        va_end(ap);

        /* pass formatted string to cmn_err(9F) */
        cmn_err(level, "%s%d: %s", name, instance, buf);
}

SEE ALSO

cmn_err(9F), ddi_binding_name(9F), ddi_get_instance(9F), va_arg(9F)
http://docs.oracle.com/cd/E19082-01/819-2256/vsprintf-9f/index.html
CC-MAIN-2015-40
en
refinedweb
Python Programming, news on the Voidspace Python Projects and all things techie.

Python and Threading

In my Python programming so far I've managed to avoid threads altogether. I learned a lot whilst working on ConfigObj with Nicola Larosa. He has a pathological hatred of threads, which I inherited by proxy. In order to justify a prejudice like this you need to understand the issues. In his case the loathing almost certainly came from great pain in debugging thread related problems. I understand what problems threading could cause, but had no direct experience.

Working with IronPython and Windows Forms I've had to wrestle with threads a bit. In our production code Timers and Network Clients involve asynchronous callbacks which use threads. More to the point; in order to interact with our GUI a lot of the tests need to run on another thread. We've had much fun working out timing and blocking issues. We're currently going through our test codebase and trying to remove as many 'voodoo sleeps' [1] as possible and actually resolve the issues they are attempting to work round.

Programming in IronPython and Windows Forms we're using a native GUI framework. My guess is that other Python GUI toolkits also use threads 'under the hood' for timer classes and the like. Because these frameworks are non-native, the threading doesn't normally interact with your Python code. Perhaps this is a good thing.

In the last couple of days, working with the boss on a CPython script, I've used the Python threading API for the first time. The basics are very easy, but there are a couple of noticeable oddnesses.

import time
from threading import Thread

TIMEOUT = 110

def LongRunning():
    time.sleep(100)
    print 'Finished'

thread = Thread(target=LongRunning)
thread.start()
# Thread is now running
thread.join(TIMEOUT)

if thread.isAlive():
    print "Oh dear - the thread didn't terminate normally."
    print 'We timed out instead.'
else:
    print 'Terminated normally.'

One slightly odd thing is that the first argument to the Thread constructor is reserved, so you can't use it. Huh ?

Secondly, there is no way to terminate a thread. My boss was most perplexed by this (he is used to threading APIs from other languages which do allow this). If I understand correctly (which is possible - but perhaps not likely), the reason for this is "it could leave your objects in an inconsistent state to terminate at a random point, so you shouldn't do it". This does seem at odds with the normal Python philosophy of not telling the developer how he ought to do things.

In our case we were trying to test a long running loop by spinning it off on another thread. We got round it by monkey patching one of the functions the loop used to raise an exception. We redirect standard error around the exception to reduce the noise. Not ideal by any means...

Like this post? Digg it or Del.icio.us it.

Posted by Fuzzyman on 2006-11-08 23:43:33 | | Categories: Python, Hacking

Python Jobs on the HiddenNetwork

Big news: there are now Python jobs on the Hidden Network Jobs Board. If you're searching for a Python job, or are looking to hire, this is the place. This is great news. It means that Python jobs will be shown all across the Hidden Network. It's only just happened, so there are only a couple so far, but expect this to grow.

The HiddenNetwork is the job board started by the team responsible for The Daily WTF. Adverts are shown on a network of top programming blogs. That means that the posts get shown many thousands of times a day, and are read by top programmers from all around the world.
So is advertising on blogs an effective way of hiring ? Well... a few months back I mentioned on this blog that we were hiring at Resolver Systems. We had an excellent developer, Christian Muirhead [1], who applied for the job. He's an excellent programmer. We've got an interesting mix of skills now, and have been very lucky with hires. With a small team it's very important that everyone fits in. I feel very lucky to work at Resolver, as they're all very good blokes. The team includes :

- Andrzej with web development experience in Java and Ruby
- Giles (the boss) who has done a lot of Java development for a large investment bank
- William, who has done C++ (and Objective C etc) programming (including working on a win32 compatibility layer for porting games to the Mac)
- Jonathan with a lot of GIS experience in C, C++, C#
- Christian a web developer with ASP.NET and Python
- Me

(My apologies to my esteemed colleagues if I have misrepresented them.)

As you can see, we're a diverse bunch. This makes for great pair programming as we've all learned different patterns and idioms and all have different experiences to draw from.

Anyway, I've wandered further away from the point than usual. The theory is that techies who read blogs are more likely to have a passion for programming. It's not just Python jobs, they also have Java, Ruby, .NET, PHP jobs and all the usual suspects. So if you're looking to hire someone who's a cut above the average then the Hidden Network could be what you're looking for. Watch out though, you might get someone like you, or even worse; someone like me.

Oh, by the way: I'm off to Italy for a few days. You'll have to cope without me until next week.

Like this post? Digg it or Del.icio.us it.

Posted by Fuzzyman on 2006-11-08 23:07:44 | | Categories: Python, Website, General Programming, Work

Movable Python 2.0.0 Beta 2

I'm pleased to announce the release of Movable Python 2.0.0 Beta 2. There are now updated versions for Python 2.2 - 2.5, the Mega-Pack [1], and the free trial version.

The most important changes in this release are :

- Fixes for Pythonwin. The 'run' button and 'grep' dialog now work.
- You can now specify a directory for config files.
- The environment programs are run in is no longer pre-populated with variables. The Movable Python variables are now in a module called movpy.

You can see the full changelog below. You can download the updated version from :

Movable Python Download Pages

You can get the free trial version, from Movable Python Demo & Other Files. Plus many other features and bundled libraries.

What's New ?

The changes in version 2.0.0 Beta 2 include :

(Many thanks to Schipo and Patrick Vrijlandt for bug reports, fixes and suggestions.)

- Updated to Python 2.4.4 and wxPython 2.7.1
- Fixed the bug with pylab support.
- Fixed problem with global name scope in the interactive interpreter.
- Everything moved out of the default namespace and into a module called 'movpy':
  - commandline != '' if '-c' was used
  - go_interactive = True if '-i' was set
  - interactive = True if we are in an interactive session
  - interactive_mode is a function to enter interactive mode: interactive_mode(localvars=None, globalvars=None, IPOFF=False, argv=None)
  - movpyw = True if we are running under movpyw rather than movpy
- The docs menu option will now launch.

Hopefully the next release will be 2.0.0 final, with a few minor changes and completed documentation

Like this post? Digg it or Del.icio.us it.
Posted by Fuzzyman on 2006-11-07 23:45:57 | | Categories: Python, Projects

BlogAds Gadget Network

Voidspace is now part of the BlogAds Gadget Network, which is very cool. BlogAds is an advertising network of specialist blogs. The Gadget Network features those with a focus on gadgets and techie subjects. The adverts are pretty good, you can see them on the left sidebar. They can include an image and a good amount of text. There are certainly some higher traffic (and more expensive) blogs than mine, but according to the metrics on the network; an advert on Voidspace will get around 22 000 impressions in a week and cost you $10. The adverts are site-wide, but they will mainly be seen by alpha geeks like you. If you have any technical products to advertise, then the Gadget Network is a great place to do it, and especially Voidspace.

Like this post? Digg it or Del.icio.us it.

Posted by Fuzzyman on 2006-11-07 23:27:47 | | Categories: Website, Computers

Movable Python Extras

There are now some Movable Python extras available for download. These can be downloaded by anyone, but are especially useful for use with Movable Python. You can find them at :

Python files at Tradebit

The files available are :

- Python manuals (CHM files) for Python 2.3, 2.4 & 2.5
- PyWin32 Manual (CHM)
- Matplotlib (pylab) and Numarray for Python 2.4
- SPE 0.8.3c : the Python IDE for Python 2.3-2.5

Instructions on how to use SPE and matplotlib with Movable Python are included on the details pages for the files.

Like this post? Digg it or Del.icio.us it.

Posted by Fuzzyman on 2006-11-07 18:24:39 | | Categories: Python, Projects

New Hard Drive

I've just bought (or at least paid for) a new hard drive. The most cost effective size seemed to be 320 gigabytes. The two cheapest places I could find (for branded drives) were Dabs and Ebuyer. They were almost identically priced, but the service from Dabs has been better in the past. The UK cost was the equivalent of about 40c per gigabyte.

Like this post? Digg it or Del.icio.us it.

Posted by Fuzzyman on 2006-11-04 14:59:45 | |

Python import and __main__ Oddness

Try creating a small Python script with the following code, and running it :

import sys
import imp

x = imp.new_module('__main__')
sys.modules['__main__'] = x
print x

What would you expect to see output ? In theory I suppose it ought to be a module object. What happens (with Python 2.4.4) is that it prints None.

The reason for this is that when you replace sys.modules['__main__'] with another one, the reference count of the original __main__ module drops to zero. So garbage collection kicks in. The original of course is the script currently running. Except shouldn't that result in a NameError ? I guess the module is only partially garbage collected, hence the name isn't lost but its value is reset to None. Hmmm... As the module is being executed it shouldn't be garbage collected at all, the interpreter should keep a reference to it [1]. And the moral of this story, don't do this at home.

Note

Python behaves differently if you try this in an interactive session. To see the same results as above you have to run this as a script.

Chris Siebenmann has written up an explanation.

Like this post? Digg it or Del.icio.us it.

Posted by Fuzzyman on 2006-11-04 10:55:45 | | Categories: Hacking, Python

This work is licensed under a Creative Commons Attribution-Share Alike 2.0 License.
http://www.voidspace.org.uk/python/weblog/arch_d7_2006_11_04.shtml
CC-MAIN-2015-40
en
refinedweb
NetBeans: the Java EE 6 IDE

Bonjour Comment Java?

Today is some sort of historical milestone for Java EE 6: this is Milestone 1 of NetBeans 6.8 with Java EE 6 support and the latest GlassFish v3 (build 57). The bundle is only 132MB and contains everything you need to start with Java EE 6: the IDE, the current Java EE 6 runtime, the Java EE 6 JavaDocs (for code completion), the JavaDB database, and very very cool features from the platform or its implementation:

- No need for web.xml in Web Applications (and support for web-fragment.xml)
- Servlet Annotations
- EJB inside Web Application Projects
- Embedded Web Browser for fast testing (Mozilla XULRunner)
- GlassFish v3 build 57 pre-registered
- Stellar GlassFish v3 startup time
- Stellar Deploy on Save for Java EE projects (redeploy in less than a second), with Session preservation
- JSF 2.0 and Facelets Support
- Java EE 6 Javadoc (preview) in Code completion (not many IDEs have this support :-)
- Singleton EJB support
- All current Java EE 6 APIs available:
  - REST JAX-RS 1.1 and associated wizards
  - JAXB 2.2
  - Metro 2.0
  - JAX-WS 2.2
  - JPA 2.0
  - Beans Validation Framework
  - etc...
- Maven support

And more and more (i.e. all the NetBeans 6.7.x features as well). Read more at and get the Milestone 1 bits at

Ludo

by marekf - 2009-08-07 06:23
The root tag of the document should contain the xhtml namespace xmlns="http://www.w3.org/1999/xhtml". OTOH, I have recently changed the behaviour so the ns declaration is not necessary. Please try the latest daily build; there are more changes there. Then file an issue with steps to reproduce if it still doesn't work. Thank you, and sorry for the inconvenience.
Marek Fukala (mfukala@netbeans.org)

by cayhorstmann - 2009-08-05 21:17
Yes, EE6 and correct namespaces. I think my problem is that I use h:head and h:body tags. When I started a new project, I saw that the template.xhtml just uses head and body. And it got really confused by f:metadata. Just doesn't seem to be quite ready for JSF 2.0. I'll file a bug report.

by ludo - 2009-08-05 13:53
Even with (...) using xhtml and the correct namespace declared?

by cayhorstmann - 2009-08-05 13:05
Hi Ludo, I tried taking it for a spin, but I couldn't get facelet page editing to do anything useful. The editor insisted that the facelets pages were plain HTML files. Is there some secret switch? Thanks, Cay
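Returning to the feature list at the top of the post: the "Servlet Annotations" and "No need for web.xml" bullets translate into code like the following minimal Servlet 3.0 class. This is a generic Java EE 6 sketch, not taken from the bundle; the class name and URL pattern are arbitrary:

import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// The URL mapping lives in the annotation, so no web.xml entry is needed.
@WebServlet(urlPatterns = "/hello")
public class HelloServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType("text/plain");
        PrintWriter out = resp.getWriter();
        out.println("Hello from Java EE 6");
    }
}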
https://weblogs.java.net/blog/ludo/archive/2009/08/netbeans_the_ja.html
CC-MAIN-2015-40
en
refinedweb
and here is the orignal file where i got the code. bascially all i want to do is take this file and put the code into a JFrame so its easier to add more features and move things around Type: Posts; User: beginner123 and here is the orignal file where i got the code. bascially all i want to do is take this file and put the code into a JFrame so its easier to add more features and move things around i have attached the files ok here is all 3 files (sorry for all the extra stuff netbeans puts in): import java.io.*; import java.net.*; import java.util.*; public class MyServer{ ServerSocket ss; Socket s; sorry i though you mean't the code i just posted a minute ago - that code is fine. maybe compare the two and see whats the problem? i didnt change any of the code. i just want the code to work using... yes i did test it and it worked. thats why i started making the changes because the code was fine I am using netbeans --- Update --- i got this code from a website. here is the orginal code: i mport javax.swing.JButton; import javax.swing.JFrame; import javax.swing.JLabel; import... that is because i didnt post the 3rd file here it is: import java.io.*; import java.net.*; import java.util.*; public class MyServer{ ServerSocket ss; ok the problem is the login page its not running the actionPerformed function when the button is clicked. how can i fix this code? ok so i changed String name to static String name; and now the error is gone but the client page is still not displaying when i click the login button on the login page I tried that. I changed the code to new MyClient(name).setVisible(true); and the error says non static variable name cannot be referenced from a static context I'm making a chat application using client/server. I have 3 files - server, login and client. Nothing is happening when I click the login button, It should open the client page. I am getting an... i already have some code in an ActionPerformed event for the button. how do i add an ActionListener? so i have a simple typing game where the user clicks the start button and 10 random words appear at the top of the screen. The user then has to enter these words into the texbox and press the space... yes i know using if (Words.equals(Words)) is wrong. I know i need some kind of loop. I thought maybe i would use the jTextArea1KeyReleased event so when the last letter of the word being typed is... not really sure how to do that. i tried this in the jTextArea1KeyReleased event: if (Words.equals(Words)) { generateNextWord(); } here is my code for the array: Random r = new Random(); public int randomNumber; public String[] Words = new String[10]; public int WordCounter=0; public int... ok here is some of the code: this is my basic array with 10 words public int randomNumber; public String[] Words = new String[10]; public int WordCounter=0; I am trying to make a typing game where random words come on the screen and the user has to type the word out in the textarea. I created an array with 10 words which a displayed using a label,...
http://www.javaprogrammingforums.com/search.php?s=5fd59c4f5f55d4ed25b14cf219a898e5&searchid=1814150
CC-MAIN-2015-40
en
refinedweb
Hi,

T E Schmitz <mailreg@numerixtechnology.de> wrote on 06.08.2004 17:53:35:

> Hello,
>
> >> For backwards compatibility we would need to leave the existing methods
> >> as is. There are I guess two ways forward, either we generate a second
> >> set of methods that do left joins or we add an option to the generator
> >> to determine if the existing or a left join should be used (I think the
> >> latter might be the best option).
>
> If I could make a suggestion:
>
> I think the best thing would be to generate something like :
>
> /** Old method */
> protected static List doSelectJoin_XXX(
>     Criteria criteria,
>     Connection connection) throws TorqueException
> {
>     return (doSelectJoin_BrandViaBrand1SkippedID(criteria, connection, null));
> }
>
> /** New method */
> protected static List doSelectJoin_XXX(
>     Criteria criteria,
>     Connection connection,
>     SqlEnum joinType) throws TorqueException

I agree with you that your suggestion is more "backwards compatible" than the generator option. However, in my opinion, the "standard" join for 99% of the cases will be a left join, so I would also prefer the generator option. If one really wants to have an inner join, one could emulate it via a Criteria object which restricts the FK column to "NOT NULL".

> Note that this requires SqlEnum to be public! Or is there any reason why
> you would not want SqlEnum to be public?

Most of the SqlEnum constants are made available as public constants in the Criteria object (e.g. Criteria.LEFT_JOIN is a SqlEnum object), so I do not see the reason why you had to make SqlEnum public (I did not have to make this change). I guess the reason for not making SqlEnum public is the following: people expect these constants in the Criteria object, so this is the "default" access point to the constants. As one usually wants only one access point to avoid confusion, the SqlEnum is not public. In my opinion, this is a correct decision.

Thomas

---------------------------------------------------------------------
To unsubscribe, e-mail: torque-dev-unsubscribe@db.apache.org
For additional commands, e-mail: torque-dev-help@db.apache.org
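A rough illustration of the inner-join emulation mentioned in the message above, written against the Torque Criteria API. This is a sketch only: ProductPeer, BRAND_ID, and doSelectJoinBrand are hypothetical stand-ins for whatever peer classes and column constants the generator produced for a given schema:

import java.util.List;

import org.apache.torque.util.Criteria;

public class InnerJoinEmulationSketch {

    // Hypothetical generated peer class and FK column constant are assumed.
    public static List selectWithEffectiveInnerJoin() throws Exception {
        Criteria criteria = new Criteria();
        // Rows that a left join would pad with a NULL foreign key are
        // filtered out, so the result matches an inner join.
        criteria.add(ProductPeer.BRAND_ID, (Object) null, Criteria.ISNOTNULL);
        return ProductPeer.doSelectJoinBrand(criteria);
    }
}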
http://mail-archives.apache.org/mod_mbox/db-torque-dev/200408.mbox/%3COF37B106F9.B6A67961-ONC1256EEC.002EF425-C1256EEC.0031F2D1@seitenbau.net%3E
CC-MAIN-2015-40
en
refinedweb
Jhoel Esmejarda wrote:
@Vishal Shaw Package? uhmm.. I think I should finish first the Head First Java book, I'm stuck in Chapter 5 Extra-Strength Methods. I don't know the package yet, but I will try to learn it bit by bit. Thanks for the comments.

Rajdeep Biswas wrote:
Hi Jhoel, It's encouraging that you are putting in your efforts. I have not gone through your code completely but do have a small recommendation. Creating a lot of objects is not recommended, you know it. So, why create String objects again with the same value. For example, you wrote the following lines in more than one place:

System.out.println("IOException " + e);
System.out.println("Here's your " + prodName + "!");

So, you can introduce a constants file, like the following:

public class VendingMachineConstants {
    public static final String EXCLAMATION = "!";
    public static final String HERE_IS_YOUR = "Here's your";
    public static final String BLANK_SPACE = " ";
    public static final String IO_EXCEPTION = "IOException";
}

This is just for example. And you could use these as follows:

System.out.println(VendingMachineConstants.IO_EXCEPTION + VendingMachineConstants.BLANK_SPACE + e);
System.out.println(VendingMachineConstants.HERE_IS_YOUR + prodName + VendingMachineConstants.EXCLAMATION);

Remember, this just exemplifies, not justifies the requirement. So you have to decide, and this way you're spared creating duplicate and used-in-many-places String objects. The same holds for other data types like "int" also.
Best wishes

Vishal Shaw wrote:
There's a lot more to learn, which he will eventually pick up on the way. For now, he's doing a good job.

Jhoel Esmejarda wrote:
The first one is the Menu class... the second one is the Slot class that gets your order and payment... Please help me with this. I don't have any formal training with programming, nor am I a graduate of any computer course; I'm just studying by myself...

Jhoel Esmejarda wrote:
Thanks for the post, actually I've been waiting for your comments. For now I will try my best to improve on the things that you pointed out. Is there any book that you can recommend that can give me (as a beginner) a good start on things about programming in Java?

Winston Gutkowski wrote:
WHAT IT IS ... WHAT IT DOES ... (and lots of other good stuff)

Programming is about telling a story. Code is the programmer's interpretation of what the system should do, written as formal instructions that a computer can interpret and execute. Design is how the story is told and how the plot unfolds. As with any good story, well-crafted code should be engaging and help the reader understand exactly what the author was trying to say.

H. Abelson and G. Sussman (in "Structure and Interpretation of Computer Programs") wrote:
Programs are meant to be read by humans and only incidentally for computers to execute.

Junilu Lacar wrote:
Some of what Winston wrote under "IS" is actually "DOES" though.

Edit: BTW, Dijkstra's first name was Edsger
http://www.coderanch.com/t/596495/java/java/Code-Optimization-Part
CC-MAIN-2015-40
en
refinedweb
The second most distinguished fault, with almost as many special cases and discussion as the MustUnderstand fault I talked about last week, ended up not making it into the product. Some form of the fault is still in there of course, but it lost its mouthful of a name: the InvalidCardinalityAddressing fault.

An InvalidCardinalityAddressing fault occurs when you have more or fewer copies of a header than was expected. For instance, messages are expected to have at most one MessageID. Having twelve MessageIDs would be frowned upon. On the basic MessageHeaders collection, this is checked when accessing the Action, FaultTo, From, MessageID, ReplyTo, or To properties.

Detecting that there's an InvalidCardinalityAddressing problem only takes place when someone looks at the message and in particular looks at the offending header. Given our extensible protocol stack, there's no way to predict who the first person to look at a particular header will be. Therefore, this is one of the few faults that can potentially occur at any layer. Since there's no longer an InvalidCardinalityAddressingException to use in situations like this, you should be expecting a MessageHeaderException, one of the subtypes of ProtocolException, to come back instead.

public class MessageHeaderException : ProtocolException
{
    public MessageHeaderException();
    public MessageHeaderException(string message);
    protected MessageHeaderException(SerializationInfo info, StreamingContext context);
    public MessageHeaderException(string message, bool isDuplicate);
    public MessageHeaderException(string message, Exception innerException);
    public MessageHeaderException(string message, string headerName, string ns);
    public MessageHeaderException(string message, string headerName, string ns, bool isDuplicate);
    public MessageHeaderException(string message, string headerName, string ns, Exception innerException);
    public MessageHeaderException(string message, string headerName, string ns, bool isDuplicate, Exception innerException);

    public string HeaderName { get; }
    public string HeaderNamespace { get; }
    public bool IsDuplicate { get; }
}

This exception is also generated by the standard wire faults InvalidAddressingHeader (subcode InvalidCardinality) and MessageAddressingHeaderRequired. MessageAddressingHeaderRequired translates to IsDuplicate = false, while InvalidAddressingHeader.InvalidCardinality translates to IsDuplicate = true. This interpretation of duplicates is mandated by the SOAP binding in the WS-Addressing specification, so I'm hoping that everyone is doing it correctly. InvalidCardinality is otherwise vague from the name about whether it means too many or too few copies, so it would be easy to get this wrong. Knowing that also helps explain why using InvalidCardinalityAddressingException for both cases was so awkward.

Next time: Consuming Faults, Part 1
http://blogs.msdn.com/b/drnick/archive/2006/12/27/a-historical-awkwardly-named-fault.aspx
CC-MAIN-2015-40
en
refinedweb
After you install Oracle Identity Manager, you may have to perform certain postinstallation tasks before you can use the application. Some of the postinstallation tasks are optional, depending on your deployment and requirements. This chapter discusses the following topics:

- Starting Oracle Identity Manager
- Stopping Oracle Identity Manager
- Accessing the Administrative and User Console
- Using the Diagnostic Dashboard to Verify Installation
- Increasing the Memory and Setting the Java Option
- Changing Keystore Passwords
- Setting the Compiler Path for Adapter Compilation
- Removing Backup xlconfig.xml Files After Starting or Restarting (Optional)
- Configuring Proxies to Access Web Application URLs (Optional)
- Setting Log Levels (Optional)
- Enabling Single Sign-On (SSO) for Oracle Identity Manager (Optional)
- Configuring Custom Authentication (Optional)
- Protecting the JNDI Namespace (Optional)
- Deploying the SPML Web Service (Optional)
- Configuring Database-Based HTTP Session Failover (Optional)

Perform the following procedures if you upgrade from Oracle WebLogic Server release 10.3.0 to release 10.3.1 or later:

- Upgrading the weblogic.xml File
- Changing the Memory Settings
- Updating the JDK and JRockit Installation

Starting Oracle Identity Manager

This section describes how to start Oracle Identity Manager on Microsoft Windows and UNIX. To start Oracle Identity Manager:

1. Verify that your database is up and running.

2. Start Oracle Identity Manager by running one of the following scripts. Running the Oracle Identity Manager start script also starts Oracle WebLogic Server.

   If you are using Microsoft SQL Server, copy the JDBC driver from SQL2005_JDBC_DRIVER_HOME/sqljdbc_1.2/enu to the BEA_HOME/user_projects/domains/DOMAIN_NAME/lib directory and add the driver location to the CLASSPATH environment variable. For example:

   export CLASSPATH=/opt/sql_driver_location/sqljdbc.jar

3. In a clustered environment, start the Administrative Server by running the xlStartWLS.bat or xlStartWLS.sh script, and then start the managed servers in the cluster by using the WebLogic Administration Console if you are using WebLogic Node Manager. Otherwise, you can start the managed servers by using the DOMAIN_HOME/bin/xlStartManagedServer script as follows:

   xlStartManagedServer.cmd/sh MANAGEDSERVERNAME

   For example:

   xlStartManagedServer.cmd/sh OIM_SERVER1

Stopping Oracle Identity Manager

This section describes how to stop Oracle Identity Manager on Microsoft Windows and UNIX. To stop an Administrative Server or Managed Server:

1. Log in to the WebLogic Server Administration Console by using the following URL:

   http://hostname:port/console

   In this URL, hostname represents the name of the computer hosting the application server and port refers to the port on which the server is listening. The default port number for Oracle WebLogic Server is 7001.

2. In the Domain Structure tree on the left pane, expand Environment and then select Servers.

3. On the right pane, select the Control tab.

4. Select the check box for the server that you want to shut down.

5. From the Shutdown list (at the top or bottom of the table), select either When work completes or Force Shutdown Now.

   Note: In a clustered environment, first stop the Managed Servers and then stop the Administrative Server.

Increasing the Memory and Setting the Java Option

This section describes how to increase the JVM memory settings when Oracle Identity Manager is:

- Deployed on WebLogic Admin Server
- Deployed on WebLogic Managed Servers

Deployed on WebLogic Admin Server

When Oracle Identity Manager is deployed on the WebLogic admin server, to increase the JVM memory settings:

1. Use the WebLogic Server Administration Console to shut down the application server gracefully.

2. Navigate to WebLogic DOMAIN_HOME/bin.
   For example, C:\bea103\user_projects\domains\base_domain\bin or /opt/bea103/user_projects/domains/base_domain/bin.

3. Open xlStartWLS.cmd for Microsoft Windows. For UNIX, open xlStartWLS.sh.

4. For Microsoft Windows: before "SET JAVA_OPTIONS=....", add any one of the following lines depending on the type of JVM:

   For Sun and HP JVMs, add:

   set USER_MEM_ARGS=-Xms1280m -Xmx1280m -XX:PermSize=128m -XX:MaxPermSize=256m

   For JRockit JVMs, add:

   set USER_MEM_ARGS=-Xms1280m -Xmx1280m -XnoOpt

   For IBM JVMs, add:

   set USER_MEM_ARGS=-Xms1280m -Xmx1280m

5. For UNIX: before "JAVA_OPTIONS=...", add any one of the following lines depending on the type of JVM:

   For Sun and HP JVMs, add:

   USER_MEM_ARGS=-Xms1280m -Xmx1280m -XX:PermSize=128m -XX:MaxPermSize=256m

   For JRockit JVMs, add:

   USER_MEM_ARGS=-Xms1280m -Xmx1280m -XnoOpt

   For IBM JVMs, add:

   USER_MEM_ARGS=-Xms1280m -Xmx1280m

   Then add the following line:

   export USER_MEM_ARGS

Deployed on WebLogic Managed Servers

You can deploy Oracle Identity Manager on WebLogic managed servers. This is the only option for a clustered installation. Depending on how you start the managed server - by using the WebLogic admin console or Node Manager, or by running the scripts - changes must be made in different locations.

When managed servers are started by running the xlStartManagedServer script, repeat the steps for increasing the JVM memory settings when Oracle Identity Manager is deployed on the WebLogic admin server for the script DOMAIN_HOME/bin/xlStartManagedServer.sh or DOMAIN_HOME/bin/xlStartManagedServer.cmd. For more information, see "Deployed on WebLogic Admin Server".

When managed servers are started by using the Admin console or Node Manager, to increase the JVM memory settings:

1. Open the WebLogic Server Administration Console.

2. Click Environment, Servers, SERVER_NAME, for example OIM_SERVER1.

3. Click the Server Start tab.

4. Change the JVM memory values as shown in the procedure for when Oracle Identity Manager is deployed on the WebLogic admin server.

Changing Keystore Passwords

During installation, the passwords for the Oracle Identity Manager keystores are set to xellerate. The installer scripts and installation log contain this default password. It is strongly recommended that you change the keystore passwords for all production installations. To change the keystore passwords, you must change the storepass of .xlkeystore and the keypass of the xell entry in .xlkeystore. These two values must be identical.

Use the keytool utility to change the keystore passwords as follows:

1. Open a command prompt on the Oracle Identity Manager host computer.

2. Navigate to the OIM_HOME\xellerate\config directory.

3. Run the keytool utility with the following options to change the storepass:

   JAVA_HOME\jre\bin\keytool -storepasswd -new new_password -storepass xellerate -keystore .xlkeystore -storetype JKS

4. Run keytool with the following options to change the keypass of the xell entry in .xlkeystore:

   JAVA_HOME\jre\bin\keytool -keypasswd -alias xell -keypass xellerate -new new_password -keystore .xlkeystore -storepass new_password

   Note: Replace new_password with the same password entered in Step 3.

   Table 9-1 lists the options used in the preceding example of keytool usage.

5. In a text editor, open the OIM_HOME\xellerate\config\xlconfig.xml file.
Changing Keystore Passwords

During installation, the passwords for the Oracle Identity Manager keystores are set to xellerate. The Installer scripts and the installation log contain this default password, so it is strongly recommended that you change the keystore passwords for all production installations. To change the keystore passwords, you must change the storepass of .xlkeystore and the keypass of the xell entry in .xlkeystore. These two values must be identical.

Use the keytool utility to change the keystore passwords as follows:

1. Open a command prompt on the Oracle Identity Manager host computer.

2. Navigate to the OIM_HOME\xellerate\config directory.

3. Run the keytool utility with the following options to change the storepass:

JAVA_HOME\jre\bin\keytool -storepasswd -new new_password -storepass xellerate -keystore .xlkeystore -storetype JKS

4. Run keytool with the following options to change the keypass of the xell entry in .xlkeystore:

JAVA_HOME\jre\bin\keytool -keypasswd -alias xell -keypass xellerate -new new_password -keystore .xlkeystore -storepass new_password

Note: Replace new_password with the same password that you entered in Step 3.

Table 9-1 lists the options used in the preceding examples of keytool usage.

5. In a text editor, open the OIM_HOME\xellerate\config\xlconfig.xml file.

6. Edit the <xl-configuration>.<Security>.<XLPKIProvider>.<KeyStore> section, the <xl-configuration>.<Security>.<XLPKIProvider>.<Keys> section, and the <RMSecurity>.<KeyStore> section to specify the new keystore password. Change the password tag to encrypted="false" and enter the password. For example:

<Security>
  <XLPKIProvider>
    <KeyStore>
      <Location>.xlkeystore</Location>
      <Password encrypted="false">new_password</Password>
      <Type>JKS</Type>
      <Provider>sun.security.provider.Sun</Provider>
    </KeyStore>
    <Keys>
      <PrivateKey>
        <Alias>xell</Alias>
        <Password encrypted="false">new_password</Password>
      </PrivateKey>
    </Keys>
  </XLPKIProvider>
</Security>

<RMSecurity>
  <KeyStore>
    <Location>.xlkeystore</Location>
    <Password encrypted="false">new_password</Password>
    <Type>JKS</Type>
    <Provider>sun.security.provider.Sun</Provider>
  </KeyStore>
</RMSecurity>

Note: Change the <XLSymmetricProvider>.<KeyStore> section of the configuration file to update the password for the database keystore (.xldatabasekey).

7. Save and close the xlconfig.xml file.

Note: When you perform the procedures described in the "Starting Oracle Identity Manager" and "Stopping Oracle Identity Manager" sections, a backup of the configuration file is created. The configuration file with the new password is read in, and the password is encrypted in the file. If all of the preceding steps succeed, then you can delete the backup file. On UNIX, you might also want to clear the command history of the shell by using the following command:

history -c
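After changing the passwords, you can verify the result by listing the keystore contents with the new storepass; this verification step is a suggestion beyond the documented procedure, and new_password is the placeholder used above:

JAVA_HOME/jre/bin/keytool -list -v -keystore .xlkeystore -storepass new_password -storetype JKS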
Setting the Compiler Path for Adapter Compilation

To compile adapters or import Deployment Manager XML files that contain adapters, you must set the compiler path. To set the compiler path for adapter compilation, you must first install the Design Console. Refer to Chapter 8, "Installing and Configuring the Oracle Identity Manager Design Console" for instructions on installing the Design Console and then setting the compiler path for adapter compilation.

Removing Backup xlconfig.xml Files After Starting or Restarting (Optional)

Whenever you start or restart Oracle Identity Manager, a backup of the xlconfig.xml file is created. After you confirm that the server started correctly with the current configuration, you can remove the old backup files.

Configuring Proxies to Access Web Application URLs (Optional)

By default, Oracle Identity Manager uses the following Web application URLs. You may have to configure proxies to allow access to these URLs:

/xlWebApp
/xlScheduler
/Nexaweb
/spmlws

Setting Log Levels (Optional)

Oracle Identity Manager uses log4j for logging. Logging levels are configured in the logging properties file, OIM_HOME/xellerate/config/log.properties. The following is a list of the supported log levels, appearing in descending order of the amount of information logged; DEBUG logs the most information and FATAL logs the least:

DEBUG
INFO
WARN
ERROR
FATAL

By default, Oracle Identity Manager is configured to provide output at the WARN level, except for DDM, which is configured to provide output at the DEBUG level. You can change the log level universally for all components or for one or more individual components. Oracle Identity Manager components are listed in the XELLERATE section of the OIM_HOME\xellerate\config\log.properties file. For example:

log4j.logger.XELLERATE=WARN
log4j.logger.XELLERATE.DDM=DEBUG
log4j.logger.XELLERATE.ACCOUNTMANAGEMENT=DEBUG
log4j.logger.XELLERATE.SERVER=DEBUG
log4j.logger.XELLERATE.RESOURCEMANAGEMENT=DEBUG
log4j.logger.XELLERATE.REQUESTS=DEBUG
log4j.logger.XELLERATE.WORKFLOW=DEBUG
log4j.logger.XELLERATE.WEBAPP=DEBUG
log4j.logger.XELLERATE.SCHEDULER=DEBUG
log4j.logger.XELLERATE.SCHEDULER.Task=DEBUG
log4j.logger.XELLERATE.ADAPTERS=DEBUG
log4j.logger.XELLERATE.JAVACLIENT=DEBUG
log4j.logger.XELLERATE.POLICIES=DEBUG
log4j.logger.XELLERATE.RULES=DEBUG
log4j.logger.XELLERATE.DATABASE=DEBUG
log4j.logger.XELLERATE.APIS=DEBUG
log4j.logger.XELLERATE.OBJECTMANAGEMENT=DEBUG
log4j.logger.XELLERATE.JMS=DEBUG
log4j.logger.XELLERATE.REMOTEMANAGER=DEBUG
log4j.logger.XELLERATE.CACHEMANAGEMENT=DEBUG
log4j.logger.XELLERATE.ATTESTATION=DEBUG
log4j.logger.XELLERATE.AUDITOR=DEBUG

To set Oracle Identity Manager log levels, edit the logging properties in the OIM_HOME\xellerate\config\log.properties file as follows:

Note: For a clustered installation, perform this procedure on all the nodes of the cluster.

1. Open the OIM_HOME\xellerate\config\log.properties file in a text editor. This file contains a general setting for Oracle Identity Manager and specific settings for the components and modules that comprise Oracle Identity Manager. By default, Oracle Identity Manager is configured to provide output at the WARN level:

log4j.logger.XELLERATE=WARN

This is the general value for Oracle Identity Manager. Individual components and modules are listed after the general value in the properties file. You can set individual components and modules to different log levels; the log level for a specific component overrides the general setting.

2. Set the general value to the required log level.

3. Set other component log levels according to your requirements. Individual components or modules can have different log levels. For example, the following values set the log level for the Account Management module to INFO and the server to DEBUG, while the rest of Oracle Identity Manager remains at the WARN level:

log4j.logger.XELLERATE=WARN
log4j.logger.XELLERATE.ACCOUNTMANAGEMENT=INFO
log4j.logger.XELLERATE.SERVER=DEBUG

4. Save your changes.
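If you prefer to script the change on UNIX rather than edit the file by hand, a one-line edit such as the following raises the general level to INFO; the sed usage is a convenience sketch, not part of the product documentation, and it assumes GNU sed with OIM_HOME expanded to your actual installation path:

sed -i 's/^log4j.logger.XELLERATE=WARN/log4j.logger.XELLERATE=INFO/' OIM_HOME/xellerate/config/log.properties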
Enabling Single Sign-On (SSO) for Oracle Identity Manager (Optional)

The following procedure describes how to enable Single Sign-On with ASCII character logins. To enable Single Sign-On with non-ASCII character logins, use the same procedure, but include the additional configuration setting described in Step 4.

See Also: Oracle Identity Manager Best Practices Guide for more information about configuring Single Sign-On with Oracle Access Manager.

Note: Header names can contain only English-language characters, the dash character (-), and the underscore character (_). Oracle recommends that you do not use special characters or numeric characters in header names.

To enable Single Sign-On for Oracle Identity Manager:

1. Stop the application server gracefully.

2. In a text editor, open the OIM_HOME\xellerate\config\xlconfig.xml file.

3. Locate the following Single Sign-On configuration. The following are the default settings without Single Sign-On:

<web-client>
<Authentication>Default</Authentication>
<AuthHeader>REMOTE_USER</AuthHeader>
</web-client>

4. Edit the Single Sign-On configuration so that Authentication is set to SSO and AuthHeader names the header that your Single Sign-On system populates. For example, if the header name is SSO_HEADER_NAME:

<web-client>
<Authentication>SSO</Authentication>
<AuthHeader>SSO_HEADER_NAME</AuthHeader>
</web-client>

In addition, configure the application server and Web server to enable Single Sign-On by referring to the application server and Web server vendor documentation.

5. Restart the application server.

Configuring Custom Authentication (Optional)

This section describes how to use custom authentication solutions with Oracle Identity Manager.

Oracle Identity Manager deploys a Java Authentication and Authorization Service (JAAS) module to authenticate users. For unattended logins, which require offline message processing and scheduled task execution, Oracle Identity Manager uses signature-based authentication. Although you should use JAAS to handle signature-based authentication, you can create a custom authentication solution to handle standard authentication requests.

Note: The Oracle Identity Manager JAAS module must be deployed on the application server and must be the first authenticator invoked.

To enable custom authentication on Oracle WebLogic Server, you use the WebLogic Server Console, which allows you to add multiple authentication providers and invoke them in a specific order. The custom authentication provider that you specify handles standard authentication requests, and the Oracle Identity Manager JAAS module continues to handle signature-based authentication.

Note: The custom authentication provider that you specify must appear after the Oracle Identity Manager JAAS module in the WebLogic Server Console's list of authentication providers.

To specify a custom authentication provider for Oracle WebLogic Server:

1. Start the WebLogic Server Console and open the Authentication Providers page from domain/Security/Realms/realm name/Providers/Authentication.

2. On the Authentication Providers page, select Oracle Identity Manager Authenticator from the table at the bottom of the page. The Oracle Identity Manager Authenticator page is displayed.

3. On the Oracle Identity Manager Authenticator page, select the Allow Custom Authentication option on the Details tab, and then click Apply.

4. On the Authentication Providers page, configure a new authentication provider by clicking the Configure a new link for the custom authentication provider that you want to add.

5. When you finish configuring the new authentication provider, confirm that it is listed after Oracle Identity Manager Authenticator (which is the Oracle Identity Manager JAAS module) in the list of authentication providers. If the Oracle Identity Manager Authenticator is not listed above your custom authentication provider, then click Reorder the Configured Authentication Providers.
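As a quick sanity check of a header-based SSO setup, you can send a request that carries the configured header and confirm that the application responds as an authenticated session; this check is an illustration only, not part of the documented procedure, and the host name, port, header name, and user value are placeholders:

curl -H "SSO_HEADER_NAME: jdoe" http://hostname:7001/xlWebApp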
Protecting the JNDI Namespace (Optional)

When you specify a custom authentication solution, you should also protect the Java Naming and Directory Interface (JNDI) namespace to ensure that only designated users have permission to view resources. The primary purpose of protecting the JNDI namespace is to protect Oracle Identity Manager from any malicious applications that might be installed in the same application server instance. Even if no other applications, malicious or otherwise, are installed in the same application server instance as Oracle Identity Manager, you should protect your JNDI namespace as a routine security measure.

To protect your JNDI namespace and configure Oracle Identity Manager to access it:

1. From the WebLogic Server Console:

a. Click Environment, Servers, and then AdminServer.
b. Click the View JNDI Tree link.
c. On the page that is displayed, click the Security tab.
d. On the Security tab, click the Policies tab.
e. Click Add Conditions in the Policy Conditions section. The Choose a Predicate page is displayed.
f. From the Predicate List, select a predicate to create a security condition policy. For Oracle Identity Manager, select User from the list and click Next.
g. In the User Argument Name field, enter Internal or xelsysadm, based on your requirements, and click Add.
h. Click Finish.

Note: For a clustered installation, repeat these steps for all the available servers in the domain in which Oracle Identity Manager is installed.

2. Open the OIM_HOME/config/xlconfig.xml file in a text editor and add the following elements to the <Discovery> element:

<java.naming.security.principal>user</java.naming.security.principal>
<java.naming.security.credentials>user_password</java.naming.security.credentials>

For user, specify Internal. For user_password, enter the password for Internal.

To optionally encrypt the JNDI password, add an encrypted attribute that is set to true to the <java.naming.security.credentials> element, and assign the password as the element's value, as follows:

<java.naming.security.credentials encrypted="true">user_password</java.naming.security.credentials>

Note: To protect the plain-text password, it is strongly recommended that you add the encrypted="true" attribute.

3. Add the following elements to the <Scheduler> element:

<CustomProperties>
<org.quartz.dataSource.OracleDS.java.naming.security.principal>user</org.quartz.dataSource.OracleDS.java.naming.security.principal>
<org.quartz.dataSource.OracleDS.java.naming.security.credentials>user_password</org.quartz.dataSource.OracleDS.java.naming.security.credentials>
</CustomProperties>

4. Restart the server.

Deploying the SPML Web Service (Optional)

Organizations can have multiple provisioning systems that exchange information about the modification of user records. In addition, there can be applications that interact with multiple provisioning systems. The SPML Web Service provides a layer over Oracle Identity Manager that interprets SPML requests and converts them to Oracle Identity Manager calls.

The SPML Web Service is packaged in a deployable Enterprise Archive (EAR) file. This file is generated when you install Oracle Identity Manager. Because the EAR file is generated during installation, a separate batch file in the Oracle Identity Manager home directory runs the scripts that deploy the SPML Web Service on the application server on which Oracle Identity Manager is running. You must run this batch file to deploy the SPML Web Service. For more information, see Chapter 12, "The SPML Web Service" in Oracle Identity Manager Tools Reference.

Configuring Database-Based HTTP Session Failover (Optional)

Oracle Identity Manager on an Oracle WebLogic Server cluster is configured by default to provide memory-to-memory session replication and failover. However, it is possible to use database-based replication. To enable database-based replication:

1. Edit the WebLogic.profile file in the OIM_HOME/Profiles directory on the application server host, and change the replication mechanism from InMemory to Database.

2. Delete the OIM_HOME\xellerate\OIMApplications directory.

3. To patch the application, run the patch_weblogic script, which is located in the OIM_HOME\xellerate\setup directory.

Note: You must manually create the database tables required for holding the sessions. Refer to the Oracle WebLogic Server documentation for information about creating these tables.

It is possible to use other types of failover mechanisms in Oracle WebLogic Server. To use them, change the deployment descriptor (weblogic.xml) in the OIM_HOME/DDTemplates/xlWebApp directory, and then insert the settings for the Web application descriptor. After the change, run the patch_weblogic script to patch the existing application.

Note: If the deployment descriptor is changed later (for example, during an upgrade), then you must reapply the same changes to the deployment descriptor.
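For example, on UNIX the patch step after switching the replication mechanism might look like the following; the two arguments are the WebLogic administrator password and the Oracle Identity Manager database user password, shown here as placeholders:

OIM_HOME/xellerate/setup/patch_weblogic.sh WEBLOGIC_ADMIN_PASSWORD OIM_DB_USER_PASSWORD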
Upgrading the weblogic.xml File

If you upgrade from Oracle WebLogic Server release 10.3.0 to release 10.3.1 or later, then upgrade the weblogic.xml file as follows:

Note: In a clustered environment, perform this procedure on all the nodes.

1. Open the OIM_HOME/xellerate/DDTemplates/xlWebApp/weblogic.xml file in a text editor.

2. In this file, search for the following block of code:

<XDtConfig:ifConfigParamEquals ...>
<session-descriptor>
<persistent-store-type>replicated</persistent-store-type>
</session-descriptor>
</XDtConfig:ifConfigParamEquals>

Replace that block of code with the following:

<XDtConfig:ifConfigParamEquals ...>
<session-descriptor>
<persistent-store-type>replicated_if_clustered</persistent-store-type>
<cookie-http-only>false</cookie-http-only>
</session-descriptor>
</XDtConfig:ifConfigParamEquals>

3. Save and close the file.

4. Run the patch_weblogic script as follows:

OIM_HOME/xellerate/setup/patch_weblogic.sh (or patch_weblogic.cmd) WEBLOGIC_ADMIN_PASSWORD OIM_DB_USER_PASSWORD

Changing the Memory Settings

If you upgrade from Oracle WebLogic Server release 10.3.0 to release 10.3.1 or later, then change the memory settings as follows:

For Microsoft Windows:

1. In a text editor, open the DOMAIN_HOME\bin\setDomainEnv.cmd file.

2. In this file, search for the following line:

set MEM_MAX_PERM_SIZE_32BIT=-XX:MaxPermSize=128m

3. Change this line to the following:

set MEM_MAX_PERM_SIZE_32BIT=-XX:MaxPermSize=256m

4. Save and close the file.

5. Restart Oracle WebLogic Server.

For UNIX:

1. In a text editor, open the DOMAIN_HOME/bin/setDomainEnv.sh file.

2. In this file, search for the following lines:

MEM_MAX_PERM_SIZE_32BIT="-XX:MaxPermSize=128m"
export MEM_MAX_PERM_SIZE_32BIT

3. Change these lines to the following:

MEM_MAX_PERM_SIZE_32BIT="-XX:MaxPermSize=256m"
export MEM_MAX_PERM_SIZE_32BIT

4. Save and close the file.

5. Restart Oracle WebLogic Server.

Updating the JDK and JRockit Installation

If you upgrade from Oracle WebLogic Server release 10.3.0 to release 10.3.1 or later, then update the JDK and JRockit installation as follows:

1. Navigate to the DOMAIN_HOME/bin directory.

Sample path for Microsoft Windows: C:\bea103\user_projects\domains\base_domain\bin
Sample path for UNIX: /opt/bea103/user_projects/domains/base_domain/bin

2. Open one of the following files:

For Microsoft Windows: xlStartWLS.cmd
For UNIX: xlStartWLS.sh

3. Set the Java memory options as follows:

For Microsoft Windows, before the SET JAVA_OPTIONS=... line, add any one of the following lines, depending on the type of JVM:

For Sun and HP JVMs, add the following line:
set USER_MEM_ARGS=-Xms1280m -Xmx1280m -XX:PermSize=128m -XX:MaxPermSize=256m

For JRockit JVMs, add the following line:
set USER_MEM_ARGS=-Xms1280m -Xmx1280m -XnoOpt

For UNIX, before the JAVA_OPTIONS=... line, add any one of the following lines, depending on the type of JVM:

For Sun and HP JVMs, add the following line:
USER_MEM_ARGS=-Xms1280m -Xmx1280m -XX:PermSize=128m -XX:MaxPermSize=256m

For JRockit JVMs, add the following line:
USER_MEM_ARGS=-Xms1280m -Xmx1280m -XnoOpt

4. Start Oracle WebLogic Server by using xlStartWLS.cmd on Microsoft Windows or xlStartWLS.sh on UNIX.

5. Log in to the Oracle WebLogic Server Admin console by using WebLogic credentials.

6. Select Lock and Edit.

7. Click Environment, Servers, and then Admin Server.

8. On the Server Start tab, provide the following inputs about the Java home directory:

JDK: jdk160_14_R27.6.5-32
JRockit: jrockit_160_14_R27.6.5-32
Java Vendor: Enter either Sun or BEA.
BEA Home: Enter the full path of the ORACLE_HOME directory in which you installed Oracle WebLogic Server.
WebLogic user ID and password.

9. Select Activate Changes.

10. Restart the server.
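To confirm which JDK the domain scripts will pick up after the update, you can source the domain environment on UNIX and inspect JAVA_HOME; this check is a convenience sketch, and the domain path is the illustrative sample used above:

cd /opt/bea103/user_projects/domains/base_domain/bin
# Source the domain environment, then print the resolved JDK location
. ./setDomainEnv.sh
echo $JAVA_HOME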