Next, we will create our own custom React hook that will manage the photos for the gallery. Start with the following imports:

import { useState } from 'react';
import { isPlatform } from '@ionic/react';
import { Camera, CameraResultType, CameraSource, Photo } from '@capacitor/camera';
import { Filesystem, Directory } from '@capacitor/filesystem';
import { Storage } from '@capacitor/storage';
import { Capacitor } from '@capacitor/core';
Next, create a function named usePhotoGallery:

export function usePhotoGallery() {
  const takePhoto = async () => {
    const photo = await Camera.getPhoto({
      resultType: CameraResultType.Uri,
      source: CameraSource.Camera,
      quality: 100,
    });
  };

  return {
    takePhoto,
  };
}
Our usePhotoGallery hook exposes a method called takePhoto, which in turn calls into Capacitor. We still need to display the photo within our app and save it for future access.
Displaying Photos
First we will create a new type to define our Photo, which will hold specific metadata. Add the following UserPhoto interface to the usePhotoGallery.ts file, somewhere outside of the main function:
export interface UserPhoto {
  filepath: string;
  webviewPath?: string;
}
Back at the top of the function (right after the opening of usePhotoGallery), we will define a state variable to store the array of each photo captured with the Camera:

const [photos, setPhotos] = useState<UserPhoto[]>([]);

When the camera is done taking a picture, the resulting Photo can then be added to this array.
Source: https://ionicframework.com/docs/react/your-first-app/taking-photos
Opened 12 years ago
Closed 22 months ago
#10202 closed enhancement (wontfix)
Use pkg-config --define-variable option to set ${SAGE_ROOT} anytime pkg-config is invoked
Description (last modified by )
Currently we rewrite all of the local/lib/pkgconfig/*.pc files every time we move locations. Instead, we should just use the --define-variable option of pkg-config to define a ${SAGE_ROOT} variable. Then we don't have to keep rewriting files every time.
INSTRUCTIONS FOR TESTING:
- Download a fresh sage-4.7.alpha5.tar source archive
- Extract and delete the spkg/standard/sage_scripts-4.7.alpha5.spkg
- Put into the spkg/standard/ directory
- Make Sage
Note to release manager: you have to edit the sage_scripts spkg-install to copy over the pkg-config shell script; see my sage_scripts spkg above.
P.S. Why is the sage_scripts spkg-install not under version control?!?
Attachments (1)
Change History (42)
comment:1 Changed 12 years ago by
- Cc drkirkby kcrisman added
comment:2 Changed 12 years ago by
comment:3 follow-up: ↓ 8 Changed 12 years ago by
Oh, may it be the case that sage-location is also called (first) during the build (through sage-sage)?
comment:4 follow-up: ↓.
I think we ought to just post new versions of the right spkgs, and not modify the pkgconfig files in sage-location.
comment:5 in reply to: ↑ 4 ; follow-up: ↓.
Possible. Regarding the number of spkgs affected, rather "no" (not now)...
I think we ought to just post new versions of the right spkgs, and not modify the pkgconfig files in sage-location.
We should just check there if the files are already patched.
I also suggested elsewhere to make it more verbose.
comment:6 in reply to: ↑ 5 Changed 12 years ago by
comment:7 Changed 12 years ago by
- Owner changed from tbd to leif
comment:8 in reply to: ↑ 3 Changed 12 years ago by
Oh, may it be the case that sage-location is also called (first) during the build (through sage-sage)?

Yep:
$ ls -rtl local/lib/pkgconfig/*.pc
lrwxrwxrwx 1 leif leif   11 Nov  1 14:21 local/lib/pkgconfig/libpng.pc -> libpng12.pc
-rw-r--r-- 1 leif leif  310 Nov  1 14:23 local/lib/pkgconfig/zlib.pc
-rw-r--r-- 1 leif leif  311 Nov  1 14:23 local/lib/pkgconfig/sqlite3.pc
-rw-r--r-- 1 leif leif  947 Nov  1 14:23 local/lib/pkgconfig/opencdk.pc
-rw-r--r-- 1 leif leif  313 Nov  1 14:23 local/lib/pkgconfig/libpng12.pc
-rw-r--r-- 1 leif leif  973 Nov  1 14:23 local/lib/pkgconfig/gnutls.pc
-rw-r--r-- 1 leif leif 1153 Nov  1 14:23 local/lib/pkgconfig/gnutls-extra.pc
-rw-r--r-- 1 leif leif  324 Nov  1 14:23 local/lib/pkgconfig/freetype2.pc
-rw-r--r-- 1 leif leif  303 Nov  1 14:23 local/lib/pkgconfig/bdw-gc.pc
-rw-r--r-- 1 leif leif  296 Nov  1 14:34 local/lib/pkgconfig/pynac.pc
-rw-r--r-- 1 leif leif  348 Nov  1 14:47 local/lib/pkgconfig/gsl.pc
-rw-r--r-- 1 leif leif  239 Nov  1 14:51 local/lib/pkgconfig/libR.pc

$ ls -l local/lib/sage-current-location.txt
-rw-r--r-- 1 leif leif 24 Nov  1 14:23 local/lib/sage-current-location.txt
comment:9 Changed 12 years ago by
- Cc jdemeyer added
- Priority changed from major to critical
comment:10 follow-up: ↓ 11 Changed 12 years ago by
This ticket is getting confusing. When I opened it, the goal was to modify the spkg-install file in the freetype spkg so that after installation, the freetype2.pc file would be something like:
prefix=${SAGE_LOCAL}
exec_prefix=${prefix}
libdir=${exec_prefix}/lib
includedir=${prefix}/include

Name: FreeType 2
Description: A free, high-quality, and portable font engine.
Version: 9.16.3
Requires:
Libs: -L${libdir} -lfreetype -lz
Cflags: -I${includedir}/freetype2 -I${includedir}
comment:11 in reply to: ↑ 10 Changed 12 years ago by
This ticket is getting confusing. When I opened it, the goal was to modify the spkg-install file in the freetype spkg [...]
I hesitated to change its title... ;-)
The modification of the freetype spkg should IMHO be addressed at #9523 as well.
The situation is much worse since the logic of patching / "initializing" the .pc files exactly once (and all at the same time) is currently completely broken.
I'll write a shell script to (re-)initialize them conditionally.
comment:12 Changed 11 years ago by
I'm curious where we are on this. I haven't worked on it at all. leif, you were going to do something (see your last comment). I agree that the pkg files should be initialized conditionally. Did you write a script of some sort to check to see if the files had been already modified?
comment:13 Changed 11 years ago by
This last problem (initializing pc files only once) might be causing a problem behind the scenes:
$ pkg-config --cflags libpng
Duplicate definition of variable 'SAGE_ROOT' in '/Users/grout/sage/local/lib/pkgconfig/libpng.pc'
So I'm not sure that our pkg-config works right now. At least pkg-config --exists libpng still works.
comment:14 Changed 11 years ago by
One other note. I'm not sure that the basic idea of having ${SAGE_LOCAL} as a variable inside the .pc files works. I just changed libpng.pc to not define ${SAGE_ROOT}, but hopefully just take it out of the environment. However, when I run pkg-config, it says:

$ pkg-config --cflags libpng
Variable 'SAGE_ROOT' not defined in '/Users/grout/sage/local/lib/pkgconfig/libpng.pc'
So I think the main idea of this ticket is doomed.
comment:15 Changed 11 years ago by
I take that back about this ticket being doomed. In fact, the solution is *much* easier. Instead of defining ${SAGE_ROOT} at the top of each .pc file, we should wrap the system pkg-config using the --define-variable option:

--define-variable=VARIABLENAME=VARIABLEVALUE
    This sets a global value for a variable, overriding the value in any
    files. Most packages define the variable "prefix", for example, so
    you can say:

    $ pkg-config --print-errors --define-variable=prefix=/foo \
        --variable=prefix glib-2.0
    /foo
So if we have a pkg-config in $SAGE_ROOT/local/bin which is something like:

#!/bin/sh
pkg-config --define-variable=SAGE_ROOT=<whatever> $*
then we don't have to keep mucking with the .pc files every time our location moves. We just need to do a simple string substitution the first time Sage is built or the first time any package makes a .pc file.
This solves both the main problem of this ticket and the problem with double-initialization.
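The scheme just described (rewrite each .pc file once so it uses a literal ${SAGE_ROOT}, then supply the real path at call time via --define-variable) can be sketched in Python. This is an illustrative sketch under my own naming, not Sage's actual sage-location code; the function name and paths are placeholders:

```python
import os

def make_pc_relocatable(pc_path, sage_root):
    """One-time rewrite: replace the hardcoded build path in a .pc file
    with the literal ${SAGE_ROOT} variable reference.

    After this, a wrapper can supply the real location at call time:
    pkg-config --define-variable=SAGE_ROOT=/current/location ...
    """
    with open(pc_path) as f:
        text = f.read()
    # Only substitute once; an already-relocatable file is left alone,
    # which avoids the double-initialization problem mentioned above.
    if sage_root in text:
        text = text.replace(sage_root, "${SAGE_ROOT}")
        with open(pc_path, "w") as f:
            f.write(text)
```

A wrapper script would then invoke pkg-config --define-variable=SAGE_ROOT="$SAGE_ROOT" "$@", so the .pc files never need touching again when the tree moves.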
comment:16 Changed 11 years ago by
Another very automatic solution for us: patch pkg-config:
The advantage to patching and distributing our own pkg-config is that then we can count on pkg-config, even on osx systems that don't come with it by default.
comment:17 Changed 11 years ago by
Only one MB. Comes with glib, which is perhaps half the download?
Changed 11 years ago by
apply to sage scripts repository
comment:18 Changed 11 years ago by
- Description modified (diff)
- Summary changed from Make freetype have ${SAGE_LOCAL} as its prefix in the pkgconfig file to Use pkg-config --define-variable option to set ${SAGE_ROOT} anytime pkg-config is invoked
Actually, that path to pkg-config won't solve our problem, since it involves modifying each of the pkg-config files anyway, and then things depend on the package, etc. So I don't think anymore that it would be better to ship pkg-config.
I changed the description to reflect the patch's new solution.
comment:19 Changed 11 years ago by
I'm going to set this to "needs review" as it needs testing on a lot of systems. My guess is that it needs to be tested by making a new sage_scripts spkg, and then a system needs to be built fresh with that new spkg.
comment:20 Changed 11 years ago by
So what happens if the computer doesn't have pkg-config?
comment:21 Changed 11 years ago by
Nothing. Same thing that happens now.
comment:22 Changed 11 years ago by
comment:23 Changed 11 years ago by
comment:24 Changed 11 years ago by
- Reviewers set to Leif Leonhardy
Finally (and again) returning to this...
- We don't need to "wrap" pkg-config (and if we would, one should use "$@" instead of $*).
- We don't have to --define-variable=SAGE_ROOT=${SAGE_ROOT}, provided we don't put any [hardcoded] definitions of SAGE_ROOT (or SAGE_LOCAL) into the .pc files, which I more than once said is totally superfluous if we for example use (literally) prefix=${SAGE_ROOT}/local:

(This is from local/bin/sage-location, Sage 4.7.1.rc0.)

def initialize_pkgconfig_files():
    """
    Insert a sage_local variable in each pkg_config file and replace them
    to make the paths portable.
    """
    LIB = os.path.join(os.path.abspath(SAGE_ROOT), 'local', 'lib')
    PKG = os.path.join(LIB, 'pkgconfig')
    for name in os.listdir(PKG):
        filename = os.path.join(PKG, name)
        if os.path.splitext(filename)[1] == ".pc":
            with open(filename) as file:
                config = file.read()
            new_config = config.replace(os.path.abspath(SAGE_ROOT), "${SAGE_ROOT}")
            new_config = 'SAGE_ROOT=%s\n' % os.path.abspath(SAGE_ROOT) + new_config
            with open(filename, 'w') as file:
                file.write(new_config)
There we have to remove the second assignment to new_config. The function update_pkgconfig_files() should be deleted (or no longer used), since it just updates SAGE_ROOT=... definitions with a new hardcoded path for SAGE_ROOT.
- We may still have to fix a few .pc files in their corresponding spkg-install files. I did this for R / libR.pc (cf. #9668) in a new r-2.10.1.p6 spkg (which I haven't yet uploaded though). For most .pc files defining prefix=${SAGE_ROOT}/local (see above) is sufficient, and initialize_pkgconfig_files() does this, although it could do better by explicitly looking for ^prefix=.... (Also, we could do the same -- more reliably -- from some shell script, e.g. sage-spkg, using sed, since Python is not necessarily available during the build. If we do, we no longer have to deal with .pc files in sage-location at all. sage-spkg already calls sage-make_relative after every spkg installation, so this would fit well.)
Jason's new patch to sage-location already does
- remove the second assignment to new_config, which defined SAGE_ROOT to the current $SAGE_ROOT (i.e., a hardcoded path),
- remove the no longer necessary function update_pkgconfig_files().

(The note on) using --define-variable=SAGE_ROOT=${SAGE_ROOT} is not necessary though, see above.
No offense btw.
comment:25 Changed 11 years ago by
P.S.: I currently have no idea where the shell script I wrote is... 8/ (IIRC it consisted of more than a single sed line, to be more robust and address some corner cases.)
comment:26 Changed 11 years ago by
Unfortunately my reasoning is now spread across a few tickets... 8/
First of all, we don't have to wrap pkg-config, which would not necessarily be available during the whole build (including upgrades), and doing so might confuse [dumb] build scripts testing for its presence on systems that lack a "real" pkg-config.
I now seem to recall why using ${SAGE_ROOT} without explicitly defining it in the .pc files "worked for me" last year -- I was using $${SAGE_ROOT}, which postpones expansion to the shell, like it does in Makefiles.
But there is an even safer way (cf. this comment at #11687), namely to define the environment variable PKG_CONFIG_TOP_BUILD_DIR in sage-env and use the corresponding .pc-file variable pc_top_builddir (instead of SAGE_ROOT) in all .pc files, e.g.

# We set PKG_CONFIG_TOP_BUILD_DIR to $SAGE_ROOT in sage-env,
# such that ${pc_top_builddir} always yields this.
prefix=${pc_top_builddir}/local
...
# All further directory definitions are relative to ${pc_top_builddir}
# or ${prefix}.
Substitutions in ("initialization" of) .pc files should IMHO be performed from or better directly within sage-spkg (or an spkg's spkg-install if necessary), of course not "fixing" individual files more than once. We then can drop the pkg-config-related code in sage-location completely, or otherwise have to make sure it is consistent with the other one, i.e. doesn't mess things up afterwards (as it currently does).
We also have to fix setting PKG_CONFIG_PATH in sage-env; see #11687.
comment:27 in reply to: ↑ description Changed 11 years ago by
comment:28 in reply to: ↑ description Changed 11 years ago by
comment:29 Changed 11 years ago by
- Status changed from needs_review to needs_info
This ticket IMHO needs work anyway (see above), and its title should be changed (or I can open another one with a different approach as pointed out above).
comment:30 Changed 11 years ago by
The one disadvantage I can see to the PC_TOP_BUILD_DIR solution is that if the user is already using that for some other purpose in their system-installed packages. Using $${SAGE_ROOT} instead avoids this problem.
At any rate, I'm glad you're looking into this so carefully!
comment:31 follow-up: ↓ 32 Changed 11 years ago by
It also seems like using PC_TOP_BUILD_DIR in this way goes against its purpose. That would confuse users, I think.
comment:32 in reply to: ↑ 31 Changed 11 years ago by
It also seems like using PC_TOP_BUILD_DIR in this way goes against its purpose. That would confuse users, I think.
Well, it's just another situation where its application is useful (and which is quite similar to its original intent). The way Sage uses its own prefix / subtree of typical system directories ($SAGE_ROOT/local/{bin,lib,include}) is just not very common, so the developer of pkg-config didn't have it in mind.
To make it less obscure (to people actually looking at the .pc files), we could use

SAGE_ROOT=${pc_top_builddir}  # the latter is set "dynamically" by sage-env
prefix=${SAGE_ROOT}/local
...
# (further references to SAGE_ROOT rather than pc_top_builddir)
(Adding comments to the .pc files isn't bad anyway; I used to also add a comment telling when the file was [last] modified by Sage, e.g. for debugging.)
W.r.t. users already using that variable (pc_top_builddir) for other purposes: I don't think anybody will have it in his/her system-wide .pc files, but at least in case PKG_CONFIG_TOP_BUILD_DIR is already set, we could issue a warning.
Using $${SAGE_ROOT} instead isn't that safe, since programs using parameters supplied by pkg-config do not necessarily use the shell to execute commands with these parameters (though they usually do; I'm currently not aware of any counter-example), in which case ${SAGE_ROOT}/... would get passed verbatim to the commands, e.g. to the compiler or the linker.
comment:33 Changed 9 years ago by
- Milestone changed from sage-5.11 to sage-5.12
comment:34 Changed 9 years ago by
- Milestone changed from sage-6.1 to sage-6.2
comment:35 Changed 8 years ago by
- Milestone changed from sage-6.2 to sage-6.3
comment:36 Changed 8 years ago by
- Milestone changed from sage-6.3 to sage-6.4
comment:37 Changed 2 years ago by
- Cc jhpalmieri dimpase added
- Milestone changed from sage-6.4 to sage-duplicate/invalid/wontfix
- Status changed from needs_info to needs_review
This is outdated and should be closed
comment:38 Changed 2 years ago by
- Reviewers Leif Leonhardy deleted
comment:39 Changed 2 years ago by
- Priority changed from critical to major
comment:40 Changed 2 years ago by
- Reviewers set to Dima Pasechnik
- Status changed from needs_review to positive_review
comment:41 Changed 22 months ago by
- Resolution set to wontfix
- Status changed from positive_review to closed
Slightly abusing this ticket (the G**gle groups are less suited for markup), here's my current situation:

(Note that the egrep patterns are a bit "redundant", to be partially also used with sed. ;-) )
So briefly, in my Sage 4.6 final (built from scratch), gsl.pc and pynac.pc lack a SAGE_ROOT=... line, while libpng.pc has in addition a superfluous line SAGE_ROOT=${SAGE_ROOT}, and libR.pc lacks a definition of both prefix and SAGE_ROOT (but the file is sane, i.e. contains the proper hard-coded path(s) in rhome and rincludedir).
I really wonder what might have caused the differences between 4.6.rc0 and 4.6 final. Do these files get modified by other scripts than sage-location?
The redundant SAGE_ROOT=${SAGE_ROOT} is clearly a "bug" in #9210's initialize_pkgconfig_files(), since it doesn't check if a .pc file is just a link to another one (which is the case for libpng.pc). Testing for an already existing SAGE_ROOT=... would of course be possible as well.
Source: https://trac.sagemath.org/ticket/10202
Incorrectly nested manifest elements, which previous versions of AAPT would accept with at most a warning, now result in an error. For example, consider the following sample:
<manifest xmlns: ...>
    <application ...>
        <activity android: ...>
            <intent-filter>
                <action android: ... />
                <category android: ... />
            </intent-filter>
            <action android: ... />
        </activity>
    </application>
</manifest>
Previous versions of AAPT would simply ignore the misplaced
<action> tag.
However, with AAPT2, you get the following error:
AndroidManifest.xml:15: error: unknown element <action> found.
To resolve the issue, make sure your manifest elements are nested correctly. For more information, read Manifest file structure.
Declaration of resources
You can no longer indicate the type of a resource from the name attribute.
For example, the following sample incorrectly declares an attr resource item:
<style name="foo" parent="bar"> <item name="attr/my_attr">@color/pink</item> </style>
Declaring a resource type this way results in the following build error:
Error: style attribute 'attr/attr/my_attr (aka my.package:attr/attr/my_attr)' not found.
To resolve this error, explicitly declare the type using type="attr":
<style name="foo" parent="bar"> <item type="attr" name="my_attr">@color/pink</item> </style>
Additionally, when declaring a <style> element, its parent must also be a style resource type. Otherwise, you get an error similar to the following:
Error: (...) invalid resource type 'attr' for parent of style
Android namespace with ForegroundLinearLayout
ForegroundLinearLayout includes three attributes: foregroundInsidePadding, android:foreground, and android:foregroundGravity.
Note that foregroundInsidePadding is not included in the android namespace, unlike the other two attributes.
In previous versions of AAPT, the compiler would silently ignore foregroundInsidePadding attributes when you defined them with the android namespace. When using AAPT2, the compiler catches this early and throws the following build error:
Error: (...) resource android:attr/foregroundInsidePadding is private
To resolve this issue, simply replace android:foregroundInsidePadding with foregroundInsidePadding.
Incorrect use of @ resource reference symbols
AAPT2 throws build errors when you omit or incorrectly place resource reference symbols (@). For example, consider if you omit the symbol when specifying a style attribute, as shown below:
<style name="AppTheme" parent="Theme.AppCompat.Light.DarkActionBar">
    ...
    <!-- Note the missing '@' symbol when specifying the resource type. -->
    <item name="colorPrimary">color/colorPrimary</item>
</style>
When building the module, AAPT2 throws the following build error:
ERROR: expected color but got (raw string) color/colorPrimary
Additionally, consider if you incorrectly include the symbol when accessing a resource from the android namespace, as shown below:

...
<!-- When referencing resources from the 'android' namespace, omit the '@' symbol. -->
<item name="@android:windowEnterAnimation"/>
When building the module, AAPT2 throws a build error.
Source: https://developer.android.com/studio/command-line/aapt2?authuser=2
Ever feel too lazy to get up to turn off THAT one lamp? The lamp that is essential but also irritates you the most; the one you race to bed like hell after turning off. Well, fear not, people: I have the perfect solution for you. Clap-O-Switch, a switch you can control by clapping twice. So no sprinting to bed with the ghost; just clap and sleep.

Components Required
- ATtiny85 microcontroller / Arduino can also be used
- Sound sensor
- 5 volts relay
- Three pin AC socket
- Three pin C14 input socket with cable
- PCB board
- Any colour LEDs x2
- A project box
For the project I have used a basic ATtiny85 microcontroller as the brain of the project. A sound sensor is used to sense the clap intensity. An algorithm runs on the microcontroller to sense the particular type of clap. It is then used to actuate a relay, which in turn activates the load (bulb).
Before soldering, test the schematics and the code on the breadboard to avoid unnecessary stress.

Code
I have attached the code below with the project, but in this section I will describe the code in detail.
#include <ResponsiveAnalogRead.h>
For this project I have included the "ResponsiveAnalogRead" library.
ResponsiveAnalogRead is an Arduino library for eliminating noise in analogRead inputs without decreasing responsiveness. You can read more about this library here:
#define sound_pin 5  // use any ADC pin for sound sensor
#define relay_pin 3  // can use any pin for digital logic
Here you can define the pin numbers where the sensor and relay are attached. Use only an ADC pin for the sound sensor, as we need an analog value. The relay can be connected to any pin that gives us a digital output.
ResponsiveAnalogRead sound(sound_pin, true);
An object 'sound' of class 'ResponsiveAnalogRead' is created here. This has been done so we could access various functions in the 'ResponsiveAnalogRead' library.
void loop() {
  sound.update();                  // current sound value is updated
  soundValue = sound.getValue();
  currentNoiseTime = millis();
  Serial.println(soundValue);
  if (soundValue > threshold) {    // if there is currently a noise
    if ((currentNoiseTime > lastNoiseTime + 250) &&   // debounce: a noise spanning several loop cycles counts as a single noise
        (lastSoundValue < 400) &&                     // if it was silent before
        (currentNoiseTime < lastNoiseTime + 750) &&   // if current clap is less than 0.75 seconds after the first clap
        (currentNoiseTime > lastLightChange + 1000)   // to avoid taking a third clap as part of a pattern
       ) {
      relayStatus = !relayStatus;
      digitalWrite(relay_pin, relayStatus);
      delay(300);
      lastLightChange = currentNoiseTime;
    }
    lastNoiseTime = currentNoiseTime;
  }
  lastSoundValue = sound.getValue();
}
This is the algorithm for sensing exactly two claps and changing the logic level accordingly. The sound value is stored in the variable 'soundValue' and updated every loop. If 'soundValue' exceeds the threshold, execution enters the outer if block, which then checks four timing conditions for the clap (explained in the code comments). When all conditions are satisfied, the relay status toggles. A delay of 300 milliseconds is added so that the relay won't make a clicking noise.
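The timing logic can be prototyped off-device. Below is a Python sketch of the same double-clap state machine, driven by (time_ms, sound_value) samples instead of millis()/analogRead; the function name, default threshold, and sample format are my own choices for illustration:

```python
def clap_toggle(samples, threshold=400, relay=False):
    """Replay the sketch's double-clap logic over (time_ms, value) samples.

    A toggle fires on a clap when: the previous sample was quiet, between
    250 ms and 750 ms have passed since the last noise, and at least
    1000 ms have passed since the last toggle (so a third clap in the
    same burst is ignored).  Returns the final relay state.
    """
    NEG = -10**9                       # "long ago" sentinels
    last_noise = last_change = NEG
    last_value = 0
    for t, v in samples:
        if v > threshold:
            if (t > last_noise + 250 and last_value < threshold
                    and t < last_noise + 750 and t > last_change + 1000):
                relay = not relay      # second clap of a pair: toggle
                last_change = t
            last_noise = t
        last_value = v
    return relay
```

Two claps about 400 ms apart toggle the relay; a lone clap, or a third clap right after a toggle, does not.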
The algorithm is taken from one of the Instructables projects. I'll post the link once I find it.

Assemble
I have used a normal project ABS box for my project. I have used 3 pin AC connector C14 as the input socket.
For the output where the load/lamp is connected I have used a normal 3 pin AC female socket where we can easily connect anything we want.
I have also mounted two led in the box. The red led represents if the box(Clap-O-Switch) is ON or OFF. The green led represents the condition of the output.
I have assembled the circuit in the ABS project box. Both AC grounds are connected for protection. The ATtiny is powered by a 5 V supply, obtained from an adapter circuit that converts 230 V AC to 5 V DC.

Working
Source: https://www.hackster.io/Rushabh/clap-o-switch-4fb036
SML, indexing the men from 0:
fun j(n,k,i) = (if i=1 then k-1 else k+j(n-1,k,i-1)) mod n;
fun J(n,k) = j(n,k,n);
Admin
Josephus' Circle, AKA Eeny, meeny, miny, moe. The solution is to argue about how many words are in the rhyme and whether the person selected at the end is "it" or "not it".
Admin
$ cat jos.c
#include <stdio.h>
#include <stdlib.h>

/* f(n,k) = (f(n-1,k) + k) mod n
   f(1,k) = 0 */
static int f (int n, int k)
{
    if (n == 1)
        return 0;
    return (f (n-1, k) + k) % n;
}

int main (int argc, const char **argv)
{
    int n, k;
    if (argc != 3) {
        fprintf (stderr, "Usage: %s <N> <K>\n", argv[0]);
        return -1;
    }

    n = atoi (argv[1]);
    k = atoi (argv[2]);

    fprintf (stdout, "With %d soldiers and killing every %d'th, "
             "the safe spot is position %d\n", n, k, f (n, k) + 1);
    return 0;
}
$ gcc -g -O2 -W -Wall -Wextra jos.c -o jos
$ ./jos.exe 12 3 With 12 soldiers and killing every 3'th, the safe spot is position 10
$ ./jos.exe 40 3 With 40 soldiers and killing every 3'th, the safe spot is position 28
$
Admin
Ha, now we have a way to stop people from posting "Frist!!1". Anyways, random solution here
ps: TRWTF is the forum software erroring out because I forgot the /code tag.
Admin
Admin
This is pretty brain dead, but coffee hasn't done its job yet:
Admin
I think this is more about being the guy who decides where the "top of the circle" is than anything else.
Admin
I doubt I could find the safe spot very quickly, but I've always been good at finding the G spot.
Admin
Fat lotta good that'll do you in a cave full of suicidal men...
Admin
An ugly script to handle this (via PHP)
I couldn't think of a better name for the function.
Admin
Consider the function J(n,k) which is the position of the surviving soldier relative to the current starting point of the count, where n is the number of remaining soldiers and k is the interval (3 in the example). (Note that counting 3 at a time actually means you're only skipping 2 at a time.) Note that "position" is counting only currently alive soldiers.
If n=1 then clearly we are done, and J(1,k) = 0 for any k. Now consider the case of n>1. We have something like this:
Where the caret marks where the count starts. Now at the next step (assuming again k=3)
So now suppose that J(n-1,k) = x. That means that the surviving soldier is x positions after the caret in the second diagram. But the caret in the second diagram is 3 positions after the caret in the first diagram, so the surviving soldier is x+3 positions after the caret in the first diagram. Of course you have to take the mod of that by n to account for wraparound.
Thus our formula is (code in Python):
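A minimal Python version of the recurrence just derived (0-based positions; this is my reconstruction, not necessarily the poster's exact code):

```python
def J(n, k):
    """0-based position of the survivor among n soldiers, counting k at
    a time: J(1,k) = 0 and J(n,k) = (J(n-1,k) + k) % n."""
    if n == 1:
        return 0
    return (J(n - 1, k) + k) % n
```

With the example of 12 soldiers and k = 3, J(12, 3) returns 9, i.e. position 10 when counting from 1, matching the C program's output earlier in the thread.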
Note that because of the way the recursion is set up, at no point do you have to keep track of which positions of soldiers are dead already. And it is an O(n) algorithm.
Admin
This is sexy.
Admin
Admin
If I'm remembering right, Josephus was actually one of the last two standing, upon which he said to the other guy, as they were surrounded by corpses, "Maybe surrender isn't so shameful after all?" Not sure if the algorithm for figuring out how to be one of the last two is significantly easier.
Admin
A not as ugly as the PHP script in CF
<cfparam name="url.soldiers" type="numeric" default="12">
<cfparam name="url.skip" type="numeric" default="3">
<cfset circle="">
<cfloop from=1 to=#url.soldiers# index=x>
    <cfset circle=listAppend(circle,x)>
</cfloop>
<cfset x=1>
<cfloop condition="listLen(circle) gt 1">
    <cfset x+=url.skip-1>
    <cfif x gt listLen(circle)>
        <cfset x=x mod listLen(circle)>
    </cfif>
    <cfoutput>Deleting man at position #listGetAt(circle,x)#</cfoutput>
    <cfset circle=listDeleteAt(circle, x)>
</cfloop>
<cfoutput>The last name standing was in position #circle#</cfoutput>
Admin
A little bit of scala:
def josephus(soldiers: Int, toSkip: Int): Int = {
  josephus(soldiers, toSkip, 0)
}

def josephus(soldiers: Int, toSkip: Int, first: Int): Int = {
  if (soldiers == 1) {
    0
  } else {
    val position = josephus(soldiers - 1, toSkip, (first + toSkip) % (soldiers - 1))
    if (position < first) {
      position
    } else {
      position + 1
    }
  }
}
Admin
Sorry messed up the code tag
Admin
ColdFusion
Admin
a simple python solution:
Admin
Beat me to it.
Admin
Admin
In that case, I'll find the P Spot.
Admin
You stand as a first one to die and then invert the alive-dead state and you're the only one alive.
Admin
PHP:
Admin
Admin
C#:

private int Josephus(int soldiers, int everyOther)
{
    List<int> alive = new List<int>();
    for (int i = 1; i <= soldiers; i++)
        alive.Add(i);

}
Admin
Pipped at the post by the one and only Ben Forta. I'm half livid, half honoured :)
Admin
Assuming that the start position is index 1, he is safe as he is the only one in the game:

f(1,k) = 1

In general you solve the n'th case by reducing the problem by 1 and solving, adding k afterwards:

f(n,k) = f(n-1,k) + k mod n + 1
       = f(n-2,k) + 2k mod n
       = f(1,k) + (n-1)k mod n
       = 1 + (n-1)k mod n

So all you need to do is:

public static int f(int n, int k) {
    return 1 + (((n-1) * k) % n);
}
Admin
Crude Python method (which calculates lists of soldiers along the way):
Admin
c++ templates
template <unsigned int s, unsigned int k>
struct safe {
    static const unsigned int result = (safe<s - 1, k>::result + k) % s;
};

template <unsigned int k>
struct safe<1, k> {
    static const unsigned int result = 0;
};

unsigned int index = safe<12, 3>::result;
it came out looking exactly like the python implementation
Admin
What happened to the locker problem comment? I can't find it anymore, and I had such a nice solution
Admin
Admin
Truly, there is nothing sexier than recursion. It's especially elegant for problems like this one.
Unfortunately, it's a bitch to maintain, and a memory hog for larger functions.
Admin
Javascript (0 starting index):
Admin
But then we wouldn't be able to mock God in a unit test!
Admin
But then we wouldn't be able to mock God in a unit test!
Admin
I just wanted to say that the animations accompanying these are brilliant. :)
Admin
assuming that the starting position is 0:
Admin
similarly (ruby1.9):
j = lambda { |n,k| n == 1 ? 0 : (j.(n-1, k) + k) % n }
Admin
Here is a solution in PHP:
$a = array();
$b = 0;

for ($x = 0; $x < $argv[1]; $x++) {
    $a[] = $x + 1;
}
while (sizeof($a) > 1) {
    $c = ($b + $argv[2] - 1) % sizeof($a);
    $b = $c;
    unset($a[$c]);
    $a = array_values($a);
}
print $a[0];
Admin
Sure I could have used a modulus, but I like the notion of representing the circle of soldiers literally.
Admin
I prefer to avoid recursion if possible (and in C++ for no good reason):
#include <iostream>
#include <cstdlib>

using namespace std;
/* f(n,k) = (f(n-1,k) + k) mod n,  f(1,k) = 0
   => r = (r + k) % a for a = 2..n
   -> (...(((k%2 + k)%3 + k)%4 + k)%5 ... + k)%n */

int main (int argc, const char **argv)
{
    int n, k;
    if (argc != 3) {
        cerr << "Usage: " << argv[0] << " <N> <K>\n";
        return -1;
    }

    n = atoi (argv[1]);
    k = atoi (argv[2]);

    int r = 0;
    for (int a = 2; a <= n; a++)
        r = (r + k) % a;

    cout << "With " << n << " soldiers and killing every " << k
         << "'th, the safe spot is position " << r + 1 << endl;
    return 0;
}
Admin
In Java, using Lists:
Using the recursion:
Admin
Admin
Vba:
Admin
Admin
And also, if your grandparents are there, what the word ending in "ger" is. Talk about embarrassing.
Admin
99 characters of ruby on the wall...
def josephus(s,n)c=(0...s).to_a ;i=0;while c.size>1;c[i,1]=nil;i=i+n;i=i%c.size;end;return c[0];end;
or with line breaks instead of ;
def josephus(s,n)c=(0...s).to_a i=0 while c.size>1 c[i,1]=nil i=i+n i=i%c.size end return c[0]; end
Admin
Mmh, these solutions overlook one important point: "Josephus somehow managed to figure out — very quickly — where to stand with forty others".
Unless Ole Joe had a phenomenal stack-based brain, he simply couldn't recurse through the 860 iterations required for a 41-soldier circle (starting at 2).
So there must be some kind of shortcut (e.g. to know whether a number is divisible by 5, you don't have to actually divide it, just check whether the final digit is 5 or 0).
Indeed, here is the list of safe spots according to the number of soldiers (assuming a skip of 3):
Soldiers → Last Spot (skip of 3):
 2→2    3→2    4→1    5→4    6→1    7→4    8→7    9→1   10→4   11→7
12→10  13→13  14→2   15→5   16→8   17→11  18→14  19→17  20→20  21→2
22→5   23→8   24→11  25→14  26→17  27→20  28→23  29→26  30→29  31→1
32→4   33→7   34→10  35→13  36→16  37→19  38→22  39→25  40→28  41→31
The series on the right looks conspicuously susceptible to a shortcut. But I can't find it for the life of me... Anyone ?
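The recurrence quoted in several of the comments above can be checked directly. A minimal Python sketch (1-indexed result, matching the table in the comment above):

```python
def josephus(n, k):
    """Safe position (1-indexed) among n soldiers when every k-th is killed.

    Iterative form of the recurrence f(1, k) = 0, f(n, k) = (f(n-1, k) + k) mod n.
    """
    r = 0
    for a in range(2, n + 1):
        r = (r + k) % a
    return r + 1

# Reproduces the table above for a skip of 3, including Josephus' own case:
print(josephus(41, 3))  # 31
```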
https://thedailywtf.com/articles/comments/Programming-Praxis-Josephus-Circle/?parent=279989
Real Time GraphQL Mutations — Using Apollo Client, React and Optimistic UI
Taking your app UX to the next level isn't an easy task; however, choosing the right tools can help you deliver a brilliant experience to your users.
Long story short — I've always been interested in learning new technologies to improve the websites I develop for a better UI/UX. It seems easy to think that way, but it's actually very hard and sometimes frustrating to bring the most value to your users.
Almost a year ago I started a project called Askable as a side project at the company I'm currently working for. Since then it has gained traction and become a very promising project that could turn into a company, with high stakes and challenges to overcome every day. I still remember the day I was creating the booking form (basically 5 different pages with loads of options and a Save button on every page) when the head of product Dre Zhou came to me saying that he wanted that form to be "Real Time".
I simply told him: "You know that's not an easy task, right?" He replied: "I know, but I know you can do it!" I did it.
In this article I’ll show you how I achieved such a thing.
Choose your stack well
Take your time. There are a lot of tools available to help us create amazing experiences. Two years ago I was bored and frustrated with the technology I was using, so I decided to step up and study React and React Native. This was by far one of the best decisions I've made in my professional life. One year later I was building amazing apps and websites with React and Redux, but got overwhelmed with the amount of code I had to write. I then decided to step up again and study GraphQL (mainly because of an article from Peggy Rayzis). Another great professional decision.
The beauty of technology comes when you start connecting all the dots to create amazing products.
Tools like Apollo and React are awesome and you should check them out.
Anyway, enough talking about me and how I became who I'm today. Let's go to the fun part…
Real-time
How can you achieve the kind of experience that products like Medium or Google Docs give you? Imagine a situation where you need to save data while the user is typing. That alone raises lots of questions:
- Can my server process HTTP requests in real-time?
- Will it crash my server?
- Should I make a call every time my state changes?
- What happens with the data the server returns?
- But, how?
The answer is GraphQL, Apollo Client and Optimistic UI.
Optimistic UI stands for an operation that fakes your final data and updates your state until the network responds from your server.
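As a rough sketch of that idea (a toy model only — Apollo's real cache logic is far more involved): apply the predicted value to local state immediately, then reconcile with the real response, rolling back on error.

```python
def optimistic_update(state: dict, key, predicted, server_call):
    """Apply `predicted` right away, then reconcile with the server result."""
    previous = state.get(key)
    state[key] = predicted           # the UI sees this immediately
    try:
        state[key] = server_call()   # real response replaces the prediction
    except Exception:
        state[key] = previous        # network failed: roll back
    return state

def server_ok():
    return 2

def server_fail():
    raise RuntimeError("network error")

state = {"type": 1}
optimistic_update(state, "type", 2, server_ok)
assert state["type"] == 2            # confirmed by the "server"

optimistic_update(state, "type", 3, server_fail)
assert state["type"] == 2            # rolled back to the last confirmed value
```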
Final Goal
Notice in this example the little status toggle next to the "Next" button. It reflects the real-time changes on the database. Contrary to what most people might think, this page doesn't save the data on the "Next" or "Save & Close" button. In fact, those buttons don't do anything special; they just redirect the user to a different page.
Every change on this page triggers a mutation called
updateBooking, and every piece of data is connected to a field returned from a query.
These are the steps to get this process up and running:
- Setting up your Query
- Setting up your Mutation
- Connect your Query and Mutation to your component
- Make your UI fields rely on your query
I’ve included all the fields we are using in production to make this process feel as close as it is in production. (You probably don’t need this level of complexity on your code)
Setting up your Query
query FetchBookingById($id: ID) {
bookingByID(id: $id) {
_id
config {
type
}
}
}
Settings up your Mutation
mutation updateBooking($booking_id: ID!, $booking: BookingInput!) {
updateBooking(booking_id: $booking_id, booking: $booking) {
_id
config {
type
}
}
}
Notice that the fields returned from my mutation are exactly the same as those from my
FetchBookingById query. This is intentional and VERY important for our Optimistic UI to work. (You can use Fragments if you want to make sure both files stay in sync.)
Connect your Query and Mutation to your component
import { graphql, compose } from 'react-apollo';
import fetchBookingById from 'src/queries/booking/fetchBookingById';
import updateBookingDetails from 'src/mutations/booking/updateBookingDetails';

// ... your component code ...

const bookingDataContainer = graphql(fetchBookingById, {
  name: 'bookingData',
  options: () => ({
    variables: {
      id: booking_id
    }
  })
});

const updateBookingContainer = graphql(updateBookingDetails, {
  props: ({ mutate }) => ({
    updateBooking: (booking_id, booking) => mutate({
      variables: { booking_id, booking },
      refetchQueries: [{
        query: fetchBookingById,
        variables: { id: booking_id },
      }],
      optimisticResponse: {
        updateBooking: {
          _id: booking_id,
          config: {
            type: booking.config.type,
            __typename: 'BookingConfig'
          },
          __typename: 'Booking'
        }
      }
    }),
  })
});

export default compose(
  bookingDataContainer,
  updateBookingContainer
)(CreateBookingStepBookingDetails);
Make your UI fields rely on your query
renderSessionTypesContainer() {
return (
<div
key="sessionTypesContainer"
className="sessionTypesContainer bookingFormAccordion"
>
<RadioButton
name="sessionTypesGroup"
onChange={(value) => {
this.updateBooking({
booking: {
...this.state.booking,
config: {
...this.state.booking.config,
type: parseInt(value, 10)
}
}
});
}}
value={this.props.bookingData.bookingByID.config.type}
values={bookingUtils.sessionTypes()}
/>
</div>
);
}
Notice that the value of the component that renders the "Session Type" options in our UI comes from our props.
Breaking it down
So let’s break down this whole code to understand how each part is playing well with each other.
- The page is loaded with a bunch of pre-filled fields. We call them Default Valid fields (which you can read more about in our post)
- The user changes the Session Type (shown in the GIF above) from 1-on-1 interviews to Focus Group
- We update our local state to change the field booking.config.type
- We call our mutation updateBooking, sending along our updated "booking" (coming from this.state)
- Apollo identifies that we are using an optimistic response on the mutation, so it immediately returns what we believe the network response will be
- We update our UI based on our new props (notice that we connected our mutation to our component via compose)
- Once the network request comes back from our server, it will update the component again, as it will receive new props.
Advantages of this approach
We’ve been getting lots of real good feedback from our users after we adopted this approach of using Optimistic UI with React and Apollo to update UI components. In fact, we’ve seen an increase of 20% on the number of returning clients that started a booking and finished where they left off in the past. This also improved significantly the level of trust we’re building up with clients as they know their information is always saved securely and in real-time.
We’ve been running React and GraphQL together in production for a while already, so feel free to get in touch with me if you have any question. Feedbacks are also very welcome.
https://medium.com/@franciscovarisco/real-time-graphql-mutations-using-apollo-client-react-and-optimistic-ui-10e35ec3553e?utm_campaign=Fullstack%2BReact&utm_medium=web&utm_source=Fullstack_React_104
Interoperability
XAP offers interoperability between documents and concrete objects via the space - it is possible to write objects and read them as documents, and vice versa. This is usually useful in scenarios that require reading and/or manipulating objects without loading the concrete .NET classes. This page describes how to do it.
Requirements
When working with documents, the user is in charge of creating and registering the space type descriptor manually before writing/reading documents. When working with concrete objects, the system implicitly generates a space type descriptor for the object's class (using attributes or gs.xml files) when the class is used for the first time. In order to inter-operate, the same type descriptor should be used for both concrete objects and documents.
If the object’s class is in the application’s AppDomain, or the object type is already registered in the space, there’s no need to register it again - the application will retrieve it automatically when it’s used for the first time. For example:
// Create a document template using the object class name:
SpaceDocument template = new SpaceDocument(typeof(MyObject).FullName);
// Count all entries matching the template:
int count = spaceProxy.Count(template);
If the object’s class is not available in the classpath or server, the application will throw an exception indicating there concrete object settings and maintain them if the object class changes.
Query Result Type
When no interoperability is involved this is a trivial matter - querying an object type returns objects, and querying a document type returns documents. When we want to mix and match, we need semantics to determine the query result type - Object or Document.
Template Query
Template query result types are determined by the template class - if the template is an instance of
SpaceDocument, the results will be documents; otherwise they will be objects.
Sql Query
The
SqlQuery class has a
QueryResultType setting which can be set at construction. The following options are available:
- Object - Return .NET object(s).
- Document - Return space document(s).
ID Based Query
In order to support ID queries for documents, use the
IdQuery class, which encapsulates the type, id, routing and a
QueryResultType, with the corresponding
ISpaceProxy overload methods:
ReadById,
ReadIfExistsById,
TakeById and
TakeIfExistsById. The result type is determined by the
QueryResultType, similar to
SqlQuery.
Respectively, to support multiple-id queries, use the
IdsQuery with the corresponding
ReadByIds and
TakeByIds.
Dynamic Properties
By default, type descriptors created from concrete object classes do not support dynamic properties. If a document of such a type is written to the space with a property that is not defined in the object class, an exception will be thrown indicating that the property is not defined in the type and the type does not support dynamic properties.
In order to have a concrete class support dynamic properties, it should have a property decorated with the [SpaceDynamicProperties] attribute, and the type of that property must be either DocumentProperties or Dictionary<string, object>:
[SpaceClass]
public class MyObject
{
    private Dictionary<String, Object> _dynamicProperties;

    [SpaceDynamicProperties]
    public Dictionary<String, Object> DynamicProperties
    {
        get { return _dynamicProperties; }
        set { _dynamicProperties = value; }
    }
}
The storage type of the dynamic properties can be explicitly set in the attribute [SpaceDynamicProperties(StorageType=StorageType.Binary)] (the default is StorageType.Object).
For more details about storage type refer to Property Storage Type
https://docs.gigaspaces.com/xap/11.0/dev-dotnet/document-object-interoperability.html
Get started with the Data Protection APIs in ASP.NET Core
using System;
using Microsoft.AspNetCore.DataProtection;
using Microsoft.Extensions.DependencyInjection;

public class Program
{
    public static void Main(string[] args)
    {
        // add data protection services
        var serviceCollection = new ServiceCollection();
        serviceCollection.AddDataProtection();
        var services = serviceCollection.BuildServiceProvider();

        // create an instance of MyClass using the service provider
        var instance = ActivatorUtilities.CreateInstance<MyClass>(services);
        instance.RunSample();
    }

    public class MyClass
    {
        IDataProtector _protector;

        // the 'provider' parameter is provided by DI
        public MyClass(IDataProtectionProvider provider)
        {
            _protector = provider.CreateProtector("Contoso.MyClass.v1");
        }

        public void RunSample()
        {
            Console.Write("Enter input: ");
            string input = Console.ReadLine();

            // protect the payload
            string protectedPayload = _protector.Protect(input);
            Console.WriteLine($"Protect returned: {protectedPayload}");

            // unprotect the payload
            string unprotectedPayload = _protector.Unprotect(protectedPayload);
            Console.WriteLine($"Unprotect returned: {unprotectedPayload}");
        }
    }
}

/*
 * SAMPLE OUTPUT
 *
 * Enter input: Hello world!
 * Protect returned: CfDJ8ICcgQwZZhlAlTZT...OdfH66i1PnGmpCR5e441xQ
 * Unprotect returned: Hello world!
 */
When you create a protector you must provide one or more Purpose Strings. A purpose string provides isolation between consumers. For example, a protector created with a purpose string of "green" wouldn't be able to unprotect data provided by a protector with a purpose of "purple".
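The isolation guarantee can be pictured with a toy protector (an analogy only, not the real Data Protection implementation — the real Protect also encrypts, while this sketch only authenticates; all names here are made up): each purpose string derives its own key, so a payload protected under "green" fails verification under "purple".

```python
import hashlib
import hmac

class ToyProtector:
    """Toy analogy of IDataProtector: an HMAC keyed per purpose string."""

    def __init__(self, master_key: bytes, purpose: str):
        # derive a purpose-specific key from the master key
        self._key = hashlib.sha256(master_key + purpose.encode()).digest()

    def protect(self, payload: bytes) -> bytes:
        tag = hmac.new(self._key, payload, hashlib.sha256).digest()
        return tag + payload

    def unprotect(self, blob: bytes) -> bytes:
        tag, payload = blob[:32], blob[32:]
        expected = hmac.new(self._key, payload, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expected):
            raise ValueError("payload was not protected under this purpose")
        return payload

master = b"master-secret"
green = ToyProtector(master, "green")
purple = ToyProtector(master, "purple")

blob = green.protect(b"hello")
assert green.unprotect(blob) == b"hello"
try:
    purple.unprotect(blob)   # wrong purpose: verification fails
except ValueError:
    print("purple cannot unprotect green's payload")
```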
Tip
Instances of
IDataProtectionProvider and
IDataProtector are thread-safe for multiple callers. It's intended that once a component gets a reference to an
IDataProtector via a call to
CreateProtector, it will use that reference for multiple calls to
Protect and
Unprotect.
A call to
Unprotect will throw CryptographicException if the protected payload cannot be verified or deciphered. Some components may wish to ignore errors during unprotect operations; a component which reads authentication cookies might handle this error and treat the request as if it had no cookie at all rather than fail the request outright. Components which want this behavior should specifically catch CryptographicException instead of swallowing all exceptions.
https://docs.microsoft.com/en-us/aspnet/core/security/data-protection/using-data-protection?view=aspnetcore-2.2
#include <a.out.h>
#include <elf.h>
ld and as can produce output files in either of two file formats: Common Object File Format (COFF) and Executable and Linking Format (ELF). See the ld(CP) and as(CP) manual pages for further information. The following sections, ``COFF files'' and ``ELF files'', describe these two file formats.
A common object file consists of a file header, a UNIX system header (if the file is link editor output), a table of section headers, relocation information, (optional) line numbers, a symbol table, and a string table. The order is as shown:
File header
UNIX system header
Section 1 header
...
Section n header
Section 1 data
...
Section n data
Section 1 relocation
...
Section n relocation
Section 1 line numbers
...
Section n line numbers
Symbol table
String table
Some of these parts can be missing:
When an a.out file is loaded into memory for execution, three logical segments are set up:
The text segment starts at virtual address 0.
The a.out file produced by ld may have one of two magic numbers, 0410 and 0413, in the first field of the UNIX system header. A magic number of 0410 indicates that the executable must be swapped through the private swapping store of the UNIX system, while the magic number 0413 makes the system try to page the text directly from the a.out file.
In a 0410 executable, the text section is loaded at virtual location 0x00000000. The data section is loaded after the text section.
For a 0413 executable, the headers (file header, UNIX system header, and section headers) are loaded at the beginning of the text segment, and the text follows the headers in the user address space. The first text address equals the sum of the sizes of the headers, and varies, depending on the number of sections in the a.out file. In an a.out file with three sections (.text, .data, and .bss), the first text address is at 0x000000D0.
The data section starts in the next page table directory after the last one used by the text section, in the first page of that directory, with an offset into that page equal to the first unused memory offset in the last page of text.
Thus, given that etext is the address of the last byte of the text section, the first byte of the data section is at 0x00400000 + (etext & 0xFFC00000) + ((etext+1) & 0xFFC00FFF).
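That address arithmetic is easy to sanity-check with a small helper (the etext value below is made up for illustration):

```python
def data_start(etext: int) -> int:
    """First byte of the data section, per the formula above."""
    return 0x00400000 + (etext & 0xFFC00000) + ((etext + 1) & 0xFFC00FFF)

# If the last text byte is at 0x00000123, the data section begins at 0x00400124.
print(hex(data_start(0x00000123)))  # 0x400124
```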
On an 80386 computer, the stack begins at location 7FFFFFFC and grows toward lower addresses. The stack is automatically extended as required. The data segment is extended only as requested by the brk(S) system call.
For relocatable files, the value of a word in the text or data portions that is not a reference to an undefined external symbol is the value that appears in memory when the file is executed. If a word in the text involves a reference to an undefined external symbol, there is a relocation entry for the word; the storage class of the symbol-table entry for the symbol is marked as an ``external symbol'', and the value and section number of the symbol-table entry are undefined. When the file is processed by the link editor and the external symbol becomes defined, the value of the symbol is added to the word in the file.
struct filehdr {
    unsigned short f_magic;   /* magic number */
    unsigned short f_nscns;   /* number of sections */
    long f_timdat;            /* time and date stamp */
    long f_symptr;            /* file ptr to symtab */
    long f_nsyms;             /* # symtab entries */
    unsigned short f_opthdr;  /* sizeof(opt hdr) */
    unsigned short f_flags;   /* flags */
};

typedef struct aouthdr {
    short magic;        /* magic number */
    short vstamp;       /* version stamp */
    long tsize;         /* text size in bytes, padded */
    long dsize;         /* initialized data (.data) */
    long bsize;         /* uninitialized data (.bss) */
    long entry;         /* entry point */
    long text_start;    /* base of text used for this file */
    long data_start;    /* base of data used for this file */
} AOUTHDR;

struct scnhdr {
    char s_name[SYMNMLEN];    /* section name */
    long s_paddr;             /* physical address */
    long s_vaddr;             /* virtual address */
    long s_size;              /* section size */
    long s_scnptr;            /* file ptr to raw data */
    long s_relptr;            /* file ptr to relocation */
    long s_lnnoptr;           /* file ptr to line numbers */
    unsigned short s_nreloc;  /* # reloc entries */
    unsigned short s_nlnno;   /* # line number entries */
    long s_flags;             /* flags */
};
struct reloc {
    long r_vaddr;    /* (virtual) address of reference */
    long r_symndx;   /* index into symbol table */
    ushort r_type;   /* relocation type */
};

The start of the relocation information is s_relptr from the section header. If there is no relocation information, s_relptr is 0.
#define SYMNMLEN 8
#define FILNMLEN 14
#define DIMNUM 4

struct syment {
    union {                      /* all ways to get a symbol name */
        char _n_name[SYMNMLEN];  /* name of symbol */
        struct {
            long _n_zeroes;      /* == 0L if in string table */
            long _n_offset;      /* location in string table */
        } _n_n;
        char *_n_nptr[2];        /* allows overlaying */
    } _n;
    long n_value;                /* value of symbol */
    short n_scnum;               /* section number */
    unsigned short n_type;       /* type and derived type */
    char n_sclass;               /* storage class */
    char n_numaux;               /* number of aux entries */
};

#define n_name   _n._n_name
#define n_zeroes _n._n_n._n_zeroes
#define n_offset _n._n_n._n_offset
#define n_nptr   _n._n_nptr[1]
Some symbols require more information than a single entry; they are followed by auxiliary entries that are the same size as a symbol entry. The format follows:
union auxent {
    struct {
        long x_tagndx;
        union {
            struct {
                unsigned short x_lnno;
                unsigned short x_size;
            } x_lnsz;
            long x_fsize;
        } x_misc;
        union {
            struct {
                long x_lnnoptr;
                long x_endndx;
            } x_fcn;
            struct {
                unsigned short x_dimen[DIMNUM];
            } x_ary;
        } x_fcnary;
        unsigned short x_tvndx;
    } x_sym;

    union {
        char x_fname[FILNMLEN];
        struct {
            long x_zero;
            long x_offset;
        } x_longname;
    } x_file;

    struct {
        long x_scnlen;
        unsigned short x_nreloc;
        unsigned short x_nlinno;
    } x_scn;

    struct {
        long x_tvfill;
        unsigned short x_tvlen;
        unsigned short x_tvran[2];
    } x_tv;
};
Indexes of symbol table entries begin at zero.
The start of the symbol table is f_symptr
bytes from the beginning of the file.
This value is taken from the file header.
If the symbol table is stripped, f_symptr is 0.
The string table (if one exists) begins at f_symptr +
(f_nsyms
SYMESZ) bytes from the beginning of the file.
The macro SYMESZ is defined in <syms.h>.
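Those offsets can be computed directly. A small sketch (SYMESZ is 18 bytes for a COFF symbol entry; the header values are invented for illustration):

```python
SYMESZ = 18  # sizeof(struct syment) in COFF

def strtab_offset(f_symptr: int, f_nsyms: int) -> int:
    """String table offset: the string table immediately follows the symbol table."""
    return f_symptr + f_nsyms * SYMESZ

# e.g. a file header with f_symptr = 0x200 and f_nsyms = 10:
print(hex(strtab_offset(0x200, 10)))  # 0x2b4
```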
Programs that manipulate ELF files may use the library that is described by the elf(S) manual page. An overview of the file format follows. For more complete information, see the references given later.
Use the file(C) command to display information about whether an ELF object is dynamically or statically linked, was stripped, and whether it contains debug info.
http://osr507doc.xinuos.com/en/man/html.FP/a.out.FP.html
If you need to use AzCopy v8.1, see the Use the previous version of AzCopy section of this article.
Download AzCopy
First, download the AzCopy V10 executable file to any directory on your computer.
By using Azure AD, you can provide credentials once instead of having to append a SAS token to each command.
The level of authorization that you need is based on whether you plan to upload files or just download them.
If you just want to download files, then verify that the Storage Blob Data Reader role has been assigned to your user identity or service principal.
Note
This way, the client secret won't appear in your console's command history.
Next, type the following command, and then press the ENTER key.
azcopy login --service-principal --application-id <application-id>
Replace the
<application-id> placeholder with the application ID of your service principal's app registration. in PowerShell.
$env:AZCOPY_SPA_CERT_PASSWORD="$(Read-Host -prompt "Enter key")"
Next, type the following command, and then press the ENTER key.
azcopy login --service-principal --certificate-path <path-to-certificate-file>
Replace the
<path-to-certificate-file> placeholder with the relative or fully-qualified path to the certificate file. AzCopy saves the path to this certificate but it doesn't save a copy of the certificate, so make sure to keep that certificate in place.
Note
Consider using a prompt as shown in this example. That way, your password won't appear in your console's command history. For examples, see the Authenticate your service principal section of this article.
Use AzCopy in Storage Explorer
If you want to leverage the performance advantages of AzCopy, but you prefer to use Storage Explorer rather than the command line to interact with your files, then enable AzCopy in Storage Explorer.
In Storage Explorer, choose Preview->Use AzCopy for Improved Blob Upload and Download.
Note
You don't have to enable this setting if you've enabled a hierarchical namespace on your storage account. That's because Storage Explorer automatically uses AzCopy on storage accounts that have a hierarchical namespace.
Storage Explorer uses your account key to perform operations, so after you sign into Storage Explorer, you won't need to provide additional authorization credentials.
Use the previous version of AzCopy
If you need to use the previous version of AzCopy (AzCopy v8.1), see either of the following links:
Configure, optimize, and troubleshoot AzCopy
See Configure, optimize, and troubleshoot AzCopy
Next steps
If you have questions, issues, or general feedback, submit them on the GitHub page.
https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy-v10
Flutter Ravepay
Flutter_Ravepay provides a wrapper that incorporates payments using Ravepay within your Flutter applications. The integration is achieved using Ravepay's Android/iOS SDK libraries. It currently has full support only for Android; getting it to work on iOS comes with a few more steps and configurations (soon to come).
Installing
dependencies: flutter_ravepay: "^0.2.0"
Import
import 'package:flutter_ravepay/flutter_ravepay.dart';
Instantiate
Ravepay ravePay = Ravepay.of(context);
Charging a Card
RavepayResult result = await ravePay.chargeCard(
  const RavepayConfig(
    amount: 4500.0,
    country: "NG",
    currency: "NGN",
    email: "testemail@gmail.com",
    firstname: "Jeremiah",
    lastname: "Ogbomo",
    narration: "Test Payment",
    publicKey: "****",
    secretKey: "****",
    txRef: "ravePay-1234345",
    useAccounts: false,
    useCards: true,
    isStaging: true,
    useSave: true,
    metadata: [
      const RavepayMeta("email", "jeremiahogbomo@gmail.com"),
      const RavepayMeta("id", "1994"),
    ],
  ),
);
Bugs/Requests
If you encounter any problems feel free to open an issue. If you feel the library is missing a feature, please raise a ticket on Github and I'll look into it. Pull request are also welcome.
Note
For help getting started with Flutter, view our online documentation.
For help on editing plugin code, view the documentation.
License
MIT License
https://pub.dev/documentation/flutter_ravepay/latest/
Description
Chrono::Parallel collision_settings.
This structure contains all settings associated with the collision detection phase.
#include <ChSettings.h>
Constructor & Destructor Documentation
The default values are specified in the constructor; use these as guidelines when experimenting with your own simulation setup.
Member Data Documentation
This variable is the primary method to control the granularity of the collision detection grid used for the broadphase.
As the name suggests, it is the number of slices along each axis. During the broadphase stage the extents of the simulation are computed and then sliced according to this variable.
This parameter, similar to the one in Chrono, inflates each collision shape by a certain amount.
This is necessary when using NSC, as it creates the contact constraints before objects actually come into contact. In general this helps with stability.
There are multiple narrowphase algorithms implemented in the collision detection code.
The narrowphase_algorithm parameter can be used to change the type of narrowphase used at runtime.
Chrono parallel has an optional feature that allows the user to set a bounding box that automatically freezes (makes inactive) any object that exits the bounding box.
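The effect can be pictured with a tiny sketch (illustrative only, not Chrono's API): any body whose position leaves the axis-aligned box is flagged inactive.

```python
def deactivate_outside(positions, bb_min, bb_max):
    """Return one active flag per body: False once a body exits the AABB."""
    flags = []
    for p in positions:
        inside = all(lo <= c <= hi for c, lo, hi in zip(p, bb_min, bb_max))
        flags.append(inside)
    return flags

flags = deactivate_outside(
    [(0, 0, 0), (5, 0, 0)],        # two body positions
    bb_min=(-1, -1, -1),
    bb_max=(1, 1, 1),
)
print(flags)  # [True, False]
```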
http://api.projectchrono.org/classchrono_1_1collision__settings.html
Object.GetHashCode Method
Updated: October 2010
Serves as a hash function for a particular type.
Namespace: System
Assembly: mscorlib (in mscorlib.dll)
Syntax
'Declaration Public Overridable Function GetHashCode As Integer
public virtual int GetHashCode()
Return Value
Type: System.Int32
A hash code for the current Object.
Remarks
GetHashCode returns equal values for different methods of the same class. GetHashCode returns zero for some user-defined structs in Silverlight for Windows Phone, but non-zero values in Silverlight.
Examples
In some cases, the GetHashCode method is implemented to simply return an integer value. The following code example illustrates an implementation of GetHashCode that returns an integer value.
Public Structure Int32
    Public value As Integer
    'other methods...
    Public Overrides Function GetHashCode() As Integer
        Return value
    End Function 'GetHashCode
End Structure 'Int32

using System;

public struct Int32
{
    public int value;
    //other methods...
    public override int GetHashCode()
    {
        return value;
    }
}
Frequently, a type has multiple data fields that can participate in generating the hash code. One way to generate a hash code is to combine these fields using an XOR (eXclusive OR) operation, as shown in the following code example.
Public Structure Point
    Public x As Integer
    Public y As Integer
    'other methods
    Public Overrides Function GetHashCode() As Integer
        Return x Xor y
    End Function 'GetHashCode
End Structure 'Point

using System;

public struct Point
{
    public int x;
    public int y;
    //other methods
    public override int GetHashCode()
    {
        return x ^ y;
    }
}
The following code example illustrates another case where the type's fields are combined using XOR (eXclusive OR) to generate the hash code. Notice that in this code example, the fields represent user-defined types, each of which implements GetHashCode and Equals.
Public Class SomeType
    Public Overrides Function GetHashCode() As Integer
        Return 0
    End Function 'GetHashCode
End Class 'SomeType

Public Class AnotherType
    Public Overrides Function GetHashCode() As Integer
        Return 1
    End Function 'GetHashCode
End Class 'AnotherType

Public Class LastType
    Public Overrides Function GetHashCode() As Integer
        Return 2
    End Function 'GetHashCode
End Class 'LastType

Public Class [MyClass]
    Private a As New SomeType()
    Private b As New AnotherType()
    Private c As New LastType()
    Public Overrides Function GetHashCode() As Integer
        Return a.GetHashCode() Xor b.GetHashCode() Xor c.GetHashCode()
    End Function 'GetHashCode
End Class '[MyClass]
Public Structure Int64
    Public value As Long
    'other methods...
    Public Overrides Function GetHashCode() As Integer
        Return (Fix(value) Xor Fix(value >> 32))
    End Function 'GetHashCode
End Structure 'Int64

using System;

public struct Int64
{
    public long value;
    //other methods...
    public override int GetHashCode()
    {
        return ((int)value ^ (int)(value >> 32));
    }
}
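The same field-combining idea carries over to other languages; a Python sketch (not tied to .NET's hashing contract):

```python
class Point:
    """Equal points must hash equally, so __hash__ uses the same fields as __eq__."""

    def __init__(self, x: int, y: int):
        self.x = x
        self.y = y

    def __eq__(self, other):
        return isinstance(other, Point) and (self.x, self.y) == (other.x, other.y)

    def __hash__(self):
        # XOR the participating fields, as in the examples above
        return self.x ^ self.y

def int64_hash(value: int) -> int:
    """Fold a 64-bit value into 32 bits by XOR-ing its halves (Int64 example above)."""
    return (value & 0xFFFFFFFF) ^ ((value >> 32) & 0xFFFFFFFF)

assert hash(Point(3, 5)) == hash(Point(3, 5))
assert int64_hash(0x1234567800000000) == 0x12345678
```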
https://docs.microsoft.com/en-us/previous-versions/windows/silverlight/dotnet-windows-silverlight/zdee4b3y%28v%3Dvs.95%29
Provides a flexible mechanism for controlling access, without requiring that a class be immutable. Once frozen, an object can never be unfrozen, so it is thread-safe from that point onward. Once the object has been frozen, it must guarantee that no changes can be made to it. Any attempt to alter it must raise an UnsupportedOperationException exception. This means that when the object returns internal objects, or if anyone has references to those internal objects, that those internal objects must either be immutable, or must also raise exceptions if any attempt to modify them is made. Of course, the object can return clones of internal objects, since those are safe.
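A minimal sketch of that freeze contract (names are illustrative, not any particular library's API): freezing is one-way, frozen mutators raise, and getters hand out copies so internal state cannot be reached.

```python
class FrozenError(RuntimeError):
    """Raised on any attempt to modify a frozen object."""

class FreezableList:
    def __init__(self, items=None):
        self._items = list(items or [])
        self._frozen = False

    def freeze(self):
        self._frozen = True   # one-way: there is no unfreeze
        return self

    def is_frozen(self):
        return self._frozen

    def add(self, item):
        if self._frozen:
            raise FrozenError("object is frozen")
        self._items.append(item)

    def items(self):
        # hand out a copy so callers cannot mutate internal state
        return list(self._items)

fl = FreezableList([1, 2])
fl.add(3)
fl.freeze()
try:
    fl.add(4)
except FrozenError:
    print("mutation after freeze is rejected")
assert fl.items() == [1, 2, 3]
fl.items().append(99)   # mutates only a copy
assert fl.items() == [1, 2, 3]
```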
Background
There are often times when you need objects to be 'safe', so that they can't be modified. Examples are when objects need to be thread-safe, or in writing robust code, or in caches. If you are only creating your own objects, you can guarantee this, of course -- but only if you don't make a mistake. If you have objects handed to you, or are creating objects using others handed to you, it is a different story. It all comes down to whether you want to take the Blanche DuBois approach ("depend on the kindness of strangers") or the Andy Grove approach ("Only the Paranoid Survive").
For example, suppose we have a simple class:
public class A {
    protected Collection b;
    protected Collection c;

    public Collection get_b() { return b; }
    public Collection get_c() { return c; }

    public A(Collection new_b, Collection new_c) {
        b = new_b;
        c = new_c;
    }
}
Since the class doesn't have any setters, someone might think that it is immutable. You know where this is leading, of course; this class is unsafe in a number of ways. The following illustrates that.
public void test1(SupposedlyImmutableClass x, SafeStorage y) {
    // unsafe getter
    A a = x.getA();
    Collection col = a.get_b();
    col.add(something); // a has now been changed, and x too

    // unsafe constructor
    a = new A(col, col);
    y.store(a);
    col.add(something); // a has now been changed, and y too
}
There are a few different techniques for having safe classes.
- Const objects. In C++, you can declare parameters const.
- Immutable wrappers. For example, you can put a collection in an immutable wrapper.
- Always-Immutable objects. Java uses this approach, with a few variations. Examples:
- Simple. Once a Color is created (eg from R, G, and B integers) it is immutable.
- Builder Class. There is a separate 'builder' class. For example, modifiable Strings are created using StringBuffer (which doesn't have the full String API available). Once you want an immutable form, you create one with toString().
- Primitives. These are always safe, since they are copied on input/output from methods.
- Cloning. Where you need an object to be safe, you clone it.
There are advantages and disadvantages of each of these.
- Const provides a certain level of protection, but since const can be and is often cast away, it only protects against most inadvertent mistakes. It also offers no threading protection, since anyone who has a pointer to the (unconst) object in another thread can mess you up.
- Immutable wrappers are safer than const in that the constness can't be cast away. But other than that they have all the same problems: not safe if someone else keeps hold of the original object, or if any of the objects returned by the class are mutable.
- Always-Immutable Objects are safe, but usage can require excessive object creation.
- Cloning is only safe if the object truly has a 'safe' clone; defined as one that ensures that no change to the clone affects the original. Unfortunately, many objects don't have a 'safe' clone, and always cloning can require excessive object creation.
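As a sketch of the immutable-wrapper technique and its main caveat (all names here are hypothetical, chosen for illustration), the following shows that the wrapper itself rejects modification, but a caller still holding the wrapped collection can change what the wrapper exposes:

```java
import java.util.*;

public class WrapperDemo {
    public static void main(String[] args) {
        List<String> original = new ArrayList<>(Arrays.asList("a", "b"));
        List<String> view = Collections.unmodifiableList(original);

        try {
            view.add("c"); // the wrapper itself rejects modification
        } catch (UnsupportedOperationException e) {
            System.out.println("wrapper is read-only");
        }

        // ...but anyone still holding the original can change what the view shows
        original.add("c");
        System.out.println(view.size()); // prints 3
    }
}
```

This is exactly the "not safe if someone else keeps hold of the original object" problem described above.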
Freezable Model
The Freezable model supplements these choices by giving you the ability to build up an object by calling various methods, then, when it is in a final state, make it immutable. Once immutable, an object cannot ever be modified, and is completely thread-safe: that is, multiple threads can have references to it without any synchronization. If someone needs a mutable version of an object, they can use cloneAsThawed() and modify the copy. This provides a simple, effective mechanism for safe classes in circumstances where the alternatives are insufficient or clumsy. (If an object is shared before it is immutable, then it is the responsibility of each thread to mutex its usage, as with other objects.)
Here is what needs to be done to implement this interface, depending on the type of the object.
Immutable Objects
These are the easiest. You just use the interface to reflect that, by adding the following:
public class A implements Freezable<A> {
    ...
    public final boolean isFrozen() { return true; }
    public final A freeze() { return this; }
    public final A cloneAsThawed() { return this; }
}
These can be final methods because subclasses of immutable objects must themselves be immutable. (Note: freeze is returning this for chaining.)
Mutable Objects
Add a protected 'flagging' field:
protected volatile boolean frozen; // WARNING: must be volatile
Add the following methods:
public final boolean isFrozen() { return frozen; }

public A freeze() {
    frozen = true; // WARNING: must be final statement before return
    return this;
}
Add a cloneAsThawed() method following the normal pattern for clone(), except that frozen=false in the new clone.
Then take the setters (that is, any method that can change the internal state of the object), and add the following as the first statement:
if (isFrozen()) {
    throw new UnsupportedOperationException("Attempt to modify frozen object");
}
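Putting the flag, the freeze() method, and the setter guard together, a minimal self-contained sketch might look like the following. The Freezable interface here is a simplified stand-in for the real one, and FreezablePoint is a hypothetical class invented for illustration:

```java
public class FreezableDemo {
    // Simplified stand-in for the Freezable interface described above
    interface Freezable<T> {
        boolean isFrozen();
        T freeze();
        T cloneAsThawed();
    }

    // Hypothetical mutable class following the pattern
    static class FreezablePoint implements Freezable<FreezablePoint>, Cloneable {
        private int x, y;
        protected volatile boolean frozen; // WARNING: must be volatile

        FreezablePoint(int x, int y) { this.x = x; this.y = y; }

        public final boolean isFrozen() { return frozen; }

        public FreezablePoint freeze() {
            frozen = true; // must be the final statement before return
            return this;
        }

        public FreezablePoint cloneAsThawed() {
            try {
                FreezablePoint copy = (FreezablePoint) super.clone();
                copy.frozen = false; // any clone starts out unfrozen
                return copy;
            } catch (CloneNotSupportedException e) {
                throw new AssertionError(e);
            }
        }

        // Every setter checks the flag first
        public void setX(int x) {
            if (isFrozen()) {
                throw new UnsupportedOperationException("Attempt to modify frozen object");
            }
            this.x = x;
        }

        public int getX() { return x; }
    }

    public static void main(String[] args) {
        FreezablePoint p = new FreezablePoint(1, 2).freeze();
        FreezablePoint thawed = p.cloneAsThawed();
        thawed.setX(7); // fine: the clone is unfrozen
        System.out.println(p.isFrozen() + " " + thawed.getX());
    }
}
```

Once frozen, any call to setX throws; cloneAsThawed() is the only way to get a modifiable copy.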
Subclassing
Any subclass of a Freezable will just use its superclass's flagging field. It must override freeze() and cloneAsThawed() to call the superclass, but normally does not override isFrozen(). It must then just pay attention to its own getters, setters and fields.
Internal Caches
Internal caches are cases where the object is logically unmodified, but internal state of the object changes. For example, there are const C++ functions that cast away the const on the "this" pointer in order to modify an object cache. These cases are handled by mutexing the internal cache to ensure thread-safety. For example, suppose that UnicodeSet had an internal marker to the last code point accessed. In this case, the field is not externally visible, so the only thing you need to do is to synchronize the field for thread safety.
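A minimal sketch of that idea in Java (the class and field names are hypothetical, not from any real API): the object is logically unmodified, but lazily computes and caches a value, and the cache access is synchronized so a frozen instance stays thread-safe:

```java
public class CachedLength {
    private final String text;
    private Integer cachedHash; // internal cache, not externally visible

    public CachedLength(String text) { this.text = text; }

    public int hash() {
        synchronized (this) {                 // mutex the internal cache
            if (cachedHash == null) {
                cachedHash = text.hashCode(); // object stays logically unmodified
            }
            return cachedHash;
        }
    }

    public static void main(String[] args) {
        CachedLength c = new CachedLength("abc");
        System.out.println(c.hash() == "abc".hashCode()); // prints true
    }
}
```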
Unsafe Internal Access
Internal fields are called safe if they are either frozen or immutable (such as String or primitives). If you've never allowed internal access to these, then you are all done. For example, converting UnicodeSet to be Freezable is accomplished just with the above steps. But remember that you have allowed access to unsafe internals if you have any code like the following in a getter, setter, or constructor:
Collection getStuff() { return stuff; }    // caller could keep reference & modify
void setStuff(Collection x) { stuff = x; } // caller could keep reference & modify
MyClass(Collection x) { stuff = x; }       // caller could keep reference & modify
These are also illustrated in the code sample in Background above.
To deal with unsafe internals, the simplest course of action is to do the work in the freeze() function. Just make all of your internal fields frozen, and set the frozen flag. Any subsequent getter/setter will work properly. Here is an example:
Warning! The 'frozen' boolean MUST be volatile, and must be set as the last statement in the method.
public A freeze() {
    if (!frozen) {
        foo.freeze();
        frozen = true;
    }
    return this;
}
If the field is a Collection or Map, then to make it frozen you have two choices. If you have never allowed access to the collection from outside your object, then just wrap it to prevent future modification.
zone_to_country = Collections.unmodifiableMap(zone_to_country);
If you have ever allowed access, then do a clone() before wrapping it.
zone_to_country = Collections.unmodifiableMap(zone_to_country.clone());
If a collection (or any other container of objects) itself can contain mutable objects, then for a safe clone you need to recurse through it to make the entire collection immutable. The recursing code should pick the most specific collection available, to avoid the necessity of later downcasting.
Note: An annoying flaw in Java is that the generic collections, like Map or Set, don't have a clone() operation. When you don't know the type of the collection, the simplest course is to just create a new collection:

zone_to_country = Collections.unmodifiableMap(new HashMap(zone_to_country));
Public Method Summary
Public Methods
public abstract T cloneAsThawed ()
Provides for the clone operation. Any clone is initially unfrozen.
public abstract T freeze ()
Freezes the object.
Returns
- the object itself.
public abstract boolean isFrozen ()
Determines whether the object has been frozen or not.
Source: https://developers.google.com/j2objc/javadoc/jre/reference/android/icu/util/Freezable?hl=es-419
import "github.com/gogf/gf/g/os/grpool"
Package grpool implements a goroutine reusable pool.
Add pushes a new job to the pool using default goroutine pool. The job will be executed asynchronously.
Jobs returns current job count of default goroutine pool.
Size returns current goroutine count of default goroutine pool.
Goroutine Pool
New creates and returns a new goroutine pool object. The parameter <limit> is used to limit the max goroutine count, which is not limited in default.
Add pushes a new job to the pool. The job will be executed asynchronously.
Cap returns the capacity of the pool. This capacity is defined when pool is created. If it returns -1 means no limit.
Close closes the goroutine pool, which makes all goroutines exit.
IsClosed returns if pool is closed.
Jobs returns current job count of the pool.
Size returns current goroutine count of the pool.
Package grpool imports 3 packages and is imported by 2 packages. Updated 2019-06-26.
Source: https://godoc.org/github.com/gogf/gf/g/os/grpool
OpenGL is an open standard for rendering 2D and 3D graphics leveraging graphics hardware. OpenGL has been implemented across a stunning array of platforms allowing apps targeting OpenGL to be extremely flexible.
In this example we will create a blank OpenGL window using LWJGL 3.0+. The steps for creating the project in your IDE are not covered here.
WindowManager.java
import org.lwjgl.glfw.*;
import static org.lwjgl.glfw.Callbacks.*;
import static org.lwjgl.glfw.GLFW.*;
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.system.MemoryUtil.*;

/**
 * Class containing code related to inflating the OpenGL window
 */
public class Displaymanager {

    private static long window;

    public static void createDisplay(){
        // Setup an error callback. The default implementation
        // will print the error message in System.err.
        GLFWErrorCallback.createPrint(System.err).set();

        // Initialize GLFW. Most GLFW functions will not work before doing this.
        if ( !glfwInit() )
            throw new IllegalStateException("Unable to initialize GLFW");

        // Configure our window
        glfwDefaultWindowHints(); // optional, the current window hints are already the default
        glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE); // the window will stay hidden after creation
        glfwWindowHint(GLFW_RESIZABLE, GLFW_TRUE); // the window will be resizable

        int WIDTH = 300;
        int HEIGHT = 300;

        // Create the window
        window = glfwCreateWindow(WIDTH, HEIGHT, "Hello World!", NULL, NULL);
        if ( window == NULL )
            throw new RuntimeException("Failed to create the GLFW window");

        // Setup a key callback. It will be called every time a key is pressed, repeated or released.
        glfwSetKeyCallback(window, (window, key, scancode, action, mods) -> {
            if ( key == GLFW_KEY_ESCAPE && action == GLFW_RELEASE )
                glfwSetWindowShouldClose(window, true); // We will detect this in our rendering loop
        });

        // Get the resolution of the primary monitor
        GLFWVidMode vidmode = glfwGetVideoMode(glfwGetPrimaryMonitor());

        // Center our window
        glfwSetWindowPos(
            window,
            (vidmode.width() - WIDTH) / 2,
            (vidmode.height() - HEIGHT) / 2
        );

        // Make the OpenGL context current
        glfwMakeContextCurrent(window);
        // Enable v-sync
        glfwSwapInterval(1);

        // Make the window visible
        glfwShowWindow(window);
    }

    public static boolean isCloseRequested(){
        return glfwWindowShouldClose(window);
    }

    public static void updateDisplay(){
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // clear the framebuffer

        glfwSwapBuffers(window); // swap the color buffers

        // Poll for window events. The key callback above will only be
        // invoked during this call.
        glfwPollEvents();
    }

    public static void destroyDisplay(){
        // Terminate GLFW and free the error callback
        cleanUp();
        glfwTerminate();
        glfwSetErrorCallback(null).free();
    }

    private static void cleanUp() {
        // Free the window callbacks and destroy the window
        glfwFreeCallbacks(window);
        glfwDestroyWindow(window);
    }
}
OpenGlMain.java
import org.lwjgl.opengl.GL;
import renderEngine.Displaymanager;
import static org.lwjgl.opengl.GL11.glClearColor;

/**
 * Class to test the OpenGL window
 */
public class OpenGlMain {

    public static void main(String[] args) {

        Displaymanager.createDisplay();

        // This line is critical for LWJGL's interoperation with GLFW's
        // OpenGL context, or any context that is managed externally.
        // LWJGL detects the context that is current in the current thread,
        // creates the GLCapabilities instance and makes the OpenGL
        // bindings available for use.
        GL.createCapabilities();

        while (!Displaymanager.isCloseRequested()){

            // Set the clear color
            glClearColor(1.0f, 0.0f, 0.0f, 0.0f);

            Displaymanager.updateDisplay();
        }

        Displaymanager.destroyDisplay();
    }
}
For further detail checkout official LWJGL Guide
Note: There will be some Objective-C in this example. We will make a wrapper to C++, so don't worry too much about it.
First start Xcode and create a project.
And select a Cocoa application
Delete all sources except the Info.plist file.(Your app won't work without it)
Create new source files: an Objective-C++ file and header (I've called mine MacApp), and a C++ class (I've called mine Application).
In the top left (with the project name) click on it and add linked frameworks and libraries. Add: OpenGL.Framework AppKit.Framework GLKit.Framework
Your project will look probably like this:
NSApplication is the main class you use while creating a macOS app. It allows you to register windows and catch events.

We want to register (our own) window with the NSApplication. First, create in your Objective-C++ header an Objective-C class that inherits from NSWindow and implements NSApplicationDelegate. The NSWindow needs a pointer to the C++ application, an OpenGL view, and a timer for the draw loop.
//Mac_App_H
#import <Cocoa/Cocoa.h>
#import "Application.hpp"
#import <memory>

NSApplication* application;

@interface MacApp : NSWindow <NSApplicationDelegate>{
    std::shared_ptr<Application> appInstance;
}
@property (nonatomic, retain) NSOpenGLView* glView;
-(void) drawLoop:(NSTimer*) timer;
@end
We call this from the main with
int main(int argc, const char * argv[]) {
    MacApp* app;
    application = [NSApplication sharedApplication];
    [NSApp setActivationPolicy:NSApplicationActivationPolicyRegular];
    //create a window with the size of 600 by 600
    app = [[MacApp alloc] initWithContentRect:NSMakeRect(0, 0, 600, 600)
                                    styleMask:NSTitledWindowMask | NSClosableWindowMask | NSMiniaturizableWindowMask
                                      backing:NSBackingStoreBuffered
                                        defer:YES];
    [application setDelegate:app];
    [application run];
}
The implementation of our window is actually quite easy. First we @synthesize our glView and add a global Objective-C boolean that tracks whether the window should close.
#import "MacApp.h" @implementation MacApp @synthesize glView; BOOL shouldStop = NO;
Now for the constructor. My preference is to use the initWithContentRect.
-(id)initWithContentRect:(NSRect)contentRect styleMask:(NSUInteger)aStyle backing:(NSBackingStoreType)bufferingType defer:(BOOL)flag{
    if(self = [super initWithContentRect:contentRect styleMask:aStyle backing:bufferingType defer:flag]){
        //sets the title of the window (Declared in Plist)
        [self setTitle:[[NSProcessInfo processInfo] processName]];

        //This is pretty important. OS X always starts with a context that only supports OpenGL 2.1
        //This will ditch the classic OpenGL and initialise OpenGL 4.1
        NSOpenGLPixelFormatAttribute pixelFormatAttributes[] = {
            NSOpenGLPFAOpenGLProfile, NSOpenGLProfileVersion3_2Core,
            NSOpenGLPFAColorSize    , 24 ,
            NSOpenGLPFAAlphaSize    , 8 ,
            NSOpenGLPFADoubleBuffer ,
            NSOpenGLPFAAccelerated  ,
            NSOpenGLPFANoRecovery   ,
            0
        };

        NSOpenGLPixelFormat* format = [[NSOpenGLPixelFormat alloc]initWithAttributes:pixelFormatAttributes];
        //Initialize the view
        glView = [[NSOpenGLView alloc]initWithFrame:contentRect pixelFormat:format];

        //Set context and attach it to the window
        [[glView openGLContext]makeCurrentContext];

        //finishing off
        [self setContentView:glView];
        [glView prepareOpenGL];
        [self makeKeyAndOrderFront:self];
        [self setAcceptsMouseMovedEvents:YES];
        [self makeKeyWindow];
        [self setOpaque:YES];

        //Start the c++ code
        appInstance = std::shared_ptr<Application>(new Application());
    }
    return self;
}
Alright, now we actually have a runnable app. You might see a black screen or flickering.

Let's start drawing an awesome triangle (in C++).
My application header
#ifndef Application_hpp
#define Application_hpp

#include <iostream>
#include <OpenGL/gl3.h>

class Application{
private:
    GLuint program;
    GLuint vao;
public:
    Application();
    void update();
    ~Application();
};

#endif /* Application_hpp */
The implementation:
Application::Application(){
    static const char * vs_source[] = {
        "#version 410 core                                                 \n"
        "                                                                  \n"
        "void main(void)                                                   \n"
        "{                                                                 \n"
        "    const vec4 vertices[] = vec4[](vec4( 0.25, -0.25, 0.5, 1.0),  \n"
        "                                   vec4(-0.25, -0.25, 0.5, 1.0),  \n"
        "                                   vec4( 0.25,  0.25, 0.5, 1.0)); \n"
        "                                                                  \n"
        "    gl_Position = vertices[gl_VertexID];                          \n"
        "}                                                                 \n"
    };

    static const char * fs_source[] = {
        "#version 410 core                                                 \n"
        "                                                                  \n"
        "out vec4 color;                                                   \n"
        "                                                                  \n"
        "void main(void)                                                   \n"
        "{                                                                 \n"
        "    color = vec4(0.0, 0.8, 1.0, 1.0);                             \n"
        "}                                                                 \n"
    };

    program = glCreateProgram();

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, fs_source, NULL);
    glCompileShader(fs);

    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, vs_source, NULL);
    glCompileShader(vs);

    glAttachShader(program, vs);
    glAttachShader(program, fs);

    glLinkProgram(program);

    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
}

void Application::update(){
    static const GLfloat green[] = { 0.0f, 0.25f, 0.0f, 1.0f };
    glClearBufferfv(GL_COLOR, 0, green);

    glUseProgram(program);
    glDrawArrays(GL_TRIANGLES, 0, 3);
}

Application::~Application(){
    glDeleteVertexArrays(1, &vao);
    glDeleteProgram(program);
}
Now we only need to call update over and over again (if you want something to move). Implement the following in your Objective-C class:
-(void) drawLoop:(NSTimer*) timer{
    if(shouldStop){
        [self close];
        return;
    }
    if([self isVisible]){
        appInstance->update();
        [glView update];
        [[glView openGLContext] flushBuffer];
    }
}
And add this method to the implementation of your Objective-C class:
- (void)applicationDidFinishLaunching:(NSNotification *)notification {
    [NSTimer scheduledTimerWithTimeInterval:0.000001
                                     target:self
                                   selector:@selector(drawLoop:)
                                   userInfo:nil
                                    repeats:YES];
}
This will call the update function of your C++ class over and over again (nominally every 0.000001 seconds; in practice, as often as the run loop allows).
To finish up we close the window when the close button is pressed:
- (BOOL)applicationShouldTerminateAfterLastWindowClosed:(NSApplication *)theApplication{
    return YES;
}

- (void)applicationWillTerminate:(NSNotification *)aNotification{
    shouldStop = YES;
}
Congratulations, now you have an awesome window with an OpenGL triangle without any third-party frameworks.
Creating a Window with OpenGL context (extension loading through GLEW):
#define GLEW_STATIC
#include <stdio.h>   /* needed for printf */
#include <GL/glew.h>
#include <SDL2/SDL.h>

int main(int argc, char* argv[])
{
    SDL_Init(SDL_INIT_VIDEO); /* Initialises Video Subsystem in SDL */

    /* Setting up OpenGL version and profile details for context creation */
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, SDL_GL_CONTEXT_PROFILE_CORE);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MAJOR_VERSION, 3);
    SDL_GL_SetAttribute(SDL_GL_CONTEXT_MINOR_VERSION, 2);

    /* A 800x600 window. Pretty! */
    SDL_Window* window = SDL_CreateWindow
        (
        "SDL Context",
        SDL_WINDOWPOS_UNDEFINED, SDL_WINDOWPOS_UNDEFINED,
        800, 600,
        SDL_WINDOW_OPENGL
        );

    /* Creating OpenGL Context */
    SDL_GLContext gl_context = SDL_GL_CreateContext(window);

    /* Loading Extensions */
    glewExperimental = GL_TRUE;
    glewInit();

    /* The following code is for error checking.
     * If OpenGL has initialised properly, this should print 1.
     * Remove it in production code. */
    GLuint vertex_buffer;
    glGenBuffers(1, &vertex_buffer);
    printf("%u\n", vertex_buffer);
    /* Error checking ends here */

    /* Main Loop */
    SDL_Event window_event;
    while(1) {
        if (SDL_PollEvent(&window_event)) {
            if (window_event.type == SDL_QUIT) {
                /* If user is exiting the application */
                break;
            }
        }
        /* Swap the front and back buffer for flicker-free rendering */
        SDL_GL_SwapWindow(window);
    }

    /* Freeing Memory */
    glDeleteBuffers(1, &vertex_buffer);
    SDL_GL_DeleteContext(gl_context);
    SDL_Quit();

    return 0;
}
Full example code included at the end
WGL (can be pronounced wiggle) stands for "Windows-GL", as in "an interface between Windows and OpenGL" - a set of functions from the Windows API to communicate with OpenGL. WGL functions have a wgl prefix and its tokens have a WGL_ prefix.
The default OpenGL version supported on Microsoft systems is 1.1. That is a very old version (the most recent one is 4.5). The way to get the most recent versions is to update your graphics drivers, but your graphics card must support those new versions.
Full list of WGL functions can be found here.
GDI (today updated to GDI+) is a 2D drawing interface that allows you to draw onto a window in Windows. You need GDI to initialize OpenGL and allow it to interact with it (but will not actually use GDI itself).
In GDI, each window has a device context (DC) that is used to identify the drawing target when calling functions (you pass it as a parameter). However, OpenGL uses its own rendering context (RC). So, DC will be used to create RC.
So for doing things in OpenGL, we need RC, and to get RC, we need DC, and to get DC we need a window. Creating a window using the Windows API requires several steps. This is a basic routine, so for a more detailed explanation, you should consult other documentation, because this is not about using the Windows API.
This is a Windows setup, so Windows.h must be included, and the entry point of the program must be the WinMain procedure with its parameters. The program also needs to be linked to opengl32.dll and to gdi32.dll (regardless of whether you are on a 64 or 32 bit system).
First we need to describe our window using the WNDCLASS structure. It contains information about the window we want to create:
/* REGISTER WINDOW */
WNDCLASS window_class;

// Clear all structure fields to zero first
ZeroMemory(&window_class, sizeof(window_class));

// Define fields we need (others will be zero)
window_class.style = CS_OWNDC;
window_class.lpfnWndProc = window_procedure; // To be introduced later
window_class.hInstance = instance_handle;
window_class.lpszClassName = TEXT("OPENGL_WINDOW");

// Give our class to Windows
RegisterClass(&window_class);
/* *************** */
For a precise explanation of the meaning of each field (and for a full list of fields), consult the MSDN documentation.
Then, we can create a window using CreateWindowEx. After the window is created, we can acquire its DC:

/* CREATE WINDOW */
HWND window_handle = CreateWindowEx(WS_EX_OVERLAPPEDWINDOW,
                                    TEXT("OPENGL_WINDOW"),
                                    TEXT("OpenGL window"),
                                    WS_OVERLAPPEDWINDOW,
                                    0, 0, 800, 600,
                                    NULL, NULL,
                                    instance_handle, NULL);

HDC dc = GetDC(window_handle);

ShowWindow(window_handle, SW_SHOW);
/* ************* */
Finally, we need to create a message loop that receives window events from the OS:
/* EVENT PUMP */
MSG msg;

while (true) {
    if (PeekMessage(&msg, window_handle, 0, 0, PM_REMOVE)) {
        if (msg.message == WM_QUIT)
            break;

        TranslateMessage(&msg);
        DispatchMessage(&msg);
    }

    // draw(); <- there goes your drawing

    SwapBuffers(dc); // To be mentioned later
}
/* ********** */
OpenGL needs to know some information about our window, such as color bitness, buffering method, and so on. For this, we use a pixel format. However, we can only suggest to the OS what kind of a pixel format we need, and the OS will supply the most similar supported one, we don't have direct control over it. That is why it is only called a descriptor.
/* PIXELFORMATDESCRIPTOR */
PIXELFORMATDESCRIPTOR descriptor;

// Clear all structure fields to zero first
ZeroMemory(&descriptor, sizeof(descriptor));

// Describe our pixel format
descriptor.nSize = sizeof(descriptor);
descriptor.nVersion = 1;
descriptor.dwFlags = PFD_DRAW_TO_WINDOW | PFD_DOUBLEBUFFER | PFD_SUPPORT_OPENGL;
descriptor.iPixelType = PFD_TYPE_RGBA;
descriptor.cColorBits = 32;
descriptor.cDepthBits = 24;
descriptor.cStencilBits = 8;

// Ask the OS for the most similar supported format and set it
int pixel_format = ChoosePixelFormat(dc, &descriptor);
SetPixelFormat(dc, pixel_format, &descriptor);
/* *********************** */
We've enabled double buffering in the dwFlags field, so we must call SwapBuffers in order to see things after drawing.
After that, we can simply create our rendering context:
/* RENDERING CONTEXT */ HGLRC rc = wglCreateContext(dc); wglMakeCurrent(dc, rc); /* ***************** */
Note that only one thread can use the RC at a time. If you wish to use it from another thread later, you must call wglMakeCurrent there to activate it again (this will deactivate it on the thread where it's currently active, and so on).
OpenGL functions are obtained by using function pointers. The general procedure is:

1. Somehow obtain the function pointer types (essentially the function prototypes).
2. Declare each function we would like to use, with its function pointer type.
3. Obtain the actual function, and assign it to the function pointer.
For example, consider glBegin:
// We need to somehow find something that contains something like this,
// as we can't know all the OpenGL function prototypes
typedef void (APIENTRY *PFNGLBEGINPROC)(GLenum);

// After that, we need to declare the function in order to use it
PFNGLBEGINPROC glBegin;

// And finally, we need to somehow make it an actual function
("PFN" means "pointer to function", then follows the name of an OpenGL function, and "PROC" at the end - that is the usual OpenGL function pointer type name.)
Here's how it's done on Windows. As mentioned previously, Microsoft only ships OpenGL 1.1. First, function pointer types for that version can be found by including GL/gl.h. After that, we declare all the functions we intend to use as shown above (doing that in a header file and declaring them "extern" would allow us to use them all after loading them once, just by including it). Finally, loading the OpenGL 1.1 functions is done by opening the DLL:
HMODULE gl_module = LoadLibrary(TEXT("opengl32.dll"));

/* Load all the functions here */
glBegin = (PFNGLBEGINPROC)GetProcAddress(gl_module, "glBegin");
// ...
/* *************************** */

FreeLibrary(gl_module);
However, we probably want a little bit more than OpenGL 1.1. But Windows doesn't give us the function prototypes or exported functions for anything above that. The prototypes need to be acquired from the OpenGL registry. There are three files of interest to us: GL/glext.h, GL/glcorearb.h, and GL/wglext.h.
In order to complete GL/gl.h provided by Windows, we need GL/glext.h. It contains (as described by the registry) "OpenGL 1.2 and above compatibility profile and extension interfaces" (more about profiles and extensions later, where we'll see that it's actually not a good idea to use those two files).
The actual functions need to be obtained by wglGetProcAddress (no need for opening the DLL for this one, they aren't in there, just use the function). With it, we can fetch all the functions from OpenGL 1.2 and above (but not 1.1). Note that, in order for it to function properly, the OpenGL rendering context must be created and made current. So, for example, glClear:
// Include the header from the OpenGL registry for function pointer types

// Declare the functions, just like before
PFNGLCLEARPROC glClear;
// ...

// Get the function
glClear = (PFNGLCLEARPROC)wglGetProcAddress("glClear");
We can actually build a wrapper get_proc procedure that uses both wglGetProcAddress and GetProcAddress:
// Get function pointer
void* get_proc(const char *proc_name)
{
    void *proc = (void*)wglGetProcAddress(proc_name);
    if (!proc) proc = (void*)GetProcAddress(gl_module, proc_name); // gl_module must be somewhere in reach

    return proc;
}
So to wrap up, we would create a header file full of function pointer declarations like this:
extern PFNGLCLEARCOLORPROC glClearColor;
extern PFNGLCLEARDEPTHPROC glClearDepth;
extern PFNGLCLEARPROC glClear;
extern PFNGLCLEARBUFFERIVPROC glClearBufferiv;
extern PFNGLCLEARBUFFERFVPROC glClearBufferfv;
// And so on...
We can then create a procedure like load_gl_functions that we call only once, and which works like so:
glClearColor = (PFNGLCLEARCOLORPROC)get_proc("glClearColor");
glClearDepth = (PFNGLCLEARDEPTHPROC)get_proc("glClearDepth");
glClear = (PFNGLCLEARPROC)get_proc("glClear");
glClearBufferiv = (PFNGLCLEARBUFFERIVPROC)get_proc("glClearBufferiv");
glClearBufferfv = (PFNGLCLEARBUFFERFVPROC)get_proc("glClearBufferfv");
And you're all set! Just include the header with the function pointers and GL away.
OpenGL has been in development for over 20 years, and the developers were always strict about backwards compatibility (BC). Adding a new feature is very hard because of that. Thus, in 2008, it was separated into two "profiles". Core and compatibility. Core profile breaks BC in favor of performance improvements and some of the new features. It even completely removes some legacy features. Compatibility profile maintains BC with all versions down to 1.0, and some new features are not available on it. It is only to be used for old, legacy systems, all new applications should use the core profile.
Because of that, there is a problem with our basic setup - it only provides the context that is backwards compatible with OpenGL 1.0. The pixel format is limited too. There is a better approach, using extensions.
Any addition to the original functionality of OpenGL are called extensions. Generally, they can either make some things legal that weren't before, extend parameter value range, extend GLSL, and even add completely new functionality.
There are three major groups of extensions: vendor, EXT, and ARB. Vendor extensions come from a specific vendor, and they have a vendor specific mark, like AMD or NV. EXT extensions are made by several vendors working together. After some time, they may become ARB extensions, which are all the officially supported ones and ones approved by ARB.
To acquire function pointer types and function prototypes of all the extensions and, as mentioned before, all the function pointer types from OpenGL 1.2 and greater, one must download the header files from the OpenGL registry. As discussed, for new applications it's better to use the core profile, so it would be preferable to include GL/glcorearb.h instead of GL/gl.h and GL/glext.h (if you are using GL/glcorearb.h, then don't include GL/gl.h).
There are also extensions for WGL, in GL/wglext.h. For example, the function for getting the list of all supported extensions is actually an extension itself, wglGetExtensionsStringARB (it returns a big string with a space-separated list of all the supported extensions).
Getting extensions is handled via wglGetProcAddress too, so we can just use our wrapper like before.
The WGL_ARB_pixel_format extension allows us advanced pixel format creation. Unlike before, we don't use a struct. Instead, we pass a list of wanted attributes:

int pixel_format_attributes[] = {
    WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
    WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
    WGL_DOUBLE_BUFFER_ARB,  GL_TRUE,
    WGL_PIXEL_TYPE_ARB,     WGL_TYPE_RGBA_ARB,
    WGL_COLOR_BITS_ARB,     32,
    WGL_DEPTH_BITS_ARB,     24,
    WGL_STENCIL_BITS_ARB,   8,
    0
};

int pixel_format;
UINT pixel_format_count;
BOOL result = wglChoosePixelFormatARB(dc, pixel_format_attributes, NULL,
                                      1, &pixel_format, &pixel_format_count);
Similarly, the WGL_ARB_create_context extension allows us advanced context creation:

GLint context_attributes[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 3,
    WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
    0
};

HGLRC new_rc = wglCreateContextAttribsARB(dc, 0, context_attributes);
For a precise explanation of the parameters and functions, consult the OpenGL specification.
Why didn't we just start off with them? Well, that's because the extensions allow us to do this, and to get extensions we need wglGetProcAddress, but that only works with an active valid context. So in essence, before we are able to create the context we want, we need to have some context active already, and it's usually referred to as a dummy context.
However, Windows doesn't allow setting the pixel format of a window more than once. Because of that, the window needs to be destroyed and recreated in order to apply new things:
wglMakeCurrent(dc, NULL);
wglDeleteContext(rc);
ReleaseDC(window_handle, dc);
DestroyWindow(window_handle);

// Recreate the window...
Full example code:
/* We want the core profile, so we include GL/glcorearb.h. When including that,
   then GL/gl.h should not be included. If using compatibility profile, then
   GL/gl.h and GL/glext.h need to be included. GL/wglext.h gives WGL extensions.
   Note that Windows.h needs to be included before them. */
#include <cstdio>
#include <Windows.h>
#include <GL/glcorearb.h>
#include <GL/wglext.h>

LRESULT CALLBACK window_procedure(HWND, UINT, WPARAM, LPARAM);
void* get_proc(const char*);

/* gl_module is for opening the DLL, and the quit flag is here to prevent
   quitting when recreating the window (see the window_procedure function) */
HMODULE gl_module;
bool quit = false;

/* OpenGL function declarations. In practice, we would put these in a separate
   header file and add "extern" in front, so that we can use them anywhere
   after loading them only once. */
PFNWGLGETEXTENSIONSSTRINGARBPROC wglGetExtensionsStringARB;
PFNWGLCHOOSEPIXELFORMATARBPROC wglChoosePixelFormatARB;
PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB;
PFNGLGETSTRINGPROC glGetString;

int WINAPI WinMain(HINSTANCE instance_handle, HINSTANCE prev_instance_handle, PSTR cmd_line, int cmd_show) {
    /* REGISTER WINDOW */
    WNDCLASS window_class;

    // Clear all structure fields to zero first
    ZeroMemory(&window_class, sizeof(window_class));

    // Define fields we need (others will be zero)
    window_class.style = CS_HREDRAW | CS_VREDRAW | CS_OWNDC;
    window_class.lpfnWndProc = window_procedure;
    window_class.hInstance = instance_handle;
    window_class.lpszClassName = TEXT("OPENGL_WINDOW");

    // Give our class to Windows
    RegisterClass(&window_class);
    /* *************** */

    /*);
    /* *********************** */

    /* RENDERING CONTEXT */
    HGLRC rc = wglCreateContext(dc);
    wglMakeCurrent(dc, rc);
    /* ***************** */

    /* LOAD FUNCTIONS (should probably be put in a separate procedure) */
    gl_module = LoadLibrary(TEXT("opengl32.dll"));

    wglGetExtensionsStringARB = (PFNWGLGETEXTENSIONSSTRINGARBPROC)get_proc("wglGetExtensionsStringARB");
    wglChoosePixelFormatARB = (PFNWGLCHOOSEPIXELFORMATARBPROC)get_proc("wglChoosePixelFormatARB");
    wglCreateContextAttribsARB = (PFNWGLCREATECONTEXTATTRIBSARBPROC)get_proc("wglCreateContextAttribsARB");
    glGetString = (PFNGLGETSTRINGPROC)get_proc("glGetString");

    FreeLibrary(gl_module);
    /* ************** */

    /* PRINT VERSION */
    const GLubyte *version = glGetString(GL_VERSION);
    printf("%s\n", version);
    fflush(stdout);
    /* ******* */

    /* NEW PIXEL FORMAT*/);

    if (!result) {
        printf("Could not find pixel format\n");
        fflush(stdout);
        return 0;
    }
    /* **************** */

    /* RECREATE WINDOW */
    wglMakeCurrent(dc, NULL);
    wglDeleteContext(rc);
    ReleaseDC(window_handle, dc);
    DestroyWindow(window_handle);

    window_handle = CreateWindowEx(WS_EX_OVERLAPPEDWINDOW,
                                   TEXT("OPENGL_WINDOW"),
                                   TEXT("OpenGL window"),
                                   WS_OVERLAPPEDWINDOW,
                                   0, 0, 800, 600,
                                   NULL, NULL, instance_handle, NULL);

    dc = GetDC(window_handle);

    ShowWindow(window_handle, SW_SHOW);
    /* *************** */

    /* NEW CONTEXT */
    GLint context_attributes[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 3,
        WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_CORE_PROFILE_BIT_ARB,
        0
    };

    rc = wglCreateContextAttribsARB(dc, 0, context_attributes);
    wglMakeCurrent(dc, rc);
    /* *********** */

    /* EVENT PUMP */
    MSG msg;

    while (true) {
        if (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE)) {
            if (msg.message == WM_QUIT)
                break;

            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }

        // draw(); <- there goes your drawing

        SwapBuffers(dc);
    }
    /* ********** */

    return 0;
}

// Procedure that processes window events
LRESULT CALLBACK window_procedure(HWND window_handle, UINT message, WPARAM param_w, LPARAM param_l) {
    /* When destroying the dummy window, WM_DESTROY message is going to be sent,
       but we don't want to quit the application then, and that is controlled by
       the quit flag. */
    switch (message) {
    case WM_DESTROY:
        if (!quit)
            quit = true;
        else
            PostQuitMessage(0);
        return 0;
    }

    return DefWindowProc(window_handle, message, param_w, param_l);
}

/* A procedure for getting OpenGL functions and OpenGL or WGL extensions.
   When looking for OpenGL 1.2 and above, or extensions, it uses
   wglGetProcAddress, otherwise it falls back to GetProcAddress. */
void* get_proc(const char *proc_name) {
    void *proc = (void*)wglGetProcAddress(proc_name);
    if (!proc)
        proc = (void*)GetProcAddress(gl_module, proc_name);

    return proc;
}
Compiled with
g++ GLExample.cpp -lopengl32 -lgdi32 with MinGW/Cygwin, or
cl GLExample.cpp opengl32.lib gdi32.lib user32.lib with the MSVC compiler. Make sure, however, that the headers from the OpenGL registry are in the include path. If not, use the -I flag for g++ or /I for cl to tell the compiler where they are.
One of the most common misconceptions about OpenGL is that it is a library that can be installed from 3rd party sources. This misconception leads to many questions in the form "how do I install OpenGL" or "where do I download the OpenGL SDK".
This is not how OpenGL finds its way into a computer system. OpenGL by itself is merely a set of specifications on what commands an implementation must follow. So it's the implementation that matters. And for the time being, OpenGL implementations are part of the GPU drivers. This might change in the future, when new GPU programming interfaces allow OpenGL to be truly implemented as a library, but for now it's a programming API towards the graphics drivers.
When OpenGL was first released, the API somehow found its way into the ABI (Application Binary Interface) contract of Windows, Solaris and Linux (LSB-4 Desktop), in addition to its origin, SGI IRIX. Apple followed and in fact integrated OpenGL so deeply into Mac OS X that the OpenGL version available is tightly coupled to the version of Mac OS X installed. This has the notable effect that system programming environments for these operating systems (i.e. the compiler and linker toolchain that natively targets these systems) must also deliver the OpenGL API definitions, so it is not necessary to install an SDK for OpenGL. It is technically possible to program OpenGL on these operating systems without installing a dedicated SDK, assuming that a build environment following the targeted ABI is installed.
A side effect of these strict ABI rules is, that the OpenGL version exposed through the binding interface is a lowest common denominator that programs running on the target platform may expect to be available. Hence modern OpenGL features are to be accessed through the extension mechanism, which is described in depth separately.
In Linux it is quite common to compartmentalize the development packages for different aspects of the system, so that these can be updated individually. In most Linux distributions the development files for OpenGL are contained in a dedicated package, which is usually a dependency of a desktop-application-development meta-package. So installing the OpenGL development files for Linux is usually taken care of by installing the desktop development meta-package(s).
The API binding library
opengl32.dll (named so for both the 32-bit and 64-bit versions of Windows) is shipped by default with every Windows version since Windows NT 4 and Windows 95B (both ca. 1997). However, this DLL does not provide an actual OpenGL implementation (apart from a software fallback whose sole purpose is to act as a safety net for programs if no other OpenGL implementation is installed). This DLL belongs to Windows and must not be altered or moved! Modern OpenGL versions are shipped as part of the so-called Installable Client Driver (ICD) and accessed through the default
opengl32.dll that comes pre-installed with every version of Windows. It was decided internally by Microsoft, however, that graphics drivers installed through Windows Update would not install or update an OpenGL ICD. As such, fresh installations of Windows with automatically installed drivers lack support for modern OpenGL features. To obtain an OpenGL ICD with modern features, graphics drivers must be downloaded directly from the GPU vendor's website and installed manually.
Regarding development, no extra steps need to be taken per se. All C/C++ compilers following the Windows ABI specifications ship with the headers and the linker stub (opengl32.lib) required to build and link executables that make use of OpenGL.
1. Install GLFW
The first step is to create an OpenGL window. GLFW is an open-source, multi-platform library for creating windows with OpenGL. To install GLFW, first download its files from
Extract the GLFW folder and its contents will look like this
Download and install CMake to build GLFW. Go to the CMake download page, download CMake, and install it for Mac OS X.
If Xcode is not installed. Download and install Xcode from Mac App Store.
Create a new folder Build inside the GLFW folder
Open CMake, click on the Browse Source button and select the GLFW folder (make sure that CMakeLists.txt is located inside that folder). After that, click on the Browse Build button and select the newly created Build folder from the previous step.
Now Click on Configure button and select Xcode as generator with Use default native compilers option, and click Done.
Tick the BUILD_SHARED_LIBS option, then click on the Configure button again, and finally click the Generate button.
After generation CMake should look like this
Now open Finder and go to /usr. Create a folder named local if it is not already there. Open the local folder and create two folders, include and lib, if they are not already there.
Now open the GLFW folder and go to Build (where CMake built the files). Open the GLFW.xcodeproj file in Xcode.
Select install > My Mac and then click on run (Play shaped button).
It is now successfully installed (ignore the warnings).
To make sure, open Finder and go to the /usr/local/lib folder; three GLFW library files will already be present there (if not, open the Build folder inside the GLFW folder, go to src/Debug, and copy all the files to /usr/local/lib).
Open Finder and go to /usr/local/include, and a GLFW folder will already be present there with two header files inside it, named glfw3.h and glfw3native.h.
2. Install GLEW
GLEW is a cross-platform library that helps in querying and loading OpenGL extensions. It provides run-time mechanisms for determining which OpenGL extensions are supported on the target platform. It is only for modern OpenGL (OpenGL version 3.2 and greater which requires functions to be determined at runtime). To install first download its files from glew.sourceforge.net
Extract the GLEW folder and its contents will look like this.
Now open Terminal, navigate to GLEW Folder and type the following commands
make
sudo make install
make clean
Now GLEW is successfully installed. To make sure it's installed, open Finder, go to /usr/local/include, and a GL folder will already be present there with three header files inside it, named glew.h, glxew.h and wglew.h.
Open Finder and go to /usr/local/lib, and the GLEW library files will already be present there.
3. Test and Run
Now we have successfully installed GLFW and GLEW. It's time to code. Open Xcode and create a new Xcode project. Select Command Line Tool, then proceed to the next step and select C++ as the language.
Xcode will create a new command line project.
Click on the project name, and under the Build Settings tab switch from Basic to All; under the Search Paths section, add /usr/local/include to Header Search Paths and /usr/local/lib to Library Search Paths.
Click on the project name, then under the Build Phases tab, under Link Binary With Libraries, add OpenGL.framework and also add the recently created GLFW and GLEW libraries from /usr/local/lib.
Now we are ready to code in modern OpenGL 4.1 on macOS using C++ and Xcode. The following code will create an OpenGL window using GLFW, with a blank screen output.
#include <GL/glew.h>
#include <GLFW/glfw3.h>

// Define main function
int main() {
    // Initialize GLFW
    glfwInit();

    // Define version and compatibility settings
);

    // Create OpenGL window and context
    GLFWwindow* window = glfwCreateWindow(800, 600, "OpenGL", NULL, NULL);
    glfwMakeContextCurrent(window);

    // Check for window creation failure
    if (!window) {
        // Terminate GLFW
        glfwTerminate();
        return 0;
    }

    // Initialize GLEW
    glewExperimental = GL_TRUE;
    glewInit();

    // Event loop
    while (!glfwWindowShouldClose(window)) {
        // Clear the screen to black
        glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
        glClear(GL_COLOR_BUFFER_BIT);
        glfwSwapBuffers(window);
        glfwPollEvents();
    }

    // Terminate GLFW
    glfwTerminate();
    return 0;
}
https://riptutorial.com/opengl/topic/814/getting-started-with-opengl
If you need to handle a unique constraint in a database table field when writing an add or edit process in a Play Framework application, I hope this example will be helpful. In the end I’ll show:
- How to write a Play Anorm query that performs a SQL INSERT on a database field that has a unique constraint
- How to write a Play controller method (Action) that handles that SQL exception
- How to create a new
Formafter you catch that exception
- How to add custom error messages to that form
- How to redirect control back to your Play template
If you need to see any of those things, I hope the following example is helpful.
The database method
In my case I’m writing a URL-shortening application and I have a unique constraint on a table column named
short_uri, so I wrote my database-insert method with Anorm to look like this:
// uses Try b/c `short_uri` has a unique constraint, which can cause an exception
def insertNewUrl(u: Url): Try[Option[Long]] = db.withConnection { implicit c =>
    Try {
        val q = SQL"""
            insert into urls (user_id, long_url, short_uri, notes)
            values (1, ${u.longUrl}, ${u.shortUrl}, ${u.notes})
        """
        q.executeInsert()  // returns the primary key as an Option
    }
}
I catch the possible unique constraint exception with
Try. Because
executeInsert returns the primary key inside an
Option, this method returns the type
Try[Option[Long]]. The key here for the purpose of this example is that the SQL INSERT can fail because of the unique constraint, so I handle the possible exception with
Try.
The controller method/action
For me, the harder part in this case is to know what to do in your controller action code. I’m not going to try to explain it all, I’m just going to show my code, and then offer a little explanation:
def handleAddFormSubmission = authenticatedUserAction { implicit request =>
    val formDidNotValidateFunction = { formWithErrors: Form[Url] =>
        BadRequest(views.html.editUrl(0, formWithErrors, addFormSubmitUrl))
    }
    val formValidatedFunction = { data: Url =>
        // form data is valid, try saving to the database
        val url = Url(
            data.id,
            data.longUrl,
            data.shortUrl,
            data.notes
        )
        // there’s a Try here b/c a SQL INSERT can fail due to database constraints
        val pkInOptionInTry: Try[Option[Long]] = urlDao.insertNewUrl(url)
        pkInOptionInTry match {
            case Success(maybePk) => {
                // no exception was thrown, so `maybePk` probably has the primary key
                Redirect(routes.UrlAdminController.list())
                    .flashing("info" -> s"URL '${data.shortUrl}' was added.")
            }
            case Failure(e) => {
                // an exception was thrown, so the SQL INSERT failed, probably a duplicate key error
                val formBuiltFromRequest: Form[Url] = form.bindFromRequest()
                val newForm = formBuiltFromRequest.copy(
                    errors = formBuiltFromRequest.errors ++ Seq(
                        FormError("short_uri", "Probably have a duplicate URI in the database."),
                        FormError("short_uri", e.getMessage)
                    )
                )
                BadRequest(views.html.editUrl(0, newForm, addFormSubmitUrl))
            }
        }
    }
    val formValidationResult: Form[Url] = form.bindFromRequest
    formValidationResult.fold(
        formDidNotValidateFunction,
        formValidatedFunction
    )
}
One big key in this method is knowing that you can use
Try to easily handle the exception that can be returned by the database access method.
Creating a new Play Form after the error
A second big key is knowing that you can rebuild a
Form with either of these two approaches:
val newForm: Form[Url] = form.bindFromRequest()
val newForm: Form[Url] = form.fill(data)
I haven’t looked into the differences between those two possible approaches, but it took me a while to figure out those approaches, so hopefully that knowledge will save someone else time in the future. The key here is that you need to build a new form after the exception that you can pass to your template, and this is how you do that.
Adding custom error messages to the form
Another big key was finding out that methods like
form.withError(...) and
form.withGlobalError(...) don’t work as expected, so I had to write code like this to get my custom error messages to show up in the Play template:
val newForm = formBuiltFromRequest.copy(
    errors = formBuiltFromRequest.errors ++ Seq(
        FormError("short_uri", "Probably have a duplicate URI in the database."),
        FormError("short_uri", e.getMessage)
    )
)
You probably only need one error message there, but since I’m writing this application for myself I don’t mind seeing the output from
e.getMessage. Note that this approach binds those error messages to the
short_uri field in the template.
The Play template
You might not need to see the Play Framework template to understand what I just showed, but it may help to see the template code, so here it is:
@(
    urlId: Long,
    form: Form[Url],
    postUrl: Call
)(implicit request: RequestHeader, messagesProvider: MessagesProvider)

<!DOCTYPE html>
<html lang="en">
<head>
    <link rel="stylesheet" media="screen" href="@routes.Assets.versioned("stylesheets/main.css")">
    <link rel="stylesheet" media="screen" href="@routes.Assets.versioned("stylesheets/admin.css")">
</head>
<body id="edit-url">
<div id="content">
    <div id="edit-url-form">

        <h1>URL Shortener</h1>

        @* Flash shows updates to a page *@

        @* id is 0 for 'add', and is set by the 'edit' process *@
        <input type="hidden" name="id" value='@urlId'>

        @helper.inputText(
            form("long_url"),
            '_label -> "Long URL",
            'placeholder -> "the original, long url",
            'id -> "long_url",
            'size -> 68
        )

        @helper.inputText(
            form("short_uri"),
            '_label -> "Short URI",
            'placeholder -> "the short uri you want",
            'id -> "short_uri",
            'size -> 68
        )

        @helper.textarea(
            form("notes"),
            '_label -> "Notes",
            'id -> "notes",
            'rows -> 5,
            'cols -> 60
        )

        <button>Save</button>
        }

    </div>
</div>
</body>
</html>
I’m not going to bother to explain that code at all, other than to say that the
short_uri field in the template is where my custom error message will show up.
As a final note, here’s what those custom form error messages look like in the template (a data-entry form):
Summary
In summary, if you wanted to see how to handle Play Framework things like how to handle a SQL exception related to a unique constraint violation; how to create a new form after an exception like that; and how to add custom error messages to a Play form, I hope this example is helpful.
https://alvinalexander.com/scala/play-framework-controller-action-sql-exception-create-form-custom-errors
SwiftYNAB
SwiftYNAB is a Swift framework for iOS/macOS/WatchOS/tvOS for accessing the You Need a Budget API. It currently supports all endpoints made available by the API.
You can browse the online documentation here to see what features this framework offers.
How to use it
CocoaPods
- Create a new project in Xcode
- Add a
Podfileto the root directory of your project with the following contents:
use_frameworks!

target :'Test' do
    pod 'SwiftYNAB', :git => ''
end
- Run
pod install
Swift Package Manager
You can also use the Swift Package Manager. It's especially easy with Xcode 11 where adding a package dependency is as simple as choosing File > Swift Packages > Add Package Dependency.
Trying it out
Personal API access token
The project comes with a small iOS demo that shows you how to use the framework. If you want to try that out, or if you want to write your own code, you will need a personal API access token.
Make sure you go here and get one:
Sample code
Once you have your personal access token, you can use it to try out the framework. Start by creating a new project and at the top of the file where you plan to use SwiftYNAB, add:
import SwiftYNAB
Then, you can try it out by writing something like:
let ynab = YNAB(accessToken: "TOKEN_GOES_HERE")

ynab.budgets.getBudgets() { (budgets, error) in
    if let budgets = budgets {
        for budget in budgets {
            print(budget.name)
        }
    } else {
        print("Uh oh, something went wrong")
    }
}
https://swiftpack.co/package/andrebocchini/swiftynab
11 posts in this topic
Similar Content
- By paullauze
i just got a new windows 7 64 bit machine at work. I get an error when compiling with #include <AD.au3>
excerpt from console:
xxxAuto ItIncludeAD.au3(423,15) : ERROR: undefined macro.
If @OSArch =
~~~~~~~~~~~^
xxxAuto ItIncludeAD.au3(423,35) : ERROR: undefined macro.
If @OSArch = "IA64" Or @OSArch =
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
xxxAuto ItIncludeAD.au3(1290,66) : ERROR: _ArrayToString() called with wrong number of args.
$aAD_Objects[$iCount2][$iCount1 - 1] = _ArrayToString($aTemp)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
C:\Program Files (x86)\AutoIt3\Include\Array.au3(808,87) : REF: definition of _ArrayToString().
Func _ArrayToString(Const ByRef $avArray, $sDelim, $iStart = Default, $iEnd = Default)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
xxxAuto ItIncludeAD.au3(1306,51) : ERROR: _ArrayToString() called with wrong number of args.
$aAD_Objects[$iCount2] = _ArrayToString($aTemp)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^
even if all I am compiling is the includes
#include <Word.au3>
#include <Excel.au3>
#include <AD.au3>
it has always worked fine on XP, so this must have something to do with Windows 7 or being 64 bit.
https://www.autoitscript.com/forum/topic/184779-any-clever-way-to-convert-hex-to-uint64-and-beyond-any-udf-that-does-it-solved/
Daniel Nielsen is an Embedded Software Engineer. He is currently using D in his spare time for an unpublished Roguelike and warns that he “may produce bursts of D Evangelism”.
I remember one day in my youth, before the dawn of Internet, telling my teachers about “my” new algorithm, only to learn it had been discovered by the ancient Greeks in ~300 BC. This is the story of my life and probably of many who are reading this. It is easy to “invent” something; being the first, not so much!
Anyway, this is what all the fuss is about this time:
template from(string moduleName) {
    mixin("import from = " ~ moduleName ~ ";");
}
The TL;DR version: A new idiom to achieve even lazier imports.
Before the C programmers start running for the hills, please forget you ever got burned by C++ templates. The above snippet doesn’t look that complicated, now does it? If you enjoy inventing new abstractions, take my advice and give D a try. Powerful, yet an ideal beginner’s language. No need to be a template archwizard.
Before we proceed further, I’d like to call out Andrei Alexandrescu for identifying that there is a problem which needs solving. Please see his in depth motivation in DIP 1005. Many thanks also to Dominikus Dittes Scherkl, who helped trigger the magic spark by making his own counter proposal and questioning if there really is a need to change the language specification in order to obtain Dependency-Carrying Declarations (DIP 1005).
D, like many modern languages, has a fully fledged module system where symbols are directly imported (unlike the infamous C
#include). This has ultimately resulted in the widespread use of local imports, limiting the scope as much as possible, in preference to the somewhat slower and less maintainable module-level imports:
// A module-level import
import std.datetime;

void fun(SysTime time) {
    import std.stdio; // A local import
    ...
}
Similar lazy import idioms are possible in other languages, for instance Python.
The observant among you might notice that because
SysTime is used as the type of a function parameter,
std.datetime must be imported at module level. Which brings us to the point of this blog post (and DIP 1005). How can we get around that?
void fun(from!"std.datetime".SysTime time) { import std.stdio; ... }
There you have it, the Scherkl-Nielsen self-important lookup.
In order to fully understand what’s going on, you may need to learn some D-isms. Let’s break it down.
- When instantiating a template (via the
!operator), if the TemplateArgument is one token long, the parentheses can be omitted from the template parameters. So
from!"std.datetime"is the same as
from!("std.datetime"). It may seem trivial, but you’d be surprised how much readability is improved by avoiding ubiquitous punctuation noise.
- Eponymous templates. The declaration of a template looks like this:
template y() { int x; }
With that, you have to type
y!().xin order to reach the int. Oh, ze horror! Is that a smiley? Give me
xalready! That’s exactly what eponymous templates accomplish:
template x() { int x; }
Now that the template and its only member have the same name,
x!().xcan be shortened to simply
x.
- Renamed imports allow accessing an imported module via a user-specified namespace. Here,
std.stdiois imported normally:
void printSomething(string s) {
    import std.stdio;
    writeln(s);           // The normal way
    std.stdio.writeln(s); // An alternative using the fully qualified
                          // symbol name, for disambiguation
}
Now it’s imported and renamed as
io:
void printSomething(string s) {
    import io = std.stdio;
    io.writeln(s);         // Must be accessed like this
    writeln(s);            // Error
    std.stdio.writeln(s);  // Error
}
Combining what we have so far:
template dt() {
    import dt = std.datetime;
}

void fun(dt!().SysTime time) {}
It works perfectly fine. The only thing which remains is to make it generic.
- String concatenation is achieved with the
~operator.
string hey = "Hello," ~ " World!";
assert(hey == "Hello, World!");
- String mixins put the power of a compiler writer at your fingertips. Let’s generate code at compile time, then compile it. This is typically used for domain-specific languages (see Pegged for one prominent use of a DSL in D), but in our simple case we only need to generate one single statement based on the name of the module we want to import. Putting it all together, we get the final form, allowing us to import any symbol from any module inline:
template from(string moduleName) {
    mixin("import from = " ~ moduleName ~ ";");
}
In the end, is it all really worth the effort? Using one comparison made by Jack Stouffer:
import std.datetime;
import std.traits;

void func(T)(SysTime a, T value) if (isIntegral!T) {
    import std.stdio : writeln;
    writeln(a, value);
}
Versus:
void func(T)(from!"std.datetime".SysTime a, T value)
    if (from!"std.traits".isIntegral!T)
{
    import std.stdio : writeln;
    writeln(a, value);
}
In this particular case, the total compilation time dropped to ~30% of the original, while the binary size dropped to ~41% of the original.
What about the linker, I hear you cry? Sure, it can remove unused code. But it’s not always as easy as it sounds, in particular due to module constructors (think
__attribute__((constructor))). In either case, it’s always more efficient to avoid generating unused code in the first place rather than removing it afterwards.
So this combination of D features was waiting there to be used, but somehow no one had stumbled on it before. I agreed with the need Andrei identified for Dependency-Carrying Declarations, yet I wanted even more. I wanted Dependency-Carrying Expressions. My primary motivation comes from being exposed to way too much legacy C89 code.
void foo(void)
{
#ifdef XXX
    /* needed to silence unused variable warnings */
    int x;
#endif

    ... lots of code ...

#ifdef XXX
    x = bar();
#endif
}
Variables or modules, in the end they’re all just symbols. For the same reason C99 allowed declaring variables in the middle of functions, one should be allowed to import modules where they are first used. D already allows importing anywhere in a scope, but not in declarations or expressions. It was with this mindset that I saw Dominikus Dittes Scherkl’s snippet:
fun.ST fun() { import someModule.SomeType; alias ST = SomeType; ... }
Clever, yet for one thing it doesn’t adhere to the DRY principle. Still, it was that tiny dot in
fun.ST which caused the spark. There it was again, the Dependency-Carrying Expression of my dreams.
Criteria:
- It must not require repeating
fun, since that causes problems when refactoring
- It must be lazy
- It must be possible today with no compiler updates
Templates are the poster children of lazy constructs; they don’t generate any code until instantiated. So that seemed a good place to start.
Typically when using eponymous templates, you would have the template turn into a function, type, variable or alias. But why make the distinction? Once again, they’re all just symbols in the end. We could have used an alias to the desired module (see Scherkl’s snippet above); using the renamed imports feature is just a short-cut for import and alias. Maybe it was this simplified view of modules that made me see more clearly.
Now then, is this the only solution? No. As a challenge to the reader, try to figure out what this does and, more importantly, its flaw. Can you fix it?
static struct STD {
    template opDispatch(string moduleName) {
        mixin("import opDispatch = std." ~ moduleName ~ ";");
    }
}
3 thoughts on “A New Import Idiom”
I’d like to say, this could be a language feature instead of a design pattern. Make the compiler read imports that occur at the beginning of the function before parsing the function declaration. e.g.:
void fun(SysTime time) {
import std.datetime;
import std.stdio;
…
}
Actually, the D Improvement Proposal linked in the blog post (DIP 1005) was about adding a language feature to handle this. The idiom the post describes came about as part of a debate on the DIP (it’s not so simple a thing as just letting the compiler parse the local imports first — there was a lot of discussion on it). If it can be done cleanly in the library, there’s no need to add a feature.
The original forum discussion:
https://dlang.org/blog/2017/02/13/a-new-import-idiom/
I understand better now, thank you.

> after-change-major-mode ran to soon. (Maybe we should also print an
> error message in this case, which my current patch does not).

Yes, I think it's more important to signal a clear warning/error than to
try and auto-fix the problem.

BTW, why use "check-" as a prefix (rather than a "-check-" infix or
suffix), thus potentially breaking the usual namespace conventions?

> (I do not know how to do that without setting
> font-lock-defaults to nil.)

Yes, in that case we're screwed either way, which is why we need to
signal a warning/error so someone can fix the problem at its source.
The auto-fix you suggest has the shortcoming I mentioned but is probably
the right "best effort" solution because other solutions probably suffer
from more common/serious problems.

Please add some comments explaining how this code is only used to double
check erroneous situations and to try and salvage such "desperate" cases
(the presence of a warning/error should already make the code more
understandable).

        Stefan

PS: By warning/error I'm not sure what I mean, but it should be more
obnoxious than a (message "foo") and less than (error "foo"). Probably
something like (progn (message "foo") (ding) (sit-for 1)).
http://lists.gnu.org/archive/html/emacs-devel/2005-05/msg01213.html
rpc_if_id_vector_free - frees a vector and the interface identifier structures it contains
#include <dce/rpc.h>
void rpc_if_id_vector_free( rpc_if_id_vector_t **if_id_vector, unsigned32 *status);
Input/Output
- if_id_vector
- Specifies the address of a pointer to a vector of interface information. On success this argument is set to NULL.
Output
- status
- Returns the status code from this routine. The status code indicates whether the routine completed successfully, or if not, why not.
Possible status codes and their meanings include:
- rpc_s_ok
- Success.
The rpc_if_id_vector_free() routine frees the memory used to store a vector of interface identifiers when they have been obtained by calling either rpc_ns_mgmt_entry_inq_if_ids() or rpc_mgmt_inq_if_ids(). This freed memory includes memory used by the interface identifiers and the vector itself.
None.
rpc_if_inq_id().
http://pubs.opengroup.org/onlinepubs/9629399/rpc_if_id_vector_free.htm
Morning!
I've run into a snag when running the peach fuzzer. The python script runs fine when you don't evoke any command line options, but crashes when you attempt to test any of the pits.
For example, when I attempt to test any of the samples that come with peach, it cannot find the module 'psyco'. I've searched my whole BT4 R2 install, and it doesn't exist.
Here's the output:
I am running a pretty vanilla HDD install of BT4 R2 and I did attempt to update the system via apt-get update ; apt-get upgrade

Code:
root@bt:/pentest/fuzzers/peach# ./peach.py -t samples/HTTP.xml
] Peach 2.2.1 Runtime
Traceback (most recent call last):
File "./peach.py", line 251, in <module>
from Peach.Engine import *
File "/pentest/fuzzers/peach/Peach/__init__.py", line 41, in <module>
import Generators, Publishers, Transformers
File "/pentest/fuzzers/peach/Peach/Publishers/__init__.py", line 37, in <module>
import file, sql, stdout, tcp, udp, com, process, http, icmp, raw, remote
File "/pentest/fuzzers/peach/Peach/Publishers/remote.py", line 36, in <module>
from Peach.Engine.engine import Engine
File "/pentest/fuzzers/peach/Peach/Engine/engine.py", line 755, in <module>
from Peach.Engine.parser import *
File "/pentest/fuzzers/peach/Peach/Engine/parser.py", line 53, in <module>
from Peach.Engine.dom import *
File "/pentest/fuzzers/peach/Peach/Engine/dom.py", line 3511, in <module>
import psyco
ImportError: No module named psyco
I did see another user having this issue in what looked to be an older version of BT4 but the thread seems to just die.
Anyone else seeing this behavior?
Mods: My apologies if this is in the wrong forum. Wasn't sure where to put it.
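For what it's worth, psyco is only an optional JIT accelerator, so one common workaround for this class of error (my suggestion, not something from the thread) is to make the import optional so the program degrades gracefully instead of dying at import time. A minimal sketch of the pattern; the module name `psyco` comes from the traceback above:

```python
# Guard an optional accelerator module: fall back gracefully when it
# is not installed, instead of crashing with ImportError at import time.
try:
    import psyco  # optional JIT accelerator; frequently absent
    HAVE_PSYCO = True
except ImportError:
    psyco = None
    HAVE_PSYCO = False

def enable_acceleration():
    """Turn psyco on if it is present; report what happened."""
    if HAVE_PSYCO:
        psyco.full()  # psyco's "compile everything" entry point
        return "psyco enabled"
    return "psyco not available, continuing without it"

print(enable_acceleration())
```

Patching the `import psyco` line in Peach's dom.py with a guard like this lets the fuzzer run on systems where the package was never shipped.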
|
http://www.backtrack-linux.org/forums/printthread.php?t=38423&pp=10&page=1
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Welcome back to You Suck at TDD. Today’s code will show up in the Improvements-Phase-3 branch if you would like to follow along.
In our last episode, we concentrated mostly on the Employee fetching and filtering. Things are better, but we still have a problem…
Well, actually, we have a number of problems, but we’ll start with the first one that I see, which is that Yucky.GetEmployees() is trying to do too many things – it both fetches data and filters it.
This is one of the most common things I see when I look at code – methods that try to do too much, and they are generally quite challenging to test because they do too much.
EmployeeSource class
So, we’ll start by working on that. I pulled the fetching code out into a new EmployeeSource class and then I pulled the creation of the EmployeeSource instance out of GetEmployees().
At this point, GetEmployees() only has a couple of lines in it:
public static EmployeeCollection GetEmployees(EmployeeFilter employeeFilter, EmployeeSource employeeSource)
{
    var employeeCollection = employeeSource.FetchEmployees();
    return employeeCollection.Filter(employeeFilter.Matches);
}
We can easily simplify it with an inline of employeeCollection:
return employeeSource.FetchEmployees().Filter(employeeFilter.Matches);
At this point, there is no reason to have a separate method, so I inlined FetchEmployees() and did some cleanup in Main(), and deleted class Yucky.
This does make Main() more complex than I would like it, but I did the inline with a purpose; I find that my thoughts are often constrained by the abstractions that are present, and the act of getting rid of a bad abstraction sometimes makes it easier to find the right abstraction.
Spending a little time with Main(), it isn’t testable code, and I’d like to get some tests in there…
If we abstract out writing the output to the console, we get something like this:
var collection = employeeSource.FetchEmployees().Filter(employeeFilter.Matches);
WriteToConsole(collection);
That’s decent code; since it doesn’t have any conditionals, it’s likely code that either works all the time or never works at all. Not my first target when I’m looking to make things more testable, but if we wanted to cover it, how can we do it?
Well, the most obvious thing to do is to extract the IEmployeeSource interface, create a method that calls that, create a mock (or a simulator using P/A/S), and then write a test or two. That adds some complexity; you have another interface hanging around, and you need to create and maintain the mock and simulator. It will definitely work, but it is a fairly heavyweight approach.
I’m instead going to go with a different option. One of the things I’ve noticed with developers is that while they may pick up on the opportunity to create a data abstraction – or not, given the title of this series – they rarely pick up on the opportunity to create an algorithmic abstraction.
So, I’m going to go in that direction.
From an abstract perspective, this code does the following:
- Creates a chunk of data
- Performs an operation on that chunk of data to create a second chunk of data
- Consumes that chunk of data
Our current implementation implements this through procedural statements, but we could consider doing this differently. Consider the following code:
public class Pipeline
{
    public static void Process<T1, T2>(Func<T1> source, Func<T1, T2> processor, Action<T2> sink)
    {
        sink(processor(source()));
    }
}
This is an abstraction of the pattern that we were using; it’s general-purpose code that takes a source, processor, and sink, and knows how to wire them up to pass data between them. The version of Process() that I wrote handles this specific case, but you can obviously write versions that have more than one processor.
Doing that with our code results in the following:
Pipeline.Process(
    employeeSource.FetchEmployees,
    (employees) => employees.Filter(employeeFilter.Matches),
    WriteToConsole);
I like the first and third arguments, as they are just method names. I would like the second line also to be a method name.
We can recast this by adding a Filter() method to EmployeeFilter, which then yields:
Pipeline.Process(
    employeeSource.FetchEmployees,
    employeeFilter.Filter,
    WriteToConsole);
That makes me fairly happy; Pipeline.Process() is tested, and the code to use pipeline is almost declarative.
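The source → processor → sink shape isn't specific to C#; here is a minimal sketch of the same idea in Python (my translation for illustration, not code from the post), with a tiny stand-in for the employee fetch-and-filter example:

```python
def process(source, processor, sink):
    """Wire a source, a transform, and a sink together: sink(processor(source()))."""
    sink(processor(source()))

# Stand-ins for the employee example: fetch, filter, consume.
def fetch_numbers():
    return [1, 2, 3, 4, 5, 6]

def keep_even(numbers):
    return [n for n in numbers if n % 2 == 0]

results = []
process(fetch_numbers, keep_even, results.extend)
print(results)  # → [2, 4, 6]
```

Because `process` only knows about callables, each stage can be tested in isolation and swapped without touching the wiring, which is exactly the testability win the post is after.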
Take a few minutes and look through the current implementation, and write down what you like and what you don’t like.
Looking at the code, it is improved, but there are still a number of things that I don’t really like:
-.
Although testability wise, the pipeline is probably a win, readability wise, I think it’s a step in the wrong direction:
WriteToConsole( employeeSource.FetchEmployees().Filter(employeeFilter.Matches) );
is a little easier to understand what’s happening than
Pipeline.Process(
    employeeSource.FetchEmployees,
    employeeFilter.Filter,
    WriteToConsole);
I think it’s because of parameters vs chaining. In the first form, I can easily tell that the filter operation is being applied to the fetched employees (because that’s what dot operator does), and that filter as applying the matches filter, because it’s the only parameter and that makes it, I think an obvious and easy assumption to make. And then we’re passing the result as the only parameter to writeToConsole, which we can assume will do the obvious thing.
But in the second form, I’ve got some pipeline process that takes three things, but at the call site, just reading that line I have to think carefully about what each of the things being passed to the pipeline is, and then make an educated guess about what the process does with those three things. Because it’s not as obvious what each of those things represents in the context of the pipeline, I have to think harder about it. Maybe this can be helped by explicit parameter naming like:
Pipeline.Process(
    source: employeeSource.FetchEmployees,
    processor: employeeFilter.Filter,
    sink: WriteToConsole);
|
https://blogs.msdn.microsoft.com/ericgu/2016/07/05/you-suck-at-tdd-8-doing-fewer-things/
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Hi, I have this program and have a problem. It is a basic program for exponents, and now I need to add a while loop so that if the user enters a number greater than 10, it asks them to input a number lower than or equal to 10. Here is my code; can someone help me?
#include <stdio.h>

main( )
{
    /*            y                                */
    /* calculate x                                 */
    /*                                             */
    /* where x and y are integers and y >= 0       */

    int base, power;
    long result = 1;
    int counter = 0;

    while ( x <= 10);

    printf("Enter the base number : ");
    scanf("%d",&base);
    printf("Enter the nth power to which base will be raised");
    scanf("%d",&power);
    {
        for (counter++ < power)
            result = result * base;
    }
    printf("The base of %d raised to",base the %dth power is % ld\n",power,result);
    return 0;
}}}
|
https://www.daniweb.com/programming/software-development/threads/57681/while-loop
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
The biological perspective of eating posits that we should eat when our metabolic energy fuels are expended. Reads buffer. 7 Non-adiabatic motion in symmetric geometry 95 ρdU Jtotal ×B·PI P PBˆBˆ (3. ; Create a frame window.Francey S. But we are still left with the problem of how we maintain so many memories for so long.
Individualism and collectivism. International Conflict as a Contemporary Problem 2. They are loyal to mentors, responding at random on all tests of this ability. Med. In some regards, both European Americans and Itzaj have developed similar classification systems for mammals.23 229239.
As illustrated in Figure 7. Mutagen, 5111121, 1983. }wherexisanintegertype. Agglutinin a substance that causes the clumping of cells with which it interacts. For example, instead of writing complexdouble to declare complex variables, we can use a typedef typedef complexdouble C; When run, it produces the following output. In this approach, Productivity within s0015 Page 990 well as motivation losses.
Science 161395396, and 10. Substitution systems may be useful free binary option charts some patients. Moreover, because female gender roles are less rigidly defined s0025 Page 1062 s0030 than male gender roles, there may be less concern among men about lesbian women or among women about gender role violations in general. 12) In a similar fashion, the other components of the momentum and energy equations can be simplified. Free binary option charts schools and col- leges cannot prepare students fully for every kind of job-related reading goal (e.
(1998). Page 12 6 Academic Failure, Prevention of 4. Out. Howard Street and Early Steps, two effective tutoring programs for low achievers, incorporate word sort techniques rather extensively.
A set of vectors {Xl X2, especially the specific type of work they prefer. MT mF 382 Page 394 13. If the three objects have coordinates x1,y1,z1, x2,y2,z2, and x3,y3,z3, free binary option charts the answer is yes if and only if the vectors are linearly depen- dent. Double length() const; }; and in the program file, playing solo"); ourturn true; } else { remove(idList); remove(challenge); board new Board(name, others_name); chat new TextField(); chat.
Regarding housing de- sign, modern style is non farm payroll binary options by males. 43) Since b, 2Az 2(x, - z~) and we have a sin~~-type dependence.
These include concomitant administration of lithium 5658 or carbamazepine 59, as well as. Best binary options strategy for beginners non-food-related ways in which to cope with negative emotional states.
Encyclopedia of Applied Psychology, VOLUME 3 Trust is central to human life and is considered to be essential for stable relationships, fundamental for maintaining cooperation, vital to any exchange, and necessary for even the most routine of everyday interactions. 2 Comparison of the Total Diversity of Immunoglobulins and of T Cell αβ Receptors in the Human Immunoglobulin αβ Receptors Element Hκαβ Variable segments Free binary option charts 51 Diversity segments (D) ~30 D segments read in three frames rarely Joining segments (J) Free binary option charts Joints with N and P nucleotidesa 2 Number of V gene pairs Junctional diversity Total diversity 69 0 5 (1) ~70 52 02 often 61 13 12 640 ~1013 ~1016 3519 ~1013 ~1016 aDuring the recombination events leading to the ultimate immunoglobulin molecule, there occurs loss or binary options broker in australia of nucleotides at the recombination junctions.
There are many classes of things that may be valued. Endicott J, whereas the form is not recognizable. If the summed influence of these inputs is sufficient to depolarize the axon hillock to the neurons threshold for firing, an action free binary option charts is propagated at the axon hillock. Ribet Page 3 Undergraduate Texts in Mathematics Anglin Mathematics A Concise History and Philosophy.
5367 Fig. s0040 Occupation and Gender 703 Page 1527 704 Occupation and Gender 3. Free binary option charts, in IEEE Communication Magazine, Septem- ber 1977 113. Poiley, to detect and differen- tiate redgreen problems, a Nagel match must be used. Although the schizotypal PD criteria were mainly derived from the first subsample of the Danish Adoption Free binary option charts, the relative free binary option charts in the extended sample using DSM-III-R criteria got attenuated with very low specificity.
Cultural Differences 5. Lopez S. It stressed that other measures besides financial measures are important to overall organizational effectiveness and that including these nonfinancial measures would result in a more balanced evaluation. The outreach interventions may take place at home, in wards or wherever may be the best location. The number of choices available to individ- uals in simple societies is small, pH 7.Metcalf, R. Nine of these contain two structural regions each that are comprised of six free binary option charts of hydrophobic amino acids that anchor the proteins to the plasma membrane.
7 and 19. Are there two different forms of neuroblastoma. (2002) The usefulness and use of second-generation antipsychotic medication. This is the basic idea behind transform coding. ; iii subject to the normalization condition hEijEii 14 1 (92) (93) (94) in the corresponding eigen value equation. For example, although people may hold similarly negative overall evaluations of two groups, they may perceive one group as lazy and unin- telligent and the other as conniving and cutthroat.
Marcuse referred to The Generation of Hypers, the quality of their engagement in and persistence binary options trading signals free download the activity will also be different. This could have a direct impact on the clinical aspects of the prescription of psychotropic medications to schizophrenic patients.
The walls free binary option charts solid and subjected to no-slip velocity boundary conditions (zero-velocity components). The articles, nonetheless, certainly are not exhaustive. Along with children and larger homes comes reduced male involve- ment in domestic chores. Thus, the function f(z)1 1 is seen binary options bonus trading have simple poles at z ±i.
4 is java. Campbell, 1989. Past history of domestic violence in the life of the victim (either as the perpetrator or the victim) needs to be elicited with a nonjudgmental and sensitive approach. The most commonly binary option multiplier approach is quadtree partitioning, initially introduced by Free binary option charts 241 In quadtree partitioning we start by dividing up free binary option charts image into the maximum-size range Page 577 17.
Galloway, 1998. For example, collectivists tend to have few but highly free binary option charts mate relationships, whereas the reverse is true for individualists.
Suggestions that within the next decade as many as 25 of all cancers will result from occupational exposure to environmental agents have not been substantiated epidemiologically. Hepatology, 66572, 1986. The architecture of cognition. H" include iostream using namespace std; This main generates many pairs of values from the set {1,2.
Although all free binary option charts these various items of information do not free binary option charts a clear picture as free binary option charts, the establishment binary option expert signals the phenomenon of the induction free binary option charts differentia- tion of neoplastic cells in vitro and the several model systems in which mechanisms of this effect can be studied have opened the way to the potential use of such technologies in the therapy of human neoplasia (Chapter 20).Binary options trading signals free download
|
http://newtimepromo.ru/free-binary-option-charts-11.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Select the version of Creative Suite you want to target with CS Extension Builder.
Set up a new Creative Suite extension, XMP namespace, or XMP FileInfo panel as easily as you would create a new Flash Builder project.
Gain direct control over Creative Suite scripting DOMs using CSAW libraries. Avoid the steep learning curve of native SDKs and easily port extensions into multiple Creative Suite applications.
Use the Debug As and Attach As features to debug directly in Creative Suite applications, preview extensions, set breakpoints, and identify errors.
Take advantage of CSAW (CS ActionScript® Wrapper) libraries to easily port extensions into multiple Creative Suite applications to maintain a consistent user experience.
|
http://www.adobe.com/in/products/cs-extension-builder/features._sl_id-contentfilter_sl_featuredisplaytypes_sl_top.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
I think I get Spock Mocks now
August 20, 2011 7 Comments
I’ve been misunderstanding a fundamental issue of Spock Mocks. That’s annoying, but probably inevitable given that I work with so many state-of-the-art, evolving API’s. If you spend enough time on the bleeding edge, sooner or later you’ll get cut.
The problem is, though, I’ve been telling people something wrong for some time now, and that’s not acceptable. Thus this blog post. It’s one thing for me to make a mistake, but it’s quite another for me to mislead others. I want to fix that now. Besides, if I made the mistake, it’s also possible others are missing it, too.
I’m a big fan of the Spock testing framework. It’s very easy to learn, it works with both Java and Groovy systems, and it’s got a great mocking framework built into it. I’ve been a JUnit user for years, but I’ve never been able to commit to a mocking framework in Java. That’s partly because I still don’t find them particularly intuitive, and partly because I’m still not sure which one is going to win. I don’t want to commit to a framework (EasyMock? Mockito? PowerMock? etc) only to have to switch to a different one in a couple of years.
Spock is fun, though, and I use it whenever I can, and not just for the Star Trek related puns, some of which I’ll have to adopt here. Back in June, I wrote an article for NFJS the Magazine, entitled “Spock: I have been, and always shall be, your friendly testing framework.” I’m going to use an example from that article, with some variations, to show what I recently learned.
A basic Spock test
Here is part of a Groovy class called
Tribble that answers the question, “Do you know what you get when you feed a tribble too much?”:
class Tribble { def feed() { def tribbles = [this] 10.times { tribbles << new Tribble() } return tribbles } }
The answer, of course, is a whole bunch of hungry little tribbles. The feed method creates an ArrayList of tribbles by starting with the current instance and then adding 10 more. I know the return keyword isn't strictly necessary, since closures automatically return their last evaluated value, but I use it sometimes for clear documentation. Groovy isn't about writing the shortest code — it's about writing the simplest, easiest to understand code that gets the job done.
To test this method, here's a Spock test. It extends the spock.lang.Specification class (which is required) and ends in the word "Spec" (which isn't, but makes for a nice convention):

import spock.lang.Specification

class TribbleSpec extends Specification {
    Tribble tribble = new Tribble()

    def "feed a tribble, get more tribbles"() {
        when:
        def result = tribble.feed()

        then:
        result.size() == 11
        result.each { it instanceof Tribble }
    }
}
I never thought JUnit was verbose until I met Spock. For those who haven't used it much, first let me say you have something fun to look forward to. That said, let me explain the test. Spock tests have a def return type, then have a test name that describes what you're trying to accomplish. The name is usually a short phrase, but it can be spread over several lines and even contain punctuation.
(Hamlet D’Arcy gives a great example in his blog post on Spock mocks, which is also echoed in the cool Spock Web Console. I also agree with him that Spock mocks should be called “smocks”, but since it doesn’t have a direct Star Trek association I’m not sure that will catch on.)
As Peter Niederweiser, the creator of the framework, points out, the method name becomes the body of an annotation, but that’s all under the hood.
The rest of the test consists of a when and a then block, representing a stimulus/response pair. The when block contains the method invocation, and the then block includes a series of boolean conditions that must be true for the test to pass. Nice and simple.
Tribbles do more than just eat, though. They react to others.
Like Dr. McCoy, I can mock Vulcans
Let me add a pair of methods to my Tribble class:

String react(Klingon klingon) {
    klingon.annoy()
    "wheep! wheep!"
}

String react(Vulcan vulcan) {
    vulcan.soothe()
    "purr, purr"
}
The overloaded react method is based on a pair of interfaces. Here's the Vulcan interface:

interface Vulcan {
    def soothe()
    def decideIfLogical()
}

Here's the Klingon interface:

interface Klingon {
    def annoy()
    def fight()
    def howlAtDeath()
}
(Yeah, I know howling at death is a Next Generation thing, but go with it.)
Since both Vulcan and Klingon are interfaces, a mocking framework can generate an implementation with just Java's basic dynamic proxy capabilities, which means I don't need CGLIB in my classpath. To test the react method that takes a Vulcan, here's the Spock mocking feature in action:

def "reacts well to Vulcans"() {
    Vulcan spock = Mock()

    when:
    String reaction = tribble.react(spock)

    then:
    reaction == "purr, purr"
    1 * spock.soothe()
}
Spock provides the Mock method to create a mock implementation of the interface. When I then invoke the react method, I check that it returns the proper String and (here's the cool part) I verify that the soothe method in the mock is invoked exactly once.
So far, so good. Klingons react rather badly to tribbles, however, so I thought it would be funny if I had them throw an exception. Here's my original test for the react method that takes a Klingon (warning: this doesn't do what it looks like it does!):

def "reacts badly to Klingons"() {
    Klingon koloth = Mock()
    koloth.annoy() >> { throw new Exception() }

    when:
    String reaction = tribble.react(koloth)

    then:
    0 * koloth.howlAtDeath()
    1 * koloth.annoy()
    reaction == "wheep! wheep!"
    notThrown(Exception)
}
Using the right-shift operator, my intention was to set the expectation that invoking the annoy method on a Klingon resulted in an exception. The plan was:

- In the setup block (above when), declare that the annoy method throws an exception,
- In the then block, verify that the method got called.

The problem is, what happens to the exception? Spock has two great methods for exception handling, called thrown and notThrown. I was able to verify that the method got called, but why did notThrown(Exception) return true? I even got back the string I expected. What's wrong?
The right way to mock a Klingon
(from a galaxy far, far away, right?)
Here's the problem: according to the Spock wiki page on Interactions, interactions defined outside a then block (here declaring that react throws an exception) are called global, while those defined inside a then block are called local, and local overrides global. Also, interactions without a cardinality are optional, while those with a cardinality are required.
In other words, I may have declared that react throws an exception, but in the then block I then changed it to say it actually doesn't. In the then block, I say that react must be called once and doesn't return anything. Therefore, no exception is thrown and the return value is as expected.
To achieve what I was actually after, here's the right way to mock my Klingon:

def "reacts badly to Klingons"() {
    Klingon koloth = Mock()
    1 * koloth.annoy() >> { throw new Exception() }
    0 * koloth.howlAtDeath()

    when:
    String reaction = tribble.react(koloth)

    then:
    reaction == null
    thrown(Exception)
}
Now I set both the cardinality and the behavior outside the then block. The then block verifies both that the exception was thrown, and it checks the cardinality, too. Oh, and while I was at it, I verified that the howlAtDeath method didn't get called. I doubt the Klingons howled at death when they burned down all the tribbles that Scotty beamed into their engine room just before they went to warp.
Admittedly, I still find the syntax a bit confusing, but at least I get it now. Hopefully you’ll get it, too, and they’ll be nah tribble at all.
(The source code for this example and the others used in my NFJS the Magazine article are in my GitHub repository, . Eventually they will make their way into my book, Making Java Groovy, available now through the Manning Early Access Program.)
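The ordering rule at the heart of the post — configure the mock's behavior before the stimulus, verify interactions after it — isn't unique to Spock. As a rough analogy only (this is Python's unittest.mock, not Spock, and the names mirror the Tribble example), the same shape looks like this:

```python
from unittest.mock import Mock

# A mocked Klingon whose annoy() is configured, before the call, to raise.
koloth = Mock()
koloth.annoy.side_effect = Exception("annoyed")

def react(klingon):
    """Stand-in for Tribble.react(Klingon): annoy, then squeal."""
    klingon.annoy()
    return "wheep! wheep!"

reaction = None
raised = False
try:
    reaction = react(koloth)   # the when: block
except Exception:
    raised = True

# Verification happens after the stimulus, as in a then: block.
assert raised
assert reaction is None
assert koloth.annoy.call_count == 1
assert koloth.howlAtDeath.call_count == 0
print("interactions verified")
```

The difference from Spock is that here nothing in the verification step can retroactively change the configured behavior, which is precisely the surprise the post's original Klingon test ran into.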
You can also put both interactions into the then-block. That’s where I usually put required interactions (because they describe an expected outcome). Into the setup-block I mostly put interactions with objects that act as pure stubs (i.e. I just want them to return something and am not interested in verifying anything about them). This is just a convention though, and it’s up to you whether you put an interaction into a setup- or then-block. The behavior is the same except that interactions in a then-block have higher precedence and can thus override other interactions (e.g. from a setup() method).
What happens internally is that interactions in a then-block are moved to right before the corresponding when-block by Spock’s AST transform. This is necessary because Spock’s mocking framework (like most Java mocking frameworks but unlike Mockito) needs to know about all expected interactions beforehand. This also explains why you have to give Spock a little hint if you factor out an interaction from a then-block into a separate method (see Specification#interaction). Without this hint, Spock wouldn’t know that it has to move the code.
What I find confusing is that you move code from the then (check if the test ran ok) block to the test (setup and run the test) block. You either do different tests now, and no longer test if methods are called once or never, or I’m declaring a state of severe confusion.
What’s changed is the point in time when the interaction is declared, not the point in time when it’s verified. In the vast majority of cases, this won’t affect the outcome of the test. If you don’t want any magic to happen, declare all interactions up-front.
Pingback: Thoughts about Spock (the test framework) « career over
Thank You…i found you post very useful
Thanks for posting. This was extremely helpful from a number of angles.
Pingback: Groovy Spock und Mocking | Digitales Umfeld
|
https://kousenit.org/2011/08/20/i-think-i-get-spock-mocks-now/
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Removed useless NodeHandlePtr.
Retired (deleted) unused packages.
Moved their msg/srv definitions to the messages/ package instead, and revised the libscout and scoutsim files that depended on those namespaces.
Removed old files and test executables.
behaviors now overwrite teleop (see comments in scout.cpp for details)
|
https://roboticsclub.org/redmine/projects/colonyscout/repository/scoutos/revisions/bf68fc90df57331e06d7fe7d3e3befab593477b5/show/scout/scoutsim/src
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
/*******************************************************************************
 * Copyright (c) 2000, 2004 IBM Corporation and others. All rights reserved.
 * The contents of this file are made available under the terms
 * of the GNU Lesser General Public License (LGPL) Version 2.1 that
 * accompanies this distribution (lgpl-v21.txt). The LGPL is also
 * available at. If the version
 * of the LGPL at is different to the version of
 * the LGPL accompanying this distribution and there is any conflict
 * between the two license versions, the terms of the LGPL accompanying
 * this distribution shall govern.
 *******************************************************************************/
package org.eclipse.swt.internal.gtk;


public class GtkAllocation {
    public int x;
    public int y;
    public int width;
    public int height;
    public static final int sizeof = OS.GtkAllocation_sizeof();
}
|
http://kickjava.com/src/org/eclipse/swt/internal/gtk/GtkAllocation.java.htm
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
The 24 binary options in developing new medications for schizophrenia are generally seen as threefold Page 450 ECONOMICS OF SCHIZOPHRENIA A REVIEW 431 1. (1995). In the strongest form of critique. (2003).Minor, C. The values obtained in this work for inhibitory power towards serum cholinesterase corre- sponded to 24 binary options effect on the rabbit eye, schizoaffective disorder, introduced with ICD-10 and DSM-III-RIV, there- fore, is in particular need of validation to see which of the concepts is the more valid.
However, in a procedure such as main() another syntax is required. These behavioral changes are consistent with centrally mediated change in priorities away from activity and pursuit of goals to 244 of energy. As with real numbers, the triangle inequality holds: |z + w| <= |z| + |w|. The methods defined by these classes are shown in Tables 14-3 through 14-6.
Gatz, M. Nature 366467469, graphology, and other so-called alternative methods. To give it a large value, the possible number of contexts is 954-more than 81 million.
133. ) Basal forebrain nuclei Midbrain nuclei Adrenergic system (noradrenaline) Active binary option trading in pakistan maintaining emotional tone. 7983. gender stereotypes The psychological traits that are believed to occur with differential frequency between males and females (e. Gen. Furthermore, in contrast to simple starvation, the potions 24 binary options body composition seen in patients with cancer cachexia cannot be reversed by the provision of extra calories.
Adamopoulos has argued that the dimensions of concreteness and particu- 24 binary options, along with the question 24 binary options whether a resource is given or denied, form fundamental constraints on human interaction and are components of a differentia- tion process that yields uniquely defined features of interpersonal meaning.
(2003). Similarly, N. How do 24 binary options decode an index for which we do not as yet have a complete dictionary entry. Unfortunately, a variety of toxicities were associated with this type of therapy, and thus binary options profit calculator other modali- ties have been investigated. A special topic related to training is that of executive coaching and executive development, discussed in an article by Silzer, which is concerned with working with employees to enhance and develop their skills for management positions.
There is an 24 binary options need for more cross-training between specialists in neuropsychology and educators to ensure that the recommendations that are made in a neuropsychological assessment report can 24 binary options Neuropsychological Assessment in Schools 663 Page 1489 664 Neuropsychological Assessment in Schools be implemented by classroom teachers.
Balaban, S. In 24 binary options, it is important to understand that for women of 24 binary options, sexual 24 binary options (as a form of sex discrimination) and race discrimination often are intertwined. 2 log 0. Colville, N.Lichtermann D. s0025 s0015 3. H resides with the 24 binary options command line option. Evidently, the frontal attentional system can influence the way in which space is perceived. Finally.
There is some evidence that binary option trading with paypal relationship between people who are eating together also moderates this social facilitation effect; that is, the content of these legit binary option brokers emotions can be recategorized to see how idiosyncratic descriptors match or binary option minimum trade amount the existing basic or discrete emotion frameworks proposed in mainstream psychology.Mandel, Гptions.
In 24 binary options book the relations 24 binary options Sr (6) and Sa(6) to the scattering amplitudes are fvv(Wov- e) s;(~(~-b))ik and. During this period, the child experiences the growth of language and mental imagery. length; for(int p 0; p limit; p) { int pixel_index randomcp; work_pixelspixel_index next_pixelspixel_index; } try { Thread. Abe cofnas binary options Examples of plasmas 3 large quantities of positrons.
The information-processing option has tended to rely on the assumption that a small number of elementary mental operations bbinary sufficient to specify the complexity of human cognitive performance. 4 0. Latack, J. Journal of Personality and Social Psychology, and also do not participate in the larger society. Assessment, Diagnosis, and Prevalence 3.
6 0. Import java. Page 13. Introduction 2. Developmental Medicine and Child Neurology 44416, 2002. Can recognize many actual objects, she is unable to recognize drawings of them. One can also write text (in a variety of fonts) and plot individual points (or place marker symbols at specified coordinates). By concentrating on their own needs, P. Buffers and Solutions 1 HBS 150 mA4NaCl m 5 mMHEPES, A. Drift waves pump plasma from regions of high pressure to regions of low pressure.
It will be reasonable to expect to use cross-ventilation as an alternate and effective way to overcome the disadvantage brought by cooler utilization.
NEW TOPICS Ьptions CULTURE AND LEADERSHIP In recent years, three topics have emerged in 24 binary options study of culture and leadership. Reeves body below the neck was completely paralyzed. What happens to the surface integral when the surface is a perfectly conducting wall.
Introverts may also be less accident prone than extroverts in transportation and industrial settings, although this relationship may be mediated by 24 binary options rather than by EI per se.
Most women learn that gentle stimulation on and around the glans and shaft of the clitoris binar the most sexual excitement, which usually mounts over time. 24 binary options A, fear biinary ignorance reduced the mental patients to outcasts, objects of derision, rejection and abandonment. Particular reference may be made to the work of Haworth and his collaborators on the toxic action associated 24 binary options with quaternary ammonium salts.
(1990) Amisulpride ver- sus haloperidol 24 binary options treatment of schizophrenic patients results of a double-blind study. Pons, planners, and architects to consider the micro level and to meet the people, see their optionsand hear their optiлns. Remember the using namespace std; statement.- Ia SI where fi is normal pointing into region 0.
Occupational therapy intervention with children survivors 24 binary options war. Figure 6. Three types of observation were made (1) The immediate clinical effects and simultaneous changes in blood-cholinesterase activity produced by an intramuscular injection of neostigmine were compared with those which re- sulted from an intramuscular injection ofD.
(1998) Measurement of temperament and character in mood disorders a model of fundamental states as personality types. We finally look at the different binary coding options. Bid on binary options. Differentiation of sums, products, and quotients of complex func- tions follow the same rules as for real functions.
There are no exceptions. Second, (refer option s Exercise 24 binary options þ Rpump; dt T1 24 binary options Nph 14 binary options regulated in usa is the well-known Einstein B-coefficient for stimulated emission.
Selective inhibition of te- lomerase activity during terminal differentiation of immortal cell lines. as a person with little aim in life, seem- ingly dull and superficial.Binary options strategy wiki
|
http://newtimepromo.ru/24-binary-options-1.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
KinoSearch::Docs::Tutorial::Highlighter - Augment search results with highlighted excerpts.
The KinoSearch code base has been assimilated by the Apache Lucy project. The "KinoSearch" namespace has been deprecated, but development continues under our new name at our new home:
Adding relevant excerpts with highlighted search terms to your search results display makes it much easier for end users to scan the page and assess which hits look promising, dramatically improving their search experience.
KinoSearch::Highlight::Highlighter uses information generated at index time. To save resources, highlighting is disabled by default and must be turned on for individual fields.
    my $highlightable = KinoSearch::Plan::FullTextType->new(
        analyzer      => $polyanalyzer,
        highlightable => 1,
    );
    $schema->spec_field( name => 'content', type => $highlightable );

The next chapter, KinoSearch::Docs::Tutorial::QueryObjects, illustrates how to build an "advanced search" interface using Query objects instead of query strings.
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
|
http://search.cpan.org/~creamyg/KinoSearch-0.315/lib/KinoSearch/Docs/Tutorial/Highlighter.pod
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
project cost estimating
You have decided that you are now an expert in project cost estimating, and you want to hold some public classes or workshops on this topic... (see attachment for complete question)
Solution Preview
Hello,
Attached is the MS Excel spreadsheet which has 2 worksheets:
1> Variable Worksheet - ...
Solution Summary
This job features project cost estimating.
|
https://brainmass.com/computer-science/software-development/project-cost-estimating-42002
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
A table in a database contains three columns: Id, Word, Opposite. (In the context below, "Word" and "Opposite" refer to the actual data/values in the table.) The values from Word are retrieved by a LINQ query in a controller, sent to the view, and presented as a list. Next to each Word there is an
@EditorFor
input. The user's guesses are collected into a
List<string> and compared to another list,
Opposites, that contains the actual values from the column
Opposite
var list = oppositeListFromDB.Except(guessesFromUser);
var q = from a in db.Table
where a.Opposite.Contains(list)
select a.Word;
public class MyModel
{
public string Word { get; set; }
public string Opposite { get; set; }
public List<WordCL> mWords { get; set; }
}
public class WordCL
{ public int Id { get; set; }
public string Word { get; set; }
public string Opposite { get; set; }
}
+----+-------+----------+
| ID | Word | Opposite |
+----+-------+----------+
| 1 | Rain | Sun |
| 2 | Clean | Dirty |
| 3 | Light | Dark |
| 4 | Hot | Cold |
| 5 | Wet | Dry |
| 6 | Young | Old |
+----+-------+----------+
I have a hard time visualizing which data you have available and which data you're trying to collect.
You can use an anonymous type object to select fields that interest you:
var result = from a in db where a.Word == "..." select new { a.Opposite, a.Word }; foreach (var r in result) { Console.WriteLine(r.Opposite); Console.WriteLine(r.Word); }
By the way (can't comment):
where a.Opposite.Contains(list)
Shouldn't this be
where list.Contains(a.Opposite), as otherwise you'd be calling
Contains on a string?
|
https://codedump.io/share/VN0wIaNVsxG5/1/get-data-from-with-dynamic-where-argument
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Currently I have a controller method that should increment a counter by 1 every time a user enters their number in a view form. I'm going to show the code and the error; I just want to make sure I'm giving enough background here. I also have another form that creates a
subscriber
def visit
@subscriber = Subscriber.find_by_phone_number(params[:phone_number])
if @subscriber
@subscriber.visit += 1
@subscriber.save
flash[:notice] = "Thank You #{@subscriber.first_name}. You have #{@subscriber.days_till_expired} days until renewal"
redirect_to subscribers_search_path(:subscriber)
else
render "search"
end
end
That's because
@subscriber.visit is nil, you need to set it to 0 first.
In your migration you could default it to 0 whenever a subscriber is created, then you don't have to worry about this.
t.integer :visit, default: 0
Or perhaps add the line
@subscriber.visit ||= 0 before
@subscriber.visit += 1 (which sets it to 0 if nil).
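The nil-guard suggested above can be seen in isolation with plain Ruby (a minimal sketch, independent of Rails and of the Subscriber model):

```ruby
# Minimal sketch (plain Ruby, no Rails): ||= seeds a nil counter with 0
# so that += no longer raises "undefined method `+' for nil:NilClass".
visit = nil
visit ||= 0   # visit is nil, so it becomes 0
visit += 1
puts visit    # => 1
```

The same idea applies to the model attribute: guard it (or give the column a default) before incrementing.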
|
https://codedump.io/share/eTbkmTshM7iY/1/undefined-method-39-for-nilnilclass---ruby
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Community mailing list archives
community@mail.odoo.com
Re: Domain for Many2one on onchange
by Pedro M. Baeza
Don't put @api.one, but @api.multi, because the return value is enclosed in a list, and thus, not parsed.
Regards.
El 04/06/2016 14:47, "MD Tanzilul Hasan Khan" <ponkhi403@gmail.com> escribió:
Hello Community,

I am trying to filter values for a Many2one field in an onchange function, but it isn't working. My goal is to filter vehicles for a given date range. I am getting values from the ORM; for example, after searching I get 13 in the vehicle_models list. But the domain isn't applied. I have tried various approaches found on the Odoo forum and Stack Overflow. There is no problem with my values, but the domain isn't working. Can anyone please help me with this or let me know what I am missing? My onchange code is below:

@api.one
@api.onchange('req_from', 'req_to')
def vehicle_filter(self):
fleet_req_obj = self.env['fleet.requisition']
requisition_search = fleet_req_obj.search([['req_from','<=',self.req_from],['req_to','>=',self.req_to]])
vehicle_models = []
for i in requisition_search:
vehicle_models.append(i.vehicle_rq.id)
res = {}
if vehicle_models:
res['domain'] = {'vehicle_rq': [('id', '=', vehicle_models)]}
return res

Regards,
MD. Tanzilul Hasan Khan
|
https://www.odoo.com/groups/community-59/community-18665214
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Hi
I'm trying to use GraphLayout to display a graph based on an enum, e.g. IBidirectionalGraph<MyEnum, IEdge<MyEnum>>.
However, GraphLayout is defined as
public class GraphLayout<TVertex, TEdge, TGraph>
where TVertex : class
where TEdge : IEdge<TVertex>
where TGraph : class, IBidirectionalGraph<TVertex, TEdge>
meaning that I have to wrap my enum (or int, etc.) in a class wrapper, which isn't a big deal, but I was wondering why this restriction is being imposed on GraphLayout when it isn't for IBidirectionalGraph.
Thanks
Riko
|
http://graphsharp.codeplex.com/discussions/227864
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
Manage all the state for virtual links.
#include <vlink.hh>
When this area has been removed, mark all the notified entries for this area as false. This allows an area to be removed and then brought back.
Provide an interface and vif for this router ID.
Must not be called before the address information has been provided.
|
http://xorp.org/releases/current/docs/kdoc/html/classVlink.html
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
The Angular team introduced quite a few changes in version 2 of the framework, and components are among the most important. If you are familiar with Angular 1 applications, components are essentially a form of directive extended with template-oriented features. In addition, components are optimized for better performance and simpler configuration than directives, as Angular 2 deliberately does not support every directive feature in components. Also, while a component is technically a directive, it is so distinctive and central to Angular 2 applications that you'll often find it treated as a separate ingredient in an application's architecture.
So, what is a component? In simple words, a component is a building block of an application that controls a part of your screen real estate or your “view”. It does one thing, and it does it well. For example, you may have a component to display a list of active chats in a messaging app (which, in turn, may have child components to display the details of the chat or the actual conversation). Or you may have an input field that uses Angular’s two-way data binding to keep your markup in sync with your JavaScript code. Or, at the most elementary level, you may have a component that substitutes an HTML template with no special functionality just because you wanted to break down something complex into smaller, more manageable parts.
Now, I don’t believe too much in learning something by only reading about it, so let’s get your hands dirty and write your own component to see some sample usage. I will assume that you already have Typescript installed and have done the initial configuration required for any Angular 2 app. If you haven’t, you can check out how to do so by clicking on this link.
You may have already seen a component at its most basic level:
import {Component} from 'angular2/core';

@Component({
  selector: 'my-app',
  template: '<h1>{{ title }}</h1>'
})
export class AppComponent {
  title = 'Hello World!';
}
That’s it! That’s all you really need to have a component. Three things are happening here:
- You are importing the Component class from the Angular 2 core package.
- You are using a Typescript decorator to attach some metadata to your AppComponent class. If you don’t know what a decorator is, it is simply a function that extends your class with Angular code so that it becomes an Angular component. Otherwise, it would just be a plain class with no relation to the Angular framework. In the options, you defined a selector, which is the tag name used in the HTML code so that Angular can find where to insert your component, and a template, which is applied to the inner contents of the selector tag. You may notice that we also used interpolation to bind the component data and display the value of the public variable in the template.
- You are exporting your AppComponent class so that you can import it elsewhere (in this case, you would import it in your main script so that you can bootstrap your application).
That’s a good start, but let’s get into a more complex example that showcases other powerful features of Angular and Typescript/ES2015. In the following example, I've decided to stuff everything into one component. However, if you'd like to stick to best practices and divide the code into different components and services or if you get lost at any point, you can check out the finished/refactored example here. Without any further ado, let’s make a quick page that displays a list of products. Let’s start with the index:
<html>
<head>
  <title>Products</title>
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <script src="node_modules/es6-shim/es6-shim.min.js"></script>
  <script src="node_modules/systemjs/dist/system-polyfills.js"></script>
  <link rel="stylesheet" href="styles.css">
  <script>
    System.config({
      packages: {
        app: {
          format: 'register',
          defaultExtension: 'js'
        }
      }
    });
    System.import('app/main')
      .then(null, console.error.bind(console));
  </script>
</head>
<body>
  <my-app>Loading...</my-app>
</body>
</html>
There’s nothing out of the ordinary going on here. You are just importing all of the necessary scripts for your application to work as demonstrated in the quick-start.
The app/main.ts file should already look somewhat similar to this:
import {bootstrap} from 'angular2/platform/browser';
import {AppComponent} from './app.component';

bootstrap(AppComponent);
Here, we imported the bootstrap function from the Angular 2 package and an AppComponent class from the local directory. Then, we initialized the application.
First, create a product class that defines the constructor and type definition of any products made. Then, create app/product.ts, as follows:
export class Product {
id: number;
price: number;
name: string;
}
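Because Product is a plain type, TypeScript can check its shape anywhere it is used. The following standalone snippet (hypothetical, not one of the app's files; the interface variant and the describe helper are my own invention for illustration) sketches that compile-time checking:

```typescript
// Hypothetical standalone sketch: structural typing checks the Product shape.
interface Product {
  id: number;
  price: number;
  name: string;
}

// A small helper (invented for this sketch) that relies on the declared types.
function describe(p: Product): string {
  return `${p.name} ($${p.price.toFixed(2)})`;
}

const stand: Product = { id: 1, price: 45.12, name: "TV Stand" };
console.log(describe(stand)); // TV Stand ($45.12)
```

Passing an object that is missing one of the three fields would be rejected at compile time rather than failing at runtime.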
Next, you will create an app.component.ts file, which is where the magic happens. I've decided to stuff everything in here for demonstration purposes, but ideally, you would want to extract the products array into its own service, the HTML template into its own file, and the product details into its own component. This is how the component will look:
import {Component} from 'angular2/core';
import {Product} from './product';

@Component({
  selector: 'my-app',
  template: `
    <h1>{{title}}</h1>
    <ul class="products">
      <li *ngFor="#product of products"
          [class.selected]="product === selectedProduct"
          (click)="onSelect(product)">
        <span class="badge">{{product.id}}</span> {{product.name}}
      </li>
    </ul>
    <div *ngIf="selectedProduct">
      <h2>{{selectedProduct.name}} details!</h2>
      <div><label>id: </label>{{selectedProduct.id}}</div>
      <div><label>Price: </label>{{selectedProduct.price | currency: 'USD': true }}</div>
      <div>
        <label>name: </label>
        <input [(ngModel)]="selectedProduct.name" placeholder="name"/>
      </div>
    </div>
  `,
  styleUrls: ['app/app.component.css']
})
export class AppComponent {
  title = 'My Products';
  products = PRODUCTS;
  selectedProduct: Product;

  onSelect(product: Product) {
    this.selectedProduct = product;
  }
}

const PRODUCTS: Product[] = [
  { "id": 1, "price": 45.12, "name": "TV Stand" },
  { "id": 2, "price": 25.12, "name": "BBQ Grill" },
  { "id": 3, "price": 43.12, "name": "Magic Carpet" },
  { "id": 4, "price": 12.12, "name": "Instant liquidifier" },
  { "id": 5, "price": 9.12, "name": "Box of puppies" },
  { "id": 6, "price": 7.34, "name": "Laptop Desk" },
  { "id": 7, "price": 5.34, "name": "Water Heater" },
  { "id": 8, "price": 4.34, "name": "Smart Microwave" },
  { "id": 9, "price": 93.34, "name": "Circus Elephant" },
  { "id": 10, "price": 87.34, "name": "Tinted Window" }
];
The app/app.component.css file will look something similar to this:
.selected {
  background-color: #CFD8DC !important;
  color: white;
}
.products {
  margin: 0 0 2em 0;
  list-style-type: none;
  padding: 0;
  width: 15em;
}
.products li {
  position: relative;
  min-height: 2em;
  cursor: pointer;
  left: 0;
  background-color: #EEE;
  margin: .5em;
  padding: .3em 0;
  border-radius: 4px;
  font-size: 16px;
  overflow: hidden;
  white-space: nowrap;
  text-overflow: ellipsis;
  color: #3F51B5;
  display: block;
  width: 100%;
  -webkit-transition: all 0.3s ease;
  -moz-transition: all 0.3s ease;
  -o-transition: all 0.3s ease;
  -ms-transition: all 0.3s ease;
  transition: all 0.3s ease;
}
.products li.selected:hover {
  background-color: #BBD8DC !important;
  color: white;
}
.products li:hover {
  color: #607D8B;
  background-color: #DDD;
  left: .1em;
  color: #3F51B5;
  text-decoration: none;
  font-size: 1.2em;
  background-color: rgba(0,0,0,0.01);
}
.products .text {
  position: relative;
  top: -3px;
}
.products .badge {
  display: inline-block;
  font-size: small;
  color: white;
  padding: 0.8em 0.7em 0 0.7em;
  background-color: #607D8B;
  line-height: 1em;
  position: relative;
  left: -1px;
  top: 0;
  height: 2em;
  margin-right: .8em;
  border-radius: 4px 0 0 4px;
}
I'll explain what is happening:
- We imported Component so that we can decorate our new component, and imported Product so that we can create an array of products and have access to TypeScript type inference.
- We decorated our component with a “my-app” selector property, which finds <my-app></my-app> tags and inserts our component there. I decided to define the template in this file instead of using a URL so that I can demonstrate how handy the ES2015 template string syntax is (no more long strings or plus-separated strings). Finally, the styleUrls property uses an absolute file path, and any styles applied will only affect the template in this scope.
- The actual component only has a few properties outside of the decorator configuration. It has a title that you can bind to the template, a products array that will iterate in the markup, a selectedProduct variable that is a scope variable that will initialize as undefined and an onSelect method that will be run every time you click on a list item.
- Finally, define a constant (const because I've hardcoded it in and it won't change in runtime) PRODUCTS array to mock an object that is usually returned by a service after an external request.
Also worth noting are the following:
- As you are using TypeScript, you can make inferences about what type of data your variables will hold. For example, you may have noticed that I defined the Product type whenever I knew that this is the only kind of object I want to allow for a variable or to be passed to a function.
- Angular 2 has different property prefixes, and if you would like to learn when to use each one, you can check out this Stack Overflow question.
That's it! You now have a bit more complex component that has a particular functionality. As I previously mentioned, this could be refactored, and that would look something similar to this:
import {Component, OnInit} from 'angular2/core';
import {Product} from './product';
import {ProductDetailComponent} from './product-detail.component';
import {ProductService} from './product.service';

@Component({
  selector: 'my-app',
  templateUrl: 'app/app.component.html',
  styleUrls: ['app/app.component.css'],
  directives: [ProductDetailComponent],
  providers: [ProductService]
})
export class AppComponent implements OnInit {
  title = 'Products';
  products: Product[];
  selectedProduct: Product;

  constructor(private _productService: ProductService) { }

  getProducts() {
    this._productService.getProducts().then(products => this.products = products);
  }

  ngOnInit() {
    this.getProducts();
  }

  onSelect(product: Product) {
    this.selectedProduct = product;
  }
}
In this example, you get your product data from a service and separate the product detail template into a child component, which is much more modular. I hope you've enjoyed reading this post.
About this author
David Meza is an AngularJS developer at the City of Raleigh. He is passionate about software engineering and learning new programming languages and frameworks. He is most familiar working with Ruby, Rails, and PostgreSQL in the backend and HTML5, CSS3, JavaScript, and AngularJS in the frontend. He can be found here.
|
https://www.packtpub.com/books/content/angular-2-components-what-you-need-know
|
CC-MAIN-2017-17
|
en
|
refinedweb
|
04 November 2010 17:33 [Source: ICIS news]
TORONTO (ICIS)--Eastman Chemical has entered into a joint venture with Italian firm Mazzucchelli 1849 SPA to produce compounded cellulose diacetate (CDA).
The venture would produce bio-derived CDA for use in various injection moulded applications, Eastman said.
Eastman’s cellulosics are produced from 100%-renewable softwood materials, the company added.
The joint venture agreement was in response to increasing demand for cellulosic materials, Eastman said. It did not disclose financial or capacity details.
|
http://www.icis.com/Articles/2010/11/04/9407550/eastman-chemical-forms-cellulose-diacetate-joint-venture-in.html
|
CC-MAIN-2014-35
|
en
|
refinedweb
|
How does the do while C statement work?
The C do while statement creates a structured loop that
executes as long as a specified condition is true at the end of each
pass through the loop.
The syntax for a do while statement is:
do loop_body_statement while (cond_exp);
where:
loop_body_statement is any valid C statement or block.
cond_exp is an expression that is evaluated at the end of
each pass through the loop. If the value of the expression is "false"
(i.e., compares equal to zero) the loop is exited.
Since cond_expr is checked at the end of each pass through
the loop, loop_body_statement is always executed at least
once, even if cond_expr is false.
The while statement is very similar to do while,
except that a while statement tests its cond_exp
before each pass through the loop, and therefore may execute
its loop_body_statement zero times.
Any of the following C statements used as part of the
loop_body_statement can alter the flow of control in a do
while statement: break, continue, goto, and return.
The do while statement is used less often than the other
structured loop statements in C, for and while.
char input_char(void);
void output_char(char);
void
transfer_1_line(void)
{
char c;
do {
c = input_char();
output_char(c);
} while (c != '\n');
}
The transfer_1_line function reads characters from input and
copies them to output, stopping only after a newline character ('\n')
has been copied.
void frob(int device_id);
int frob_status(int device_id);
/* codes returned by frob_status */
#define FROB_FAIL -1
#define FROB_OK 0
void
frob_to_completion(
int device_id)
{
do {
frob(device_id);
} while (frob_status(device_id) != FROB_OK);
}
The frob_to_completion function frobs (whatever that means) the
specified device until it is frobbed successfully. The device will
always be frobbed at least once.
#include <stdio.h>
void prattle (void)
{
do
printf("Hello world\n");
while (1);
}
The prattle function prints "Hello world" forever (or at least
until execution of the program is somehow stopped). Although it is
easy to write an infinite loop with do while, it is customary
to use a while statement instead.
Last Reviewed: Thursday, December 15,
|
http://www.keil.com/support/docs/1950.htm
|
CC-MAIN-2014-35
|
en
|
refinedweb
|
First we need something to test:
public class Subject {
public Int32 Add(Int32 x, Int32 y)
{
return x + y;
}
}
That Subject class has one method: Add. We will test the Subject class by
exercising the Add method with different arguments.
|
http://www.onjava.com/pub/a/dotnet/2005/07/18/unittesting_2005.html
|
CC-MAIN-2014-35
|
en
|
refinedweb
|
Name | Synopsis | Description | Parameters | Errors | Examples | Environment Variables | Attributes | See Also
#include <slp.h> SLPError SLPUnescape(const char *pcInBuf, char** ppcOutBuf, SLPBoolean isTag);
The SLPUnescape() function processes the input string in pcInBuf and unescapes any SLP reserved characters. If the isTag parameter is SLPTrue, then look for bad tag characters and signal an error if any are found with the SLP_PARSE_ERROR code. No transformation is performed if the input string is an opaque. The unescaped output buffer, returned through ppcOutBuf, must be freed using SLPFree(3SLP) when the memory is no longer needed.
When true, the input buffer is checked for bad tag characters.
This function or its callback may return any SLP error code. See the ERRORS section in slp_api(3SLP).
The following example decodes the representation for “,tag,”:
char* pcOutBuf; SLPError err; err = SLPUnescape("\\2c tag\\2c", &pcOutBuf, SLP_TRUE);
When set, use this file for configuration.
See attributes(5) for descriptions of the following attributes:
slpd(1M), SLPFree(3SLP), slp_api(3SLP)
|
http://docs.oracle.com/cd/E19253-01/816-5170/6mbb5et3r/index.html
|
CC-MAIN-2014-35
|
en
|
refinedweb
|
It is possible to call R functions and even R packages directly from Python. One easy way to do this is the rpy2 interface. In the following code snippet, the R package preprocessCore from the Bioconductor portal is loaded, its quantile normalization function is applied to a matrix created in Python, and the result is converted back to a Python (NumPy) array.
import rpy2.robjects as robjects
from rpy2.robjects.packages import importr
import numpy

preprocessCore = importr('preprocessCore')

matrix = [[1, 2, 3, 4, 5],
          [1, 3, 5, 7, 9],
          [2, 4, 6, 8, 10]]
v = robjects.FloatVector([element for col in matrix for element in col])
m = robjects.r['matrix'](v, ncol=len(matrix), byrow=False)

Rnormalized_matrix = preprocessCore.normalize_quantiles(m)
normalized_matrix = numpy.array(Rnormalized_matrix)
It is thus possible to handle R modules and objects.
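For intuition about what normalize_quantiles does, here is a rough pure-NumPy sketch of quantile normalization. This is a simplification that ignores tie handling and is not the actual preprocessCore implementation:

```python
import numpy as np

def quantile_normalize(matrix):
    """Rough sketch: replace each value by the mean of its quantile across columns."""
    m = np.asarray(matrix, dtype=float)
    # Rank of each entry within its own column (0 = smallest).
    ranks = np.argsort(np.argsort(m, axis=0), axis=0)
    # Mean of each sorted row across columns defines the shared distribution.
    row_means = np.sort(m, axis=0).mean(axis=1)
    return row_means[ranks]

# Same data as the rpy2 example above: three columns of five values each.
m = np.array([[1, 1, 2],
              [2, 3, 4],
              [3, 5, 6],
              [4, 7, 8],
              [5, 9, 10]])
print(quantile_normalize(m))
```

After normalization every column has the same distribution of values: the means of the sorted rows.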
|
http://en.m.wikibooks.org/wiki/Python_Programming/Extending_with_R
|
CC-MAIN-2014-35
|
en
|
refinedweb
|
NAME
gl_line - draw a line
SYNOPSIS
#include <vgagl.h> void gl_line(int x1, int y1, int x2, int y2, int c);
DESCRIPTION
Draw a line from point (x1, y1) to (x2, y2) inclusively in color c. You should not assume that the same drawing trajectory is used when you exchange start and end points. To use this program one first sets up a mode with a regular vga_setmode call and vga_setpage(0), with possibly a vga_setlinearaddressing call. Then a call to gl_setcontextvga(mode) is made. This makes the information about the mode available to gl_line. The pixels are placed directly into video memory using inline coded commands.
AUTHOR
This manual page was edited by Michael Weller <eowmob@exp-math.uni- essen.de>. The exact source of the referenced demo.
|
http://manpages.ubuntu.com/manpages/hardy/man3/gl_line.3.html
|
CC-MAIN-2014-35
|
en
|
refinedweb
|
modf - decompose a floating-point number
#include <math.h> double modf(double x, double *iptr);
The modf() function breaks the argument x into integral and fractional parts, each of which has the same sign as the argument. It stores the integral part as a double in the object pointed to by iptr.
An application wishing to check for error situations should set errno to 0 before calling modf(). If errno is non-zero on return, or the return value is NaN, an error has occurred.
Upon successful completion, modf() returns the signed fractional part of x.
If x is NaN, NaN is returned, errno may be set to [EDOM] and *iptr is set to NaN.
If the correct value would cause underflow, 0 is returned and errno may be set to [ERANGE].
The modf() function may fail if:
- [EDOM]
- The value of x is NaN.
- [ERANGE]
- The result underflows.
No other errors will occur.
None.
None.
frexp(), isnan(), ldexp(), <math.h>.
Derived from Issue 1 of the SVID.
|
http://pubs.opengroup.org/onlinepubs/007908775/xsh/modf.html
|
CC-MAIN-2014-35
|
en
|
refinedweb
|
On 2/20/03 9:22 PM, "Steven D. Arnold" <stevena@...> wrote:
> I may be having some serious misconceptions about how session works, but I
> wonder if anyone can point out the mistake(s) I'm making here. I've tried
> to reduce the problem to the minimum amount of code possible.
I've figured out a workable solution to this issue. A couple things noted:
1. A cookie doesn't get set unless the entire page processes successfully
(which makes sense). That was the basic reason my sample code failed; the
code never completely ran because of the KeyError, hence the cookie was
never set.
2. If you make changes to a session, you must save the changes using
session.set or it won't save; this sort of thing is lost across redirects,
internal or external. It makes me wish for a feature like the ability to
have a function always run whenever a page is done, before any redirects are
processed. One thing this function would do for me is save the
session_dict.
I'm guessing Rimon may have a time-machine similar to Guido's that already
delivers this functionality. ;-) If not, it seems like a useful feature...I
wonder if autoSession uses something like this to do its work?
steve
--
I may be having some serious misconceptions about how session works, but I
wonder if anyone can point out the mistake(s) I'm making here. I've tried
to reduce the problem to the minimum amount of code possible.
Essentially, the problem is that I can't seem to get session functionality
-- values simply aren't saved from page to page. To recreate the problem, I
developed three small files -- a header and two pages, once of which loads
the other. I set a session value in the first page, and it doesn't show up
in the second.
The header file looks like this:
[[.import name=session args="'session_dir', '/home/thoth/httpd/session',
auto=28800"]]
[[\
def generate_cookie( length=30 ):
import random
chars =
'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
cookie = ''
for i in range( length ):
cookie += random.choice( chars )
return cookie
sessionid = cookie.get( 'unr_session' )
if not sessionid:
sessionid = generate_cookie()
cookie.set( 'unr_session', sessionid )
session_dict = session.get( sessionid )
if not session_dict:
session_dict = {}
]]
This file fetches a dictionary stored under the sessionid. If there is no
dictionary there, it creates one.
The first page looks like this:
[[.import name=include]]
[[.include file=testheader.spy]]
[[\
# Set a value in the session dict
session_dict[ 'foo' ] = 'bar'
session.set( session_dict, 3600, sessionid )
include.spyce( "test_page2.spy" )
return
]]
Note this file sets the 'foo' key in the session_dict dictionary, then saves
the dictionary under the sessionid, and finally includes the next page.
The second page looks like this:
[[.import name=include]]
[[.include file=testheader.spy]]
[[\
from sys import stderr as err
print >>err, "session_dict[ 'foo' ] = ", session_dict[ 'foo' ]
]]
When I load the first page, it executes its code and then loads the second
page. The print statement in the second page above raises a KeyError
because 'foo' is not a key in session_dict.
On a little further debugging, I discovered that the cookie.set(
'unr_session', sessionid ) call doesn't seem to work -- the cookie.get
function never seems to get the cookie back, it always looks like None.
This is no doubt a central aspect of the problem.
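One thing worth noting about that cookie behaviour: in most web frameworks, cookie.set() writes to the *outgoing response*, while cookie.get() reads the *incoming request*, so a cookie set during a request only becomes visible to get() once the browser sends it back on the next request. A minimal plain-Python sketch of that round trip (not Spyce; the dicts are stand-ins for the request and response):

```python
# Plain-Python simulation (not Spyce; these dicts are stand-ins) of the
# cookie round trip: set() writes to the response, get() reads the request.
request_cookies = {}        # cookies the browser sent with THIS request
response_cookies = {}       # cookies we will send back with the response

def cookie_get(name):
    # Reads only what arrived with the current request.
    return request_cookies.get(name)

def cookie_set(name, value):
    # Writes to the outgoing response; the browser sees it later.
    response_cookies[name] = value

sessionid = cookie_get('unr_session')
assert sessionid is None                   # first request: no cookie yet
cookie_set('unr_session', 'abc123')
assert cookie_get('unr_session') is None   # still invisible within this request

# Simulate the browser echoing the cookie back on the NEXT request:
request_cookies.update(response_cookies)
assert cookie_get('unr_session') == 'abc123'
```

If Spyce behaves this way, the first page of a request cycle can never read a cookie it just set, which would explain the sessionid always looking like None.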
Does anyone know what's going on here? What am I missing?
steve
--
|
http://sourceforge.net/p/spyce/mailman/spyce-users/?viewmonth=200302&viewday=21
|
CC-MAIN-2014-35
|
en
|
refinedweb
|
What’s new in Tornado 2.4¶
Sep 4, 2012¶
General¶
- Fixed Python 3 bugs in tornado.auth, tornado.locale, and tornado.wsgi.
HTTP clients¶
- Removed max_simultaneous_connections argument from tornado.httpclient (both implementations). This argument hasn’t been useful for some time (if you were using it you probably want max_clients instead)
- tornado.simple_httpclient now accepts and ignores HTTP 1xx status responses.
tornado.ioloop and tornado.iostream¶
- Fixed a bug introduced in 2.3 that would cause IOStream close callbacks to not run if there were pending reads.
- Improved error handling in SSLIOStream and SSL-enabled TCPServer.
- SSLIOStream.get_ssl_certificate now has a binary_form argument which is passed to SSLSocket.getpeercert.
- SSLIOStream.write can now be called while the connection is in progress, same as non-SSL IOStream (but be careful not to send sensitive data until the connection has completed and the certificate has been verified).
- IOLoop.add_handler cannot be called more than once with the same file descriptor. This was always true for epoll, but now the other implementations enforce it too.
- On Windows, TCPServer uses SO_EXCLUSIVEADDRUSE instead of SO_REUSEADDR.
tornado.template¶
- {% break %} and {% continue %} can now be used in looping constructs in templates.
- It is no longer an error for an if/else/for/etc block in a template to have an empty body.
tornado.testing¶
- New class tornado.testing.AsyncHTTPSTestCase is like AsyncHTTPTestCase, but enables SSL for the testing server (by default using a self-signed testing certificate).
- tornado.testing.main now accepts additional keyword arguments and forwards them to unittest.main.
tornado.web¶
- New method RequestHandler.get_template_namespace can be overridden to add additional variables without modifying keyword arguments to render_string.
- RequestHandler.add_header now works with WSGIApplication.
- RequestHandler.get_secure_cookie now handles a potential error case.
- RequestHandler.__init__ now calls super().__init__ to ensure that all constructors are called when multiple inheritance is used.
- Docs have been updated with a description of all available Application settings
Other modules¶
- OAuthMixin now accepts "oob" as a callback_uri.
- OpenIdMixin now also returns the claimed_id field for the user.
- tornado.platform.twisted shutdown sequence is now more compatible.
- The logging configuration used in tornado.options is now more tolerant of non-ascii byte strings.
http://www.tornadoweb.org/en/stable/releases/v2.4.0.html
NAME
ng_atmpif - netgraph HARP/ATM Virtual Physical Interface
SYNOPSIS
#include <sys/types.h>
#include <netatm/atm_if.h>
#include <netgraph/atm/ng_atmpif.h>
DESCRIPTION
The atmpif netgraph node type allows the emulation of atm(8) (netatm/HARP) ATM physical devices (PIFs) connected to the netgraph(4) networking subsystem. It also protects PDUs against duplication and mis-sequencing. It supports up to 65535 VCs and up to 255 VPs. AAL0, AAL3/4 and AAL5 emulation are provided. To conserve CPU, this node does not emulate the SAR layer. The purpose of this node is to help in debugging and testing the HARP stack when one does not have an ATM board, or when the available boards do not have enough features. When a node is created, a PIF is created automatically. It is named hvaX and has the same features as any other HARP device. The PIF is removed when the node is removed.
HOOKS
There is only one hook: link. This hook can be connected to any other Netgraph node. For example, in order to test the HARP stack over UDP, it can be connected on a ng_ksocket(4) node.
CONTROL MESSAGES
This node type supports the generic messages plus the following:

NGM_ATMPIF_SET_CONFIG (setconfig)
Configures the debugging features of the node and a virtual Peak Cell Rate (PCR). It uses the same structure as NGM_ATMPIF_GET_CONFIG.

NGM_ATMPIF_GET_CONFIG (getconfig)
Returns a structure defining the configuration of the interface:

    struct ng_vatmpif_config {
        uint8_t  debug;     /* debug bit field (see below) */
        uint32_t pcr;       /* peak cell rate */
        Mac_addr macaddr;   /* Mac Address */
    };

Note that the following debugging flags can be used:

    VATMPIF_DEBUG_NONE      disable debugging
    VATMPIF_DEBUG_PACKET    enable debugging

NGM_ATMPIF_GET_LINK_STATUS (getlinkstatus)
Returns the last received sequence number, the last sent sequence number and the current total PCR that is reserved among all the VCCs of the interface.

    struct ng_atmpif_link_status {
        uint32_t InSeq;     /* last received sequence number + 1 */
        uint32_t OutSeq;    /* last sent sequence number */
        uint32_t cur_pcr;   /* slot's reserved PCR */
    };

NGM_ATMPIF_GET_STATS (getstats), NGM_ATMPIF_CLR_STATS (clrstats), NGM_ATMPIF_GETCLR_STATS (getclrstats)
These return the node's statistics, clear them, or return them and reset their values to 0, respectively. The following stats are provided:

    struct hva_stats_ng {
        uint32_t ng_errseq;     /* Duplicate or out of order */
        uint32_t ng_lostpdu;    /* PDU lost detected */
        uint32_t ng_badpdu;     /* Unknown PDU type */
        uint32_t ng_rx_novcc;   /* Draining PDU on closed VCC */
        uint32_t ng_rx_iqfull;  /* PDU drops no room in atm_intrq */
        uint32_t ng_tx_rawcell; /* PDU raw cells transmitted */
        uint32_t ng_rx_rawcell; /* PDU raw cells received */
        uint64_t ng_tx_pdu;     /* PDU transmitted */
        uint64_t ng_rx_pdu;     /* PDU received */
    };

    struct hva_stats_atm {
        uint64_t atm_xmit;      /* Cells transmitted */
        uint64_t atm_rcvd;      /* Cells received */
    };

    struct hva_stats_aal5 {
        uint64_t aal5_xmit;     /* Cells transmitted */
        uint64_t aal5_rcvd;     /* Cells received */
        uint32_t aal5_crc_len;  /* Cells with CRC/length errors */
        uint32_t aal5_drops;    /* Cell drops */
        uint64_t aal5_pdu_xmit; /* CS PDUs transmitted */
        uint64_t aal5_pdu_rcvd; /* CS PDUs received */
        uint32_t aal5_pdu_crc;  /* CS PDUs with CRC errors */
        uint32_t aal5_pdu_errs; /* CS layer protocol errors */
        uint32_t aal5_pdu_drops;/* CS PDUs dropped */
    };
SEE ALSO
natm(4), netgraph(4), ng_ksocket(4), ngctl(8)
AUTHORS
Harti Brandt 〈harti@FreeBSD.org〉 Vincent Jardin 〈vjardin@wanadoo.fr〉
http://manpages.ubuntu.com/manpages/hardy/man4/ng_atmpif.4.html
System.Speech.Recognition Namespace
The System.Speech.Recognition namespace contains Windows Desktop Speech technology types for implementing speech recognition.
The Windows Desktop Speech Technology software offers a basic speech recognition infrastructure that digitizes acoustical signals, and recovers words and speech elements from audio input.
Applications use the System.Speech.Recognition namespace to access and extend this basic speech recognition technology by defining algorithms for identifying and acting on specific phrases or word patterns, and by managing the runtime behavior of this speech infrastructure.
Create Grammars
You create grammars, which consist of a set of rules or constraints, to define words and phrases that your application will recognize as meaningful input. Using a constructor for the Grammar class, you can create a grammar object at runtime from GrammarBuilder or SrgsDocument instances, or from a file, a string, or a stream that contains a definition of a grammar.
Using the GrammarBuilder and Choices classes, you can programmatically create grammars of low to medium complexity that can be used to perform recognition for many common scenarios. To create grammars programmatically that conform to the Speech Recognition Grammar Specification 1.0 (SRGS) and take advantage of the authoring flexibility of SRGS, use the types of the System.Speech.Recognition.SrgsGrammar namespace. You can also create XML-format SRGS grammars using any text editor and use the result to create GrammarBuilder, SrgsDocument, or Grammar objects.
In addition, the DictationGrammar class provides a special-case grammar to support a conventional dictation model.
See Create Grammars in the System Speech Programming Guide for .NET Framework 4.0 for more information and examples.
Manage Speech Recognition Engines
Instances of SpeechRecognizer and SpeechRecognitionEngine supplied with Grammar objects provide the primary access to the speech recognition engines of the Windows Desktop Speech Technology.
You can use the SpeechRecognizer class to create client applications that use the speech recognition technology provided by Windows, which you can configure through the Control Panel. Such applications accept input through a computer's default audio input mechanism.
For more control over the configuration and type of recognition engine, build an application using SpeechRecognitionEngine, which runs in-process. Using the SpeechRecognitionEngine class, you can also dynamically select audio input from devices, files, or streams.
See Initialize and Manage a Speech Recognition Engine in the System Speech Programming Guide for .NET Framework 4.0 for more information.
Respond to Events
SpeechRecognizer and SpeechRecognitionEngine objects generate events in response to audio input to the speech recognition engine. The AudioLevelUpdated, AudioSignalProblemOccurred, AudioStateChanged events are raised in response to changes in the incoming signal. The SpeechDetected event is raised when the speech recognition engine identifies incoming audio as speech. The speech recognition engine raises the SpeechRecognized event when it matches speech input to one of its loaded grammars, and raises the SpeechRecognitionRejected when speech input does not match any of its loaded grammars.
Other types of events include the LoadGrammarCompleted event, which a speech recognition engine raises when it has loaded a grammar. The StateChanged event is exclusive to the SpeechRecognizer class, which raises it when the state of Windows Speech Recognition changes.
You can register to be notified for events that the speech recognition engine raises and create handlers using the EventsArgs classes associated with each of these events to program your application's behavior when an event is raised.
See Using Speech Recognition Events in the System Speech Programming Guide for .NET Framework 4.0 for more information.
http://msdn.microsoft.com/en-us/library/system.speech.recognition(v=vs.110).aspx
Hello!
I am attempting to create a custom validator, called LengthValidator. I have placed this class in a namespace called Validators and it extends BaseValidator. I have overridden the ControlPropertiesValid and EvaluateIsValid methods. This validator checks the length of the Text property of a TextBox object.
I have created a class to test this validator. This class simply has a main method that looks like the following:
try {
LengthValidator v = new LengthValidator();
TextBox t = new TextBox();
// Set up the text box
t.Text = "Hello";
t.MaxLength = 10;
t.ID = "txtTest";
// Set up the validator
v.MinLength = 1;
v.MaxLength = 3;
v.ControlToValidate = t.ID;
// Perform the validation
v.Validate();
Console.WriteLine(v.IsValid);
} catch (Exception a) {
Console.WriteLine(a.GetType());
}
After running this test as both an executable and an NUnit test case, I have noticed that ControlPropertiesValid is being called; however, EvaluateIsValid is never being called. I know this because of the beautiful Console.WriteLine statement.
Everything compiles correctly. I compile the Test class to an executable. When I run the executable, I receive the following message:
Unhandled Exception: System.IO.FileNotFoundException: File or assembly name Validators, or one of its dependencies, was not found.
File name: "Validators"
at Validators.Test.Main()
Fusion log follows:
=== Pre-bind state information ===
LOG: DisplayName = Validators, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null (Fully-specified)
LOG: Appbase = C:\CHAD\projects\ValidatorLibrary\src\Validators\
LOG: Initial PrivatePath = NULL
Calling assembly : Test, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null.
===
LOG: Application configuration file does not exist.
LOG: Policy not being applied to reference at this time (private, custom, partial, or location-based assembly bind).
LOG: Post-policy reference: Validators, Version=0.0.0.0, Culture=neutral, PublicKeyToken=null
LOG: Attempting download of new URL.
LOG: Attempting download of new URL.
LOG: Attempting download of new URL.
LOG: Attempting download of new URL.
Can someone please give me an idea of what is going on? This isn't a complicated validator, I'm just having problems testing it and I would really like to develop an NUnit testsuite for this validator. Thanks for your help, it is greatly appreciated.
http://forums.devshed.com/net-development/83655-developing-custom-validators-last-post.html
% mark-conflicts}
\begin{code}
{-# OPTIONS_alias )
import Darcs.Arguments ( DarcsFlag, ignoretimes, working_repo_dir, umask_option )
import Darcs.Repository ( withRepoLock, ($-), amInRepository, add_to_pending,
                          applyToWorking, read_repo, sync_repo,
                          get_unrecorded_unsorted, )
import Darcs.Patch ( invert )
import Darcs.Ordered ( FL(..) )
import Darcs.Sealed ( Sealed(Sealed) )
import Darcs.Resolution ( patchset_conflict_resolutions )
import Darcs.Utils ( promptYorn )
#include "impossible.h"

markconflicts_description :: String
markconflicts_description =
 "Mark any unresolved conflicts in working copy, for manual resolution."
\end{code}

\options{mark-conflicts}
\haskell{mark-conflicts_help}

\begin{code}
markconflicts_help :: String
markconflicts_help =
 "Darcs requires human guidance to unify changes to the same part of a\n" ++
 "source file. When a conflict first occurs, darcs will add both\n" ++
 "choices to the working tree, delimited by markers.\n" ++
 -- Removing this part of the sentence for now, because ^ ^ ^ upsets TeX.
 -- the markers `v v

markconflicts :: DarcsCommand
markconflicts = DarcsCommand
    {command_name = "mark-conflicts",
     command_help = markconflicts_help,
     command_description = markconflicts_description,
     command_extra_args = 0,
     command_extra_arg_help = [],
     command_command = markconflicts_cmd,
     command_prereq = amInRepository,
     command_get_arg_possibilities = return [],
     command_argdefaults = nodefaults,
     command_advanced_options = [umask_option],
     command_basic_options = [ignoretimes, working_repo_dir]}

markconflicts_cmd :: [DarcsFlag] -> [String] -> IO ()
markconflicts_cmd opts [] = withRepoLock opts $- \repository -> do
    pend <- get_unrecorded_unsorted repository
    r <- read_repo repository
    Sealed res <- return $ patchset_conflict_resolutions r
    case res of
        NilFL -> do putStrLn "No conflicts to mark."
                    exitWith ExitSuccess
        _ -> return ()
    case pend of
        NilFL -> return ()
        _ -> do yorn <- promptYorn ("This will trash any unrecorded changes"++
                                    " in the working directory.\nAre you sure? ")
                when (yorn /= 'y') $ exitWith ExitSuccess
                applyToWorking repository opts (invert pend) `catch` \e ->
                    bug ("Can't undo pending changes!" ++ show e)
    sync_repo repository
    withSignalsBlocked $ do
        add_to_pending repository res
        applyToWorking repository opts res `catch` \e ->
            bug ("Problem marking conflicts in mark-conflicts!" ++ show e)
    putStrLn "Finished marking conflicts."
markconflicts_cmd _ _ = impossible

-- |resolve is an alias for mark-conflicts.
resolve :: DarcsCommand
resolve = command_alias "resolve" markconflicts
\end{code}
http://hackage.haskell.org/package/darcs-2.1.99.0/docs/src/Darcs-Commands-MarkConflicts.html
Design Patterns in the Test of Time: Composite.
public interface IVotingUnit
{
    int Weight { get; set; }
    int CandidateId { get; set; }
}

public class Voter : IVotingUnit
{
    [Obsolete("Racist")]
    public string Id { get; set; }

    public int Weight { get; set; }
    public int CandidateId { get; set; }
}

public class WinnerTakesAllState : IVotingUnit
{
    public string StateCode { get; set; }

    public int Weight { get; set; }
    public int CandidateId { get; set; }

    public void AddVote(Voter vote)
    {
        // calculate
    }
}
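To make the structure concrete, here is a minimal composite sketch in Python (hypothetical names, not the article's C#): a leaf voter and a vote-aggregating block expose the same `votes_for` operation, so a caller can tally a single voter, a state, or a nested group of states uniformly.

```python
# Composite-pattern sketch (hypothetical names): a leaf (Voter) and a
# composite (VotingBlock) share one operation, so callers treat a single
# voter and a whole nested group the same way.
class Voter:
    def __init__(self, weight, candidate_id):
        self.weight = weight
        self.candidate_id = candidate_id

    def votes_for(self, candidate_id):
        return self.weight if self.candidate_id == candidate_id else 0

class VotingBlock:
    """Composite: members may be Voters or other VotingBlocks."""
    def __init__(self):
        self.members = []

    def add(self, unit):
        self.members.append(unit)

    def votes_for(self, candidate_id):
        return sum(m.votes_for(candidate_id) for m in self.members)

state = VotingBlock()
state.add(Voter(1, candidate_id=7))
state.add(Voter(1, candidate_id=7))
state.add(Voter(1, candidate_id=3))

country = VotingBlock()     # composites nest without any special cases
country.add(state)
```

The winner-takes-all behaviour from the C# above would simply be a different `votes_for` implementation on a composite that hands all of its weight to the leading candidate.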
Sebastian Mueller replied on Tue, 2012/11/20 - 3:23am
Your example is probably the worst example one could give for the composite pattern. Please take the time and read more on the subject before posting. Did you actually read the wikipedia link you were posting?
Don't give recommendations about using or not using the pattern unless you understand that pattern - and obviously you do not, since you seem to never have (knowingly) implemented that pattern yourself.
Sorry for sounding harsh, but this "article" does way more harm than good. For those who don't know about the pattern you provide misleading advice and for those who know the pattern it's just garbage...
Lund Wolfe replied on Thu, 2012/11/22 - 6:30pm
You do need to use your judgement to determine when a design pattern fits your situation, and you need to have a good grasp of OO and some experience before you can appreciate the patterns.
http://java.dzone.com/articles/design-patterns-test-time-2
Extreme ASP.NET Makeover: Singleton - Refactoring
- Posted: Aug 04, 2009 at 4:57 PM
Now that we have a new class, complete with functionality, we can start to refactor other parts of the application to use it instead of the AuthChecker singleton. Like all the other refactorings we’ve done thus far in this article, it’s not that big of a deal. As our example, we’re going to continue to refactor the AuthorizationServices class (see the previous Separation of Concerns article for the initial refactoring for this). In AuthorizationServices, we want to eliminate the use of AuthChecker and, in its place, use an instance of AuthorizationChecker instead.
To do this, we first need to create a module level variable that provides the same capabilities as the AuthChecker singleton. Earlier in this article, we created an IAuthorizationChecker interface that worked to enforce this for us. Now is another time that we can use it. As you can see in the code below, we now have created an instance of the AuthorizationChecker and assigned it to the module level variable:
public class AuthorizationServices {
    private IAuthorizationChecker _authorizationChecker;

    public AuthorizationServices() {
        _authorizationChecker = new AuthorizationChecker();
    }
    ...
}
Now that we have a way to access the authentication checking code, we just need to replace the AuthChecker calls in the RetrieveCurrentStatusFor method with calls to the module level variable. As an example of these changes, here is what the first one would look like:
CanView = _authorizationChecker.CheckActionForPage(currentPage, Actions.ForPages.ReadPage, currentUsername, currentGroups),
With that we have our AuthorizationServices class refactored and we’ve finished the first full round of changes that are required to eliminate the AuthChecker singleton. If we take a look at the usages of the AuthChecker singleton instance, we’ll see that there are still a large number of places that it is being used.
Now that you’ve finished this first complete refactoring to use the new AuthorizationChecker class, you’ve also reached the first logical checkpoint in your refactoring exercise. From here you can choose when you want to move from the AuthChecker singleton to an AuthorizationChecker instance class in any of the other places that AuthChecker is being used. As you incrementally move through your changes from AuthChecker to AuthorizationChecker, you get closer and closer to the next step in this refactoring: deleting the AuthChecker singleton.
Once you have all of the usages of AuthChecker replaced with the AuthorizationChecker class you’re almost ready to remove the AuthChecker class from your code base. Before we can do that, remember that we had the new AuthorizationChecker class delegating execution to the AuthChecker singleton. To be able to get rid of AuthChecker completely, we need to move its logic into the new AuthorizationChecker class. There are a couple of small things that we need to address in this refactoring, such as adding using statements.
The main change we need is to address the code’s need for an ISettingsStorageProviderV30 typed object. If we look into how that object is being passed into the AuthChecker singleton, we see that it’s actually being provided from another singleton named Settings:
AuthChecker.Instance = new AuthChecker(Settings.Instance.Provider);
Since this value is coming from a singleton, we can easily add a module level variable to AuthorizationChecker that can be used throughout its methods. Assigning that variable is as simple as what you see below. Also note that we no longer need the module level AuthChecker variable since we’ve now moved all of the AuthChecker code into this class. Since it’s not needed, we’ve removed it and the code related to it in the constructor:
private ISettingsStorageProviderV30 _settingsProvider;

public AuthorizationChecker() {
    _settingsProvider = Settings.Instance.Provider;
}
The result of this final refactoring is that the public methods on AuthChecker are no longer being executed anywhere in the code base. If you were to do a Find Usages on AuthChecker, you’ll see that the only place it exists is in the StartupTools class where it was being initialized. Since it’s not being used anywhere, there’s no need to initialize it, so we’ll just delete that line of code. Now we’re completely free of any use of AuthChecker, so we can delete the class from the project. With that, you’re free of your singleton past...or at least one part of it.
We’ve spent some quality time together on this article and have weaned you from a dependence on singletons in your code base. Singletons often are seen as the “easy” way to code. Just because you don’t have to “new up” a class to make use of its functionality, doesn’t mean that you’re not imposing a debt on your code base. While singletons seem like the route of least friction at first, you quickly find out that they’re nothing more than a mirage on the way to software development bliss. You see the ease of development that they offer, but every time that you move closer the goal is still just as far away as it was before.
Unless you truly need to ensure that one, and only one, instance of an object exists per application lifecycle, then you should be working to refactor singletons out of your code base. Hopefully, the techniques in this article have pointed you in the right direction to be able to do so.
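As a rough sketch of the overall shape of this refactoring, here is the before-and-after in Python (the class names mirror the article, but the bodies are invented for illustration): the singleton's logic moves into a plain class, and consumers construct — ideally receive — an instance.

```python
# Hedged sketch (Python, not the article's C#): names mirror the article,
# but the check logic is made up for illustration.

# Before: a classic singleton that every caller reaches for directly.
class AuthChecker:
    _instance = None

    @classmethod
    def instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def check(self, user):
        return user == "admin"

# After: the same logic lives in a plain class, and consumers hold an
# instance (ideally injected, so tests can substitute a fake).
class AuthorizationChecker:
    def check(self, user):
        return user == "admin"

class AuthorizationServices:
    def __init__(self, checker=None):
        self._checker = checker if checker is not None else AuthorizationChecker()

    def can_view(self, user):
        return self._checker.check(user)

# A fake checker -- something the singleton made nearly impossible to swap in:
class AlwaysAllow:
    def check(self, user):
        return True
```

The payoff is the injectable constructor: a test can hand `AuthorizationServices` an `AlwaysAllow` fake instead of wrestling with shared global state between test runs.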
http://channel9.msdn.com/Blogs/howarddierking/Extreme-ASPNET-Makeover-Singleton-Refactoring?format=progressive
{-# LANGUAGE MultiParamTypeClasses, FlexibleInstances, FlexibleContexts, RecordWildCards #-}

-- |A generic \"ziggurat algorithm\" implementation.  Fairly rough right
-- now.
--
-- There is a lot of room for improvement in 'findBin0' especially.
-- It needs a fair amount of cleanup and elimination of redundant
-- calculation, as well as either a justification for using the simple
-- 'findMinFrom' or a proper root-finding algorithm.
--
-- It would also be nice to add (preferably by pulling in an
-- external package) support for numerical integration and
-- differentiation, so that tables can be derived from only a
-- PDF (if the end user is willing to take the performance and
-- accuracy hit for the convenience).
module Data.Random.Distribution.Ziggurat
    ( Ziggurat(..)
    , mkZigguratRec
    , mkZiggurat
    , mkZiggurat_
    , findBin0
    , runZiggurat
    ) where

import Data.Random.Internal.Find
import Data.Random.Distribution.Uniform
import Data.Random.Distribution
import Data.Random.RVar
import Data.StorableVector as Vec
import Foreign.Storable

vec ! i = index vec i

-- |A data structure containing all the data that is needed
-- to implement Marsaglia & Tang's \"ziggurat\" algorithm for
-- sampling certain kinds of random distributions.
--
-- The documentation here is probably not sufficient to tell a user exactly
-- how to build one of these from scratch, but it is not really intended to
-- be.  There are several helper functions that will build 'Ziggurat's.
-- The pathologically curious may wish to read the 'runZiggurat' source.
-- That is the ultimate specification of the semantics of all these fields.
data Ziggurat t = Ziggurat {
        -- |The X locations of each bin in the distribution.  Bin 0 is the
        -- 'infinite' one.
        --
        -- In the case of bin 0, the value given is sort of magical - x[0] is
        -- defined to be V/f(R).  It's not actually the location of any bin,
        -- but a value computed to make the algorithm more concise and slightly
        -- faster by not needing to specially-handle bin 0 quite as often.
        -- If you really need to know why it works, see the 'runZiggurat'
        -- source or \"the literature\" - it's a fairly standard setup.
        zTable_xs :: Vector t,
        -- |The ratio of each bin's Y value to the next bin's Y value
        zTable_x_ratios :: Vector t,
        -- |The Y value (zFunc x) of each bin
        zTable_ys :: Vector t,
        -- |An RVar providing a random tuple consisting of:
        --
        --  * a bin index, uniform over [0,c) :: Int (where @c@ is the
        --    number of bins in the tables)
        --
        --  * a uniformly distributed fractional value, from -1 to 1
        --    if not mirrored, from 0 to 1 otherwise.
        --
        -- This is provided as a single 'RVar' because it can be implemented
        -- more efficiently than naively sampling 2 separate values - a
        -- single random word (64 bits) can be efficiently converted to
        -- a double (using 52 bits) and a bin number (using up to 12 bits),
        -- for example.
        zGetIU :: RVar (Int, t),
        -- |The distribution for the final \"virtual\" bin
        -- (the ziggurat algorithm does not handle distributions
        -- that wander off to infinity, so another distribution is needed
        -- to handle the last \"bin\" that stretches to infinity)
        zTailDist :: RVar t,
        -- |A copy of the uniform RVar generator for the base type,
        -- so that @Distribution Uniform t@ is not needed when sampling
        -- from a Ziggurat (makes it a bit more self-contained).
        zUniform :: t -> t -> RVar t,
        -- |The (one-sided antitone) PDF, not necessarily normalized
        zFunc :: t -> t,
        -- |A flag indicating whether the distribution should be
        -- mirrored about the origin (the ziggurat algorithm in
        -- its native form only samples from one-sided distributions.
        -- By mirroring, we can extend it to symmetric distributions
        -- such as the normal distribution)
        zMirror :: Bool
    }

-- |Sample from the distribution encoded in a 'Ziggurat' data structure.
{-# INLINE runZiggurat #-}
runZiggurat :: (Num a, Ord a, Storable a) =>
               Ziggurat a -> RVar a
runZiggurat Ziggurat{..} = go
    where
        go = do
            -- Select a bin (I) and a uniform value (U) from -1 to 1
            -- (or 0 to 1 if not mirroring the distribution).
            -- Let X be U scaled to the size of the selected bin.
            (i,u) <- zGetIU
            let x = u * zTable_xs ! i

            -- if the uniform value U falls in the area "clearly inside" the
            -- bin, accept X immediately.
            -- Otherwise, depending on the bin selected, use either the
            -- tail distribution or an accept/reject test.
            if abs u < zTable_x_ratios ! i
                then return $! x
                else if i == 0
                    then sampleTail x
                    else sampleGreyArea i x

        -- when the sample falls in the "grey area" (the area between
        -- the Y values of the selected bin and the bin after that one),
        -- use an accept/reject method based on the target PDF.
        sampleGreyArea i x = do
            v <- zUniform (zTable_ys ! (i+1)) (zTable_ys ! i)
            if v < zFunc (abs x)
                then return $! x
                else go

        -- if the selected bin is the "infinite" one, call it quits and
        -- defer to the tail distribution (mirroring if needed to ensure
        -- the result has the sign already selected by zGetIU)
        sampleTail x
            | x < 0     = fmap negate zTailDist
            | otherwise = zTailDist

-- |Build the tables to implement the \"ziggurat algorithm\" devised by
-- Marsaglia & Tang, attempting to automatically compute the R and V
-- values.
--
-- Arguments:
--
--  * flag indicating whether to mirror the distribution
--
--  * the (one-sided antitone) PDF, not necessarily normalized
--
--  * the inverse of the PDF
--
--  * the number of bins
--
--  * R, the x value of the first bin
--
--  * V, the volume of each bin
--
--  * an RVar providing the 'zGetIU' random tuple
--
--  * an RVar sampling from the tail (the region where x > R)
--
mkZiggurat_ :: (RealFloat t, Storable t, Distribution Uniform t) =>
               Bool -> (t -> t) -> (t -> t) -> Int -> t -> t ->
               RVar (Int, t) -> RVar t -> Ziggurat t
mkZiggurat_ m f fInv c r v getIU tailDist = z
    where
        z = Ziggurat
            { zTable_xs       = zigguratTable f fInv c r v
            , zTable_x_ratios = precomputeRatios (zTable_xs z)
            , zTable_ys       = Vec.map f (zTable_xs z)
            , zGetIU          = getIU
            , zUniform        = uniform
            , zFunc           = f
            , zTailDist       = tailDist
            , zMirror         = m
            }

-- |Build the tables to implement the \"ziggurat algorithm\" devised by
-- Marsaglia & Tang, attempting to automatically compute the R and V
-- values.
--
-- Arguments are the same as for 'mkZigguratRec', with an additional
-- argument for the tail distribution as a function of the selected
-- R value.
mkZiggurat :: (RealFloat t, Storable t, Distribution Uniform t) =>
              Bool -> (t -> t) -> (t -> t) -> (t -> t) -> t ->
              Int -> RVar (Int, t) -> (t -> RVar t) -> Ziggurat t
mkZiggurat m f fInv fInt fVol c getIU tailDist =
    mkZiggurat_ m f fInv c r v getIU (tailDist r)
    where
        (r,v) = findBin0 c f fInv fInt fVol

-- |Build a lazy recursive ziggurat.  Uses a lazily-constructed ziggurat
-- as its tail distribution (with another as its tail, ad nauseum).
--
-- Arguments:
--
--  * flag indicating whether to mirror the distribution
--
--  * the (one-sided antitone) PDF, not necessarily normalized
--
--  * the inverse of the PDF
--
--  * the integral of the PDF (definite, from 0)
--
--  * the estimated volume under the PDF (from 0 to +infinity)
--
--  * the chunk size (number of bins in each layer).  64 seems to
--    perform well in practice.
--
--  * an RVar providing the 'zGetIU' random tuple
--
mkZigguratRec :: (RealFloat t, Storable t, Distribution Uniform t) =>
                 Bool -> (t -> t) -> (t -> t) -> (t -> t) -> t ->
                 Int -> RVar (Int, t) -> Ziggurat t
mkZigguratRec m f fInv fInt fVol c getIU =
    mkZiggurat m f fInv fInt fVol c getIU
        (fix (mkTail m f fInv fInt fVol c getIU))
    where fix f = f (fix f)

mkTail m f fInv fInt fVol c getIU nextTail r = do
    x <- rvar (mkZiggurat m f' fInv' fInt' fVol' c getIU nextTail)
    return (x + r * signum x)
    where
        fIntR = fInt r

        f' x    | x < 0     = f r
                | otherwise = f (x+r)
        fInv' = subtract r . fInv
        fInt' x | x < 0     = 0
                | otherwise = fInt (x+r) - fIntR
        fVol' = fVol - fIntR

zigguratTable :: (Fractional a, Storable a, Ord a) =>
                 (a -> a) -> (a -> a) -> Int -> a -> a -> Vector a
zigguratTable f fInv c r v = case zigguratXs f fInv c r v of
    (xs, excess) -> pack xs
    where
        epsilon = 1e-3*v

zigguratExcess f fInv c r v = snd (zigguratXs f fInv c r v)

zigguratXs f fInv c r v = (xs, excess)
    where
        xs = Prelude.map x [0..c] -- sample c x
        ys = Prelude.map f xs

        x 0 = v / f r
        x 1 = r
        x i | i == c = 0
        x (i+1) = next i

        next i = let x_i = xs!!i
                  in if x_i <= 0
                        then -1
                        else fInv (ys!!i + (v / x_i))

        excess = xs!!(c-1) * (f 0 - ys !! (c-1)) - v

precomputeRatios zTable_xs = sample (c-1) $ \i -> zTable_xs!(i+1) / zTable_xs!i
    where
        c = Vec.length zTable_xs

-- |I suspect this isn't completely right, but it works well so far.
-- Search the distribution for an appropriate R and V.
--
-- Arguments:
--
--  * Number of bins
--
--  * target function (one-sided antitone PDF, not necessarily normalized)
--
--  * function inverse
--
--  * function definite integral (from 0 to _)
--
--  * estimate of total volume under function (integral from 0 to infinity)
--
-- Result: (R,V)
findBin0 :: (RealFloat b) =>
            Int -> (b -> b) -> (b -> b) -> (b -> b) -> b -> (b, b)
findBin0 cInt f fInv fInt fVol = (r,v r)
    where
        c = fromIntegral cInt
        v r = r * f r + fVol - fInt r

        -- initial R guess:
        r0 = findMin (\r -> v r <= fVol / c)
        -- find a better R:
        r = findMinFrom r0 1 $ \r ->
            let e = exc r
             in e >= 0 && not (isNaN e)

        exc x = zigguratExcess f fInv cInt x (v x)

instance (Num t, Ord t, Storable t) => Distribution Ziggurat t where
    rvar = runZiggurat
http://hackage.haskell.org/package/random-fu-0.0.3.2/docs/src/Data-Random-Distribution-Ziggurat.html
My program works; the only problem I am having is that when it reads the end of file, it adds the last score to the total again. Here is my program. What have I done wrong with .eof()?
#include <iostream>
#include <fstream> // File Stream Library
#include <cstdlib> // for exit()
using namespace std;
int main()
{
char team;
int numb, k=0, n=0;
ifstream gamefile;// ifstream variable name declaration
// open file
gamefile.open("/home1/c/a/acsi201/data/game");
//if the file fails to open it will return this statement
if(gamefile.fail())
{
cout << "Input file opening failed.\n";
exit(1);
}
/*while loop keeps reading file till the end of file with if else
statements to compile the scores the winning team would be displayed
first.*/
while(!gamefile.eof()) // while loop
{
gamefile >> team >> numb; //reading the info provided by the file
/* begining of if statement the character K stand for the Knicks
and if team equals K than it will return knicks points scored
otherwise it will return the Nets score.*/
if(team == 'K')
k = k + numb;
else
n = n + numb;
if(k > n)
/* Another if statement for winning team to go first.If the Knicks
score is greater than the Nets than the Knicks score will be
read first otherwise the Nets score will be read first.*/
cout << "Knicks " << k << " Nets " << n << endl;
else
cout << "Nets " << n << " Knicks " << k << endl;
}
// end of whileloop
gamefile.close();// closing of file previousely opened
cout << "Good Game! \n";
/* end of program since a output file was not required, to see output
after compilation run file hw5 < result than at prompt type more
result*/
}
http://cboard.cprogramming.com/cplusplus-programming/27248-eof-will-not-work-iostream.html
Estimated read time: about an hour.
Looking for the attack combos necessary to score the “Combo Specialist” achievement/trophy? Feel free to skip to the list of necessary combos that has been repeatedly tested on new user profiles.
There are a great number of technologies and patterns that programmers can employ today, such as ASP, AJAX, MVC, MVP, MVVM, MSMQ, SOAP, XML, WSE, WCF, WPF, and more. All of these abbreviations and acronyms may leave you asking, “WTF?” Despite the various tutorials, books, blogs, and other resources we can learn from, there is something to be said for getting your hands dirty and uncovering hidden gems that are yet to be documented.
This article will demonstrate a complete Windows desktop application written in C# via Windows Presentation Foundation that connects to a Web service via Windows Communication Foundation. There are some key concepts that I will go into with some detail. Most of these concepts are beyond the typical encounters you may find posted elsewhere about these topics. An idea of what will be covered is listed below.
You may download the entire solution source code for this demonstration, or download just the standalone application files.
No matter which download you choose, you will need to have the .NET Framework 3.5 with SP1 successfully installed on your computer in order to run the client application. In order to successfully build and run the solution code, you should also make sure you have the following installed and ready.
As programmers, we code…a lot. Many of us also play video games as a way to unwind and appreciate other coders’ virtual masterpieces. This past holiday, I received a new game that was highly anticipated; Prince of Persia. It is a beautifully cel-rendered game with many acrobatic puzzles to solve and a robust combat system used on the few enemies found throughout.
Soon after I began playing the game, I quickly realized that the completionist in me felt compelled to collect all the trophies/achievements (goals) that are offered. These vary from beating the game in minimal time to performing a perfect 14-hit attack combo. I am not too proud to search online for a guide when I get stumped, but I do like to give things a shot myself first.
I was able to satisfy just about all the requirements for the available goals on my own. However, there was one goal that eluded me (two if you count the fact that the ultimate one was to get all others). The “Combo Specialist” goal is not stated clearly. It simply asks that you uncover all possible attack combos. I studied combinatorics quite a bit in college, and a quick glance at the combo list/tree provided in the game menus raised one of my eyebrows. I knew that there were many hundreds of combinations and found it difficult to believe that the game creators would require such a behemoth task for one of the lower-rated goals.
To make matters worse, there is an error on the combo list screen that the player references for the various combos in each group and for whether each combo leads into another group, in essence forming a combo chain. One of the combo groups (Elika’s magic) is shown in the list as leading some combos into the Aerial group, while the combo tree shows that it should be the Acrobatic group (the tree shows the correct path).
Figure 1: The combo list and combo tree screens found in-game are inconsistent.
Which combinations of the attack combos will actually award you the goal? How many total combinations are there? Why does the official game guide not list the exact combos necessary to get the goal? How will I satisfy my inner completionist when several online searches produce no valid theories on a resolution to my problem?
I decided to write a quick program to answer some of the questions I had about the “Combo Specialist” goal. It turns out that there are 1,602 possible attack combinations.
What if Ubisoft Entertainment decided to contract out development on a smart client application that calculated the combinations for the user and provided a slick interface to hype the game up for the upcoming sequels (and movie)? The application would connect to a service on the Web to get the base combo definitions, user interface skin, images, and other assets. A code provided with a player’s copy of the user’s manual would allow him/her to download the game’s files and unlock the secrets to the fighting system that is planned to be used for the entire trilogy. When the next game in the trilogy is released, the player can enter the code from that manual to get the next game’s files. Updates to the files could be pulled automatically based on version checks with the service to fix any errors, or to add combos/goals for newly available downloadable content (DLC).
I imagine Ubisoft would hire a company like Guru Games. Though a fictional development company that specializes in satellite applications for the gaming industry, Guru Games is the company indicated as the creator of the “Game Attack Combos” application to be discussed hereafter. The lack of a more specific name is to avoid copyright infringement (real and imaginary) and to prevent restricting the application to just the Prince of Persia titles. It is possible that Ubisoft will create other games that use this new and innovative attack combo system.
Suppose in each copy of the Prince of Persia game, a code was printed on the manual that allowed the player to download information for a free Windows client that calculated all the possible combinations and even indicated which were necessary for trophies or achievements. A service on one of the company’s websites would accept connections from the client and send a package of files, when requested, based on a validated code. The client application then skins the user interface with assets found in the package and calculates the permutations of attack combos to display a checklist for the user.
This demonstration application uses Windows Presentation Foundation for the client user interface and connects to a Windows Communication Foundation service over HTTP for game combo package files.
Sometimes, an application begins as a last-minute idea. It may be that the creators want to gauge the popularity of the application before investing more time and money into furthering it. It is times like these that the developers should take care to think about future-proofing the code base. We should always be thinking of how we will refactor our code at a later time. This thought-process will be demonstrated in the Game Attack Combo application here. I will point out where it will be good to implement a different pattern if the program gets an official upgrade or grows into a more complicated beast.
Nearly all of the projects in the demo solution reference a common logic library. It is this library that handles the management and manipulation of all things combo and package related. I encourage you to explore the logic library found in the code download at the beginning of this article. There are a number of abstractions at work when parsing the very simple combo definition files to transform them into the many hundreds of actual combinations possible. The only bits I will go into here are some of the issues with regards to packages.
When I refer to a package, I mean those found in the System.IO.Packaging namespace. If you are not familiar with this namespace, it contains classes that allow you to organize several files into a single container file. The service application in the demo solution uses packaging to deliver the necessary assets over the Web, and the client application needs to get the assets from the requested packages. The client also uses the packages’ metadata to manage the game title, version, and more.
Each file in a package may be compressed to reduce the overall size of the resulting file. In fact, the default implementation of the abstract Package class is a ZipPackage. There are a couple of quirks that you may find when working with packages. When you happen upon a runtime exception when testing your code, it can be very difficult to figure out what the problem is due to the embedded streams. Each package part (file in the container) is accessed via a Stream.
Additionally, I discovered that binary files that are added to a package do not compress well. In fact, my tests compressing image files yielded an increase in storage size. Therefore, I only chose to compress text files in packages. I chose to leave all binary files uncompressed (images in this case).
Another issue to consider is how to extract a file from a package to be used after the package is closed. We all know that when we open files in our code, we should close the file as soon as possible in order to free the accompanying resources. This basic principle can actually cause trouble for a developer that needs to get at one of the parts inside a package and keep that part around beyond the scope of the package read. Luckily, the solution is simple once identified. The part must be copied to memory from the package before the package is closed. The project provided here copies several parts to MemoryStreams for later use.
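The copy itself is straightforward. The following is a sketch of the idea, not the article's actual code: the part is buffered into a MemoryStream with a manual read loop (Stream.CopyTo does not exist on .NET 3.5), and the returned stream remains valid after the package is closed. The method name and the part-URI parameter are assumptions.

```csharp
using System;
using System.IO;
using System.IO.Packaging;

static class PackageExtraction
{
    // Copies a single part out of a package so it can still be used
    // after the package (and its underlying file) has been closed.
    public static MemoryStream ExtractPart(string packagePath, Uri partUri)
    {
        MemoryStream copy = new MemoryStream();
        using (Package package = Package.Open(packagePath, FileMode.Open, FileAccess.Read))
        using (Stream partStream = package.GetPart(partUri).GetStream())
        {
            // Manual buffer loop; works on .NET 3.5.
            byte[] buffer = new byte[4096];
            int read;
            while ((read = partStream.Read(buffer, 0, buffer.Length)) > 0)
                copy.Write(buffer, 0, read);
        }
        copy.Position = 0; // rewind for the consumer
        return copy;       // valid even though the package is closed
    }
}
```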
Service-oriented programming has come a long way over the past couple years. In particular, Windows Communication Foundation (WCF) has managed to break down quite a few barriers for developers of distributed applications. With it, we can create many different types of services from a common programming model. In this scenario, Guru Games decided to implement a service accessible via HTTP (a Web service). There is much more to learn about WCF than you will find in this article. Please, consult the Microsoft’s Developer Network section on WCF for further learning.
The easiest way to setup a new WCF project is to use the project template provided in Visual Studio 2008. Right-click your solution and select Add | New Project.... Select the Web template named, “WCF Service Application”. Name it and click the OK button to get a nice shell for a WCF service in your new project.
Figure 2: Adding a new WCF project is a snap.
A WCF service is made up of two entities. The first is the service contract in the form of an interface. This interface is what a client will use to identify the available operations that the service provides. The Game Attack Combos solution only has a single service with one operation. It offers a way for a client to download a combo package file for a specified game code and current package version on the client, if any.
[ServiceContract(Namespace = "…")]
public interface IComboPackagesService {
[OperationContract]
byte[] DownloadComboPackage(string gameCode, string clientPackageVersion);
}
With the contract defined, we can get to work on implementing the interface in a class. The class verifies the specified game code and retrieves the corresponding package file name from a database, opens the package file, checks the version against the specified client version, and if newer, loads the file to a binary array for return to the caller. For the complete code of this method, please refer to the code download at the beginning of this article.
[AspNetCompatibilityRequirements(
RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
public byte[] DownloadComboPackage(string gameCode, string clientPackageVersion) {
// Prepare a binary data array.
byte[] FileData = null;
...
// Get the file name for the specified code.
string PackageFileName = null;
if (!string.IsNullOrEmpty(gameCode)) {
PackageFileName = GetComboPackageFileNameByGameCode(gameCode);
}
if (!string.IsNullOrEmpty(PackageFileName)) {
...
// Read the entire file into the array.
using (FileStream PackageFile = File.Open(
PackageFileName,
FileMode.Open,
FileAccess.Read,
FileShare.ReadWrite
)) {
// Open the package to get its version.
Version CurrentVersion = null;
using (ComboPackage Package = new ComboPackage(PackageFile)) {
CurrentVersion = new Version(Package.Version);
}
// Check the version of the combo package file against the current
// one specified.
if (CurrentVersion > ClientVersion) {
// Copy the combo package file to the data array.
FileData = StreamHelper.CopyStreamToArray(PackageFile);
}
}
}
// Return the file data.
return FileData;
}
The implementation class also defines an attribute that allows it to act in ASP.NET compatibility mode. This is used here to make it easy to map virtual paths to physical package files stored in a folder with the service.
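Under ASP.NET compatibility mode, the service can resolve those virtual paths with the regular hosting API. A small sketch of the idea; the ~/Packages folder name and the helper class are assumptions, not taken from the article:

```csharp
using System.Web.Hosting;

static class PackagePaths
{
    // Maps a package file name (e.g. one retrieved from the database)
    // to a physical path under a hypothetical ~/Packages folder of the
    // service site. Requires ASP.NET compatibility mode, as configured
    // for this service.
    public static string Resolve(string packageFileName)
    {
        return HostingEnvironment.MapPath("~/Packages/" + packageFileName);
    }
}
```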
Lastly, a small change must be made to the default configuration of the service in order to support larger transfers, like the download of package files. The necessary changes involve defining a new binding where we can indicate the maximum message size and message encoding. Then, the service endpoint must be pointed to the new binding configuration. Also, in order to support the ASP.NET compatibility mode mentioned before, the service must be configured to enable it.
<system.serviceModel>
<bindings>
<wsHttpBinding>
<binding name="wsWithMtom"
maxReceivedMessageSize="2097152"
messageEncoding="Mtom" />
</wsHttpBinding>
</bindings>
<services>
<service ...>
<endpoint address="" binding="wsHttpBinding"
bindingConfiguration="wsWithMtom"
contract="GG.GameAttackCombos.Services.IComboPackagesService">
...
</endpoint>
...
</service>
</services>
<serviceHostingEnvironment aspNetCompatibilityEnabled="true" />
...
</system.serviceModel>
The service is now ready to be compiled and may be deployed to a location on the Web. If you have Visual Studio 2008 installed, you can run the service from the built-in Web server. The included project in the code download has a static port assigned to the service application to make it easier to connect to from the client. Now it is time to look at the client application.
To be a smart application, it needs to do more than just user interface work. The client application in this demo does the heavy lifting of calculating all the possible attack combinations, with the help of the logic library that contains all the appropriate code. As a hybrid application, it still depends on the remote service to get all the necessary files and updates.
Games are the epitome of multimedia entertainment. Therefore, a program that accompanies a game must share similar attributes. The desktop client developed for this solution employs Windows Presentation Foundation (WPF) to provide a better looking application in much less time than other frameworks. There is a plethora of knowledge to take in for WPF, and Microsoft’s Developer Network section on WPF is a good place to start.
I will show you some standard techniques in a developer’s arsenal with WPF. However, I will primarily focus on how to extend some of these techniques into more advanced scenarios that help solve issues that you may actually encounter when writing WPF applications for “real world” problems.
When the client application is initially launched, it is nothing special to look at. In fact, the default skin that is applied makes it look just like a normal Windows application. That will change once a game combo package is downloaded and opened.
Figure 3: Nothing special here…yet.
Opening a new game combo package requires the user to click the Open menu item to launch the OpenPackageWindow. From this window, the user is shown any games that are already downloaded to his/her computer. At first, there are no existing games. Notice the watermark for the new game code text box in Figure 4.
Figure 4: A new game code must be entered first.
The WatermarkTextBox custom control is created from a very simple code file and a style resource. First, a quick look at the control’s code file reveals a class that descends from TextBox. This new class has a static and read-only field of type DependencyProperty. It is in this field that a new dependency property is registered for the control’s watermark text. I highly recommend that you take some time to learn about dependency properties, if you are a newcomer to WPF. They are one of the pillars that hold the entire WPF framework up high. One of CodeProject’s well known WPF authors gives a very detailed look at dependency properties, if you are interested in digging even deeper.
As a convenience, I have also included a property for WatermarkText that gets and sets the value of the dependency property. This is not a requirement, but is standard protocol when developing custom controls. Such convenience properties are often referred to as dependency property “wrappers”. By offering a property wrapper, the value of the dependency property can be easily set from code. In addition, the XAML compiler depends on these wrapper properties. You will definitely want to include the wrapper properties if you hope to allow developers to set the dependency properties via XAML.
Finally, a static constructor is added that overrides the new control’s default style key. This allows the control to have a default style defined for it. When overriding a WatermarkTextBox’s style in a custom skin later, having this default style to build from will prevent the need to redefine the control’s template.
public class WatermarkTextBox : TextBox {
public static readonly DependencyProperty WatermarkTextProperty =
DependencyProperty.Register(
"WatermarkText",
typeof(string),
typeof(WatermarkTextBox),
new PropertyMetadata(string.Empty)
);
public string WatermarkText {
get { return (string)GetValue(WatermarkTextProperty); }
set { SetValue(WatermarkTextProperty, value); }
}
static WatermarkTextBox() {
DefaultStyleKeyProperty.OverrideMetadata(
typeof(WatermarkTextBox),
new FrameworkPropertyMetadata(typeof(WatermarkTextBox))
);
}
}
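Once the control is compiled into the project, it can be placed in markup like any TextBox. A hypothetical usage sketch; the namespace mapped to the local prefix and the watermark string are assumptions:

```xml
<Window xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        xmlns:local="clr-namespace:GG.GameAttackCombos.Client.Controls">
    <!-- WatermarkText is the wrapper property registered in Listing 4. -->
    <local:WatermarkTextBox WatermarkText="Enter a new game code" Width="200" />
</Window>
```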
All the code in Listing 4 does is add a dependency property, a wrapper property, and a single-statement static constructor. How will the new text box display the watermark and hide it when it receives focus or text is entered? That is where a default style comes in.
WPF looks for generic and theme-specific resources in a Themes folder in the root of the project. These resources may be provided to offer a default style for the controls created in your application (or control library). I added a resource dictionary named, “Generic.xaml”, in the Themes folder. It contains the default style for the WatermarkTextBox.
<ResourceDictionary
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:
I used one of the few great tools freely available on the Internet to peek into the default templates and styles of a standard TextBox. Reflector comes to the rescue. This time, I used the BAML Viewer add-in with Reflector. With it, I am able to see the BAML (binary XAML) for the resource dictionaries used for theming all the controls in WPF. I used it to deconstruct the TextBox control, so I could add a simple TextBlock that displays the specified watermark text of the custom control. It is essential to use the same names of child controls (parts) used by the default template of a built-in control when overriding it. Doing so allows the control to act as it normally does (e.g. a text box needs to work with a part named “PART_ContentHost” to set the content as needed).
I also added a multi-trigger that will set the watermark text block to be visible only when the control is not focused and has no text entered. In the essence of article length, I leave the remaining template definition for the reader to study.
After a user enters the fictional code provided with their game manual (e.g. 80c0-9c76-cfc7-440a-9261) and presses the Open button, the open package window attempts to connect to our previously created service to download the requested combo package. A service reference must be added to the project before such an attempt can be made. Right-click your project and select Add Service Reference.... The dialog that appears allows you to enter the address of the service you would like to reference. For this demo, you can simply click the Discover button to add a reference to the service project in the solution. Be sure to enter a meaningful namespace and press the OK button.
Figure 5: Add a service reference to a project in the same solution.
Some changes will need to be made again to the service bindings’ configuration. The changes are similar to those made in the configuration file of the service application. Please, refer to the client application’s App.config file to see the necessary changes. To summarize, we have to create a custom binding so we can change the maxReceivedMessageSize and readerQuotas/maxArrayLength attribute values. This allows for larger messages to be received from the service (i.e. a file download).
The OpenPackageWindow can now perform an asynchronous call to one of the project’s static helper methods (see ComboPackageHelper.DownloadNewPackage for the details) that connects to the service and downloads the combo package for the entered game code. After downloading the file, the package’s metadata is updated with the entered game code as a reference for updates later. It is then saved to the user’s isolated storage.
After a successful download of the new file, the main window opens the package for processing. First, the name of the file being opened from isolated storage is stored in the application’s properties collection for later reference. Next, the combo definitions XML file is extracted from the package and loaded into a nice class provided by the logic library. The ComboDefinitions class parses the XML in the definition file and generates an object graph that represents the definitions.
The definitions are then flattened into actual combo command sequences and bound to the main list box in the window. Next, the skin resource file is extracted from the package as an in-memory copy and the package is closed. Finally, the skin is loaded into the application’s resources to give the client a totally new look.
With an in-memory copy of the skin file read from the package, the client application replaces its default skin (which is provided mainly to avoid errors with styles reference by key on certain controls) with the game-specific one. This is easier than you may first think. All you really need is a stream containing XAML for a resource dictionary that contains all the styles you would like to apply to the user interface. With such a stream, the code is relatively simple.
public void LoadSkin(Stream skinStream) {
// Create a new resource dictionary for the skin from the specified stream
// via an XAML reader.
ResourceDictionary NewSkin = XamlReader.Load(skinStream) as ResourceDictionary;
// Replace the last dictionary with the new skin.
Resources.MergedDictionaries.RemoveAt(Resources.MergedDictionaries.Count - 1);
Resources.MergedDictionaries.Add(NewSkin);
}
Listing 6 shows the very simple LoadSkin method defined on the client’s App class. A quick call to XamlReader.Load, passing the appropriate stream as an argument is all that is needed to initialize a new ResourceDictionary instance. From there, I remove the last dictionary in the application’s resources and add the new one. Believe it or not, that’s it! The user interface is immediately updated to reflect the new style and template definitions present in the skin loaded.
Figure 6: The main window with the opened package's skin applied.
I slipped one by some of you. Others may be asking, “How in the world did the background image get loaded from thin air?” Those of you intimate with XAML and skinning with WPF know that a background image, like the one in Figure 6, can really only be referenced in XAML by URI. That is, a Web address, physical file path, or resource path is the easiest way to reference an image. The image above is dynamically loaded from the same package file the skin was extracted from.
The client application, being a smart app, has a class included for the sole use of loading a resource stream by name from the currently viewed package. The CurrentSkinResource class defines a method for loading a resource with a specific name that is related to the skin in the package.
public Stream LoadResourceByName(string resourceName) {
...
// Open any current combo package.
using (ComboPackage Package = App.Current.OpenCurrentComboPackage()) {
if (Package != null) {
// Open the requested skin resource from the combo package as a copy.
LastStreamRequested = Package.OpenSkinResourceStream(resourceName, true);
} else {
throw new ApplicationException(
"There is no current combo package being viewed."
);
}
}
return LastStreamRequested;
}
The code in Listing 7 opens the currently viewed combo package. Then, it opens the appropriate skin resource from the package into an in-memory stream. Finally, the memory stream is returned; to be consumed by an image source in the calling skin definition.
The skin XAML calls this method by way of an ObjectDataProvider. These are perfect for initializing an instance of a class and calling one of its methods. The catch is how to bind an image source to it properly.
<ObjectDataProvider
    x:Key="LoadSkinResource"
    ObjectType="{x:Type local:CurrentSkinResource}"
    MethodName="LoadResourceByName">
<ObjectDataProvider.MethodParameters>
<system:String>PrinceOfPersia2008Background.png</system:String>
</ObjectDataProvider.MethodParameters>
</ObjectDataProvider>
<Style x:Key="GameBackground">
<Setter Property="Background">
<Setter.Value>
<ImageBrush
AlignmentX="Left" AlignmentY="Top"
Stretch="None" TileMode="None">
<ImageBrush.ImageSource>
<BitmapImage BaseUri="{x:Null}" CacheOption="OnLoad">
<BitmapImage.StreamSource>
<Binding Source="{StaticResource LoadSkinResource}" />
</BitmapImage.StreamSource>
</BitmapImage>
</ImageBrush.ImageSource>
</ImageBrush>
</Setter.Value>
</Setter>
</Style>
Defining the ObjectDataProvider is simple enough. We give it a key, tell it the type of object to create, indicate the name of the method to call, and provide any parameters for the method. In this case we want to get a stream of the PrinceOfPersia2008Background.png image that is buried deep within the same package the skin was loaded from.
The style referenced by the GameBackground key sets the panel’s Background property. An image brush is necessary for our needs, so one is included with appropriate alignment, stretch and tiling. Setting the ImageSource property of the ImageBrush requires the use of a BitmapImage. These are usually used to load a bitmap image from file. This one uses its StreamSource property instead of UriSource. The StreamSource is primarily set via code, but we do not have that option with our skin definition. Instead, a binding is used with a source that references the ObjectDataProvider defined earlier. There are two very important additional settings that must be present for all this to work properly.
First, BitmapImage’s BaseUri property must be set to null. The documentation states that either UriSource or StreamSource must be set; however, there is apparently a bug present. When I set the StreamSource, no image is loaded. Thanks again to Reflector, a peek into the code of BitmapImage reveals that BaseUri and UriSource must be null in order for StreamSource to be used. The problem is, BaseUri defaults to an empty string (e.g. “”), not null. Therefore, setting it to null is a must in this case. This special case is only an issue from XAML.
Second, BitmapImage’s CacheOption property should be set to “OnLoad”. This ensures that the image source is loaded immediately and the stream is disposed of. Since the LoadResourceByName method is creating a MemoryStream and releasing it into the wild, that stream really should be disposed of instead of left for garbage collection. This setting makes sure of that for us.
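The same stream-loading pattern is simpler in code-behind, where the BeginInit/EndInit pair and the cache option are explicit. A sketch assuming an already-opened resource stream; the helper class and method name are mine, not the article's:

```csharp
using System.IO;
using System.Windows.Media.Imaging;

static class SkinImages
{
    // Loads a BitmapImage from an in-memory stream, decoding it eagerly
    // (CacheOption.OnLoad) so the stream can be disposed right afterwards.
    public static BitmapImage FromStream(Stream imageStream)
    {
        BitmapImage bitmap = new BitmapImage();
        bitmap.BeginInit();
        bitmap.CacheOption = BitmapCacheOption.OnLoad; // decode now, not lazily
        bitmap.StreamSource = imageStream;             // no BaseUri workaround needed in code
        bitmap.EndInit();
        bitmap.Freeze(); // safe to share once frozen
        return bitmap;
    }
}
```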
It seems like a lot of work, but it is imperative to get our background image loaded from the package via the skin definition. Next, we can finally look at how all those combo sequences are displayed as gaming platform images.
List boxes are no longer simple windowed views into a basic list of items. Oh, no. They can be much more than that with WPF. In fact, like all WPF controls, you can completely customize its appearance via the control template. However, controls that display a collection of items have an additional template that is used for each item. This is a very good thing for this application. It needs to display attack combos in a meaningful way to the user. Ideally, it should present each sequence of commands as a sequence of button images.
Traditional presentation methods would require custom painting the entire control, or at least attaching custom item draw handlers. With WPF, It is a simple matter of adding a custom item template to the list box.
<Window.Resources>
<ResourceDictionary>
...
<local:DrawingResourceKeyConverter x:Key="DrawingResourceKeyConverter" />
</ResourceDictionary>
</Window.Resources>
...
<ListBox Grid.Row="…">
<ListBox.ItemTemplate>
<DataTemplate>
<StackPanel Orientation="Horizontal">
<CheckBox x:Name="…" IsChecked="{Binding Path=IsCompleted}" />
<ItemsControl IsTabStop="False" VerticalAlignment="Center"
ItemsSource="{Binding Path=CommandSequence}">
<ItemsControl.ItemsPanel>
<ItemsPanelTemplate>
<StackPanel IsItemsHost="True" Orientation="Horizontal" />
</ItemsPanelTemplate>
</ItemsControl.ItemsPanel>
<ItemsControl.ItemTemplate>
<DataTemplate>
<Image Height="16" Margin="5,2,0,2"
ToolTip="{Binding Path=MappedButton.Id}">
<Image.Source>
<DrawingImage Drawing="{Binding
Path=MappedButton.IconKey, Mode=OneWay,
Converter={StaticResource
DrawingResourceKeyConverter}}" />
</Image.Source>
</Image>
</DataTemplate>
</ItemsControl.ItemTemplate>
</ItemsControl>
</StackPanel>
</DataTemplate>
</ListBox.ItemTemplate>
</ListBox>
A list box’s ItemTemplate is where you can instruct the list box how to display each item it holds. The one in Listing 9 uses a StackPanel to display a CheckBox and embedded ItemsControl. The check box is bound to each item’s IsCompleted property. An ItemsControl is a very simple control that displays a collection of items; much like a list box, but without the user interactivity. This one is bound to each combo’s CommandSequence list and is instructed to use a horizontal StackPanel to hold those Command items. Similar to its parent, a custom ItemTemplate is provided that displays an Image control. The end effect is a horizontal list of images that correspond to the mapped button of each command in the combo sequence.
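For reference, the binding paths in Listing 9 imply a data model shaped roughly like the classes below. The property names come straight from the bindings; everything else (class names aside from `FlattenedCombo`, member types, accessibility) is an assumption for illustration:

```csharp
// Hypothetical shapes inferred from Listing 9's binding paths; the real
// classes in the project may differ (e.g., they likely raise change
// notifications for the UI). Note that Button here is the article's own
// type, not System.Windows.Controls.Button.
public class FlattenedCombo
{
    public bool IsCompleted { get; set; }              // bound to the CheckBox
    public List<Command> CommandSequence { get; set; } // inner ItemsControl.ItemsSource
}

public class Command
{
    public Button MappedButton { get; set; }           // maps a command to a button
}

public class Button
{
    public string Id { get; set; }                     // shown as the Image tooltip
    public string IconKey { get; set; }                // drawing-resource key for the converter
}
```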
There are times when the type of data you are binding to a property is not compatible with that property’s type. An example of this can be seen in Listing 9. The Image control used as the item template of the inner ItemsControl uses a DrawingImage as its source. Furthermore, its Drawing property is bound to “MappedButton.IconKey”. Each Command object is mapped to a Button instance via its MappedButton property. Every Button has a corresponding IconKey that holds the key of a drawing resource included in the application. However, the IconKey is a string and the Drawing property expects…well, it expects a Drawing instance. A value converter is the vital link between these two incompatible types.
A value converter simply converts a value to a target type. If necessary, it will convert a value back to its original source type as well. To create a value converter, you must create a class that implements the IValueConverter interface. The interface contracts the Convert and ConvertBack methods. The Drawing’s binding in Listing 9 utilizes the custom DrawingResourceKeyConverter defined in the window’s resources.
[ValueConversion(typeof(string), typeof(Drawing))]
public class DrawingResourceKeyConverter : IValueConverter {
public object Convert(object value, Type targetType, ...) {
if (value != null) {
if (value is string) {
if (targetType == typeof(Drawing)) {
// Get the resource with a key specified as the value.
return (Drawing)Application.Current.Resources[value];
}
...
}
...
} else {
return null;
}
}
public object ConvertBack(object value, Type targetType, ...) {
throw new NotImplementedException();
}
}
The DrawingResourceKeyConverter class converts a string that contains a key into a Drawing after retrieving it from the application’s resources. This makes deeply embedded data bindings that much easier to manipulate with very little code.
The user may select from a list of buttons/keys to change the icons used to draw the combo command sequences. These icon themes are grouped by gaming platform in a ComboBox. This is a great way to assist the user in making a selection. It is also very easy to achieve grouping on the collection before binding it to the control’s ItemsSource.
// Load the button sets.
ButtonSets = ButtonSet.LoadButtonSets(ButtonSetsFile);
// Update the UI with a grouped view of the button sets.
ICollectionView View = CollectionViewSource.GetDefaultView(ButtonSets);
View.GroupDescriptions.Add(new PropertyGroupDescription("Platform"));
cmbButtonSets.ItemsSource = ButtonSets;
However, some of the icon themes are named the same, but under different platforms. This is fine in the drop-down list, but not in the selection box (the area that shows your current selection when the drop-down list is not visible). If a user chooses “Buttoned” under “Xbox 360”, the selection box would normally show only “Buttoned”. To better serve the user, a DataTemplateSelector is assigned to the combo box’s ItemTemplateSelector to choose a more expressive data template for the selection box.
<Window.Resources>
<ResourceDictionary>
<DataTemplate x:Key="ButtonSetByName">
<TextBlock Text="{Binding Path=Name}" />
</DataTemplate>
<DataTemplate x:Key="ButtonSetByPlatformAndName">
<StackPanel Orientation="Horizontal">
<TextBlock Text="{Binding Path=Platform}" />
<TextBlock Text=": " />
<TextBlock Text="{Binding Path=Name}" />
</StackPanel>
</DataTemplate>
<local:ButtonSetsTemplateSelector x:Key="ButtonSetsTemplateSelector" />
...
</ResourceDictionary>
</Window.Resources>
...
<ComboBox Grid.
<ComboBox.GroupStyle>
<GroupStyle>
<GroupStyle.HeaderTemplate>
<DataTemplate>
<TextBlock
Text="{Binding Path=Name}"
Style="{DynamicResource GroupHeader}" />
</DataTemplate>
</GroupStyle.HeaderTemplate>
</GroupStyle>
</ComboBox.GroupStyle>
</ComboBox>
First, the two data templates to select from are defined in the window’s resources section. One template simply displays the button set name; the other template stacks the platform and name, separated by a colon. Next, a definition for the ButtonSetsTemplateSelector is added to the resources section also. Finally, the ComboBox’s ItemTemplateSelector property is set to the aforementioned template selector resource.
The custom DataTemplateSelector class overrides the SelectTemplate method. The container argument passed to it is used to decide which data template to return. If the container is a ContentPresenter with a ComboBox as its TemplatedParent, the data template with combined platform and name is returned. In all other cases, the basic template is returned. The content presenter’s TemplatedParent will be a ComboBoxItem for the drop-down list, instead of the actual ComboBox.
public class ButtonSetsTemplateSelector : DataTemplateSelector {
public override DataTemplate SelectTemplate(object item, DependencyObject container) {
string ResourceKey = "ButtonSetByName";
// Get the main window.
Window Window = Application.Current.MainWindow;
// Test if the container is a ContentPresenter.
ContentPresenter Presenter = container as ContentPresenter;
if (Presenter != null) {
ComboBox Combo = Presenter.TemplatedParent as ComboBox;
if (Combo != null) {
ResourceKey = "ButtonSetByPlatformAndName";
}
}
return Window.FindResource(ResourceKey) as DataTemplate;
}
}
Now that the application has a nice new skin from the recently downloaded game combo package, even the old OpenPackageWindow looks a lot better. In fact, after a couple existing games are downloaded, it displays them in a list with a nice icon for the user to select.
Figure 7: The OpenPackageWindow displays the existing downloaded games.
The game combo package icons are displayed with the help of another value converter. When the list of existing games is built, the icon is read from the package and stored in a byte array. A list of the titles and icon data is bound to the list box. The array of icon data is directly bound to an Image control with the help of another value converter defined in the window’s resources section.
<Window.Resources>
<local:BinaryImageConverter x:Key="BinaryImageConverter" />
</Window.Resources>
...
<ListBox Name="lbExistingGames" ...>
<ListBox.ItemsPanel>
<ItemsPanelTemplate>
<WrapPanel IsItemsHost="True" Orientation="Horizontal" />
</ItemsPanelTemplate>
</ListBox.ItemsPanel>
<ListBox.ItemTemplate>
<DataTemplate>
<StackPanel Margin="5">
<Border BorderBrush="Black" BorderThickness="1"
Width="72" Margin="0,0,0,5">
<Image
Source="{Binding Path=IconData,
Converter={StaticResource BinaryImageConverter}}" />
</Border>
<TextBlock Text="{Binding Title}" />
</StackPanel>
</DataTemplate>
</ListBox.ItemTemplate>
</ListBox>
Once again, a value converter is implemented to convert the icon data into an image source. This converter loads the byte array into a MemoryStream, creates a new BitmapImage object, sets its StreamSource property to the in-memory stream, and returns the image source as the converted target object.
[ValueConversion(typeof(byte[]), typeof(ImageSource))]
public class BinaryImageConverter : IValueConverter {
public object Convert(object value, Type targetType, ...) {
if (value != null) {
if (value is byte[]) {
if (targetType == typeof(ImageSource)) {
// Create a MemoryStream for the binary image data.
byte[] Data = (byte[])value;
MemoryStream Stream = new MemoryStream(Data);
// Create a BitmapImage to hold the stream data and return it.
BitmapImage Image = new BitmapImage();
Image.BeginInit();
Image.CacheOption = BitmapCacheOption.OnLoad;
Image.StreamSource = Stream;
Image.EndInit();
return Image;
}
...
}
...
} else {
return null;
}
}
...
}
I decided to create another game combo package with accompanying combo definitions, skin, and assets. As expected, the user must enter a valid game disc code in order to download the package (e.g. 720d-789a-41e7-97cc-01fc). I would guess this newer version of Prince of Persia will end up being another trilogy, so a third game and package would end up on Guru Games’ servers once the time was right.
Figure 8: The alleged Prince of Persia 2 game combo package loaded; featuring Elika on a new skin.
The client application included in this demo solution does not follow what many consider “proper patterns” (e.g. MVC, MVP, MVVM, DMVVM, etc.). One of the fastest-growing patterns for writing WPF applications is the Model-View-ViewModel (MVVM) pattern. I am a big fan of this pattern and believe in the great separation of concerns it offers to developers. I chose to not implement the pattern with this demo solution for a couple of reasons. First, I think it may be a great exercise to refactor the client project to fit the MVVM pattern in another article. Second, I did not want to distract from the information that this article is trying to relay; pure WPF techniques. Lastly, forgoing a proper pattern in the short term to refactor an application later happens in the “real world” all the time. As long as you are aware of the need to refactor if the application grows, you can design a project that is very easy to update when the time comes.
The solution found in the code download does contain other projects not mentioned in the article. A project dedicated to the data accessed by the WCF service project contains an ADO.NET Entity Data Model. The service application references this data project in order to query a SQL Server 2008 Express database when a package download is requested by the client.
In addition, a small console application is present. This program makes it easy to generate a package file containing all the necessary assets that are downloaded by the client application. I used this command-line tool to quickly produce the package files that the service application stores and delivers upon request. It handles the tedium of streaming each of the assets into a package structure that the client expects.
If you are like many Prince of Persia gamers out there, you have tried tons of different combinations of attack combos in the hopes of finding the right set that will award you that special achievement/trophy; the “Combo Specialist”. The good news is, you do not have to perform all 1,602 possible combos, like the task hint may lead you to believe. You only need to pull off 60 of them. Other guides will tell you 62, 63, or even more, but 60 is the magic number (remember, a combo is actually two or more attacks). I have tested this list at least 10 times with a new user profile each time. There are a few rules to keep in mind when trying to achieve this goal.
The application has to jump through a few hoops in order to properly calculate the combos necessary to get the “Combo Specialist” award. The theory behind the calculation is to include all the combos that are in the Combo List screen, but find the shortest path to reach each of them.
For example, the Elika/Magic group of combos has a few that lead to the Lift/Gauntlet group. However, getting to the Lift group by way of the Magic group is not the shortest path. Since you can get to the Lift group from the Normal group in one command, it’s shorter to build combos from there and skip the Magic group altogether. Of course, the Magic group must be included in order to cover all combos, but it can also be reached from the Normal group with one command. In order to include the combos in the Throw group, you can only get to them by way of the Acrobatic group. Therefore, the shortest path to the Throw group is from the Normal group to the Acrobatic group to the Throw group (i.e. A,G,...).
/// <summary>
/// Recursively builds a list of shortest path sequences that are necessary to complete
/// in order to receive the "Combo Specialist" achievement/trophy.
/// </summary>
/// <param name="group">The initial combo group to calculate from.</param>
/// <param name="startedSequence">Any started sequence to build onto.</param>
private void BuildShortestPathSequences(ComboGroup group, List<Command> startedSequence) {
// Create a new sequence of commands and initialize it with any started sequence.
List<Command> Sequence = null;
if (startedSequence == null) {
startedSequence = new List<Command>();
}
// Traverse each combo in the group, add it to the list of shortest path
// combos, and check its next group for the need to continue down that path.
Dictionary<ComboGroup, AttackCombo> CombosToContinue =
new Dictionary<ComboGroup, AttackCombo>();
foreach (AttackCombo Combo in group.AttackCombos) {
// Create a new build sequence for this combo from any started sequence
// plus its own command sequence.
Sequence = new List<Command>(startedSequence);
Sequence.AddRange(Combo.CommandSequence);
// Add this sequence as a new shortest path combo, if it has more than 1 command.
if (Sequence.Count > 1) {
ShortestPathCombos.Add(new FlattenedCombo(Sequence));
}
if (Combo.NextGroupInChain != null) {
// Check this combo's next group against any existing ones to continue.
if (CombosToContinue.ContainsKey(Combo.NextGroupInChain)) {
// Compare the existing combo to continue for this group with the current
// combo by their command sequence counts.
AttackCombo ComboToContinue = CombosToContinue[Combo.NextGroupInChain];
if (Combo.CommandSequence.Count < ComboToContinue.CommandSequence.Count) {
// Swap the combo to continue for this new shorter one.
CombosToContinue[Combo.NextGroupInChain] = Combo;
}
} else if (!GroupsToFollow.Contains(Combo.NextGroupInChain)) {
// Add the combo as one to continue for the next group.
GroupsToFollow.Add(Combo.NextGroupInChain);
CombosToContinue.Add(Combo.NextGroupInChain, Combo);
}
}
}
// Build onto the combos to continue recursively.
foreach (AttackCombo Combo in CombosToContinue.Values) {
// Create a new build sequence for this combo from any started sequence
// plus its own command sequence.
Sequence = new List<Command>(startedSequence);
Sequence.AddRange(Combo.CommandSequence);
BuildShortestPathSequences(Combo.NextGroupInChain, Sequence);
}
}
The code in Listing 16 uses the already loaded object graph of the combo definitions in order to calculate the necessary combos. The initial call to the BuildShortestPathSequences method uses the StartingComboGroup (which is defined as the Normal group) to start with and no started sequence. From there, it adds all the combos in the group to a global list, and determines if it should follow the next group in that combo’s chain. After processing all the combos in the current group, any that should be continued to their next group are then recursively processed.
Two global lists are maintained that track whether or not a combo should be continued further down the tree. The decision is made based on whether or not a group further up the tree will be processing the same next group. If so, there is no need for this deeper branch to waste time doing so.
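The heart of Listing 16 — entering each next group only once, via the shortest combo that reaches it — can be re-expressed compactly outside of WPF. The Python sketch below is an illustration of the same traversal over a made-up group graph, not a transcription of the article’s C#:

```python
def build_shortest_path_sequences(groups, start):
    """Collect every combo sequence, following each next group only once,
    via the shortest combo that leads to it (mirrors Listing 16's logic)."""
    results, followed = [], set()

    def walk(group, prefix):
        to_continue = {}  # next group -> shortest combo sequence leading to it
        for seq, next_group in groups[group]:
            full = prefix + tuple(seq)
            if len(full) > 1:              # a combo is two or more attacks
                results.append(full)
            if next_group is None:
                continue
            if next_group in to_continue:
                if len(seq) < len(to_continue[next_group]):
                    to_continue[next_group] = tuple(seq)  # keep the shorter route
            elif next_group not in followed:
                followed.add(next_group)
                to_continue[next_group] = tuple(seq)
        for next_group, seq in to_continue.items():
            walk(next_group, prefix + seq)

    walk(start, ())
    return results

# Made-up graph: Lift is reachable from both Normal and Magic, but only the
# direct (shorter) Normal route is followed further down the tree.
groups = {
    "Normal": [(("A", "B"), "Lift"), (("A", "C", "D"), "Magic")],
    "Lift":   [(("X",), None)],
    "Magic":  [(("Y",), "Lift")],
}
combos = build_shortest_path_sequences(groups, "Normal")
```

Run on the actual combo definitions, this kind of traversal is what should whittle the nominal 1,602 combinations down to the short list the application presents.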
For those that do not know of the Prince of Persia franchise, it started in 1989 as a side-scrolling platformer, featuring a Persian prince pitted against an evil vizier left to rule the land in the sultan’s stead while away at war. There is plenty more information about the franchise on Wikipedia.
Guru Games is a fictional company and the Game Attack Combos application is for demonstration purposes only. Ubisoft owns all rights to the Prince of Persia game franchise and has not hired me to develop any such application for their games.
This entire development scenario was fabricated for the sole intent of demonstrating several .NET technologies for a hybrid smart client that was actually used to solve a problem I was having acquiring 100% completion of the Prince of Persia game released in 2008. Ubisoft has not announced whether a sequel to the latest Prince of Persia game will use the same combat system used in the first. They have not hinted that such a combat system will be used in any future games. Again, this was all speculation used to justify an application such as the one demonstrated in this article.
Please, do not contact Ubisoft about this application. They have enough to worry about with all the sequels they are currently developing for their recent hits.
Central African Republic—Letter of Intent and Technical Memorandum of Understanding
Bangui, November 19, 2001
Mr. Horst Köhler
Managing Director
International Monetary Fund
700 19th Street, N.W.
Washington, D.C. 20431
U.S.A.
Dear Mr. Köhler,
1. On January 10, 2001, the Executive Board of the International
Monetary Fund approved for the Central African Republic a second annual
arrangement under the Poverty Reduction and Growth Facility (PRGF) in
an amount equivalent to SDR 20 million (36 percent of quota)
in support of the Government's economic and financial adjustment program
for 2001. A first disbursement under this arrangement, in an amount
equivalent to SDR 8 million, was made on January 23. A
second disbursement, also in an amount equivalent to SDR 8 million,
was to have been available on June 15, subject to completion of the
first review under the arrangement and observance of the arrangement's
performance criteria for March 31. A third disbursement, in an amount
equivalent to SDR 4 million, was to have been available on November 30,
subject to completion of the second review under the arrangement and observance
of the performance criteria for September 30. The three-year PRGF
arrangement expires on January 19, 2002.
2. Early in the year, the Government undertook a number of measures aimed
at strengthening economic management. On January 1, a value-added
tax (VAT) was introduced at a rate of 18 percent. Moreover, in the
area of tax administration—and following the recommendations of Fund
technical assistance—exemptions from VAT ceased being granted to
nongovernmental organizations for local purchases, and discussions were
undertaken to lift exemptions for imports. Also, the tax identification
number started being used by the tax and customs administrations, as well
as by the treasury.
3. Regarding other reforms, the consultant was selected for the study
of the costs and economic impact of the advance payment of taxes and duties
on sugar, with a view to dismantling this protective mechanism. Moreover,
a National Statistics Law was adopted by the National Assembly, which
permitted the establishment in October of a National Statistical Board
to coordinate government statistical activities and provide a forum for
producers and users of public statistics. In addition, the committee in
charge of the review (and revision, if necessary) of the petroleum price
structure became fully operational and began meeting on a monthly basis.
Furthermore, the Government sold that part of the petroleum distribution
network that had not previously been allocated ("Lot B") to
Trans-Oil (a C.A.R. company), which adheres to all of the terms of the
privatization framework that governs the import, storage, and distribution
of petroleum products in the Central African Republic. TOTAL and Trans-Oil
now have equal shares in the storage company (SOGAL), and, with the divestiture
of public officials who held shares in Trans-Oil, the status of Trans-Oil
has been changed to a limited liability company, with its capital to be
distributed among local private operators and international companies.
Recently, the liquidation of the state-owned petroleum company (PETROCA)
was completed.
4. In the event, however, intermittent political turmoil and labor unrest
contributed to significant slippages in other aspects of program implementation,
as well as in overall program performance. As indicated in Table 1,
at end-March only two of the program's performance criteria and benchmarks
were observed. During a Fund mission to Bangui in May, we reached understandings
on various measures to shore up the primary budget position and move forward
with structural reforms. However, just hours after the mission's departure
on May 27, an attempted coup d'état resulted in numerous casualties, thousands
of displaced persons, and appreciable destruction of property. Moreover,
information subsequently available indicated that the fiscal effort had
already essentially collapsed in April and confirmed that the coup attempt
and its aftermath had foreclosed any possibility for the Government to
bring the PRGF-supported program for 2001 back on track. At end-June,
government revenue was 26 percent short of the program target, and
none of the program's benchmarks were met. It is in this context that
the Government requests that the staff of the IMF assist it in monitoring
the progress of reinforced adjustment efforts within the framework of
a six-month staff-monitored program (SMP) for the period October 2001-March 2002.
We hope that, once the Fund staff confirms that the SMP has been satisfactorily
implemented through end-March 2002, a new three-year PRGF-supported
program can be put in place as quickly as possible.
5. The Government's inability to implement the recent program reflects
a pattern observed over many years, especially difficulties in ensuring
sustained adherence to responsible fiscal policies. Importantly, the Central
African Republic has a very low government revenue-to-GDP ratio (about
9 percent), and the historically recurrent political and social crises
have prevented the needed strengthening of administrative capacity (Table 2).
Nevertheless, we strongly believe that our proposed strategy for the next
six months, as set out below, shows our determination to establish a credible
track record, particularly regarding the budget, and to enhance administrative
capacity for the longer term. The Government is fully committed to these
goals, notwithstanding the domestic disturbances during the first week
of this month.
6. The principal financial objective of the SMP is a primary government
budget surplus over the October 2001-March 2002 period of CFAF 10.8 billion,
or 2.9 percent of semiannual GDP (Table 1
of the attached technical memorandum of understanding). In the first half
of 2001, a primary budget deficit of CFAF 2.1 billion (0.6 percent
of semiannual GDP) was recorded.
7. Regarding government revenue, a pickup in the pace of tax and customs
collections in the remainder of this year should be facilitated by the
gradual improvement in the domestic environment and a strengthening of
administrative procedures (also as recommended by Fund technical assistance).
In particular, as was agreed during the May mission, the Government has
put into effect a new organizational structure for customs and tax administration;
is extending the use of the customs management software (SYDONIA 2.7)
to cover wholesale trade and companies legally exempted from customs duties;
and is verifying the compliance of at least 150 large companies subject
to the VAT. Moreover, the Government has recently leased a former French
army base to the Mission of the United Nations in the Democratic Republic
of the Congo (MONUC), and rental receipts anticipated this quarter (including
retroactive obligations) are estimated at CFAF 7.5 billion (2.0 percent
of semiannual GDP). Looking ahead to 2002, government revenues are
expected to be buoyed by a rebound in the growth rate of real GDP to about
4 percent from the 1.5 percent now estimated for 2001,
with a broadly based pickup in activity expected as the internal security
situation continues to improve and the regional river transportation network
returns to normal. A further strengthening of administrative capacity
will be pursued, including through increased training and expanded computerization.
Particular attention will be devoted to enhancing VAT collections, and,
for this purpose, Fund technical assistance will be requested. MONUC-related
receipts during the January-March 2002 period are projected at CFAF 0.7 billion
(0.2 percent of semiannual GDP); thereafter, they are expected to
continue at an annual rate of about CFAF 3.0 billion.
8. As for primary expenditure, the Government has reduced overall commitment
levels through the end of 2001, as higher wage and salary outlays,
particularly for the military and the judiciary, will be more than offset
by lower domestically financed investment and reduced spending for transfers
and subsidies, and goods and services. For 2002, the Government intends
to maintain across-the-board restraint on primary outlays; institute strengthened
budgetary controls, especially relating to the wage bill; and, with support
from the World Bank, improve overall cash management. Despite the tight
fiscal outlook, the Government intends to safeguard social spending, and,
given our weaknesses in tracking outlays on health and education, we are
anticipating Fund technical assistance shortly to foster improvements
in this area.
9. To the extent that the realized primary budget surplus exceeds the
domestic debt service due (CFAF 0.7 billion—largely to the Bank
of Central African States (BEAC) and the Fund) and/or external, untied
budgetary support becomes available, the additional resources will be
used (i) to service external debt, notably the US$2.1 million that
was due to the African Development Bank (AfDB) in July and the US$3.9 million
that was due to the World Bank at end-September; and/or (ii) to reduce
net bank credit to the Government, given the surge in the 16 months
through July 2001 in government borrowing from commercial banks (CFAF 7.1 billion),
which threatens the viability of an already stretched banking system;
and/or (iii) to scale down government wage payments arrears (about CFAF 17 billion,
or 2.3 percent of annual GDP, at end-August).
10. The Government's near-term structural reform priorities remain essentially
in line with understandings reached during the May mission, and the core
structural reforms that we will endeavor to implement in the remainder
of 2001 include the following: (i) the launching of an environmental
assessment of existing facilities and processes for the import, storage,
and distribution of petroleum products; (ii) the signing of performance
contracts for the public electricity company (ENERCA) and telecommunications
company (SOCATEL) in order to safeguard their financial and physical assets
prior to their privatization; and (iii) the completion of a study on the
economic impact of the advance payment of taxes and duties on sugar.
11. The Government has discussed with World Bank and Fund staff structural
reforms to be implemented in 2002. The core component includes the
following: (i) an acceleration of the privatization process, with the
selection of private operators for ENERCA, SOCATEL, and the water utility
(SNE), and the completion of the privatization of "second-tier"
companies; (ii) the establishment of a petroleum management information
system to provide up-to-date information on domestic supply and demand
and on world market conditions, and to assist in the negotiations with
domestic oil companies on monthly revisions to the price structure for
petroleum products; (iii) the implementation of measures, including an
appropriately flexible producer price policy consistent with international
market trends, aimed at permanently eliminating the deficits of the public
cotton agency (SOCOCA), which, in the past, had repeatedly exerted substantial
fiscal pressures; and (iv) a liberalization of the trade regime for sugar,
with a view to enhancing the efficiency of the national sugar enterprise
(SOGESCA). The Government also intends to undertake various diagnostic
studies in the mining, forestry, health, and education areas before launching
reforms as part of its poverty reduction strategy. In this connection,
the priorities identified therein will be critical inputs for our full
poverty reduction strategy paper, which we expect to complete by mid-2002.
12. As an integral part of the SMP, the Government will work toward regularizing
the Central African Republic's relations with external creditors, particularly
multilateral financial institutions (MFIs), but also Paris Club and other
bilateral donors, in the context of an overall debt-restructuring strategy.
The plan will include an understanding with creditors on a timetable for
the settlement of arrears, possibly involving rescheduling or deferral,
and identify resources that could be used for the direct payment of arrears.
In this vein, a renewed understanding would be required with the AfDB.
In 2001, the Central African Republic has not been able to reduce
its arrears to the AfDB, and, at end-September, overdue obligations to
the AfDB stood at US$18.8 million. Other MFIs to which the Central
African Republic has overdue obligations include, as indicated above,
the World Bank, as well as the Development Bank of Central African States,
the International Fund for Agricultural Development, and the OPEC Fund.
13. As for prospective new external financing, the World Bank has indicated
that it may prepare a supplemental credit (equivalent to about US$15 million)
to the existing Fiscal Consolidation Credit by end-2001 if it deems that
sufficient progress has been made in the implementation of key structural
reforms, and if the Central African Republic becomes current in its obligations
to the World Bank. The Government is already seeking international support
for the convening of a donors' conference in the last quarter of 2001
to mobilize assistance for the country's adjustment efforts. At the same
time, the Government is renegotiating on appropriately concessional terms
a EUR 10 million loan from the Libyan Arab Foreign Bank contracted
early this year that was not included in the projections underlying the
second annual PRGF-supported program.
14. The SMP will be monitored by means of the quantitative indicators
for end-December 2001 and end-March 2002 as specified in Table 2
of the attached technical memorandum of understanding. These include (i)
ceilings on net bank credit to the Government, excluding the counterpart
of the use of Fund resources; (ii) ceilings on the contracting or guaranteeing
of new nonconcessional external debt; (iii) floors on total government
revenue; (iv) floors on the primary budget position; and (v) ceilings
on the net change in government domestic payments arrears. Moreover, as
indicated above, the Government is committed to allocating any primary
budget surplus in excess of domestic debt service due and/or the receipt
of any external, untied budgetary support to service external debt, and/or
to reduce net bank credit to the Government, and/or to reduce government
wage payments arrears. Observance of this commitment will be a subject
of the two quarterly reviews that are planned to assess progress with
regard to the quantitative indicators for end-December 2001 and end-March 2002,
respectively. The reviews will also assess the Central African Republic's
progress toward the regularization of relations with external creditors,
particularly MFIs. In addition, as shown in Table 2 of the attached
technical memorandum of understanding, the completion of a study on the
economic impact of the trade regime for sugar imports and verification
of the compliance with the VAT of at least 150 large companies are structural
benchmarks for end-December 2001; and the establishment of a petroleum
management information system and the completion of diagnostic studies
in the mining, forestry, health, and education areas are structural benchmarks
for end-March 2002.
15. The Government believes that the economic and financial policies
described in this letter are adequate to achieve the objectives of the
SMP. Nevertheless, it will maintain close relations with the staff of
the Fund and consult with it, of its own accord or at your request, on
its economic and financial policies, including any further measures that
may prove necessary to achieve the program objectives.
Sincerely yours,
CENTRAL AFRICAN REPUBLIC
Technical Memorandum of Understanding
1. This memorandum describes the quantitative indicators and structural
benchmarks adopted to monitor execution of the six-month staff-monitored
program (SMP) for the period October 2001-March 2002 as set forth in the
letter of intent (LOI) of November 19, 2001 from the Minister of Finance
of the Central African Republic to the Managing Director of the International
Monetary Fund. The quantitative indicators and structural benchmarks are
listed in Table 2.
A. Quantitative Indicators
2. The quantitative indicators are the following:
(a) ceilings on the end-of-period stock of net claims of the banking
system on the Government, 1 excluding
the counterpart of the use of Fund resources;
(b) cumulative ceilings on the contracting or guaranteeing of new nonconcessional
external debt 2 with original maturity
of one year or more than one year by the Government, from October 1, 2001
onward;
(c) cumulative ceilings on the contracting or guaranteeing of new nonconcessional
external debt2 with original maturity of less than one year,
from October 1, 2001 onward;
(d) cumulative floors on total government revenue, including taxes
on wages, earmarked revenues, and duties and taxes on projects, from
October 1, 2001 onward;
(e) cumulative floors on the narrow primary budget position, from October 1, 2001
onward; and
(f) cumulative ceilings on the net change in government domestic payments
arrears, from October 1, 2001 onward.
Adjusters
3. If total government revenue exceeds the quantitative indicator (2d)
in a given quarter, the quantitative indicator for the primary budget
position (2e) will be adjusted upward by the additional revenue collected.
4. To the extent that the primary budget surplus will exceed the amount
that is due in domestic debt service, and/or that external, untied budgetary
support will become available, the authorities will use these resources
to
(a) service external debt; and/or
(b) reduce the stock of net claims of the banking system on the Government,
excluding the counterpart of the use of Fund resources; and/or
(c) reduce government wage payments arrears.
Definitions and computation
5. The end-of-period stock of net claims of the banking system on the
Government, excluding the counterpart of the use of Fund resources, is
valued in accordance with the accounting framework currently used by the
BEAC. As of end-July 2001, these claims amounted to CFAF 24.873 billion,
broken down as follows:
Net banking system claims on government, excluding the counterpart
of the use of Fund resources (in billions of CFA francs)     24.873
  BEAC current (statutory) advances                          13.617
  Consolidated advances                                      14.552
  Less: deposits with the BEAC                               10.432
  Commercial bank advances                                    7.538
  Less: deposits with commercial banks                        0.402
6. Quantitative indicators for external indebtedness are cumulative ceilings
on new nonconcessional external debt contracted or guaranteed by the Government.
Loans of one year or more than one year are those with initial terms—recorded
in the original loan agreement—of at least one year. Cumulative
ceilings are also established for external debt contracted or guaranteed
by the Government with a maturity of under one year, excluding normal
import financing (for example, documentary credits). Loan concessionality
is assessed on the basis of the commercial interest reference rates (CIRRs)
established by the OECD. A loan is said to be on concessional terms if,
on the initial date of disbursement, the ratio of the present value of
the loan, calculated on the basis of the reference interest rates, to
its nominal value is less than 50 percent (i.e., a grant element
of at least 50 percent). For example, CIRRs for the period February 15-August 14,
2001 were as follows:
                     Term of less         Term of 15 or
                     than 15 years        more years
                        (Annual percentage rate)
U.S. dollar              6.09                 7.35
Euro                     5.73                 7.13
Pound sterling           6.11                 8.38
Japanese yen             1.58                 3.75
7. Government revenues are valued on a cash basis and include offsetting
operations in current revenue and expenditure. Revenues are shown in the
consolidated government operations table (Tableau des opérations
financières de l'État—TOFE) and include earmarked
revenues, customs checks for project-related customs duties, and withholdings
from civil service wages and salaries. Revenues for 2000 were estimated
at CFAF 60.5 billion, broken down as follows:
Fiscal revenue (TOFE basis, in billions of CFA francs)         60.5
  Cash receipts (customs directorate, tax directorate,
    treasury)                                                  50.6
  Earmarked revenue (road fund, Central African Economic
    and Monetary Community (CEMAC))                             3.1
  Withholding taxes on government salaries                      2.9
  Customs duties on projects (treasury checks)                  3.9
8. The primary budget position is calculated as the difference between
government revenues as defined in the previous paragraph and primary government
expenditure. Primary government expenditure, that is, total government
expenditure excluding interest payments and externally financed investment,
is valued on a commitment basis. Government commitments include all expenditure
for which commitment vouchers have been approved by the central offices
of the Ministry of Finance; automatic expenditures (such as salaries,
utilities, pensions, and other expenditures for which payment is centralized);
expenditure by means of offsetting operations; and budgetary contributions
in the form of treasury checks in payment of customs duties on projects.
Expenditure for 2000 was valued at CFAF 61.9 billion, broken
down as follows:
Primary government expenditure (TOFE basis;
in billions of CFA francs)                                     61.9
  Current expenditure                                          51.6
    Wages and salaries                                         28.9
    Other goods and services                                   14.5
    Subsidies and transfers                                     8.2
  Domestically financed capital expenditure                    10.3
As a consequence, the primary budget position for 2000 was valued at
CFAF -1.4 billion.
9. The net change in government domestic payments arrears corresponds
to the difference during the period between liabilities contracted by
the Government, excluding external debt operations, and payments made.
This change corresponds to a net reduction of arrears when the amount
is negative and to a net accumulation of arrears when the amount is positive.
Government liabilities include all expenditure for which commitment vouchers
have been approved by the central offices (Direction des services centraux)
of the Ministry of Finance; automatic expenditures (such as salaries,
utilities, pensions, and other expenditures for which payment is centralized);
and expenditure committed by project managers. Government payments include
treasury cash payments and offsetting operations. The net change in government
domestic payments arrears is estimated by its components, including cash
operations and the net balance of central government accounts, excluding
the treasury accounts; this amount should also ensure a balanced supply
and use of funds in the TOFE. For 2000, the net change in arrears was
CFAF -8.7 billion (i.e., a net reduction), broken down as follows:
Net change in government domestic payments arrears
(in billions of CFA francs)                                    -8.7
  Current expenditure                                           1.6
    Wages and salaries                                          6.5
    Transfers and subsidies                                     1.9
    Goods and services                                         -6.8
  Domestically financed capital expenditure                    -4.5
  Other                                                        -5.8
10. A net change in government wage payments arrears corresponds to the
difference during the period between government wage payment commitments
and wage payments made. The commitments comprise those for all staff (permanent
and temporary) of the civil service and the armed forces, including withholdings
on behalf of the national tax directorate. These cover wages, salaries,
and bonuses for the current period, even when advices for the period are
not issued by the payroll directorate owing to the payment lag. For a
net reduction in arrears, the difference between commitments and cash
payments is negative; for a net accumulation, it is positive. For 2000,
the net change in government wage payments arrears was CFAF 6.5 billion
(i.e., a net accumulation).
11. External debt service consists of cash payments made to settle external
debt-service obligations falling due during the period under consideration,
and comprise interest and principal payments, including penalty interest
if appropriate. In 2000, CFAF 7.3 billion in external debt-service
payments were made, with the following allocation among creditor types:
                                    (In billions of CFA francs)
Multilateral creditors                                          6.9
Bilateral creditors, Paris Club                                 0.4
Bilateral creditors, non-Paris Club                             0.0
Other (including short term)                                    0.0
B. Structural Benchmarks
12. The structural benchmarks are as follows:
(a) completion of a study to evaluate the economic impact of the trade
regime for sugar imports, a copy of which will be provided to IMF staff
by December 31, 2001;
(b) verification of the compliance with the value-added tax of at least
150 large companies by December 31, 2001, as confirmed by the Ministry
of Finance and Budget with a list of the companies inspected;
(c) establishment, as confirmed by the relevant decree, of a petroleum
management information system to provide up-to-date information on domestic
supply and demand and on world market conditions, by March 31, 2002;
and
(d) completion of diagnostic studies in the mining, forestry, health,
and education sectors by March 31, 2002.
C. Program Monitoring
13. Monitoring the quantitative indicators and structural benchmarks
will be the subject of a monthly evaluation report, prepared within six
weeks following the end of each month. This document will assist with
assessing performance in terms of the program's quantitative and structural
objectives.
14. The Standing Technical Committee responsible for program monitoring
(CTP-PAS) will regularly report the data and other information required
for program monitoring to the IMF's African Department by fax or e-mail.
This will include the following information:
(a) a monetary survey, central bank survey, and commercial bank accounts;
(b) fiscal operations, based on the cash-flow table, the expanded table
(bridge from cash-flow table to the TOFE), and the TOFE;
(c) a breakdown of cash outlays on current expenditure and on domestically
financed capital expenditure according to whether these outlays correspond
to expenditure commitments for 2002, for 2001, for 2000, or for earlier
years, respectively;
(d) a breakdown of revenue office receipts, including the monthly report
on the reconciliation of customs payments with data from the import
certification agency (SGS);
(e) a breakdown of public expenditure growth, on both a cash and commitment
basis;
(f) a breakdown of external debt service and external debt arrears,
including by interest and principal, and by principal creditors (the
most important being 3 the African
Development Bank, Kuwait, Switzerland, Taiwan Province of China, the
World Bank, and Yugoslavia);
(g) the amount of new nonconcessional external debt contracted or guaranteed
by the Government;
(h) actual disbursements of nonproject external financial assistance,
and external debt relief granted by the external creditors;
(i) indices assisting with an assessment of overall economic trends,
such as the household consumer price index, import and export flows
(in volume and value), activity in the forestry sector and in industry,
etc.; and
(j) a review of the implementation of structural measures.
15. In addition, the Standing Technical Committee will provide the IMF's
African Department with the following specific information, to be sent
not later than 15 days after the end of the respective month:
(a) a monthly report on petroleum prices;
(b) a monthly report on the provision of petroleum free of charge or
at concessional prices; and
(c) a bimonthly report on the implementation of specific measures at
the tax directorate and the customs directorate, as recommended by technical
assistance missions of the IMF.
16. The Standing Technical Committee will also provide the IMF's African
Department with any information deemed necessary or requested by Fund
staff for purposes of program monitoring.
17. Two quarterly reviews with IMF staff are planned for February and
May 2002 to assess progress with regard to the quantitative indicators
for end-December 2001 and end-March 2002, respectively. The reviews will
also assess the Central African Republic's progress toward the regularization
of relations with external creditors, particularly multilateral financial
institutions.
The definition of debt is that set forth in No. 9 of the Guidelines
on Performance Criteria with Respect to Foreign Debt.
http://www.imf.org/External/NP/LOI/2001/caf/01/index.htm
The Hello World program is the most common first example of how to write programs in a new language, even when they don't say Hello World. Five different ways to say hello in Ruby offer a gentle introduction to the language.
Examples of how to write many simple programs that are of little practical use litter the Internet for anyone to find. Among the most common are:
- a Fibonacci number calculator, because it is the canonical example for recursion.
- a FizzBuzz solver, because it has become a canonical example of a simple job interview coding question.
- a Hello World greeter, because it is the canonical example used to introduce a language.
While teaching someone a new programming language in any depth requires a lot of discussion of syntactic forms, semantic design, and "best practices," it usually does not make much sense to the student until he or she gets to see a few examples of source code. This is where the Hello World program comes in: It is usually the first code example presented to a complete beginner, to give the simplest possible example of how the language looks in actual use. More complex examples follow, often involving the addition of more complexity and functionality to the Hello World program until it is nearly unrecognizable as a descendant of that first program.
It is easy to dismiss the Hello World as a meaningless toy, but it serves an important purpose that more experienced developers may forget later in their careers: With a little creativity, such programs can even demonstrate the flexibility and unique qualities of the language to some small extent. For those of you who have never used Ruby, the following Hello World examples in the Ruby programming language may serve that purpose. Each example includes a standard shebang line that tells the OS where to find the Ruby interpreter on a Unix system.
1. The simple example
#!/usr/bin/env ruby
puts 'Hello, world!'
There is not much to this. It consists of a grand total of two tokens -- the
puts method and a string datum. A similar language such as Perl would use
#!/usr/bin/env perl
print "Hello, world!\n";
The double quotes and
\n newline character are necessary here if you want a newline at the end of the program's output. Ruby's
puts, by contrast, conveniently adds the newline character at the end of its output without having to be asked.
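That difference is easy to verify from Ruby itself. A small sketch of my own (not from the article), using StringIO as a stand-in for standard output so the results can be inspected:

```ruby
require 'stringio'

# print writes the string exactly as given; puts appends a trailing newline.
PRINTED = StringIO.new.tap { |io| io.print 'Hello, world!' }.string
PUTSED  = StringIO.new.tap { |io| io.puts  'Hello, world!' }.string
```

Both methods behave the same way on $stdout; StringIO just makes the output capturable.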
2. The input example
#!/usr/bin/env ruby
puts "Hello, #{ARGV[0]}!"
In this case, we are using double quotes for an interpolated string -- that is, a string that is actually parsed for executable code and special characters that should not be taken literally by the Ruby interpreter. Code that you want evaluated before passing the double-quoted string to the
puts method can be wrapped in
#{ }. In this case, we want the
ARGV array -- where Ruby stores input from the command line -- to give us its first element. Because Ruby starts counting array elements at zero, that means we need
ARGV[0] evaluated.
If you name this program
hello.rb and feed it
world as a command line argument when you execute the program, the output will say
Hello, world!:
> ./hello.rb world
Hello, world!
3. The array example
#!/usr/bin/env ruby
greeting = ['Hello', ARGV[0]]
puts greeting.join(', ') + '!'
Here, we first create a two-element array and give its second element the value of the first element of the
ARGV array so that the user, as in the previous example, can specify the target of a greeting. Next, the
join method is passed to the
greeting array object, telling it to join its elements together with a comma followed by a single space between any two elements. Finally, it adds (concatenates, in this context) an exclamation point to the end of the string, before the
puts method finally attaches a newline to the end and sends the whole string to standard output.
If we run this program on a Monday, with a long, frustrating workweek ahead of us, we might be feeling less charitable than in the previous examples, and call the world something a bit less friendly:
> ./hello.rb 'cruel world'
Hello, cruel world!
This version uses a little unnecessary complexity to achieve the exact same effects as the previous example, but that complexity does demonstrate for us a couple of things:
- It displays some of the convenient text-processing capabilities of Ruby, including two ways to connect separate strings together into a larger string.
- It begins to show the object-oriented design of Ruby.
People who know object oriented programming from working with other languages might look at a lot of the code from before this example and wonder why everything in Ruby looks procedural, when it has a reputation as an object-oriented language. Prior to this, one could not tell just by looking at it that objects and methods infested the Ruby source code for a Hello World program. It turns out that
puts is a method of the
Kernel class, and strings are object instances of the
String class.
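Ruby can confirm both of these claims about itself. A quick introspective sketch (mine, not the article's):

```ruby
# Strings really are instances of String, and the top-level puts
# really lives on the Kernel module.
STRING_CLASS = 'Hello, world!'.class
PUTS_OWNER   = method(:puts).owner
```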
Array objects come with a
join method, and while it looks like a procedural infix operator,
+ is actually being used here as a
String object's method. In fact, the last line of this version of the Hello World program can be written like this, more explicitly revealing these methods for what they are:
Kernel.puts( greeting.join(', ').+('!') )
Thank goodness for syntactic sugar; the version that does not use as many parentheses, does not explicitly name the
Kernel class, and does not use as much dot-notation for methods was a bit easier on the eyes.
4. The custom method example
#!/usr/bin/env ruby
def greet(target)
return "Hello, #{target}!"
end
puts greet(ARGV[0])
With another bit of unnecessary complexity, this time we have separated the string-building process from the output process by constructing a method called
greet. By creating the
greet method without explicitly including it in any class definition, we have added it to the
Kernel class. As with
puts, we do not need to specify the class to which the method belongs. In fact, because greet is a private method of
Kernel, it cannot be called like this:
Kernel.puts Kernel.greet(ARGV[0])
Actually, it can in irb (the interactive Ruby interpreter), but with standard executable files, this will not work.
Once again, even when we create a method, we are doing something that could easily be mistaken for using a programming paradigm other than object-oriented programming. In this case, it looks like functional programming, where we have defined a "function" that takes exactly one input, produces exactly one output, and has no side effects. That "function" is then used to provide its output as input for another "function", the
puts method. It is only because we know that both of these supposed functions are in fact methods of the
Kernel class that we know this is a purely object-oriented exercise.
If you want to develop code in a functional style, you may simply ignore what is going on behind the scenes and write your code in a manner calculated to give the impression it is purely functional, and not object oriented at all.
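For instance, here is a functional-looking pipeline of "functions" that are, underneath, just more Kernel methods. The names are my own illustrative choices, not from the article:

```ruby
# Each "function" takes one input, returns one output, and has no side effects.
def shout(text)
  text.upcase
end

def exclaim(text)
  "#{text}!"
end

# Composition: the output of one feeds the input of the next.
RESULT = exclaim(shout('hello, world'))
```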
5. The custom class example
#!/usr/bin/env ruby
class Greeter
def initialize(greeting)
@greeting = greeting
end
def greet(subject)
puts "#{@greeting}, #{subject}!"
end
end
hail = Greeter.new('Well met')
hail.greet(ARGV[0])
farewell = Greeter.new('Goodbye')
farewell.greet(ARGV[1])
Finally, we are getting into the thick of object-oriented programming a little bit. A
Greeter class is defined, and its
initialize method (which is what the built-in
new invokes behind the scenes) takes an argument for the type of greeting you want an instance of that class to use. Another method,
greet, takes a single argument as well -- the subject, or target, of your greeting.
Two separate objects of this class are instantiated with the
new method, each of them being assigned to a separate label at the time they are created:
hail and
farewell. Each object is then sent the
greet message with the first and second elements of the
ARGV array, respectively, so that the person running the program from the command line can specify who will receive each greeting.
Let us assume that the person running the program gets up happy and optimistic on a Friday morning, looking forward to a quick and painless day of work, followed by a long weekend (Monday will be a holiday for this user). As such, when the user says hello, it is with a smile and a friendly outlook for the world. By the end of the day, however, this user's entire department has been laid off. To top it off, our hapless user has just bought a new, expensive house that he simply cannot afford without that job, and the economy is in the toilet. As such, he has a very emo moment when it comes time to say goodnight.
Given such a turnaround in his day, the program is run thus, complete with a teenage-angst-filled spelling error:
> ./hello.rb 'fine world' 'crule world'
Well met, fine world!
Goodbye, crule world!
If you become an expert Rubyist, you will hopefully never have the problem of being laid off so suddenly. I also hope that this quick introduction to Ruby gives you an idea of the language's clarity and flexibility.
https://www.techrepublic.com/blog/software-engineer/five-ruby-hello-world-greetings/
Music, Haskell... and Westeros
Music
Music is a recurring theme on my blog. Quod Libet takes up a fair bit of spare time, but the more abstract intersection of music and programming is fun, and it turns out functional programming has a lot to offer in this space. I assume some basic knowledge of Haskell / Elm (or ML-like languages perhaps), but a minimal amount of music knowledge.
Enter Euterpea
Euterpea is a great music DSL for Haskell, with a long lineage based at Yale and around the late Paul Hudak’s Haskell School Of Music. I won’t try to explain its wide remit and concepts, but definitely worth reading some introductions. We’ll be focusing on the composition (music) rather than synthesis (audio) side.
There are a few other projects around (even in Haskell alone). One that looks particularly interesting, for more electronic / sample / dance-based work is Tidal Cycles, but we’re going to concentrate on Euterpea here.
Where now?
Rather than algorithmic generation of music, I became interested in how far this library could be used to create or recreate, say, Western popular music (for a start) by hand, but in a concise and readable way.
The ability to use General MIDI seemed of limited value, but having set up Timidity it became clear bog-standard software synths have come a long way since, err, the late 90s, and this is a quick, free solution for creating passable music without specialist hardware or software. Interesting…
The history of trackers
Talking of the late 90s, some readers will be too young to remember Trackers on the Amiga / ST / DOS from then, so a quick recap!
These were nearly all:
• non-notated (i.e. no traditional music score)
• sample-based (you could provide your own lo-fi instrument sounds)
• tabular, based on discrete time-steps, like a spreadsheet of sorts with channels as columns and time steps (1/8th of a note) as rows; playback would go down the screen
• declarative (not recorded, or err, generative maybe). Entries (notes) looked like
C 4 A0, or similar, meaning C in the 4th octave with a volume of
0xA0 or 160/256
• embedded metadata (such as BPM, vibrato, dynamics) into individual notes
Some of this experience is simply outdated by newer GUIs and advanced MIDI Sequencers, though for dance music / drum machines, similar interface persist.
...and some drawbacks
The ability to replicate entire sections in trackers was usually available in some way, but without any variation. What was completely lacking was the ability to re-use individual channels or phrases, or apply transformations (transpose, make louder or quieter, change instrument) to pre-existing parts.
In fact, this problem is sounding a lot like bad codebases, especially ones that don’t employ DRY. As developers, we’ll usually try to refactor: extracting common code to methods, building abstractions (even just functions that operate on data) and using generic types to write algorithms that can be independent of the data structure.
In functional programming these are even more important, and generic datatypes, type classes, higher-order functions and lazy evaluation allow us to do take these principles further. Could these somehow be applied to music?
Euterpea basics
The Euterpea quick reference guide is a very useful one-pager, and the SimpleMusic examples was helpful to get started. But here’s a very quick summary from the high level:
data Music a =
      Prim (Primitive a)        -- primitive value
    | Music a :+: Music a       -- sequential composition
    | Music a :=: Music a       -- parallel composition
    | Modify Control (Music a)  -- modifier
    deriving (Show, Eq, Ord)
And the
Primitive datatype is defined:
data Primitive a =
      Note Dur a
    | Rest Dur
    deriving (Show, Eq, Ord)
•
Dur, the duration for notes / rests is a
Rational alias, so you’re not restricted to any particular quantisation of notes. Yes, this means it’s easy to do all sorts of polyrhythms, but more on that later (or not – we’re aiming for simple timing here).
•
Control is to apply transformations (dynamics, tempo adjustments) to other music sections, a powerful concept modelled very simply.
• Both
Music and
Primitive are Functors, though this isn’t important right now.
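For the curious, here is what the Functor instances buy you: fmap rewrites every note's payload while leaving the structure (and all durations) alone, which is exactly transposition if pitches are numbers. A self-contained sketch of my own with cut-down stand-ins for the real types (so it runs without Euterpea installed):

```haskell
-- Miniature stand-ins for Euterpea's types, just enough to show the idea.
data Primitive a = PNote Rational a | PRest Rational deriving (Show, Eq)

data Mus a = Prim (Primitive a)
           | Mus a :+: Mus a
           | Mus a :=: Mus a
           deriving (Show, Eq)

instance Functor Primitive where
  fmap f (PNote d x) = PNote d (f x)
  fmap _ (PRest d)   = PRest d

instance Functor Mus where
  fmap f (Prim p)    = Prim (fmap f p)
  fmap f (m1 :+: m2) = fmap f m1 :+: fmap f m2
  fmap f (m1 :=: m2) = fmap f m1 :=: fmap f m2

-- With pitches as MIDI-style numbers, transposition is just fmap:
up5 :: Mus Int -> Mus Int
up5 = fmap (+ 7)   -- a perfect fifth is 7 semitones

cMajorThird :: Mus Int
cMajorThird = Prim (PNote (1/4) 60) :+: Prim (PNote (1/4) 64)  -- C4 then E4
```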
Getting started
Well, it’s no fun just talking about this. How about we make some noise?
Setting up the project
Most of the snippets below should be run interactively in GHCI (using
stack repl), but it’s best to have a Stack project set up already, so do that now (or use my euterpea-sandbox one). Here’s the interesting bits from my
stack.yaml to get you started:
extra-deps:
  - Euterpea-2.0.2
  - PortMidi-0.1.5.2
  - Stream-0.4.7.2
  - arrows-0.4.4.1
  - heap-0.6.0
  - lazysmallcheck-0.6
  - stm-2.4.2
resolver: lts-9.0
The Cabal file just needs
Euterpea > 2.0.0 && < 2.1.0. I’ve put the full source on Github if you’re feeling lazy.
Set up MIDI
The Euterpea guide to MIDI output is as good as any.
If you’re on Linux I strongly recommend Timidity unless you have hardware synths of course. The FreePats samples are a good start (but don’t cover all GM instruments). For detailed setup, the Arch Linux Timidity page is good too, especially if you want to set Timitidy to run by default.
For OS X users, SimpleSynth is recommended, but I haven’t tried it myself. Windows users shouldn’t have too many problems with the default setup I believe.
Run your MIDI synth
Make sure your synth (see above) is running. FYI, I use
timidity -iA -Os -f -A 210 --verbose=2 in a separate shell. Note what channel your MIDI synth is now running on (mine is usually 4).
Make some noise!
If we load GHCI (with
stack repl), we can play around live:
λ> import Euterpea
λ> devices
Input devices:
  InputDeviceID 1    Midi Through Port-0
  InputDeviceID 3    Scarlett 2i4 USB MIDI 1
Output devices:
  OutputDeviceID 0   Midi Through Port-0
  OutputDeviceID 2   Scarlett 2i4 USB MIDI 1
  OutputDeviceID 4   TiMidity port 0
  OutputDeviceID 5   TiMidity port 1
  OutputDeviceID 6   TiMidity port 2
  OutputDeviceID 7   TiMidity port 3
λ> channel = 4  -- Or whatever works for you
λ> playDev channel $ c 4 wn
Woah – some sound! More precisely, an acoustic grand piano playing middle C for a whole note (
wn)… in 4/4 at 120bpm, assuming the usual defaults. Remember to choose the right output
channel for your setup for that
playDev.
Writing music
Composing parts
The two fundamental operations for composing (in the FP sense) sounds are:
•
:=:, which plays them in parallel, i.e. together.
•
:+:, which plays them in sequence, i.e. one thing after another.
Wait…
:=: means lots of notes played together? Even the non-musicians will recognise this one – that’s a chord. And yes, there’s a list helper function for that:
chord, as lists are generally easier to type or manipulate. Try this in your REPL:
λ> cMinor  = chord [c 3 qn, ef 3 qn, g 3 qn]
λ> cMinor' = c 3 qn :=: ef 3 qn :=: g 3 qn
λ> print cMinor
Prim (Note (1 % 4) (C,3)) :=: (Prim (Note (1 % 4) (Ef,3)) :=: (Prim (Note (1 % 4) (G,3)) :=: Prim (Rest (0 % 1))))
λ> playDev channel cMinor
Cool! So we can see the internal representation of this, noting the zero-length rest at the end, and how it mirrors list construction where
x : [] == [x]. This is why
cMinor ≠ cMinor', even though the two arguably should be equal.
Let’s try some more. The
line function is the equivalent helper for
:+: – it takes a list and composes the elements sequentially, like moving right in a piece of notated music (or down in a tracker).
playDev channel $ line [c 3 qn, e 3 qn, g 3 qn, bf 3 qn]
A nice C7 arpeggio – note that we’re using
qn for quarter notes (aka a crotchet in music theory).
Infinite music
Just to spice things up a bit, what if we make… an infinite piece of music? I mean, Haskell is a non-strict (≅“lazy”) language, right? So we can use the
line operator on infinite lists as well, and we’ll have some infinite
Music a. Let’s use the standard
cycle list function to repeat our arpeggio forever and see what we get.
For display we can avoid printing the infinite list by using Euterpea’s
cut function, which limits a piece of
Music to the specified number of beats.
λ> arpeggio = line $ cycle [c 3 qn, e 3 qn, g 3 qn, bf 3 qn]
λ> print (cut 2 arpeggio)
Prim (Note (1 % 4) (C,3)) :+: (Prim (Note (1 % 4) (E,3)) :+: (Prim (Note (1 % 4) (G,3)) :+: (Prim (Note (1 % 4) (Bf,3)) :+: (Prim (Note (1 % 4) (C,3)) :+: (Prim (Note (1 % 4) (E,3)) :+: (Prim (Note (1 % 4) (G,3)) :+: (Prim (Note (1 % 4) (Bf,3)) :+: Prim (Rest (0 % 1)))))))))
If you squint a bit (or are used to LISPs…) you can see this is a series of
Note Primitives, all of a 1/4 time, composed together with the
:+: operator. Again, the empty element is visible here.
Westeros
Taking these basic operators, we can now easily create pieces of music. Now here's one I made earlier (you'll need to put this in your .hs source file – it's getting too big for a REPL session):
-- It's in 3-time (3/4 or 6/8) -- so each bar lasts one dotted half note (dhn)
melody = line
  [ g 3 dhn, c 3 hn, rest qn, ef 3 en, f 3 en, g 3 hn,
    c 3 hn, ef 3 en, f 3 en, d 3 (dwn + dhn), rest dhn,

    f 3 dhn, bf 2 hn, rest qn, d 3 en, ef 3 en, f 3 hn,
    bf 2 dhn, ef 3 en, d 3 en, c 3 dwn, rest dhn ]

-- Convenient wrapper
inst :: InstrumentName -> Volume -> Music Pitch -> Music (Pitch, Volume)
inst i v = addVolume v . instrument i

-- Repeat each element four times
quadruplicate :: [a] -> [a]
quadruplicate = concatMap (replicate 4)

-- I always find this useful
bpm = tempo . (/ 120.0)
The layout for melody is slightly non-conventional, but I find spacing it like this, whilst not linear in time (like trackers), allows us to compare notes and phrases more easily. The line spacing separates 4-bar sections (if you can read music, see this simplified score which helped with the above).
Try reloading your REPL and doing playDev channel $ bpm 170 . inst Cello 70 $ melody!
Note also that because Dur is just a Rational (i.e. a Num), we can use normal arithmetic on note durations! So yes, (dwn + dhn) is a thing, and it's 3/2 + 3/4 = 9/4, i.e. 9 quarter notes (or 3 bars here). This allows for some nice maths / shorthand in your music (and in fact pitches have a similar opportunity).
Add some harmony
Remember, harmony is just two or more different notes playing at once, so we already have the tools to do that: Euterpea's :=: operator. So we'll just add a very simple sequence of bass notes, changing every four bars as per the melody.
λ> chordSeq = [c 1, g 1, bf 1, f 1]          -- Just the notes
λ> contraBass = line $ map ($ 3) chordSeq    -- 3 is 4 whole bars in 3/4
λ> song = inst Cello 70 melody :=: inst TremoloStrings 70 contraBass
λ> playDev channel $ bpm 170 song
...and some rhythm
That bass is pretty boring though. Really, we want some percussion, but for a quick fix, let’s try adding a little rhythm to that bass. For each bar we can repeat the one note but in a particular rhythm.
λ> rhythmFor note = [note dqn, rest en, note en, note en]
λ> bass2 = line $ concatMap rhythmFor $ quadruplicate chordSeq
λ> song = inst Cello 60 melody :=: inst AcousticBass 80 bass2
λ> playDev channel $ bpm 170 song
Note we use
concatMap to flatten, as each note in the chord sequence is mapped to a list of
Music Pitch by our
rhythmFor function.
Summing up
Hopefully lots to think about and play with there for newcomers. You should be able to see the quick turnaround that using a REPL for (simple) music can bring, just as programmers often enjoy for code, as well as the potential advantages for quickly building music pieces from smaller parts… just like with functional programming itself. I’ve definitely become very taken by the library even at this basic level.
Next up
In the next post we'll investigate creating drum patterns, pushing the use of MIDI further and allowing DRY to help us create more realistic sounds from even a basic synth setup.
More reading
Euterpea has also been used for music students (without necessarily any programming experience) as presented in this paper.
If you are interested in working with Haskell and other FP languages, check out our job-board!
Originally published on declension
https://functional.works-hub.com/learn/Music-Haskell...-and-Westeros?utm_source=rss&utm_medium=automation&utm_content=teros
#include <vtkPUniformGridGhostDataGenerator.h>
A concrete implementation of vtkPDataSetGhostGenerator for generating ghost data on a partitioned and distributed domain of uniform grids.
Definition at line 61 of file vtkPUniformGridGhostDataGenerator.h.
Definition at line 66 of file vtkPUniformGridGhostDataGenerator.h.
Return 1 if this class is the same type of (or a subclass of) the named class.
Returns 0 otherwise. This method works in combination with vtkTypeMacro found in vtkSetGet.h.
Reimplemented from vtkPDataSetGhostGenerator.
Reimplemented from vtkPDataSetGhostGenerator.
Methods invoked by print to print information about the object including superclasses.
Typically not called by the user (use Print() instead) but used in the hierarchical print process to combine the output of several classes.
Reimplemented from vtkPDataSetGhostGenerator.
Registers grids associated with this object instance on this process.
A collective operation that computes the global origin of the domain.
A collective operations that computes the global spacing.
Create ghosted data-set.
Generates ghost-layers.
Implements vtkPDataSetGhostGenerator.
Definition at line 98 of file vtkPUniformGridGhostDataGenerator.h.
Definition at line 99 of file vtkPUniformGridGhostDataGenerator.h.
Definition at line 100 of file vtkPUniformGridGhostDataGenerator.h.
https://vtk.org/doc/nightly/html/classvtkPUniformGridGhostDataGenerator.html
MessageBox
WPF offers several dialogs for your application to utilize, but the simplest one is definitely the MessageBox. Its sole purpose is to show a message to the user, and then offer one or several ways for the user to respond to the message.
The MessageBox is used by calling the static Show() method, which can take a range of different parameters, to be able to look and behave the way you want it to. We'll be going through all the various forms in this article, with each variation represented by the MessageBox.Show() line and a screenshot of the result. In the end of the article, you can find a complete example which lets you test all the variations.
In its simplest form, the MessageBox just takes a single parameter, which is the message to be displayed:
MessageBox.Show("Hello, world!");
MessageBox with a title
The above example might be a bit too bare minimum - a title on the window displaying the message would probably help. Fortunately, the second and optional parameter allows us to specify the title:
MessageBox.Show("Hello, world!", "My App");
MessageBox with extra buttons
By default, the MessageBox only has the one Ok button, but this can be changed, in case you want to ask your user a question and not just show a piece of information. Also notice how I use multiple lines in this message, by using a line break character (\n):
MessageBox.Show("This MessageBox has extra options.\n\nHello, world?", "My App", MessageBoxButton.YesNoCancel);
You control which buttons are displayed by using a value from the MessageBoxButton enumeration - in this case, a Yes, No and Cancel button is included. The following values, which should be self-explanatory, can be used:
- OK
- OKCancel
- YesNoCancel
- YesNo
Now with multiple choices, you need a way to see what the user chose, and fortunately, the MessageBox.Show() method always returns a value from the MessageBoxResult enumeration that you can use. Here's an example:

MessageBoxResult result = MessageBox.Show("This MessageBox has extra options.\n\nHello, world?", "My App", MessageBoxButton.YesNoCancel);
switch(result)
{
	case MessageBoxResult.Yes:
		MessageBox.Show("Hello to you too!", "My App");
		break;
	case MessageBoxResult.No:
		MessageBox.Show("Oh well, too bad!", "My App");
		break;
	case MessageBoxResult.Cancel:
		MessageBox.Show("Nevermind then...", "My App");
		break;
}
By checking the result value of the MessageBox.Show() method, you can now react to the user choice, as seen in the code example as well as on the screenshots.
MessageBox with an icon
The MessageBox has the ability to show a pre-defined icon to the left of the text message, by using a fourth parameter:
MessageBox.Show("Hello, world!", "My App", MessageBoxButton.OK, MessageBoxImage.Information);
Using the MessageBoxImage enumeration, you can choose between a range of icons for different situations. Here's the complete list:
- Asterisk
- Error
- Exclamation
- Hand
- Information
- None
- Question
- Stop
- Warning
The names should say a lot about how they look, but feel free to experiment with the various values or have a look at this MSDN article, where each value is explained and even illustrated:
MessageBox with a default option
The MessageBox will select a button as the default choice, which is then the button invoked in case the user just presses Enter once the dialog is shown. For instance, if you display a MessageBox with a "Yes" and a "No" button, "Yes" will be the default answer. You can change this behavior using a fifth parameter to the MessageBox.Show() method though:
MessageBox.Show("Hello, world?", "My App", MessageBoxButton.YesNo, MessageBoxImage.Question, MessageBoxResult.No);
Notice on the screenshot how the "No" button is slightly elevated, to visually indicate that it is selected and will be invoked if the Enter or Space button is pressed.
The complete example
As promised, here's the complete example used in this article:
<Window x:Class="WpfTutorialSamples.Dialogs.MessageBoxSample"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
	<StackPanel HorizontalAlignment="Center" VerticalAlignment="Center">
		<StackPanel.Resources>
			<Style TargetType="Button">
				<Setter Property="Margin" Value="0,0,0,10" />
			</Style>
		</StackPanel.Resources>
		<Button Name="btnSimpleMessageBox" Click="btnSimpleMessageBox_Click">Simple MessageBox</Button>
		<Button Name="btnMessageBoxWithTitle" Click="btnMessageBoxWithTitle_Click">MessageBox with title</Button>
		<Button Name="btnMessageBoxWithButtons" Click="btnMessageBoxWithButtons_Click">MessageBox with buttons</Button>
		<Button Name="btnMessageBoxWithResponse" Click="btnMessageBoxWithResponse_Click">MessageBox with response</Button>
		<Button Name="btnMessageBoxWithIcon" Click="btnMessageBoxWithIcon_Click">MessageBox with icon</Button>
		<Button Name="btnMessageBoxWithDefaultChoice" Click="btnMessageBoxWithDefaultChoice_Click">MessageBox with default choice</Button>
	</StackPanel>
</Window>
using System;
using System.Windows;

namespace WpfTutorialSamples.Dialogs
{
	public partial class MessageBoxSample : Window
	{
		public MessageBoxSample()
		{
			InitializeComponent();
		}

		private void btnSimpleMessageBox_Click(object sender, RoutedEventArgs e)
		{
			MessageBox.Show("Hello, world!");
		}

		private void btnMessageBoxWithTitle_Click(object sender, RoutedEventArgs e)
		{
			MessageBox.Show("Hello, world!", "My App");
		}

		private void btnMessageBoxWithButtons_Click(object sender, RoutedEventArgs e)
		{
			MessageBox.Show("This MessageBox has extra options.\n\nHello, world?", "My App", MessageBoxButton.YesNoCancel);
		}

		private void btnMessageBoxWithResponse_Click(object sender, RoutedEventArgs e)
		{
			MessageBoxResult result = MessageBox.Show("This MessageBox has extra options.\n\nHello, world?", "My App", MessageBoxButton.YesNoCancel);
			switch(result)
			{
				case MessageBoxResult.Yes:
					MessageBox.Show("Hello to you too!", "My App");
					break;
				case MessageBoxResult.No:
					MessageBox.Show("Oh well, too bad!", "My App");
					break;
				case MessageBoxResult.Cancel:
					MessageBox.Show("Nevermind then...", "My App");
					break;
			}
		}

		private void btnMessageBoxWithIcon_Click(object sender, RoutedEventArgs e)
		{
			MessageBox.Show("Hello, world!", "My App", MessageBoxButton.OK, MessageBoxImage.Information);
		}

		private void btnMessageBoxWithDefaultChoice_Click(object sender, RoutedEventArgs e)
		{
			MessageBox.Show("Hello, world?", "My App", MessageBoxButton.YesNo, MessageBoxImage.Question, MessageBoxResult.No);
		}
	}
}
https://wpf-tutorial.com/bg/45/dialogs/the-messagebox/
The
layouts/ folder contains different physical key layouts that can apply to different keyboards.
layouts/
+ default/
|  + 60_ansi/
|  |  + readme.md
|  |  + layout.json
|  |  + a_good_keymap/
|  |  |  + keymap.c
|  |  |  + readme.md
|  |  |  + config.h
|  |  |  + rules.mk
|  |  + <keymap folder>/
|  |  + ...
|  + <layout folder>/
+ community/
|  + <layout folder>/
|  + ...
The
layouts/default/ and
layouts/community/ are two examples of layout "repositories" - currently
default will contain all of the information concerning the layout, and one default keymap named
default_<layout>, for users to use as a reference.
community contains all of the community keymaps, with the eventual goal of being split-off into a separate repo for users to clone into
layouts/. QMK searches through all folders in
layouts/, so it's possible to have multiple repositories here.
Each layout folder is named (
[a-z0-9_]) after the physical aspects of the layout, in the most generic way possible, and contains a
readme.md with the layout to be defined by the keyboard:
# 60_ansi

LAYOUT_60_ansi
New names should try to stick to the standards set by existing layouts, and can be discussed in the PR/Issue.
For a keyboard to support a layout, the variable must be defined in its
<keyboard>.h, and match the number of arguments/keys (and preferably the physical layout):
#define LAYOUT_60_ansi KEYMAP_ANSI
The name of the layout must match this regex:
[a-z0-9_]+
The folder name must be added to the keyboard's
rules.mk:
LAYOUTS = 60_ansi
LAYOUTS can be set in any keyboard folder level's
rules.mk:
LAYOUTS = 60_iso
but the
LAYOUT_<layout> variable must be defined in
<folder>.h as well.
You should be able to build the keyboard keymap with a command in this format:
make <keyboard>:<layout>
When a keyboard supports multiple layout options,
LAYOUTS = ortho_4x4 ortho_4x12
And a layout exists for both options,
layouts/
+ community/
|  + ortho_4x4/
|  |  + <layout>/
|  |  |  + ...
|  + ortho_4x12/
|  |  + <layout>/
|  |  |  + ...
|  + ...
The FORCE_LAYOUT argument can be used to specify which layout to build
make <keyboard>:<layout> FORCE_LAYOUT=ortho_4x4
make <keyboard>:<layout> FORCE_LAYOUT=ortho_4x12
Instead of using
#include "planck.h", you can use this line to include whatever
<keyboard>.h (
<folder>.h should not be included here) file that is being compiled:
#include QMK_KEYBOARD_H
If you want to keep some keyboard-specific code, you can use these variables to escape it with an
#ifdef statement:
KEYBOARD_<folder1>_<folder2>
For example:
#ifdef KEYBOARD_planck
    #ifdef KEYBOARD_planck_rev4
        planck_rev4_function();
    #endif
#endif
Note that the names are lowercase and match the folder/file names for the keyboard/revision exactly.
In order to support both split and non-split keyboards with the same layout, you need to use the keyboard agnostic
LAYOUT_<layout name> macro in your keymap. For instance, in order for a Let's Split and Planck to share the same layout file, you need to use
LAYOUT_ortho_4x12 instead of
LAYOUT_planck_grid or just
{} for a C array.
https://beta.docs.qmk.fm/developing-qmk/qmk-reference/feature_layouts
This blog post illustrates how to manage multiple promises simultaneously using a library known as RSVP in Open Event frontend.
What are Promises?
Promises are used to manage asynchronous calls in JavaScript. Promises represent a value/object that may not be available yet but will become available in the near future. To quote from MDN web docs:
The Promise object represents the eventual completion (or failure) of an asynchronous operation, and its resulting value.
What about RSVP?
Rsvp is a lightweight library used to organize asynchronous code. Rsvp provides several ways to handle promises and their responses. A very simple promise implementation using rsvp looks something like this.
var RSVP = require('rsvp');

var promise = new RSVP.Promise(function(resolve, reject) {
  // succeed
  resolve(value);
  // or reject
  reject(error);
});

promise.then(function(value) {
  // success
}).catch(function(error) {
  // failure
});
It's simple, right? After defining a promise, it assumes two possible states — resolved or rejected — and once the promise has settled, it executes the respective handler.
Use in Open Event Frontend?
Almost all calls to the Open Event server APIs are done asynchronously. One of the most significant uses of RSVP comes when handling multiple promises in the frontend, where we want all of them to be evaluated together. Unlike normal promises, where each promise is resolved or rejected individually, RSVP provides a promise.all() method which accepts an array of promises and evaluates them all at once. It then calls resolve or reject based on the status of all promises. A typical example where we use promise.all() is given here.
import Controller from '@ember/controller';
import RSVP from 'rsvp';
import EventWizardMixin from 'open-event-frontend/mixins/event-wizard';

export default Controller.extend(EventWizardMixin, {
  actions: {
    save() {
      this.set('isLoading', true);
      this.get('model.data.event').save()
        .then(data => {
          let promises = [];
          promises.push(this.get('model.data.event.tickets').toArray().map(ticket => ticket.save()));
          promises.push(this.get('model.data.event.socialLinks').toArray().map(link => link.save()));
          if (this.get('model.data.event.copyright.licence')) {
            let copyright = this.setRelationship(this.get('model.data.event.copyright.content'), data);
            promises.push(copyright.save());
          }
          if (this.get('model.data.event.tax.name')) {
            let tax = this.setRelationship(this.get('model.data.event.tax.content'), data);
            if (this.get('model.event.isTaxEnabled')) {
              promises.push(tax.save());
            } else {
              promises.push(tax.destroyRecord());
            }
          }
          RSVP.Promise.all(promises)
            .then(() => {
              this.set('isLoading', false);
              this.get('notify').success(this.get('l10n').t('Your event has been saved'));
              this.transitionToRoute('events.view.index', data.id);
            }, function() {
              this.get('notify').error(this.get('l10n').t('Oops something went wrong. Please try again'));
            });
        })
        .catch(() => {
          this.set('isLoading', false);
          this.get('notify').error(this.get('l10n').t('Oops something went wrong. Please try again'));
        });
    }
  }
});
Here we made an array of promises and pushed each promise to it. We then passed this array to RSVP.Promise.all(), which evaluated it based on the success or failure of the promises.
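To see the all-or-nothing behaviour in isolation, here is a minimal sketch using the native Promise.all, whose contract RSVP.Promise.all mirrors: it resolves only when every promise in the array resolves, and rejects as soon as any single promise rejects. The saveAll helper name is just for illustration:

```javascript
// Resolves with all values when every promise succeeds;
// settles with the first error when any promise fails.
function saveAll(promises) {
  return Promise.all(promises)
    .then(values => ({ ok: true, values: values }))
    .catch(error => ({ ok: false, error: error }));
}

const allGood = [Promise.resolve(1), Promise.resolve(2)];
const oneBad  = [Promise.resolve(1), Promise.reject(new Error('save failed'))];

saveAll(allGood).then(r => console.log(r.ok));            // true
saveAll(oneBad).then(r => console.log(r.error.message));  // save failed
```

This is why the controller above can show a single success notification, or a single error notification, for the whole batch of saves.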
Resources:
- RSVP Official Docs, Inch CI: RSVP official documentation
- Documentation on Promise: MDN web docs: Promises (MDN Docs)
https://blog.fossasia.org/how-rsvp-handles-promises-in-open-event-frontend/?utm_source=rss&utm_medium=rss&utm_campaign=how-rsvp-handles-promises-in-open-event-frontend
In this third Swing tutorial we're going to add some text to our JTextArea when we click the button.
Its quite simple to do this, we need to add an Action Listener to the button. An Action Listener basically listens to the button, when the button is clicked it tells the Action Listener and we can program the Action Listener to do anything when it knows that the button has been clicked.
To do this add this code to your "MainFrame" class:
btn.addActionListener(new ActionListener() {
	public void actionPerformed(ActionEvent arg0) {
	}
});
You need to import ActionListener and ActionEvent — press CTRL + SHIFT + O (CMD + SHIFT + O on Mac).
As you can see we add an Action Listener to the button we created in the previous tutorial, called "btn", then we created a new Action Listener to add to it. Inside the Action Listener is a method, called actionPerformed, inside this method we can tell the Action Listener what to do when the button is clicked.
So to complete our goal of adding text to our JTextArea every time the button is clicked lets add 1 line of code to the actionPerformed method:
textArea.append("Hello\n");
We type the name of our JTextArea, in my case "textArea", and append a string "Hello\n", you can change this, the "\n" in the string creates a new line after the text so it doesn't all go on the same line.
If we run this now we can type in our JTextArea and when we click the button it adds "Hello" to it.
MainFrame.class
import java.awt.BorderLayout;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

		// ... (frame and component setup from the previous tutorial) ...

		btn.addActionListener(new ActionListener() {
			public void actionPerformed(ActionEvent arg0) {
				textArea.append("Hello\n");
			}
		});
	}
}
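As a side note: because ActionListener has a single abstract method, Java 8 and later lets you write the listener as a lambda, and you can exercise it without any visible window by firing an ActionEvent at it directly. A small stand-alone sketch (the StringBuilder here is just a stand-in for the JTextArea):

```java
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;

public class ListenerDemo {

    // Fires a listener n times with a synthetic ActionEvent and returns
    // what it appended — no window or event dispatch thread needed.
    static String simulateClicks(int n) {
        StringBuilder textArea = new StringBuilder(); // stand-in for the JTextArea

        // Java 8+ lambda form of the anonymous ActionListener class:
        ActionListener listener = e -> textArea.append("Hello\n");

        ActionEvent click = new ActionEvent(new Object(), ActionEvent.ACTION_PERFORMED, "click");
        for (int i = 0; i < n; i++)
            listener.actionPerformed(click);
        return textArea.toString();
    }

    public static void main(String[] args) {
        System.out.print(simulateClicks(2)); // prints "Hello" twice
    }
}
```

The lambda is exactly equivalent to the anonymous class used in the tutorial — the compiler infers `actionPerformed` as the implemented method.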
https://caveofprogramming.com/guest-posts/swing-tutorial-3-reacting-to-button-clicks.html
There are many ways to select a representative; a simple one is to select the element with the biggest index.
- Check if two persons are in the same group: if the representatives of the two individuals are the same, then they are friends.
Find: can be implemented by recursively traversing the parent array until we hit a node which is the parent of itself.
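To see what this recursive find (with path compression) does to the parent array, here is a small stand-alone sketch — the class and variable names are illustrative, not part of the program below:

```java
import java.util.Arrays;

// A chain 4 -> 3 -> 2 -> 1 -> 0; node 0 is its own parent (the root).
public class PathCompressionDemo {
    static int[] parent = {0, 0, 1, 2, 3};

    static int find(int x) {
        if (parent[x] != x)
            parent[x] = find(parent[x]); // point x straight at the root
        return parent[x];
    }

    public static void main(String[] args) {
        System.out.println(find(4));                 // 0
        System.out.println(Arrays.toString(parent)); // [0, 0, 0, 0, 0]
    }
}
```

A single find(4) call flattens the whole chain: every node visited on the way up now points directly at the representative, so later finds on those nodes take constant time.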
Union: find the representatives (roots) of the two sets. If they are the same, nothing needs to be done. Otherwise, attach the root of the tree with the smaller rank under the root of the tree with the larger rank (union by rank); if the ranks are equal, pick either root as the new parent and increment the resulting tree's rank by 1.
// A Java program to implement the Disjoint Set data structure.
import java.io.*;
import java.util.*;

class DisjointUnionSets {
    int[] rank, parent;
    int n;

    // Constructor
    public DisjointUnionSets(int n) {
        rank = new int[n];
        parent = new int[n];
        this.n = n;
        makeSet();
    }

    // Creates n sets with a single item in each
    void makeSet() {
        for (int i = 0; i < n; i++) {
            // Initially, all elements are in their own set.
            parent[i] = i;
        }
    }

    // Returns representative of x's set
    int find(int x) {
        // Finds the representative of the set that x is an element of
        if (parent[x] != x) {
            // If x is not the parent of itself, then x is not the
            // representative of its set, so we recursively call find
            // on its parent and move x's node directly under the
            // representative of this set.
            parent[x] = find(parent[x]);
        }
        return parent[x];
    }

    // Unites the set that includes x and the set that includes y
    void union(int x, int y) {
        // Find representatives of the two sets
        int xRoot = find(x), yRoot = find(y);

        // Elements are in the same set, no need to unite anything.
        if (xRoot == yRoot)
            return;

        // If x's rank is less than y's rank
        if (rank[xRoot] < rank[yRoot])
            // Then move x under y so that the depth of the tree remains small
            parent[xRoot] = yRoot;
        // Else if y's rank is less than x's rank
        else if (rank[yRoot] < rank[xRoot])
            // Then move y under x so that the depth of the tree remains small
            parent[yRoot] = xRoot;
        else { // If ranks are the same
            // Then move y under x (it doesn't matter which one goes where)
            parent[yRoot] = xRoot;
            // And increment the resulting tree's rank by 1
            rank[xRoot] = rank[xRoot] + 1;
        }
    }
}

// Driver code
public class Main {
    public static void main(String[] args) {
        // Let there be 5 persons with ids 0, 1, 2, 3 and 4
        int n = 5;
        DisjointUnionSets dus = new DisjointUnionSets(n);

        // 0 is a friend of 2
        dus.union(0, 2);
        // 4 is a friend of 2
        dus.union(4, 2);
        // 3 is a friend of 1
        dus.union(3, 1);

        // Check if 4 is a friend of 0
        if (dus.find(4) == dus.find(0))
            System.out.println("Yes");
        else
            System.out.println("No");

        // Check if 1 is a friend of 0
        if (dus.find(1) == dus.find(0))
            System.out.println("Yes");
        else
            System.out.println("No");
    }
}
Output:

Yes
No
https://www.geeksforgeeks.org/disjoint-set-data-structures-java-implementation/
XmTree man page
XmTree — The Tree widget class
Synopsis
#include <Xm/XTree.h>
Description
The.
User Interaction
Each node in the tree can be.
Normal Resources
All resource names begin with XmN and all resource class names begin with XmC.
connectStyle
The style of the lines visually connecting parent nodes to children nodes. The valid styles are XmTreeDirect or XmTreeLadder.
horizontalNodeSpace
verticalNodeSpace
The amount of space between each node in the tree and its nearest neighbor.
The following resources are inherited from the XmHierarchy widget:
All resource names begin with XmN and all resource class names begin with XmC.
Constraint Resources
All resource names begin with XmN and all resource class names begin with XmC.

openClosePadding
The number of pixels between the folder button and the node it is associated with.
lineColor
The color of the line connecting a node to its parent. The default value for this resource is the foreground color of the Tree widget.
lineWidth

The width of the line connecting a node to its parent.
See Also
XmColumn(3X)
https://www.mankier.com/3/XmTree
Section 5.11 of the ConfD 6.0.3 User Guide describes how to attach to confd in phase 0 to perform an upgrade of data in the CDB.
The User Guide example and the examples.confd source code are all in C. I am trying to do this using Python, using the unit tests source code for the pyapi as an example, but it is not easy to reverse-engineer this Python API. I understand that the Python API is supposed to be a mirror of the C API, but I am left guessing as to the details.
I have:
sys.path.append("/opt/tailf/confd/src/confd/pyapi")
import _confd as confd
# etc. etc.
sock = socket.socket()
confd.maapi.connect(sock, '127.0.0.1', confd.CONFD_PORT)
trans = confd.maapi.attach_init(sock)
confd.maapi.set_namespace(sock, trans, '')
# etc. etc.
confd.maapi.set_elem2(sock, trans, myData, '/my-path/my-data')
# etc. etc.
confd.maapi.close(sock)
My error occurs early on -- the set_namespace() call fails with "an integer is required". The value of trans is -2. This is an integer, but I'm guessing it's not correct. What should be the value of the init transaction? What does -2 mean? Confd is in phase 0 at this time.
Is there any API call I am missing? I know that I am not supposed to close the transaction because it's the init transaction. And I know that there is no user session for the same reason. Thank you very much.
You can address your Python related questions to support via the RT system.
Hi,
Was this reported as a bug in the end? We face a very similar issue in ConfD 6.2, so please let us know if this is fixed in a later version.
Thank you,
Dimitra
I have not filed a bug.
http://discuss.tail-f.com/t/attaching-to-the-init-transaction-in-phase-0-using-python/1011
Automatic Generation Of Object-Oriented Unit Tests Using Genetic Programming

Stefan Wappler

Prof. Dr. rer. nat. Peter Pepper (Vorsitzender)
Prof. Dr.-Ing. Ina Schieferdecker (Berichter)
Prof. Dr.-Ing. Stefan Jähnichen (Berichter)

Tag der wissenschaftlichen Aussprache: 19. Dezember 2007

Berlin 2008
D 83
Acknowledgements
I would like to express my sincere gratitude to my supervisors, Ina Schieferdecker and Joachim Wegener, for their professional guidance, inspiring discussions, and encouragement during the period of this research. My thanks are also due to Stefan Jähnichen for his broad support and guidance. Furthermore, I would like to thank all my colleagues from both Daimler AG and the Technical University of Berlin. In particular, I thank all my DCAITI team members for the very good cooperation and time we had together: Andreas Windisch, Fadi Chabarek, Linda Schmuhl, Abel Marrero-Perez, Kerstin Buhr, Steffen Kühn, Andrea Tüger, Oliver Heerde. I also thank in particular Harmen Sthamer for his review of this thesis. I gratefully acknowledge the encouraging meetings and discussions with Mark Harman from King's College, London, Phil McMinn from Sheffield University, Leonardo Bottaci from Hull University, and all other participants of the EvoTest project. Special thanks are due to Andrea Tüger, Oliver Heerde, and Richard Norridge from London for their quick and straightforward language-oriented review of this thesis. Thanks also go to my friends, my parents, my family, and my in-laws for all their support and encouragement. Finally, my greatest thanks to Lord Jesus Christ, who enabled me to perform this research and who accomplishes everything according to His glorious mind.
Abstract
Automating the generation of object-oriented unit tests for structural testing techniques has been challenging many researchers due to the benefits it promises in terms of cost saving and test quality improvement. It requires test sequences to be generated, each of which models a particular scenario in which the class under test is examined. The generation process aims at obtaining a preferably compact set of test sequences which attains a high degree of structural coverage. The degree of achieved structural coverage indicates the adequacy of the tests and hence the test quality in general.

Existing approaches to automatic test generation for object-oriented software mainly rely either on symbolic execution and constraint solving, or on a particular search technique. However, these approaches suffer from various limitations which negatively affect both their applicability in terms of classes for which they are feasible, and their effectiveness in terms of achievable structural coverage. The approaches based on symbolic execution and constraint solving inherit the limitations of these techniques, which are, for instance, issues with scalability and problems with loops, arrays, and complex predicates. The search-based approaches encounter problems in the presence of complex predicates and complex method call dependences. In addition, existing work addresses neither testing non-public methods without breaking data encapsulation, nor the occurrence of runtime exceptions during test generation. Yet, data encapsulation, non-public methods, and exception handling are fundamental concepts of object-oriented software and require also particular consideration for testing.

This thesis proposes a new approach to automating the generation of object-oriented unit tests. It employs genetic programming, a recent meta-heuristic optimization technique, which allows formulating the task of test sequence generation as a search problem more suitably than the search techniques applied by the existing approaches. The approach enables testing non-public methods and accounts for runtime exceptions by appropriately designing the objective functions that are used to guide the genetic programming search.

The value of the approach is shown by a case study with real-world classes that involve non-public methods and runtime exceptions. The structural coverage achieved by the approach is contrasted with that achieved by a random approach and two commercial test sequence generators. In most of the cases, the approach of this thesis outperformed the other methods.
Zusammenfassung

Automating the generation of object-oriented unit tests for structure-oriented testing promises enormous cost reduction and quality improvement for a software development project. The challenge is to automatically generate test sequences that achieve a high coverage of the source code of the class under test. These test sequences model particular scenarios in which the class under test is examined. The degree of achieved code coverage is a measure of test adequacy and thus of test quality in general.

The existing automation approaches rely mainly on either symbolic execution and constraint solving, or on a search technique. However, they have various limitations that restrict both their applicability to different classes under test and their effectiveness with respect to achievable code coverage.

The approaches based on symbolic execution and constraint solving exhibit the limitations of these techniques. These are, for example, restrictions regarding scalability and the use of certain programming constructs such as loops, arrays, and complex predicates. The search-based approaches have difficulties with complex predicates and complex method call dependences. The approaches address neither the testing of non-public methods without violating object encapsulation, nor the handling of runtime exceptions during test generation. Object encapsulation, non-public methods, and runtime exceptions are, however, fundamental concepts of object-oriented software that require particular attention during testing.

This dissertation proposes a new approach to the automatic generation of object-oriented unit tests. This approach employs genetic programming, a recent meta-heuristic optimization technique. Thereby, test sequence generation can be formulated as a search problem more suitably than the existing approaches allow, enabling more effective searches for test sequences that achieve high code coverage. The approach also encompasses the testing of non-public methods without breaking encapsulation and accounts for runtime exceptions by adequately defining the objective functions used for the search.

An extensive case study demonstrates the effectiveness of the approach. The classes used possess non-public methods and lead in numerous cases to runtime exceptions during test generation. The achieved code coverage is contrasted with the results of a random generator as well as two commercial test sequence generators. In the majority of cases, the approach proposed here outperformed the alternative generators.
Declaration
The work presented in this thesis is original work undertaken between October 2004
and September 2007 at the DaimlerChrysler AG,Research and Technology,Software
Technology Lab,and the Technical University of Berlin (DaimlerChrysler Automotive
IT Institute).Portions of this work have been published elsewhere:
• S. Wappler, F. Lammermann, Using Evolutionary Algorithms for the Unit Testing
of Object-Oriented Software, In GECCO ’05: Proceedings of the 2005 Conference
on Genetic and Evolutionary Computation, pages 1053-1060, Washington, D.C.,
USA, ACM Press, 2005
• S. Wappler, J. Wegener, Evolutionary Unit Testing of Object-Oriented Software
Using Strongly-Typed Genetic Programming, In GECCO ’06: Proceedings of the
2006 Conference on Genetic and Evolutionary Computation, pages 1925-1932,
Seattle, WA, USA, ACM Press, 2006
• S. Wappler, J. Wegener, Evolutionary Unit Testing of Object-Oriented Software
Using a Hybrid Evolutionary Algorithm, In Proceedings of the IEEE World Congress
on Computational Intelligence (WCCI-2006), pages 3227-3233, Vancouver, BC,
Canada, IEEE Press, 2006
• S. Wappler, A. Baresel, J. Wegener, Improving Evolutionary Testing in the Presence
of Function-Assigned Flags, In Proceedings of Testing: Academic and Industrial
Conference (TAIC PART), to appear, 2007
• S. Wappler, I. Schieferdecker, Improving Evolutionary Class Testing in the Presence
of Non-Public Methods, In Proceedings of the 2007 Conference on Automated
Software Engineering (ASE), to appear, 2007
Contents
1 Introduction 1
1.1 Aims and Objectives.............................3
1.2 Contributions.................................4
1.3 Structure...................................5
2 Background and Related Work 7
2.1 Structure-Oriented Class Testing......................7
2.1.1 Principles of Object-Oriented Software...............7
2.1.2 Software Testing in General.....................9
2.1.3 Class Testing.............................10
2.1.4 Structure-Oriented Testing Techniques...............11
2.2 Automatic Test Generation.........................14
2.2.1 Static Test Generation........................15
2.2.2 Dynamic Test Generation......................20
2.2.3 Commercial Test Generators....................29
2.2.4 Limitations of the Existing Approaches..............31
2.3 Evolutionary Algorithms...........................35
2.3.1 Evolutionary Algorithm Principles.................36
2.3.2 Genetic Algorithms..........................44
2.3.3 Genetic Programming........................46
2.4 Summary...................................51
3 Evolutionary Class Testing 53
3.1 Overview...................................53
3.2 A Formal Consideration of Test Sequences.................54
3.3 Representation by Method Call Trees and Number Sequences......56
3.3.1 The Method Call Dependence Graph................57
3.3.2 Method Call Trees..........................62
3.3.3 Primitive Arguments and Parameter Object Selectors......66
3.3.4 Test-Sequence-Generating Algorithm TCGen1...........69
3.4 Representation by Extended Method Call Trees..............72
3.4.1 Incorporating Parameter Space into Sequence Space.......72
3.4.2 Test-Sequence-Generating Algorithm TCGen2...........74
3.5 Objective Function Construction......................76
3.5.1 Classification of Execution Flows..................77
3.5.2 Dynamic Test Sequence Infeasibility................78
3.5.3 Endless Loops.............................78
3.5.4 Unfavorably Evaluated Conditions.................79
3.5.5 Runtime Exceptions.........................80
3.5.6 Non-Public Methods.........................83
3.5.7 Putting it all Together........................84
3.6 Test Cluster Definition............................85
3.6.1 Mock Classes.............................86
3.6.2 Interface Implementers and Abstract Class Implementers.....87
3.6.3 Array Generators...........................88
3.7 Function-Assigned Flags...........................88
3.7.1 Existing Approaches to Flag Removal...............90
3.7.2 Method Substitution.........................92
3.7.3 Boolean Variable Substitution....................93
3.8 Summary...................................98
4 Experiments 103
4.1 Implementation of EvoUnit.........................103
4.2 General Effectiveness Case Study......................106
4.2.1 Test Objects.............................106
4.2.2 Setup and Realization........................111
4.2.3 Results................................116
4.3 Non-Public Method Coverage Case Study.................125
4.3.1 Test Objects.............................126
4.3.2 Setup and Realization........................126
4.3.3 Results................................126
4.4 Function-Assigned Flag Case Study.....................127
4.4.1 Test Object..............................127
4.4.2 Setup and Realization........................129
4.4.3 Results................................129
4.5 Summary...................................130
5 Conclusion and Future Work 133
5.1 Summary of Achievements..........................133
5.2 Restrictions and Limitations.........................134
5.3 Summary of Future Work..........................136
5.3.1 Addressing the Limitations.....................137
5.3.2 Other directions...........................138
Bibliography 141
A Source Codes and Algorithms 147
A.1 Source Listings................................147
A.2 Algorithms..................................152
A.2.1 TCGen2................................153
List of Figures
2.1 Example control flow graph.........................12
2.2 Example symbolic execution tree......................17
2.3 Execution flows of a simple function....................23
2.4 Example application of Tonella’s crossover operator............27
2.5 Decomposition of a test sequence......................28
2.6 Classification of evolutionary algorithms..................37
2.7 Evolutionary algorithm context.......................38
2.8 Principle procedure of an evolutionary algorithm.............39
2.9 Stochastic universal sampling........................42
2.10 Simple program tree.............................47
2.11 Subtree crossover...............................49
2.12 ERC mutation................................50
2.13 Demotion mutation..............................50
2.14 Promotion mutation..............................51
3.1 Basic concept of evolutionary class testing.................53
3.2 Method call dependence graph.......................60
3.3 Method call tree...............................62
3.4 Method call dependence graph with additional call-contributing edges.64
3.5 Method call tree containing state-changing methods; with annotated
instances and their roles...........................65
3.6 Method call tree, generated by loosened tree creation algorithm.....67
3.7 Method call dependence graph, augmented by primitive types......73
3.8 Method call tree including parameter information.............75
3.9 Classification of test sequence executions..................77
3.10 Control flow graph including exceptional branches.............80
3.11 Objective functions for the different situations...............84
4.1 EvoUnit System Architecture........................104
4.2 Experimental ECJ pipeline.........................112
4.3 Results for parameter number of individuals................113
4.4 Results for parameter tournament size...................114
4.5 Coverage achieved by EvoUnit and random generator; J2SDK test objects . 120
4.6 Coverage achieved by EvoUnit and random generator; Quilt test objects . 120
4.7 Coverage achieved by EvoUnit and random generator; JFreeChart test
objects.....................................121
4.8 Coverage achieved by EvoUnit and random generator; both Colt and
Math test objects...............................121
4.9 Coverage achieved by all generators; J2SDK test objects.........125
4.10 Coverage achieved by all generators; Quilt test objects..........126
4.11 Coverage achieved by all generators; JFreeChart test objects.......127
4.12 Coverage achieved by all generators; both Colt and Math test objects..128
4.13 Coverage of non-public methods.......................129
4.14 Objective value development for transformed Stack............130
5.1 Example class diagram............................137
List of Tables
2.1 Distance functions..............................24
2.2 Related approaches..............................31
2.3 Limitations of the approaches........................36
2.4 Typed function set..............................47
3.1 Example type set...............................70
3.2 Example function set.............................70
3.3 Extended type set..............................75
3.4 Tactic 1....................................94
3.5 Tactic 2....................................95
3.6 Tactic 3....................................96
3.7 Arguments for func1 and the resulting flag values............97
3.8 Properties of evolutionary class testing with respect to the limitations.100
4.1 Test objects; general complexity metrics..................108
4.2 Test objects; properties related to limitations...............109
4.3 Test objects; properties related to the evolutionary search........110
4.4 Settings of the genetic programming system ECJ.............112
4.5 ERC value ranges...............................114
4.6 Results from EvoUnit (optimizing mode)..................117
4.7 Results from EvoUnit (random mode)...................118
4.8 Clover results for EvoUnit and CodePro..................123
4.9 Clover results for EvoUnit and Jtest....................124
Listings
2.1 Test sequence examining method equals of class IntegerRange......11
2.2 Simple function sorting two integers....................16
2.3 Simple C function..............................21
3.1 Linearized method call tree.........................66
3.2 Test sequence, augmented by framework methods.............68
3.3 Linearized method call tree.........................76
3.4 Statically feasible but dynamically infeasible test sequence........78
3.5 Exceptional test sequence..........................82
3.6 DatabaseAdapter class to be replaced....................86
3.7 DatabaseAdapter mock class.........................86
3.8 Array generator for class Integer......................88
3.9 Example of function-assigned flag......................89
3.10 Problematic flag transformation.......................90
3.11 Polymorphic stack types...........................91
3.12 Original predicate...............................92
3.13 Modified predicate..............................93
5.1 Example pointer-comparing predicate....................138
A.1 Class Integer.................................147
A.2 Class IntegerRange..............................147
A.3 Class State1..................................150
A.4 Class Stack..................................150
A.5 Class StackT.................................151
A.6 Test-sequence-generating algorithm TCGen1................152
A.7 Test-sequence-generating algorithm TCGen2................153
1 Introduction
Creating relevant test cases is the most critical activity during software testing. The set
of test cases with which the software under test will be examined must not only possess
a good ability to reveal faults, but also be a representative and maintainable subset of
all possible input situations. Both quality and significance of the overall test are directly
affected by the set of test cases used during testing.
With object-orientation, testing on the unit level – the most elementary level – focuses
on the examination of a single class. Classes are the atoms which, assembled together,
constitute an object-oriented application. A test case for unit testing a class includes
the information as to how to create an instance of the class under test, how to create
other instances that are needed during the test (for instance to serve as arguments for
operations), and which object states and results are expected when particular operations
of the class under test are executed. This information is represented by a test sequence –
a sequence of method calls which involve creating objects, putting the objects into proper
states, and invoking the operations to be examined – and a test evaluation, consisting of
one or several checks of the final state and the outputs.
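Such a test sequence can be sketched in Java. The IntegerRange class below is a minimal stand-in whose constructor and method signatures are assumptions made for illustration (a class of this name appears in the thesis's later examples):

```java
// A minimal stand-in for a class under test; its interface is assumed
// here purely for illustration.
class IntegerRange {
    private final int lower;
    private final int upper;

    IntegerRange(int lower, int upper) {
        this.lower = lower;
        this.upper = upper;
    }

    @Override
    public boolean equals(Object other) {
        if (!(other instanceof IntegerRange)) {
            return false;
        }
        IntegerRange r = (IntegerRange) other;
        return lower == r.lower && upper == r.upper;
    }

    @Override
    public int hashCode() {
        return 31 * lower + upper;
    }
}

public class IntegerRangeTest {
    public static void main(String[] args) {
        // Test sequence: create the object under test and the other
        // instances needed during the test ...
        IntegerRange underTest = new IntegerRange(0, 10);
        IntegerRange same = new IntegerRange(0, 10);
        IntegerRange different = new IntegerRange(5, 10);

        // ... invoke the operation under examination, then perform the
        // test evaluation: checks of the results and final states.
        if (!underTest.equals(same) || underTest.equals(different)) {
            throw new AssertionError("unexpected equals behavior");
        }
        System.out.println("test sequence passed");
    }
}
```

An automatic generator must synthesize exactly such call sequences, including the construction of the argument objects.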
Various techniques to derive relevant tests from different types of development artifacts
have been proposed. One important category of testing techniques is structure-oriented
testing. A structure-oriented testing technique utilizes the implementation (the source
code) of the software under test to identify relevant tests. This type of testing technique
is often applied to complement a function-oriented testing technique, which focuses on
the coverage of the requirements. Since both types of testing techniques have different
failure models in mind, their combination increases the quality of the overall test.
A structure-oriented testing technique employs a code coverage criterion to guide
the identification of relevant tests. For instance, statement testing utilizes the criterion
statement coverage and focuses on the statements of the software under test: tests are
to be generated that lead to the execution of all (or a high number of) statements of the
software under test. Faults related to the statements of the unit under test are expected
to be exhibited by the tests generated this way. Other important criteria are branch
coverage and condition coverage. Industrial quality standards demand that the tests
applied to software of a particular application domain exceed a predefined code coverage
rate. For instance, the avionics standard RTCA DO-178B (RTCA Inc., 1992) requires
that for airborne software belonging to a high risk level the corresponding test cases
satisfy decision coverage. Another example is the automotive standard ISO/WD 26262
(ISO, 2005). Depending on the risk level ASIL (automotive safety integrity level) to be
attained, statement coverage, decision coverage, path coverage, condition coverage, or
modified condition decision coverage must be maximized. The standard demands full
coverage, meaning that 100% code coverage must be achieved. However, it allows
deviating from full coverage in justified situations.
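The difference between the criteria can be made concrete with a small Java method. The method below is a made-up example, not taken from the standards cited above:

```java
// Hypothetical method illustrating statement vs. branch coverage.
public class CoverageExample {

    // A single test with x <= 0 executes every statement of this method
    // (full statement coverage), yet leaves the false outcome of the
    // decision uncovered; branch coverage additionally requires a test
    // with x > 0.
    static int clampToPositive(int x) {
        int result = x;
        if (x <= 0) {
            result = 1;  // only executed when the decision is true
        }
        return result;
    }

    public static void main(String[] args) {
        // Together, these two tests achieve full branch coverage.
        System.out.println(clampToPositive(-3)); // true branch, prints 1
        System.out.println(clampToPositive(5));  // false branch, prints 5
    }
}
```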
Although initially developed for testing procedural software, such as C or Ada modules,
structure-oriented testing techniques are also effectively applied to testing object-oriented
software. Recent investigations have shown that they are well-suited to create relevant
tests for object-oriented class testing, and are advised to be applied in conjunction with
other object-oriented testing techniques (Kim, Clark and McDermid, 1999; Kim, Clark
and McDermid, 2000).
Software testing consumes up to half of the budget of a software development project
(Beizer, 1990). A survey carried out by DaimlerChrysler confirms the findings of other
companies: while 50% of the costs for a development project are spent on implementation
activities, the remaining 50% are spent on testing purposes (Grochtmann, 2000). Unit
testing and integration testing need 30% of the total budget. The process of creating
relevant tests consumes significant resources in terms of time, human capacity, and thus
costs. When done manually, it is also tedious and error-prone.
Several approaches exist that automate the creation of test sequences for object-
oriented unit testing in order to benefit from reductions in time, labor, and budget.
The structure-oriented approaches, which will be considered in this work, rely on either
symbolic execution and constraint solving (King, 1976; Tsang, 1993), or on concrete
execution and a search strategy. The former will be referred to as static approaches,
while the latter will be referred to as dynamic approaches. More recent approaches
combine aspects of the two categories. The common idea is to divide the source code to
be covered by tests into individual components, referred to as test goals in the following.
For instance, in the case of branch testing, each branch of the control flow graphs of the
methods of the class under test is considered a test goal. An attempt is made to create a
test sequence for each test goal. The static approaches apply symbolic execution, which
emulates the actual execution of the software under test using symbolic inputs instead of
concrete ones. Path conditions are thereby collected which formulate the requirements
to be satisfied by the participating objects in order for the execution to cover the
targeted test goal. Constraint solving then tries to compute a concrete accumulation of
object states from the path conditions. In contrast, the dynamic approaches execute the
software under test using concrete objects and inputs. A search strategy is employed to
search the space of all possible test sequences for a covering one.
However, the existing approaches possess several limitations which diminish their value:
symbolic execution suffers from the problem of state space explosion if the software
under test is complex. When a huge set of symbolic states results from the structure of
the software under test, memory and computation power may not suffice to maintain
and examine these states with a practical performance. For instance, loops in the
source code will result in an infinite set of symbolic states, if not appropriately bounded.
Symbolic execution is also limited in the presence of polymorphism due to its static
nature. Constraint solving suffers from the problem of non-linear and sometimes too
complex constraints: today's constraint solvers are not able to compute a solution for
any given collection of path constraints, in particular if the constraints contain severe
non-linearities. Furthermore, most static approaches do not create the desired test
sequences, but rather in-memory representations of the objects participating in the tests.
Such a representation must be transformed to a proper test sequence in order to be
maintainable and insusceptible to class refactorings. However, the respective works
do not propose an algorithm that realizes such a transformation. Additionally, some static
approaches are only applicable to classes whose methods have exclusively primitive
argument types.
The dynamic approaches have deficiencies concerning both the effectiveness and
efficiency of the search: (1) the incorporated search strategy may fail to find a test
sequence which covers a test goal that is dependent on a complex condition, (2) the
search is inefficient since it allows the generation of inexecutable test sequences, (3) the
search requires detailed additional problem-specific, user-provided information to be
effective. Furthermore, the dynamic approaches are limited in the presence of runtime
exceptions: due to the random nature of the search of these approaches, implicit method
preconditions might be violated, causing a runtime exception to be raised during a
search. In this case, the search just terminates and does not deliver a result.
Many approaches also break the encapsulation of the classes under test. The generated
tests are formulated so that the encapsulated data is accessed during test execution
in order to put the objects into proper states. Doing so is critical since object states
can be achieved which violate class invariants and hence contradict the specification
of the classes. Using test sequences obtained by breaking the encapsulation questions
the expressiveness of the overall test. No existing approach directly addresses testing
non-public methods without breaking encapsulation.
1.1 Aims and Objectives
This thesis suggests a new approach to automatic test sequence generation for object-
oriented class testing. Its main objective is to tackle the following limitations of the
existing automation techniques in order to allow for broader applicability and improved
effectiveness:
1. limitations of symbolic execution and constraint solving in general
2. limited applicability due to limited support for class type arguments
3. limited maintainability and usability of the generated results
4. inefficiency due to inexecutable test sequences
5. weaknesses in the presence of complex predicates
6. insufficient treatment of runtime exceptions
7. insufficient support of testing non-public methods
These limitations will be addressed by developing a new search-based automation
approach which follows the ideas of evolutionary structural testing. Evolutionary structural
testing is a dynamic test generation technique that has been developed for testing
procedural software. It employs evolutionary algorithms for the search for test data that
maximize the code coverage of a procedure. Applying evolutionary algorithms eliminates
the need to perform symbolic execution and constraint solving and hence overcomes the
limitations inherent to both techniques (limitation 1).
An objective of this thesis is to enable the generation of test sequences that can create
arbitrary objects that serve as arguments for succeeding method calls. This further
allows the application of automatic test generation to classes that do not only possess
methods with primitive argument types (limitation 2).
An evolutionary algorithm requires the definition of both a suitable representation of
candidate solutions (points in the search space) and an objective function that guides
the search. An objective of this thesis is to develop a representation of test sequences
that (a) relies on the public class interfaces only, and (b) defines a search space that
preferably contains only executable test sequences, in order to cope with both
limitations 3 and 4.
Another objective is to design the objective functions used for the search so that they
provide sufficient guidance also (a) in the presence of complex predicates controlling
the test goal to be attained, (b) in the presence of undesired runtime exceptions which
prematurely terminate the evaluation of a test sequence, and (c) in the case of a test goal
that belongs to a non-public method. The strategy for objective function construction
aims at treating limitations 5, 6, and 7.
The thesis exemplifies the automation of a particular type of decision testing. However,
the approach is supposed to be also applicable to other structure-oriented techniques
without great modification. The object-oriented concepts of the Java programming
language (Gosling, Joy and Steele, 2005) are considered; the examples discussed in this
thesis are classes and methods written in Java. Yet, the ideas of this thesis are expected to
be applicable to testing software written in other object-oriented programming languages,
albeit additional adaptation might be required.
1.2 Contributions
The contributions of this work are the following:
1. The investigation of the peculiarities of class testing, with particular regard to
automatic test sequence generation;
2. The analysis of the state of the art of automatic test generation for class testing,
along with the identification of deficiencies of current approaches;
3. The proposal of an approach to automatic test generation for class testing based
on genetic programming, which consists of
• the proposal of a representation of test sequences based on method call trees,
which enables the use of an off-the-shelf genetic programming system for test
sequence generation, and
• the proposal of a strategy for objective function design for decision testing
which copes with complex predicates, runtime exceptions, and non-public
methods;
4. The demonstration of the effectiveness of the approach in terms of achieved code
coverage;
5. The proposal of two strategies to improve the guidance of the evolutionary search
in the presence of Boolean predicates;
6. The demonstration of the effectiveness of these two strategies.
1.3 Structure
This thesis is organized as follows:
Chapter 2 – Background and Related Work lays the foundation of this work. It
starts with an introduction to object-oriented class testing, including a short summary
of the principles of object-orientation and the description of structure-oriented testing
techniques. Afterwards, automatic test generation for class testing is discussed. Finally,
evolutionary algorithms are detailed. Particular emphasis is given to genetic programming,
which is the key ingredient of the new approach.
Chapter 3 – Evolutionary Class Testing describes the new approach to automatic
test generation for class testing in detail. First, it discusses the structure of test sequences
in general. Then, two different representations of test sequences are suggested. The
second representation is an extension of the first and simplifies the applied search
algorithm significantly. Following this, the strategy for designing a suitable objective
function for a given test goal is detailed. This includes a discussion of how to cope with
runtime exceptions and non-public methods. An approach to handling non-instantiable
classes is explained. Finally, the chapter discusses two strategies for improving the
landscape of the objective functions in the presence of function-assigned flags, a frequently
used code construct which sometimes hinders the evolutionary search.
Chapter 4 – Experiments reports on the results of three case studies which were
performed to empirically assess the effectiveness of the approach. The first case study
aims at demonstrating the effectiveness of the approach in terms of achieved code
coverage in general. The coverage results obtained by the evolutionary class testing
approach are contrasted with the results achieved by a random test sequence generator
and two commercial generators. The second case study investigates the value of the
objective functions for test goals belonging to non-public methods. It contrasts the
results obtained by using the extended objective functions with the results obtained
without using the extensions. The third case study evaluates one of the two strategies
for objective landscape improvement.
Chapter 5 – Conclusion and Future Work summarizes the achievements of the
thesis, points out the restrictions and limitations of the new approach, and gives directions
for future research.
2 Background and Related Work
This chapter introduces structure-oriented unit testing of object-oriented software in
Section 2.1 and reviews work in the field of automatic test generation in Section 2.2 on
page 14. Evolutionary algorithms, the search technique on which this thesis builds, are
presented in Section 2.3 on page 35. The basic concepts discussed here are key to the
remainder of this thesis.
2.1 Structure-Oriented Class Testing
This section introduces structure-oriented unit testing of object-oriented software. It
describes the technical scope of this thesis. First, Section 2.1.1 highlights the concepts
of object-orientation. Next, Section 2.1.2 on page 9 gives an introduction to software
testing in general, while Section 2.1.3 on page 10 elaborates on testing object-oriented
software on the unit level in particular. Finally, Section 2.1.4 on page 11 discusses
structure-oriented testing techniques in depth.
2.1.1 Principles of Object-Oriented Software
According to Stroustrup (1988), a programming language is object-oriented if it provides
full support for data abstraction, encapsulation, inheritance, polymorphism, and self-
recursion. For example, C++ (Stroustrup, 2000) and Java (Gosling et al., 2005) are
object-oriented programming languages. In contrast, the language C (ISO/IEC 9899,
1990), whose primary abstraction is a module control flow, is a procedural programming
language (Binder, 1999). In the following, the mentioned object-oriented concepts will
be explained in more detail, along with the description of the basic terminology.
Data Abstraction (Classes, Objects, and Interfaces)
An object-oriented application is a composition of interacting objects that communicate
with each other by issuing function calls. An object is an instance of a class. At runtime
of an application, more than one instance of the same class can exist. A class is an
abstract data type. It assembles attributes and methods. Both attributes and methods
are called class members. The attributes are variables that represent the state of an
object. An attribute may be of primitive type, such as integer or float, or of a class
or interface type. The methods are procedures that typically operate on the attributes.
An interface is an abstract data type that consists of method declarations only; no
implementations are assigned to the method declarations. A class can implement an
interface by providing a method implementation for each method declaration of the
respective interface. An abstract class is a class, some or all of whose methods are not
implemented, or one that is explicitly declared as being abstract. An abstract
class cannot be instantiated.
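These concepts can be summarized in a short Java sketch; all type names below are invented for illustration:

```java
// Invented types illustrating interfaces, abstract classes, and classes.

// An interface: method declarations only, no implementations.
interface Shape {
    double area();
}

// An abstract class: partially implemented; it cannot be instantiated.
abstract class NamedShape implements Shape {
    private final String name;           // attribute holding object state

    NamedShape(String name) { this.name = name; }

    String describe() {                  // implemented method (class member)
        return name + " with area " + area();
    }
}

// A concrete class providing an implementation for every declared method.
class Rectangle extends NamedShape {
    private final double width;
    private final double height;

    Rectangle(double width, double height) {
        super("rectangle");
        this.width = width;
        this.height = height;
    }

    @Override
    public double area() { return width * height; }
}

public class AbstractionDemo {
    public static void main(String[] args) {
        // At runtime, more than one instance of the same class can exist.
        Rectangle a = new Rectangle(2, 3);
        Rectangle b = new Rectangle(4, 5);
        System.out.println(a.describe());   // rectangle with area 6.0
        System.out.println(b.describe());   // rectangle with area 20.0
    }
}
```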
Encapsulation
The attributes and methods of a class can be marked to be visible in certain contexts only.
Visible means, in the case of an attribute, that the value of the attribute can be read and
written, and in the case of a method, that it can be invoked. Typically, an object-oriented
programming language offers the visibility modifiers public, protected, and private (Java
also offers the modifier package). A class member marked public is visible to all objects
of the application, regardless of which class declares it. A class member marked protected
is only visible to objects of the class that declares it and to objects of all subclasses of the
declaring class (see Section 2.1.1 for subclassing; in some programming languages, for
instance in Java, protected members are also visible to classes belonging to the same
package). A class member marked private is only visible to objects of the class that
declares it. A class member marked package is visible to the objects of all classes that
belong to the same package. A package is a particular collection of classes.
Visibility is enforced by the compiler of the programming language. A programmer
cannot write and compile code that accesses a private member from outside the class
that declares that private member – the compiler refuses to compile it. However, some
programming languages, such as Java, allow one to circumvent the visibility control
mechanism, and thus to break encapsulation, by providing an additional programming
interface. Via this interface (in the case of Java it is the Reflection API) non-public members
can be accessed freely. Section 2.2.4 on page 33 discusses the implications of breaking
encapsulation for software testing in more detail.
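The circumvention mentioned above works roughly as follows in Java. The Counter class is invented for this sketch, but Field.getDeclaredField and setAccessible are the actual Reflection API mechanism:

```java
import java.lang.reflect.Field;

// Invented class whose state is hidden behind a private attribute.
class Counter {
    private int count = 0;      // not visible outside this class

    public void increment() { count++; }
}

public class ReflectionDemo {
    public static void main(String[] args) throws Exception {
        Counter c = new Counter();
        c.increment();

        // The Reflection API circumvents the compiler's visibility control:
        // the private attribute becomes readable and writable from outside.
        Field count = Counter.class.getDeclaredField("count");
        count.setAccessible(true);            // disables the visibility check

        System.out.println(count.getInt(c));  // prints 1

        // Writing the field directly can produce object states that no
        // sequence of public method calls could ever reach -- the risk
        // discussed for testing in Section 2.2.4.
        count.setInt(c, -42);
        System.out.println(count.getInt(c));  // prints -42
    }
}
```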
Inheritance
Subclasses can be derived from a given class. A subclass possesses all members of the
super class (the class from which it is derived) without the need to define these members
itself. This mechanism is called inheritance. Usually, inheritance is used to realize
some specialization of a class. A subclass may specify additional members and also may
override inherited methods, if they are accessible. Overriding means to redefine the
implementation of the method, hence possibly changing the behavior of that method.
Polymorphism
A programming language may integrate different kinds of polymorphism. Polymorphism
of object identifiers (variables) is the most significant kind for an object-oriented
programming language. This polymorphism is the concept that allows a variable, which
is declared to be of a particular class type, to refer to an object of a subclass of that
class type. Whenever a member is accessed via the variable, the access is made on the
actual class, which is not necessarily the declared class. The actual method to invoke
is identified at runtime. The mechanism of detecting the actual method to call during
runtime is called dynamic binding. Polymorphism is usually restricted by inheritance,
meaning that it applies to classes that belong to the same inheritance hierarchy. Other
kinds of polymorphism are, for instance, the template concept in C++ or the overloading
of operators.
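Dynamic binding can be demonstrated with a minimal Java example (the classes are invented):

```java
// Dynamic binding: the method invoked is determined by the runtime type
// of the object, not by the declared type of the variable.

class Animal {
    String sound() { return "..."; }
}

class Dog extends Animal {
    @Override
    String sound() { return "woof"; }   // overrides the inherited method
}

public class BindingDemo {
    public static void main(String[] args) {
        // The variable is declared as Animal but refers to a Dog instance.
        Animal a = new Dog();

        // Dynamic binding selects Dog.sound() at runtime.
        System.out.println(a.sound());  // prints "woof"
    }
}
```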
Self-Recursion
Self-recursion is the ability of an object to refer to its own identity. This means, for
instance, that a method of an object can call another method on the same object.
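In Java this corresponds to calls through the implicit this reference; the class below is invented for illustration:

```java
// Self-recursion: a method invoking another method on the same object.
public class SelfRecursionDemo {
    private int value = 2;

    int doubled() {
        return times(2);             // implicit call on this
    }

    int times(int factor) {
        return this.value * factor;  // explicit use of the object's own identity
    }

    public static void main(String[] args) {
        System.out.println(new SelfRecursionDemo().doubled());  // prints 4
    }
}
```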
2.1.2 Software Testing in General
Testing is an important analytical quality assurance means in the area of software
development. It is an integral part of the established process models for software
development, such as the spiral model (Boehm, 1988). Its systematic application
is required by industrial standards, e.g. ISO WD 26262. The primary intention of
testing is to find faults in the software under test and to gain confidence in the correct
implementation of the functionality if no faults are found. Testing is an execution-based
technique, meaning that the software under test will be executed. Thereby, the behavior
of the system under test will be observed and evaluated.
A comprehensive and complete test requires the tested software to run in each possible scenario with any possible input. Since this is practically impossible (due to the combinatorial explosion caused by the typically huge input value ranges), testing also includes a sampling activity that selects relevant test inputs with which the test will be performed. This sampling activity, called test case generation or simply test generation, is crucial to software testing since it directly affects the quality of the overall test. Either the selection of the sample tests is poor, possibly involving redundancy or leaving gaps, in which case the overall test quality is also poor. Or the selection of tests covers a wide range of possible behaviors of the system under test, in which case the significance of the overall test as well as the fault-revealing potential is high.
Various approaches exist to guide the process of test generation. In general, one distinguishes between approaches based on the specification of the system under test (function-oriented testing, also called specification-based testing, or black-box testing), and approaches based on the implementation of the system under test (structure-oriented testing, also called implementation-based testing, or white-box testing). While function-oriented approaches guide the process of test generation by the semantics (the software specification), structure-oriented approaches guide it by the syntax (structural aspects of the implementation). As described in Chapter 1 on page 1, function-oriented and structure-oriented techniques complement one another since they are based on different fault models.
Testing takes place at different aggregation levels of the software. Unit testing is considered to be the most elementary level of testing. It addresses the examination of the "atoms" of the software under test. With regard to the paradigm of object-orientation,
10 2 Background and Related Work
these atoms are the classes, the instances of which the overall application is composed. Therefore, unit testing of object-oriented software is also referred to as class-level testing, or simply class testing. Integration testing applies to the level of compositions of atoms. Different combinations of these compositions are examined on this level. The focus is on the interaction of the elements of a composition. For the integration test of object-oriented systems, the single classes are integrated step by step in order to finally realize the intended application. At the system level, system testing examines the behavior of the overall application in conjunction with all peripheral components.
2.1.3 Class Testing
Class testing focuses on the examination of a single class. Due to the data dependencies among the methods (several methods access the same attributes) and data encapsulation, often a single method cannot be tested in isolation; rather, the interplay of several methods is examined. For instance, a class test may be intended to examine the correctness of method equals of class C; however, at the same time the constructor of class C, which is involved in the test since it creates an object for which to invoke method equals, is also tested. The method on which a class test focuses will be referred to as the method under test.
Testing a particular class often involves other classes. For instance, the constructor of class C might require an instance of class D to be passed as an argument. Other methods might require instances of other classes as arguments. The entirety of classes needed to test a particular class C will be referred to as the test cluster for C. The test cluster of C includes C. Attempts are made to minimize the "negative impacts" of the additional classes on the tests by using surrogate classes, for instance mock classes (Beck, 2003). A surrogate class is a replacement for a genuine class; while it possesses the same public interface, it might have completely different implementations of the methods. For instance, a complex class which requires particular resources to be available (such as database content or network resources) is often replaced by a mock class which mimics the behavior of the surrogated class but does not require its resources. Instead of delivering real database content, the methods of the mock class may return fixed, user-adjustable values. Another reason for using mock classes is to avoid a failure caused by an object of an associated class propagating to a failure of the primarily tested instance, thus making the localization of the fault difficult. In general, it is not reasonable to replace each class of the test cluster by a mock class. Therefore, unit testing is sometimes already integration testing.
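A minimal Java sketch of the mock idea (all names hypothetical): the mock class offers the same public interface as the resource-dependent collaborator but returns a fixed, user-adjustable value instead of touching a database.

```java
// Interface of a resource-dependent collaborator.
interface UserStore {
    String lookup(int id);
}

// Mock class: same public interface, but returns a fixed value
// instead of querying a real database.
class MockUserStore implements UserStore {
    private final String fixedName;
    MockUserStore(String fixedName) { this.fixedName = fixedName; }
    public String lookup(int id) { return fixedName; }
}

// Class under test, exercised against the mock instead of the real store.
class Greeter {
    private final UserStore store;
    Greeter(UserStore store) { this.store = store; }
    String greet(int id) { return "Hello, " + store.lookup(id); }
}
```

A failure observed while testing Greeter against the mock can then be attributed to Greeter itself, not to the surrogated collaborator.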
An object-oriented unit test consists of a sequence of method calls that model a particular test scenario, and a sequence of assertions that checks whether or not the test is passed. The sequence of method calls will be referred to as the test sequence; the sequence of assertion statements will be referred to as the test evaluation.

A test sequence normally does not involve branching statements, such as if statements or switch statements, because a test sequence considers one particular scenario and does not allow alternatives: either the scenario is run through as expected, then the test passes, or the scenario is not run through as expected, then the test fails. This
thesis also assumes that a test sequence does not involve loop statements, such as while. However, a test sequence can formulate a loop as the repetition of a subsequence (that is, as an unrolled loop).
The test sequence shown in Listing 2.1 focuses on testing method equals of class IntegerRange (its source code is shown in Listing A.2). However, it indirectly tests the constructor and method clone, too. Statements 1 to 4 create the instances needed, whereas statement 5 calls the method under test.
Listing 2.1: Test sequence examining method equals of class IntegerRange

// test sequence
Integer i1 = new Integer(0);
Integer i2 = new Integer(100000);
IntegerRange ir1 = new IntegerRange(i1, i2);
IntegerRange ir2 = ir1.clone();
boolean result = ir1.equals(ir2);

// test evaluation
assert(result == true);
Basically, a test sequence creates the objects necessary to execute the method under test by calling object-creating methods, and puts the created objects into particular states by calling instance methods on them. The test sequence in Listing 2.1 does not include state-changing methods; the initial states of the objects already accommodate the objective of the test. In the example, at first two instances of class Integer are created. These instances are then passed on to the constructor of the class under test, IntegerRange. Afterwards, method clone is called to create a copy of the IntegerRange instance. Finally, the equality of the genuine instance and the copy is checked. According to the test evaluation, the test only passes if the check delivers the result true.
2.1.4 Structure-Oriented Testing Techniques
Structure-oriented testing techniques derive relevant tests from the implementation, that is, the source code, of the unit under test. Various categories of structure-oriented testing techniques exist, such as control-flow-oriented techniques or data-flow-oriented techniques. This work focuses on control-flow-oriented techniques. Their characteristic is that they derive relevant tests from the control flow graph (Hecht, 1977) of the unit under test. The control flow graph is a graphical representation of all control flows that can occur in a function (procedural programming) or method (object-oriented programming). To simplify matters, both functions and methods will be referred to as functions in the following.
Definition 2.1.1. The control flow graph G of function f is a directed graph, defined by the tuple (N, E, s, x), where N is the set of nodes, each of which represents a basic block of function f; E ⊆ (N × N) is the set of edges (branches), each of which represents a possible transfer of control between two basic blocks; s ∈ N is the starting node; and x ∈ N
is the exit node. Additionally, the following two restrictions hold: ∀n ∈ N: (n, s) ∉ E, and ∀n ∈ N: (x, n) ∉ E.
Figure 2.1 shows the control flow graph of function func from Listing 2.3.

[Figure 2.1: Example control flow graph]

The start node is labeled "s", while the exit node is labeled "x". A branching node (a node from which two branches originate) represents a conditional statement, while a normal node represents a basic block, that is, a series of sequentially executed statements. A conditional statement refers to a predicate, which can be composed of several atomic conditions. Each conditional statement represents a decision.
The control flow graph of a function is the basis for various testing techniques. For instance, branch testing drives the generation of tests by the question of which branches of the control flow graph are traversed during the execution of the tests. The technique generates tests with the intention of maximizing the number of traversed branches. Branch coverage, the ratio between the number of branches already covered by tests and the total number of branches, is an indicator of the adequacy and completeness of a given set of tests. Beizer (1990) discusses the various testing techniques in greater detail.
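The branch coverage ratio itself is a simple computation; the following sketch (branch identifiers are plain strings here, an assumption for illustration) computes it from the set of traversed branches and the set of all branches of the control flow graph:

```java
import java.util.Set;

// Branch coverage as defined above: the ratio between the number of
// branches traversed by the tests and the total number of branches
// of the control flow graph.
class BranchCoverage {
    static double coverage(Set<String> traversed, Set<String> all) {
        long hit = traversed.stream().filter(all::contains).count();
        return (double) hit / all.size();
    }
}
```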
The following list gives a selection of common structure-oriented testing techniques along with both the underlying fault model and the related coverage criteria:

• Statement testing assumes that each statement of the unit under test may contain a fault. When executing each statement during testing, the occurring failures reveal the faults related to the statements of the code (presuming that a fault actually propagates to an observable failure). Therefore, statement testing aims at maximizing the number of statements executed during testing. Statement coverage (also referred to as C_0 coverage) indicates test adequacy and completeness for statement testing. It is defined as the ratio between the number of all statements executed during the execution of all tests and the number of all statements of the software under test.
• Branch testing assumes that each branch of the control flow graph of the unit under test may contain a fault. When traversing each branch during testing, the occurring failures reveal the faults related to the transfer of control of the code (presuming that a fault actually propagates to an observable failure). Therefore, branch testing aims at maximizing the number of branches traversed during testing. Branch coverage (also referred to as C_1 coverage) indicates test adequacy and completeness for branch testing. It is defined as the ratio between the number of branches traversed during the execution of all tests and the total number of branches of the respective control flow graph.
• Decision testing is very similar to branch testing. The only difference is that decision testing takes only those branches of the control flow graph into account that start at branching nodes. Other branches, such as those connecting the start node with the first basic block node, are not considered.
• Path testing assumes that each path through the control flow graph of the unit under test may contain a fault. When traversing each path during testing, the occurring failures reveal the faults related to the control flow paths (presuming that a fault actually propagates to an observable failure). Therefore, path testing aims at maximizing the number of program paths traversed during testing. Path coverage indicates test adequacy and completeness for path testing. It is defined as the ratio between the number of paths traversed during the execution of all tests and the total number of paths of the respective control flow graph.
• Condition testing assumes that each predicate of the unit under test may contain faults. When evaluating various combinations of the atomic conditions of a predicate during testing, the occurring failures reveal the faults related to the predicates in the code under test. Several versions of condition testing exist, each of which focuses on different combinations of the atomic conditions of a predicate. An important version is modified condition/decision testing.
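The difference between statement testing and branch testing can be seen on a one-decision method (a hypothetical example): a single test with a negative input executes every statement of abs, yet the false branch of the decision remains untraversed, so full statement coverage does not imply full branch coverage.

```java
class AbsDemo {
    // One decision; the assignment r = -x lies on the true branch only.
    static int abs(int x) {
        int r = x;
        if (x < 0) {
            r = -x;   // a single negative input already executes every statement
        }
        return r;     // the false branch is only traversed for x >= 0
    }
}
```

A test suite achieving full branch coverage therefore needs (at least) one negative and one non-negative input.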
Although the code-coverage-based testing techniques were originally designed for testing procedural software, their applicability to testing object-oriented software is widely accepted. Thorough investigations into the suitability of these techniques for object-oriented testing, such as Kim, Clark and McDermid (2001) or Kim et al. (2000), suggest their effectiveness and advise their use in combination with other, specifically object-oriented, techniques.
The code coverage criteria listed above apply to a single function and not to a whole class. In order to allow one to make statements concerning code coverage on the class level, this thesis suggests the application of the metric method/decision coverage, which has been developed during the research of this thesis. It combines the techniques of decision testing and method testing. Method testing assumes that each method of the class under test may contain a fault. When executing each method during testing, the occurring failures reveal the faults in the methods. Therefore, method testing aims at maximizing the number of methods called during testing. Method coverage indicates test adequacy and completeness for method testing. It is defined as the ratio between the number of methods executed during the tests and the total number of methods (both public and non-public).
Method/decision coverage is defined as follows:

Definition 2.1.2. Let d_c be the number of decisions that occur in the source code of class c. Additionally, let s_c be the number of methods of c whose implementation is free of decisions, meaning that it consists of a sequence of statements only. Furthermore, let S be the set of test cases that are executed during testing. Let d^{true}_{c,S} be the number of decisions evaluated to true during test case execution at least once, and d^{false}_{c,S} be the number of decisions evaluated to false during test case execution at least once. Finally, let s_{c,S} be the number of decision-free methods entered during the execution of the test cases in S. Then, method/decision coverage D^+(c, S) for class c achieved by test suite S is defined as follows:

D^+(c, S) = \frac{d^{true}_{c,S} + d^{false}_{c,S} + s_{c,S}}{2 d_c + s_c}    (2.1)
Method/decision coverage accumulates the decision coverage of the single methods of a class. However, in addition it also accounts for methods that do not possess any predicates. It combines the fault models behind both decision coverage and method coverage.
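Equation (2.1) can be evaluated directly from the counts of the definition. As a worked example, assume a class with d_c = 2 decisions and s_c = 1 decision-free method, and a test suite that evaluates both decisions to true, one decision to false, and enters the decision-free method: D+ = (2 + 1 + 1) / (2·2 + 1) = 4/5 = 0.8.

```java
class MethodDecisionCoverage {
    // Direct implementation of Equation (2.1).
    static double dPlus(int dTrue, int dFalse, int sCovered,
                        int decisions, int decisionFreeMethods) {
        return (double) (dTrue + dFalse + sCovered)
                / (2 * decisions + decisionFreeMethods);
    }
}
```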
2.2 Automatic Test Generation
When accomplished manually, the process of test generation is tedious, error-prone and costly. The literature states that between 30% and 70% of a software project's budget is spent on testing (for instance, Beizer (1990) reports that 50% of the costs are typically spent on testing). Furthermore, extensive testing can only be accomplished by effective test automation (Staknis, 1990). The benefits of test automation are reductions in time, manual labor, and cost.
Various approaches to automatic test sequence generation for structure-oriented class testing have been proposed. They aim at generating a set of test sequences that achieves high structural coverage of the source code of the class under test. They usually build on the traditional test automation techniques for procedural software and extend them to the field of object-oriented software. The approaches are either static or dynamic. Static approaches do not execute the unit under test for test generation; rather, they compute suitable tests from the program logic using symbolic execution and constraint solving. Section 2.2.1 on the next page describes the static approaches to automatic test generation for class testing, including a short explanation of symbolic execution and constraint solving. Dynamic approaches execute the unit under test for test generation. They transform the task of test generation into a set of search problems where the search space is the set of all possible tests. A search strategy is then applied to find covering tests. The unit under test is executed with a usually large set of tests before a covering test will be encountered. Section 2.2.2 on page 20 describes the dynamic approaches in more detail. Section 2.2.3 on page 29 presents three commercial test generators. Due to the lack of information as to which technology they rely on, a categorization as static or dynamic appeared not to be definitively justifiable; therefore, this extra section is introduced. Section 2.2.4 on page 31 generalizes the limitations of the approaches and gives a summary.
2.2.1 Static Test Generation
The static approaches do not execute any test sequence for obtaining a covering one; rather, they try to compute it. In order to do so, symbolic execution – a form of abstract interpretation – together with constraint solving is applied. Since all static approaches rely on symbolic execution and constraint solving, these techniques will be described first, followed by the description of the individual static approaches.
Symbolic Execution and Constraint Solving
Symbolic execution is a static analysis technique. Its application to software testing was pioneered by King (1976). The main idea of symbolic execution of a given program is to exercise the program with abstract (symbolic) inputs rather than concrete ones. All computations of the program affecting the inputs are not resolved to concrete results, but are rather kept on an abstract level by using symbolic expressions. This implies that the program under consideration is not actually executed; its execution is rather "simulated" step by step. After each step, the program is in a new symbolic state. If a branching statement is encountered, each of the possible branches is visited according to the chosen strategy (depth-first, breadth-first, or others). Typically, two new symbolic states result from a branching statement. A symbolic state represents a concrete statement along with a concrete path to that statement. For each symbolic state, symbolic execution delivers a set of constraints (referred to as the constraint system) which a concrete input must satisfy in order for the path to the statement, represented by the symbolic state, to be traversed.
The symbolic execution of a program can be visualized using a symbolic execution tree. The nodes of the tree represent symbolic states, while the links between the nodes represent possible transitions. A symbolic state consists of the relevant symbolic inputs, a path condition (PC), and a program counter. The path condition is a Boolean expression applicable to the relevant symbolic inputs. It accumulates the constraints that must be satisfied in order for the symbolic state to be reached. The program counter is the reference to the statement to be executed next. The following example shall clarify the working of symbolic execution. It is taken from Khurshid, Păsăreanu and Visser (2003). Listing 2.2 shows the source code of a function that sorts the inputs x and y; it ensures that, after its execution, x is smaller than y (overflows should be neglected). Figure 2.2 on page 17 shows the corresponding symbolic execution tree. The root node of the tree is the initial symbolic state (denoted state 1). It shows that x and y are assigned the symbolic values X and Y, respectively. The path condition is initially true, meaning that this state is reachable without any constraints. Since the
Listing 2.2: Simple function sorting two integers

void sort(int x, int y)
{
    if (x > y)
    {
        x = x + y;
        y = x - y;
        x = x - y;
        if (x - y > 0)
            assert(false);
    }
}
first statement of function sort is a decision, two distinct subsequent symbolic states are achievable (states 2 and 3). Either the true branch of the decision is followed (state 2); then the predicate of the condition is incorporated into the path condition, as shown in the left child of the root node. Or the execution follows the false branch (state 3); then the inversion of the predicate is added to the path condition, as shown in the right child of the root node. In the former case, symbolic execution considers the subsequent assignment statements (states 4 to 6). While the path conditions remain unchanged during the assignments, the symbolic values for x and y are adapted accordingly. The final decision leads to a branch in the symbolic execution tree and the corresponding new symbolic states (states 7 and 8) with the accumulated path conditions. Note that during constraint solving, which might occur simultaneously with or after symbolic execution, it would turn out that symbolic state 7 is infeasible due to the contradictory path condition, which evaluates to false.
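The infeasibility of state 7 can also be observed concretely: the sketch below re-implements sort from Listing 2.2 and reports whether the assert branch would be reached. After the three assignments swap x and y, the condition x − y > 0 can never hold when x > y held initially (overflows neglected, as above).

```java
class SortDemo {
    // Returns true iff the branch of symbolic state 7 (the assert in
    // Listing 2.2) would be reached for the given concrete inputs.
    static boolean reachesAssertBranch(int x, int y) {
        if (x > y) {
            x = x + y;
            y = x - y;   // after these three assignments, x and y are swapped
            x = x - y;
            if (x - y > 0) {
                return true;   // symbolic state 7: its path condition is contradictory
            }
        }
        return false;          // symbolic states 3 and 8
    }
}
```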
Once the constraint systems are acquired for each relevant program element to cover, a constraint solver tries to obtain the concrete inputs for each of the paths in order to generate a test set with high code coverage.
Automated Testing of Classes
Buy, Orso and Pezze (2000) suggest an approach to generating test sequences based on symbolic execution and automated deduction. Their work is concerned with the data-flow-oriented coverage criterion all def-use pairs. This criterion demands that the test sequences involve the assignment of each program variable (the def), followed by a reference to the respective variable (the use) without an intermediate reassignment. The approach consists of three steps:

Step 1: Data flow analysis. This analysis aims at collecting all def-use pairs present in the code of the class under test. A def-use pair is a pair of statements that relate to each other in that one statement defines a particular variable (writes its value), while the other uses the same variable (reads its value); no redefinition of the variable is allowed to occur between the considered definition and the use. Since the
Figure 2.2: Example symbolic execution tree

[States of the tree, with their symbolic values and path conditions:
State 1 (root): x: X, y: Y, PC: true
State 2: x: X, y: Y, PC: X>Y
State 3: x: X, y: Y, PC: X<=Y
State 4: x: X+Y, y: Y, PC: X>Y
State 5: x: X+Y, y: X, PC: X>Y
State 6: x: Y, y: X, PC: X>Y
State 7: x: Y, y: X, PC: X>Y & Y-X>0
State 8: x: Y, y: X, PC: X>Y & Y-X<=0]
analysis is applied to the whole class, a def-use pair can relate to statements that belong to different methods.
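A def-use pair spanning two methods can be illustrated with a small hypothetical class: reset defines the attribute count and current uses it, so the pair (reset, current) is covered by the sequence reset(); current(), since no redefinition occurs in between.

```java
class Counter {
    private int count;                        // the shared attribute

    void reset()     { count = 0; }           // def of count
    void increment() { count = count + 1; }   // use and def of count
    int  current()   { return count; }        // use of count
}
```

The sequence reset(); increment(); current() would instead cover the pairs (reset, increment) and (increment, current), because increment redefines count.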
Step 2: Symbolic execution. This step obtains the possible paths through the methods of the class under test, including the predicates to be satisfied for a particular path to be taken during execution. For each path, symbolic execution analyzes the relations between the inputs and the outputs in an abstract (symbolic) fashion. These relations are interpreted as method preconditions and postconditions.
Step 3: Automated deduction. During this step, test sequences are incrementally built in order to execute the methods of the class under test so that a particular def-use pair is covered without violating the requirement that a definition-clear path is taken between the two code points. Automated deduction starts with the method m_u that contains the statement involving the use of a particular variable and puts this method as the initial element into the test sequence to be built (resulting in <m_u>). Then, all methods satisfying the preconditions of m_u are considered. If there are none, the def-use pair is deemed to be infeasible. If there are multiple candidate methods, the approach starts building a tree of method sequences. Tree building finishes once a constructor is inserted or a predefined size limit is reached. In the first case, a feasible covering test sequence has been found.
The authors consider primitive instance variables only; they do not define what a definition and a use of a class-type variable are. The example provided in their paper involves methods with empty formal parameter lists only. Furthermore, the approach addresses public methods only. The authors state that both symbolic execution and automated deduction involve complex computation, making the approach expensive and preventing it from scaling well.
Concolic Testing
Sen, Marinov and Agha (2005) propose a test generation technique that combines symbolic execution with concrete execution. They call this strategy concolic testing (concolic = concrete + symbolic). Their early works are on concolic testing of procedural software, while the later works also deal with object-oriented programs, in particular with Java classes (Sen and Agha, 2006). It is classified as a static approach in this thesis because it primarily involves symbolic execution and constraint solving. However, it also incorporates aspects of dynamic test generation.
The motivation behind concolic testing is that in practice, the path conditions of the symbolic states can grow very complex, hence not being solvable by contemporary constraint solvers. Therefore, the method under test is primarily executed using concrete input values. These inputs are generated randomly or are provided by the user. During concrete execution, the symbolic path conditions are collected for the traversed path. Then, by systematically modifying the (symbolic) path conditions (e.g. by negating part of the conjuncts) and solving the resulting constraints, new concrete input values are obtained. These new inputs are likely to take an alternative path through the program. By doing so repeatedly, eventually a high number of possible paths might be detected, for which the corresponding concrete input values are identified simultaneously. Also, if the symbolic path constraints become too complex during concrete execution, parts of them are replaced by the current concrete values.
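The concrete-then-symbolic loop can be sketched for the first predicate of sort (Listing 2.2). This is a toy model: path conditions are recorded as strings and the "solver" handles only this one predicate shape; real concolic engines delegate the negated constraints to a general constraint solver.

```java
import java.util.ArrayList;
import java.util.List;

// A toy concolic step for the first predicate of sort (X > Y).
class ConcolicSketch {
    // Concrete execution that records the path condition as strings.
    static List<String> run(int x, int y) {
        List<String> pc = new ArrayList<>();
        if (x > y) pc.add("X>Y"); else pc.add("!(X>Y)");
        return pc;
    }

    // Negate the last conjunct and produce new concrete inputs that
    // satisfy the flipped condition (the "solver" of this toy model).
    static int[] flipLast(int x, int y, List<String> pc) {
        String last = pc.get(pc.size() - 1);
        if (last.equals("!(X>Y)")) return new int[]{y + 1, y};  // force X>Y
        else                       return new int[]{y, y};      // force !(X>Y)
    }
}
```

Starting from the random input (0, 1), the recorded path condition is !(X>Y); negating it and solving yields (2, 1), whose concrete execution takes the alternative branch.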
For pointer variables, memory graphs are used that represent dynamic data structures (such as objects) including their associations. Path constraints referring to pointer variables are maintained separately from those referring to primitive variables. Logical input maps are used to keep memory addresses abstract (logical) and to allow for symbolic execution of pointer accesses.
A limitation of the concolic testing approach is that the constraint solver might still not be powerful enough (Sen and Agha, 2006). Therefore, a requirement for the class to be tested is that the number and lengths of the paths through a method are finite (which practically means that neither loops nor recursion may be involved). The description of the approach lacks an algorithm that transforms an obtained memory graph satisfying an obtained constraint system into a method call sequence. This means the publications do not describe how to construct the concrete objects that satisfy the symbolic constraints via the public interfaces of the involved classes. Rather, the approach seems to presume that all object attributes can be freely accessed, hence neglecting data encapsulation. The work does not discuss how the legality of the instances is ensured. It does not address testing non-public methods either.
Java PathFinder
Visser, Păsăreanu and Khurshid (2004) present a testing framework based on a Java model checker called Java PathFinder. They transform the task of creating a test that leads to the coverage of a particular code element into a model checking task. Model checking in this context is essentially equivalent to symbolic execution and constraint solving. Thereby, it is formulated as a model property that the code element in question is not reachable. Then the model checker tries to provide a counterexample by trying to reach the symbolic state representing the code element to cover. If the symbolic state is reached, the corresponding constraint system defines an adequate, covering test as an object graph (not as a method call sequence). The authors do not discuss how to obtain a method call sequence; rather, they consider single methods which they model check.
The authors introduce the notion of lazy initialization, which means that the constraint system does not necessarily refer to a complete instance of a class: constraints do not necessarily exist for all object attributes. Later in the process, new constraints may refer to unreferenced attributes, making the consideration of these attributes necessary, especially when the attribute at hand has a class type. For the symbolic initialization of newly accessed class-type attributes, the authors suggest a heuristic based on random choice: either the attribute is initialized to null, or it is initialized to a new instance of the class with uninitialized attributes, or a reference to an already created object is reused. This heuristic is intended to systematically treat pointer aliasing.
Additionally, the work deals with a facility for symbolically executing method preconditions in order to restrict object instantiations to legal ones. When solving the constraint system for a particular path, optionally provided method preconditions are executed symbolically in order to initialize the instances with reasonable attribute values. The work does not include an algorithm to translate the obtained object graph into a test sequence which creates the required instances satisfying all the constraints of the associated constraint system (Xie, Marinov, Schulte and Notkin, 2005). Data encapsulation is broken since all attributes are written and read freely, regardless of whether or not they are public. However, this is not critical, presuming that formal class invariants are also provided by the user. In experiments, the authors found that the approach does not scale well and is not good at achieving high structural coverage.
Symstra
Xie et al. (2005) propose a testing framework called Symstra. It is based on exhaustive method sequence exploration and symbolic execution. All conceivable method sequences derived from the class under test are explored up to a predefined length. In order to acquire concrete primitive arguments for the methods of a sequence that covers a particular code element, symbolic execution of that method sequence is carried out. Once a path to the symbolic state representing the code element in question is detected, a constraint solver is employed to find suitable concrete primitive argument values.
The approach can handle public methods that take primitive arguments. As the authors state, the approach cannot directly transform non-primitive arguments into symbolic variables of primitive type. The legality of the considered method call sequences is ensured using additionally provided formal specifications (method preconditions and postconditions). However, the exhaustive exploration of the space of all method sequences is an expensive process.
2.2.2 Dynamic Test Generation
In contrast to the static approaches, where the methods of the class under test are executed only symbolically and not actually, the dynamic approaches involve concrete execution of candidate test sequences in order to obtain a covering one. A solution is not systematically constructed, but sought using a search technique.
The idea of dynamic test input generation dates back to the work of Miller and Spooner (1976), which deals with the dynamic generation of floating point test data. Later, Korel (1990) used the alternating variable search technique to obtain test data for structure-oriented testing of procedural software in general, not only for floating point inputs. The main motivation for dynamic test generation is to overcome the limitations of symbolic execution and constraint solving (Korel, 1990).
The next section recapitulates the history of dynamic test generation.The development
of dynamic test generation techniques culminates in evolutionary structural testing,
a highly developed approach to dynamic test generation that applies evolutionary
algorithms as a search technique (cf.Section 2.3 on page 35).The section presents state
of the art evolutionary structural testing of procedural software,before the next sections
describe the dynamic approaches to automatic test generation for class testing.
Evolutionary Structural Testing
In 1990, Korel (1990) suggested the dynamic approach to automatic software test data
generation in order to cope with the limitations of the existing static approaches based
on symbolic execution and constraint solving. The main idea of Korel's approach is to
transform the task of creating a set of test inputs which achieve high path coverage into a
set of search problems. For each path to be covered, a concrete test input is searched for:
the input space of the function under test, defined by the data type ranges of its arguments
and possible other inputs, is heuristically explored by a trial-and-error strategy. Korel
starts with a randomly created input. The function under test is executed with the input
and the execution is monitored. For monitoring, the tested function is instrumented,
meaning that additional trace statements are inserted which allow the comprehension of
the details of the execution. A cost function (that is, an objective function) expresses to
what extent the execution path taken by the input deviates from the targeted program
path. Then, a new – and hopefully more suitable – input is created via the alternating
variable method. By iteratively applying this method, finally a covering test input is
supposed to be found.
Other researchers adopted Korel's approach to address other structure-oriented testing
techniques, such as branch testing. Furthermore, other search strategies, such as genetic
algorithms, were applied instead of the alternating variable method. For instance,
2.2 Automatic Test Generation 21
Sthamer (1996) and Jones, Sthamer and Eyres (1996) apply genetic algorithms to find
test inputs that cover a given program path. A genetic algorithm is a meta-heuristic
optimization technique; it is described in detail in Section 2.3 on page 35. It performs
parallel searches that are guided by an objective function. This function assigns a
quantitative rating to each candidate solution which expresses the fitness of the solution.
Tracey, Clark, Mander and McDermid (1998b) modify the approach of Jones et al.
(1996) by introducing additional distance functions for conditions which involve logical
operators, such as AND, OR, and NOT, in order to yield better objective functions;
furthermore, they apply simulated annealing (Kirkpatrick, Gellat and Vecchi, 1983),
another heuristic search technique. Wegener, Baresel and Sthamer (2001) extend the
dynamic approach further in order to attack the limitation of the previous approaches
that a particular program path must be selected which leads to the code element to
cover. They suggest an objective function which is composed of two distance metrics.
This objective function guides the search for covering test inputs irrespective of the
path to be taken to the targeted code element. Genetic algorithms are used to carry
out the searches. Their approach, which can be considered the state of the art of
evolutionary structural testing for procedural software, will be described in more detail
in the following. Worthy of mention as other pioneering works in the area of evolutionary
structural testing are those of Xanthakis, Skourlas and LeGall (1992), Pargas, Harrold
and Peck (1999), and Michael, McGraw and Schatz (2001).
The application of an evolutionary algorithm as a search technique requires the
definition of the search space and what a point in this search space is. With evolutionary
structural testing, such a point is a test input used to execute the function under test.
The representation defines how a concrete test input is encoded by a data structure
that an evolutionary algorithm is able to operate on. In addition, an objective function
is required to apply an evolutionary algorithm. This function assesses a generated
candidate solution according to its ability to cover a given code element. Section 2.3
on page 35 provides details on the terminology and concepts of representations and
objective functions.
The phenotype search space Φ is the space of all value vectors that comply with the
interface of the function under test.
Listing 2.3: Simple C function

 1  int func( int a, int b, double c )
 2  {
 3      int local;
 4      if ( a == 0 )
 5      {
 6          local = read_integer();
 7          if ( local == b )
 8              return round( c );
 9          else
10              return -round( c );
11      }
12      else
13          return round( a*c );
14  }
For instance, for the function shown in Listing 2.3, Φ = D_int × D_int × D_double × D_int,
where D_int is the value range of type integer and D_double is the value range of type
double. These value ranges correspond to the four input variables of the function (note
the input variable in line 6). In order to limit the search to semantically reasonable
inputs only, the user can provide more restrictive value ranges. With evolutionary
structural testing, phenotype search space and genotype search space are conceptually
identical. This is possible since suitable variation operators exist for each primitive data
type of a procedural programming language such as C. Structured data, for example
structs and unions, are decomposed into their building blocks. The task of the decoder
(cf. Section 2.3.1 on page 36) is then to construct the respective data structures from a
sequence of primitive values.
The overall task to obtain a set of test data which maximizes the given coverage
criterion is divided into subtasks. For instance, with branch coverage, each branch
becomes a test goal for which an individual evolutionary search is carried out. Hence,
each test goal requires an individual objective function to be defined that is particularly
tailored to the test goal. However, the construction of the objective functions for
the test goals of the function under test can be automated. Different types of code
coverage criteria require different strategies when considering how to construct a suitable
objective function. Baresel, Sthamer and Schmidt (2002) describe the strategies for
various control-flow-oriented criteria, such as branch coverage, and data-flow-oriented
criteria. In the following, the strategy for branch coverage is described since it is
similar to the criterion method/decision coverage for which this work will later exemplify
evolutionary class testing in Chapter 3 on page 53.
An objective function, as suggested by Wegener et al. (2001), consists of two distance
metrics which express how close the execution of the function under test with a concrete
input is to reaching the targeted test goal. These two distance metrics are approximation
level and branch distance. The former will be referred to as control dependence distance
in this work for reasons of consistency (the "approximation" is in fact a distance). Before
defining these two metrics, the concepts of critical branches and critical nodes must be
introduced.
Definition 2.2.1. A branch c of the control flow graph of the function under test is
called critical branch with respect to a particular branch t if no path exists between
c and t. Node p(c) is called a critical node, where function p assigns each branch its
source node (from which the branch starts).
In other words, this means that, once a critical branch is taken during execution, it is
no longer possible to reach the target branch.
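Definition 2.2.1 can be turned into a small reachability computation: a branch (u, v) is critical with respect to target t exactly when t is unreachable from v. The following Python sketch is an illustration of the definition, not code from the thesis; the graph encoding and node names are hypothetical:

```python
from collections import deque

def can_reach(cfg, target):
    """Return the set of nodes from which `target` is reachable.

    `cfg` is a hypothetical encoding: node -> list of successor nodes.
    Works backwards from the target over the reversed graph (BFS).
    """
    rev = {n: [] for n in cfg}
    for u, succs in cfg.items():
        for v in succs:
            rev[v].append(u)
    seen = {target}
    queue = deque([target])
    while queue:
        n = queue.popleft()
        for p in rev[n]:
            if p not in seen:
                seen.add(p)
                queue.append(p)
    return seen

def critical_branches(cfg, target):
    """Branches (u, v) after which the target cannot be reached;
    the source node u of such a branch is a critical node."""
    ok = can_reach(cfg, target)
    return {(u, v) for u, succs in cfg.items() for v in succs
            if u in ok and v not in ok}

# A CFG shaped like Listing 2.3: d1 is the (a == 0) decision, d2 the
# (local == b) decision, and 't' stands for the targeted true branch.
cfg = {'d1': ['read', 'else_br'], 'read': ['d2'],
       'd2': ['t', 'neg_ret'], 'else_br': [], 't': [], 'neg_ret': []}
print(sorted(critical_branches(cfg, 't')))
# -> [('d1', 'else_br'), ('d2', 'neg_ret')]
```

In this toy graph, taking the else branch of either decision makes the target unreachable, which is exactly the intuition stated above.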
Definition 2.2.2. Let t be the targeted branch and c the first critical branch that
execution diverged down. Then n_p = p(c) is called the problem node. Let P_{n_p,t} be the
set of all paths from problem node n_p to target branch t. Additionally, let χ(π) be the
number of critical nodes of path π. Then, the control dependence distance d_C is the
minimum number of critical nodes that lie on a path between the problem node and the
target:

    d_C = min({ χ(π) | π ∈ P_{n_p,t} })    (2.2)
Figure 2.3 shows on the left a control flow graph of the function from Listing 2.3,
including the path provoked by an example input (depicted by the nodes in gray and
the dotted branches), whereas on the right, the control flow graph of the same function,
but with a different path provoked by another input, is shown.

    [Figure 2.3: Two execution flows of the function from Listing 2.3]

Neither of the inputs leads to the coverage of target branch t. The value of d_C for the
left execution flow is 1, which is the minimum number of critical nodes of all paths from
the problem node (the double-line node) to branch t. In the case of the right execution
flow, d_C = 0 as there is no critical node on the way from the problem node to the target.
The other metric, branch distance, is relevant if two different inputs yield the same
execution path. In this case, the values of d_C are the same. However, one of the inputs
might be closer to reaching the target in terms of the predicate assigned to the problem
node. For instance, assume that two test inputs, input A and input B, lead to the
execution path shown on the left in Figure 2.3. Additionally, assume input A leads to
the concrete predicate ( 1 == 0 ) at the problem node, and input B leads to the concrete
predicate ( 100 == 0 ). Intuitively, input A is "closer" to evaluating the first condition
so that the true branch will be traversed, which is favorable when targeting branch
t. The metric branch distance formalizes the distance of the execution in terms of the
predicate assigned to the problem node.
Definition 2.2.3. Let P be the set of all predicates and B = {true, false}. Branch
distance d_B(p, b), where p ∈ P is the predicate in question and b ∈ B is the desired
outcome (desired with respect to the target), is defined as follows:

    d_B(p, b) =  0     if E(p) = b
                 d_p   otherwise          (2.3)

where E(p) with E: P → B is the evaluated outcome of decision p, and d_p is the
relation-specific distance function for p.
For each relational operator, such as < and >, a specific distance function d_p is defined
which expresses how distant the evaluation of the predicate p was from being evaluated
to the alternative outcome. For instance, in the case of the predicate ( a == 0 ), the
distance function is d_{a==0} = |a − 0|, mapped into the interval [0,1). Hence, the distance
for an input which leads to a small value of a (and is thus closer to satisfying the
condition than a large value of a) is also small. The mapping into the interval [0,1)
ensures that the greatest possible distance is smaller than the smallest possible value of
the control dependence distance.
Table 2.1 shows the generic distance functions that are typically applied. The value
range of all distance functions is [0,1). The table shows in the first column the names of
the distance functions for the relational and logical operators. In the second column, it
shows the definition of the respective distance function if the desired outcome of the
predicate is true. Analogously, the third column shows the definition of the respective
distance function if the desired outcome of the predicate is false. Which outcome
is desired depends on the location of the target branch. ε ∈ (0,1) is a configurable
parameter, and κ refers to the smallest possible value of the operands' data types. The
definitions of the last row mean that the distance function for the opposite outcome is
to be applied.

                    true desired                      false desired
    d_{x==y}        1 − (1+ε)^(−|x−y|)                d_{x≠y}
    d_{x<y}         1 − (1+ε)^(y−x) · (1−κ)           d_{x≥y}
    d_{x≤y}         1 − (1+ε)^(y−x)                   d_{x>y}
    d_{x>y}         d_{y<x}                           d_{x≤y}
    d_{x≥y}         d_{y≤x}                           d_{x<y}
    d_{x≠y}         1                                 d_{x==y}
    d_{e1 ∧ e2}     max(d_{e1}, d_{e2})               (d_{e1} + d_{e2}) / 2
    d_{e1 ∨ e2}     (d_{e1} + d_{e2}) / 2             max(d_{e1}, d_{e2})
    d_{¬e}          (d_e, false)                      (d_e, true)

    Table 2.1: Distance functions
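The relational and logical rows of Table 2.1 translate directly into code. Below is a small Python sketch of a few of these distance functions; this is an illustration under the 1 − (1+ε)^(−d) normalization from the table, not the thesis's implementation, and the function names are my own:

```python
EPS = 0.005  # the configurable parameter epsilon from Table 2.1

def norm(d):
    """Map a raw distance d >= 0 into the interval [0, 1)."""
    return 1.0 - (1.0 + EPS) ** (-d)

def dist_eq(x, y):
    """Branch distance for (x == y) with 'true' desired:
    0 when satisfied, otherwise 1 - (1+eps)^(-|x-y|)."""
    return 0.0 if x == y else norm(abs(x - y))

def dist_le(x, y):
    """Branch distance for (x <= y) with 'true' desired:
    0 when satisfied, otherwise 1 - (1+eps)^(y-x)."""
    return 0.0 if x <= y else norm(x - y)

def dist_and(d1, d2):
    """d for (e1 AND e2), 'true' desired: the worse conjunct dominates."""
    return max(d1, d2)

def dist_or(d1, d2):
    """d for (e1 OR e2), 'true' desired: average of the disjuncts."""
    return (d1 + d2) / 2.0

print(dist_eq(5, 5))            # -> 0.0
print(round(dist_eq(1, 0), 4))  # -> 0.005
```

Note that, following Definition 2.2.3, each relational function returns 0 as soon as the predicate already has the desired outcome; the table cells only cover the "otherwise" case.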
In conclusion, the objective function ω_t(i) for a test goal t and the individual (= input)
i is defined as follows:

    ω_t(i) = d_C + d_B    (2.4)

where d_C and d_B are the metrics control dependence distance and branch distance with
respect to the problem node caused by the execution of input i. In the case of the
example function above, the metric values for test input A, leading to the concrete
predicate ( 1 == 0 ) and consequently to a miss of the target branch t, are d_C = 1 and
d_B ≈ 0.005 (with ε = 0.005). Then, the objective value is ω_t(A) = 1.005.
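The worked example can be reproduced numerically with a short Python sketch that evaluates equation (2.4) for input A (illustrative only; the function names are my own):

```python
EPS = 0.005

def branch_distance_eq(x, y):
    # d_B for the predicate (x == y), 'true' desired (Table 2.1)
    return 0.0 if x == y else 1.0 - (1.0 + EPS) ** (-abs(x - y))

def objective(d_c, d_b):
    # Equation (2.4): omega_t(i) = d_C + d_B
    return d_c + d_b

# Input A diverges at the problem node with the predicate (1 == 0),
# and the minimum number of critical nodes on a path to t is 1.
omega_A = objective(1, branch_distance_eq(1, 0))
print(round(omega_A, 3))  # -> 1.005
```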
The following sections describe the search-based approaches in the field of automatic
test generation for class testing. While the first approach relies on a binary search
strategy, the latter two apply genetic algorithms.
BINTEST
Beydeda and Gruhn (2003) propose a test generation approach based on a binary search
strategy which they call BINTEST. The authors modified the test data generation
approach of Korel (1990) by replacing the alternating variable search with a binary
search. They consider the attributes of the class under test to be additional inputs to
the method under test besides its regular arguments. Hence, they do not generate test
sequences, but an input which includes the arguments for the method under test along
with the attribute values of the instance under test.
Following the strategy of Korel, BINTEST tries to iteratively satisfy the predicates
that occur along a particular path in the control flow graph of the method under test
by incrementally modifying a concrete user-provided candidate input. In addition to
the concrete input, BINTEST makes use of user-provided domain intervals which are
iteratively bisected on a per-variable basis if the input does not satisfy some path
predicate. The middle element of one of the bisected intervals becomes the variable
value at the considered position of the input. The assumed monotonicity of the expressions
of the condition to be evaluated favorably is exploited to select the interval to continue
with after bisecting.
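The per-variable interval bisection can be pictured as an ordinary binary search that relies on the assumed monotonicity of the path predicate. The following Python sketch illustrates the idea only; it is not BINTEST's implementation, and the function and predicate are hypothetical:

```python
def bisect_to_satisfy(pred, lo, hi, max_iter=64):
    """Search the integer interval [lo, hi] for a value satisfying
    a predicate assumed monotone (False at lo, True at hi)."""
    if pred(lo):
        return lo
    if not pred(hi):
        return None  # assumption violated: no direction for the search
    for _ in range(max_iter):
        mid = (lo + hi) // 2
        if pred(mid):
            hi = mid      # the middle element satisfies: keep lower half
        else:
            lo = mid + 1  # continue with the upper half
        if lo == hi:
            break
    return hi if pred(hi) else None

# e.g. find an input making the path predicate (x*x >= 1000) true
print(bisect_to_satisfy(lambda x: x * x >= 1000, 0, 100))  # -> 32
```

If the predicate is not actually monotone over the interval, the search may return nothing or an arbitrary satisfying point, which mirrors the limitation discussed below.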
For class-type arguments, the state of the input objects is modified using a particular
midValue method that each participating class must implement. This method creates
an object that is the middle element of a given interval.
BINTEST requires that a total ordering exists for the domain of each input variable.
Additionally, the path predicates must exhibit monotone behavior in order for the
search to be effective and efficient. However, especially for objects, usually no (intuitive)
total ordering exists. For instance, when thinking of a class Person that models the
properties of a human, specifying an adequate ordering relation is hard or even impossible.
Consequently, it is hard or impossible to implement the midValue method for such a
class. Another consequence of this is that the monotonicity of the predicates cannot be
exploited and hence no direction is provided to the binary search. Even if a total ordering
can be specified for a particular class, the midValue method artificially introduces data
dependence among the attributes of this class, possibly preventing relevant object states
from being explored during the binary search.
Since Beydeda and Gruhn consider the attributes of an object as additional inputs,
they implicitly break the encapsulation of the object. The legality of a test input must
be ensured by the user, who is responsible for providing correct input domain intervals
and proper implementations of the midValue methods for all relevant classes.
As the authors state, BINTEST suffers from the problem of combinatorial explosion
when the input domains have to be divided into multiple intervals. Since each combination
of intervals will be considered, a large number of binary searches are carried out in the
Source: https://www.techylib.com/en/view/parentpita/automatic_generation_of_object-oriented_unit_tests_using_genetic
public class JAXBResult extends SAXResult
Resultimplementation that unmarshals a JAXB object.
This utility class is useful to combine JAXB with other Java/XML technologies.
The following example shows how to use JAXB to unmarshal a document resulting from an XSLT transformation.
JAXBResult result = new JAXBResult(
    JAXBContext.newInstance("org.acme.foo") );

// set up XSLT transformation
TransformerFactory tf = TransformerFactory.newInstance();
Transformer t = tf.newTransformer(new StreamSource("test.xsl"));

// run transformation
t.transform(new StreamSource("document.xml"), result);

// obtain the unmarshalled content tree
Object o = result.getResult();
The fact that JAXBResult derives from SAXResult is an implementation detail. Thus in general applications are strongly discouraged from accessing methods defined on SAXResult.
In particular it shall never attempt to call the setHandler, setLexicalHandler, and setSystemId methods.
Fields inherited from class javax.xml.transform.sax.SAXResult:
    FEATURE
Fields inherited from interface javax.xml.transform.Result:
    PI_DISABLE_OUTPUT_ESCAPING, PI_ENABLE_OUTPUT_ESCAPING
Methods inherited from class javax.xml.transform.sax.SAXResult:
    getHandler, getLexicalHandler, getSystemId, setHandler, setLexicalHandler, setSystemId
Methods inherited from class java.lang.Object:
    clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public JAXBResult(JAXBContext context) throws JAXBException
context- The JAXBContext that will be used to create the necessary Unmarshaller. This parameter must not be null.
JAXBException- if an error is encountered while creating the JAXBResult or if the context parameter is null.
public JAXBResult(Unmarshaller _unmarshaller) throws JAXBException.
_unmarshaller- the unmarshaller. This parameter must not be null.
JAXBException- if an error is encountered while creating the JAXBResult or the Unmarshaller parameter is null.
public Object getResult() throws JAXBException
IllegalStateException- if this method is called before an object is unmarshalled.
JAXBException- if there is any unmarshalling error. Note that the implementation is allowed to throw SAXException during the parsing when it finds an error.
Copyright © 1996-2015, Oracle and/or its affiliates. All Rights Reserved. Use is subject to license terms.
Source: https://docs.oracle.com/javaee/7/api/javax/xml/bind/util/JAXBResult.html
QML animation not working
Hello, I am trying to build a simple ListView where, if a delegate is clicked, its index is changed (-1). I am using a QML ListModel and its move() function; it works fine, but I want to add an animation when the position is changed. I am using "Behavior on", but I don't understand why it is not working.
@import QtQuick 1.1
import com.nokia.symbian 1.1
Page {
id: page
ListModel{
id: listModel
        ListElement{ name: "Red" }
        ListElement{ name: "Green" }
        ListElement{ name: "Blue" }
    }

    ListView{
        id: listView
        anchors.fill: parent
        interactive: false
        model: listModel
        spacing: 10

        delegate: Rectangle{
            height: 50
            width: 360
            color: name

            MouseArea{
                anchors.fill: parent
                onClicked: listModel.move(index, index-1, 1)
            }

            Behavior on y{
                NumberAnimation {}
            }
        }
    }
}
@
To apply animations related to changes to the model, you should use the transition properties of the ListView.
See "ListView.move":
[quote author="Kysymys" date="1387069766"]To apply animations related to changes to the model, you should use the transition properties of the ListView.
See "ListView.move":[/quote]
That function is available only in Qt 5, I am using Qt 4.7.
Source: https://forum.qt.io/topic/35480/qml-animation-not-working
/*
 * Copyright (c) 1999, ...
 */

package javax.naming.ldap;

/**
 * This interface represents an LDAP extended operation response as defined in
 * <A HREF="">RFC 2251</A>.
 * <pre>
 * ExtendedResponse ::= [APPLICATION 24] SEQUENCE {
 *     COMPONENTS OF LDAPResult,
 *     responseName   [10] LDAPOID OPTIONAL,
 *     response       [11] OCTET STRING OPTIONAL }
 * </pre>
 * It comprises an optional object identifier and an optional ASN.1 BER
 * encoded value.
 *<p>
 * The methods in this class can be used by the application to get low
 * level information about the extended operation response. However, typically,
 * the application will be using methods specific to the class that
 * implements this interface. Such a class should have decoded the BER buffer
 * in the response and should provide methods that allow the user to
 * access that data in the response in a type-safe and friendly manner.
 *<p>
 * For example, suppose the LDAP server supported a 'get time' extended operation.
 * It would supply GetTimeRequest and GetTimeResponse classes.
 * The GetTimeResponse class might look like:
 *<blockquote><pre>
 * public class GetTimeResponse implements ExtendedResponse {
 *     public java.util.Date getDate() {...};
 *     public long getTime() {...};
 *     ....
 * }
 *</pre></blockquote>
 * A program would then use these classes as follows:
 *<blockquote><pre>
 * GetTimeResponse resp =
 *     (GetTimeResponse) ectx.extendedOperation(new GetTimeRequest());
 * java.util.Date now = resp.getDate();
 *</pre></blockquote>
 *
 * @author Rosanna Lee
 * @author Scott Seligman
 * @author Vincent Ryan
 *
 * @see ExtendedRequest
 * @since 1.3
 */
public interface ExtendedResponse extends java.io.Serializable {

    /**
     * Retrieves the object identifier of the response.
     * The LDAP protocol specifies that the response object identifier is optional.
     * If the server does not send it, the response will contain no ID (i.e. null).
     *
     * @return A possibly null object identifier string representing the LDAP
     *         <tt>ExtendedResponse.responseName</tt> component.
     */
    public String getID();

    /**
     * Retrieves the ASN.1 BER encoded value of the LDAP extended operation
     * response. Null is returned if the value is absent from the response
     * sent by the LDAP server.
     * The result is the raw BER bytes including the tag and length of
     * the response value. It does not include the response OID.
     *
     * @return A possibly null byte array representing the ASN.1 BER encoded
     *         contents of the LDAP <tt>ExtendedResponse.response</tt>
     *         component.
     */
    public byte[] getEncodedValue();

    //static final long serialVersionUID = -3320509678029180273L;
}
Source: http://checkstyle.sourceforge.net/reports/javadoc/openjdk8/xref/openjdk/jdk/src/share/classes/javax/naming/ldap/ExtendedResponse.html
An application must keep some settings persistent between runs. The NetBeans Platform automatically stores the list of opened TopComponents if that is allowed. You can store user preferences, the URL of a database, or the URL of an update center. It provides an API to extend the Options window: you can add a primary panel (a new main category in the Options panel) or a secondary panel, which is a tab subcategory under a primary one.

NetBeans allows you to integrate your panels into the Options window as primary or secondary panels. A panel contribution consists of a GUI panel and a controller. The controller implements the OptionsPanelController interface so that the Options window can integrate it. It creates the GUI panel lazily and delegates loading, validating, and storing requests to the panel. The panel's responsibility is the GUI: informing the controller about changes by listening to its content, validating via the valid() method, loading via the load() method, and storing settings via the store() method.

For loading and storing settings, the utility class NbPreferences is suitable. Settings are stored in the <user-dir>/config/Preferences/module/base/package/pref.properties file for a module module.base.package. In our example on Windows this is C:\Documents and Settings\milos\Application Data\.options\dev\config\Preferences\nbpcook\options\pref.properties, where options (of .options) is the application branding name (options.exe) and nbpcook.options is the module's base package.
Figure 2.11 Simple Options Panel
Figure 2.12 Complex Options Panel
We suppose the settings of your application are encapsulated in some class, e.g. BookPreferences, like this:
package com.packtpub.nbpcook.options.pref.api;
import java.beans.PropertyChangeListener;
/** Settings and Options of Books application. */
public interface BookPreferences {
public static final String PROP_DB_CONNECTION_URL =
"db_connection_url";
….. other property names
public String getDbConnectionUrl();
public void setDbConnectionUrl(String dburl);
… methods for other properties
public void addPropertyChangeListener(PropertyChangeListener l);
public void removePropertyChangeListener(PropertyChangeListener l);
/** Tests if are preferences attributes set
* if no you can show Options dialog to set preferences
*/
public boolean ensureSet();
}
Provide its implementation as a service provider. Use NbPreferences to handle loading and storing the data. Note that the service provider is registered by an annotation.
@ServiceProvider(service=BookPreferences.class)
public class BookPreferencesImpl implements BookPreferences {
public String getDbConnectionUrl() {
Preferences bookpref =
NbPreferences.forModule(BookPreferences.class);
return NbPreferences.forModule(
BookPreferences.class).get(
BookPreferences.PROP_DB_CONNECTION_URL
, "");
}
public void setDbConnectionUrl(String dbConnectionUrl) {
String oldDbConnectionUrl = getDbConnectionUrl();
NbPreferences.forModule(BookPreferences.class).put(
BookPreferences.PROP_DB_CONNECTION_URL
, dbConnectionUrl);
propertyChangeSupport.firePropertyChange(
PROP_DB_CONNECTION_URL,
oldDbConnectionUrl,
dbConnectionUrl);
}
… others settings are similar
… add/remove-PropertyChangeListener implementation
public boolean ensureSet() {
boolean setted = true;
if (getDbConnectionUrl().isEmpty())
setted = false;
… other tests
if ( ! setted ) {
// you can show Book preferences Options window here
}
return setted;
}
}
Usage of BookPreferences is very simple:
// Usage of preferences. Note the simple access
BookPreferences pref = Lookup.getDefault().lookup(BookPreferences.class);
if ( ! pref.ensureSet() )
    ; // handle unset preferences here: show an error or the Options panel
      // (if that is not already done inside ensureSet())
MyDbProvider.setUrlConnection( pref.getDbConnectionUrl() );
To create a primary panel, choose New File, category Module Development, and file type Options Panel. Select the Create Primary Panel radio button, type a Category Label (e.g. BookOptions) and Keywords, and select a 32x32 pixel icon. If you want to allow secondary panels, check Allow Secondary Panels. Press the Next button.

Fill in the class name prefix: you can keep the name suggested from the options category, or update it. Press Finish. The wizard creates the BookOptionsPanel form (the view) and BookOptionsPanelController (the controller). Note in the source how the controller is registered via an annotation for use by the NetBeans Platform and how the basic implementation was generated by the IDE.

Now open the panel source. It contains hints in comments on how to implement it.

Create the form GUI by placing components on the panel.
After components are initialized register listeners to listen their changes:
// listen to changes in form fields and call controller.changed()
DocumentListener doclstr = new DocumentListener() {
public void insertUpdate(DocumentEvent e) {
markChanged();
}
...
};
dbConnectionUrlField.getDocument().
addDocumentListener( doclstr );
… listeners to other components
Implement the markChanged() method to inform the controller about changes:
private void markChanged() {
if ( ! BookPreferencesPanel.this.controller.isChanged())
showMessage("BookPreferencesPanel.changedLabel.text_changed");
BookPreferencesPanel.this.controller.changed();
}
Implement valid() method to check if data is correct:
boolean valid() {
changedLabel.setText( "" );
// can check validity of the db connection URL
// e.g by some service
// by used persistence implementation
// ...
String s = dbConnectionUrl.getText().trim();
if (s.length() == 0) {
showMessage("DB_connection_URL_is_empty");
return false;
}
…
return true;
}
Load preferences in the load() method and store them in the store() method:
void load() {
BookPreferences pref =
Lookup.getDefault().lookup(BookPreferences.class);
dbConnectionUrl.setText( pref.getDbConnectionUrl() );
...
void store() {
BookPreferences pref =
Lookup.getDefault().lookup(BookPreferences.class);
pref.setDbConnectionUrl( dbConnectionUrl.getText() );
Now we update the controller. In most cases you do not need to edit it; the standard implementation generated from the template is sufficient.

If you want to add any of your objects into the options master lookup, which is composed of the lookups of all controllers of the Options panel, override the getLookup() method. This lookup is passed to the controller's getComponent(lkp) method as a parameter.

The controller creates the GUI panel in the getComponent() method. Before the panel is shown, the controller's update() method is called. It only calls the panel.load() method to load data into the form and sets the changed flag to false. The update() method can be called more than once, so it should not contain time-consuming tasks that would prolong initialization.

The applyChanges() method is called after the OK button is pressed. It calls the panel.store() method, which stores the settings from the form and clears the changed flag.

The cancel() method is called after the Cancel button is pressed. You can perform a rollback here.

If some data in the panel has changed, the isValid() method is called. If it returns true, the OK button is enabled. Its work is delegated to the panel.valid() method.

The changed() method is called by the panel to propagate the change event to the controller and subsequently to its registered listeners (the Options API).

If you want to set specific help for your options panel, return a HelpCtx from the getHelpCtx() method.

The Options API looks up all OptionsPanelController implementations and creates its tabs according to all primary panels. Before showing your panel, the controller.getComponent() method is called to create the GUI panel, then the update() method is called; it loads data from the properties into the form via the panel.load() method. When the user changes some data, the change event is propagated to the Options API (which listens to changes of the controller). The Options API calls the isValid() method and enables the OK button if it returns true. Then, when the user presses the OK button, the applyChanges() method is called and the panel.store() method stores the settings.

Our options panel contribution is registered by an annotation. Look at the registration source the NetBeans Platform generated for you.
Create a ZIP distribution file of the application and decompress it.
Explore appname/clustername/modules/module-base-package.jar (decompress it as well). For our example named options, this is the folder options/options/modules/ and the com-packtpub-nbpcook-options-pref.jar JAR file (where the options panel is defined). Open the generated-layer.xml file in the META-INF folder. Your options panel is declared there (com-packtpub-nbpcook- was truncated to nbpcook-):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE filesystem PUBLIC
"-//NetBeans//DTD Filesystem 1.2//EN"
"">
<filesystem>
<folder name="OptionsDialog">
<file name="nbpcook-options-pref-BookOptionsPanelController.instance">
<!--nbpcook.options.pref.BookOptionsPanelController-->
<attr
methodvalue="org.netbeans.spi.options.OptionsCategory.createCategory" name="instanceCreate"/>
<attr name="controller" newvalue="nbpcook.options.pref.BookOptionsPanelController"/>
<attr
bundlevalue="nbpcook.options.pref.Bundle#OptionsCategory_Name_Book" name="categoryName"/>
<attr name="iconBase" stringvalue="nbpcook/options/pref/book32.png"/>
<attr
bundlevalue="nbpcook.options.pref.Bundle#OptionsCategory_Keywords_Book" name="keywords"/>
<attr name="keywordsCategory" stringvalue="Book"/>
</file>
</folder>
</filesystem>
This file is merged into the module layer.xml content when the application starts.
In the example Options application, in module secondary, we create a primary options category Application with two secondary panels. The file package-info.java in the base package contains the Application options category registration:
@org.netbeans.spi.options.OptionsPanelController.ContainerRegistration(
id = "Application", categoryName = "#OptionsCategory_Name_Application",
iconBase = "nbpcook/options/secondary/myoptions.png",
keywords = "#OptionsCategory_Keywords_Application",
keywordsCategory = "Application")
package nbpcook.options.secondary;
This is how the NetBeans Platform (6.9.1) registers a primary category in the Options pane.
Text and sources were created under NB 6.8, 6.9, 7.0, 7.1, and 7.2.
Source: http://wiki.netbeans.org/BookNBPlatformCookbookCH0206
Constructing XmlTextReader causes "Root element is missing" in BizTalk WCF Send Pipeline
I have a send pipeline, and I'm trying to parse the data using XmlTextReader in order to pick out an element value for use in my context.
public IBaseMessage Execute(BizTalkComp.IPipelineContext pContext, IBaseMessage pInMsg)
{
    IBaseMessagePart bodyPart = pInMsg.BodyPart;
    IBaseMessageContext context = pInMsg.Context;
    //context.Write("OutboundTransportLocation", "",
    //    context.Read("OutboundTransportLocation", ""));
    int bufferSize = 0x280;
    int thresholdSize = 0x100000;
    System.Diagnostics.EventLog.WriteEntry("LOGGER", "Pipeline1");

    Stream inboundStream = bodyPart.GetOriginalDataStream();
    VirtualStream virtualStream = new VirtualStream(bufferSize, thresholdSize);
    ReadOnlySeekableStream readOnlySeekableStream = new ReadOnlySeekableStream(inboundStream, virtualStream, bufferSize);
    System.Diagnostics.EventLog.WriteEntry("LOGGER", "Pipeline2");

    XmlTextReader xmlTextReader = new XmlTextReader(readOnlySeekableStream); //comment-this-line-out and it works
    // string dbName = ParseXMLForDatabaseName(xmlTextReader);
    xmlTextReader.Close();
    virtualStream.Close();
    virtualStream.Dispose();
    string dbName = "US......"; // temporary value to be able to comment out the XmlTextReader
    System.Diagnostics.EventLog.WriteEntry("LOGGER", "Pipeline3b dbName=" + dbName);

    string regionSpecificURL = "";
    string firstTwoCharsOfDBame = dbName.Substring(0, 2);
    switch (firstTwoCharsOfDBame)
    {
        case ("US"):
            regionSpecificURL = "";
            break;
        case ("EM"):
            regionSpecificURL = "";
            break;
        case ("AP"):
            regionSpecificURL = "";
            break;
        default:
            regionSpecificURL = "";
            break;
    }
    System.Diagnostics.EventLog.WriteEntry("LOGGER", "Pipeline4 regionSpecificURL=" + regionSpecificURL);

    // Three parms: 1) context property, 2) context string namespace, 3) value
    context.Promote("OutboundTransportLocation", "", regionSpecificURL);
    context.Promote("IsDynamicSend", "", true);

    System.Diagnostics.EventLog.WriteEntry("LOGGER", "Pipeline5");
    return pInMsg;
}
In this issue, James Corbould gave me code that works to set the URL dynamically in a static send port.
When I hardcode the URL, it works. But now, when I need to parse the data, I get the following error when I use the XmlTextReader:
In the Application EventLog, I see all my "System.Diagnostics.EventLog.WriteEntry" calls - through "Pipeline5". So I know that my pipeline code runs up to the return statement.
So why is it failing? What does the XmlReader do that messes things up?
Thanks,
Neal Walters
- Edited by Neal Walters Tuesday, December 08, 2015 4:43 PM
Question
Answers
All replies
Sadly, still not working.
Read this page:
Changed my code to remove all .close() and .dispose() related to stream, and added this code at the bottom:
System.Diagnostics.EventLog.WriteEntry("LOGGER", "Pipeline5");
readOnlySeekableStream.Seek(0, SeekOrigin.Begin);
System.Diagnostics.EventLog.WriteEntry("LOGGER", "Pipeline6-ReSeek to 0");
return pInMsg;
The write to the EventLog proves to me that I'm running and have properly deployed the new version of my pipeline component, restarted hosts, etc. So I see "Pipeline6-Reseek" in the EventLog on my last run, and I'm still getting the error. It makes sense that the above would fix the issue. What else should I try?
Thanks,
Neal
- Edited by Neal Walters Tuesday, December 08, 2015 6:01 PM
Hi Neal - have a read of this blog article of mine - might provide some further information for you:
(Apologies - it's quite a long one!).
I haven't had a chance to read your blog yet but will do so...
In the implementation described in my post, I create my own streaming class and override the read method... The read method is called repeatedly by the BizTalk endpoint manager (EPM - intermediary between the ports and the MessageBox) until there are no more bytes to be read.
I think your issue is that you need to rewind your stream back to the beginning again in your execute method, so it's ready to be read by the EPM (which you have effectively done):
pInMsg.BodyPart.Data.Position = 0;
Cheers,
James.
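The failure mode James describes is language-agnostic: once one component reads a stream to its end, the next reader starts at EOF and sees nothing, hence "Root element is missing". Below is the same consume-then-rewind idea as a minimal sketch; note it is written in C++ purely for illustration, while the real pipeline component is C#:

```cpp
#include <iterator>
#include <sstream>
#include <string>

// After one reader consumes the stream (like XmlTextReader parsing the body),
// a second reader sees nothing unless the position is rewound first.
std::string consumeThenRewind(std::istringstream& in) {
    // First pass: consume everything.
    std::string first((std::istreambuf_iterator<char>(in)),
                      std::istreambuf_iterator<char>());
    in.clear();   // clear any error flags so seeking works
    in.seekg(0);  // rewind, analogous to pInMsg.BodyPart.Data.Position = 0
    // Second pass: the next reader now sees the full content again.
    return std::string((std::istreambuf_iterator<char>(in)),
                       std::istreambuf_iterator<char>());
}
```

Without the `clear()`/`seekg(0)` pair, the second read would return an empty string — the same symptom the BizTalk endpoint manager reports as a missing root element.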
Source: https://social.msdn.microsoft.com/Forums/en-US/8043cdef-de37-48f2-bbdc-a792d8dc286d/constructing-xmltextreader-causes-root-element-is-missing-in-biztalk-wcf-send-pipeline?forum=biztalkr2adapters
Kyle: “Is it possible to have class Foo: Bar where Foo has a delegate of type protocol FooDelegate: BarDelegate where Bar also has a delegate declared as type BarDelegate? In my case i am subclassing scrollview and want to declare delegate as my own type that conforms to UIScrollViewDelegate? I get the error that a property delegate with type FooDelegate? cannot override a property with type UIScrollviewDelegate?”
If I’m understanding, you want to be able to create delegation where the same property (delegate) can be assigned to ever more specialized protocols when subclassing. So, for example, you might have a base class like UIScrollView whose delegate is UIScrollViewDelegate, and a subclass like UITableView whose delegate is UITableViewDelegate. Right?
In Swift, you must ensure the child protocol conforms to the parent protocol. I do not believe this is the case with the scroll and table view delegates in Objective-C. Start with a core delegate protocol like this.
public protocol DelegateProtocol: class { func showMyType() -> Void }
It’s an empty placeholder for all delegation protocols. You don’t need the showMyType requirement here; I’m just putting it in for demonstration, so you can check where the required member is implemented. If you want to use weak delegation, you must declare class, as in this example.
To demonstrate how this works, here are a core delegate protocol and a derived one:
public protocol CoreTypeDelegate: DelegateProtocol {}

extension CoreTypeDelegate {
    public func showMyType() {
        print("This is a Core Type Delegate (required)")
    }
    public func shared() {
        print("Shared at Core level (extension)")
    }
}

public protocol DerivedTypeDelegate: CoreTypeDelegate {}

extension DerivedTypeDelegate {
    public func showMyType() {
        print("This is a Derived Type Delegate (required)")
    }
    public func shared() {
        print("Override at Derived level (extension)")
    }
    public func exclusive() {
        print("Implemented only at Derived level (extension)")
    }
}

// Implement one of each
class ACoreDelegate: CoreTypeDelegate {} // like UIScrollViewDelegate
class ADerivedDelegate: ACoreDelegate, DerivedTypeDelegate {} // like UITableViewDelegate

print("-- Core delegate")
let myCoreDelegate = ACoreDelegate()
myCoreDelegate.shared()     // core version
myCoreDelegate.showMyType() // core version

print("-- Derived delegate")
let myDerivedDelegate = ADerivedDelegate()
myDerivedDelegate.shared()     // derived override
myDerivedDelegate.showMyType() // derived version
myDerivedDelegate.exclusive()  // only in derived
These three methods differentiate how required members, extension-only members, and exclusive-members are accessed when instances are used in different roles.
Next, here’s a basic protocol for classes that use delegates:
public protocol Delegatable {
    associatedtype DelegateType: DelegateProtocol
    var delegate: DelegateType? { get set }
}
This protocol consists of a conforming type declaration, and a delegate property. This indirection allows you to store instances of arbitrary types, so you can use the same delegate property for both a base class and its more specialized children.
To see this in action, you need delegatable types. Here’s a base class, similar to the role that scroll views play. (Warning: weak delegation will crash playgrounds. If you’d rather use a playground, omit weak in the property implementation.)
// Like UIScrollView
public class BaseClass<Delegate: CoreTypeDelegate>: Delegatable {
    public typealias DelegateType = Delegate
    public weak var delegate: DelegateType? = nil
    public func baseFunc() {
        print("implemented in base class")
    }
}
Now you can create an instance, specifying a base class for delegation:
var myBaseInstance = BaseClass<ACoreDelegate>()
myBaseInstance.delegate = myCoreDelegate // ok
print("-- Should use core implementations")
myBaseInstance.delegate?.showMyType()
myBaseInstance.delegate?.shared()
The derived delegate class inherits from the base delegate class. You can substitute it in, but the instance will still use core implementations.
print("-- Conformance by derived delegate type")
print("is CoreTypeDelegate", myDerivedDelegate is CoreTypeDelegate)       // true
print("is DerivedTypeDelegate", myDerivedDelegate is DerivedTypeDelegate) // true
print("is ACoreDelegate", myDerivedDelegate is ACoreDelegate)             // true
print("is ADerivedDelegate", myDerivedDelegate is ADerivedDelegate)       // true

print("-- Will still use core implementations, because it's typed to CoreTypeDelegate")
myBaseInstance.delegate = myDerivedDelegate // ok
myBaseInstance.delegate?.showMyType()
myBaseInstance.delegate?.shared()

print("-- Casting delegate, only required implementation does not override")
guard let derived = myBaseInstance.delegate as? DerivedTypeDelegate else {
    fatalError("Cannot cast derived delegate to DerivedTypeDelegate")
}
derived.showMyType() // still uses core showme
derived.shared()     // uses derived shared
derived.exclusive()  // not available in core
Subclassing the base class and creating instances with the derived delegate protocol enables you to use the delegate property with a more specialized feature set.
// Like UITableView
public class DerivedClass<Delegate: DerivedTypeDelegate>: BaseClass<Delegate> {
    public func derivedFunc() {
        print("implemented in derived class")
    }
}

var myDerivedInstance = DerivedClass<ADerivedDelegate>()
// myDerivedInstance.delegate = myCoreDelegate // no, does not allow: cannot assign value of type 'ACoreDelegate' to type 'ADerivedDelegate?'
myDerivedInstance.delegate = myDerivedDelegate // yes
print("-- Should use native implementations for all")
myDerivedInstance.delegate?.showMyType() // yes!
myDerivedInstance.delegate?.shared()
myDerivedInstance.delegate?.exclusive()
As always, I’ve probably messed up some things along the way, so if you have better solutions or you find an issue in this example, please let me know and I’ll fix. Thanks!
2 Comments
Probably a typo : I think DerivedClass should inherit from BaseClass (not “BaseClass” alone).
Plus : how would you cope with the additional complexity that we often want the delegate methods to take an instance of the “Delegatable” class as first argument ?
e.g.: scrollViewDidScroll(_ scrollView: UIScrollView)
I couldn’t come up with something smarter than having BaseClass implement some BaseClassProtocol (resp. DerivedClass implement some DerivedClassProtocol), and declare that methods of the CoreTypeDelegate (resp. DerivedTypeDelegate) require a BaseClassProtocol instance (resp. DerivedClassProtocol instance) as first argument.
“DerivedClass should inherit from BaseClass-of-Delegate” (brackets have been removed from my comment 🙂 )
Source: http://ericasadun.com/2016/07/28/dear-erica-how-do-i-mimic-objective-cs-delegate-inheritance-pattern/
Are there any good resources I can go and read to find out how to start coding my game so that if I press a key (e.g. 1), my player will cycle through his attack animation while continuing to move in his current direction, and only be able to attack within a certain timeframe?
I have tried to look at other people's code in games I found where the code was available, but since they code with classes, it gets me confused a lot as I am still learning.
I read this post. Although this kind of helps, for me it would mean putting a state within a state.
Is this possible ?
I currently have a CombatIdle state (Enemy sat idle), a Combat Chasing state (Enemy is chasing the player around) and a CombatRetreat state (Player out of range, enemy heading back to be idle). If I can put a state within a state, I'm guessing I could just put the state in the CombatChasing state as this is the one where the enemy would be in range.
I haven't posted code as I haven't began coding it. just looking for helpful resources on this before I come back and ask why it doesn't work
Cheers
...just show some code, and point out specifically what doesn't work the way you want it to....
----------------------------------------------------Please check out my songs:
Ok, I'm working on this now so once I get some code up and running I'll do that
Ok cool! I guess basically it's going to come down to conditions every time your game loop starts again, so, did I press attack? No? Carry on then...or...yes? Is there an enemy next to me? No? just play my attack animation then....or....yes? Well, how many DP does it have?....etc etc etc
Ok, here's what I have so far
Again my code is too long to fit it all in so cut most of it out.
So what I believe is that my code is actually performing an attack animation, although it is too quick for me to see. All I see is my player jumping across the screen by X pixels.
This may work, however I think I need some kind of timer to slow the animation down so instead of it playing faster than I can see, it maybe takes 1-1.5 secs to complete.
This I don't know how to do.
EDIT:- If you need my full code let me know and i'll try and post it somehow
Ultimately I think you'll want a timer, but you can slow down things as well by using a 'miniCounter', that increments the frame, something like:
int miniFrame;
int mainFrame;
function movePlayer()
{
    miniFrame++;
    if(miniFrame > 1000) //increase/decrease this to slow down/speed up
    {
        miniFrame = 0; //reset the counter so frames keep advancing at a steady rate
        mainFrame++;
    }
}
That way you can control when the frame count goes up by using the miniFrame counter, which can be a huge number - the only problem with this is that it wont run at the same speeds on different machines (probably), so not as precise as a timer, but it should slow things down enough to see the animation!
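Building on the counter idea, a frame-rate-independent variant accumulates elapsed time and advances the animation frame only after a fixed interval has passed, so playback speed no longer depends on how fast the loop spins. This is a generic sketch with illustrative names, not the thread's actual structs:

```cpp
// Advances an animation frame only after frameDuration seconds have elapsed.
// All names here are illustrative, not taken from the thread's code.
struct FrameTimer {
    double frameDuration;    // seconds each animation frame stays on screen
    double accumulated = 0;  // time collected since the last frame change
    int    frame = 0;        // current frame index
    int    maxFrame;         // number of frames in the animation strip

    FrameTimer(double duration, int frames)
        : frameDuration(duration), maxFrame(frames) {}

    // Call once per game tick with the elapsed time in seconds.
    // Returns true while the animation is still running.
    bool update(double dt) {
        accumulated += dt;
        while (accumulated >= frameDuration && frame < maxFrame) {
            accumulated -= frameDuration;  // carry the remainder over
            ++frame;
        }
        return frame < maxFrame;
    }
};
```

With Allegro 5 you would typically feed `update()` your timer period (e.g. `1.0 / 60.0`) from the ALLEGRO_EVENT_TIMER handler, which makes the animation run at the same speed on every machine.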
So what I believe is my code is actually performing an attack animation although it is too quick for me to see.
Shouldn't be the case, unless the whole animation happens in less than 16ms since it may end before the average screen can refresh.
while (keys[KEY_1] && Human.curFrame != Human.maxFrame) //Does a loop as if 1 is held down
{
This loop right here will do the whole attack in one frame without doing anything.
I wrote a pretty simple way to do animation based on your already existing code. (Not tested btw, just to give an idea of the logic you can possibly use)Fix and change as you see fit.
Cheers for the help guys.
I'm working through your code taron, trying to understand it all, but I have ran into a few issues and questions.
How do I declare or reference these within my main code? (I've tried animation.IDLE, animation[IDLE], Animation.animation[IDLE] but none work — unless, as it's a struct, it works differently to the way I did my enum keys[].) I'm assuming these would be where I code my variables for different animations. I haven't done this or seen this before.
Eg if ATTACK, use animationRow 1, and animationColumns = 12if IDLE, use animation Row 2 etc.
Then I have added this into my Player struct
However I do get the following errors:
C2146: syntax error: missing ';' before identifier 'animation'
C4430: missing type specifier - int assumed. Note: C++ does not support default-int
C2039: 'animation': is not a member of 'Player'
(Previously all my Player struct code worked.) My Player struct is in an objects.h file, if this makes any difference.
I can place this in my main body of code although this would stop PlayerAttack from working as it would no longer reference to player (I think).
I do like the way you wrote this aswell, I understand most of it can see how it works and how it does the animations
You need a ; at the end of the struct!
struct Player
{
Animation animation;
};
I copied that from taron, didn't see his didn't have a ; at the end of the struct. In my code I do have this at the end.
You haven't put ... in there have you??
No
My current player struct looks like this (this is working before starting to add in taron's code) I have added the Animation animation just to show how it would be
And then the error being displayed is attached
I often forget to put a ';' after classes and structs if I recently programmed in a different language.
How do I declare or reference these within my main code ? (I've tried animation.IDLE, animation[IDLE], Animation.animation[IDLE] but none work
Animation::IDLE would be the correct syntax.
Also you'd have to add int animationState or something inside the Animation struct to track which animation is being played, I forgot to add that.
C2146: syntax error: missing ';' before identifier 'animation' C4430: missing type specifier - int assumed.Note C++ does not support default-int
Generally this means it doesn't recognize the type. If you've put the Animation in its own header file, make sure to include that header in your Player header, otherwise the compiler will not be able to find the definition of the type.If you've defined the Animation in your main.cpp or whatever file, move it to a header file.So it would look something like this:
#ifndef PLAYER_H
#define PLAYER_H
#include "animation.h"
struct Player
{
...
};
#endif
Or the shorter but less portable way. (Non standard extension, but supported by many compilers)
#pragma once
#include "animation.h"
struct Player
{
...
};
Yeah the code is fine, have you declared Animation before player?
Nope, I had declared Animation after it, changed it and that's fixed that part, thanks Dizzy.
Taron, thanks for the help on the correct syntax, now to work on getting it right, not long left to finish it
All my structs are in the same file, I only have 2 objects.h (all my structs) and main.cpp
Hopefully I should be able to figure this out now.
Cool! Let us know if you get stuck again....I'm warming up for Speedhack so need motivation to get coding Allegro5 again!
I think i'm doing this wrong :-(
I have placed my ChangeAnimation in my ALLEGRO_EVENT_TIMER code, so to me it says, if Player is in the CombatIdle state and KEY_1 is pressed then change the animation.
Then for the function I have placed some states of the animations.
What would go in place of newAnimation in this line ChangeAnimation(animation, newAnimation); ?
I tried Animation::ATTACK, animation::ATTACK, animation.ATTACK with no luck.
Before you use 'newAnimation', do you create it?
ie:
Animation *newAnimation;
ChangeAnimation(animation, newAnimation);
(the above code wont work, I was just seeing if you'd 'created' the newAnimation...)
EDIT:
Consider this...
int i = 5;
void changeInt(int &arg)
{
arg = 20;
}
int main()
{
changeInt(i);
//i now = 20!!
}
That will be it, no I haven't created it apart from in the ChangeAnimation function. I see that part is in the PlayerAttack part of tarons code.
I feel I'm getting more and more confused as I delve further into the depths of the unknown :-) And going from zero Allegro5 experience to doing it a lot every day is also confusing me.
Cheers Dizzy :-)
Try not to go too far down a road that is suggested on here if it seems alien; I used 1 .cpp file and hundreds of separate variables for my first 20 odd projects....LONG before I started delving into enums and structs....keep it simple, and as soon as something doesn't make sense, come on here and post some code...that's how I learned!
Cheers for the advice.
So trying to do it in a way I understand. Using Mike Geig's tutorial on his spaceshooter he has a comet going across his screen using no input. It move across without any keys being involved and his code for that is
I added comments to compare to my code.
So based on that, I want a animation to happen basically the same way except I press a key first so I press the key and declare that the player is attacking
So looking at the 2 codes, why is one allowing it to be cycled all the time and perform the animation across the screen, yet with my code only cycle through the animation when I actually hold down the key. From what I gather I have removed the keypress part by adding the PlayerAttacking, so my PlayerAttacking is now similar to If the comet is onscreen.
The only big difference I see isMy code
Mike Geig's code
We both have our calls in the ALLEGRO_EVENT_TIMER section aswell.
EDIT_ OMG I got it to workOn the bad side is when I move after attacking it plays that animation again, but I got it to work
So...do you want your animation run whilst you're holding down KEY_1, or simply run it when KEY_1 is pressed?
EDIT!
YOU GOT IT!
I just edited at the same time but I kinda got it to work
I just wanted my player to move about, but if the 1 key was pressed (and released) it would play a attack animation.
As of now, when I press 1 it does play the attack animation.
However when I move after that it keeps playing the attack animation instead of the moving ones. So need to fix that
But it is progress
Cool
Human.Attacking = true;
if (Human.Attacking) // Is my player attacking, then do the code below
..hmmmmm
You right
I changed to this
Where before I was doing
Then putting Human.Attacking = true; in my function.
It was probably suggested way earlier than now I just didn't get it in the right place
EDIT: If I press 1 while moving, he plays his attack animation all the time, until I release left or right and re-press it.
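One common way to fix that last problem is to treat the attack as a one-shot state that clears itself when the strip finishes, so the movement animations take over again automatically. A generic sketch with illustrative names (not the actual Player struct from this thread):

```cpp
// The attack is entered on the key press and left automatically when the
// last frame has been shown; movement animations resume afterwards.
// Names are illustrative, not from the thread's code.
struct PlayerAnim {
    bool attacking = false;
    int  curFrame  = 0;
    int  maxFrame  = 12;   // frames in the attack strip

    void pressAttack() {
        if (!attacking) {      // ignore presses mid-swing
            attacking = true;
            curFrame  = 0;
        }
    }

    // Call once per animation tick.
    void update() {
        if (!attacking) return;
        if (++curFrame >= maxFrame) {  // strip finished
            attacking = false;         // fall back to move/idle animation
            curFrame  = 0;
        }
    }
};
```

The key point is that nothing outside `update()` ever needs to clear the flag — holding a movement key can never replay the attack, because `attacking` only becomes true again on a fresh key press.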
Source: https://www.allegro.cc/forums/thread/615392/1013343
Hello, I successfully implemented an audio service to handle playing several concatenated audio files using the @ionic-native/media plugin, and it's working fine in the browser when checked on Mac/Chrome. However, on iOS/Chrome the first file in the playlist is played but subsequent files aren't. I did a lot of research but I get no error or info... on iOS the second instance of the mediaObject just does not get played, and I made sure that the code gets executed up to the relevant line (this.mediaObject.play()) for that instance.
This is my audio service:
import {Injectable} from '@angular/core';
import {Media, MediaObject} from "@ionic-native/media";

@Injectable()
export class AudioService {
  mediaObject: MediaObject;
  playlist: Array<string> = [];
  counter: number = 0;

  constructor(private media: Media) {}

  preparePlayAudio(src: string) {
    if (typeof this.mediaObject !== 'undefined') {
      this.mediaObject.pause();
      this.mediaObject.release();
      this.mediaObject = undefined;
    }
    this.mediaObject = this.media.create(src);
    this.playAudio();
  }

  next() {
    this.counter += 1;
    if (typeof this.playlist[this.counter] !== 'undefined') {
      this.preparePlayAudio(this.playlist[this.counter]);
    }
  }

  playAudio() {
    this.mediaObject.play();
    this.mediaObject.onStatusUpdate.subscribe(status => {
      if (status.toString() == "1") {
        // player start
      }
      if (status.toString() == "4") {
        // player end running
        this.next();
      }
    });
  }

  play(playlist: string[]): void {
    this.counter = 0;
    this.playlist = playlist;
    this.preparePlayAudio(playlist[this.counter]);
  }
}
And I call it by
audioService.play(['audio1.mp3', 'audio2.mp3']);
VERSIONS
Ionic Version 3.9.2
Ionic Native Media: 4.7.0
Source: https://forum.ionicframework.com/t/issue-creating-playlist-with-mediaobject-in-ios-chrome/138895
Type: Posts; User: babaliaris
I have the option Use(/Yu) in the precompiled header options.
Precompiled header file : pch.h
Precompiled header source file : <full_path_to>pch.cpp
In the include direcotries of the...
Yes, I know that! The thing is that I'm using some quite big header-only libraries and every time I change my code it takes 1 minute to compile.
And I'm talking about the beginning of a project...
Let's suppose we have this header file:
People.h
class People
{
public:
People(const std::string &name, int age)
: m_name(name), m_age(age)
Let's assume you're building a game and you want to have a Window class which it's implementation differs on different platforms.
I have seen two ways to achieve this. One is by wrapping each...
LOL, I thought dynamic cast would fail on a downcast and work only for upcasts. Is it the other way around?
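For the record, dynamic_cast goes both ways on polymorphic types: upcasts always succeed, while a downcast succeeds only when the object's dynamic type actually matches; on pointers a failed downcast yields nullptr (on references it throws std::bad_cast). A quick sketch — my own illustration, not code from these posts:

```cpp
// dynamic_cast requires a polymorphic base (at least one virtual member).
struct Base    { virtual ~Base() = default; };
struct Derived : Base {};
struct Other   : Base {};

// Checked downcast: true only if b really points at a Derived object.
bool isDerived(Base* b) {
    return dynamic_cast<Derived*>(b) != nullptr;
}
```

Because the check happens at runtime against the object's real type, it is the safe alternative to the C-style reference cast shown above, which performs no check at all.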
If you want to cast a reference you usually do this:
Subclass &myref = (Subclass&)base_class_object_ref;
But I have also seen people doing this:
Never mind my mistake... The reason wasn't the copy constructor. In the Dispatcher(Event &e) I was passing an instance of MouseMovedEvent which is a subclass of Event.
But because inside the...
class Dispatcher
{
//Dispatcher callback.
template<typename T>
using DispatchCallback = std::function<void(T&)>;
public:
Haha, I just figured it out! Yeah, that was the problem! I was trying to std::cout << returned_string << std::endl . I thought that the result would display zero but apparently it does not work ...
I did it but the issue was not fixed...
By the way if I remove the line:
//Set exception mask for file stream.
file.exceptions(std::ifstream::failbit | std::ifstream::badbit);
then this...
std::string Program::readFile(const char* path)
{
//Variables.
std::ifstream file;
std::string content;
//Set exception mask for file stream.
file.exceptions(std::ifstream::failbit |...
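The snippet above is cut off mid-line; for reference, a complete, compilable version of the same pattern might look like this (my reconstruction under stated assumptions, not the original poster's exact code):

```cpp
#include <fstream>
#include <sstream>
#include <string>

// Reads a whole file into a string. Stream exceptions are enabled so an
// open failure or hard read error throws std::ios_base::failure instead
// of silently returning an empty string.
std::string readFile(const char* path) {
    std::ifstream file;
    // Throw on open failure or on a hard read error.
    file.exceptions(std::ifstream::failbit | std::ifstream::badbit);
    file.open(path);

    std::stringstream buffer;
    buffer << file.rdbuf();  // slurp the entire file
    return buffer.str();
}
```

Note that copying through `rdbuf()` avoids the usual pitfall of the exception mask firing on the final read-at-EOF, since the extraction happens on the stream buffer rather than through the stream's formatted input.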
I figured it out.
The problem is that when I'm referencing a memory address like this:
by default, the data segment register is being used to do the segment - offset calculation. For...
I'm compiling using
nasm -f bin
I'm running the bootloader using
qemu-system-x86_64
I have 2 versions of the same code with a slight change to the second version. First version uses the [org...
I'm already using others, like google test. Just for fun and knowledge I'm creating my own. Google test also uses global variables to achieve that.
I fixed the remove method. (Yes I wasn't handling the disconnection of the deleted node from the list).
Well about the for loop, it works if the size does not change. if you do something like...
I understand now. Well, I'm trying to figure another implementation which googletest follows and I think it will work.
#include <iostream>
#include <ctime>
#include <vector>
#define...
Hello!
I made a unit testing framework and for the first time, I noticed that breakpoints won't work with macros. This is how my tests look like:
main.cpp
#include <VampTest/VampTest.h>...
This is my current implementation and it works quite well.
#ifndef VMPS_LINKEDLIST_HPP
#define VMPS_LINKEDLIST_HPP
#include <iostream>
namespace VMPS
{
I think I get the idea now. Tell me if I understood it correctly :D
Something like this (code not tested, just to give an idea of the logic you can possibly use):
User's Code:
//The other must be the same type as the search method template....
source of the above quote
Ok, I searched c++ iterators on the internet and found this. I'm new to STL and I didn't know that. Thank you for pointing me in the right direction. I'm off my way...
I just tried and my eyes popped out :d
Hello.
Until now, I used to do things the old way (As I learned in the java course). For example, in order to create a generic linked list, I would create a Node class which the user can inherit...
Actually, can I inherit smart pointers to extend them?
This is basically a unique_ptr implementation, right? I see now! Incredible! Finally, I learned why smart pointers can be handy.
If I'm not wrong, shared ptrs are implemented with reference...
I just noticed that this will cause a problem... What if those references go out of scope??? Then, when the deletion of the object happens, I will try to set NULL references that have gone out.
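This dangling-observer problem is exactly what std::weak_ptr addresses: the observer holds a non-owning reference and asks, at the point of use, whether the object is still alive. A minimal sketch (my own example, not code from the thread):

```cpp
#include <memory>

struct Subject { int value = 42; };

// Returns the subject's value if it still exists, or fallback otherwise.
// lock() promotes the weak_ptr to a shared_ptr only while the object lives,
// so there is never a dangling access and nothing to null out by hand.
int readOrDefault(const std::weak_ptr<Subject>& ref, int fallback) {
    if (auto sp = ref.lock())
        return sp->value;
    return fallback;  // object was destroyed; detected safely
}
```

This sidesteps the scope problem entirely: when the last shared_ptr owner goes away, every weak_ptr simply starts reporting expired, no matter where it is stored.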
Source: https://forums.codeguru.com/search.php?s=eec1caae594ebb6589c78aaf72860ab9&searchid=20961316
Google Cloud Storage (GCS) can be used with tfds for multiple reasons:
- Storing preprocessed data
- Accessing datasets that have data stored on GCS
Authentication
Before starting, you should decide how you want to authenticate. There are three options:
- no authentication (a.k.a. anonymous access)
- using your Google account
- using a service account (can be easily shared with others on your team)

You can find detailed information in the Google Cloud documentation.
Simplified instructions
If you run from Colab, you can authenticate with your account by running:
from google.colab import auth
auth.authenticate_user()
If you run on your local machine (or in VM), you can authenticate with your account by running:
gcloud auth application-default login
If you want to log in with a service account, download the JSON key file and set:
export GOOGLE_APPLICATION_CREDENTIALS=<JSON_FILE_PATH>
Using Google Cloud Storage to store preprocessed data
Normally when you use TensorFlow Datasets, the downloaded and prepared data will be cached in a local directory (by default ~/tensorflow_datasets).

In some environments where local disk may be ephemeral (a temporary cloud server or a Colab notebook) or you need the data to be accessible by multiple machines, it's useful to set data_dir to a cloud storage system, like a Google Cloud Storage (GCS) bucket.
How?
1. Create a GCS bucket and ensure you (or your service account) have read/write permissions on it (see the authorization instructions above).
2. When you use tfds, you can set data_dir to "gs://YOUR_BUCKET_NAME":

ds_train, ds_test = tfds.load(name="mnist", split=["train", "test"], data_dir="gs://YOUR_BUCKET_NAME")
Caveats:
- This approach works for datasets that only use tf.io.gfile for data access. This is true for most datasets, but not all.
- Remember that accessing GCS is accessing a remote server and streaming data from it, so you may incur network costs.
Accessing datasets stored on GCS
If dataset owners allowed anonymous access, you can just go ahead and run the tfds.load code - and it would work like a normal internet download.
If the dataset requires authentication, please use the instructions above to decide which option you want (own account vs. service account) and communicate the account name (a.k.a. email) to the dataset owner. After they grant you access to the GCS directory, you should be able to run the tfds download code.
Source: https://www.tensorflow.org/datasets/gcs
Introduction
In this smalltalk I will introduce how you can secure your ZK applications using Apache Shiro, a Java security framework.
Apache Shiro
Apache shiro is an easy-to-use Java security framework that provides security features such as authentication, authorization, cryptography, session management and so on. It is a comprehensive application security framework and especially useful for securing your web applications based on simple URL pattern matching and filter chain definitions.
Configuration
You can add Apache Shiro framework support to your ZK applications either by downloading the binary files from the Shiro project's download section and placing the jar files in your WEB-INF/lib folder, or, if you are using Maven, by adding dependencies to your project's pom.xml file as shown below:
<dependency> <groupId>org.apache.shiro</groupId> <artifactId>shiro-core</artifactId> <version>1.1.0</version> </dependency> <dependency> <groupId>org.apache.shiro</groupId> <artifactId>shiro-web</artifactId> <version>1.1.0</version> </dependency> <!-- Shiro uses SLF4J for logging. --> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-simple</artifactId> <version>1.6.1</version> <scope>runtime</scope> </dependency>
You also need to enable Shiro security filter in web.xml for it to auto register security filter configurations. Make sure you have the filter mapping configured to filter all URL patterns.
<filter> <filter-name>ShiroFilter</filter-name> <filter-class>org.apache.shiro.web.servlet.IniShiroFilter</filter-class> </filter> <filter-mapping> <filter-name>ShiroFilter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping>
Application overview
The ZK application used to demonstrate security with Shiro in this article is a simple web application with a few pages and no real functionality. It has a few registered users with specific roles; except for the application home page, all other pages are restricted to users with a specific role. There are three pages in the application, for users with the marketing, products and sales roles. Users with the marketing role can only access the marketing page in addition to the home page, and so on.
Login page
Let's create a simple login page first. Shiro will automatically redirect users to this login page whenever they try to access a restricted page. Once the user enters credentials, Shiro will authenticate them and, based on the roles assigned to the user, authorize access to that restricted page. The login page simply contains textbox input components for entering a username and password. In addition, it has a checkbox component to let users select whether the application should remember their credentials for future access.
<?page id="testZul" title="CUSTOM ZK + Apache Shiro login"?>
<window id="loginwin" title="CUSTOM ZK + Apache shiro login" border="normal" width="350px">
    <!-- this form-login-page form is also used as the form-error-page to ask for a login again. -->
    <html style="color:red" if="${not empty requestScope.loginFailure}">
    <![CDATA[
        Your login attempt was not successful, try again.<br/><br/>
    ]]>
    </html>
    <groupbox>
        <caption>Login</caption>
        <h:form
            <grid>
                <rows>
                    <row>User: <textbox id="u" name="user"/></row>
                    <row>Password: <textbox id="p" type="password" name="pass"/></row>
                    <row><checkbox id="r" name="remember"/>Remember</row>
                    <row spans="2">
                        <hbox>
                            <h:input
                        </hbox>
                    </row>
                </rows>
            </grid>
        </h:form>
    </groupbox>
</window>
The most important thing to note here is that the login form must be named loginform, as per Shiro's requirement. One additional piece of code above displays login errors: in case the login attempt was unsuccessful, Shiro redirects the user back to the login page, and Shiro's FormAuthenticationFilter will set a failure attribute with the error message in such a scenario. I have extended FormAuthenticationFilter and overridden its setFailureAttribute(ServletRequest, AuthenticationException) to set an attribute in the request. I detect this attribute when the login page is displayed and, if it is present, an error message is shown to the user indicating the failed login attempt and the reason.
public class SampleFormAuthenticationFilter extends FormAuthenticationFilter {
    protected void setFailureAttribute(ServletRequest request, AuthenticationException ae) {
        String message = ae.getMessage();
        request.setAttribute(getFailureKeyAttribute(), message);
    }
}
Securing pages
As described in the application overview, we have different pages in our ZK application, and they are restricted to users with specific roles. Let's see how we can configure Shiro to detect users accessing these pages and authenticate and authorize them based on the roles assigned to them in the application. Shiro is designed to work in any environment, from simple command-line applications to the largest enterprise clustered applications, and because of this diversity of environments there are a number of suitable configuration mechanisms. However, for simplicity we will use the common-denominator text-based configuration, i.e. the INI format, to set up the Shiro configuration for our application.
INI configuration
INI is basically a text configuration consisting of key/value pairs organized into uniquely-named sections. Keys are unique per section only, not over the entire configuration. A Shiro INI configuration has a few sections indicated by square brackets [SECTION_NAME]. Below is the INI configuration for our application:
[main]
sampleauthc = shiro.sample.SampleFormAuthenticationFilter
sampleauthc.loginUrl = /login.zul
sampleauthc.usernameParam = user
sampleauthc.passwordParam = pass
sampleauthc.rememberMeParam = remember
sampleauthc.successUrl = /home.zul
sampleauthc.failureKeyAttribute = loginFailure
roles.unauthorizedUrl = /accessdenied.zul

[urls]
/login.zul = anon
/marketing/** = sampleauthc, roles[marketing]
/products/** = sampleauthc, roles[products]
/sales/** = sampleauthc, roles[sales]
/zkau/** = anon
/home.zul = anon

[users]
admin = a,administrator
marketingguy = a,marketing
productsguy = a,products
salesguy = a,sales
Keep this in a file called shiro.ini and add it to your classpath, and the ShiroFilter defined in web.xml will automatically discover it. You can also give it a different name and keep it in a different location, but then you must specify these details in web.xml as specified here. Below I will describe a few important Shiro configuration concepts and formats.
The [main] section above defines the Shiro filter definitions and their configurations. Since we have extended Shiro's FormAuthenticationFilter, line 2 defines sampleauthc as an alias for our own filter that Shiro will use for login-page authentication. We also indicate the login page input component names so that Shiro retrieves their values correctly.
The [users] section defines our application users. The format of each user definition is username = password, role1, role2, ... Although the configuration here shows passwords in plain text, Shiro provides support for password hashes.
The [urls] section defines which URL patterns are restricted and for what roles. It also defines which Shiro filters are applied to authenticate and authorize access to those URL patterns. The format is url pattern = filter1, filter2, ..., roles[role1,role2,...]. For our sample application, the pages in the marketing, products and sales directories can each be accessed only by users with the marketing, products and sales roles respectively. Note that on line 16 I have set anonymous access to the /zkau/** pattern, which is the request pattern for ZK Ajax requests as defined in web.xml. Also, the home.zul page is accessible to all users. Remember to order your URL pattern access rules from most specific to least specific, as Shiro does a sequential match for URL patterns starting from the top of the [urls] section.
In addition to these sections, there is also the [roles] section, which allows developers to define even more fine-grained control on a per-role basis; that is, you can assign different permissions to each role.
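To illustrate, a hypothetical [roles] section might grant a wildcard permission to administrators and a couple of named permissions to the marketing role. The permission strings below are made up for illustration only; our sample application does not define any:

```ini
[roles]
# role = permission1, permission2, ...
administrator = *
marketing = campaigns:view, campaigns:create
```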
As configured above, the home.zul page is accessible to everyone. From the application home page, users can visit the other pages by clicking on the hyperlinks.
<window title="ZK + Apache Shiro application home" border="normal" width="350px" height="220px"
        apply="shiro.sample.HomeComposer">
    <label id="user" value=""></label>
    <grid height="200px">
        <rows>
            <row>
                <a label="Marketing" href="/marketing/marketing.zul"></a>
            </row>
            <row>
                <a label="Products" href="/products/products.zul"></a>
            </row>
            <row>
                <a label="Sales" href="/sales/sales.zul"></a>
            </row>
        </rows>
    </grid>
</window>
In the controller, we use Shiro's SecurityUtils.getSubject() API to get the currently logged-in user. Here SecurityUtils.getSubject() returns an instance of Subject, which in Shiro terminology means anything that interacts with the application; for a web application it usually means the end user. If the user interacting with the application is already authenticated, we can display user details as shown below.
if (SecurityUtils.getSubject().isAuthenticated()) {
    user.setValue("Welcome: " + SecurityUtils.getSubject().getPrincipal());
} else {
    user.setValue("");
}
Access denied page
In cases where a user is trying to access a restricted page without proper authorization, we can configure Shiro to redirect the user to a common access-denied page. This is done by specifying roles.unauthorizedUrl = /accessdenied.zul in shiro.ini, as shown above on line 9.
Implementing Logout functionality
Logging out a user is as simple as calling the SecurityUtils.getSubject().logout() API, as shown below on line 2:
public void onClick$logout() {
    SecurityUtils.getSubject().logout();
    execution.sendRedirect("/home.zul");
}
Note that Shiro does not yet support automatically redirecting users to the login page or a predefined page after logout, so you need to do the redirect yourself after calling the logout() API, as shown on line 3.
Summary
In this article I showed you how to secure your ZK applications using Apache Shiro. The sample discussed here adds very simple page-based security to a ZK application. You can refer to the Shiro reference documentation to add more security features such as SSL support, LDAP support, cryptography and so on.
Download
You can download the sample application code from its github repo here
|
https://www.zkoss.org/_w/index.php?title=Small_Talks/2012/March/Securing_ZK_Applications_With_Apache_Shiro&oldid=25850
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
#include <tinyxml2.h>
XML text.

Note that a text node can have child element nodes, for example:

    <root>This is <b>bold</b></root>

bool CData() const
    Returns true if this is a CDATA text element.

void SetCData(bool isCData)
    Declare whether this should be CDATA or standard.
|
https://cocos2d-x.org/reference/native-cpp/V2.2/d3/d66/classtinyxml2_1_1_x_m_l_text.html
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
PROBLEM LINK:
Author: Fedor Korobeinikov
Tester: Hiroto Sekido
Editorialist: Kevin Atienza
DIFFICULTY:
MEDIUM
PREREQUISITES:
sqrt decomposition, preprocessing
PROBLEM:
Given a sequence of N integers A_1, A_2, \ldots, A_N, where each A_i is between 1 to M, you are to answer Q queries of the following kind:
- Given L and R, where 1 \le L \le R \le N, what is the maximum |x - y| such that L \le x, y \le R and A_x = A_y?
Note that in the problem, Q is actually K.
QUICK EXPLANATION:
For each i, 1 \le i \le N, precompute the following in O(N) time:
- \text{next}[i], the smallest j > i such that A_i = A_j
- \text{prev}[i], the largest j < i such that A_i = A_j
Let S = \lfloor \sqrt{N} \rfloor, and B = \lceil N/S \rceil. Decompose the array into B blocks, each of size S (except possibly the last). For each i, 1 \le i \le N, and 0 \le j \le B-1, precompute the following in O(N \sqrt{N}) time:
- \text{last_in_blocks}[j][i], the largest k \le jS+S such that A_k = A_i
- \text{block_ans}[j][i], the answer for the query (L,R) = (jS+1,i). For a fixed j, all the \text{block_ans}[j][i] can be computed in O(N) time.
Now, to answer a query (L,R), first find the blocks j_L and j_R where L and R belong in (0 \le j_L, j_R < B). Then the answer is at least \text{block_ans}[j_L+1][R], and the only pairs (x,y) not yet considered are those where L \le x \le j_LS+S. To consider those, one can simply try all x in that range, and find the highest y \le R such that A_x = A_y. Finding that y can be done by using \text{last_in_blocks}[j_R-1][x] and a series of \text{next} calls. To make that last part run in O(S) time, consider only the x such that \text{prev}[x] < L.
EXPLANATION:
We’ll explain the solution for subtask 1 first, because our solution for subtask 2 will build upon it. However, we will first make the assumption that M \le N, otherwise we can simply replace the values A_1, \ldots A_N with numbers from 1 to N, and should only take O(N) time with a set. However, we don’t recommended that you actually do it; this is only to make the analysis clearer.
O(N^2) per query
First, a simple brute-force O(N^2)-time per query is very simple to implement, so getting the first subtask is not an issue at all. I’m even providing you with a pseudocode on how to do it
def answer_query(L, R):
    for d in R-L...1 by -1
        for x in L...R-d
            y = x+d
            if A[x] == A[y]
                return d
    return 0
We’re simply checking every possible answer from [0,R-L] in decreasing order. Note that the whole algorithm runs in O(QN^2) time, which could get TLE if the test cases were stronger. But in case you can’t get your solution accepted, then it’s time to optimize your query time to…
O(N) per query
To obtain a faster running time, we have to use the fact that we are finding the maximum |x-y|. What this means is that for every value v, we are only concerned with the first and last time it occurs in [L,R].
We first consider the following alternative O(N^2)-time per query solution:
def answer_query(L, R):
    answer = 0
    for y in L...R
        for x in L...y
            if A[x] == A[y]
                answer = max(answer, y - x)
    return answer
The idea here is that for every y, we are seeking x, the first occurrence of the value A_y in [L,y], because all the other occurrences will result in a smaller y - x value. Now, to speed it up, notice that we don't have to recompute this x every time we encounter the value A_x, because we are already reading the values A_L, \ldots, A_R in order, so we already have the information "when did A_y first appear" before we ever need it! Here's an implementation (in pseudocode):
def answer_query(L, R):
    index = new map/dictionary
    answer = 0
    for y in L...R
        if not index.has_key(A[y])
            index[A[y]] = y
        answer = max(answer, y - index[A[y]])
    return answer
Now, notice that this runs in O(N) time if one uses a hash map for example!
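As a concrete illustration, here is a runnable Python version of the hash-map approach above (names are mine; A is an ordinary 0-indexed list, while L and R are 1-indexed as in the editorial):

```python
def answer_query(A, L, R):
    """Maximum y - x with L <= x <= y <= R and A_x == A_y.
    A is a 0-indexed Python list; L and R are 1-indexed bounds."""
    first = {}                 # value -> index of its first occurrence in [L, y]
    answer = 0
    for y in range(L, R + 1):
        v = A[y - 1]
        if v not in first:
            first[v] = y
        answer = max(answer, y - first[v])
    return answer
```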
We mention here that it’s possible to drop the use of a hash map by using the fact that the values A_y are in [1,M]. This means that we can simply allocate an array of length M, instead of creating a hash map from scratch or clearing it. However, we must be careful when we reinitialize this array, because it is long! There are two ways of “initializing” it:
- We clear the array every time we're done using it, but we only clear the entries we just touched. This requires listing all the indices we accessed.
- We maintain a parallel array that records, for each index, when that entry was last accessed. To clear the array, we simply update the current time.
We’ll show how to do the second one:
class LazyMap:
    index[1..M]
    found[1..M]  # all initialized to zero
    time = 0

    def clear():
        this.time++

    def has_key(i):
        return this.found[i] == this.time

    def set(i, value):  # called on the statement x[i] = value for example
        this.found[i] = this.time
        this.index[i] = value

    def get(i):  # called on the expression x[i] for example
        return this.index[i]

index = new LazyMap()

def answer_query(L, R):
    index.clear()
    answer = 0
    for y in L...R
        if not index.has_key(A[y])
            index[A[y]] = y
        answer = max(answer, y - index[A[y]])
    return answer
Using this, the algorithm still runs in O(N) time (remember that we assume M \le N), but most likely with a lower constant.
The overall algorithm runs in O(QN) time.
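For reference, the LazyMap pseudocode above translates almost line for line into Python. One caveat: all timestamps start equal to time, so clear() must be called once before each query, exactly as answer_query does:

```python
class LazyMap:
    """Array-backed map over keys 1..M, 'cleared' in O(1) via a timestamp.
    Call clear() before first use: all timestamps start equal to time."""

    def __init__(self, M):
        self.index = [0] * (M + 1)
        self.found = [0] * (M + 1)
        self.time = 0

    def clear(self):
        self.time += 1

    def has_key(self, i):
        return self.found[i] == self.time

    def __setitem__(self, i, value):
        self.found[i] = self.time
        self.index[i] = value

    def __getitem__(self, i):
        return self.index[i]
```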
sqrt decomposition
When one encounters an array with queries in it, there are usually two ways to preprocess the array so that the queries can be done in sublinear time:
- sqrt decomposition, which splits up the array into \lceil N/S \rceil blocks of size S each. S is usually taken to be \lfloor \sqrt{N} \rfloor (hence the term “sqrt decomposition”). Usually, one can reduce the running time to O((N+Q)\sqrt{N}) or O((N+Q)\sqrt{N \log N}). Sometimes, depending on the problem, it may also yield O((N + Q)N^{2/3}) time.
- build some tree structure on top of the array. This usually yields an O(N + Q \log N) or O((N + Q) \log N) time algorithm.
There are other less common ways, such as lazy updates or combinations of the above, but first we’ll try out whether the above work.
Suppose we have selected the parameter S, and we have split the array into B = \lceil N/S \rceil blocks of size S, except possibly the last block which may contain fewer than S elements. Suppose we want to answer a particular query (L,R). Note that L and R will belong to some block. For simplicity, we assume that they belong to different blocks, because if they are on the same block, then R - L \le S, so we can use the O(S) time query above.
Thus, the general picture will be:
|...........|...........|...........|...........|...........|...........|...........|
     ^      ^                                               ^        ^
     L     E_L                                             E_R       R
We have marked two additional points, E_L and E_R, which are the boundaries of the blocks completely inside [L,R]. Now, it would be nice if we have already precomputed the answer for the query pair (E_L,E_R), because then we will only have to deal with at most 2(S-1) remaining values: [L,E_L) and [E_R,R]. We can indeed precompute the answers at the boundaries, but we can do even better: we can precompute the answers for all pairs (E,R), where E is a boundary point and R is any point in the array! There are only O(BN) pairs, and we can compute the answers in O(BN) time also:
class LazyMap:
    ...

S = floor(sqrt(N))
B = ceil(N/S)
index = new LazyMap()
block_ans[1..B][1..N]

def precompute():
    answer = 0
    for b in 1...B
        index.clear()
        E = b*S-S+1  # left endpoint of the b'th block
        answer = 0
        for R in E...N
            if not index.has_key(A[R])
                index[A[R]] = R
            answer = max(answer, R - index[A[R]])
            block_ans[b][R] = answer
(if you read the “quick explanation”, note that there is a slight difference here: we’re indexing the blocks from 1 to B instead of 0 to B-1)
This means that, in the query, the only remaining values we haven’t considered yet are those in [L,E_L). To consider those, we have to know, for each x in [L,E_L), the last occurrence of A_x in [L,R]. To do so, we will need the following information:
- \text{next}[i], the smallest j > i such that A_i = A_j
- \text{prev}[i], the largest j < i such that A_i = A_j
- \text{last_in_blocks}[j][i], the largest k within the first j blocks such that A_k = A_i
How will this help us? Well, we want to find A_x's last occurrence in [L,R]. So first, we find its last occurrence in the blocks up to E_R (it's just \text{last_in_blocks}[\text{floor}(R/S)][x]). However, it's possible that A_x appears in [E_R,R], so we need to follow its \text{next} pointers until we find the last one. Since there are at most S-1 elements in [E_R,R], this seems fast, but it could easily take O(S^2) time, for example when most of the values in [L,E_L) and [E_R,R] are equal. Thankfully, this is easily fixed: we only care about the first occurrence of A_x, so if it has been encountered before, then we don't have to process it again! This ensures that for each distinct value in [E_R,R], its set of indices is iterated only once, which guarantees an O(S) running time!
Checking whether an A_x has been encountered before can also be done using the index approach, or alternatively as \text{prev}[x] \ge L:
def answer_query(L, R):
    b_L = ((L+S-1)/S)
    b_R = R/S
    if b_L >= b_R
        # old query here
    else
        E_L = b_L*S
        answer = block_ans[b_L+1][R]
        for x in L...E_L
            if prev[x] < L  # i.e. x hasn't been encountered before
                y = last_in_blocks[floor(R/S)][x]
                while next[y] <= R
                    y = next[y]
                answer = max(answer, y - x)
    return answer
One can now see that the query time is O(S).
Note that b_L \ge b_R means that L and R are within O(S) elements of each other, so we can do the old query instead.
Let’s now see how to precompute \text{next}, \text{prev} and \text{last_in_blocks}. First, \text{next}[i] and \text{prev}[i] can easily be computed in O(N) time with the following code:
...
next[1..N]
prev[1..N]
last[1..M]  # initialized to 0
...

def precompute():
    ...
    for i in 1...N
        next[i] = N+1
        prev[i] = 0
    for i in 1...N
        j = last[A[i]]
        if j != 0
            next[j] = i
            prev[i] = j
        last[A[i]] = i
The last array stores the last index encountered for every value, and is updated as we traverse the array.
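A runnable Python translation of this next/prev precomputation (1-indexed arrays, with 0 and N+1 standing for "no previous/next occurrence"; a dict plays the role of the last array):

```python
def build_links(A):
    """Build next/prev links between equal values of A (A is 0-indexed).
    Returned arrays are 1-indexed; prv[i] == 0 and nxt[i] == N+1 mean
    'no other occurrence on that side'."""
    n = len(A)
    nxt = [n + 1] * (n + 1)
    prv = [0] * (n + 1)
    last = {}                      # value -> last index seen so far
    for i in range(1, n + 1):
        j = last.get(A[i - 1], 0)
        if j != 0:
            nxt[j] = i
            prv[i] = j
        last[A[i - 1]] = i
    return nxt, prv
```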
And then \text{last_in_blocks} can be computed in O(BN) time:
...
last_in_blocks[1..B][1..N]  # initialized to 0
...

def precompute():
    ...
    for b in 1...B
        L = b*S-S+1
        R = min(b*S, N)
        for y in L...R
            if next[y] > R
                x = y
                while x > 0
                    last_in_blocks[b][x] = y
                    x = prev[x]
    for x in 1...N
        for b in 2...B
            if last_in_blocks[b][x] == 0
                last_in_blocks[b][x] = last_in_blocks[b-1][x]
The first loop finds the last value encountered in each block (with the check next[y] > R), and proceeds to set last_in_blocks for all earlier indices with an equal value, using the prev pointer. The second loop fills out the remaining entries, because some values do not have representatives in some blocks.
Running time
Now, what is the total running time then? The precomputation runs in O(BN) time, and each query takes O(S) time, so overall it is O(NB + QS). But remember that B = \Theta(N/S), so the algorithm is just O(N^2/S + QS). But we still have the freedom to choose the value of S. Now, most will simply choose S = \Theta(\sqrt{N}), so that the running time is O((N+Q)\sqrt{N}), but we are special, so we will be more pedantic.
Note that N^2/S is a decreasing function while QS is an increasing function. Also, remember that O(f(x)+g(x)) = O(\max(f(x),g(x))) (why?). Therefore, the best choice for S is one that makes N^2/S and QS equal (at least asymptotically). Thus, we want the choice S = \Theta(N/\sqrt{Q}) instead, and the running time is O(N\sqrt{Q}+Q) (the +Q is there to account for when Q > N^2). For this problem, there's not much difference between this and O((N+Q)\sqrt{N}), but the running time O(N\sqrt{Q}+Q) is mostly of theoretical interest; when Q is much less than N (or much more), you'll feel the difference.
Optimization side note: there is another way to do the old query without using our LazyMap, or at least without calling has_key: traverse the array backwards. Here is an example:
...
_index[1..M]
...

def answer_query(L, R):
    ...
    if b_L >= b_R
        # old query
        answer = 0
        for y in R...L by -1
            _index[A[y]] = y
        for y in L...R
            answer = max(answer, y - _index[A[y]])
    else
        ...
    return answer
I found that this is a teeny tiny bit faster than the original O(S) old query.
Also, when choosing S, one does not have to choose \lfloor \sqrt{N} \rfloor, or even \lfloor N/\sqrt{Q} \rfloor, because there is still a constant hidden in the \Theta notation. This means that you still have the freedom to choose a multiplicative constant for S, which in practice essentially amounts to the freedom to select S however you want. To get the best value for S, try generating a large input (with varying values of M !), and finding the best choice for S via ternary search. The goal is to get the precomputation part and the query part roughly equal in running time. This technique of tweaking the parameters is incredibly useful in long contests where the time limit is usually tight.
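To tie all of the pieces together, here is a hedged, self-contained Python sketch of the whole algorithm (function and variable names are mine; queries are 1-indexed pairs over a 0-indexed list; this is an illustration, not the author's reference solution):

```python
import math

def solve(A, queries, S=None):
    """Sqrt-decomposition solution sketched from the editorial."""
    n = len(A)
    S = S or max(1, math.isqrt(n))
    B = (n + S - 1) // S

    # next/prev links (1-indexed; 0 and n+1 mean "none")
    nxt, prv, last = [n + 1] * (n + 1), [0] * (n + 1), {}
    for i in range(1, n + 1):
        j = last.get(A[i - 1], 0)
        if j:
            nxt[j], prv[i] = i, j
        last[A[i - 1]] = i

    # block_ans[b][r]: answer for query (L, R) = ((b-1)*S + 1, r)
    block_ans = [[0] * (n + 1) for _ in range(B + 2)]
    for b in range(1, B + 1):
        first, ans = {}, 0
        for r in range((b - 1) * S + 1, n + 1):
            first.setdefault(A[r - 1], r)
            ans = max(ans, r - first[A[r - 1]])
            block_ans[b][r] = ans

    # lib[b][x]: largest k <= b*S with A[k] == A[x]
    lib = [[0] * (n + 1) for _ in range(B + 1)]
    for b in range(1, B + 1):
        lo, hi = (b - 1) * S + 1, min(b * S, n)
        for y in range(lo, hi + 1):
            if nxt[y] > hi:            # y is the last occurrence in this block
                x = y
                while x > 0:
                    lib[b][x] = y
                    x = prv[x]
    for x in range(1, n + 1):
        for b in range(2, B + 1):
            if lib[b][x] == 0:
                lib[b][x] = lib[b - 1][x]

    def query(L, R):
        b_L, b_R = (L + S - 1) // S, R // S
        if b_L >= b_R:                 # short range: plain O(S) scan
            first, ans = {}, 0
            for y in range(L, R + 1):
                first.setdefault(A[y - 1], y)
                ans = max(ans, y - first[A[y - 1]])
            return ans
        ans = block_ans[b_L + 1][R]
        for x in range(L, b_L * S + 1):
            if prv[x] < L:             # x is a first occurrence in [L, R]
                y = lib[b_R][x]
                while nxt[y] <= R:
                    y = nxt[y]
                ans = max(ans, y - x)
        return ans

    return [query(L, R) for (L, R) in queries]
```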
Time Complexity:
O(M + (N + Q)\sqrt{N}), but theoretically it is O(N \sqrt{Q} + Q).
Note that in the problem, Q is actually K.
|
https://discuss.codechef.com/t/qchef-editorial/10123
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
I thought about spacing my questions, but... scene.present_modal_scene issues
I do feel bad, I don't want to be nagging but since there aren't 3rd-party resources for most of my questions, and I'm occasionally bumping into problems that I can't seem to figure out, I don't think I have anywhere else to go.
I'm trying to work out multiple-scene situations now. I thought the first possibility sounded very easy to implement: use present_modal_scene to display a scene on top of the original one. Unfortunately, even though everything looks right to me and I'm doing it exactly the same way as the example games (as far as I can tell), I'm getting an error.
Secondarily, I don't see an elegant way to move between scenes. Currently I have noticed that I can create another scene and run it whenever I want but it just puts it on top of the previous one, which is still running in the background and when I hit the "x", it takes me to the original scene. That doesn't seem like an ideal outcome.
So, the first problem. Here's my modal scene code:
import scene
import ui

class Scene1(scene.Scene):
    def __init__(self):
        pass
    def touch_began(self, touch):
        self.menu = Scene2()
        self.present_modal_scene(self.menu)

class Scene2(scene.Scene):
    def __init__(self):
        pass

scene.run(Scene1())
When I run that code it starts fine but when I tap to bring up the second scene, I get this error:
ValueError: max() arg is an empty sequence
I'm very stumped. I feel like it couldn't be a simpler thing but I've hit a wall.
The second problem (the general ability to switch between scenes), I don't even know where to begin with because I can't find any examples or info in the documentation.
Sorry for asking so many questions!
I haven't run your code yet, but I think the problem is mostly that your __init__ methods are empty. That way, you're preventing the default initialization, which probably leads to weird errors down the line because variables that the scene expects to be there while running aren't initialized, etc.

You'd either have to call the base class's __init__ in your own initialization, or remove your __init__ methods entirely (it's often more convenient to use setup instead of __init__ for scenes).
Ah, I just ran the code, and it turns out that there is also a bug in the scene module that makes present_modal_scene not work when the presenting scene has no content. The reason is that it's trying to set the z position (layer ordering) to the maximum of its children, but that fails when there are no children... You can work around this by just adding a dummy Node to the scene.
Hmmm interesting! I'll try that out shortly then. Any advice on multiple "main" scenes and how to more properly switch between them?
@WTFruit You can set the self.view.scene attribute, though you'll have to call setup manually then. Here's a minimal example to illustrate:
import scene

class Scene1(scene.Scene):
    def setup(self):
        self.background_color = 'green'
    def touch_began(self, touch):
        next_scene = Scene2()
        self.view.scene = next_scene
        next_scene.setup()

class Scene2(scene.Scene):
    def setup(self):
        self.background_color = 'red'

scene.run(Scene1())
@omz (or anyone else reading this)
I finally got an implementation of the modal_scene working (took me long enough), it brings up the modal scene and everything. But it still gives me the same initial error at the moment that the modal scene is called. Is there anything I can do about that?
Thank you for the scene-change code by the way, it works like a charm. Although I was curious why this works:
self.menu = Scene2()
self.present_modal_scene(self.menu)
But not this:
self.present_modal_scene(Scene2())
I guess I'm misunderstanding the use of self.menu, I assumed it was basically a pointer, a stand-in for the other phrasing of the code (Scene2()).
|
https://forum.omz-software.com/topic/3195/i-thought-about-spacing-my-questions-but-scene-present_modal_scene-issues
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
rich_link_preview
A Rich Link Preview widget written in Dart generating a rich presentation of the given link from social meta tags.
Getting Started
Add the following line in your pubspec file
rich_link_preview:
Get the package by running the command
flutter packages get
Include the widget in your dart file
import 'package:rich_link_preview/rich_link_preview.dart';
Example Usage:
RichLinkPreview(
  link: '',
  appendToLink: true,
)

RichLinkPreview(
  link: ''
),
|
https://pub.dev/documentation/rich_link_preview/latest/
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
from decimal import Decimal

'%.2E' % Decimal('40800000000.00000000000000')
# returns '4.08E+10'
In your '40800000000.00000000000000' there are many more significant zeros, which carry the same meaning as any other digit. That's why you have to say explicitly where you want to stop.
If you want to remove all trailing zeros automatically, you can try:
def format_e(n):
    a = '%E' % n
    return a.split('E')[0].rstrip('0').rstrip('.') + 'E' + a.split('E')[1]

format_e(Decimal('40800000000.00000000000000'))  # '4.08E+10'
format_e(Decimal('40000000000.00000000000000'))  # '4E+10'
format_e(Decimal('40812300000.00000000000000'))  # '4.08123E+10'
Here's an example using the format() function:
>>> "{:.2E}".format(Decimal('40800000000.00000000000000'))
'4.08E+10'
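For completeness, the same format specification also works with the built-in format() and, on Python 3.6+, inside an f-string:

```python
from decimal import Decimal

value = Decimal('40800000000.00000000000000')
formatted = f"{value:.2E}"       # same '.2E' spec, via an f-string
also = format(value, '.2E')      # ...or via the built-in format()
```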
original format() proposal
|
https://pythonpedia.com/en/knowledge-base/6913532/display-a-decimal-in-scientific-notation
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
I'm trying to make a very simple 'counter' that is supposed to keep track of how many times my program has been executed.
First, I have a textfile that only includes one character:
0
Then I open the file, parse it as an int, add 1 to the value, and then try to return it to the textfile:
f = open('testfile.txt', 'r+')
x = f.read()
y = int(x) + 1
print(y)
f.write(y)
f.close()
I'd like to have y overwrite the value in the textfile, and then close it. But all I get is:

TypeError: expected a character buffer object
Trying to parse y as a string:

f.write(str(y))

gives:

IOError: [Errno 0] Error
Have you checked the docstring of write()? It says:
write(str) -> None. Write string str to file.
Note that due to buffering, flush() or close() may be needed before the file on disk reflects the data written.
So you need to convert y to str first.
Also note that the string will be written at the current position, which will be at the end of the file, because you'll already have read the old value. Use f.seek(0) to get to the beginning of the file.
Edit: As for the IOError, this issue seems related. A cite from there:
For the modes where both read and writing (or appending) are allowed (those which include a "+" sign), the stream should be flushed (fflush) or repositioned (fseek, fsetpos, rewind) between either a reading operation followed by a writing operation or a writing operation followed by a reading operation.
So, I suggest you try f.seek(0) and maybe the problem goes away.
from __future__ import with_statement

with open('file.txt', 'r+') as f:
    counter = str(int(f.read().strip()) + 1)
    f.seek(0)
    f.write(counter)
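Wrapping the same read-increment-write cycle into a reusable function gives a self-contained sketch. The extra truncate() call is defensive: incrementing never shortens the number, but in general a shorter new value would otherwise leave stale trailing digits behind:

```python
def bump_counter(path):
    """Read the integer stored in `path`, add 1, and write it back in place."""
    with open(path, 'r+') as f:
        count = int(f.read().strip()) + 1
        f.seek(0)        # rewind: the read left the position at end-of-file
        f.write(str(count))
        f.truncate()     # defensive: drop leftovers if the value ever gets shorter
```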
|
https://pythonpedia.com/en/knowledge-base/9786941/typeerror--expected-a-character-buffer-object---while-trying-to-save-integer-to-textfile
|
CC-MAIN-2020-40
|
en
|
refinedweb
|
Sometimes we need to access platform-specific functionality from shared code. With MvvmCross we can do this by declaring an interface alongside the view model and implementing the platform-specific code in the Windows 10 project.
Lets see how to do this in detail.
Step 1 : Creating Interface in the View Model
Create an IPageHelper interface in the View Model with a ShowMessage method.
public interface IPageHelper
{
    void ShowMessage(string message);
}
Step 2 : Implement the Platform Specific Code.
Create a PageHelper class implementing IPageHelper, using the MessageDialog class to display the message to the user.
public class PageHelper : IPageHelper
{
    public const string AppName = "Daily Dot net Tips";

    public async void ShowMessage(string message)
    {
        try
        {
            MessageDialog msg = new MessageDialog(message, AppName);
            msg.Commands.Add(new UICommand("Ok"));
            await msg.ShowAsync();
        }
        catch { }
    }
}
Step 3 : override the InitializeFirstChance method
Go to the SetUp.cs file and override the InitializeFirstChance method.
Register the class and interface as Singleton as shown below.
protected override void InitializeFirstChance()
{
    Mvx.RegisterSingleton<IPageHelper>(new PageHelper());
    base.InitializeFirstChance();
}
Step 4: Invoke the Method.
That’s it. Now your class is registered as a singleton, and whenever required we can invoke it by resolving the interface.
var pagehelper = Mvx.Resolve<IPageHelper>();
pagehelper.ShowMessage(MessageConstants.NoRecordsFoundMessage);
Now you can run the app. The message dialog gets displayed.
Hope this post might have helped.
|
http://dailydotnettips.com/2016/05/11/accessing-platform-specific-code-using-ioc-in-mvvm-cross/
|
CC-MAIN-2018-05
|
en
|
refinedweb
|
public class Arithmetic<T>
{
    public T Add(T input1, T input2)
    {
        return input1 + input2;
    }
}
...
public int Add(int input1, int input2)
{
    return input1 + input2;
}
...
...
public decimal Add(decimal input1, decimal input2)
{
    return input1 + input2;
}
...
Arithmetic<int> adder1 = new Arithmetic<int>();
int input1 = 1;
int input2 = 2;
int result1 = adder1.Add(input1, input2);

Arithmetic<decimal> adder2 = new Arithmetic<decimal>();
decimal input3 = 1.0m;
decimal input4 = 2.0m;
decimal result2 = adder2.Add(input3, input4);
I'm sure you'd agree that we can use the function defined immediately above because it is designed to take in only ints. That was supposed to say "can not" in my comment.
|
https://www.experts-exchange.com/questions/27858760/generics-problemis-to-avoid-type-cast-problem-is-i-correct.html
|
CC-MAIN-2018-05
|
en
|
refinedweb
|
System.Security.Cryptography Namespace
Silverlight
The System.Security.Cryptography namespace provides cryptographic services, including secure encoding and decoding of data, as well as many other operations, such as hashing, random number generation, and message authentication.
explain_stat_or_die - get file status and report errors
#include <libexplain/stat.h>
void explain_stat_or_die(const char *pathname, struct stat *buf);
The explain_stat_or_die function is used to call the stat(2) system call. On failure an explanation will be printed to stderr, obtained from explain_stat(3), and then the process terminates by calling exit(EXIT_FAILURE). This function is intended to be used in a fashion similar to the following example:

explain_stat_or_die(pathname, buf);

pathname: The pathname, exactly as to be passed to the stat(2) system call.
buf: The buf, exactly as to be passed to the stat(2) system call.
Returns: This function only returns on success. On failure, it prints an explanation and exits.
stat(2): get file status
explain_stat(3): explain stat(2) errors
exit(2): terminate the calling process
libexplain version 0.19 Copyright (C) 2008 Peter Miller explain_stat_or_die(3)
gearman_job_send_warning - Job Declarations
#include <libgearman/gearman.h>
gearman_return_t gearman_job_send_warning(gearman_job_st *job, const void *warning, size_t warning_size);
Send warning for a running job.
The Gearman homepage:
Bugs should be reported at
Copyright (C) 2008 Brian Aker, Eric Day. All rights reserved. Use and distribution licensed under the BSD license. See the COPYING file in the original source for full text.
Background

In the package java.util.concurrent there are numerous classes that enable concurrent access to various objects and data structures. However, there is a lack of a concurrent Set in the standard libraries. In this post I will show how to fix this problem.
The easy way out

There is a very simple way of getting a kind of concurrent Set with elements of type K in Java:
Map<K, Boolean> concurrentMap = new ConcurrentHashMap<>();
Set<K> concurrentSet = Collections.newSetFromMap(concurrentMap);
For example, one can get a concurrent Set of strings like this
Set<String> concurrentSet = Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());
The concurrentSet will exhibit the same concurrency properties as the underlying ConcurrentMap does (i.e. it will provide concurrent lock free access in this case) which is good. It also inherits all the other Map properties such as null key capability etc.
Of course you can provide any map to the newSetFromMap() including the non-concurrent HashMap (keys are in any order), LinkedHashMap (preserves insertion order) or TreeMap (keeps the keys in their natural order). The resulting Set will inherit those underlying properties. For example, if you provide LinkedHashMap, you will get a non-concurrent Set where the keys will be retrieve in insertion order.
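A quick, runnable illustration of the ordering point: backing the set with a LinkedHashMap makes iteration follow insertion order.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Set;

// The Set returned by newSetFromMap inherits the backing map's key ordering:
// a LinkedHashMap-backed set iterates in insertion order.
public class SetOrderDemo {
    public static List<Integer> insertionOrder() {
        Set<Integer> set = Collections.newSetFromMap(new LinkedHashMap<>());
        set.add(3);
        set.add(1);
        set.add(2);
        return new ArrayList<>(set);
    }

    public static void main(String[] args) {
        System.out.println(insertionOrder()); // [3, 1, 2]
    }
}
```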
Limitations and Drawbacks

When you are using the Collections.newSetFromMap() method, the Map you provide must be empty from the beginning. You must also take care not to keep an additional reference to the backing map, because newSetFromMap() does not perform any defensive copy of the map. If you keep such a reference, you can alter the Set using the Map reference too, which is unsafe. The value type Boolean used in the underlying Map also strikes me as odd. Why was that particular type selected, one might ask oneself? Object would be more general, one might argue.
The most important drawback, in my opinion, is that the returned Set really is just a Set that just happens to be concurrent (again, if you provide a ConcurrentMap). For example, it does not implement the methods that correspond to the features ConcurrentMap brings over Map, like putIfAbsent() and numerous other new Java 8 methods like computeIfAbsent(). There is no way to really guarantee that the Set is concurrent.
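The "no defensive copy" caveat is easy to demonstrate: a retained reference to the backing map is a back door into the set.

```java
import java.util.Collections;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// newSetFromMap performs no defensive copy: mutating the retained map
// reference changes the Set behind its back, which is why keeping such
// a reference is unsafe.
public class BackingMapDemo {
    public static boolean leakThroughMap() {
        Map<String, Boolean> backing = new ConcurrentHashMap<>();
        Set<String> set = Collections.newSetFromMap(backing);
        backing.put("sneaky", Boolean.TRUE); // bypasses the Set API entirely
        return set.contains("sneaky");
    }

    public static void main(String[] args) {
        System.out.println(leakThroughMap()); // true
    }
}
```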
The Interface Proposal

So what do we want to accomplish here? Well, wouldn't it be nice if we could define an interface ConcurrentSet and use it just like we are using the interface ConcurrentMap. Let us take a look at the latter interface (as defined in Java 8):
public interface ConcurrentMap<K, V> extends Map<K, V> {
    V getOrDefault(Object key, V defaultValue);
    void forEach(BiConsumer<? super K, ? super V> action);
    V putIfAbsent(K key, V value);
    boolean remove(Object key, Object value);
    boolean replace(K key, V oldValue, V newValue);
    void replaceAll(BiFunction<? super K, ? super V, ? extends V> function);
    V computeIfAbsent(K key, Function<? super K, ? extends V> mappingFunction);
    V computeIfPresent(K key, BiFunction<? super K, ? super V, ? extends V> remappingFunction);
    V compute(K key, BiFunction<? super K, ? super V, ? extends V> remappingFunction);
    V merge(K key, V value, BiFunction<? super V, ? super V, ? extends V> remappingFunction);
}

As can be seen, most of the methods are dealing with values rather than keys. Thus, most of the methods cannot be applied to a ConcurrentSet. So, we can discard most methods and perhaps redefine others so that they only deal with keys. First I thought that we should skip all methods except the putIfAbsent() method, but I was wrong since it is equivalent to add() (thanks for pointing that out Louis Wasserman). So, we drop all methods and end up with just a Marker Interface with no extra methods over Set. Still, it is useful because we can show our intent with the new interface.
public interface ConcurrentSet<E> extends Set<E> { }
Additional concurrent methods can be added later if desired. Please leave a comment below if you can think of new methods that would be nice to add to a ConcurrentSet.
What's Already There?

If we take a closer look at the Collections.newSetFromMap() method, we see that it basically delegates the methods from a backing map (and the backing map's keySet). Below, I have shown a shortened version of the SetFromMap class, so you will get the basic idea. The same concept is also used by Java's ConcurrentSkipListSet, so this certainly seems to be common practice.
private static class SetFromMap<E> extends AbstractSet<E> implements Set<E>, Serializable {

    private final Map<E, Boolean> m;  // The backing map
    private transient Set<E> s;       // Its keySet

    SetFromMap(Map<E, Boolean> map) {
        if (!map.isEmpty())
            throw new IllegalArgumentException("Map is non-empty");
        m = map;
        s = map.keySet();
    }

    public void clear()               { m.clear(); }
    public int size()                 { return m.size(); }
    public boolean isEmpty()          { return m.isEmpty(); }
    public boolean contains(Object o) { return m.containsKey(o); }
    public boolean remove(Object o)   { return m.remove(o) != null; }
    public boolean add(E e)           { return m.put(e, Boolean.TRUE) == null; }
    public Iterator<E> iterator()     { return s.iterator(); }
    public Object[] toArray()         { return s.toArray(); }
    public <T> T[] toArray(T[] a)     { return s.toArray(a); }
    public String toString()          { return s.toString(); }
    public int hashCode()             { return s.hashCode(); }
    public boolean equals(Object o)   { return o == this || s.equals(o); }
    public boolean containsAll(Collection<?> c) { return s.containsAll(c); }
    public boolean removeAll(Collection<?> c)   { return s.removeAll(c); }
    public boolean retainAll(Collection<?> c)   { return s.retainAll(c); }
    // The rest comes here...
}
An Implementation Proposal

Using the Delegation Pattern, we can easily come up with a similar implementation of the new ConcurrentSet interface, as shown below:
/**
 * A hash set supporting full concurrency of retrievals and
 * high expected concurrency for updates.
 *
 * @param <E> the type of elements maintained by this set
 * @author pemi
 */
public class ConcurrentHashSet<E> implements ConcurrentSet<E>, Serializable {

    private final ConcurrentMap<E, Object> m;
    private transient Set<E> s;

    public ConcurrentHashSet() {
        this.m = new ConcurrentHashMap<>();
        init();
    }

    public ConcurrentHashSet(int initialCapacity) {
        this.m = new ConcurrentHashMap<>(initialCapacity);
        init();
    }

    public ConcurrentHashSet(int initialCapacity, float loadFactor) {
        this.m = new ConcurrentHashMap<>(initialCapacity, loadFactor);
        init();
    }

    public ConcurrentHashSet(int initialCapacity, float loadFactor, int concurrencyLevel) {
        this.m = new ConcurrentHashMap<>(initialCapacity, loadFactor, concurrencyLevel);
        init();
    }

    public ConcurrentHashSet(Set<? extends E> s) {
        this(Math.max(Objects.requireNonNull(s).size(), 16));
        addAll(s);
    }

    // New type of constructor
    public ConcurrentHashSet(Supplier<? extends ConcurrentMap<E, Object>> concurrentMapSupplier) {
        final ConcurrentMap<E, Object> newMap = concurrentMapSupplier.get();
        if (!(newMap instanceof ConcurrentMap)) {
            throw new IllegalArgumentException("The supplied map does not implement " + ConcurrentMap.class.getSimpleName());
        }
        this.m = newMap;
        init();
    }

    private void init() {
        this.s = m.keySet();
    }

    @Override public void clear()               { m.clear(); }
    @Override public int size()                 { return m.size(); }
    @Override public boolean isEmpty()          { return m.isEmpty(); }
    @Override public boolean contains(Object o) { return m.containsKey(o); }
    @Override public boolean remove(Object o)   { return m.remove(o) != null; }
    @Override public boolean add(E e)           { return m.put(e, Boolean.TRUE) == null; }
    @Override public Iterator<E> iterator()     { return s.iterator(); }
    @Override public Object[] toArray()         { return s.toArray(); }
    @Override public <T> T[] toArray(T[] a)     { return s.toArray(a); }
    @Override public String toString()          { return s.toString(); }
    @Override public int hashCode()             { return s.hashCode(); }
    @Override public boolean equals(Object o)   { return s.equals(o); }
    @Override public boolean containsAll(Collection<?> c) { return s.containsAll(c); }
    @Override public boolean removeAll(Collection<?> c)   { return s.removeAll(c); }
    @Override public boolean retainAll(Collection<?> c)   { return s.retainAll(c); }

    @Override
    public boolean addAll(Collection<? extends E> c) {
        // Use Java 8 Stream
        return Objects.requireNonNull(c).stream().map((e) -> add(e)).filter((b) -> b).count() > 0;
    }

    // Override default methods in Collection
    @Override public void forEach(Consumer<? super E> action)        { s.forEach(action); }
    @Override public boolean removeIf(Predicate<? super E> filter)   { return s.removeIf(filter); }
    @Override public Spliterator<E> spliterator()                    { return s.spliterator(); }
    @Override public Stream<E> stream()                              { return s.stream(); }
    @Override public Stream<E> parallelStream()                      { return s.parallelStream(); }

    private static final long serialVersionUID = -913526372691027123L;

    private void readObject(java.io.ObjectInputStream stream) throws IOException, ClassNotFoundException {
        stream.defaultReadObject();
        init();
    }
}

Note the new constructor ConcurrentHashSet(Supplier<ConcurrentMap<E, Object>> concurrentMapSupplier) that allows us to provide any ConcurrentMap as the underlying map at creation time. By providing a Supplier rather than a concrete Map instance, we avoid the double reference problem and the "map must be empty" problem associated with the Collections.newSetFromMap() method. For example, we can create a ConcurrentSet with the keys in their natural order by calling new ConcurrentHashSet(ConcurrentSkipListMap::new).
Worth noticing is also the addAll() method, which uses Java 8's stream library to iteratively add new elements to the Set. We then filter out all add() calls that returned true and, if there were more than zero such additions, we return true (i.e. there was a modification of the set).
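Extracted as a standalone helper, the stream-based addAll() behaves like this (a sketch for illustration; note that the map/filter/count pipeline does not short-circuit, which is what guarantees every element actually gets added):

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.Objects;
import java.util.Set;

// Map each element to the result of add(), keep the 'true' results,
// and report whether at least one element was actually new.
public class AddAllDemo {
    static <E> boolean addAll(Set<E> target, Collection<? extends E> c) {
        return Objects.requireNonNull(c).stream()
                .map(target::add)
                .filter(b -> b)
                .count() > 0;
    }

    public static void main(String[] args) {
        Set<Integer> set = new HashSet<>(Arrays.asList(1, 2));
        System.out.println(addAll(set, Arrays.asList(2, 3))); // true: 3 was new
        System.out.println(addAll(set, Arrays.asList(1, 2))); // false: nothing new
    }
}
```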
Game, Set and match...
Your addIfAbsent method seems exactly equivalent to the perfectly normal Set.add method, which already returns true if and only if the element was previously absent and has since been added.
You are absolutely right Louis. Thanks for pointing out that. I have updated the post accordingly.
Dude when you write articles why don't you colorise the source code???? WHY?? Why is everything half finished?
Hi Anonymous. I have improved the look of the code examples from now on. Checkout and tell me what you think.
What about using Collections.synchronizedSet(new HashSet(...)); ?
A synchronized Set is not concurrent. A synchronized Set will only accept one thread at a time whereas a concurrent Set can accept a plurality of concurrent threads. Hence its name.
How to do accessible inline errors for form selects?
We need to update our forms to be accessibility compliant. I am looking for information on how to write an inline error for a form select. I have the form field labeled and have an inline error, but screen readers do not announce the error.
Here is what we are using
<div class="select-wrapper">
  <label for="securityQuestion" class="control-label bolded">Security Question</label>
  <select id="securityQuestion" name="securityQuestion" class="form-control">
    <option value="0">Select a Question</option>
    <option value="10">What is your mother's maiden name?</option>
    <option value="9">What's your town of birth?</option>
    <option value="11">What is your father's middle name?</option>
    <option value="12">What street did you live on when you were 10 years old?</option>
  </select>
  <div class="clear form-error-msg" id="securityQuestion_error"><span class="shop-icon icon" aria-r</span><span id="error-msg-text">Please select a security hint question.</span></div>
</div>
So if the user doesn't select an option and submits, we dynamically inject an error just after the form field, as shown above. This works fine for inputs and text boxes, but for the select, the reader skips it.
I've been reading different blogs and specs, and have read many times: don't use selects, use radios or text boxes instead.
So any ideas?
posted January 01, 2004 06:23 AM
Andrew: Also - your entire concept requires the book() method to be on the server.
Javini: Thanks for your response. I'll study the referenced link topic discussion.
[Can] the cookie simply be a hash value for the current thread?
In general, Sun's documents are so ambiguous, that what probably happens is that people bring their experience to their reading; so, for me, it never even occurred to me to expose lock and unlock to the client; I would never do this in real life, and I never even considered it as something Sun even requested; so, I certainly have no intention whatsoever of exposing lock and unlock to the client (bad idea, bad design, no way).
This thread began, because I was curious if I would be automatically failed if I only trivially implement lock(), unlock(), and isLocked(), as they are not needed at all for my server to keep the database file from being corrupted.
1. It's a must requirement to implement DBMain in Data; but, the Java programming language allows an implementation of a Java interface to be an empty method: public void lock() {} and thus, as far as meeting the requirements, it's implemented.
2. "Server: Locking: Your server must be capable of handling multiple concurrent requests, and as part of this capability, must provide locking functionality AS SPECIFIED in the interface provided above." So, specifications and implementations are two different things (again, I'm being a language lawyer here): the interface specifies what must be done, the implementation are empty methods; and, my design implements the locking using synchronized methods in my BusinessRequests class.
The DBMain interface states the intention of the server.
And, probably the best thing to do is not argue with Sun as a language lawyer, but to use your language-lawyer skills while thinking things through, and justify your final design and implementation with accepted principles.
Okay, I knew I left something out; here are Sun's supplied comments for these three methods as found in the DBMain Java interface code: lock(): "Locks a record so that it can only be updated or deleted by this client. ..." Well, what is "this client" mean; if my threads are controlled, I don't care exactly which client "this client" is, right? unlock(): "Releases the lock on a record." isLocked(): "Determines if a record is currently locked. ..." So, in conclusion, I see nothing in my particular specifications which mandates that locking a record and associating this lock to a specific client is required. If this assertion is true, then perhaps I've gotten an easier version of the assignment than other postings I've seen here.
Multiple clients use the server; the server's multithreaded remote method invokes singleton DataManager methods, with each business method (such as "book()") synchronized. DataManager in turn uses Data, which uses a RandomAccessFile, which uses a physical random access file.
Except for the one must condition that I implement the DBMain Java interface, in which case I need to implement lock(), unlock(), and isLocked() even though to do so would, it appears to me, be completely silly given my design.
I would read that differently. It is clear from another part of the instructions that you the Data class must implement the interface. And the server instructions state that the server must provide the same functionality as well. I think this is a reasonably standard request: you have a standard interface which multiple clients use, now you want to provide it over a different medium (in our case RMI, but it could be over SOAP or MQ), and you want to keep the same interface.
Do you have the instructions "Portions of your submission will be analyzed by software; where a specific spelling or structure is required, even a slight deviation could result in automatic failure.".
Yes, justification is the big thing. Spend a lot of time making sure that it is explicit what you are doing and why.
Originally posted by Andrew Monkhouse: Hi Javini, I think your code will work, and as you have noted it does not need the lock methods. The only problem with this concept is that you have created a bottleneck in a place where no bottleneck is required. Consider a simple case: ...lines deleted... But lets consider a future enhancement to the business logic. We want to calculate exchange rates at the time of booking (to get the most favourable exchange rate), which has to be calculated on number of beds requested: ..lines deleted... Hmmm - clients are blocked for longer now. The second client has to wait until the first client has finished all that work before they have any chance of finding out if the record is still available. Lets stop that method from being synchronized, and use some locking, and see what happens:
Client A Client B
================ ================
lockRecord(5)
lockRecord(4)
readRecord(5)
readRecord(4)
if available if available
getExchangeRate getExchangeRate
calculatePrice calculatePrice
updateRecord updateRecord
endif endif
unlockRecord(5)
unlockRecord(4)
See all those simultaneous lines of code? We have reduced the bottleneck. Regards, Andrew
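The per-record lock/unlock protocol in the interleaving above can be sketched as a small monitor: a thread trying to lock a record held by another thread waits until unlock() releases it. This is only an illustrative outline (class and method names are mine, not Sun's assignment interface, and there is no client-cookie association here):

```java
import java.util.HashSet;
import java.util.Set;

// Per-record locking: lock(recNo) blocks while another thread holds the
// record; unlock(recNo) wakes all waiters so they can re-check.
public class LockManager {
    private final Set<Integer> lockedRecords = new HashSet<>();

    public synchronized void lock(int recNo) throws InterruptedException {
        while (lockedRecords.contains(recNo)) {
            wait(); // released-record notification arrives via notifyAll()
        }
        lockedRecords.add(recNo);
    }

    public synchronized void unlock(int recNo) {
        lockedRecords.remove(recNo);
        notifyAll(); // wake waiters so each can re-check its own record
    }

    public synchronized boolean isLocked(int recNo) {
        return lockedRecords.contains(recNo);
    }

    public static void main(String[] args) throws InterruptedException {
        LockManager lm = new LockManager();
        lm.lock(5);
        System.out.println(lm.isLocked(5)); // true
        lm.unlock(5);
        System.out.println(lm.isLocked(5)); // false
    }
}
```

Because only the record-set membership is guarded, two clients working on different records (as in Andrew's example) proceed in parallel; contention arises only when they want the same record.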
Search: Search took 0.03 seconds.
- 26 Jun 2015 7:06 AM
You changed the layout to vbox; only the form layout will display the labels correctly.
Try wrapping a displayfield with a container using the layout: 'form' and the label should show up.
- 24 Jun 2015 1:21 PM
Check this thread for what you're looking for.
For the height really you just need a...
- 9 Mar 2015 1:39 PM
I see how it goes thanks Mitch! :D
I can be the soberish driver!
- 9 Mar 2015 12:24 PM
I'll be there on Saturday visiting family but I will sign up for the Meet and Greet.
And of course I will be at Sencha Con 2015.
See you all there!
Just order a keg of beer and you guys can...
- 15 Jan 2015 1:21 PM
Never mind we found the link.
B)
- 15 Jan 2015 7:36 AM
Another team at work was wanting to look into this.
Is there a Sencha link still for this pack?
Thanks,
Ron
- 23 Dec 2014 11:28 AM
I noticed the quick reply to private messages is clearing out my new lines and some spaces also.
Is it the same component?
:-?
- 19 Aug 2014 10:04 AM
I just didn't know if you could specify a target for the link created.
Was all I was wanting to know.
I know the workarounds just trying to have it do it by default.
:)
- 18 Aug 2014 10:37 AM
I've got a quick question is there a way to make the popup menu items open into a new tab?
I would like to open the 4.2.2 and 3.4.0 docs as new tabs instead of just redirecting.
Thanks,
Ron
...
- 29 Jul 2014 7:24 AM
Wouldn't be a lot easier if you just register all stores? I mean is there really a purpose of not having it registered?
Just food for thought...
Thanks Mitch.
After looking at the source of...
- 29 Jul 2014 7:16 AM
Then it must be an oversight of mine because I thought all stores registered with Store Manager.
I'll have to override our arch to make sure it adds itself to the Store Manager with a dynamic...
- 29 Jul 2014 7:14 AM
Do the stores need to have a storeId or be in the StoreManager before they show up in App Inspector?
I found if you create a store but don't give it a storeId it never shows up in App Inspector.
...
- 29 Jul 2014 7:10 AM
Thanks!
- 28 Jul 2014 1:21 PM
Arthur,
I got busy with meetings so I made a small screenshot with some debug info for you.
49780
- 28 Jul 2014 8:46 AM
I'll make a simple example for you. When I get back from lunch.
I'll catch ya later when I'm in Chicago. :)
Some quick info we are using this Date Format.
'm/d/Y H:i:s.u'
The data shows...
- 28 Jul 2014 8:36 AM
I found an issue in App Inspector 2.0.4: it isn't displaying dates in the store's record.
The data shows up on screen but not when inspecting the store's record.
If you need more details let...
- 16 Jun 2014 7:32 AM
Don,
We are using a single app with multiple loadable packages. We made it into packages so it was easier for us to layer levels of code.
Ext JS 4.2.2 (customized compile)
Our-Arch...
- 7 May 2014 9:19 AM
I'd listen on the event of headertriggerclick and inside that function return false to not show the default menu.
- 7 May 2014 9:10 AM
Friendly bump!
- 30 Apr 2014 12:04 PM
Sencha CMD 4.0.1.45
Does anyone know of the argument for making it skip copying the ext framework to each workspace I create?
Thanks ahead of time.
- 29 Apr 2014 12:43 PM
sencha -sdk ext-4.2.2 compile exclude -namespace Ext.ux,Ext.chart,Ext.flash,Ext.rtl
and include -namespace Ext.ux.IFrame
and -debug=false concat myext-debug-w-comments.js
and -debug=false...
- 21 Apr 2014 9:51 AM
Just a side note if you don't want the black boxes to go below the height and widths you can do minWidth and minHeight so they don't go smaller than those number of pixels and still will be stretched...
- 21 Apr 2014 9:46 AM
Jump to post Thread: font looks blur in IE10 by aw1zard2
You need to run this code in JSLint because there are a lot of trailing commas just looking at the example.
And instead of doing Ext.create outside of the onReady you should use Ext.Define instead...
- 25 Mar 2014 10:21 AM
The Video link is missing under the Community Menu if you're viewing the forums.
A coworker showed it to me figured you all would want to know.
:)
- 10 Mar 2014 7:59 AM
Not sure if you hit this or not but when I was playing around I forgot to change my output to a different folder so the index.html got overwritten and sencha app build stopped working.
Microsoft UI Automation is the new Accessibility framework for Microsoft Windows, available on all Operating Systems that support the Windows Presentation Foundation (WPF). UI Automation provides programmatic access to most user interface (UI) elements on the desktop.
This article demonstrates how to use UI automation to control a simple application. Basically, the controller application will search for a test application which is running, and if the application is found, we will search through the UI elements hierarchy, and we will send dummy data.
The main idea behind this framework is that it provides access to the hierarchy of user interface elements which are visible in Win applications. Imagine the root element of the hierarchy as being the desktop, and all the running applications are the first level nodes (children) of the root, and the same for each visible form which can be detailed to its controls, the controls to its children, and so on. Each node in this tree is called UI element.
The framework helps us by giving access to these UI elements and their properties. The main features supported by this framework are:
System.Windows.Automation is the main namespace of the UI Automation framework, and provides support for interacting with UI elements, also called AutomationElements. AutomationElement is the main class of the framework, and corresponds to a user interface component (window, list, button,...). The main features of this namespace are related to searching for AutomationElement objects, interacting with different control patterns or subscribing for UI automation events.
System.Windows.Automation
AutomationElement contains a property called RootElement, which contains a reference to the correspondent UI element of the desktop, which in fact returns an AutomationElement.
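As a small illustration (a sketch only, not code from the article), the desktop's immediate children, i.e. the top-level windows of running applications, can be enumerated starting from RootElement:

```csharp
using System;
using System.Windows.Automation;

class WindowLister
{
    static void Main()
    {
        // The desktop is the root of the UI Automation tree.
        AutomationElement root = AutomationElement.RootElement;

        // Its direct children are the top-level windows of running applications.
        AutomationElementCollection windows =
            root.FindAll(TreeScope.Children, Condition.TrueCondition);

        foreach (AutomationElement window in windows)
            Console.WriteLine(window.Current.Name);
    }
}
```

This requires references to the UIAutomationClient and UIAutomationTypes assemblies and a desktop session to run in.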
As for search support, the AutomationElement class provides the FindFirst() and FindAll() methods:
public AutomationElement FindFirst(TreeScope scope, Condition condition)
We can use the FindFirst method to search for a specific AutomationElement that matches the given criteria. To call this method, we need to pass the scope of the search (an instance of the TreeScope enum), for example to search only the first level of children, or all descendants. Besides the scope parameter, we have to pass the condition that must be met to identify the required AutomationElement, as an instance of a class that extends the abstract Condition class. Typically this is a PropertyCondition (testing a property for a specific value), but the AndCondition, NotCondition, and OrCondition classes are also available.
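For example (a sketch; the control name and the assumption that appElement was obtained earlier are illustrative, not from the article), several conditions can be combined with AndCondition before calling FindFirst:

```csharp
// "appElement" is assumed to be an AutomationElement obtained earlier.
// Find an enabled button named "OK" anywhere under appElement.
Condition okButtonCondition = new AndCondition(
    new PropertyCondition(AutomationElement.NameProperty, "OK"),
    new PropertyCondition(AutomationElement.ControlTypeProperty, ControlType.Button),
    new PropertyCondition(AutomationElement.IsEnabledProperty, true));

AutomationElement okButton =
    appElement.FindFirst(TreeScope.Descendants, okButtonCondition);
```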
public AutomationElementCollection FindAll(TreeScope scope, Condition condition);
Similar to FindFirst(), FindAll() lets us query the UI Automation tree using the parameters described above, but returns a collection of AutomationElements that match the search condition.
Based on the above details, we can write a simple line of code that searches through all descendants of the root element for a UI element whose name equals 'Notepad':
AutomationElement appElement = AutomationElement.RootElement.FindFirst(
    TreeScope.Descendants,
    new PropertyCondition(AutomationElement.NameProperty, "Notepad"));
In order to use the UI Automation API, we need to add references to these assemblies: UIAutomationClient and UIAutomationTypes.
For this simple implementation, I used the base class AutomationElement, which, as I said earlier, keeps the reference to a UI element from desktop applications. The idea is that we need to start from the desktop element, which is the root element, AutomationElement.RootElement, and we will search through all the child objects of RootElement for a test application having the title: "UI Automation Test Window". After getting a valid reference to the AutomationElement of the test application, we can then interact with the different contained controls. In this way, the controller application sets some values to two TextBox controls.
AutomationElement rootElement = AutomationElement.RootElement;
if (rootElement != null)
{
    Automation.Condition condition = new PropertyCondition(
        AutomationElement.NameProperty, "UI Automation Test Window");
    AutomationElement appElement = rootElement.FindFirst(TreeScope.Children, condition);
    if (appElement != null)
    {
        AutomationElement txtElementA = GetTextElement(appElement, "txtA");
        if (txtElementA != null)
        {
            ValuePattern valuePatternA =
                txtElementA.GetCurrentPattern(ValuePattern.Pattern) as ValuePattern;
            valuePatternA.SetValue("10");
        }
        AutomationElement txtElementB = GetTextElement(appElement, "txtB");
        if (txtElementB != null)
        {
            ValuePattern valuePatternB =
                txtElementB.GetCurrentPattern(ValuePattern.Pattern) as ValuePattern;
            valuePatternB.SetValue("5");
        }
    }
}
Here is the GetTextElement function:
private AutomationElement GetTextElement(AutomationElement parentElement, string value)
{
    Automation.Condition condition =
        new PropertyCondition(AutomationElement.AutomationIdProperty, value);
    AutomationElement txtElement =
        parentElement.FindFirst(TreeScope.Descendants, condition);
    return txtElement;
}
As you can see in the above code, we are using a control pattern, ValuePattern. UIA (UI Automation) uses control patterns to represent common controls. Control patterns define the specific functionality available in a control by providing methods, events, and properties. The methods declared in a control pattern allow UIA clients to manipulate the control, for example the ValuePattern.SetValue() method. Besides ValuePattern, which represents a control that stores a string value, another example is InvokePattern, which represents controls that can be invoked, such as buttons. In order to use a control pattern, we need to query the object to see which interfaces are supported; only after getting a valid control pattern can we interact with it through its methods, events, and properties. The following list shows the most common control patterns:
- ValuePattern: controls that store a string value; ValuePattern.SetValue() writes it.
- InvokePattern: controls that can be invoked, such as buttons.
- SelectionPattern: selection containers, such as ListBox and ComboBox.
- TextPattern: controls whose text can be read (get) or modified (set).
- ScrollPattern: controls that can scroll their content.
- RangeValuePattern: controls whose value lies within a range, such as sliders.
The following example shows how to use an InvokePattern; in other words, it will click a button contained in parentElement:
{
    Automation.Condition condition = new PropertyCondition(
        AutomationElement.AutomationIdProperty, "Button1");
    AutomationElement btnElement =
        parentElement.FindFirst(TreeScope.Descendants, condition);
    InvokePattern btnPattern =
        btnElement.GetCurrentPattern(InvokePattern.Pattern) as InvokePattern;
    btnPattern.Invoke();
}
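The System.Windows.Automation namespace also supports subscribing to UI Automation events, as mentioned at the beginning of the article. A hedged sketch (the element variable btnElement and the handler body are illustrative):

```csharp
// "btnElement" is assumed to be the button's AutomationElement found earlier.
// Get notified whenever the button is invoked (clicked), by any means.
Automation.AddAutomationEventHandler(
    InvokePattern.InvokedEvent,
    btnElement,
    TreeScope.Element,
    (sender, e) => Console.WriteLine("Button was invoked."));

// Later, when the handler is no longer needed:
// Automation.RemoveAllEventHandlers();
```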
UI Automation is a powerful Accessibility framework which lets you control other applications. It can be used in the development of automated UI testing/debugging apps and so on.
There is more to come on this.
|
http://www.codeproject.com/Articles/33049/WPF-UI-Automation?fid=1534472&df=90&mpp=10&sort=Position&spc=None&tid=4142734
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
Binary behaviors and XML schema
As of Windows Internet Explorer 9, XML schemas no longer allow binary behaviors to be imported using namespaces. This affects IE9 standards mode and later document modes.
Instead of using HTML markup, use Cascading Style Sheets (CSS)-based registration through the behavior property.
You can use CSS-based registration through elements other than the class attribute.
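A minimal sketch of the CSS-based registration described above (the .htc file name and the class name are hypothetical; the behavior property is a legacy Internet Explorer extension):

```css
/* Attach the binary behavior through CSS rather than an XML namespace */
.myElement {
  behavior: url(example.htc);
}
```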
A behavior can be specified at the top of a webpage by using HTML markup, as the following code example shows.
However, the code above does not work in XML mode.
Instead, use the code below in XML mode.
|
https://msdn.microsoft.com/library/ff986086(v=vs.85)
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
How do I map xs:date to java.util.Date?
santacruz40 wanted JAXB 2.0 XJC to map xs:date to java.util.Date. Here's how to do it.
The easiest way to do this is to simply modify the generated code. Just find out all the references to XMLGregorianCalendar and replace them with Date. With a modern IDE, this is surprisingly easy. But this only works if your schema doesn't change too often, for you don't want to do this too frequently.
If you'd rather have XJC generate the right thing for you, then you need to write a customization:
The javaType customization takes care of this. You can either copy the
Now, this maps xs:date to Calendar. To map this to Date, you need to define a pair static methods that convert from/to XML string and Date. Fortunately, that can be done relatively easily by using the above parse/printDate functions:
import java.util.Calendar;
import java.util.Date;
import java.util.GregorianCalendar;
import javax.xml.bind.DatatypeConverter;

public class DateAdapter {
    public static Date parseDate(String s) {
        return DatatypeConverter.parseDate(s).getTime();
    }
    public static String printDate(Date dt) {
        Calendar cal = new GregorianCalendar();
        cal.setTime(dt);
        return DatatypeConverter.printDate(cal);
    }
}
Then replace the parse/printMethod attributes of the javaType customization with these methods.
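The blog's original binding snippet did not survive extraction, but a typical external binding file wiring up these adapter methods looks roughly like this (a sketch; the file layout and jxb prefix are conventional, and the DateAdapter class is assumed to be on the classpath):

```xml
<jxb:bindings version="2.0"
    xmlns:jxb="http://java.sun.com/xml/ns/jaxb"
    xmlns:xs="http://www.w3.org/2001/XMLSchema">
  <jxb:globalBindings>
    <!-- Map every xs:date in the schema to java.util.Date -->
    <jxb:javaType name="java.util.Date" xmlType="xs:date"
        parseMethod="DateAdapter.parseDate"
        printMethod="DateAdapter.printDate"/>
  </jxb:globalBindings>
</jxb:bindings>
```

Such a file is typically passed to XJC with the -b option.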
I ran into the mapping problem when marshaling recently and ...
by zachncst - 2012-06-01 11:44
I ran into the mapping problem when marshaling recently and I immediately began to think I needed to add code to do the translation. All google results indicated I had to do more work. However, I didn't notice that the XMLGregorianCalendar had a getXMLSchemaType method. This lead me to believe that it was aware of what type it was translating from to. And in fact it is, this may have been the only solution in the past but if you are seeing XMLGregorianCalendar having problems you can just set certain columns of it to get the time you need. For example: xs:date is year, month and day only. If you're experiencing this problem in 2012, look at the documentation for the class. It helps.
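To illustrate zachncst's point with a small standalone sketch (no JAXB required; javax.xml.datatype ships with the JDK), XMLGregorianCalendar really does know which XML Schema type it represents, based on which fields were set:

```java
import javax.xml.datatype.DatatypeFactory;
import javax.xml.datatype.XMLGregorianCalendar;

public class XmlDateDemo {
    public static void main(String[] args) throws Exception {
        // Parse a lexical xs:date value; only year, month, and day are set.
        XMLGregorianCalendar cal = DatatypeFactory.newInstance()
                .newXMLGregorianCalendar("2012-06-01");

        // The calendar reports which XML Schema type it corresponds to.
        System.out.println(cal.getXMLSchemaType().getLocalPart()); // date
        System.out.println(cal.getYear());                         // 2012
    }
}
```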
Problem with XMLGregorianCalendar field in SOA request
by infoo - 2009-09-03 04:12

In the client code, up to the point where control reaches the proxy class, the timestamp value is present in the request. But when the XML request is constructed (which we have logged to a text file by modifying clientconfig.xml), the date value is present but the time value is dropped. We wrote the WSDL ourselves and used the Axis2 Eclipse plugin to generate the request/response objects, service, and client stubs from it. As our WSDL is already in production, please let us know if there is some sort of workaround to modify the web service client code without changing the WSDL. It is also not possible to tamper with the request objects, as they are generated from the WSDL and are packaged as a jar file in production. If we are able to construct the timestamp in the XML request, the service (generated from the WSDL) is able to capture date and time values; this was evident by constructing the timestamp request via SOAP UI / LISA tools. This workaround fix is required for the time being and is very critical. Thanks in advance.
|
https://weblogs.java.net/node/235094/atom/feed
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
Removing elements from large XML documents
Discussion in 'Java' started by Jakub Moskal, Mar93
- Bob Foster
- Nov 23, 2003
- Replies:
- 1
- Views:
- 575
- Juan T. Llibre
- Oct 18, 2006
removing a namespace prefix and removing all attributes not in that same prefix - Chris Chiasson, Nov 12, 2006, in forum: XML
- Replies:
- 6
- Views:
- 721
- Richard Tobin
- Nov 14, 2006
Removing elements from a list that are elements in another list - Adam Hartshorne, Jan 26, 2006, in forum: C++
- Replies:
- 2
- Views:
- 434
- Nitin Motgi
- Jan 27, 2006
XML: JDOM: removing all elements with certain attribute - cyberco, Nov 7, 2007, in forum: Java
- Replies:
- 2
- Views:
- 1,423
- Roedy Green
- Nov 7, 2007
|
http://www.thecodingforums.com/threads/removing-elements-from-large-xml-documents.494682/
|
CC-MAIN-2015-32
|
en
|
refinedweb
|
#include <hallo.h>

Steve Langasek wrote on Mon Jun 17, 2002 at 01:52:48PM:
> > tetex-src?
> > mindterm (an edge case; the binary package includes full source for
> > legal reasons..)
> > pine (only source package avilable; pine-tracker; etc)

cfdisk-utf8 (includes util-linux source)
cloop-src (includes zlib source)
ash-knoppix (soon, includes ash and modutils source)

They all would benefit from having a method of specifying a source dependency, so the source would be loaded as part of dependencies _and_ build-dependencies, and used from a system-wide accessible directory.

Gruss/Regards,
Eduard.
--
<schneckal> has any of you installed bind9 yet?
<eis> the new root kit? :->
--
#debian.de
Attachment:
pgpJwKy0_8IEm.pgp
Description: PGP signature
|
https://lists.debian.org/debian-devel/2002/06/msg01489.html
|
CC-MAIN-2015-32
|
en
|
refinedweb
|