| text (string, 20 – 1.01M chars) | url (string, 14 – 1.25k chars) | dump (string, 9 – 15 chars, may be null) | lang (string, 4 classes) | source (string, 4 classes) |
|---|---|---|---|---|
First of all, thanks for all those who have helped me in the past and those who are about to.
I only have my program written for the first part of my question; I need help setting up the second part.
1) How do I store the numbers in a 5 x 3 array called Data[5][3]? I can have the user input data in the following program, but how do I specifically call it "Data[5][3]"? The reason I need to know this is because:
2) I need to have the program pass the array Data[ ][ ] from main() to a function. My prof told me to use float mean(const int Data[ ][ ], int, int). I think it should look something like mean(Data[ ][ ], 5, 3).
Here's my program:
Code:
#include <iostream>
#include <iomanip>
using namespace std;

const int numrow = 5;
const int numcol = 3;

int main()
{
    double data[numrow][numcol];

    for (int i = 0; i < numrow; i++)
    {
        for (int j = 0; j < numcol; j++)
        {
            cout << "enter grades for row # " << (i + 1) << ": ";
            cin >> data[i][j];
        }
    }

    for (int a = 0; a < numrow; a++)
    {
        for (int b = 0; b < numcol; b++)
        {
            cout << setw(4) << data[a][b];
        }
        cout << endl;
    }

    return 0;
}
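To answer the second part of the question: when a 2D array is passed to a function, the column dimension must appear in the parameter's type, so the signature the professor suggested ends up looking like the sketch below. This uses double to match the program above and a simple averaging body purely as an illustration, not as the required solution.
Code:
#include <iostream>
using namespace std;

const int numrow = 5;
const int numcol = 3;

// The second dimension must be part of the parameter type so the compiler
// knows how each row is laid out in memory.
double mean(const double data[][numcol], int rows, int cols)
{
    double sum = 0.0;
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < cols; j++)
            sum += data[i][j];
    return sum / (rows * cols);
}

int main()
{
    double data[numrow][numcol] = {};   // fill this from cin as in the program above
    cout << "mean = " << mean(data, numrow, numcol) << endl;
    return 0;
}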
|
http://cboard.cprogramming.com/cplusplus-programming/77255-couple-questions-concerning-arrays-printable-thread.html
|
CC-MAIN-2016-36
|
en
|
refinedweb
|
/* Kernel-side additional module for the VxWorks threading support
   logic for GCC.  Written 2002 by Zack Weinberg.

   This file is distributed with GCC, but it is not part of GCC.
   The contents of this file are in the public domain.  */

/* If you are using the Tornado IDE, copy this file to
   $WIND_BASE/target/config/comps/src/gthread_supp.c.  Then create a file
   named 10comp_gthread_supp.cdf in target/config/comps/vxWorks with the
   following contents:

   Component INCLUDE_GCC_GTHREAD {
       NAME            GCC 3.x gthread support (required by C++)
       CONFIGLETTES    gthread_supp.c
       REQUIRES        INCLUDE_CPLUS
       INCLUDE_WHEN    INCLUDE_CPLUS
       _FOLDER         FOLDER_CPLUS
   }

   If you are using command line builds, instead copy this file to
   $WIND_BASE/target/src/config/gthread_supp.c, and add the following block
   to target/src/config/usrExtra.c:

   #ifdef INCLUDE_CPLUS
   #include "../../src/config/gthread_supp.c"
   #endif

   You should now be able to rebuild your application using GCC 3.x.  */

#include <vxWorks.h>
#include <taskLib.h>

/* This file provides these functions:
   __gthread_get_tsd_data, __gthread_set_tsd_data,
   __gthread_enter_tsd_dtor_context, __gthread_leave_tsd_dtor_context.  */

/* Set and retrieve the TSD data block for the task TCB.

   Possible choices for TSD_SLOT are:
     reserved1 reserved2 spare1 spare2 spare3 spare4
   (these are all fields of the TCB structure; all have type 'int').

   If you find that the slot chosen by default is already used for
   something else, simply change the #define below and recompile this
   file.  No other file should reference TSD_SLOT directly.  */

/* WARNING: This code is not 64-bit clean (it assumes that a pointer
   can be held in an 'int' without truncation).  As much of the rest of
   VxWorks also makes this assumption, we can't really avoid it.  */

#define TSD_SLOT reserved1

void *
__gthread_get_tsd_data (WIND_TCB *tcb)
{
  return (void *) (tcb->TSD_SLOT);
}

void
__gthread_set_tsd_data (WIND_TCB *tcb, void *data)
{
  tcb->TSD_SLOT = (int) data;
}

/* Enter and leave "TSD destructor context".  This is defined as a state
   in which it is safe to call free() from a task delete hook on a memory
   block allocated by the task being deleted.  For VxWorks 5.x, nothing
   needs to be done.  */

#if __GNUC__ >= 2
#define UNUSED __attribute__((unused))
#else
#define UNUSED
#endif

void
__gthread_enter_tsd_dtor_context (WIND_TCB *tcb UNUSED)
{
}

void
__gthread_leave_tsd_dtor_context (WIND_TCB *tcb UNUSED)
{
}
|
http://opensource.apple.com//source/gcc/gcc-5363.0.5/contrib/gthr_supp_vxw_5x.c
|
CC-MAIN-2016-36
|
en
|
refinedweb
|
OpenGL Programming/Installation/Android NDK
Our GLUT tutorials can be run on Android using the simple provided wrapper.
Note: the wrapper will be integrated in the next official FreeGLUT (version 3.0)!
Contents
- 1 Making-of
- 2 Dependencies
- 3 Connecting with USB
- 4 Using our wrapper
- 5 Debugging
- 6 Abstracting differences between OpenGL and GLES2
- 7 GLM
- 8 SOIL
- 9 FreeType
- 10 Troubleshooting
- 11 References
Making-of[edit]
To understand how the GLUT wrapper works internally, see Android GLUT Wrapper.
Dependencies[edit]
First you'll need a minimal Java development environment:
sudo apt-get install openjdk-7-jdk ant
Then get the Android NDK r9d from Android Developers to compile C/C++ code for Android.
Last, you need to install Android API level 10: get the Android SDK from the same site and use the android graphical tool to install it.
The programs themselves may also require the GLM and FreeType libraries - see the dedicated sections below.
Emulator (lack of) support for OpenGL ES 2.0[edit]
The Android emulator only supports OpenGL ES 2.0 since April 2012, requires a specific emulator configuration and system image, and doesn't seem to work on all platforms.
Also beware that the "API Demos" application ships an "OpenGL ES 2.0" sample that silently and confusingly falls back to OpenGL ES 1.0 if 2.0 is not available, so it's not a good test of whether OpenGL ES 2.0 is supported.
It's still best to experiment with OpenGL ES 2.0 on Android using a device that supports it.
Official documentation:
Connecting with USB[edit]
When you connect your device through USB, you can use the adb command (from the Android SDK) to browse the filesystem, install applications, debug them, etc.
$ adb devices
List of devices attached
4520412B47C0207D device
Using our wrapper[edit]
In this wikibook, the samples are based on the GLUT library.
Since GLUT is not ported to Android yet, we wrote a simple GLUT-compatible wrapper for Android (see the code repository).
Note: the wrapper is still in its early life and may change in the near future.
Compile tutorials code[edit]
Look at the 'android_wrapper/' directory.
- Plug in your device (smartphone, tablet...) via USB
- Add the Android tools to your PATH, for instance:
export PATH="$PATH:/usr/src/android-sdk-linux/tools:/usr/src/android-sdk-linux/platform-tools:/usr/src/android-ndk-r9d"
- Inside the jni/ directory, make src a symlink to the GLUT code you need to compile (e.g. ln -nfs ../../tut02_clean src)
- Make assets a symlink to the tutorial you're compiling (e.g. ln -s jni/src assets)
- Now you can type:
make clean; make && make install
- You'll get an "OpenGL Wikibook" application on your device, ready to run!
Full-screen[edit]
To make your application full-screen, add this attribute in your AndroidManifest.xml
<application ... android:
Keyboard[edit]
The default Android keyboard does not have keys such as F1/F2/F3.
Instead, you can use Hacker's Keyboard, an alternative input method with more keys:
Icon[edit]
Make sure your application has android:icon defined in AndroidManifest.xml:
<application ... android:icon="@drawable/icon"
Create two icons:
- res/drawable/icon.png (48x48)
- res/drawable-hdpi/icon.png (72x72)
Now your application will have a custom icon on the launcher.
Debugging[edit]
Browsing standard output[edit]
If you want to see your program's standard outputs (stdout and stderr), you need to redirect them to the system log:
adb shell root
adb shell stop
adb shell setprop log.redirect-stdio true
adb shell start # this may restart your Android session
To check the log files, you can use:
- The command: adb logcat
- The ddms utility, with its graphical interface to browse the logs
- Eclipse, which embeds a LogCat viewer similar to DDMS
Checking JNI calls[edit]
The following will turn on more checks when calling JNI from C/C++:
adb shell stop
adb shell setprop dalvik.vm.checkjni true
adb shell start
You'll get additional traces on the system log, and JNI will be more strict on what it accepts.
GDB[edit]
GDB can be enabled too.
Note: in NDKr7, use the "stabs" format for debug symbols in Android.mk, otherwise GDB will show the wrong source code lines [1]:
LOCAL_CXXFLAGS := -gstabs+
GDB requires a debug build, add NDK_DEBUG=1 when building your C++ code:
ndk-build NDK_DEBUG=1
When starting gdb, make sure your AndroidManifest.xml mentions it's debuggable, otherwise gdb will behave badly (lack of thread information, crash, etc.):
<application ... android:hasCode="true" android:debuggable="true"
The gdb-server needs a few seconds to start on the device, so your program will start running before it can be paused by the debugger. A work-around is to add a wait in your android_main function:
sleep(5);
To start the debug session, type:
ndk-gdb --start
Unable to load native library[edit]
If you get errors such as:
E/AndroidRuntime( 3021): java.lang.RuntimeException: Unable to start activity ComponentInfo{org.wikibooks.OpenGL/android.app.NativeActivity}: java.lang.IllegalArgumentException: Unable to load native library: /data/data/org.wikibook.OpenGL/lib/libnative-activity.so
the system couldn't load your .so due to a low-level reason.
To get more information, you need to create a minimal Java application that loads the library manually:
- src/com/example/test_native_activity/Main.java
package com.example.test_native_activity;

import android.app.Activity;
import android.os.Bundle;

public class Main extends Activity {
    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        System.loadLibrary("native-activity");
    }
}
- AndroidManifest.xml:
<?xml version="1.0" encoding="utf-8"?> <manifest xmlns: <uses-sdk android: <application android: <activity android: <intent-filter> <action android: <category android: </intent-filter> </activity> </application> </manifest>
Compile, install and prepare it:
android update project --name test_native_activity --path . --target "android-10"
ant debug
ant installd
adb shell
su -c bash
cd /data/data/
cp -a org.wikibooks.OpenGL/lib/libnative-activity.so com.example.test_native_activity/lib/
When you run this application, you'll get a more precise error in the Android logs, such as a wrong STL implementation:
E/AndroidRuntime(3009): java.lang.UnsatisfiedLinkError: Cannot load library: reloc_library[1311]: 2323 cannot locate '_ZStlsISt11char_traitsIcEERSt13basic_ostreamIcT_ES5_PKc'...
or a missing dependency:
E/AndroidRuntime( 3327): java.lang.UnsatisfiedLinkError: Cannot load library:link_image[1962]: 2323 could not load needed library 'libglut.so.3' for 'libnative-activity.so' (load_library[1104]: Library 'libglut.so.3' not found)
In the worst case, the library might not even load properly. This may happen e.g. when the C++ constructor of a global static variable crashes while it is called at library loading time, even before your application is started. You'll need to reproduce library loading at the C level:
#include <stdio.h>
#include <dlfcn.h>

int main(int argc, char* argv[]) {
  const char* err = NULL;
  const char* filename = "/data/data/org.wikibooks.OpenGL/lib/libnative-activity.so";
  if (argc == 2)
    filename = argv[1];

  printf("Clearing errors: "); fflush(stdout);
  err = dlerror();
  printf("%s\n", (err == NULL) ? "OK" : err); fflush(stdout);

  printf("Loading library: "); fflush(stdout);
  void* handle = dlopen(filename, RTLD_LAZY);
  err = dlerror();
  printf("%s\n", (err == NULL) ? "OK" : err); fflush(stdout);

  if (handle != NULL) {
    printf("Loading symbol: "); fflush(stdout);
    dlsym(handle, "ANativeActivity_onCreate");
    err = dlerror();
    printf("%s\n", (err == NULL) ? "OK" : err); fflush(stdout);
  }
}
Then send it to the device and execute it:
$ arm-linux-androideabi-gcc test-dlsym.c
$ adb push a.out /
$ adb shell
# /a.out
Clearing errors: OK
Loading library: OK
Loading symbol: OK
You can also use
strace for more precision:
# strace /a.out
There is no ldd for Android by default, but you can simulate it using:
arm-linux-androideabi-objdump -x libs/armeabi/libnative-activity.so | grep NEEDED
# or
arm-linux-androideabi-readelf -d libs/armeabi/libnative-activity.so | grep NEEDED
Abstracting differences between OpenGL and GLES2[edit]
When you only use GLES2 functions, your application is nearly portable to both desktops and mobile devices. There are still a couple of issues to address:
- The GLSL #version is different
- GLES2 requires precision hints that are not compatible with OpenGL 2.1.
See the Basic Tutorials 02 and 03 for details and a proposed solution.
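One common way to handle both points, sketched below, is to prepend a small per-platform preamble to every shader source string before compiling it. This is an illustration, not necessarily the solution used by those tutorials; the macro tested and the exact preamble lines are assumptions to adapt to your build.

// Sketch: prepend a per-platform GLSL preamble before calling glShaderSource().
// GL_ES_VERSION_2_0 is defined by the GLES2 headers; a project-specific build
// flag works just as well (assumption).
#include <string>

std::string add_shader_preamble(const std::string& source) {
#ifdef GL_ES_VERSION_2_0            // building against OpenGL ES 2.0 (e.g. Android)
    const char* preamble =
        "#version 100\n"
        "precision mediump float;\n";
#else                               // desktop OpenGL 2.1
    const char* preamble =
        "#version 120\n"
        "#define lowp\n#define mediump\n#define highp\n";
#endif
    return std::string(preamble) + source;
}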
GLM[edit]
To install GLM, you just need to extract the latest release in jni/glm (such that jni/glm/glm.hpp exists). It is a header-only library that doesn't require separate compilation.
SOIL[edit]
- Download the Simple OpenGL Image Library from
- Apply patch from android_wrapper/soil.patch.
- Follow instructions in projects/Android/Makefile.
- NDK module ready in src/build/soil/.
FreeType[edit]
If you need FreeType (a library to render fonts), you'll need to cross-compile it. Note: the Android system uses FreeType internally, but it doesn't expose it to native apps.
First, prepare the cross-compiler from the NDK:
/usr/src/android-ndk-r8c/build/tools/make-standalone-toolchain.sh \
  --platform=android-14 --install-dir=/usr/src/ndk-standalone-14-arm --arch=arm
NDK_STANDALONE=/usr/src/ndk-standalone-14-arm
PATH=$NDK_STANDALONE/bin:$PATH
Then use it to cross-compile freetype:
tar xf freetype-2.6.tar.bz2
cd freetype-2.6/
# For simplicity, use bundled zlib, and avoid harfbuzz cyclic dependency
./configure --host=arm-linux-androideabi --prefix=/freetype --without-zlib --with-png=no --with-harfbuzz=no
make -j$(nproc)
make install DESTDIR=$(pwd)
Then write an Android.mk file in the new freetype/ directory:
LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)
LOCAL_MODULE := freetype
LOCAL_SRC_FILES := lib/libfreetype.a
LOCAL_EXPORT_C_INCLUDES := $(LOCAL_PATH)/include $(LOCAL_PATH)/include/freetype2
include $(PREBUILT_STATIC_LIBRARY)
See docs/STANDALONE-TOOLCHAIN.html and docs/PREBUILTS.html in the NDK for details. The CLEAR_VARS bit is not documented, but is necessary to avoid mixed-up paths; it is used in the native_app_glue NDK module.
(Alternatively you can install it in --prefix=/usr/src/ndk-standalone-14-arm/sysroot/usr if you don't plan to use the NDK build system.)
To use FreeType in your project, edit your Android.mk:
LOCAL_STATIC_LIBRARIES := freetype
Troubleshooting[edit]
Device is marked offline[edit]
The first time you connect your device to a new computer, it will ask you to confirm the computer's fingerprint. Until you accept on the device screen, it is marked offline.
If you are not asked for confirmation, make sure your adb is up-to-date.
Allow USB access for non-root users[edit]
These days, Android devices are referenced in /lib/udev/rules.d/, and unprivileged users can now securely connect to the device.
If adb gets permission issues with your new device:
- See if it works when running adb as root (to check whether it's a permission issue or something else).
- Install the android-tools-adb package, which includes /lib/udev/rules.d/70-android-tools-adb.rules.
- If that doesn't work, manually create a udev rule as follows.
First, determine your device's idVendor by typing 'dmesg' after you plug it in; check for:
usb 2-1: New USB device found, idVendor=18d1, idProduct=4e22
Then create a udev rule as below, for instance in /etc/udev/rules.d/51-android.rules:
# Galaxy S SUBSYSTEM=="usb", ATTR{idVendor}=="18d1", MODE="0666", GROUP="plugdev"
(This can be further specialized by idProduct.)
Then restart your udev daemon:
/etc/init.d/udev restart
When you plug in your device, the USB character device should have the "plugdev" group:
# ll /dev/bus/usb/002/
total 0
crw-rw-r-T 1 root plugdev 189, 140 janv. 19 21:50 013
References[edit]
< OpenGL Programming/Installation
|
https://en.wikibooks.org/wiki/OpenGL_Programming/Installation/Android_NDK
|
CC-MAIN-2016-36
|
en
|
refinedweb
|
If, like me, you have plugins with a lot of commands, and far too many key bindings, consider this little snippet. It pops up the quickpanel, and lets you choose a command to execute.
class ChooseOutputFormatCommand(sublimeplugin.WindowCommand):
    def run(self, window, args):
        items = [
            ("Article (pdf)", "compileToPdf"),
            ("Filofax Pages (pdf)", "compileToFilofaxPagePdf"),
            ("Index Cards (pdf)", "compileToIndexCardPdf"),
            ("Standard Manuscript Format (pdf)", "compileToSmfPdf"),
            ("Web Page (html)", "compileToHtml"),
            ("Rich Text Format (rtf)", "compileToRtf"),
            ("S5 slideshow (html)", "compileToS5"),
            ("Screenplay (pdf)", "compileToScreenplayPdf"),
            ("Book (pdf)", "compileToBookPdf"),
            ("Book with pagebreaks (pdf)", "compileToBookPbPdf"),
            ("Booklet (pdf)", "compileToBookletPdf"),
        ]
        commands = [x for x, y in items]
        names = [y for x, y in items]
        window.showQuickPanel("", "onOutputChosen", names, commands)

class OnOutputChosenCommand(sublimeplugin.WindowCommand):
    def run(self, window, args):
        if len(args) != 1:
            print "%s items selected; expected 1" % len(args)
            return
        command = args[0]
        print "selected %s" % command
        window.activeView().runCommand(command)
Anyone know if there is a way to test whether a command is enabled? I'd like to improve that snippet so that it doesn't show disabled commands.
There isn't; I'll expose an API function for it in the next beta.
You, sir, are a god.
The canRunCommand method works perfectly. I now have the ability to select a build command based on the current extension. Neat!
|
https://forum.sublimetext.com/t/using-window-showquickpanel-to-choose-a-command/145/5
|
CC-MAIN-2016-36
|
en
|
refinedweb
|
Set-DfsnRootTarget
Applies To: Windows 10 Technical Preview, Windows Server Technical Preview
Set-DfsnRootTarget
Syntax
Parameter Set: SetNamespaceRootTarget3 Set-DfsnRootTarget [-Path] <String> [-TargetPath] <String> [[-State] <State> {Offline | Online} ] [[-ReferralPriorityClass] <ReferralPriorityClass> {sitecostnormal | globalhigh | sitecosthigh | sitecostlow | globallow} ] [[-ReferralPriorityRank] <UInt32> ] [-CimSession <CimSession[]> ] [-ThrottleLimit <Int32> ] [-Confirm] [-WhatIf] [ <CommonParameters>]
Detailed Description
The Set-DfsnRootTarget cmdlet changes settings for a root target of a Distributed File System (DFS) namespace.
-ReferralPriorityClass<ReferralPriorityClass>
Specifies the priority class for a root target of the DFS namespace.
-ReferralPriorityRank<UInt32>
Specifies the priority rank, as an integer, for a root target of the DFS namespace. Lower values have greater preference. A value of zero (0) is the greatest preference.
-State<State>
Specifies the state of the DFS namespace root target. The acceptable values for this parameter are:
-- Online
-- Offline
Clients do not receive referrals for a DFS namespace target that is offline.
-TargetPath<String>
Specifies a path for a root target of the DFS namespace. This cmdlet changes settings for the root target that has the specified path.
Examples
Example 1: Set a referral priority value
PS C:\> Set-DfsnRootTarget -Path "\\Contoso\AccountingSoftware" -TargetPath "\\Contoso-FS\AccountingSoftware" -ReferralPriorityClass GlobalLow
This command sets the referral priority class to a value of global low for the DFS namespace root target that has the path \\Contoso\AccountingSoftware and the target path \\Contoso-FS\AccountingSoftware.
Related topics
|
https://technet.microsoft.com/library/jj884266.aspx
|
CC-MAIN-2016-36
|
en
|
refinedweb
|
if (client.connected()) {
  strcpy_P(buffer, (char*)pgm_read_word(&(string_table[14])));  // "title data="
  finder.find(buffer);
  for (int i = 0; i < randomInt; i++) {
    if (finder.getString(buffer, "\"", newsString, 84)) {
    }
  }

  // Make a new string.
  String d = String(85);
  // Replace the HTML nastiness with a single '
  d = String(newsString).replace("&39;", "\'");
  d.toCharArray(newsString, 84);
String d = String(85);
To resolve your problem, we'd need to see more of your code. How is newsString defined? Is it a char array with 85 or more elements? How is buffer defined?
char buffer[85];     // make sure this is large enough for the largest string it must hold
char newsString[85]; // make sure this is large enough for the largest string it must hold
Code:
String d = String(85);
This creates a String object, d, equal to the String object that contains a character representation of the value 85, not one that contains room for 85 characters.
So, what is the correct way to create a String that is capable of holding the contents of newsString[]?
Quote: So, what is the correct way to create a String that is capable of holding the contents of newsString[]?
I doubt you will be able to capture the total feed to a string unless it is very small. You can capture the feed characters in a string and then send the string to the serial monitor to see just how much of the feed was captured.
if (finder.getString(buffer,"\"", newsString, 84)) {
d = String(newsString).replace("&39;", "\'");
Quote:
Code:
String d = String(85);
This creates a String object, d, equal to the String object that contains a character representation of the value 85, not one that contains room for 85 characters.
Aha. That'll be the problem. So, what is the correct way to create a String that is capable of holding the contents of newsString[]?
String d = String(" ");
However, given that I'm creating d incorrectly, not giving it 85 characters but rather assigning it the value 85, isn't it more likely I'm running off the end of d? And if that's the case, how do I create d with enough space for newsString[85]? The String reference page shows many cases, none of which allow you to create an empty string with 85 characters. Am I really left with doing this?
You are not defining d incorrectly. You are simply not defining it the way that you thought you were. You should probably spend some time looking at the String source code. If you do, you will see that the copy operator and the assignment operator are defined for the String class, so one can assign a character array to a String object, and the String object will be automatically sized to hold the character array, whatever size the character array is, provided there is memory available to hold the String object that is to be created.
// Define d to be however big it needs to be in order to hold newsString and do a find and replace at the same time.
String d = String(newsString).replace("&39;", "\'");
// Convert d back into newsString.
d.toCharArray(newsString, 84);
//
//    FILE: replace.pde
//  AUTHOR: Rob Tillaart
// VERSION: 0.1.00
// PURPOSE: in place replace in a char string.
//
// HISTORY:
// 0.1.00 - 2011-05-13 initial version
//
// Released to the public domain
//
char in[128] = "String(newsString).replace(&39;with something else&39;);";

void replace(char* source, char* from, char* to)
{
  uint8_t f = strlen(from);
  uint8_t t = strlen(to);
  char *p = source;
  if (t > f) return;
  while (*p != '\0')
  {
    if (strncmp(p, from, f) == 0)
    {
      strncpy(p, to, t);
      p += t;
      strcpy(p, p + f - t);
    }
    else p++;
  }
}

void setup()
{
  Serial.begin(115200);
  Serial.println("Start");
  Serial.println(in);
  unsigned long t1 = micros();
  replace(in, "&39;", "\'");
  Serial.println(micros() - t1);
  Serial.println(in);
  replace(in, "ing", "ong");
  Serial.println(in);
}

void loop()
{
}
#include <Regexp.h>

void setup ()
{
  Serial.begin (115200);

  // what we are searching (HTML entities restored; they were decoded in this copy)
  char buf [100] = "I do like to be &lt;&lt; beside &gt;&gt; the seaside";

  MatchState ms;

  // set address of string to be searched
  ms.Target (buf);

  // what we will replace it with
  char replacement [20];

  unsigned int index = 0;

  while (ms.Match ("&%a+;", index) > 0)
  {
    // increment start point ready for next time
    index = ms.MatchStart + ms.MatchLength;

    // see if we want to change it
    if (memcmp (&buf [ms.MatchStart], "&lt;", ms.MatchLength) == 0)
      strcpy (replacement, "<");
    else if (memcmp (&buf [ms.MatchStart], "&gt;", ms.MatchLength) == 0)
      strcpy (replacement, ">");
    else
      continue;  // nope, move along

    // see how much memory we need to move
    int lengthDiff = ms.MatchLength - strlen (replacement);

    // copy the rest of the buffer backwards/forwards to allow for the length difference
    // the +1 is to copy the null terminator
    memmove (&buf [index - lengthDiff], &buf [index], strlen (buf) - index + 1);

    // copy in the replacement
    memmove (&buf [ms.MatchStart], replacement, strlen (replacement));

    // adjust the index for the next search
    index -= lengthDiff;
  }  // end of while

  Serial.println (buf);
}  // end of setup

void loop () {}
// for matching regular expressions
MatchState ms (newsString);
// replace the & with a single quote '
ms.GlobalReplace ("&", "\'");
|
http://forum.arduino.cc/index.php?topic=60908.msg440888
|
CC-MAIN-2016-36
|
en
|
refinedweb
|
ContactPicker.PickMultipleContactsAsync
- Tuesday, October 25, 2011 4:51 PM
I'm trying to consume the PickMultipleContactsAsync method of the ContactPicker in VB, but am running into an issue because the PickMultipleContactsOperation's GetResults operation returns an IReadOnlyList(Of IContactInformation). IContactInformation is internal to the Contacts namespace and as such is not available to calling code. I noticed that the return type for PickSingleContactAsync is a ContactInformation, which is public. Is there an issue with the PickMultipleContactsOperation, or in the metadata for this method with the current build, or is there another intended means of using PickMultipleContactsAsync than:
Dim picker As New ContactPicker
Dim contacts = Await picker.PickMultipleContactsAsync()
For Each contact In contacts
    ' Compiler error because contact is an internal/friend IContactInformation type
Next
- "LINQ In Action", The book is now available. Don't wait for the movie
- Edited by Jim WooleyMVP Tuesday, October 25, 2011 4:51 PM
-
All Replies
- Wednesday, October 26, 2011 1:59 AMModerator
Hi Jim,
Thanks for pointing this out. I'm seeing similar behavior to what you describe from .Net; however, PickMultipleContacts can be called successfully from JavaScript. I'm not sure exactly why it's different (it may be because of stronger typing in the .Net projection). I can't check if this is known at the moment, but will try to do so tomorrow. You can also file this yourself via the feedback tool or connect.
--Rob
- Wednesday, October 26, 2011 1:55 PM
Ok. I submitted it, hopefully in the right area. I didn't see a connect area for general WinRT metadata errors. - "LINQ In Action", The book is now available. Don't wait for the movie
- Wednesday, November 09, 2011 7:37 PM
What's the compiler error? If you can call the method, something like this might work:
Dim picker As New ContactPicker
Dim rawContacts As IEnumerable = Await picker.PickMultipleContactsAsync()
Dim contacts = rawContacts.Cast(Of ContactInformation)()
For Each contact In contacts
    ' ...
Next
|
http://social.msdn.microsoft.com/Forums/en-US/winappswithcsharp/thread/dc58c986-0c99-44ad-8ce0-39d4f47be709
|
crawl-003
|
en
|
refinedweb
|
xmlns using syntax for another assembly (UserControl cant find a sibling control)
- Thursday, October 13, 2011 6:50 PM
I know in the WinRT XAML stack, the xmlns syntax has changed to "using:NamespaceHere". This seems to work fine even when the namespace is in another referenced assembly. For example, I have some controls over in another (metro) assembly and my metro application project has a reference to that. So the xmlns:Controls="using:SeparateAssembly.Controls" syntax works fine.
What is NOT working though, is then when a control in that separate assembly uses ANOTHER control in that separate assembly, I get a runtime exception that the 2nd control cannot be found. I can use that control directly in my UI, but I can't use it from WITHIN the separate assembly. It feels like at runtime, it's using the executing assembly to look for the namespace, not the one where the XAML is being parsed from.
Is there an additional "assembly" syntax I should be using in the controls project XAML files to make sure all those controls can be found regardless of where we're loaded from?
My description might be a little cloudy, so here's a concrete example:
UIProject1, references ControlsProject1, which has 2 controls in it. UserControl1 and UserControl2. UserControl1.xaml contains a using statement for, and uses UserControl2.
When UIProject1\MainPage.xaml uses UserControl1 (has the standard xmlns using statement for UserControl1 namespace, but nothing about it being in another assembly), I get a runtime exception that UserControl1 can't find UserControl2.
UIProject1\MainPage.xaml can directly create UserControl2, to prove that it's creatable and will show in the UI.
Thanks in advance for any guidance.
Answers
All Replies
- Tuesday, October 18, 2011 4:40 PMModerator
Hi Volleynerd,
Can you send me a project with the problem you're describing? MSmall at Microsoft.
Thanks,
Matt
Matt Small - Microsoft Escalation Engineer - Forum Moderator
- Wednesday, October 19, 2011 11:29 PM
Hi Matt -
Just emailed you a zip that shows the problem. In building the zip, we boiled it down to the following.
- Put 2 user controls in a class library separate from the main UI project
- UserControl1 uses UserControl2 in its XAML definition
- Make use of UserControl1 in the UI project (MainPage.xaml)
- Works fine if you only do the above.
- This sounds crazy but ... add a class to the controls project and have that class implement INotifyPropertyChanged (from UI.Xaml.Data, not System.ComponentModel). You don't even have to use that class anywhere.
Run the app. It blows up trying to load UserControl2
Zip of the code showing the problem:
- Sunday, February 19, 2012 4:00 PM
Hi, have you found a workaround or a bugfix for this problem? I need this to work in my project...
- Edited by Rico Suter Sunday, February 19, 2012 4:52 PM
|
http://social.msdn.microsoft.com/Forums/en-US/winappswithcsharp/thread/b8c0215f-9377-4518-b9ae-68fbeeb64ff0/
|
crawl-003
|
en
|
refinedweb
|
CDataGrid Control
General
This article is about a CDataGrid control programmed using the Windows SDK. It is designed to be easy to use. The current version is not totally bug free, so please report any bugs you detect so that an update can be made available soon.
The grid control is very similar to an MFC CListView control in the "REPORT" view state. It supports similar item adding and removing, with custom item sorting using an application-defined comparison function.
The Code
How to use the CDataGrid control is explained here in detail. First of all, the header file "DataGrid.h" must be included in the project. Next, a variable of type CDataGrid must be declared.
#include "DataGrid.h" CDataGrid dataGrid;
Next, call the Create method to create and show a DataGrid window, passing as parameters a handle to the parent window, the window rectangle, and the number of columns the DataGrid will have. Then, use the InsertItem method to add items to the DataGrid, passing the item text and alignment. The SetItemInfo method, available in two versions, can be used to set subitem text, alignment, selection, the read-only attribute, or to change the background color. In the first version, the index of the item and subitem are passed directly, along with the subitem text, alignment, or read-only attribute.
dataGrid.Create( wndRect, hParentWnd, 5 );
dataGrid.InsertItem( "Item1", DGTA_LEFT );
dataGrid.InsertItem( "Item2", DGTA_CENTER );
dataGrid.InsertItem( "Item3", DGTA_RIGHT );
dataGrid.SetItemInfo( 0, 1, "Subitem1", DGTA_CENTER, false );

DG_ITEMINFO dgii;
dgii.dgMask = DG_TEXTRONLY;
dgii.dgItem = 1;
dgii.dgSubitem = 0;
dgii.dgReadOnly = true;
dataGrid.SetItemInfo(&dgii);

dataGrid.Update();
To remove a single item, use RemoveItem, passing as the argument the index of the row that will be deleted.
dataGrid.RemoveItem(2);
dataGrid.Update();
Note: All indexing is zero-based. Also, calling the Update method is necessary after adding or removing items.
To remove all items, use RemoveAllItems, which takes no arguments.
dataGrid.RemoveAllItems();
CDataGrid control has numerous features, as explained below.
Features
These are the current features of DataGrid:
- Automatic resizing with the parent window
- Enabled when the Resize method is called each time the DataGrid parent window changes its size.
- Enable/Disable sorting
- Uses the EnableSort method.
- Enable/Disable item text editing
- Uses the EnableEdit method.
- Enable/Disable column resizing
- Uses the EnableResize method.
- Enable/Disable grid
- Uses the EnableGrid method.
- Automatic scrolling to specified item
- Uses the EnsureVisible method.
- Automatic selection of specified item
- Uses the SelectItem method.
- Item sorting using custom application-defined comparison function
- Uses the SetCompareFunction method.
- Get/Set column text color
- Uses the GetColumnTextColor and SetColumnTextColor methods.
- Get/Set column font
- Uses the GetColumnFont and SetColumnFont methods.
- Get/Set row text color
- Uses the GetRowTextColor and SetRowTextColor methods.
- Get/Set row font
- Uses the GetRowFont and SetRowFont methods.
I hope to extend this list of features as soon as possible.
Notifications
The CDataGrid control sends the following notification messages to its parent window via a WM_COMMAND message:
- DGM_ITEMCHANGED
- When focus is changed from one item to another.
- DGM_ITEMTEXTCHANGED
- When item/subitem text is changed.
- DGM_ITEMADDED
- When item is added.
- DGM_ITEMREMOVED
- When item is removed.
- DGM_COLUMNRESIZED
- When column is resized.
- DGM_COLUMNCLICKED
- When column is clicked.
- DGM_STARTSORTING
- When sorting is started.
- DGM_ENDSORTING
- When sorting is ended.
Also, this list of notifications will be extended.
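As a rough illustration (not taken from the control's source; how CDataGrid packs the notification code into wParam/lParam is an assumption to verify against DataGrid.h), a parent window procedure could react to these notifications like this:

// Hypothetical sketch of a parent window procedure handling CDataGrid
// notifications delivered via WM_COMMAND; check DataGrid.h for the actual
// wParam/lParam convention used by the control.
LRESULT CALLBACK ParentWndProc(HWND hWnd, UINT msg, WPARAM wParam, LPARAM lParam)
{
    switch (msg)
    {
    case WM_COMMAND:
        switch (LOWORD(wParam))      // assumed: notification code in the low word
        {
        case DGM_ITEMCHANGED:
            // focus moved from one item to another
            break;
        case DGM_COLUMNCLICKED:
            // e.g. start a custom sort here
            break;
        }
        break;
    }
    return DefWindowProc(hWnd, msg, wParam, lParam);
}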
Conclusion
The user can obtain all mentioned information and some more from the well-commented header file "DataGrid.h".
Grid not visible - Posted by Tagarn on 03/16/2006 04:28am
A small fix. - Posted by dinus on 10/04/2005 09:35am
Thanks again for your code. I found a small problem and I want to propose a fix for it. When I try to create a DataGrid with a width which doesn't match the summary width of all columns and with content less than a page, the DataGrid doesn't show. Here is how to easily recreate this situation:
1. Create a parent window with coordinates 0, 0, 500, 500, rather than using CW_USEDEFAULT (DataGrid test.cpp, line 122).
2. Create a datagrid of size 0, 0, 400, 400 (RECT rect = {0, 0, 400, 400};, DataGrid test.cpp, line 348).
3. Add just 3 columns of size 100.
4. Add 10 rows.
After starting DataGrid test.exe, you will see that the DataGrid doesn't show. I was able to fix this by adding the following line:
SendMessage(dataGrid.GetWindowHandle(), WM_SIZE, 400, 400);
right before dataGrid.Update() (DataGrid test.cpp, line 385).
Regards
Very nice code. - Posted by dinus on 08/20/2005 10:11am
Very nice code for those who like low-level programming. Thanks.
|
http://www.codeguru.com/cpp/controls/controls/gridcontrol/article.php/c10319/CDataGrid-Control.htm
|
crawl-003
|
en
|
refinedweb
|
Myghty now has its own session storage class. This class offers some advantages over the mod_python session, including:
The session is retrieved from the request object via the get_session() method, operated upon like a dictionary, and then can have its save() method called to write its data to persistent storage:
<%python
    # get the session
    session = m.get_session()

    # add data
    session['key1'] = 'foo'

    # get data
    if session.has_key('user'):
        user = session['user']
    else:
        user = User()
        session['user'] = user

    # save new information
    session.save()
</%python>
The session handles generation of session IDs automatically as well as storing and retrieving them from cookies. Options exist to pass in custom session IDs, to not use cookies, to use "signed" session IDs, and to change the cookie-based session key (defaulting to myghty_session_id). It loads its data in fully when instantiated and then unlocks, so no programmatic locking or unlocking is necessary (but lock methods are available if you want the session to stay locked throughout a request).
Session options are specified as Myghty configuration parameters in the form session_XXXX, to identify them as options being sent to the Session object. When calling the m.get_session() method, parameters may be specified with or without the "session_" prefix; they are stripped off.
The get_session method can take any of the configuration parameters that are identified below as used directly by the Session object or by the underlying Namespace objects.
The session object is actually functionally independent of the rest of Myghty, and is compatible with the mod python request object directly, as well as the request emulator used by CGIHandler. To instantiate it, simply use its constructor as follows:
from mod_python import apache
from myghty.session import Session

def handle(req):
    session = Session(req,
                      data_dir='/path/to/session_dir',
                      key='user_session_id')
The full constructor signature for the Session object is as follows:
Session(request, id = None, use_cookies = True, invalidate_corrupt = False, type = None, data_dir = None, key = 'myghty_session_id', timeout = None, secret = None, log_file = None, **params)
Note that the parameters are the same as the configuration arguments with the prefix "session_" removed.
|
http://packages.python.org/Myghty/session.html
|
crawl-003
|
en
|
refinedweb
|
This class acts as a common interface to the different particle filter algorithms (see CParticleFilter::TParticleFilterAlgorithm) that any bayes::CParticleFilterCapable class can implement: it is the invoker of particle filter algorithms.
The particle filter is executed on a probability density function (PDF) described by a CParticleFilterCapable object, passed in the constructor or alternatively through the CParticleFilter::executeOn method.
For a complete example and further details, see the Particle Filter tutorial.
The basic SIR algorithm (pfStandardProposal) consists of:
Definition at line 63 of file CParticleFilter.h.
#include <mrpt/bayes/CParticleFilter.h>
Defines different types of particle filter algorithms.
The defined SIR implementations are:
See the theoretical discussion in resampling schemes.
Definition at line 76 of file CParticleFilter.h.
Defines the different resampling algorithms.
The implemented resampling methods are:
See the theoretical discussion in resampling schemes.
Definition at line 93 of file CParticleFilter.h.
Default constructor.
After creating the PF object, set the options in CParticleFilter::m_options, then execute steps through CParticleFilter::executeOn.
Definition at line 184 of file CParticleFilter.h.
Executes a complete prediction + update step of the selected particle filtering algorithm.
The member CParticleFilter::m_options must be set before calling this to settle the algorithm parameters.
Sends formatted text to "debugOut" if not NULL, or to cout otherwise.
Referenced by mrpt::math::CLevenbergMarquardtTempl< VECTORTYPE, USERPARAM >::execute().
The options to be used in the PF, must be set before executing any step of the particle filter.
Definition at line 205 of file CParticleFilter.h.
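As a rough usage sketch (only executeOn, m_options, pfStandardProposal, and CParticleFilterCapable are taken from this page; the option field name and the executeOn argument list shown here are assumptions to check against the Particle Filter tutorial):

#include <mrpt/bayes/CParticleFilter.h>

// One complete prediction + update step on a PDF object implementing
// CParticleFilterCapable. Everything except the documented names above
// is a placeholder.
void runOneStep(mrpt::bayes::CParticleFilterCapable &pdf)
{
    mrpt::bayes::CParticleFilter PF;

    // Settle the algorithm parameters before executing any step:
    PF.m_options.PF_algorithm = mrpt::bayes::CParticleFilter::pfStandardProposal; // assumed field name

    // Execute one prediction + update step on the PDF:
    PF.executeOn(pdf, /* actions */ nullptr, /* observations */ nullptr);          // assumed argument list
}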
|
http://reference.mrpt.org/stable/classmrpt_1_1bayes_1_1_c_particle_filter.html
|
crawl-003
|
en
|
refinedweb
|
#include <mp_change_control.hpp>
Definition at line 25 of file mp_change_control.hpp.
Definition at line 374 of file mp_change_control.cpp.
Definition at line 380 of file mp_change_control.cpp.
Actions to be taken after the window has been shown.
At this point the registered fields have already stored their values (if OK has been pressed).
Reimplemented from gui2::tdialog.
Definition at line 391 of file mp_change_control.cpp.
References gui2::tdialog::get_retval(), menu_handler_, and view_.
Inherited from tdialog.
Reimplemented from gui2::tdialog.
Definition at line 385 of file mp_change_control.cpp.
Inherited from tdialog, implemented by REGISTER_DIALOG.
Implements gui2::tdialog.
Definition at line 42 of file mp_change_control.hpp.
Referenced by post_show().
Definition at line 43 of file mp_change_control.hpp.
Referenced by get_view(), post_show(), and pre_show().
|
http://www.wesnoth.org/devdocs/classgui2_1_1tmp__change__control.html
|
crawl-003
|
en
|
refinedweb
|
TS4721 Done! And some new stuff we will show at JavaOne 2007
While preparing our presentation for JavaOne, Fabiane Nardon and I had several points we could put on slides. Unfortunately, we can't include everything we have already done using EJB3, because we only have 1 hour to speak. This post shows some stuff we decided to take out of the official presentation.
Patterns out from Presentation
When we decided to show new design patterns, the main idea was to show which patterns are still on the shelf and which are not so necessary. If you would like to learn new tricks and design patterns for EJB 3, you should consider attending the JavaOne 2007 session TS 4721 - Implementing Java EE Applications, Using Enterprise JavaBeans (EJB) 3 Technology: Real-World Tips, Tricks, and New Design Patterns. We selected only the really good new design patterns for the talk, so some of what we expected to become new design patterns did not make the cut; however, I can publish them here, and maybe I can hear some opinions. The first one is Load Data Decorator.
Load Data Decorator
Problem:
After loading or updating data, you may want to check a value and change it for presentation purposes.
Solution:
Use the lifecycle callback methods for entities.
Look the following example:
@EntityListeners(AtendeeEmailDecorator.class)
public class Atendee implements Serializable {
    …
}
Basically you register a class to act as a listener for this Entity. Check this class in the following code listing:
public class AtendeeEmailDecorator {

    @PostLoad
    public void evaluateEmail(Atendee atendee) {
        if (atendee.getEmail().indexOf("@") <= 0) {
            atendee.setEmail("Email contains errors!");
        }
    }
}
Now, if you want to present this information in a dataTable using JSF as the client view tier, the corrected information is ready to be presented.
Maybe you agree with us and would remove this pattern too, or maybe someone else will look at it and use it as a good solution.
cya.
|
http://weblogs.java.net/blog/edgars/archive/2007/03/ts4721_done_and_1.html
|
crawl-003
|
en
|
refinedweb
|
Number of processors present (Read Only).
This is the number of processors as reported by the operating system. The processors could be separate processors, cores of the same processor, or logical processors (e.g. in the case of one Hyper-Threaded CPU, this would report two CPUs since that's what it looks like to the system).
// Prints "4" on a quad-core CPU.print (SystemInfo.processorCount);
using UnityEngine;
using System.Collections;

public class example : MonoBehaviour {
    void Example() {
        print(SystemInfo.processorCount);
    }
}
import UnityEngine
import System.Collections

class example(MonoBehaviour):
    def Example():
        print(SystemInfo.processorCount)
See Also: SystemInfo.processorType.
|
http://unity3d.com/support/documentation/ScriptReference/SystemInfo-processorCount.html
|
crawl-003
|
en
|
refinedweb
|
#include <MFnSubdData.h>
MFnSubdData allows the creation and manipulation of Subdivision Surface data objects for use in the dependency graph.
If a user-written dependency node either accepts or produces Subdivision Surfaces, then this class is used to extract or create the data that comes from or goes to other dependency graph nodes. The MDataHandle::type method will return kSubdiv when data of this type is present.
If a node is receiving a Subdivision Surface via an input attribute, the asSubdSurface method of MDataHandle can be used to access that input Subdivision Surface.
If a node is to create a Subdivision Surface and send it via an output attribute, a new MFnSubdData must be instantiated and then the create method called to build the actual data block as an MObject. This MObject should be passed to the MFnSubd::create method as the parentOrOwner parameter so that the subdivision surface is constructed inside the data block.
Function set type. Returns the type of this function set: MFn::kSubdivData.
Reimplemented from MFnGeometryData.
Class name.
Return the class name : "MFnSubdData"
Reimplemented from MFnGeometryData.
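A minimal sketch of the output-attribute pattern described above (the node class, the aOutSurface attribute, and the commented-out MFnSubd::create arguments are illustrative assumptions, not part of this reference page):

// Hypothetical compute() fragment producing subdivision-surface data.
MStatus MyNode::compute(const MPlug &plug, MDataBlock &block)
{
    if (plug != aOutSurface)                   // aOutSurface: hypothetical output attribute
        return MS::kUnknownParameter;

    MStatus stat;
    MFnSubdData dataFn;
    MObject dataObj = dataFn.create(&stat);    // build the data block as an MObject
    if (!stat) return stat;

    // Build the surface inside the data block by passing dataObj as the
    // parentOrOwner argument to MFnSubd::create (geometry arguments elided):
    // MFnSubd subdFn;
    // subdFn.create(/* ... */, dataObj, &stat);

    MDataHandle outHandle = block.outputValue(aOutSurface, &stat);
    outHandle.set(dataObj);                    // send the data via the output attribute
    block.setClean(plug);
    return MS::kSuccess;
}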
|
http://download.autodesk.com/us/maya/2009help/API/class_m_fn_subd_data.html
|
crawl-003
|
en
|
refinedweb
|
XpRehashPrinterList - Recomputes the list of available printers.
Synopsis
Arguments
Description
cc [ flag... ] file... -lXp [ library... ]
#include <X11/extensions/Print.h>
void XpRehashPrinterList ( display )
      Display *display;

Existing print contexts will not be affected by XpRehashPrinterList as long as their printer destination remains valid.
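A short usage sketch (the surrounding connection handling and the follow-up XpGetPrinterList call are illustrative assumptions, not part of this page):

#include <X11/extensions/Print.h>

/* Ask the X Print Server to recompute its printer list, then re-query it.
   Assumes dpy is an open connection to a server with the Xp extension. */
void refresh_printer_list(Display *dpy)
{
    int count = 0;
    XpRehashPrinterList(dpy);
    XPPrinterList list = XpGetPrinterList(dpy, NULL, &count);
    /* ... use the updated list of `count` printers ... */
    XpFreePrinterList(list);
}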
|
http://www.squarebox.co.uk/cgi-squarebox/manServer/usr/X11R6/man/man3/XpRehashPrinterList.3x
|
crawl-003
|
en
|
refinedweb
|
pthread_getschedparam, pthread_setschedparam - dynamic thread scheduling parameters access (REALTIME THREADS)
#include <pthread.h> int pthread_getschedparam(pthread_t thread, int *policy, struct sched_param *param); int pthread_setschedparam(pthread_t thread, int policy, const struct sched_param *param);
The pthread_getschedparam() and pthread_setschedparam() functions allow the scheduling policy and scheduling parameters of individual threads within a multi-threaded process to be retrieved and set. For SCHED_FIFO and SCHED_RR, the only required member of the sched_param structure is the priority sched_priority. For SCHED_OTHER, the affected scheduling parameters are implementation-dependent.
The pthread_getschedparam() function retrieves the scheduling policy and scheduling parameters for the thread whose thread ID is given by thread and stores those values in policy and param, respectively. The priority value returned from pthread_getschedparam() shall be the value specified by the most recent pthread_setschedparam() or pthread_create() call affecting the target thread. It shall.
The policy parameter may have the value SCHED_OTHER, that has implementation-dependent scheduling parameters, SCHED_FIFO or SCHED_RR, that have the single scheduling parameter, priority.
If the pthread_setschedparam() function fails, no scheduling parameters will be changed for the target thread.
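As an illustrative example (not part of this reference page): retrieving the current parameters and then switching the calling thread to SCHED_FIFO at a mid-range priority. Note that these functions report failure by returning an error number rather than by setting errno.

#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>

/* Promote the calling thread to SCHED_FIFO at a mid-range priority.
   Requires appropriate privileges; failures are reported and ignored. */
static void promote_self(void)
{
    struct sched_param param;
    int policy, err;

    err = pthread_getschedparam(pthread_self(), &policy, &param);
    if (err != 0) {
        fprintf(stderr, "pthread_getschedparam: %s\n", strerror(err));
        return;
    }

    param.sched_priority = (sched_get_priority_min(SCHED_FIFO) +
                            sched_get_priority_max(SCHED_FIFO)) / 2;

    err = pthread_setschedparam(pthread_self(), SCHED_FIFO, &param);
    if (err != 0)
        fprintf(stderr, "pthread_setschedparam: %s\n", strerror(err));
}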
If successful, the pthread_getschedparam() and pthread_setschedparam() functions return zero. Otherwise, an error number is returned to indicate the error.
The pthread_getschedparam() and pthread_setschedparam() functions will fail if:
- [ENOSYS]
- The option _POSIX_THREAD_PRIORITY_SCHEDULING is not defined and the implementation does not support the function.
The pthread_getschedparam() function may fail if:
- [ESRCH]
The value specified by thread does not refer to an existing thread.
The pthread_setschedparam() function may fail if:
- [EINVAL]
- The value specified by policy or one of the scheduling parameters associated with the scheduling policy policy is invalid.
- [ENOTSUP]
- An attempt was made to set the policy or scheduling parameters to an unsupported value.
- [EPERM]
- The caller does not have the appropriate permission to set either the scheduling parameters or the scheduling policy of the specified thread.
- [EPERM]
- The implementation does not allow the application to modify one of the parameters to the value specified.
- [ESRCH]
The value specified by thread does not refer to an existing thread.
None.
None.
None.
sched_setparam(), sched_getparam(), sched_setscheduler(), sched_getscheduler(), <pthread.h>, <sched.h>.
Derived from the POSIX Threads Extension (1003.1c-1995)
|
http://pubs.opengroup.org/onlinepubs/007908799/xsh/pthread_setschedparam.html
|
crawl-003
|
en
|
refinedweb
|
[1] and
Division for Church in Society - to help its members "think
through the nature of Christian ethics today" (p. 1). Any reader
familiar with the work of Sittler, Lazareth, or Forell will see
that these essays are not only responding to a radically different
context but, indeed, are defining Lutheranism in very different
ways.
For the most part, these essays criticize a specific distortion in
much Lutheranism: the reduction of the Christian life to the
"motivation touched off by justification" (p. 27). They are
especially critical of the way Protestant ethics has been aligned
with the "punctual self" of modernity, a self "unencumbered" by its
ties to nature or even to the rest of the human community. This
criticism, in my view, is the chief strength of these essays, along
with their concrete proposal for how ethics and the law might be
conceived in more substantive and positive terms. What remains a
question - a question that is especially salient when we compare
these essays to the work of their predecessors - is whether they
have captured the comprehensiveness and, more importantly, the
distinctiveness of Lutheranism's theological contribution to the
ethical problems of our day.
[2] This volume consists of seven essays sandwiched between two
short introductions (one by each editor) and a concluding
transcript of a "table talk" conversation among the authors. John
Stumme's introduction stresses the "tradition's" focus on the gift
of faith; Karen Bloomquist's stresses how "today's context" is
largely defined by a range of reactions to the Enlightenment's
focus on freedom, a focus that has influenced much of the
contemporary "culture wars" debate (e.g., with regard to
homosexuality and abortion). Although the concluding "table-talk"
reveals sharp differences among these authors, it does conclude
that the church has a mandate to define itself in concrete and
substantive terms as a "community of moral deliberation." But these
authors present very different proposals for what that community
might look like. Four different positions can be identified in this
volume. James Childs' essay works out of an "eschatological"
perspective The other six propose some form of a"contextualized"
ethic. Three of these present an argument for an "ecclesial" ethics
(Robert Benne, Martha Stortz, and Reinhard Hutter); two argue for a
form of "liberation" ethics (Richard Perry and Larry Rasmussen with
Cynthia Moe-Lobeda); and the final essay (David Fredrickson's)
offers a reading of Pauline ethics that mediates between these two
positions even as it articulates a distinctive proposal of its
own.
[3] With the exception, then, of Childs' essay, all the others
offer some concrete proposal for enacting the substance of the
Christian life. Childs, by contrast, focuses on the "ambiguity" and
"complexity" of moral choices, and how the reign of God offers a
"horizon" against which to make these choices. Given the
distinctive character of his voice in this volume, it is
unfortunate that Childs is so tentative in developing his proposal,
focusing more on the ambiguity and complexity of what the church
might say rather than on the courageous and substantive stands it
could take for justice and mercy. Indeed, the vision of the reign
of God he presents has much potential and could serve as a norm
that offers precisely the kind of substantive yet critical
criterion needed to mediate between the more ecclesial and more
liberationist essays in this volume. Also, as Jurgen Moltmann and
Wolfhart Pannenberg have shown, the symbol of the kingdom of God is
highly relevant to our understanding of trinitarian doctrine, a
doctrine central to any theological formulation of Christian
ethics.
[4] Among the essays stressing an ecclesial ethics, Benne's
offers a bridge between the emphases of a previous generation -
say, as stressed in a volume like Christian Social Responsibility -
and those of this volume. He begins by outlining classical themes
in Lutheran ethics: justification, the church's distinctive work of
proclaiming the gospel, the twofold rule of God, and a paradoxical
view of human nature and history. He then names a contemporary
challenge: Lutheranism's tendency to reduce "the whole of ethical
life to the motivation touched off by justification" and its
corresponding failure to develop the church as a "community of
character." Instead, he argues, it offers only a vague "realism"
that, because of its lack of substance, leaves the church
vulnerable to various hermeneutics of suspicion (from Freud, Marx,
and Nietzsche to feminists and multiculturalists). His concluding
constructive proposal centers not around the themes he began with
(justification, paradox, etc.) but around correctives to these
challenges: themes like "covenantal existence" and "divinization."
Although he presents a number of tantalizing suggestions, Benne
does not offer an integrated picture of how the latter two themes,
especially the Calvinist theme of "covenant," cohere with the
"classic" Lutheran emphases he outlines at the beginning of the
paper. He is in a unique position to offer such an integration
because he appreciates the distinctive contribution of
Lutheranism's classic emphases even as he is cognizant of their
potential for misappropriation. I hope he will articulate the
theological center of his argument more fully in future work.
[5] A similar comment can be made of Stortz's essay. She offers
an intriguing depiction of Martin Luther's understanding of the
practice of private prayer - a depiction that, she states, is
intentionally empirical, concrete, and inductive (as opposed to
being theoretical, abstract, and deductive). Appropriating the
Ignatian focus on "formation" as a lens for her analysis, she
outlines Luther's instructions for daily prayer. In turn, she
establishes a connection between the daily practice of prayer and a
way of life shaped by responsiveness, gratitude, modesty, and joy.
Given her recognition of the reflexive connection between doctrine
and practice, Stortz could bring to the fore even more explicitly
what is meant by Luther's understanding of tentatio
(prayer and meditation), which he uses coterminously with
Anfechtung (suffering or"tempting attack").
Tentatio is a key moment that links Luther's practice of
prayer to his theology of the cross and understanding of
justification. As Ignatian prayer has at its heart our tangible
participation in Christ's death and resurrection, so Luther's
understanding of prayer is deeply shaped by the tentatio
that brings us face us to face with the crucified Christ and his
resurrection life. Indeed, we might say that the activity of
tentatio is precisely the "inductive" way by which a
theology of the cross is enacted in our lives. And further, such
tentatio not only "forms" the lives of Christians but
offers a radical critique of all sinful distortions that keep us
from union with God. I hope Stortz also will pursue the theological
import of her argument more fully in future work.
[6] Hütter's is the one essay in the volume that does not
shy away from rigorous and thorough theological argument. His
footnotes are a veritable survey of relevant literature in theology
and ethics. Presenting the most explicit critique of the modern
theological impoverishment of Protestant ethics, he analyses two
dimensions of what he calls the "Protestant fallacy." The first is
its reduction of all theology and ethics to the single article of
"justification by grace through faith alone," and the second is its
strictly negative understanding of freedom as freedom from the law
rather than for it. Aligning these "Protestant" emphases with
modern ethics, he situates his own proposal in relation to three
twentieth century movements that serve as correctives to the
restriction of all ethics to the "unencumbered" self: (I) the
Christocentrism of dialectical theology; (2) the Protestant
discovery of Aristotle's and Thomas' character and virtue ethics;
and (3) the stress on embodiment and location in liberation
theologies (including feminist and eco-ethics). The heart of his
proposal centers around a reading of Luther's "The Freedom of a
Christian." By way of a reading of Phil 2:4-11 and Luther's
commentary on Genesis, he argues that, for Luther, law gives
concrete form to the life of the human creature "deified by grace."
Drawing on David Yeago's interpretation of Luther, he contends that
although sinful humanity may receive the law as an external code,
for the subject "deified by grace" there is no difference between
"God's gospel" and "God's commandments."
[7] Hutter's critique of Protestant ethics and its alignment
with the excesses of modem ethics is a brilliant one, as is his
strong argument for a retrieval of the commandments in ethical
teaching. His argument is one that demands attention, and I concur
with much of its thrust. The question I have about his essay is
whether he fuses law and gospel in the lives of believers. For
example, does he maintain the distinctions Lutheran orthodoxy has
held among the three uses of the law (first or "civil" use, second
or "theological" use, and third or "exhortatory" use)? Or, are they
subsumed under a kind of "third use" unique to the Christian
community? In other words, does he do justice to what Lutherans
have called the first two uses of the law? The first use of the law
pertains to the presupposition (based on the first use of the law)
that all human beings, by way of their being created in the image
of God, have the capacity to perceive some understanding of law or
the "orders of creation" (the traditional name given for the
requisite conditions for human life in community). To affirm this
capacity is not to affirm an Enlightenment conception of
"autonomous" ethics but rather to affirm that God's power and
goodness encompass the totality of life and our creaturely - and
this includes nonbelievers'- participation in it. The second point
is its recognition (based on the second use of the law) that even
believers live in the tension between being made in God's image and
yet being fallen members of a humanity in bondage to sin. To affirm
that the law also functions as an "alien" demand in the believer's
life (which criticizes all the demonic distortions that keep us
from worshiping God and treating our neighbors justly, both
individual and corporate) does not entail negating its very
positive capacity to guide and give substance to that life. And
finally, and many would say, most importantly, we can ask whether
he does justice to the Lutheran distinction between law and
gospel - that the gospel truly is a free gift from God not contingent
on human works. That distinction, many would argue, is what
distinguishes Lutheran from other forms of ethics. A stronger
recognition of what is distinctive about a Lutheran understanding
of the gospel and the first two functions of the law would greatly
enhance his argument and bring to the fore what Lutheran ethics has
to offer other forms of ecclesial ethics (e.g., Calvinist,
Wesleyan, or Roman Catholic).
[8] The two "liberationist" essays make very different types of
arguments, but they share the "ecclesial" ethicists' emphasis on
particularity, embodiment, and community. Perry presents his case
from the vantage point of the experience of African American
Lutherans. Like Hütter, he is concerned that traditional Lutheran
preoccupations have tended to sever the connection between "who we
are before God" and "what we are doing among God's people." His
concern is not with appropriating Lutheranism for an African
American situation but with examining what the rich and diverse
traditions of African American Christianity have to offer
Lutheranism. In this appropriation, he makes an important
contribution to Lutheran ethics by tracing the actual history
whereby African Americans offered a corrective not only to racism
in this country but to the tendency in much of Lutheran ethics to
separate "right doctrine" from "right practice." What would make his
argument even more potent is if he concentrated more fully on how
African American conceptions of "who we are before God"
theologically inform "what we do among God's people." This would
establish even more firmly the rich theological resources African
American Christianity has to offer - and criticize - Lutheranism,
especially in its predominantly European American forms.
[9] A similar point can be made about the essay by Rasmussen and
Moe-Lobeda, which identifies the environment as the essay's concern
and the question of whether life as we know it is able to meet the
needs of "the expanding world and the rest of nature for present
and future generations" as its contextual problem (p. 132). But
unlike either Perry, who describes specific practices of advocacy,
or even the "ecclesial" essayists, who describe practices that form
Christian community, Rasmussen and Moe-Lobeda focus on a "reforming
dynamic," which, they argue, is the "proper dynamic of a Lutheran
ethic for our time" (p. 134). This dynamic does not negate the role
tradition plays as a "deposit" carried over time. Rather, it is
essentially a "dynamic" that moves within a "dialectic of
continuity and discontinuity" that interprets itself in light of
the "signs of the times" as these are read in and by the believing
community. I hope Rasmussen and Moe-Lobeda will develop even more
fully the theological import of their argument: specifically, how
this "reforming dynamic" (as a formal construct) is intrinsically
shaped by the material themes of "creation," "cross," and the
"response of faith"- themes they also discuss in the essay. They do
establish this connection in the essay, but it could be developed.
Doing so would make even more explicit how this reforming dynamic
is not merely a function of tradition and change but a function of
God's justice and mercy in the world and how we, as creatures, might
speak, think, and enact it.
[10] So far we have identified two major poles in this volume: a
pole emphasizing the substantive ethos of Christian community life
and a pole emphasizing its critical prophetic dynamism. Fredrickson
presupposes both poles in a "political" reading of Paul's ethics,
which draws a parallel between the way Paul conceptualizes the
local church and the "assembly" (ekklesia) of Greek city-states and
their democratic procedures for decision making. But what is
distinctive about Fredrickson's reading of Paul is its theological
thrust. The political process of the church's discernment as an
assembly is not simply a baptizing of Greek political procedures
but an actual and tangible participation in Christ's death and
resurrection by the power of the Spirit. The free speech of the
"political" assembly of the congregation is granted by the Spirit.
Unlike Hütter, who interprets the Christ hymn in Phil 2:5-11 in
relation to the role of law in the believers' lives, Fredrickson
focuses on how we are transformed by "the Spirit's free gift of the
mind of Christ to the community" (p. 124). Thus, the Christian
community's "political" process of deliberation cannot be divorced
from the "ethical" way in which we are, through Christ, mutually
enslaved and dedicated to one another's freedom. In turn, a focus
on "ethos" and communal identity cannot be divorced from the
"critical principle" in Paul's ethics, a principle defined
theologically by the freedom granted by the Spirit through Christ's
death for us. Fredrickson makes a truly innovative argument in this
essay, but he restricts his analysis to a close exegesis of the
texts he analyzes (2 Corinthians 3; Philippians 1:27-2:18; and
Romans 12-15). Hopefully, the argument he develops in this essay
will be developed more fully in future projects.
[11] This book makes a timely contribution to the "ethos" of
contemporary American Lutheran congregations. It identifies a
central modern problematic - individualism - and a central
theological problem in much of Lutheranism - the tendency to reduce
ethics to personal motivation. But the question remains whether
some of the resources in Lutheran theology that are either rejected
or neglected in this volume might not be the very ones needed to
help us respond to this situation. For example, a sharp distinction
between law and gospel may especially be needed in an age that
relies even for its "spirituality" on human technique. A strong
theology of the cross, with its critique of all forms of
self-salvation and its focus on Christ's death and resurrection, is
a needed message for our time. All our attempts at securing
ourselves - politically, economically, technologically, even by way
of our religious identities and forms of spirituality - are not
sufficient to save us from the realities of sin, death, and demonic
power. To say this is not to reject the law and the way it offers
substantive guidance for how to live. In other words, a corrective
to either an antinomianism or an interiorized interpretation of
justification need not entail a rejection of the core Reformation
insight - an insight that is essentially theological - that God and
God alone has the power to save.
[12] Furthermore, Lutheran theology has strong warrants for a
wholehearted affirmation of the doctrine of creation and its
implications for our lives as creatures within the very complexity
and nuance of all our interactions - interpersonal, political,
economic, technological - with other human beings and even the
natural and technical worlds of which we are a part. Rasmussen and
Moe-Lobeda grapple with this question the most (with their focus on
"sustainability"), but Perry deals with it as well (with regard to
institutional racism), as do Benne and Hütter (by way of retrieving
the "orders of creation" or "natural law") and Childs (by way of an
eschatological vision of creation). Yet much more could be said,
especially given the highly complex systems each one of us actually
participates in on a daily basis - from, e.g., the systems of the
mass media to those of a global economy. This task of fully
grappling with what it means to be 'creatures' is not a simple one,
but more guidance is needed if God's power and goodness are not
merely to be restricted to a Christian ghetto of interpersonal
encounters.
[13] And a focus on creation leads to the need for a
rehabilitation of a concept not discussed in a sustained fashion in
these essays: the doctrine of vocation. At issue here are the
concrete roles and relationships in which our "ethos" as Christians
- as individuals and communities - takes a palpable form as it
serves the neighbor's needs. Given the plethora of popular books,
Christian and otherwise, dealing with spirituality in everyday life
or finding one's purpose in life, it is surprising that these
essays touch on the doctrine of vocation only in a cursory fashion.
If writers like Deepak Chopra are not to corner the market on
spirituality in a capitalistic society, then theologians need to
retrieve those aspects of our tradition - such as the doctrine of
vocation - that do speak of God's justice and mercy in relation to
the actual circumstances that constitute most of our waking
hours.
[14] In spite of these concerns, these essays make an important
contribution with their stress on the concrete formation of
individuals and communities, whether or not one defines it
primarily in terms of "retrieval" or "reform." The need to attend
to ethos and community is probably one of the most important tasks
facing congregations in contemporary North America. And, an
individualistic, self-centered piety or spirituality is not
sufficient for facing this task. But this corrective should not
leave us with a truncated vision of the resources in the Lutheran
heritage. Perhaps a return to the themes discussed in that first
three-volume set - with echoes to the work of theological ethicists
of a previous generation like Sittler, Forell, and Lazareth - is
needed as a corrective to this corrective.
From dialog, Volume 38, Number 2 (Spring 1999).
See Karen
Bloomquist's response to reviews of The Promise of Lutheran
Ethics.
See John
Stumme's response to reviews of The Promise of Lutheran
Ethics.
© August 2002
Journal of Lutheran Ethics (JLE)
Volume 2, Issue 8
|
http://www.elca.org/What-We-Believe/Social-Issues/Journal-of-Lutheran-Ethics/Book-Reviews/The-Promise-of-Lutheran-Ethics-edited-by-Karen-L-Bloomquist-and-John-R-Stumme/Malcolm-on-The-Promise-of-Lutheran-Ethics.aspx
|
crawl-003
|
en
|
refinedweb
|
#include <rte_tailq.h>
The structure defining a tailq header entry for storing in the rte_config structure in shared memory. Each tailq is identified by name. Any library storing a set of objects e.g. rings, mempools, hash-tables, is recommended to use an entry here, so as to make it easy for a multi-process app to find already-created elements in shared memory.
Definition at line 69 of file rte_tailq.h.
NOTE: must be first element
Definition at line 70 of file rte_tailq.h.
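For orientation, here is an illustrative sketch (not taken from the generated documentation) of the typical registration-and-lookup pattern a library uses with this structure; the tailq name "MY_OBJ_LIST" and the helper function are placeholders, and the exact macros should be checked against rte_tailq.h for your release.
#include <rte_tailq.h>

/* A library registers its own named tailq so that any process, including a
 * secondary one, can locate the same list of objects in shared memory. */
static struct rte_tailq_elem my_obj_tailq = {
    .name = "MY_OBJ_LIST",   /* the name identifying this tailq */
};
EAL_REGISTER_TAILQ(my_obj_tailq)

static struct rte_tailq_entry_head *
my_obj_list(void)
{
    /* tailq_head is the first element of struct rte_tailq_head, so the
     * cast below simply yields the shared list head. */
    return RTE_TAILQ_CAST(my_obj_tailq.head, rte_tailq_entry_head);
}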
|
http://doc.dpdk.org/api-16.11/structrte__tailq__head.html
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
Pete Frisella, Google Analytics Developer Advocate – June 2013
This document describes how to run server-side experiments using Google Analytics.
Introduction
The standard implementation method for Google Analytics Content Experiments executes JavaScript in the browser to make variation choices. This makes testing easy to implement but experiments are limited to client-side changes. With the Content Experiments API it is possible to manage experiments and variations server-side.
Server-side implementations offer more flexibility to do things such as testing server-side logic (for example, the result set of a database query) or eliminating client-side redirects.
This guide provides implementation considerations and flows for server-side experiments.
Overview
The main steps to running experiments server-side are:
- Define the experiment that you want to run.
- Configure the experiment and objectives in Google Analytics.
- Handle users and Experiments
- Publish changes and Run the Experiment
Define the Experiment
The first step in any experiment is to define the original page, the variations to test, the objective of the experiment, and any other relevant parameters.
Defining variations is dependent on what you want to test. It could be a single element on a website page, or an entire page, the text size of an offer on a kiosk screen, or the result set of a database query. Goals and objectives will also vary depending on what you're testing and may involve minimizing or maximizing a goal or a predefined metric such as time on site, or page views.
The important thing is that you need to know the variations you'd like to test and have an objective to optimize in order to create and configure an experiment.
Configure the Experiment and Objectives in Google Analytics
Once you’ve defined the experiment and variations you’d like to test, configure the experiment using the Google Analytics web interface or Management API.
Web Interface
The steps to configure the experiment using the web interface are:
- Sign-in to the Google Analytics web interface and select the view (profile) in which you want to create the experiment.
- Click the Reporting tab
- Click Behavior > Experiments.
- Click Create experiment.
- Choose an experiment objective:
Select or create a goal as the experiment objective.
For details on using goals see Set up and edit Goals (Help Center). Once you've chosen an experiment objective, click Next Step to continue.
- Configure your experiment:
A name and URL is required for each variation. However, if you intend to eliminate redirects by making server-side changes then you can use any value for the variation URL since it won't be applicable in this case.
Setting the experiment variations. The URL is required but not applicable for most server-side implementations.
Once you've configured the experiment, click Next Step to continue.
- Setting up your experiment code:
Since this is a server-side implementation, you won't use the JavaScript experiment code provided in the web interface. However, the Experiment ID is important and is required to send data to Google Analytics.
The Experiment ID is needed to send experiment data to Google Analytics. It can be retrieved via the Management API.
Click Next Step to continue.
- Review and start:
Click Start Experiment or alternatively you can start the experiment after you've completed the implementation of the variations. If you receive a validation error, click Yes to ignore and continue.
Validation errors can be safely ignored since URLs are not required for this implementation.
For additional details and instructions on configuring an experiment see Run an Experiment (Help Center).
Management API
Experiments can be created, updated, and deleted programmatically using the
Management API. This is useful if you want to fully automate experiments. If
you do create and manage all experiments using the Management API then you
should set the
servingFramework parameter of the experiment to
API. However, if you intend to disable the multi-armed bandit
optimization used by Google Analytics for experiments then set
servingFramework to
EXTERNAL. See the
Experiments
Feature Reference and
Experiments Developer Guide for additional details.
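As a rough illustration (this snippet is not part of the original guide), creating such an experiment with the Python client might look like the following; the account, property, and view IDs reuse the placeholder values used elsewhere in this guide, and the field values are assumptions to be verified against the Experiments Feature Reference.
# Hypothetical sketch: create an experiment whose variations are served by
# your own server-side code. IDs, names, and URLs are placeholders.
body = {
    'name': 'My server-side experiment',
    'status': 'READY_TO_RUN',
    'objectiveMetric': 'ga:bounces',
    'servingFramework': 'API',   # or 'EXTERNAL' to disable GA's optimization
    'trafficCoverage': 1.0,
    'variations': [
        {'name': 'Original', 'url': 'http://example.com/'},
        {'name': 'Variation 1', 'url': 'http://example.com/'},
    ],
}

experiment = ANALYTICS_SERVICE.management().experiments().insert(
    accountId='1234',
    webPropertyId='UA-1234-1',
    profileId='56789',
    body=body).execute()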
Once you’ve configured an experiment and have an Experiment ID, you will need to handle choosing and showing variations to users when they are exposed to an experiment.
Handle Users and Experiments
As users interact with a property that has a running experiment, you need to determine if the user is new or returning to the experiment, which variation to show them, and then send experiment data to Google Analytics. The steps required to accomplish this are:
- Periodically refresh and store data for running experiments.
- Determine the experiment status for a user and what to show them.
- Send experiment data to Google Analytics and show the variation to the user.
Store and Retrieve Experiment Data
Storing and retrieving experiments data is applicable if you are relying on
the Google Analytics statistical engine and want to make a server-side
decision about an experiment and which variation to show a user (i.e.
servingFramework is set to
REDIRECT or
API). To make a
decision requires that you have up-to-date information about all the
experiments running for your property. This can be accomplished by
periodically querying the Management API for the latest experiments
information and storing this on your server. Google Analytics evaluates and
makes optimization decisions that are updated twice daily, so it is
recommended that you update your experiments info multiple times per day to
retrieve the latest variation weights and status for your running experiments.
The Management API list method can be used to query Google Analytics for a list of experiments and the get method can be used to retrieve experiment details for an individual experiment. This information can be saved on your server and cached for quick access.
When a user makes a request on your property you will need to determine at that time whether there is an experiment running. For this reason you should store experiment details in a manner that will make it easy to lookup and retrieve any relevant experiment info. For example, for a website you may want to use a content ID or the URL as an index mapped to experiment IDs.
Example:
The following Python code shows a simple handler that will refresh experiments data periodically using a scheduled task with cron for AppEngine. This is not a comprehensive example but is for illustrative purposes only.
class RefreshExperimentsHandler(BaseHandler):
  """Handles periodic refresh for a scheduled task with cron for AppEngine."""

  def get(self):
    experiments = get_experiments()
    update_experiments_db(experiments)


def get_experiments():
  """Queries the Management API for all active experiments."""
  try:
    experiments = ANALYTICS_SERVICE.management().experiments().list(
        accountId='1234',
        webPropertyId='UA-1234-1',
        profileId='56789').execute()

  except TypeError, error:
    # Handle errors in constructing a query.
    logging.error('There was an error constructing the query : %s' % error)

  except HttpError, error:
    # Handle API errors.
    logging.error('API error : %s : %s' % (error.resp.status,
                                           error._get_reason()))
  return experiments


def update_experiments_db(experiments):
  """Updates the datastore with the provided experiment data.

  Args:
    experiments: A list of experiments.
  """
  if experiments:
    for experiment in experiments.get('items', []):
      experiment_key = db.Key.from_path('Experiment', experiment.get('id'))
      experiment_db = db.get(experiment_key)

      # Update experiment values
      experiment_db.status = experiment.get('status')
      experiment_db.start_time = experiment.get('startTime')
      ...  # Continue to update all properties

      # Update memcache with the experiment values.
      memcache.set(experiment.get('id'), experiment)

      # Update Variations
      for index, variation in enumerate(experiment.get('variations', [])):
        variation_db = experiment_db.variations.get_by_id(index)
        variation_db.status = variation.get('status')
        variation_db.weight = variation.get('weight')
        variation_db.won = variation.get('won')
        ...  # Continue updating variation properties
        variation_db.put()

      experiment_db.put()
And an example
cron.yaml file:
cron:
- description: refresh experiments info
  url: /refresh_experiments
  schedule: every 12 hours
Store Experiment Information for Users
When a user interacts with your property and is exposed to an experiment for the first time you need to make various checks and decisions as to whether they should be included in an experiment and whether to show them a variation or the original. Once these choices are made they should remain the same for the user on subsequent exposures to the same experiment. For this reason it is necessary to anonymously store experiment details for a user in a location that is secure but accessible to you whenever a user interacts with your property.
For websites, the recommended approach is to write values to a cookie and for implementations where there is no client-side storage mechanism, this information will need to be saved server-side using some sort of lookup table that maps anonymous but stable user IDs to experiment details for a user.
There are two experiment values that need to be stored to handle experiments consistently for returning users. For each experiment the user is exposed to, save the following details:
- Experiment ID—The ID of the experiment the user has been exposed to.
- Variation ID—The index of the variation chosen for the user. Google Analytics represents variations as a list, using a 0-based index, where the first element in the list is the "Original". The list of variations for an experiment is always in the same order and cannot be modified once an experiment is running. Therefore, the index of a variation can be used as the Variation ID. For users that are not included in the experiment, it is recommended that you use a value of -1 as the Variation ID (see the sketch below).
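A minimal sketch of one way to persist these two values for a website (the cookie name and "expid.varid" format are illustrative, and a webapp2-style request/response object is assumed):
def save_experiment_choice(response, experiment_id, variation_id):
  """Stores the experiment/variation choice in a cookie (illustrative)."""
  # e.g. "AbC123xyz.2"; use -1 as the variation for excluded users.
  response.set_cookie('experiment_choice',
                      '%s.%s' % (experiment_id, variation_id),
                      max_age=90 * 24 * 60 * 60)  # remember for ~90 days


def read_experiment_choice(request):
  """Returns (experiment_id, variation_id) from the cookie, or (None, None)."""
  value = request.cookies.get('experiment_choice')
  if not value:
    return None, None
  experiment_id, variation_id = value.rsplit('.', 1)
  return experiment_id, int(variation_id)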
Once you have set up a way to periodically refresh and store experiments info, the next step is to consider how to make decisions and show variations for users that are exposed to experiments.
Choose a Variation
When a user interacts with your property there are multiple checks that need to be made to handle a user's exposure to a running experiment. The rest of this section provides questions to help explain the proper steps to take depending on the user and experiment configuration.
For the current user interaction (e.g. pageview), is there an experiment running?
Refer to the experiment information stored on your server to determine if there is an active experiment running for the particular user interaction. In other words, you need to figure out if the user is interacting with the 'Original' of an experiment. For example, a user visits a page on a website that has been configured as the original page for an experiment.
- Yes: continue to the next check.
- No: There is no experiment running, so show the user whatever they requested and skip the rest of the checks. It is not necessary to send experiment data for this user to Google Analytics.
Example:
This is a sample function for an AppEngine application that uses a
Page entity to model pages and an
Experiment entity
to store details about experiments.
def is_experiment_running(page_id):
  """Checks if an experiment is currently running for a page.

  Args:
    page_id: The ID of the page to check for a running experiment.

  Returns:
    A Boolean indicating whether an experiment is running or not.
  """
  try:
    page = db_models.Page.get_by_key_name(page_id)
  except db.BadKeyError:
    return False

  if page:
    experiment_id = page.experiment_id
    try:
      experiment = db_models.Experiment.get_by_key_name(experiment_id)
    except db.BadKeyError:
      return False
    if experiment:
      return experiment.status == "RUNNING"

  return False
Has the user been previously exposed to this experiment?
To determine whether a user has been exposed requires that you previously saved this information for the user in a location that is accessible to you. For websites the recommended approach is to write a value to a cookie and for implementations where there is no client-side storage mechanism this information will need to be saved server-side using some sort of lookup table that maps anonymous user IDs to experiment IDs and the variant selected for the user. See Store Experiment Information for Users for details. The point is that you need to have some way of determining whether a user has previously been exposed to experiments.
Attempt to retrieve any stored experiment info for the user to determine if the user has been previously exposed to this experiment.
- Yes: is the variation that was previously selected for this user still
ACTIVE?
- Yes: you have the variation information for the user, skip the rest of the checks.
- No: the variation is no longer active, show the original and skip the rest of the checks. It is not necessary to send experiment data for this user to Google Analytics.
- No: continue to the next check.
Should the user be included in the experiment?
You can determine whether the user should be included in the experiment by
using the
trafficCoverage value of an experiment. Choose a random
number in the range
0.0 to
1.0. If the random
number is less than or equal to
the
trafficCoverage for the experiment, then include the user in
the experiment.
- Yes: continue to the next check.
- No: store experiment information for this user to indicate they should not be included in this experiment (see Store Experiment Information for Users ). Show them the original and skip the rest of the checks. It is not necessary to send experiment data for this user to Google Analytics.
Example:
The following is a sample function to determine whether a user should be included in an experiment.
import random


def should_user_participate(traffic_coverage):
  """A utility to decide whether a new user should be included in an experiment.

  Args:
    traffic_coverage: The fraction of traffic that should participate in the
        experiment.

  Returns:
    A Boolean indicating whether the user should be included in the experiment
    or not.
  """
  random_spin = random.random()
  return random_spin <= traffic_coverage
Choose a variation for the user
If a user has never been exposed to an experiment and they have been selected
to be included in the experiment, then you will need to choose a variation to
show the user. The first step is to retrieve all of the
ACTIVE
variations for the experiment they have been exposed to.
The second step is to randomly choose a
variation to show based on the weights of each
variation.
Example:
The following is a sample function that will choose a variation based on a list of variations and their weights.
import random


def choose_variation(variations):
  """Chooses a variation based on weights and status.

  Args:
    variations: A collection of variations to choose from.

  Returns:
    An Integer representing the index of the chosen variation.
  """
  random_spin = random.random()
  cumulative_weights = 0

  for index, variation in enumerate(variations):
    if variation.get('status') == 'ACTIVE':
      cumulative_weights += variation.get('weight')
      if random_spin < cumulative_weights:
        return index

  return 0
Send Experiment Data and Show a Variation
When a user is exposed to an experiment, and if they are selected to be included in the experiment, and the variation to show them is active, then you need to send the Experiment ID and Variation ID to Google Analytics. See Store Experiment Information for Users for details on the experiment values.
- Experiment ID — the ID of the experiment the user has been exposed to.
- Variation ID — the variation shown to the user. An integer value representing the variation in Google Analytics for the experiment. For details see Determining experiment status for a user and Choosing a variation.
If you are managing experiments and have set the
servingFramework field of an experiment to
EXTERNAL,
as described in Handling Users and Experiments, then
it is likely you have an internal ID for experiment variations.
In this case, to send experiment data to Google Analytics, you will need to
map your internal variation ID to the variation number that
Google Analytics has assigned to the matching variation.
For example, if a website has an experiment running on a single page of the website, experiment data needs to be sent to Google Analytics when a user is exposed to that single page (assuming the user is included in the experiment). For pageviews on other parts of the site where no experiments are running, it is not necessary to send experiment data to Google Analytics, since this is handled automatically for you. If the user returns to the page where the experiment is running, experiment data needs to be sent again to Google Analytics and any other time the user visits the page until the experiment has ended.
For website implementations it is recommended to set the Experiment ID and Variation ID by dynamically adding JavaScript to the variation page shown to the user. The JavaScript will execute and send the experiment data to Google Analytics when the page is rendered by the user's browser. Review the developer guides for the collection library you are using for additional implementation details.
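For illustration only, the injected snippet might look roughly like this, assuming the analytics.js collection library and its experiment fields (the ID and index shown are placeholders):
// Hypothetical analytics.js example: set the experiment fields before
// sending the pageview. The values come from the server-side choice above.
ga('set', 'expId', 'EXPERIMENT_ID');  // the Experiment ID
ga('set', 'expVar', '1');             // the chosen Variation ID (index)
ga('send', 'pageview');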
Publish and Run the Experiment
Once you’ve configured the experiment and have implemented the server-side logic and any changes to the original page, the next step is to ensure that the experiment is running and make the changes live.
After the experiment has ended you can make changes to the original page and remove any experiment-related logic from the page.
Tips and Considerations
- Predefined metrics such as time on site, pageviews, revenue, etc. can be used for Experiment objectives instead of Goals.
Variation Weights
Google Analytics uses a multi-armed bandit approach to managing online experiments which automatically calculates weights and sets the status of each variation in an experiment. The weights can then be used to randomly choose a variation to show a user that is exposed to an experiment for the first time. Google Analytics automatically updates these weights twice daily by evaluating the performance of each variation. To learn more about the statistical engine that Google Analytics uses to manage experiments, see Multi-armed bandit experiments.
|
https://developers.google.com/analytics/solutions/experiments-server-side
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
Create a new Java Class Rectangle – This Java tutorial will explain the creation of a new Java class named Rectangle with the following class specification:
How To Create a New Rectangle Class in Java
- Write a Java program using Objects and Classes concepts in Object Oriented Programming. Define the Rectangle class that will have:
- Two member variables width and height,
- A no-arg constructor that will create the default rectangle with width = 1 and height = 1.
- A parameterized constructor that will create a rectangle with the specified width and height.
- A method getArea() that will return the area of the rectangle.
Note: Formula for area of rectangle is (width x height)
Formula for perimeter of rectangle is: 2 x (width+height)
A method getPerimeter() that will return the perimeter of the rectangle.
Write a test program that creates two rectangle objects. It creates the first object with default values and the second object with user specified values. Test all the methods of the class for both the objects.
How to Calculate Area and Perimeter in Rectangle Class
Formula for Area of a Rectangle
To calculate the area of a rectangle, we need the number of square units required to cover the surface of the figure.
Hence, the formula is: Area = Length x Width
Formula for perimeter of Rectangle
The perimeter may be defined as the distance around a figure.
Hence formula is:
Perimeter of Rectangle = 2 x Length + 2 x Width
Perimeter of Rectangle = 2 (Length + Width)
The Source Code of Java Rectangle Class Program
/*
 * Write a Java program to create a new class called 'Rectangle' with length
 * and width fields, a no-argument constructor and a parameterized constructor.
 * Moreover add two methods to calculate and return area and perimeter of the
 * rectangle. Write a test class to use this Rectangle class. Create different
 * objects initialized by constructors. Call methods for these objects and
 * show results.
 */
package testrectangle;

/**
 * @author
 */
class Rectangle {
    // define two fields
    double length, width;

    // define no-arg constructor
    Rectangle() {
        length = 1;
        width = 1;
    }

    // define parameterized constructor
    Rectangle(double length, double width) {
        this.length = length;
        this.width = width;
    }

    // define a method to return area
    double getArea() {
        return (length * width);
    }

    // define a method to return perimeter
    double getPerimeter() {
        return (2 * (length + width));
    }
}

public class TestRectangle {
    public static void main(String[] args) {
        // create first object and initialize with the no-arg constructor
        Rectangle rect1 = new Rectangle();

        // create second object and initialize with the parameterized constructor
        Rectangle rect2 = new Rectangle(15.0, 8.0);

        System.out.println("Area of first object=" + rect1.getArea());
        System.out.println("Perimeter of first object=" + rect1.getPerimeter());
        System.out.println("Area of second object=" + rect2.getArea());
        System.out.println("Perimeter of second object=" + rect2.getPerimeter());
    }
}
The output of a sample run of the Java Program
Area of first object=1.0
Perimeter of first object=4.0
Area of second object=120.0
Perimeter of second object=46.0
|
https://easycodebook.com/create-a-new-java-class-rectangle-java-program/
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
Hi,
this must have been answered many times already, but I searched the
archives, online docs, but couldn't find anything.
If I do:
$ python
Python 2.6.2 (release26-maint, Apr 19 2009, 01:58:18)
[GCC 4.3.3] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import pylab
>>> pylab.plot([1, 3, 3])
[<matplotlib.lines.Line2D object at 0x2154350>]
>>> pylab.show()
>>> pylab.show()
the first pylab.show() shows the plot and stays hanging (this is ok)
and then if I close it, to get back to the shell, the second call to
show() does nothing.
One fix is to use:
ipython --pylab
but if I just want to call regular python, or from my own script ---
how do I plot for the second time?
Ondrej
|
https://discourse.matplotlib.org/t/calling-show-twice-in-a-row/11579
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
A few months ago, we saw how to write a filtering syntax tree in Python. The idea behind this was to create a data structure — in the form of a dictionary — that would allow us to filter data based on conditions.
Our API looked like this:
>>> f = Filter(
...     {"and": [
...         {"eq": ("foo", 3)},
...         {"gt": ("bar", 4)},
...     ]},
... )
>>> f(foo=3, bar=5)
True
>>> f(foo=4, bar=5)
False
While such a mechanism is pretty powerful to use, the input data structure format might not be user friendly. It's great to use, for example, with a JSON based REST API, but it's pretty terrible to use for a command-line interface.
A good solution to that problem is to build our own language. That's called a DSL.
Building a DSL
What's a Domain-Specific Language (DSL)? It's a computer language that is specialized to a certain domain. In our case, our domain is filtering, as we're providing a Filter class that allows filtering a set of values.
How do you build a data structure such as
{"and": [{"eq": ("foo", 3)}, {"gt": ("bar", 4)}]} from a string? Well, you define a language, parse it, and then convert it to the right format.
In order to parse a language, there are a lot of different solutions, from implementing manual parsers to using regular expressions. In this case, we'll use lexical analysis.
First Iteration
Let's start small and define the base of our grammar. That should be something simple, so we'll go with
<identifier><operator><value>. For example
"foobar"="baz" is a valid sentence in our grammar and will convert to
{"=": ("foobar", "baz")}.
The following code snippet leverages pyparsing for parsing the string and specifying the grammar:
import pyparsing

identifier = pyparsing.QuotedString('"')
operator = (
    pyparsing.Literal("=")
    | pyparsing.Literal("≠")
    | pyparsing.Literal("≥")
    | pyparsing.Literal("≤")
    | pyparsing.Literal("<")
    | pyparsing.Literal(">")
)
value = pyparsing.QuotedString('"')
match_format = identifier + operator + value

print(match_format.parseString('"foobar"="123"'))
# Prints:
# ['foobar', '=', '123']
With that simple grammar, we can parse and get a token list composed of our 3 items: the identifier, the operator and the value.
Transforming the Data
The list above in the format
[identifier, operator, value] is not really what we need in the end. We need something like
{operator: (identifier, value)}. We can leverage pyparsing API to help us with that.
def list_to_dict(pos, tokens):
    return {tokens[1]: (tokens[0], tokens[2])}


match_format = (identifier + operator + value).setParseAction(list_to_dict)

print(match_format.parseString('"foobar"="123"'))
# Prints:
# [{'=': ('foobar', '123')}]
The setParseAction method allows us to modify the value returned for a grammar token. In this case, we transform the list into the dict we need.
Plugging the Parser and the Filter
In the following code, we'll reuse the
Filter class we wrote in our previous post. We'll just add the following code to our previous example:
def parse_string(s):
    return match_format.parseString(s, parseAll=True)[0]


f = Filter(parse_string('"foobar"="baz"'))
print(f(foobar="baz"))
print(f(foobar="biz"))
# Prints:
# True
# False
Now, we have a pretty simple parser and a good way to build a
Filter object from a string.
As our Filter object supports complex and nested operations, such as and and or, we could also add those to the grammar — I'll leave that to you, reader, as an exercise!
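For what it's worth, here is one rough sketch of that exercise (not from the original post): it reuses the match_format defined above and parses parenthesized groups joined by the and/or keywords. The grammar below is only one possible approach.
expression = pyparsing.Forward()

def group_to_dict(keyword):
    # Build {"and": [...]} or {"or": [...]} from the parsed sub-expressions.
    return lambda pos, tokens: {keyword: list(tokens)}

and_group = (
    pyparsing.Suppress("(")
    + expression
    + pyparsing.OneOrMore(pyparsing.Suppress("and") + expression)
    + pyparsing.Suppress(")")
).setParseAction(group_to_dict("and"))

or_group = (
    pyparsing.Suppress("(")
    + expression
    + pyparsing.OneOrMore(pyparsing.Suppress("or") + expression)
    + pyparsing.Suppress(")")
).setParseAction(group_to_dict("or"))

expression <<= match_format | and_group | or_group

f = Filter(expression.parseString('("foo"="1" and "bar"="2")', parseAll=True)[0])
print(f(foo="1", bar="2"))
# Expected:
# True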
Building your own Grammar
pyparsing makes it easy to build one's own grammar. However, it should not be abused: building a DSL means that your users will have to discover and learn it. If it's very different from what they already know and what already exists, it might be cumbersome for them.
Finally, if you're curious and want to see a real world usage, Mergify condition system leverages pyparsing to implement its parser. Check it out!
|
https://julien.danjou.info/writing-your-own-filtering-dsl-in-python/
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
using ResourceProject.Resources;
public class Register
{
    private string email;
    private ResourceManager resManager = new ResourceManager(
        "ResourceProject.Resources.strings", Assembly.GetExecutingAssembly());

    public Register()
    {
        email = resManager.GetString("please_enter_email").ToString();
    }
}
The GetString() call above cannot find the resources in the executing assembly even though the resource project is added as a reference
in the Silverlight project and the bindings in XAML in the same project as this class work perfectly and have no issues finding the resource files.
I suspect the Assembly in the ResourceManager constructor is not right.
Hi,
The code you provided above looks fine,
I think the problem is you may not generate resource file correctly, please follow below link (Steps are given in in-line comment)
Best Regards,
|
https://social.msdn.microsoft.com/Forums/en-US/40ab61ad-6fed-45dc-bf81-b9558df83ddf/cannot-find-localized-resources?forum=silverlightnet
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
Hi,
We are using Infragistics2.Excel assembly to export data from Infragistics UltraGrid and all of a sudden, it is throwing below error:
packageFactory cannot be null. When saving to Excel2007 workbook format and using the Infragistics2.Excel assembly, you must provide an IPackageFactory to handle the packaging of data. If you are using the DotNet Framework 3.0 or higher, use the Infragistics3.Excel assembly instead, and the packaging will be handled by the WindowsBase class
My windows application is currently running on .NET Framework 4.5 and using Infragistics2 v9.2. We could not upgrade the Infragistics to 3.0 at this point so please let me know if there are any way to fix this problem:
Code:
this.ultraGridExcelExporter1.Export(this.grdActiveMonitor, saveDialog.FileName);
Thanks,
Murali
There are only 3 ways to solve this issue, since there is no built-in support for packaging in .NET 2.0, against which the Infragistics2 assemblies are built: (1) use the Infragistics3 assemblies, (2) provide a custom IPackageFactory implementation, or (3) save in an older workbook format.
If option 1 is not available, you must either provide a custom IPackageFactory implementation or save in an older format. Why are you not able to use the Infragistics3 assemblies in this case? They should be included with the normal Infragistics2 install.
I realize this is an old post, but I need help with this very issue. I am using Infragistics ver 11.1.20111.2111. I tried the method you suggested by replacing the Infragistics2 Excel assembly with the Infragistics3 Excel assembly.
On this line:
Me.UltraGridExcelExporter1.Export(Me.GrdTimeDisplay, wksheet, 0, 0)
I get the following error when I do that.
Error 1 Reference required to assembly 'Infragistics2.Documents.Excel.v11.1, Version=11.1.20111.2111, Culture=neutral, PublicKeyToken=7dd5c3163f2cd0cb' containing the type 'Infragistics.Documents.Excel.Workbook'. Add one to your project. C:\Visual Studio 2013\Projects\Query Plus VS2013\QueryPlusCopyRightVS2013\QueryPlus\QueryPlus7_2\UI\WinForms\frmClaims.vb 2419 13 QueryPlus7_2
When I keep both the Infragistics2 and Infragistics3 Excel assemblies I get the following.
Dim oExcel As New Excel.Workbook
I get the following error.
Error 1 'Workbook' is ambiguous in the namespace 'Infragistics.Documents.Excel'. C:\Visual Studio 2013\Projects\Query Plus VS2013\QueryPlusCopyRightVS2013\QueryPlus\QueryPlus7_2\UI\WinForms\frmClaims.vb 2416 31 QueryPlus7_2
How do I fix this?
|
https://www.infragistics.com/community/forums/f/ultimate-ui-for-windows-forms/93014/packagefactory-error-while-exporting-data-using-infragistics2-excel
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
Dan Abramov’s react-redux library gives us a way to connect our redux store to our react components. It saves us from having to pass props all the way down and gives us some vDom optimizations for free.
The only downside is having to write all those mapStateToProps functions, but now, thanks to two new React features, we may no longer have to.
The new Context API allows us to share state throughout the component hierarchy. This works much like the Provider component from react-redux allowing us to make the store available to the entire component tree.
const MyContext = React.createContext(someInitialValue);

<MyContext.Provider value={someValueToProvide}>
  <MyApp />
</MyContext.Provider>

...

<MyContext.Consumer>
  {providedValue => <MyComponent someAttribute={providedValue} />}
</MyContext.Consumer>
The experimental React Hooks API allows us to write functional components with local state and side effects, and also to access context.
import React, { useState, useEffect, useContext } from 'react';

const MyComponent = () => {
  const [someValue, update] = useState(someInitialValue);
  const providedValue = useContext(MyContext);

  useEffect(() => {
    someSideEffect();
    return someCleanUpFunction; // e.g. unsubscribe
  });

  return <input type="text" value={someValue} onChange={update} />;
};
Combining the built in hooks to create our own custom hooks enables us to extract and share common logic.
import React, { useState, useEffect } from 'react';

const useMyHook = defaultValue => {
  const [data, update] = useState(defaultValue);

  useEffect(() => {
    const callback = res => update(res);
    const unsubscribe = getData(callback);
    return unsubscribe;
  });

  return data;
};

const MyComponent = () => {
  const data = useMyHook('');
  return <div>{data}</div>;
};
Of course, the most common logic in our components is for interacting with the store.
We can use the useContext hook to access the store from within a functional component. This means we can set local state with getState and selectors, and subscribe to future updates as a side effect. We can even choose to update local state only when it would be different thereby avoiding unnecessary re-renders. We can also access dispatch for dispatching actions to the store.
Let’s see what it would look like…
import Store from 'hux';

const store = createStore(reducer);

const EntryPoint = () => (
<Store.Provider value={store}>
<App />
</Store.Provider>
);
…and in the component…
import { useDispatch, useSelector } from 'hux';

const MyComponent = id => {
  const value = useSelector(mySelector, id);
  const update = useDispatch(myActionCreator);
  return (
    <input
      type="text"
      value={value}
      onChange={e => update(e.target.value)} />
  );
};
…and this is where the magic happens…
import React, { useContext, useEffect, useState } from 'react';
import { compose } from 'redux';

const Store = React.createContext();

export const useDispatch = actionCreator => {
  const { dispatch } = useContext(Store);
  return compose(dispatch, actionCreator);
};

export const useSelector = (selector, ...rest) => {
  const { subscribe, getState } = useContext(Store);

  const select = () => selector(
    getState(),
    ...rest
  );

  const initial = select();
  const [value, update] = useState(initial);

  const listener = () => {
    const next = select();
    if (next !== value) {
      update(next);
    }
  };

  useEffect(() => subscribe(listener));

  return value;
};

export default Store;
Here we have defined two custom hooks.
The first, useDispatch, uses the useContext hook internally to get the store's dispatch method then composes it with an action creator passed in from the component. The function it returns will take n arguments, pass them to the action creator and then dispatch the result.
The second hook is a little more complicated. We use the useState hook to store the value we get from passing the current state to the selector. Next we create a side effect using the useEffect hook. Our side effect is to subscribe to the redux store and then update local state whenever the result of our selector changes. The subscribe method returns an implicit unsubscribe which is used by the React Hooks API to clean up afterwards.
Finally we return the resulting value for use in the component. Note how we only update the value returned from
useSelector if it is not strictly equal to the previous value. This will prevent components from re-rendering when nothing has changed.
Want this as a library? Get it from npm!
|
https://medium.com/@thomas_james_byers/context-hooks-api-react-redux-db6afcbe13d0?source=---------3------------------
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
We’re building a hybrid mobile app out of an existing web app. The front end will run out of a webview, and we’re adapting the back end to run on-device in React Native.
One of the challenges we encountered was how to share code between our existing monorepo built with webpack and our new React Native app (which bundles with Metro).
Though it’s all TypeScript, this wasn’t super straightforward. In this post, I’ll describe how we handled it.
Plan A
Let’s call the existing web app web-monorepo and the new React Native app offline-rn-app.
We initially attempted to reference sources directly, so that from offline-rn-app, you could
import { foo } from "web-monorepo/bar";. We encountered a lot of friction with this approach. For example:
- Though React Native now has TypeScript support out of the box (😃), it uses Babel’s built-in TypeScript handling, which has some known limitations, and it couldn’t handle some patterns in our existing code (😩).
- Our application code uses a handful of webpack-specific features, like requiring .graphql or .yml files with special loaders, and importing whole directories of files with require.context.
With enough time and research, it would probably be possible to work around issues like these, but it felt like we were swimming upstream. As we struggled with the particulars of our webpack configuration, we thought, “Why not let webpack do its thing, and then consume its output?”
Plan B
We already had two webpack entry points in the web-monorepo project: one for the React front end, and one for the Express backend. Our idea was to add a third, producing a library that can be consumed by the React Native app.
This way, we get the real TypeScript compiler plus all of the webpack loaders, and a bunch of dead code can get shaken out.
Spoiler: It worked.
Details
Here are the important parts of the configuration we’ve settled on.
package.json
The first step toward building an installable library out of our existing Node project was to drop a few additions in its package.json:
// package.json
"main": "lib/index.js",
"types": "lib/entry/mobile-server.d.ts",
"files": [
  "lib/*"
],
"scripts": {
  "build:library": "webpack --config ./webpack/library.config.js"
}
The “main” and “types” fields are used by the consuming app upon installing our package. The “files” globs specify what we want consuming apps to get when they install the library. Lastly, the “build:library” convenience script leads us to our new webpack config.
Webpack and tsconfig
Once we declared the files we intend to distribute, it was time to build them. Here’s the new webpack config:
// webpack/library.config.js
const path = require("path");

module.exports = {
  mode: "development",
  entry: { "mobile-server": "./entry/mobile-server.ts" },
  devtool: "source-map",
  output: {
    path: path.resolve(__dirname, "../lib"),
    filename: "index.js",
    library: "my-lib",
    libraryTarget: "umd"
  },
  module: {
    rules: [
      {
        test: /\.tsx?$/,
        use: [
          {
            loader: "ts-loader",
            options: { configFile: "tsconfig.library.json" }
          }
        ]
      }
    ]
  },
  resolve: {
    extensions: [".ts", ".tsx", ".js"],
    modules: [path.resolve(__dirname, "../modules"), "node_modules"]
  },
  externals: ["cheerio", "config"]
};
From an input of
entry/mobile-server.ts, this produces the
lib/index.js that package.json’s main is expecting and a source map to go with it.
While most third-party code is bundled in, we can add externals for packages that we want the consuming app to provide. (More on this later.)
We’ve also supplied a custom tsconfig:
// tsconfig.library.json
{
  "extends": "./tsconfig",
  "compilerOptions": {
    "module": "es6",
    "target": "es5",
    "allowJs": false,
    "noEmit": false,
    "declaration": true,
    "declarationMap": true,
    "lib": [],
    "outDir": "lib",
    "resolveJsonModule": true
  },
  "include": ["entry/mobile-server.ts"],
  "exclude": ["node_modules", "dist", "lib"]
}
There’s nothing too interesting here except for that empty lib list, which prevents the library build from inadvertently using APIs that aren’t available in React Native, like browser DOM or Node.js filesystem access.
With these in place, we can now
yarn build:library to produce
lib/.
Packaging
We don’t intend to publish our new library to a package repository, so we’ll need to reference it via one of the other patterns you can yarn add, like a file path or a GitHub repo.
After a little experimentation, we settled on producing a tarball with
yarn pack. This makes for a nice single artifact to share between jobs in our CircleCI workflow.
Consuming the library
From offline-rn-app, we reference the tarball like this:
// package.json "my-lib": "../web-monorepo/my-lib-1.0.0.tgz"
On the React Native side, using this feels about like using any other third-party library.
Recall the “externals” specified in the webpack config above? Some of the code we’re sharing depends on libraries that aren’t quite compatible with React Native. We may eventually migrate away from them, but for now, we have a decent workaround.
On the library side, we externalize the problematic modules to make the consuming app deal with it.
In the React Native app, we deal with it by swapping in alternate implementations. To do this, we added babel-plugin-module-resolver, which allows you to alias modules arbitrarily:
// babel.config.js
module.exports = {
  presets: [
    "module:metro-react-native-babel-preset",
    "@babel/preset-typescript"
  ],
  plugins: [
    [
      require.resolve("babel-plugin-module-resolver"),
      {
        alias: {
          cheerio: "react-native-cheerio",
          config: "./src/mobile-config.ts"
        }
      }
    ]
  ]
};
..and voila! We have code from our Express server running in React Native.
Future Improvement: Editor Experience
One rough patch I hope to smooth out in the future is the editor experience when working in the monorepo. VS Code only knows about one of our tsconfigs. So when I’m editing
foo.ts, I’ll get squiggles according one of my build targets, but I may introduce errors that I won’t see until next time I compile the other target from the command line.
Another tradeoff we made with the move to Plan B is that we can no longer F12 from offline-rn-app’s sources into web-monorepo’s; instead, when you go to definition across the boundary, you land on the library’s type definitions. Could source maps improve on this?
Conclusion
Our solution involves a couple of compromises and a fair amount of complexity, but overall, this approach is working well.
Have you shared code between browser, server, and mobile? How’d it go?
|
https://spin.atomicobject.com/2019/09/24/typescript-web-react-native/
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
Paperclips pageprint tutorial
< Back to Nebula Paperclips page
The PagePrint class allows you to easily add page headers and footers to any document. Page numbering is supported, and the page number format is customizable.
PagePrint decorates every page with a header and/or footer. Headers and footers are instances of the PageDecoration interface, which you implement. PageDecoration has only a single method:
public Print createPrint (PageNumber pageNumber);
For each page, PagePrint will call createPrint(PageNumber) on your decoration. The Print returned by your PageDecoration will be used as the header or footer for that page.
An example page decoration:
public class MyPageDecoration implements PageDecoration {
  public Print createPrint (PageNumber pageNumber) {
    return new PageNumberPrint(pageNumber);
  }
}
PageDecoration footer = new MyPageDecoration();
PageNumberPrint is a special Print class which prints (you guess it!) the page number. The PageNumber is supplied by the PagePrint class, so you do not need to implement it yourself. PageNumberPrint has properties for setting the font, color, horizontal alignment, and text formatting. See the Javadocs for more detail.
Let's try changing our footer to display a timestamp on the left side, and a page number on the right side:
public class MyPageDecoration implements PageDecoration {
  String now = new Date().toString();

  public Print createPrint (PageNumber pageNumber) {
    GridPrint grid = new GridPrint("d:g, r:d");
    grid.add(new TextPrint(now));
    grid.add(new PageNumberPrint(pageNumber, SWT.RIGHT));
    return grid;
  }
}
It is somewhat common to have a header or footer with only the page number, so PaperClips includes a special page decoration for this case: PageNumberPageDecoration. This class has the same properties as PageNumberPrint for customizing the appearance of the page number.
Most document headers and footers are exactly the same, except for the page number. It is usually the case that either the header or footer is exactly the same on every page. PaperClips provides a special page decoration for this case: SimplePageDecoration.
For example, let's design a static header with some title text on the left side, and a company logo on the right side. To do this, simply construct a GridPrint with the text and image, then pass that grid to SimplePageDecoration's constructor:
// Construct the header
GridPrint grid = new GridPrint("d:g, p");
grid.add(new TextPrint(title));
grid.add(new ImagePrint(logo));

// Create the page decoration from the header
PageDecoration header = new SimplePageDecoration(grid);
Putting It Together
Once you have your header and/or footer ready, you are ready to decorate your document.
Print body = ...
PageDecoration header = ... (null if no header)
PageDecoration footer = ... (null if no footer)

PagePrint print = new PagePrint(body, header, footer);

// Set the vertical gap between the header, body, and footer.
// The header gap only applies if there is actually a header
print.setHeaderGap(9); // 9 points = 9/72" = 1/8" = 3.175mm
// The footer gap only applies if there is actually a footer
print.setFooterGap(18); // 18 points = 18/72" = 1/4" = 6.35mm

// At this point you are ready to print the document
PrintJob job = new PrintJob("Headers and footers!", print);
job.setMargins(36); // 36 points = 36/72" = 1/2" = 12.7mm
PaperClips.print(job, new PrinterData());
Conclusion
This document should help you get started using page numbers.
However this is a first draft, so if there's any details I've left out, please post them on the Sourceforge forums and I will do my best to fill in the blanks.
|
http://wiki.eclipse.org/index.php?title=Paperclips_pageprint_tutorial&oldid=434459
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
I have a generic method like this:
public class MyClass
{
public T MyMethod<T>(string arg)
{
return default(T);
}
}
When I try to call the method like this:
int someInt = MyClassInstance.MyMethod<int>("some_string");
I get the following error:
Attempting to JIT compile method 'MyClass.MyMethod<int> (string)' while running with --aot-only.
This also happens to bool and float types but it works for string.
Would you guys know why this happens? Thanks!
Answer by brianturner
·
Jan 29, 2015 at 02:42 PM
This happened because just-in-time (JIT) compilation does not work on iOS. All code must be ahead-of-time (AOT) compatible. default() sometimes uses JIT. When this happens, you will have to write the code in such a way to remove the use of default(). I can't predict if it will or won't.
I will update this answer if I figure out more.
All right. Thanks! :)
Can someone confirm this please?
@brianturner @jica : Because I see default keyword being used in Dictionary implementation for TryGetValue. Was wondering how this got accepted as answer? Please suggest.
Interesting. I wasn't aware Dictionary was using it. I ran some tests and found there are some instances where it does work, but I can't see why it does or doesn't.
|
https://answers.unity.com/questions/888683/ios-jit-compile-error-on-generic-method-called-wit.html
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
WO2018035177A2 - Convergence proxy for core network virtualization - Google Patents
Info
- Publication number
- WO2018035177A2, PCT/US2017/047038, US2017047038W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- rat
- gateway
- 3g
- interface
- multi
- Prior art date
Links
- 239000011162 core materials Substances 0 abstract claims description title 198
- 230000011664 signaling Effects 0 claims description 44
- 238000005516 engineering processes Methods 0 claims description 21
- 238000004891 communication Methods 0 claims description 15
- 230000001603 reducing Effects 0 claims description 14
- 230000004807 localization Effects 0 claims description 7
- 230000003993 interaction Effects 0 claims description 6
- 238000004873 anchoring Methods 0 claims description 3
- 230000000977 initiatory Effects 0 claims description 2
- 241000700159 Rattus Species 0 abstract 1
- 239000000306 components Substances 0 description 17
- 239000008358 core components Substances 0 description 17
- 229920004880 RTP PEK Polymers 0 description 15
- 238000000034 methods Methods 0 description 8
- MWRWFPQBGSZWNV-UHFFFAOYSA-N Dinitrosopentamethylen… 0 description 7
- 238000005457 optimization Methods 0 description 6
- 239000003570 air Substances 0 description 5
- 238000006722 reduction reaction Methods 0 description 5
- 230000001965 increased Effects 0 description 4
- 239000000969 carrier Substances 0 description 3
- 239000002609 media Substances 0 description 3
- 238000005538 encapsulation Methods 0 description 2
- 238000007689 inspection Methods 0 description 2
- 230000003287 optical Effects 0 description 2
- 230000002829 reduced Effects 0 description 2
- 230000002776 aggregation Effects 0 description 1
- 238000004220 aggregation Methods 0 description 1
- 238000004378 air conditioning Methods 0 description 1
- 230000001413 cellular Effects 0 description 1
- 239000003795 chemical substance by application Substances 0 description 1
- 230000003750 conditioning Effects 0 description 1
- 230000000875 corresponding Effects 0 description 1
- 230000001186 cumulative Effects 0 description 1
- 238000000280 densification Methods 0 description 1
- 230000001419 dependent Effects 0 description 1
- 230000018109 developmental process Effects 0 description 1
- 239000000284 extracts Substances 0 description 1
- 239000010410 layers Substances 0 description 1
- 230000000670 limiting Effects 0 description 1
- 230000015654 memory Effects 0 description 1
- 239000000203 mixtures Substances 0 description 1
- 238000003860 storage Methods 0 description 1
Classifications
-/18—Service support devices; Network management devices
- H04W88/182—Network node acting on behalf of an other network entity, e.g. proxy
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04W—WIRELESS COMMUNICATION NETWORKS
- H04W92 COMMUNICATION NETWORKS
- H04W92/00—Interfaces specially adapted for wireless communication networks
- H04W92/04—Interfaces between hierarchically different network devices
- H04W92/06—Interfaces between hierarchically different network devices between gateways and public network devices
CONVERGENCE PROXY FOR CORE NETWORK VIRTUALIZATION
Cross-Reference to Related Applications
[0001] This application claims the benefit of priority under 35 U.S.C. § 119(e) of U.S.
Provisional Patent Application No. 62/375,341, having attorney docket no. PWS-71819US00, filed on August 15, 2016 and entitled "S2 Proxy for Multi- Architecture Virtualization," which is hereby incorporated by reference in its entirety for all purposes. The present application also hereby incorporates by reference for all purposes U.S. Pat. No. 8,879,416, "Heterogeneous Mesh Network and Multi-RAT Node Used Therein," filed May 8, 2013; U.S. Pat. No. 9, 113,352, "Heterogeneous Self-Organizing Network for Access and Backhaul," filed September 12, 2013; U.S. Pat. App. No. 15,464,333, "IuGW Architecture," filed March 20, 2017; U.S. Pat. App. No. 14/642,544, "Federated X2 Gateway," filed on March 9, 2015; U.S. Pat. App. No. 14/936,267, "Self-Calibrating and Self-Adjusting Network," filed on November 9, 2015; U.S. Pat. App. No. 14/806,594, "Signaling Storm Reduction from Radio Networks," filed July 22, 2015; U.S. Pat. App. No. 14/822,839, "Congestion and Overload Reduction," filed August 10, 2015; and U.S. Pat. App. No. 61/724,312, "Method of optimizing Paging over LTE radio," filed November 9, 2012.
Background
[0002] In many environments, it is important to support voice calling, including in networks where Long Term Evolution (LTE) is deployed. However, LTE does not support voice calling over legacy networks, instead providing voice capability via its own standard, Voice over LTE (VoLTE). There is consequently a need for compatibility with legacy voice calling, including circuit- switched (CS) voice calling, that is currently met imperfectly, typically by providing both a 2G/3G core network and a parallel LTE core network. As both networks have cumulative maintenance and operational expense (opex) requirements, the new network is significantly more expensive.
[0003] As well, with the requirement that wireless operators support all generations of radio technologies, and the expense of maintaining 2G, 3G, 4G, and upcoming 5G infrastructures, operators have desired to transition their core infrastructure to remove legacy components and converge on a single IP core. This will help them move to an all-IP, all-virtualized core network infrastructure and reduce not only capital expenditures (capex) but also operational expenditures (opex) significantly.
Summary
[0004] Systems and methods are disclosed for a network convergence protocol proxy and interworking gateway.
[0005] In some embodiments, a proxy provides 2G/3G/4G/5G/Wi-Fi virtualization for nodes in a network on a plurality of radio access technologies. In some embodiments the convergence proxy is divided into a radio access technology (RAT)-specific front-end portion, an S2/S2x/S1 back-end portion for data network convergence, and a CS back-end portion for voice network convergence. The convergence proxy is placed in the network between one or more radio access networks and an operator core network. To the radio access networks, the convergence proxy communicates with each RAN node using the appropriate RAT and interface, i.e., Iuh, S1, Ta, SWu. To the core network, the convergence proxy communicates with a packet gateway (EPC) using the S1 or S2 interface, in essence presenting the various RATs to the core network as if via a Wi-Fi security gateway or as an eNodeB. In some scenarios the convergence proxy can absorb part of the SGSN and MME functionality and perform local breakout of data, eliminating the need for an EPC. In that case the core network consists mainly of authentication, billing, and LI servers alongside the convergence proxy. The convergence proxy is a virtualized platform that runs legacy core network functions and also enables and interfaces with future 5G core network functions such as MCE, MBMS-GW, IoT GW, and analytics server, in some embodiments.
[0006] In some embodiments, a convergence gateway is described that allows for legacy radio access network functions to be provided by all-IP core network nodes. A multi-RAT gateway provides 2G/3G Iuh to IuPS interworking, IuCS to VoLTE interworking via a VoLTE proxy, IuPS and 4G data local breakout or S1-U interworking, and 2G A/IP and Gb/IP to VoLTE and S1-U/local breakout interworking. The multi-RAT gateway may thereby support all voice calls via VoLTE, and all data over S1 or local breakout, including VoLTE. The multi-RAT gateway may provide self-organizing network (SON) capabilities for all RATs. A multi-RAT base station may provide 2G and 3G front-end interworking to Iuh.
[0007] In one embodiment, a system for multi-radio access technology (multi-RAT) telecommunications networking is disclosed, comprising: a multi-RAT gateway, wherein the multi-RAT gateway may further comprise: an inbound Iuh interface for handling inbound signaling, call, and user data flows on either or both of a 2G RAT or a 3G RAT; an inbound IuCS interface for handling inbound call data flows on either or both of the 2G RAT or the 3G RAT, the inbound IuCS interface being coupled to the inbound Iuh interface; an inbound IuPS interface for handling inbound user data flows on either or both of the 2G RAT or the 3G RAT, the inbound IuPS interface being coupled to the inbound Iuh interface; an inbound S1-AP interface for handling 4G inbound signaling data flows; an inbound S1-U interface for handling 4G inbound user data flows; a Voice over LTE (VoLTE) interworking proxy for performing interworking of inbound call data flows, the VoLTE interworking proxy being coupled to the inbound IuCS interface; and an outbound data flow router for routing inbound user data flows on either, some, or all of the 2G, 3G, or 4G RATs to either an outbound S1 interface or an outbound local breakout IP interface.
[0008] The VoLTE interworking proxy may be further coupled to the outbound data flow router such that outbound VoLTE traffic flows destined for an Internet Protocol Multimedia Subsystem (IMS) core network may be routed to either the outbound S1 interface or the outbound local breakout IP interface, and the inbound S1-AP interface and the inbound S1-U interface may be further coupled to the outbound data flow router. The inbound S1-U interface may be further configured to route inbound VoLTE traffic flows to the outbound data flow router. The multi-RAT gateway may further comprise local core network functions, the local core network functions comprising a local serving GPRS support node (SGSN) function, a local gateway GPRS support node (GGSN) function, and a local mobility management entity (MME) function, the local core network functions providing termination of data flows. The system may further comprise an IMS core network for providing voice call anchoring for VoLTE voice data flows.
[0009] Each of the radio access technology gateway functions may provide inbound interfaces for signaling, voice calls, and user data. Each of the radio access technology gateway functions may provide either interworking for inbound flows to an outbound IP-based interface or routing to a local core network function that acts to terminate inbound flows. The system may further comprise a multi-radio access technology (multi-RAT) base station supporting 2G with an Iuh signaling interface, 3G with an Iuh signaling interface, and 4G with an S1-AP signaling interface. The multi-RAT gateway may further comprise a 2G base station interface providing A/IP and Gb/IP signaling, voice call, and data inbound interfaces, and A/IP and Gb/IP interworking to outbound IP-based data or to an outbound Voice over LTE (VoLTE) interface via a VoLTE interworking proxy. The system may further comprise a multi-radio access technology (multi-RAT) base station supporting a wireless local area networking (WLAN) radio access technology (RAT) with an S2a or S2b signaling interface, and the multi-RAT gateway further comprises support for S2a/S2b signaling, Voice over LTE (VoLTE) proxy interworking for S2a/S2b voice calls, and support for redirection of S2a/S2b data to an operator core network packet gateway (PGW) or to the Internet via local breakout. The multi-RAT gateway may further comprise a Session Initiation Protocol (SIP) protocol connection interworking proxy for interworking IuCS and A/IP calls to Voice over LTE (VoLTE) calls, or for interworking SIP calls to IuCS calls. The SIP protocol connection interworking proxy may further comprise a transcoder.
[0010] The multi-RAT gateway may provide an application programming interface (API) to enable interaction of a Voice over IP (VoIP) smartphone application with a voice call at the multi-RAT gateway without requiring support in a 3G circuit-switched core network or an IP Multimedia Subsystem (IMS) core network. The multi-RAT gateway may further comprise a self-organizing network (SON) module coupled to each of the inbound and outbound interfaces and proxies, the SON module for monitoring network state, subscriber information, and/or call state information across radio access technologies and proactively reconfiguring operating parameters at the multi-RAT gateway.
[0011] The 3G RAT may further comprise at least one 3G base station coupled to the multi-RAT gateway over an Iuh interface, and the multi-RAT gateway may be in communication with, and serve as a gateway for, a 3G packet core network node, a 3G circuit core network node, and a 4G evolved packet core (EPC) network node when in communication with the 3G RAT, the 4G RAT, or the WLAN RAT for signaling, voice, or user data flows received via the Iuh interface, thereby virtualizing the existing core and adding more capacity by offloading signaling and data.
[0012] The multi-RAT gateway may further comprise an Internet Protocol (IP) local breakout interface for routing data packets from the 3G RAT and the 4G RAT over the public Internet, the IP local breakout interface providing call detail record (CDR) generation, thereby reducing a need to scale a 4G packet gateway (PGW) and a 3G gateway GPRS support node (GGSN) and reducing Internet traffic latency. The multi-RAT gateway may further comprise a virtual 3G mobile switching center (MSC) function, the virtual MSC function being scalable and able to receive inbound IuPS and IuCS data flows and map them to either the 3G circuit core network node or the 3G packet core network node, thereby enabling traffic flow localization and topology-enhanced handover and reducing a need for scaling the MSC and the 3G circuit and packet core network nodes. The multi-RAT gateway provides an application programming interface (API) to enable interaction of a Voice over IP (VoIP) smartphone application with a voice call at the multi-RAT gateway without requiring support in a 3G circuit-switched core network or an IP Multimedia Subsystem (IMS) core network.
[0013] The 3G RAT may further comprise at least one 3G base station coupled to the multi-RAT gateway over an Iuh interface, wherein the multi-RAT gateway may further comprise a 4G mobility management entity (MME) function and a 3G gateway GPRS support node (GGSN) function, the MME function and the GGSN function providing packet-based data services to the 3G RAT and the 4G RAT, the multi-RAT gateway further being in communication with, and serving as a gateway for, a 3G circuit core network node and an IP Multimedia Subsystem (IMS) core network node.
Brief Description of the Drawings

[0014] FIG. 1 depicts a prior art core network architecture.
[0015] FIG. 2 is a network architecture diagram showing integration of Wi-Fi into an LTE core network, in accordance with some embodiments.
[0016] FIG. 3 is a network architecture diagram showing a first phase of introduction of a convergence gateway into a wireless operator network, in accordance with some embodiments.
[0017] FIG. 4 is a network architecture diagram showing a second phase of introduction of a convergence gateway into a wireless operator network, in accordance with some embodiments.
[0018] FIG. 5 is a network architecture diagram showing a third phase of introduction of a convergence gateway into a wireless operator network, in accordance with some embodiments.
[0019] FIG. 6 is a network architecture diagram showing a fourth phase of introduction of a convergence gateway into a wireless operator network, in accordance with some embodiments.
[0020] FIG. 7 is a network architecture diagram showing providing applications (apps) for machine-to-machine (M2M) applications, smartphones, femto cells, access points (APs), etc, in accordance with some embodiments.
[0021] FIG. 8 is a network architecture diagram showing a block diagram of a convergence gateway, in accordance with some embodiments.
[0022] FIG. 9 is a network architecture diagram showing a block diagram of a multi-RAT node and a convergence gateway, the convergence gateway having IuCS/IuPS and SI interfaces toward a core network, in accordance with some embodiments.
[0023] FIG. 10 is a network architecture diagram showing a block diagram of a multi-RAT node and a convergence gateway, the convergence gateway having IuCS and local breakout interfaces toward a core network, in accordance with some embodiments.
[0024] FIG. 11 is a network architecture diagram showing a block diagram of a multi-RAT node and a convergence gateway, the convergence gateway having IuCS, SI, and local breakout interfaces toward a core network, in accordance with some embodiments.
[0025] FIG. 12 is a network architecture diagram showing a block diagram of a multi-RAT node and a convergence gateway, the convergence gateway having S1 and local breakout interfaces toward a core network, in accordance with some embodiments.

Detailed Description
[0026] In currently-available systems, multiple radio access technologies are supported by the use of separate infrastructure for each radio access technology. For example, LTE eNodeBs are supported by MMEs, SGWs, PGWs, etc., and UMTS nodeBs are supported by SGSNs, GGSNs, MSCs etc., with little or no functionality shared between core network nodes. FIG. 1 shows the current architecture.
[0027] Reducing complexity by eliminating one or more network nodes in the core network has been difficult due to differences in functional divisions in the various RATs, as well as the difficulty in building and supporting backwards compatibility layers for each RAT. When interworking is required between each pair of RATs, development becomes expensive. Even when companies have attempted to build combination nodes, integration is expensive because supporting the superset of features on a given RAT (e.g., authentication, circuit switching, legacy protocol interworking) is difficult and expensive.
[0028] For example, an S1 interworking proxy node may be placed between an LTE eNodeB and an LTE core network (MME, SGW, PGW); an Iuh interworking proxy may be placed between a 3G nodeB and a 3G core network (SGSN, GGSN); an IuCS interworking proxy may be placed between a 3G nodeB and a 3G MSC; a circuit-switched media gateway may be placed between a 2G BTS and a 2G core network (SGSN and MSC); a Wi-Fi gateway according to the 3GPP spec could be placed between a Wi-Fi user device and a PGW; and so on. However, a multiplicity of devices is required to support all scenarios, leading to increased expense in operation of the core network.
[0029] Additionally, network operators are typically unwilling to decommission core network equipment that has been well-tested and continues to operate well. For example, this is the case for 2G and 3G legacy voice equipment, which provides voice call quality superior in many cases to VoLTE, which is not widely deployed. Elimination of these legacy network nodes is additionally difficult as a result. [0030] Thus, although LTE hypothetically has the ability to unify all network protocols in an IP-only system, few or no LTE core network nodes exist today that can support all of the functions that are supported by the core network nodes of the UMTS network.
[0031] Various approaches are provided herein that provide connectivity and mobility to users using any of a plurality of radio access technologies, including LTE/LTE-A/LTE-U, 3G (UMTS), 3G (CDMA), 2G and 2.5G (EDGE), and Wi-Fi within a single architecture. In some embodiments a system is described wherein one or more RATs may be supported using a single front-end plugin for each RAT supporting a subset of RAT features, and without requiring support to be built out for all RAT features. In some embodiments support can be rolled out in sequential stages, for example, for network operators that have existing investments in infrastructure.
[0032] In some embodiments, a convergence gateway is enabled to interwork other radio access network interfaces with S1 or Iu, thereby providing connectivity for the RAN toward the core network and vital core network nodes such as authentication and mobility nodes. To the core network, a call or packet data session appears as an LTE call or bearer. To the user device, the call or session appears as a native RAT call/session, whether it is 2G, 3G, CDMA, or Wi-Fi.
[0033] The S2 interface (S2a, S2b) is an important interface for enabling the system described herein. S2, as described in 3GPP TS 23.402 and TR 23.852 (each hereby incorporated by reference in their entirety), is an interface for enabling wireless access gateways to permit mobile devices on non-3GPP networks to join 3GPP networks. Specifically, the S2 interface was designed to enable Wi-Fi and other IEEE networks to expose control functionality as well as data routing functionality, and to enable 3GPP networks to interoperate with eHRPD and CDMA networks, such as WiMAX and WCDMA networks, and interwork them to a 3GPP PGW gateway. Authentication and call anchoring is passed through the S2 interface and performed using the 3GPP network. In some embodiments the convergence proxy does not use the S2 interface and instead uses local breakout for data traffic while interworking the signaling messages with legacy components such as the MSC and SGSN.
[0034] An additional important interface for enabling the system described herein is the S1 interface. While the S1-AP interface is used for providing signaling support for LTE eNodeBs, the S1-U interface is a tunneling interface suitable for tunneling IP data through to an LTE core network. Re-encapsulating packets received over another packet session, such as EDGE, Gb, IuPS, etc., into a GTP-U tunnel over the S1-U interface enables the S1 interface to be used for multiple RATs.
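By way of illustration only (this sketch is not part of the referenced specifications, and the class and method names are hypothetical), the re-encapsulation step can be as simple as prepending the mandatory 8-byte GTPv1-U header — flags, the G-PDU message type 0xFF, the payload length, and the tunnel endpoint identifier (TEID) — to the inner IP packet before it is sent over the S1-U tunnel:

import java.nio.ByteBuffer;

// Illustrative only: prepend a minimal GTPv1-U header (no optional fields)
// to an inner IP packet so it can be carried over an S1-U tunnel.
public final class GtpuEncapsulator {
    private static final byte GTP_FLAGS = 0x30;             // version=1, protocol type=GTP
    private static final byte MSG_TYPE_GPDU = (byte) 0xFF;  // G-PDU (user data)

    public static byte[] encapsulate(byte[] innerIpPacket, int teid) {
        ByteBuffer buf = ByteBuffer.allocate(8 + innerIpPacket.length);
        buf.put(GTP_FLAGS);
        buf.put(MSG_TYPE_GPDU);
        buf.putShort((short) innerIpPacket.length); // length of payload after the mandatory header
        buf.putInt(teid);                           // tunnel endpoint identifier
        buf.put(innerIpPacket);
        return buf.array();
    }
}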
[0035] A third important building block of the solution described herein is the use of local breakout techniques for handling IP traffic. Other than voice calls connected over a circuit-switched protocol, the majority of traffic on wireless operator networks at this time travels over the operator network to a gateway, such as an LTE packet data network gateway (PGW), which then provides access to the Internet. This includes VoIP, web (HTTP), and other user-driven IP traffic. Certain IP traffic terminates within a mobile operator network and not on the public Internet, such as voice calls performed over VoLTE within the same operator's network; however, such calls may also be made by routing an IP session out to the public Internet and back to the operator's own network (e.g., hairpin routing).
[0036] Local breakout is also desirable since in most or all deployments, the wireless base station has some backhaul that enables it to connect to an operator core network using underlying IP backhaul connectivity to the global Internet. The inventors have appreciated, and the embodiments described herein show, that it is possible to simplify the operator core network by directing most connections to traverse the Internet via this backhaul connection instead of using designated 2G, 3G, or 4G nodes within a core network to provide service. This technique, sometimes also called Selective IP traffic offload (SIPTO), or local IP access (LIPA) when used to refer to the use of non-operator-controlled IP networks, particularly by small cells, is enhanced and expanded upon to provide additional functionality in this disclosure.
[0037] Fourthly, as described above, many operators already have 3G core networks in place. In some embodiments, a simple approach is taken to retain compatibility with 3G voice calls at minimal expense. Instead of replacing the 3G core network completely, an existing 3G core network is left in place and pared down to the minimum of required components. By reusing and inexpensively maintaining the existing 3G core, compatibility is maintained with 3G technology, at no additional cost relative to the present day, without requiring the expense of purchasing new devices to replace the 3G core.
[0038] Otherwise, if an operator core network is a "greenfield" network, where 3G core networks have not been provisioned or built out, the core network can be built to support voice calling without a 3G core network and only with an IMS network, to provide voice over IP (VOIP)/VoLTE voice calling. This core network architecture enables the use of an all-IP network without legacy 3G circuit- switched calling.
[0039] FIG. 1 depicts a prior art core network architecture. On the left side of the diagram, four radio access technologies (RATs) are depicted, namely: 2G (otherwise known as GERAN), 3G (otherwise known as UTRAN), 4G (LTE or EUTRAN), and Wi-Fi access. The RATs correspond to different wireless access technologies supported by wireless clients, such as 3 GPP user equipments (UEs) and Wi-Fi-equipped computers and mobile devices. In the middle of the diagram, each of the RATs has a corresponding core network that handles functions that include mobility management (e.g., handovers) and radio access coordination.
[0040] FIG. 1 is a schematic network architecture diagram for 3G and other-G prior art networks. The diagram shows a plurality of "Gs," including 2G, 3G, 4G, and Wi-Fi. 2G is represented by GERAN 101, which includes a 2G device 101a, BTS 101b, and BSC 101c. 3G is represented by UTRAN 102, which includes a 3G UE 102a, nodeB 102b, RNC 102c, and femto gateway (FGW, which in 3GPP namespace is also known as a Home nodeB Gateway or HNBGW) 102d. 4G is represented by EUTRAN or E-RAN 103, which includes an LTE UE 103a and LTE eNodeB 103b. Wi-Fi is represented by Wi-Fi access network 104, which includes a trusted Wi-Fi access point 104c and an untrusted Wi-Fi access point 104d. The Wi-Fi devices 104a and 104b may access either AP 104c or 104d. In the current network architecture, each "G" has a core network. 2G circuit core network 105 includes a 2G MSC/VLR; 2G/3G packet core network 106 includes an SGSN/GGSN (for EDGE or UMTS packet traffic); 3G circuit core 107 includes a 3G MSC/VLR; 4G circuit core 108 includes an evolved packet core (EPC); and in some embodiments the Wi-Fi access network may be connected via an ePDG/TTG using S2a/S2b. Each of these nodes are connected via a number of different protocols and interfaces, as shown, to other, non-"G"-specific network nodes, such as the SCP 110, the SMSC 111, PCRF 112, HLR HSS 113, Authentication, Authorization, and Accounting server (AAA) 114, and IP Multimedia Subsystem (IMS) 115. An HeMS/AAA 116 is present in some cases for use by the 3G UTRAN. The diagram is used to indicate schematically the basic functions of each network as known to one of skill in the art, and is not intended to be exhaustive. [0041] Noteworthy is that the RANs 101, 102, 103, 104 rely on specialized core networks 105, 106, 107, 108, 109, but share essential management databases 110, 111, 112, 113, 114, 115. More specifically, for the 2G GERAN, a BSC 101c is required for Abis compatibility with BTS 101b, while for the 3G UTRAN, an RNC 102c is required for Iub compatibility and an FGW 102d is required for Iuh compatibility. These core network functions are separate because each RAT uses different methods and techniques. On the right side of the diagram are disparate functions that are shared by each of the separate RAT core networks. These shared functions include, e.g., PCRF policy functions, AAA authentication functions, and the like. Letters on the lines indicate well-defined interfaces and protocols for communication between the identified nodes.
[0042] While the core network architecture as shown is effective, it is expensive to maintain and unnecessarily duplicates infrastructure. Conceptually only two main functions are provided to users: voice calls and packetized data. However, each RAT requires a different core network to be put in place to handle the same function. In the case of 3G, two core networks are put into place, one to handle circuit-based calls and one to handle packet-based calls and data. Wi-Fi is also not effectively integrated into the network. Additionally, no synergies are realized between the networks, as each operates independently of the others, even though many of them operate on the same IP -based underlying network. Not shown are the expensive air conditioning, power conditioning, property leasing agreements, and other physical plant expenses required to maintain each of these duplicate core networks.
[0043] An approach is described for combining each of the core networks into a minimal core network suitable for providing radio access to user devices that support LTE and beyond, while providing legacy support for other RATs. Each legacy RAT may be supported at the RAN level, but the core network for each RAT may be replaced by a single converged core network. To provide data service, the converged core network shall use the LTE framework, and shall use IP. To provide voice service, the converged core network shall use either a 3G circuit core network infrastructure (MSC, VLR) or a packet-based LTE infrastructure such as VoLTE.
Interworking shall be provided to enable legacy core network nodes to be removed from the network. [0044] The described approach has the advantage that several functions previously separated among several disparate core networks may be reunited into a single core network node, here called the convergence proxy/gateway. This enables network operators to perform optimizations across several RATs. As well, the convergence proxy may be built with modern hardware and software virtualization techniques that enable it to be scaled up and down as needed within the network to meet needs on any of the supported RATs, thereby enabling network expansion and virtualization. This architecture thus paves the way for increased numbers of connected nodes (e.g., for Internet of Things (IoT)), and for the increased bandwidth and densification as projected to be required by 5G.
[0045] The described approach also enables increased intelligence to be pushed to the edge of the network. When combined with the virtualization technology described in U.S. Pat. Pub. No. 20140133456 (PWS-71700US03), hereby incorporated by reference in its entirety for all purposes, which allows a virtualization gateway to act as a proxy to enable a large radio access network to be subdivided into independently managed sections, intelligence may be added in a large-scale way to a large, heterogeneous radio access network by pushing the required intelligence out from the expensive-to-maintain core network to virtualization/convergence nodes situated one hop away from the edge of the network. This architecture allows new services to be provided, such as: content delivery caching, scaling, and optimization; data offload for local voice or local data breakout; specialized APIs for smartphone apps; VoIP and VoWiFi integration; within-network free calling without using expensive international or long distance circuits or trunks; femto cell integration; machine-to-machine applications; integration of private enterprise RANs with the core network; core network sharing; and other services, each of which could provide an additional revenue stream.
[0046] A detailed explanation of how each RAT will be supported follows.
[0047] In some embodiments, 2G services may be provided by enabling a standard base station, or BTS, to connect to the convergence gateway directly via a standard BTS-MSC interface, the A interface. Software and hardware to enable 2G base stations according to the Global System for Mobile Communications (GSM) are readily available, including base station software to enable radio baseband functions and to handle interactions with a 2G GSM handset, such as the open source OpenBTS project. Such BTS software is often configured to use the A interface over an IP protocol backhaul link, and many operators have migrated their networks to run on IP and use a modified version of the A interface over IP links (A-over-IP). The convergence gateway may be configured with the appropriate A interface compatibility to enable it to interoperate with such BTSes from multiple vendors.
[0048] In some embodiments, the convergence gateway operates as a back-to-back user agent (B2BUA) or BTS/MSC proxy between a BTS and the 2G/3G MSC core network node, virtualizing the BTSes from the MSC and the MSC from the BTSes. The existing legacy 2G/3G MSC is able to handle circuit-switched calls, SS7 calls, and other types of calls that are difficult to simulate or interwork in the modern IP-based environment. This mode of operation does utilize an existing 2G/3G MSC core network node. However, as mentioned above, it is advantageous to be able to leverage the existing 2G/3G infrastructure to provide a solution that "just works" and preserves legacy compatibility without introducing additional cost.
[0049] In some embodiments the convergence gateway may use the Iuh interface, not the A interface, to enable an enhanced 2G base station or combined 2G/3G base station to communicate with it. Iuh is the interface used according to certain UMTS standards for communication over an IP link between a femto cell and a femto cell gateway, otherwise known as a home nodeB gateway, including nodeB registration (e.g., HNBAP) as well as control and user data messages (e.g., RANAP); Iuh supports transport of IuPS and IuCS user data flows, as well as signaling flows, and is therefore suitable for handling 2G calls. The base station in this case could be responsible for interworking between the A interface (or the Um interface) and the Iuh interface. The enhanced 2G base station could operate as a small cell as described below in relation to 3G services. The convergence gateway may use Iuh to provide signaling capability to the base stations.
[0050] In some embodiments, 3G services may be provided using a convergence gateway that is configured to act as a standard RNC, SGSN, and GGSN in relation to standard nodeB base stations. The base stations may communicate to the convergence gateway via the IuPS and IuCS interfaces. The convergence gateway may be configured to act as a B2BUA and proxy for the nodeBs toward communicating with a 3G MSC/VLR. The convergence gateway may be configured to virtualize the nodeB toward the core network. Alternately the convergence gateway may provide RNC, SGSN, and/or GGSN functions internally as software modules. [0051] In some embodiments, 3G services may be provided using a nodeB in communication with the convergence gateway via the Iuh interface, as described above. The nodeB may be configured to act as a small cell according to the standard femto cell specifications, and the convergence gateway may treat the 3G nodeB as a standard 3G home nodeB.
[0052] IuCS interface communications may be proxied by the convergence gateway toward a 3G circuit core network node, the 3G MSC/VLR. Iuh and IuPS interface communications may be handled in several different ways: forwarding Iuh circuit-switched communications to an existing 3G core, interworking of IuPS to LTE or directly to IP, such as for local breakout, or terminating IuPS communications at the convergence gateway and providing the underlying IP packet data services via an underlying IP backhaul connection at the convergence gateway (e.g., local IP breakout). Each of the inbound service requests that is a request for IP is handled via interworking, and service requests for circuit connections are handled by forwarding to the 3G core. It is noted that voice calls in the present architecture are often provided using RTP and packet-based MSC nodes, and as such, the convergence gateway may make use of RTP that is encoded by the BTS or nodeB to provide 3G voice services. The use of RTP and IP provides advantages for both 3G and 4G services as described hereinbelow.
[0053] Additional functions may be provided by the convergence gateway in conjunction with 3G service, in some embodiments. In some embodiments, RTP streams that originate and terminate within a single RAN, or within a single sub-network managed by the convergence gateway, may be redirected at the convergence gateway back toward the RAN instead of unnecessarily traversing the core network; this is known as RTP localization. RTP streams are typically used by many 3G nodeBs to encode and transport voice calls over IP. In order to provide RTP localization in this fashion, no change in signaling on the control plane is required, and network address translation may be sufficient in many cases to provide this functionality for the data packets themselves.
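As a rough, hypothetical sketch of the RTP localization decision described above (none of these class names come from the patent), the gateway only needs to know which endpoint addresses belong to the sub-network it manages:

import java.net.InetAddress;
import java.util.Set;

// Illustrative sketch: decide whether an RTP flow can be kept local to the RAN.
public class RtpLocalizer {
    private final Set<InetAddress> managedRanAddresses; // RAN-side endpoints this gateway manages

    public RtpLocalizer(Set<InetAddress> managedRanAddresses) {
        this.managedRanAddresses = managedRanAddresses;
    }

    /** True when both ends of the RTP stream are inside the managed sub-network. */
    public boolean canLocalize(InetAddress source, InetAddress destination) {
        return managedRanAddresses.contains(source) && managedRanAddresses.contains(destination);
    }

    public void route(RtpPacket packet) {
        if (canLocalize(packet.source(), packet.destination())) {
            forwardWithinRan(packet);   // hairpin back toward the RAN; the core never sees the media
        } else {
            forwardToCore(packet);      // normal core-bound path
        }
    }

    // Placeholders for the actual forwarding machinery.
    private void forwardWithinRan(RtpPacket packet) { /* NAT rewrite + local relay */ }
    private void forwardToCore(RtpPacket packet) { /* existing core-bound path */ }

    public interface RtpPacket {
        InetAddress source();
        InetAddress destination();
    }
}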
[0054] In some embodiments, handover optimization and paging optimization may be performed, to reduce signaling and load due to handovers or paging on the core network. The term optimization here and throughout this disclosure is used only to mean enhancement or improvement, not to mean identification of a single best method. Handovers within the same RAN or sub-network managed by the convergence gateway may be performed without interaction with the core network. Paging may be reduced by keeping track of UEs within the RAN or sub-network. Further detail about paging and handover optimizations may be found in U.S. Pat. App. Nos. 14/806594, 14/822839, and 61/724312, each hereby incorporated by reference in their entirety.
[0055] In some embodiments, data traffic may be redirected away from the 3G core network to the Internet. According to the conventional UMTS architecture, packet-switched (PS) UMTS bearers carrying data intended to go to or from the public Internet are GTP-U tunnels terminated on one end at the UE and at the other end at a core network gateway, such as a GGSN that provides connectivity to other networks, such as the public Internet. In the conventional UMTS architecture, the GGSN extracts an IP payload from the GTP-U tunnel and sends it over the Internet. By contrast, in the present disclosure, the convergence gateway may identify that certain traffic is intended for the public Internet or for other connected networks, and may perform the de-encapsulation function previously performed by the GGSN, thereby eliminating the need for the GGSN to perform this function.
[0056] In some embodiments, SGSN functionality may be performed at the convergence gateway. For example, the SGSN in the conventional UMTS architecture is responsible for tracking UEs as they have mobility across different nodeBs. A convergence gateway according to the present disclosure is capable of tracking UEs within its managed sub-network of RANs, and may perform mobility management so that when data or calls come in from the core network or from the public Internet, the convergence gateway may direct the inbound traffic to the appropriate RAN directly. Each RAN being connected via IP to the convergence gateway, the convergence gateway can perform this tracking by IP address and can perform network address translation to ensure the core network has a single IP address for the UE at any particular time.
[0057] In some embodiments, the convergence gateway may interface with a conventional radio network controller (RNC) as a virtual MSC. The convergence gateway may use the standard IuCS and IuPS interfaces to communicate with the RNC, for example, to allow the RNC to interoperate with a conventional macro cell or nodeB. This enables the convergence gateway to provide 3G services to conventional nodeBs without having to emulate or reverse engineer any proprietary Iub interface, as that communication is performed by the RNC. In some embodiments the convergence gateway may use an Iur interface to communicate with conventional RNCs as a virtual RNC. In this case the convergence gateway acts as an IuCS and IuPS proxy towards the 3G MSC and 3G SGSN, respectively. In some cases the convergence gateway may simply act as an IuCS proxy while doing local breakout of data traffic.
[0058] 4G LTE services may be provided as follows. Several possible embodiments are contemplated. In the conventional LTE architecture, voice call services are provided either as 3G voice (circuit-switched fallback or CSFB) or as data. Initial LTE deployments did not have a capability for native voice calls over LTE, and voice over LTE (VoLTE) is currently in the process of being deployed. VoLTE uses a data infrastructure known as IP Multimedia Subsystem (IMS) to provide signaling support, and uses data-based protocols such as SIP and RTP to provide voice data transport. According to conventional VoLTE, an LTE UE is attached to an LTE network and registered with an IMS core network, which then provides the ability to call other phone numbers. In the present disclosure voice calls can either be interworked to 3G CS calls or delivered using a VoLTE IMS core network; each approach has different advantages.
[0059] An LTE eNodeB is provided that is in communication with a UE. The eNodeB is also in communication with a convergence gateway, which may enable virtualization of the eNodeB and other eNodeBs by virtue of the convergence gateway acting as a B2BUA and proxy toward the LTE core network, as described elsewhere herein. When a UE attempts to connect and register with the LTE and IMS core networks, the convergence gateway establishes a data bearer for the UE with the core network, but instead of registering via IMS, performs a registration of the UE as a 3G client with the 3G MSC/VLR. The UE and the eNodeB receive confirmation that the UE is permitted to use both the LTE and IMS core networks. Next, when the UE initiates a call according to a conventional VoLTE protocol, the UE sends the appropriate SIP protocol messages toward the core network, which are interworked by the convergence gateway into 3G messages for the 3G MSC, e.g., SIP to IuCS interworking. Once the call is connected, the UE will send RTP data packets carrying voice data to the convergence gateway, which will then forward them to the aforementioned 3G RTP and IP-based MSC. This allows for transparent interworking of 4G LTE VoLTE calls to 3G calls without the need for an IMS core.
[0060] This also allows for non-VoLTE voice calls to be handled in a similar manner, transparent to the UE. For example, mobile apps such as Skype [TM], WhatsApp [TM], and other applications installed on handsets may be treated as peers and may be given the ability to make calls through the 3G MSC. Special application programming interfaces (APIs) or triggers may be used to enable special treatment of such calls, with some embodiments thereto described below.
[0061] The convergence gateway may be enabled to aggregate SCTP and S1-AP toward the core, in some embodiments, specifically for enabling a single MME to handle all of the subnetworks and eNodeBs under the convergence gateway. RTP and other IP traffic may be handled using the underlying IP backhaul connection (e.g., local breakout), in some embodiments, providing a reduction of data traffic towards the LTE SGW and PGW. RTP localization may also be provided. In some embodiments, reduction of signaling toward the core, handover optimization, paging optimization, and message retransmit reduction may be performed by the convergence gateway for subnetworks managed by the convergence gateway, as described elsewhere herein.
[0062] In some embodiments the convergence gateway may take over all of the functions of the MME, SGW, and PGW inside the LTE core network gateway. In such an embodiment, multiple convergence gateways may be used to cover a large geographic area, such as a country.
[0063] Additional functions are described for enabling Wi-Fi and small cell interoperability with the described convergence gateway. Wi-Fi and small cells may need to be authenticated before being able to connect to an operator core network, and in the conventional art two types of gateways, trusted wireless access gateways (TWAGs) and evolved packet data gateways (ePDGs) are known. The convergence gateway may be an ePDG, a TWAG, or both, in some embodiments, acting as an ePDG for untrusted Wi-Fi access points and as a TWAG for trusted Wi-Fi access points. S2 and S2x interfaces may be used to cause packet flows to be allowed entry into the LTE core network at the PGW, thus allowing Wi-Fi users to access the LTE core network. However, since 2G, 3G, and Wi-Fi are all processed as IP packets in the above scenario, S2 and S2x can be used to provide entry for sessions using each of these RATs into the LTE core network, thereby allowing a single LTE core network to provide the necessary core support for 2G, 3G, 4G LTE, and Wi-Fi. Enterprise femto networks, private LTE networks, and public safety networks can also be treated as LTE networks using the TWAG and S2/S2x approach, enabling the convergence gateway to act as a virtualized hosted small cell gateway. In some cases, a convergence gateway may do local breakout of Wi-Fi data and eliminate the need for PGW.
[0064] As 2G, 3G, 4G LTE, and Wi-Fi technologies as configured above are all able to be routed through a convergence gateway, opportunities arise for improving the performance of all the networks synergistically, such as by sharing resources or information across RATs. Self- organizing network (SON) capabilities may be leveraged across multiple technologies. For example, users can be moved to the least loaded access network by combining visibility at the convergence gateway across 3G, LTE, and Wi-Fi. Some additional techniques that may be used on the convergence gateway are described in U.S. Pat. Pub. 20140233412 and U.S. Pat. Pub. No. US20160135132, each of which is incorporated by reference in its entirety.
[0065] In some embodiments, within the convergence gateway, an access module (front-end module) is configured with a modular architecture. The access module supports a stub module for each access component. The access components depicted include: an HNB access component for 2G/3G packet-switched data and circuit-switched voice, communicating with one or more 2G/3G BTSes or nodeBs via Iuh; a HeNB access component for LTE packet-switched voice (VoLTE) and data, communicating to an eNodeB via S1; an ePDG access component for untrusted Wi-Fi access; and a SaMOG access component for trusted Wi-Fi access. Other access components may be added as well, in some embodiments.
[0066] The convergence gateway may have specific modules for: RTP-Iuh interworking; 2G data to LTE interworking via a 2G proxy; 3G data to LTE interworking via a 3G proxy; IMS to LTE interworking via an IMS proxy; 2G voice to 3G voice interworking via a 3G proxy; VoIP to 3G voice interworking via a 3G proxy; and inbound protocol switching to bind each of these RATs together.
[0067] Each of the access components provides stateless or minimally stateful forwarding and interworking of inputs from the one or more access networks to the core network components described below. Interworking may be done to a standard interface, such as S2 itself, or to a non-standard interface abstracting a subset of the input interface for communicating with the core network.
[0068] Each access component may be connected to a S2x core component (S2x backend). The S2x core component provides packet data services using one (or more) LTE core networks. The S2x core component performs interworking as necessary so that it may output on an S2 interface to its connection point in the LTE core network at the PGW. The PGW admits the packet flow from the convergence gateway as coming from another trusted network within the LTE core network, and permits access to, e.g., security gateways and authentication servers via packet data networks accessible from the PGW, thereby enabling user devices on non-LTE networks to use the LTE packet data connection.
[0069] The S2x core component and IuCS core component may be coupled together. As all RANs benefit from access to packet data, all front-end access components are coupled to the S2x core component. The S2x core component may perform minimal inspection of inbound data to determine if circuit-switched call processing is needed, for example, using envelope inspection or fingerprinting. When the S2x core component identifies a circuit-switched call, the S2x core component may pass the inbound data stream to the IuCS core component. In some embodiments, circuit-switched RANs may connect directly to the IuCS core component.
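A minimal, hypothetical sketch of that dispatch decision (all names invented for illustration) might look like the following, with the S2x back-end keeping packet data and the IuCS back-end taking circuit-switched calls:

// Illustrative dispatch between the S2x (packet) and IuCS (circuit) back-ends.
public class InboundDispatcher {
    public enum FlowType { CIRCUIT_SWITCHED_CALL, PACKET_DATA }

    private final PacketBackend s2xBackend;
    private final CircuitBackend iucsBackend;

    public InboundDispatcher(PacketBackend s2xBackend, CircuitBackend iucsBackend) {
        this.s2xBackend = s2xBackend;
        this.iucsBackend = iucsBackend;
    }

    public void dispatch(InboundFlow flow) {
        // "Envelope inspection" stands in for whatever classification the gateway uses.
        if (classify(flow) == FlowType.CIRCUIT_SWITCHED_CALL) {
            iucsBackend.handleCall(flow);   // interwork toward the MSC / VoLTE as configured
        } else {
            s2xBackend.handleData(flow);    // forward over S1/S2 or break out locally
        }
    }

    private FlowType classify(InboundFlow flow) {
        return flow.isCircuitSwitched() ? FlowType.CIRCUIT_SWITCHED_CALL : FlowType.PACKET_DATA;
    }

    public interface InboundFlow { boolean isCircuitSwitched(); }
    public interface PacketBackend { void handleData(InboundFlow flow); }
    public interface CircuitBackend { void handleCall(InboundFlow flow); }
}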
[0070] Circuit-switched calls may be transported over IP and/or SCTP to the convergence gateway over an arbitrary physical medium. The convergence gateway may communicate using a BSSAP or RANAP interface to the 2G/3G cells, taking the place of and/or emulating a 2G/3G RNC in communicating with the 2G/3G RAN. Instead of communicating with an MSC, however, the convergence gateway may perform encoding, encapsulation, and interworking of the circuit-switched calls before sending the calls to the LTE core network. For handling circuit-switched voice calls from a 2G/3G RAN, these functions may be handled by a circuit-switched component, the IuCS core component, not the S2x core component. In addition to the above interworking functions, the IuCS core component performs proxying for the 2G/3G RAN, hiding the complexity of the core network from the RAN and vice versa, so that any 2G/3G RAN will be able to interoperate with the LTE core network. Via such proxying, 2G/3G CS calls can be converted to SIP calls and handled the same as VoLTE calls by the LTE IMS core network. The Iu interface used for communicating with the RAN is standardized, and therefore the convergence gateway will be able to interoperate with a RAN from any vendor providing the standard interface. If a base station uses the IuPS interface, the convergence gateway may perform interworking from Iuh to IuPS, and perform interworking from IuPS to S1. In that way, 2G/3G/4G traffic can all be served in a unified way by one single 4G core.

[0071] In some embodiments, a transcoding gateway will not be needed. In some cases, audio for calls that originate from a 2G base station will be encoded in the half rate or full rate GSM codec. These codecs are also supported by 2.5G, 3G, and 4G handsets and base stations, so if one end of the call uses a codec that is not supported by the other, the IuCS core component can request a codec downgrade to a lowest common denominator codec. However, it may be possible for the IuCS core component to perform audio transcoding, in some embodiments. As well, the IuCS core component may perform IP-IP interworking of audio before sending the audio to the circuit-switched RAN or core.
[0072] As described above, from the radio network side, the convergence gateway presents itself as an SGSN (for packet-switched connectivity) and an RNC (for circuit-switched connectivity). At the core network, packet- switched calls may be handled as though they were VoLTE calls. This will be transparent to the core network, and will not require resources beyond what is required for support of VoLTE. 2G and 3G voice calls and circuit-switched calls may be handled by handing off to the existing 3G MSC core network node, via the IuCS interface.
[0073] Handovers between radio access networks managed by the convergence gateway may be hidden from the network. From the core network side, the calls pass through the same PGW, and no handover is needed. From the radio network side, the convergence gateway acts as an MME or RNC, and performs handover in a manner transparent to the radio network. Handovers for packet-switched calls and bearers may be performed internally within the S2x core module, and handovers for packet and circuit-switched calls may be performed between the S2x and IuCS core modules. In some embodiments, an ATCF module may be present between the S2x and IuCS core components to facilitate handover capability between circuit and packet-switched calls.
[0074] Wi-Fi local breakout and enterprise functionality may also be supported, in some embodiments. An enterprise gateway or PBX may present itself as an untrusted Wi-Fi gateway, and the convergence gateway may present itself as an ePDG to the enterprise gateway, including by using MSCHAPv2 authentication, while hiding complexity from the core network by connecting directly to the PGW. For unwanted data traffic, instead of sending the traffic to the operator's PGW, the convergence gateway may transparently redirect the traffic from the S2x core module to another network interface, thereby ejecting the traffic from the network.

[0075] The benefits of the above solution include the following. A network operator may install the convergence gateway and immediately enable voice calls over the LTE core network for one or more RANs. The operator may test the performance of the rollout gradually. The operator may, when satisfied with performance, completely deactivate both the 3G packet core and the 2G/3G circuit core, thereby reducing power and footprint requirements for their core network infrastructure. Additionally, the LTE core network itself may be simplified, as the SGW and MME nodes themselves may be subsumed by the convergence gateway. Additionally, the operator is also enabled to interwork VoLTE or Wi-Fi calls to 2G voice calls and deliver these calls over a standard 2G BTS.
[0076] Table 1 summarizes some characteristics of certain embodiments of a convergence gateway in accordance with FIG. 12.
RAT | Flow | Inbound interface | Handling
... | ... | ... | interworking to VoLTE, IMS core via S1-U
... | Data | S1-U | Local breakout or S1-U
Wi-Fi, etc. | Signaling | S2a/S2b | Absorbed at local MME, or S2a/S2b to 4G core, or local breakout
Wi-Fi, etc. | Calls | S2a/S2b (VoLTE) | S2a/S2b to PGW to IMS core, or local breakout
Wi-Fi, etc. | Data | S2a/S2b | S2a/S2b to 4G core, or local breakout
[0077] TABLE 1
[0078] The term "absorbed" is used above to reflect the notion of virtualization, described in various applications referenced herein, i.e., that a gateway may pass only certain signaling messages up to the core network and may respond to core network queries or call flows directly by proxying the relevant signaling messages. This enables the convergence gateway to flexibly provide handover and other services among the multiple RATs and multiple base stations it manages.
[0079] In some embodiments, a phased approach could be used to introduce convergence gateway architecture to an operator's wireless network. Four proposed phases are described below.
[0080] In Phase 1, shown as FIG. 3, a wireless operator could introduce a convergence gateway into the network for LTE, for 3G, and for Wi-Fi, maintaining an existing 3G packet core and 3G circuit core, as well as 4G packet core/EPC. This architecture provides advantages for scalability of existing services. Additionally, it enables Wi-Fi calling, as well as 3G access, 4G access, and standards-compliant femto cells (original device manufacturer, or ODM, femtos) from a variety of manufacturers, and also provides the convergence gateway's virtualization, scalability, SON, and other advantages. Use of enhanced nodeBs as described herein can also permit all RAN traffic to be on IP, providing cost savings. [0081] In an alternative Phase 1 deployment, a wireless operator could introduce a convergence gateway into the network for LTE, with support for outdoor macro, enhanced multi- RAT base stations (such as the Parallel Wireless CWS [TM] base station), and generic femto cells (residential, enterprise); introduce a convergence gateway for 3G, with Parallel Wireless CWS, generic femto cells (residential, enterprise); and introduce a convergence gateway for Wi- Fi, enabling a VoWiFi calling offering and a carrier Wi-Fi offering. Benefits include: virtualizing the existing core and adding more capacity by offloading signaling & data; enabling Femto offerings; VoWiFi; SON & Inter cell Interference coordination; CAPEX/OPEX savings; and public transport Wi-Fi and small cells.
[0082] In Phase 2, shown as FIG. 4, an operator may enable LTE local breakout in the convergence gateway, which reduces traffic towards the PGW and thereby eliminates the need to scale it, along with call detail record (CDR) generation and legal intercept (LI) integration; and may enable 3G data local breakout in the convergence gateway, which likewise reduces traffic towards the GGSN (eliminating the need to scale it), with CDR generation and LI integration. Virtualizing the data offload frees up (or eliminates) the PGW and SGSN, and enables the following: femto offerings with local breakout; low-latency traffic to the Internet (including cached video); private LTE networks; and CAPEX/OPEX savings.
[0083] In Phase 3, shown as FIG. 5, an operator may enable a virtual MSC on the convergence gateway for existing 3G macros. This enables RTP localization and optimized handover, and eliminates the need to scale the MSC, SGSN, and GGSN. It also provides the following feature advantages: SON; API enablement; smartphone apps; IoT/M2M; and femto support. Virtualizing the MSC adds more capacity in the network by offloading the existing MSC, and an app framework for innovative smartphone applications may be enabled. New revenue streams from apps and CAPEX/OPEX savings may also result.
[0084] In Phase 4, shown as FIG. 6, the convergence gateway absorbs MME and SGSN functionality. While the operator may continue to use the circuit-switched MSC for voice for legacy applications (including non-VoLTE), the operator may enable mobile edge computing (MEC) for exotic applications. This simplified, virtualized core network is scalable on commodity hardware in a data center and ready for 5G, with significant CAPEX and OPEX savings and modern management interfaces.
[0085] The phases described above are suitable for a gradual phase-in of a simplified core network that preserves user experience while enabling the operator to reduce OPEX over time. An additional IP-only architecture is described below for greenfield applications.
[0086] FIG. 7 shows a conceptual architecture for providing apps for machine-to-machine (M2M) applications, smartphones, femto cells, APs, etc. Apps may be supported using APIs that are provided at the convergence gateway, with EMS/management aggregation of the smartphones and M2M clients being enabled via the apps and the convergence gateway, using an element management system (EMS) that accesses records and data collected at the convergence gateway. The convergence gateway may use an S1-Flex interface or another interface to interact with a wireless network.
[0087] In some embodiments, the convergence gateway may enable integration of VoIP calls with ordinary cellular voice calls. Carriers want to provide a mobile app that offers value-added services along with VoIP calling. Typically, VoIP apps are not tightly coupled with the mobile OS; instead they are pushed to the background when the phone receives a native call, which can create unwanted results (e.g., termination of the VoIP call) when a VoIP app is in the middle of a call. It is desirable to coordinate native mobile calls with mobile apps to improve the user experience.
[0088] In one embodiment, IN triggers are used to provide integration. IN triggers are an older way of creating triggers to enable intelligent network services. See Reference (). While they work well for popular IN services, e.g., number portability and 800-number lookup, they are difficult and very expensive to use for new and innovative services due to the aged and largely unsupported nature of this technology.
[0089] In another embodiment, a number portability trigger may be used. This approach assumes a SIP soft switch that handles VoIP app calling. Normal SIP call features are assumed, i.e., active registration status, active call status, SIP call forking, etc. An LRN (Local Routing Number) in the number portability database is registered for the VoIP app users; the LRN resides on the SIP soft switch. In the case of an actually ported number, the soft switch needs to take care of the final LRN. The soft switch is provided at the convergence gateway.

[0090] In another embodiment, a call flow using a next-hop soft switch provided at the convergence gateway is detailed. This approach likewise assumes a SIP soft switch that handles VoIP app calling, with normal SIP call features (active registration status, active call status, SIP call forking, etc.). All calls are routed via the soft switch (using the numbering plan/routing tables) before going to the MSC. The soft switch decides whether it should deliver the call to the app via VoIP or to the native dialer via the MSC.
[0091] In another embodiment, a further call flow using a convergence gateway is detailed, based on the idea of using the convergence gateway as a virtual, decentralized core. The convergence gateway is configured to expose an API that a VoIP app can leverage to achieve a similar result, i.e., to get a trigger for incoming calls and many other innovative services, e.g., location-based triggers. This convergence gateway based solution allows a user or operator to bypass long distance/international carriers (among subscribers) by local breakout, even for native dialer calls.
[0092] In another embodiment, a mobile OS native dialer is integrated with a VoIP dialer, streamlining the UI and treating VoIP calls as equal to native calls for, e.g., phonebook presentation for outgoing calls, incoming call identification, hold, and merge.
[0093] In one embodiment, a convergence gateway API may be made available to reroute ordinary circuit-switched (CS) calls. This API may be configured and exposed at the convergence gateway, and may be accessed using a specially configured phone or using an app on a smartphone. This may reroute existing CS calls from the smartphone to directly connect to other nodes that are accessible on the network, creating a peer-to-peer or local network topology and avoiding a "hairpinning" route topology that goes out to a gateway and back into the same local network.
[0094] FIG. 8 is a network architecture diagram showing a block diagram of a convergence gateway, in accordance with some embodiments. Signaling coordinator 800 includes processor 802 and memory 804, which are configured to provide the functions described herein. Also present are radio access network coordination/signaling (RAN Coordination and signaling) module 806, RAN proxying module 808, and routing virtualization module 810.
[0095] RAN coordination module 806 may include database 806a, which may store associated UE signal quality parameters and location information as described herein. In some embodiments, SON coordinator server 800 may coordinate multiple RANs using coordination module 806. If multiple RANs are coordinated, database 806a may include information from UEs on each of the multiple RANs.
[0096] In some embodiments, the coordination server may also provide proxying, routing virtualization and RAN virtualization, via modules 810 and 808. In some embodiments, a downstream network interface 812 is provided for interfacing with the RANs, which may be a radio interface (e.g., LTE), and an upstream network interface 814 is provided for interfacing with the core network, which may be either a radio interface (e.g., LTE) or a wired interface (e.g., Ethernet). Signaling storm reduction functions may be performed in module 806.
[0097] SON coordinator 800 includes local evolved packet core (EPC) module 820, for authenticating users, storing and caching priority profile information, and performing other EPC- dependent functions when no backhaul link is available. Local EPC 820 may include local HSS 822, local MME 824, local SGW 826, and local PGW 828, as well as other modules. Local EPC 820 may incorporate these modules as software modules, processes, or containers. Local EPC 820 may alternatively incorporate these modules as a small number of monolithic software processes. Modules 806, 808, 810 and local EPC 820 may each run on processor 802 or on another processor, or may be located within another device.
[0098] FIG. 9 is a network architecture diagram showing a block diagram of a multi-RAT node and a convergence gateway, the convergence gateway having IuCS/IuPS and S1 interfaces toward a 3G and 4G-capable core network, in accordance with some embodiments. CWS 900 is a Parallel Wireless enhanced base station, with 2G RAT 901 including BTS and BSC, as well as Iuh interworking; 3G RAT nodeB 902, with Iuh as well; and a 4G eNodeB 903. CWS 900 is in communication with HNG 904, which is a Parallel Wireless convergence gateway, over four interfaces: 2G Iuh; 3G Iuh; S1-AP/S1-U for 4G; and a SON interface.
[0099] HNG 904 includes 2G interworking module 905, 3G interworking module 906, and Iu proxy module 907. 2G interworking module 905 takes Iuh and interworks it to IuCS and IuPS. Similarly, 3G interworking module 906 takes Iuh and interworks it to IuCS and IuPS. Once converted to IuCS or IuPS, IuCS/IuPS proxy 907 acts as a proxy for communications with a 3G core network, which natively supports IuCS/IuPS, over IuCS/IuPS interface 910.

[00100] As well, HNG 904 is in communication with a standard 2G or 3G base station, shown as 911 (2G BTS/BSC or 3G RNC). This communication is over IuCS/IuPS and not over Iuh; however, IuPS and IuCS are able to be handled by HNG 904 and can be proxied over to the 3G core network via interface 910.
[00101] HNG 904 also includes 4G gateway 908 and multi-RAT SON module 909. The 4G gateway simply provides a proxy and gateway for the 4G eNodeB 903 to S1-AP/S1-U interface 911, which connects natively to the 4G EPC. The SON module performs SON functionality as described herein, which generally includes looking at load statistics and changing thresholds; looking at all data being collected, including subscriber information and call state information; analytics; intelligent decisions; and proactive, as well as reactive action. The SON module is connected to all RATs, all proxies, and all core networks, and can use that information to provide multi-RAT SON functionality.
[00102] FIG. 10 is a network architecture diagram showing a block diagram of a multi-RAT node and a convergence gateway, the convergence gateway having IuCS and local breakout interfaces toward a core network, in accordance with some embodiments. Similar to FIG. 9, multi-RAT CWS 1000 includes 2G BTS/BSC 1001, which has its own built-in Iuh interworking; 3G NodeB with Iuh 1002; and 4G eNodeB 1003. Similar to FIG. 9, 2G Iuh, 3G Iuh, S1, and SON are the four inbound interfaces to HNG 1004. However, HNG 1004 has two outbound interfaces: an IuCS interface 1011, toward a 3G core network, and a data local breakout interface 1012, directly facing the Internet. This architecture is suitable when a network operator is using a public network for backhaul, for instance.
[00103] A 2G Iuh-IuCS/IuPS proxy 1005 and a 3G Iuh-IuCS/IuPS proxy 1006 may be provided, as well as an IuCS proxy, an IuPS proxy, and an IuPS local breakout module 1007.
[00104] Since IuCS is available, 2G and 3G circuit-switched calls are interworked to IuCS, and they are sent out over IuCS interface 1011. However, since IuPS is not available and S1 is not available, all data connections, including S1-U and IuPS, are interworked to GTP-U tunnels or bare IP packets and are sent out over data local breakout interface 1012.
[00105] In some embodiments, an MME and an SGSN function are built into the HNG 1004 to absorb these communications before they are sent to the core network, thereby reducing demand for signaling data. A SON module 1010 is also provided.

[00106] FIG. 11 is a network architecture diagram showing a block diagram of a multi-RAT node and a convergence gateway, the convergence gateway having IuCS, S1, and local breakout interfaces toward a core network, in accordance with some embodiments. Similar to FIG. 9, multi-RAT CWS 1100 includes 2G BTS/BSC 1101, which has its own built-in Iuh interworking; 3G NodeB with Iuh 1102; and 4G eNodeB 1103. Similar to FIG. 9, 2G Iuh, 3G Iuh, S1, and SON are the four inbound interfaces to HNG 1104. However, HNG 1104 has three outbound interfaces: an IuCS interface 1111, toward a 3G core network, an S1-U interface 1113, toward a 4G core network, and a data local breakout interface 1112, directly facing the Internet. This configuration is suitable when backhaul directly to a 4G core network is available.
[00107] HNG 1104 also includes, in addition to interworking modules 1105, 1106, 1107 and SON module 1110, additional MME/SGSN/GGSN functions 1108 and SIP-to-IuCS interworking 1109. SIP interworking enables VoLTE and VoIP to be interworked to 3G and completed over IuCS interface 1111. Since an MSC is maintained in this embodiment, 3G voice calls are able to be completed.
[00108] FIG. 12 is a network architecture diagram showing a block diagram of a multi-RAT node and a convergence gateway, the convergence gateway having S1 and local breakout interfaces toward a core network, in accordance with some embodiments. CWS 1200 is similar to CWS 1100. Base station 1210 is a 2G BTS/BSC or 3G RNC, and uses either IuCS/IuPS, A over IP/Gb over IP, or both, to connect to HNG 1204. HNG 1204 has two outbound connections: S1 connection 1211 and data local breakout 1212. HNG 1204 does not have a circuit-switched outbound connection; this configuration does not require a 3G core network and uses IMS to complete all calls. As a result, A/IP and IuCS must be interworked to VoLTE using an interworking proxy. This interworking proxy may require transcoding.
[00109] In some embodiments, the base stations described herein may be compatible with a Long Term Evolution (LTE) radio transmission interface or air interface. The LTE-compatible base stations may be eNodeBs. In addition to supporting the LTE interface, the base stations may also support other air interfaces, such as UMTS/HSPA, CDMA/CDMA2000, GSM/EDGE, GPRS, EVDO, other 3G/2G, legacy TDD, or other air interfaces used for mobile telephony. In some embodiments, the base stations described herein may support Wi-Fi air interfaces, which may include 802.11a interfaces, and may also support transmit power adjustments for some or all of the radio frequency interfaces supported.
[00110] In some embodiments, interworking is used herein to mean providing all, or a subset of, the functionality provided by a particular protocol or interface. The inventors have appreciated that providing the most important features of a particular protocol or interface may enable an operator to provide a good balance of user experience with reduced costs. In some embodiments, interworking to VoLTE may instead be interworking to VoIP, and vice versa. In some embodiments, Iuh may be provided at the base station; in other embodiments Iuh may be provided at the convergence gateway.
[00111] As described herein, a data flow router may be a gateway, in some embodiments; a proxy may be a B2BUA, an interworking proxy, or a transparent gateway, in some
embodiments; and a proxy may provide virtualization, as described elsewhere herein, in some embodiments.
[00112] ... interfaces that utilize radio frequency data transmission. Various components in the devices described herein may be added, removed, or substituted with those having the same or similar functionality. The foregoing description is intended to be illustrative of, but not limiting of, the scope of the invention.
|
https://patents.google.com/patent/WO2018035177A2/en
|
CC-MAIN-2019-51
|
en
|
refinedweb
|
Hi all. I'm running the following from the Radiant installation:

% rake production db:bootstrap

And get this error:

rake aborted!
undefined method `namespace' for main:Object
/var/lib/gems/1.8/gems/radiant-0.6.1/Rakefile:7

Now, this line says:

require 'tasks/rails'

Any help or suggestions would be very appreciated,
on 2007-05-22 13:23
on 2007-05-22 17:06
On 5/22/07, Johan Rönnblom <johan.ronnblom@picsearch.com> wrote:
> /var/lib/gems/1.8/gems/radiant-0.6.1/Rakefile:7
>
> Now, this line says:
> require 'tasks/rails'
>
> Any help or suggestions would be very appreciated,

Not a direct answer to your question, but I'd recommend that anyone doing ruby in earnest on a debian system install ruby from source rather than (or in addition to) the debian packages. The debian ruby package maintainers partition ruby in their own way, AND since they see some kind of conflict between the way they want to do package management, and the way rubygems does it, they don't support gems well if at all. What I've done on my ubuntu system is to install ruby from source in /usr/local and leave the debian packages installed for other packages which depend on them. I just use svn/make and gem to keep things up to date.

--
Rick DeNatale

My blog on Ruby
on 2007-05-22 19:32
Johan Rönnblom wrote:
> /var/lib/gems/1.8/gems/radiant-0.6.1/Rakefile:7

Rake 0.7 added the concept of namespaces, so you probably don't have the latest version of rake.

$ rake --version

I have 0.7.3 on my box and it runs fine. You might try upgrading if you have something less than 0.7.
|
https://www.ruby-forum.com/topic/108914
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
Writing your first Burp Suite extension
This guide will help you to write your first Burp extension in any of the supported languages (Java, Python & Ruby).
Basic steps to get an extension running
Before we get into specifics for each language, there is some general context to bear in mind: Burp looks for a class called BurpExtender to instantiate (with no constructor parameters) and then calls registerExtenderCallbacks() on this object, passing in a "callbacks" object. Think of this as the entrypoint for your extension, allowing you to tell Burp what your extension is capable of, and when Burp should ask your extension questions.
First of all you'll need an IDE. Some popular options are: IntelliJ IDEA, Netbeans, and Eclipse.
Create a new project, and create a package called "burp". Next you'll need to copy in the interface files which you can export from Burp at Extender / APIs / Save interface files. Save the interface files into the folder that was created for the burp package.
Now that you have the general environment set up you'll need to create the actual extension file. Create a new file called BurpExtender.java (or a new class called BurpExtender, if your IDE makes the files for you) and paste in the following code:
package burp;

public class BurpExtender implements IBurpExtender
{
    public void registerExtenderCallbacks(IBurpExtenderCallbacks callbacks)
    {
        // your extension code here
    }
}
This example does nothing at all, but will compile and can be loaded into Burp after you generate a JAR file from your IDE - it will usually be in a build or dist directory.

Burp relies on Jython to provide its Python support. You will need to download the "Standalone Jar" version of Jython and configure Burp with its location (at Extender / Options / Python environment).
Now create a new file with any name you like, ending in '.py', and add the following content to that file:
from burp import IBurpExtender

class BurpExtender(IBurpExtender):
    def registerExtenderCallbacks(self, callbacks):
        # your extension code here
        return
Then go to the Extensions tab, and add a new extension. Select the extension type "Python", and specify the location of your file.
This example does nothing at all, but should load into Burp without any errors.
Note: Because of the way in which Jython dynamically generates Java classes, you may encounter memory problems if you load several different Python extensions, or if you unload and reload a Python extension multiple times. If this happens, you will see an error like:
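The exact wording depends on the JVM, but it is typically the JVM's PermGen exhaustion error, along the lines of:

java.lang.OutOfMemoryError: PermGen space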
You can avoid this problem by configuring Java versions lower than 8 to allocate more PermGen storage, by adding a -XX:MaxPermSize option to the command line when starting Burp. For example:
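A typical invocation might look like the following (the JAR file name and the 1G figure are placeholders; adjust both to your installation):

java -XX:MaxPermSize=1G -jar burp.jar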
This option is not available in Java 8+, and permgen size should instead be automatically managed by the JVM.
Burp relies on JRuby to provide its Ruby support. You will need to download the "Complete .jar" version and configure Burp with its location (at Extender / Options / Ruby environment).
Now create a new file with any name you like, ending in '.rb', and add the following content to that file:
java_import 'burp.IBurpExtender'

class BurpExtender
  include IBurpExtender

  def registerExtenderCallbacks(callbacks)
    # your extension code here
  end
end
Then go to the Extensions tab, and add a new extension. Select the extension type "Ruby", and specify the location of your file.
This example does nothing at all, but should load into Burp without any errors.
Useful APIs
The first important thing to note about programming extensions for Burp is that, for the most part, the data you will be inspecting is provided in the form of byte arrays (byte[]), which might take some getting used to if you'd normally program with strings. It's important to understand that while it is possible to convert from bytes to a string, this process is not entirely trivial and may bloat the memory usage of your extension. In some cases this is unavoidable (e.g., you need to execute a regex against a request/response), but on the whole you should try to stick to working with bytes directly.
Burp will provide various data objects to you which model HTTP requests, responses, parameters, cookies, etc. as well as Burp-specific items such as issues. We will discuss a number of these models under the assumption that you are familiar with HTTP and with Burp as a tool.
Assuming a starting point of the blank extension you created above, because you've implemented the IBurpExtender interface, the entry point for your extension will be the method:
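In Java terms that is the single method declared by IBurpExtender (quoted here from the standard interface files; if your saved copies differ, trust those):

void registerExtenderCallbacks(IBurpExtenderCallbacks callbacks);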
Sidenote: as the Python and Ruby compatibility is provided via a translation layer to Java bytecode, we will talk about all interfaces in the strictest sense. To read them as Python or Ruby functions you can simply drop the types. e.g. in Python:
def registerExtenderCallbacks(self, callbacks)
or in Ruby:
def registerExtenderCallbacks(callbacks)
Using the Java definitions allows us to be as precise as possible, and allows us to discuss the finer points of the translation layer where it's useful. In the interests of clarity some of the method names mentioned in this document have been qualified further with the name of the interface.
The first useful thing you can do with the callbacks object is to tell Burp what your extension is called:
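For example (the name string here is just an illustration; pick your own):

callbacks.setExtensionName("My first extension");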
This allows users to see a pretty name for your extension.
The first thing you're likely to do is to get a copy of the helpers to make your life easier:
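For example (the variable name is arbitrary):

IExtensionHelpers helpers = callbacks.getHelpers();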
The first tools to check out are your byte utilities:
String bytesToString(byte[] data);
byte[] stringToBytes(String data);
String urlDecode(String data);
String urlEncode(String data);
byte[] urlDecode(byte[] data);
byte[] urlEncode(byte[] data);
byte[] base64Decode(String data);
byte[] base64Decode(byte[] data);
String base64Encode(String data);
String base64Encode(byte[] data);
These allow you to convert between string and byte where required, and provide you with encodings useful for working with data on the web.
Burp's extender APIs return a few data types to you apart from byte arrays, to model HTTP concepts in a convenient manner. Briefly, those are IHttpRequestResponse, which gives you access to the IHttpService (i.e. the domain, port and protocol) and wraps up the request and response. The requests and responses can be further processed via the various overloads of IExtensionHelpers.analyzeRequest() in order for you to inspect parameters, cookies, etc.
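As an illustration of that flow, here is a minimal sketch (not taken from the original guide) of an extension that registers an IHttpListener and analyzes each outgoing request. The extension name and the choice of listener are arbitrary; the interfaces and method signatures are the standard ones from the saved interface files:

package burp;

import java.util.List;

public class BurpExtender implements IBurpExtender, IHttpListener
{
    private IExtensionHelpers helpers;

    public void registerExtenderCallbacks(IBurpExtenderCallbacks callbacks)
    {
        helpers = callbacks.getHelpers();
        callbacks.setExtensionName("Request inspector sketch");
        callbacks.registerHttpListener(this);
    }

    public void processHttpMessage(int toolFlag, boolean messageIsRequest,
                                   IHttpRequestResponse messageInfo)
    {
        if (!messageIsRequest)
            return;

        // Parse the raw request into something structured.
        IRequestInfo requestInfo = helpers.analyzeRequest(messageInfo);
        List<IParameter> parameters = requestInfo.getParameters();
        for (IParameter parameter : parameters)
        {
            // Each IParameter exposes its name, value, type, etc.
            String name = parameter.getName();
            String value = parameter.getValue();
            // ... act on the parameter here
        }
    }
}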
A core part of writing an extension is in telling Burp what your extension is capable of dealing with, and this works through a mechanism of registering bits of your code with Burp so that this code can be executed by Burp at a later point. We won't go through all of the functionality provided by the IBurpExtenderCallbacks object; it's best that you browse the source to see what possible functionality Burp understands, but as an example you could register an object (maybe your current one, this in Java or self in Python/Ruby) via the following API:
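The registration method is declared on the callbacks object as follows (quoted from IBurpExtenderCallbacks; there are many sibling register...() methods for other listener types):

void registerExtensionStateListener(IExtensionStateListener listener);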
e.g.
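Assuming your BurpExtender class itself also implements IExtensionStateListener (an assumption made for this sketch), the call is simply:

callbacks.registerExtensionStateListener(this);

From Python or Ruby you would pass self instead of this.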
Now, in order to understand what it means to be an IExtensionStateListener you should take a look at the source for that API, which is pretty simple since there's only one method to implement:
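As shipped in the standard interface files, IExtensionStateListener declares just:

void extensionUnloaded();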
So you can determine when your extension is unloaded by the user, Burp will inform you of this (if you registered!) in order for you to do some cleanup, like closing a database connection. This is commonly referred to as an "Event-driven architecture", and is common in scenarios where you know that certain events might happen but you don't know when exactly. The registration mechanism allows you to be selective in what you want to deal with, allowing you to focus your efforts.
Often you might want to save some state; maybe you've asked a user for configuration, or simply discovered something that you would like to use later on. The easiest way to do this is to use the following APIs:
String IBurpExtenderCallbacks.loadExtensionSetting(String name);
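Its write counterpart is also declared on the callbacks object (signature quoted from IBurpExtenderCallbacks; consult your saved interface files if yours differ):

void IBurpExtenderCallbacks.saveExtensionSetting(String name, String value);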
The reason these APIs exist and are useful is that they avoid the complication of managing files or databases ourselves. For simple saving and re-loading of data you should always prefer to use these mechanisms rather than introducing the headache of filesystems, permissions, and differences across operating systems.
Another API that helps you to avoid managing files is:
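The method meant here is the temp-file factory on the callbacks object (again quoted from IBurpExtenderCallbacks), which hands you back an ITempFile:

ITempFile IBurpExtenderCallbacks.saveToTempFile(byte[] buffer);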
byte[] ITempFile.getBuffer();
This is useful in case you are dealing with large amounts of data and do not wish to bloat your memory usage. Note that ITempFile.delete() is deprecated; temp files are removed automatically by Burp on shutdown.
A special note on dealing with Java arrays in Python & Ruby
As mentioned previously, Python and Ruby compatibility is achieved via a translation layer and this implementation detail leaks out on occasion. You will notice this mostly when dealing with Java arrays, and most often with bytes.
To create a byte-compatible object to pass to the Burp Extender APIs:
To convert an existing list to a Java array:
from jarray import array
array([1, 2, 3], 'i')   # => new int[] {1, 2, 3}
Note the 'i', which corresponds to "integer". The various primitive type names can be found in the Jython documentation.
To create a byte-compatible object to pass to the Burp Extender APIs:
To convert an existing array to a Java array:
Note the :int, which corresponds to "integer". The various primitive type names can be found in the JRuby documentation.
More examples
Some of the Burp APIs have been highlighted in example extensions at the bottom of the official extensibility page, their source code contains comments that explain the code, providing a rich resource for learning the various available APIs in a practical setting.
Burp Community
For more help and examples of Burp extensions, you can refer to the Burp Extensions community discussions in the Support Center.
|
https://portswigger.net/burp/extender/writing-your-first-burp-suite-extension
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
I understand the placement new operator allows to construct object at a particular/specific memory location. So i tried this;
#include <iostream>
#include <new>
using namespace std;
struct Any {
int x;
string y;
};
int main() {
string mem1[1];
int mem2[5];
Any *p1 = new (mem1) Any;
Any *p2 = new (mem2) Any;
p1->y = "Hello";
//p2->x = 20;
cout << p1->y << endl;
//cout << p2->x;
return 0;
}
For example, with

int mem2[1];
Any *p2 = new (mem2) Any;
p2->x = 20;

it prints 20, and p1->y works as well. Does the size used in int mem2[5] (here 5) need to be >= 5, or otherwise large enough to hold an Any?
You have several instances of undefined behavior in your code.

First of all, there's no guarantee that the declaration string mem1[1]; sets aside enough memory to contain an object of type Any.

Secondly, even if the first array had adequate memory to hold such an object, the destructor of the array string mem1[1]; will still run at the end of main, but you've overwritten that array with something else, so your program should, in the best case, crash.

You may want to use a POD type, like char mem1[sizeof(Any)], to store the object. That way you are sure that mem1 is big enough to store an Any, and you'll have no issues with the destructor of mem1 being called at the end of main().

Again, you may want to explore a standard facility for this kind of thing, std::aligned_storage.
|
https://codedump.io/share/az9Xeh8NhdV1/1/problems-with-the-placement-new-operator
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
first attempts with the new multicast feature of FMS4 ...FrankDelporte Sep 28, 2010 7:28 AM
Hello,
I installed the developer version of Flash Media Server 4.
I used the f4fpackager to create this file:
And when I browse to this URL, it shows me this:
<manifest>
<id>testvideo</id>
<streamType>recorded</streamType>
<duration>114.61450000000001</duration>
<bootstrapInfo profile="named" id="bootstrap8944"> AAABq2Fic3QAAAAAAAAAFAAAAAPoAAAAAAABv7UAAAAAAAAAAAAAAAAAAQAAABlhc3J0AAAAAAAAAAABAAAAAQAAAB0BAAABZmFmcnQAAAAAAAAD6AAAAAAVAAAAAQAAAAAAAAAAAAAXcAAAAAIAAAAAAAAXdgAAC7gAAAAEAAAAAAAALuwAABdwAAAABQAAAAAAAEZiAAALuAAAAAcAAAAAAABd2AAAF3AAAAAIAAAAAAAAdU4AAAu4AAAACgAAAAAAAIzEAAAXcAAAAAsAAAAAAACkOgAAC7gAAAANAAAAAAAAu7AAABdwAAAADgAAAAAAANMmAAALuAAAABAAAAAAAADqnAAAF3AAAAARAAAAAAABAhIAAAu4AAAAEwAAAAAAARmIAAAXcAAAABQAAAAAAAEw/gAAC7gAAAAWAAAAAAABSHQAABdwAAAAFwAAAAAAAV/qAAALuAAAABkAAAAAAAF3YAAAF3AAAAAaAAAAAAABjtYAAAu4AAAAHAAAAAAAAaZMAAAXcAAAAB0AAAAAAAG9wgAAAfQAAAAAAAAAAAAAAAAAAAAAAA==
</bootstrapInfo>
<media streamId="testvideo" url="testvideo" bootstrapInfoId="bootstrap8944">
<metadata>
AgAKb25NZXRhRGF0YQgAAAAAAAhkdXJhdGlvbgBAXKdT987ZFwAFd2lkdGgAQIQAAAAAAAAABmhlaWdodABAdgAAAAAAAAAMdmlkZW9jb2RlY2lkAgAEYXZjMQAMYXVkaW9jb2RlY2lkAgAEbXA0YQAKYXZjcHJvZmlsZQBAWQAAAAAAAAAIYXZjbGV2ZWwAQD4AAAAAAAAABmFhY2FvdAAAAAAAAAAAAAAOdmlkZW9mcmFtZXJhdGUAQD34U+JVaygAD2F1ZGlvc2AQUo7sYAAAAAACXRpbWVzY2FsZQBA3UwAAAAAAAAIbGFuZ3VhZ2UCAANgYGAAAAkDAAZsZW5ndGgAQUNICQAAAAAACXRpbWVzY2FsZQBA1YiAAAAAAAAIbGFuZ3VhZ2UCAANgYGAAAAkAB2N1c3RkZWYKAAAAAAAACQ==
</metadata>
</media>
</manifest>
I added the OSMF.swc to my AIR project, and my code looks like this:
...
<fx:Script>
<![CDATA[
import org.osmf.events.TimeEvent;
import org.osmf.media.DefaultMediaFactory;
import org.osmf.media.MediaPlayerSprite;
import org.osmf.media.URLResource;
private var mps:MediaPlayerSprite;
private var mediaFactory:DefaultMediaFactory;
private var track:Sprite;
private var progress:Sprite;
private function created():void
{
mps = new MediaPlayerSprite();
mps.width = 640;
mps.height = 400;
holder.addChild(mps);
mediaFactory = new DefaultMediaFactory();
mps.media = mediaFactory.createMediaElement(new URLResource(""));
mps.mediaPlayer.addEventListener(TimeEvent.CURRENT_TIME_CHANGE, onCurrentTimeChange);
}
private function onCurrentTimeChange(event:TimeEvent):void
{
trace("video timeevent: " + event.time.toString() + "\tduration: " + mps.mediaPlayer.duration.toString());
...
But the trace only gives me this one time:
video timeevent: 0 duration: 0
and nothing is played back ..
Any suggestion how to continue with this? What has to be done within the FMS4 to get things working?
Thanks a lot,
Frank
1. Re: first attempts with the new multicast feature of FMS4 ...bringrags Sep 28, 2010 10:46 AM (in response to FrankDelporte)
The F4M file that you posted contains a reference to HTTP streaming media, not multicast media. You might want to double-check how you're outputting the F4M file.
Also, make sure that you're using the latest version of OSMF (the sprint 3 drop on osmf.org should be sufficient), and that the CONFIG::FLASH_10_1 property is set to true.
2. Re: first attempts with the new multicast feature of FMS4 ...FrankDelporte Sep 30, 2010 12:33 AM (in response to bringrags)
Thanks for the feedback Brian.
Any idea which parameters should be used to encode for multicast streaming?
I found this list, but nothing seems to be specific for multicasting ...- 7ffc.html
3. Re: first attempts with the new multicast feature of FMS4 ...bringrags Sep 30, 2010 9:35 AM (in response to FrankDelporte)
Details on the manifest (F4M) file format are here: on
A multicast-enabled manifest file should include the "groupspec" and "multicastStreamName" attributes. I'll get someone to follow up on the packager options.
4. Re: first attempts with the new multicast feature of FMS4 ...iproano Sep 30, 2010 11:25 AM (in response to FrankDelporte)
Please use the configurator tool to create the manifest file for multicast. Configurator tool is located under the tools folder of FMS 4.0 server installation root directory. In Windows the default location is C:\Program Files\Adobe\Flash Media Server 4\tools\multicast\configurator.
The tool lets you choose between Fusion, IP Multicast and Peer to Peer. I will recommend making the Group Name unique.

After the F4M manifest file is generated by the configurator tool, please modify rtmfpGroupspec to groupspec and rtmfpStreamName to multicastStreamName.
|
https://forums.adobe.com/thread/730213
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
SOAP Overview
Tamas Szigyarto
Department of Computer Modelling and Multiple Processors Systems,
Faculty of Applied Mathematics and Control Processes,
St Petersburg State University,
Universitetskii prospekt 35,
Peterhof,St Petersburg 198504,Russia
szigyartotamas@inbox.ru

Abstract
This paper contains an overview of the new XML-based protocol called SOAP (Simple Object Access Protocol). The key points of the SOA (Service-Oriented Architecture) philosophy that are needed to understand why SOAP is necessary and how it is implemented are considered. The basic idea of SOAP and its realization within the Web Services architecture is demonstrated. The paper also includes some comparisons of SOAP with related technologies like DCOM and CORBA.

To avoid misunderstandings, the author should note that this paper is essentially a compilation of the material referenced in the last section. Its only purpose is to give students the concept of SOAP.

Key words: XML, SOAP, service, SOA.

1 Introduction
Software as a Service. The concept of "software as a service" is nothing new. In fact, it has existed almost as long as computing itself. Back in the 1950s, organizations spurned the grid and instead had their own private generation stations inside their networks; organizations have likewise spent millions of dollars with large software vendors to help them build their own home-grown, large, and complex IT "generating" stations.

There have been many initiatives over the years to deliver an SOA. We can look at DCE, CORBA, and DCOM as some recent examples. However, the emergence of the Internet and a set of commonly agreed software standards has presented an opportunity to overcome the shortcomings of previous attempts. The building blocks are in place to realize the vision of "software as a service".
Starting with the service. Let us consider the next example. If you are a telecommunications provider, the services you provide to your customers are built around provisioning, billing, and customer care. People call you up and ask you to activate a telephone line, they then make calls on that telephone line, you send them a bill for those calls, and they call you when they are having problems with the service or the bill.

There is a lot of technology involved in providing those services to the customer. Many different systems need to be connected together to enable the service - historically those services grew from the technology outwards. That means that the way a service was created and delivered to the customer was dictated by the available technology.

The whole goal of SOA is to try and reverse that trend. The premise of SOA is to start with the service (as it is defined by the people who want to provide it) and then work backwards into the technology. It is, as it says, a Service-Oriented Architecture. This is a fundamental and profound change in the way we think about information systems and is also the main reason why SOA might succeed this time around. If you look at any of the previous attempts at an SOA, you can see that they were very technology-centric. To demonstrate this point, let me use a quote from IBM:

"So it (SOA) basically boils down to distributed computing with standards that tell us how to invoke different applications as services in a secure and reliable way and then how we can link the different services together using choreography to create business processes. And then finally so that we can manage these services so that ultimately we can manage and monitor our business performance."

While this is technologically valid, it is missing the point of SOA. Again, we're focused on the technology that enables SOA and not on SOA itself. This is one of the biggest hurdles in making SOA work.

Now we can turn to consider SOAP and the abilities that it gives us to realize the SOA architecture in a natural way.
2 What Is SOAP?

SOAP as it is. If you had asked anyone what SOAP meant several years ago, they would have probably said something like "it's for making DCOM and CORBA (e.g., Remote Procedure Calls (RPC)) work over the Internet". The original authors admit they were focused on "accessing objects" back then, but over time it became desirable for SOAP to serve a much broader audience. Hence, the focus of the specification quickly moved away from objects towards a generalized XML messaging framework.

SOAP is a lightweight protocol intended for exchanging structured information in a decentralized, distributed environment. SOAP uses XML technologies to define an extensible messaging framework, which provides a message construct that can be exchanged over a variety of underlying protocols. The framework has been designed to be independent of any particular programming model and other implementation-specific semantics.

This definition really gets to the heart of what SOAP is about today. SOAP defines three major characteristics.

Figure 1. Simple SOAP messaging
First, SOAP extensibility is key. When the acronym stood for something, "S" meant "Simple". Simplicity remains one of SOAP's primary design goals, as evidenced by SOAP's lack of various distributed-system features such as security, routing, and reliability, to name a few. SOAP defines a communication framework that allows for such features to be added down the road as layered extensions. Microsoft, IBM, and other software vendors are actively working on a common suite of SOAP extensions that will add many of these features that most developers expect.

Second, SOAP can be used over any transport protocol such as TCP, HTTP, SMTP, or even MSMQ (Microsoft Message Queue) (see Figure 1). In order to maintain interoperability, however, standard protocol bindings need to be defined that outline the rules for each environment. The SOAP specification provides a flexible framework for defining arbitrary protocol bindings and provides an explicit binding today for HTTP, since it's so widely used.

Third, SOAP allows for any programming model and is not tied to RPC. Most developers immediately equate SOAP with making RPC calls on distributed objects (since it was originally about "accessing objects") when in fact the fundamental SOAP model is more akin to traditional messaging systems like MSMQ. SOAP defines a model for processing individual, one-way messages. You can combine multiple messages into more sophisticated exchanges such as request/response, solicit/response (the reverse of request/response), notifications, and long-running peer-to-peer conversations.

Figure 2. Request/response message exchange pattern

Developers often confuse request/response with RPC when they're actually quite different.
Armed with these three major characteristics, the SOAP messaging framework facilitates exchanging XML messages in heterogeneous environments where interoperability has long been a challenge.

Messaging Framework. The core section of the SOAP specification is the messaging framework. The SOAP messaging framework defines a suite of XML elements for "packaging" arbitrary XML messages for transport between systems.

The framework consists of the following core XML elements: Envelope, Header, Body, and Fault, all of which are from the http://schemas.xmlsoap.org/soap/envelope/ namespace in SOAP 1.1. Now we can take a look at the XML Schema definition for SOAP 1.1 in the following code.
SOAP 1.1 XML Schema Definition
If you check out the complexType definition for Envelope, you can quickly learn how these elements relate to each other. The message template defined by the schema consists of an Envelope containing an optional Header followed by a mandatory Body element. The Body element represents the message payload. The Body element is a generic container in that it can contain any number of elements from any namespace. This is ultimately where the data goes that you're trying to send.
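As a concrete sketch of filling the Body (using the standard SAAJ API from javax.xml.soap; the urn:examples:banking namespace, element names, and account values are purely illustrative placeholders, not something defined by this paper), the funds-transfer request discussed next could be assembled like this:

import javax.xml.namespace.QName;
import javax.xml.soap.MessageFactory;
import javax.xml.soap.SOAPBodyElement;
import javax.xml.soap.SOAPMessage;

public class TransferFundsExample {
    public static void main(String[] args) throws Exception {
        // Create an empty SOAP 1.1 message: an Envelope with Header and Body.
        SOAPMessage message = MessageFactory.newInstance().createMessage();

        // Add the payload to the Body: one operation element with its children.
        QName operation = new QName("urn:examples:banking", "TransferFunds", "x");
        SOAPBodyElement transfer = message.getSOAPBody().addBodyElement(operation);
        transfer.addChildElement("from").addTextNode("22-342439");
        transfer.addChildElement("to").addTextNode("98-283843");
        transfer.addChildElement("amount").addTextNode("100");

        message.saveChanges();
        message.writeTo(System.out);   // prints the serialized Envelope
    }
}

Calling message.writeTo(System.out) prints the serialized Envelope, i.e. exactly the kind of message described below.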
For example, a SOAP message like the one sketched above represents a request to transfer funds between bank accounts. If the receiver supports request/response and is able to process the message successfully, it would send another SOAP message back to the initial sender. In this case, the response information would also be contained in the Body element.

The messaging framework also defines a Fault element, carried within the Body, for reporting errors back to the sender; the response to the transfer request might, for instance, indicate that an "Insufficient Funds" error occurred while processing the request.

The Fault element must contain a faultcode followed by a faultstring element. The faultcode element classifies the error using a namespace-qualified name, while the faultstring element provides a human-readable explanation of the error (similar to how HTTP works). Table 1 provides brief descriptions of the SOAP 1.1 defined fault codes (all of which are in the http://schemas.xmlsoap.org/soap/envelope/ namespace).

The Fault element may also contain a detail element for providing details about the error, which may help clients diagnose the problem, especially in the case of Client and Server fault codes.
Table 1. SOAP 1.1 Fault Codes

VersionMismatch: The processing party found an invalid namespace for the SOAP Envelope element.

MustUnderstand: An immediate child element of the SOAP Header element that was either not understood or not obeyed by the processing party contained a SOAP mustUnderstand attribute with a value of "1".

Client: The Client class of errors indicates that the message was incorrectly formed or did not contain the appropriate information in order to succeed. It is generally an indication that the message should not be resent without change.

Server: The Server class of errors indicates that the message could not be processed for reasons not directly attributable to the contents of the message, but rather to the processing of the message. For example, processing could include communicating with an upstream processor, which didn't respond. The message may succeed if re-sent at a later point in time.

Now imagine that you want to add some authentication information to the original message so the receiver can determine whether the sender has sufficient rights to execute the transfer. One way to do this would be to add the credentials information into the body itself. Going down this path requires every operation that needs authentication to deal with the credentials. It also means that other applications in need of security must develop their own solutions to the problem; ultimately, interoperability suffers. For common needs such as security, it makes more sense to define standard headers that different applications and operations can share. Application protocols have long separated control information from payload in exactly this way, and the SOAP Header and Body elements provide the same distinction in the easy-to-process world of XML. Besides ease of use, the key benefit of the extensible Envelope is that it can be used with any communications protocol.
Headers have always played an important role in application protocols, like HTTP, SMTP, etc., because they allow the applications on both ends of the wire to negotiate the behavior of the supported commands. Although the SOAP specification itself doesn't define any built-in headers, headers will eventually play an equally important role in SOAP. As GXA (Global XML Web Services Architecture) matures and SOAP headers become standardized, it will become easier for developers to define header information that influences payload processing. Hence, the Header is the right place to put something like a credentials element that helps control access to the operation.

Header blocks can also be annotated with a global SOAP attribute named mustUnderstand to indicate whether or not the receiver is required to understand the header before processing the message. A credentials header marked with mustUnderstand="1" must be processed by the receiver; when mustUnderstand="0" or the mustUnderstand attribute isn't present, the receiver can ignore those headers and continue processing. The mustUnderstand attribute plays a central role in the overall SOAP processing model.
Processing Model. SOAP defines a processing model that outlines rules for processing a SOAP message as it travels from a SOAP sender to a SOAP receiver. Figure 1 illustrates the simplest SOAP messaging scenario, where there's one application (SOAP sender) sending a SOAP message to another application (SOAP receiver).

The processing model, however, allows for more interesting architectures like the one in Figure 3, which contains multiple intermediary nodes. Further, we will use the term SOAP node to refer to any application that processes SOAP messages, whether it's the initial sender, an intermediary, or the ultimate receiver.

Figure 3. Sophisticated SOAP messaging
An intermediary sits between the initial sender and the ultimate receiver and intercepts
SOAP messages.An intermediary acts as both a SOAP sender and a SOAP receiver at
the same time.Intermediary nodes make it possible to design some interesting and flexible
networking architectures that can be influenced by message content.SOAP routing is a
good example of something that heavily leverages SOAP intermediaries.
While processing a message, a SOAP node assumes one or more roles that influence how SOAP headers are processed. Roles are given unique names (in the form of URIs) so they can be identified during processing. When a SOAP node receives a message for processing, it must first determine the roles it will assume; the SOAP specification defines a few standard roles (see Table 2) and applications are allowed to define custom roles as well.
SOAP headers target specific roles through the global actor attribute (the attribute is
named role in SOAP 1.2).If the actor attribute isn’t present,the header is targeted at
the ultimate receiver by default.The following SOAP message illustrates how to use actor:11
Since the wsrp:path header is targeted at the next role and marked as mandatory (mus-
tUnderstand=”1”),the first SOAP node to receive this message is required to process it
according to the header block’s specification,in this case WS-Routing.If the SOAP node
wasn’t designed to understand a mandatory header targeted at one of its role,it is required
to generate a SOAP fault,with a soap:MustUnderstand status code,and discontinue pro-
cessing.The SOAP Fault element provides the faultactor child element to specify who
caused the fault to happen within the message path.The value of the faultactor attribute
is a URI that identifies the SOAP node that caused the fault.
If a SOAP node successfully processes a header, it's required to remove the header from the message. SOAP nodes are allowed to reinsert headers, but doing so changes the contract parties: it's now between the current node and the next node the header targets. If the SOAP node happens to be the ultimate receiver, it must also process the SOAP body.

Table 2. SOAP 1.2 Roles

Protocol Bindings. An interesting aspect of Figure 3 is that SOAP enables message
exchange over a variety of underlying protocols.Since the SOAP messaging framework
is independent of the underlying protocol,each intermediary could choose to use a dif-
ferent communications protocol without affecting the SOAP message.Standard protocol
bindings are necessary,however,to ensure high levels of interoperability across SOAP
applications and infrastructure.
A concrete protocol binding defines exactly how SOAP messages should be transmitted
with a given protocol.In other words,it defines the details of how SOAP fits within the12
scope of another protocol,which probably has a messaging framework of its own along
with a variety of headers.What the protocol binding actually defines depends a great
deal on the protocol’s capabilities and options.For example,a protocol binding for TCP
would look much different than one for MSMQ or another for SMTP.
The SOAP 1.1 specification only codifies a protocol binding for HTTP,due to its wide use.
SOAP has been used with protocols other than HTTP but the implementations weren’t
following a standardized binding.There’s nothing wrong with forging ahead without stan-
dard protocol bindings in place as long as you’re prepared to deal with interoperability
issues once you try to integrate with other SOAP implementations using the same proto-
col.
The HTTP protocol binding defines the rules for using SOAP over HTTP. SOAP request/response maps naturally to the HTTP request/response model. Figure 4 illustrates many of the SOAP HTTP binding details that the specification also defines.
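To make the HTTP binding concrete, here is a minimal, self-contained Java sketch of posting a SOAP 1.1 envelope over HTTP (the endpoint URL, namespace, element names and SOAPAction value are invented placeholders; a real service advertises its own):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class SoapHttpPost {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint; replace with a real SOAP service URL.
        URL endpoint = new URL("http://example.com/soap");

        String envelope =
            "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">" +
            "<soap:Body>" +
            "<x:TransferFunds xmlns:x=\"urn:examples:banking\">" +
            "<x:from>22-342439</x:from><x:to>98-283843</x:to><x:amount>100</x:amount>" +
            "</x:TransferFunds>" +
            "</soap:Body></soap:Envelope>";

        HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        // SOAP 1.1 over HTTP uses a text/xml Content-Type and a SOAPAction header.
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        conn.setRequestProperty("SOAPAction", "\"urn:examples:banking/TransferFunds\"");

        try (OutputStream out = conn.getOutputStream()) {
            out.write(envelope.getBytes("UTF-8"));
        }

        System.out.println("HTTP status: " + conn.getResponseCode());
    }
}

The text/xml Content-Type and the SOAPAction header are the visible fingerprints of the SOAP 1.1 HTTP binding.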
3 Why SOAP?
Industry Acceptance for SOAP.SOAP has attracted great attention in the software
development community over the past year.SOAP has been implemented as the stan-
dards have evolved.This means that developers have been able to download and use the
technology,rather than just read specifications and industry analysis.
SOAP has been described as a disruptive technology because it changes both the way
software will be developed and the way industry rivals are cooperating.The two com-
panies that showed early leadership in Web Services,Microsoft and IBM,have clearly
demonstrated that they see real value in rapid and widespread acceptance of the Web
Services paradigm.
The association of Microsofts.NET strategy with the more general industry effort in
Web Services is proving highly credible in the corporate marketplace.This is in contrast
to Microsofts previous effort at developing an Internet platform,Windows DNA,which
arrived late and lost out to J2EE in the corporate market.A key benefit for IBM is that
Web Services provide a link between its various generations of technology.
Many of the key industry leaders have announced their support for SOAP,including HP,
SAP,Software AG,Sun Microsystems,and Oracle.One of the most interesting develop-
ments since the introduction of SOAP has been the emergence of Web Services Descrip-
tion Language (WSDL) and Universal Description,Discover,and Integration (UDDI) to
provide a unified Web Services model for the future of distributed computing over the In-
ternet.This in turn has lead to the release of Web Services platforms,with SOAP servers
being the core technology in the initial releases.
An interesting aspect of SOAP is that it is not technologically advanced being based on
remote procedure calls and XML and initially using HTTP as the transport layer.Using
well understood and accepted technology has ensured that there have been relatively few
technology disputes.
SOAP Client Implementations and Interoperability.A very significant number of
SOAP clients (typically API’s that generate the SOAP payload) have been developed.In
theory,it is possible to implement SOAP clients in any programming language on any
operating system.This is one of the benefits of the ”S” in the SOAP acronym.
Some of the early SOAP client implementations suffered from interoperability issues (in-
complete standards compliance) or usability issues (for example,Apache SOAP requires
hand-coded serialization).Apache SOAP is one of the most widely used SOAP imple-
mentations for learning about Web Services and non-commercial deployments.However,
Apache SOAP requires type information to be passed with the SOAP message (not part
of the SOAP 1.1 specification).One technology that can help with interoperability issues
with the many SOAP client implementations is XSLT.This can be used to map the SOAP14
message generated by the SOAP client with the message expected by the SOAP server.
SOAP Transport.Most SOAP servers currently use HTTP as the transport layer for
the XML payload in SOAP messages.HTTP satisfies a number of requirements of an
Internet transport:
• Ubiquity
• Firewall - friendliness
• Simplicity
• Statelessness (makes for graceful failure)
• Scalability (Web servers are proven to support extremely high traffic)
• Readily capable of being made secure
However,the SOAP 1.1 specification clearly states that the transport layer is flexible.
There are a number of SOAP implementations that support other transport layers,such
as:
• HTTPS - using SSL provides security
• SMTP - enables asynchronous SOAP requests
• JMS (Java Message Service) - provides the benefit of tight integration with the J2EE
platform
It can be expected that other transports,such as MQSeries or FTP,will be supported
eventually.IBM has an interesting proposal for HTTP-R.This is to provide a reliable
transport layer for SOAP messages.
SOAP Performance.One of the early objections to the use of SOAP has been per-
formance,especially when used as an alternative to CORBA and DCOM.However,the
more recent versions of SOAP servers have only slightly lower performance than RMI
(Remote Method Invocation).This is mainly due to two factors:the use of SAX (Simple
API for XML) rather than DOM (Document Object Model) to parse the messages and
optimization lessons learned from earlier product releases.The use of SAX parsers has
increased throughput,reduced memory overhead,and improved scalability.
Secure SOAP.It is commonly stated that SOAP is not secure.This is partly due to
the widely discussed fact that SOAP over HTTP passes through firewalls.However,the
SOAP 1.1 specification makes it clear that SOAP can be implemented over any suitable
transport protocol.An obvious example is HTTPS (although SOAP over SMTP is useful
for asynchronous messaging).
There are other standard security features that can be implemented immediately with
SOAP-based Web Services,such as authentication and authorization.These are well-
known and well-understood security models that are the best way to implement security
for Web Services.Other,XML-based security standards are under development,but are15
currently unproven and require new skill sets that may not be available.
SOAP and EAI.There has been considerable speculation that current Enterprise Ap-
plication Integration (EAI) products will be made redundant by SOAP.However,while
they may lose market share in some smaller projects,there is still a role for heavy-duty,
enterprise-grade EAI products that integrate with more obscure legacy systems or provide
unusually high qualities of service.What is more significant is that SOAP will encourage
and enable a whole new generation of EAI projects that were not previously possible due
to technical and cost constraints.But there will still be room on the market for other
integration products.
Criteria for Assessing SOAP Servers.There are already several commercial SOAP
servers and Web Services platforms available.Some of these also provide support for
WSDL and UDDI.When assessing the viability of SOAP servers,the key issues to address
include:
• Compliance with the standards
• Interoperability testing and XSLT support
• Support for Microsoft SOAP
• Support for Apache SOAP
• Security features
• Support for multiple transport layers
• Performance
4 Web Services Solution
In the previous section we mentioned some words about Web Services technology and
about place of SOAP within it.Now I want to consider this in more detailed way to give
some idea about how we can use flexibility of the SOAP on a practice.
What is the Web Service? With Web Services, information sources become components that you can use, re-use, mix, and match to enhance Internet and intranet applications ranging from a simple currency converter, stock quotes, or dictionary to an integrated, portal-based travel planner, procurement workflow system, or consolidated purchase processes across multiple sites. Each is described as a stack of layers, or in a narrative format. Each vendor, standards organization, or marketing research firm defines Web Services in a slightly different way. Gartner, for instance, defines Web Services as "loosely coupled software components that interact with one another dynamically via standard Internet technologies." Forrester Research takes a more open approach to Web Services as "automated connections between people, systems and applications that expose elements of business functionality as a software service and create new business value." For these reasons, the architecture of a Web Services stack varies from one organization to another.
The number and complexity of layers for the stack depend on the organization. Each stack requires Web Services interfaces to get a Web Services client to speak to an Application Server, or Middleware component, such as CORBA, J2EE, or .NET.
Web Services, at a basic level, can be considered as a universal client/server architecture that allows disparate systems to communicate with each other without using proprietary client libraries. We can point out that this architecture simplifies the development process typically associated with client/server applications by effectively eliminating code dependencies between client and server: the server interface information is disclosed to the client via a configuration file encoded in a standard format (e.g. WSDL). Doing so allows the server to publish a single file for all target client platforms.
Web Services stack of layers. Here we show the basic Web Services layers, taken from WebServices.Org.
Table 3. Web Services Stack.

Layer — Example
Workflow, discovery & registries — UDDI
Service description language — WSDL
Messaging — SOAP/XML protocol
Transport protocols — HTTP, HTTPS, SMTP

Workflow, Discovery, Registries. Web Services that can be exposed may, for example, be used in conjunction with Web Services for business-to-business (B2B) transactions in a complex EAI infrastructure under certain conditions. Web Services is still primarily an interfacing architecture, and needs an integration platform to which it is connected. Such an integration platform would cover the issue of integrating an installed base of applications that cannot work as Web Services yet.
The first release of UDDI's Business Registry became fully operational in May 2001, enabling businesses to register and discover Web Services via the Internet. Its original intent was to enable electronic catalogues in which businesses and services could be listed. The UDDI specification defines a way to publish and discover information about services.
In November 2001, the UDDI Business Registry v2 beta became publicly available. Hewlett Packard Company, IBM, Microsoft, and SAP launched beta implementations of their UDDI sites that conformed to the latest specification, including enhanced support for deploying public and private Web Service registries, and the interface (SOAP/HTTP API) that the client could use to interact with the registry server. In addition to the public UDDI Business Registry sites, enterprises can also deploy private registries on their intranet to manage internal Web Services using the UDDI specification. Access to internal Web Service information may also be extended to a private network of business partners.
Service Description Language. As you move further down the stack, you need WSDL to connect to a Web Service. This language is an XML format for describing network services. With it, service requesters can search for and find the information on services via UDDI, which, in turn, returns the WSDL reference that can be used to bind to the de- [...] -livery) or Standard Mail Transfer Protocol (SMTP). Then, each Web Service takes a ride over the Internet to provide a service requester with services or give a status report to a service provider or broker.
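To make the messaging and transport layers of Table 3 concrete, here is a minimal SOAP 1.1 request carried over HTTP; the host, method and namespaces are hypothetical and serve only to illustrate the envelope structure:

POST /StockQuote HTTP/1.1
Host: example.org
Content-Type: text/xml; charset="utf-8"
SOAPAction: "http://example.org/GetLastTradePrice"

<SOAP-ENV:Envelope xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/">
  <SOAP-ENV:Body>
    <m:GetLastTradePrice xmlns:m="http://example.org/stocks">
      <symbol>DIS</symbol>
    </m:GetLastTradePrice>
  </SOAP-ENV:Body>
</SOAP-ENV:Envelope>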
5 Conclusion
Now it is time to review what the paper has said. So, as we have seen, today developers have a very flexible (by its nature) tool such as SOAP and, as its implementation, Web Services technology, with which they can create distributed systems that interact through the Internet. It is necessary to note that these technologies are less complex and more flexible (in terms of integration) than previous ones such as CORBA, DCOM, etc.
The widespread acceptance of SOAP means that almost all platforms and applications can be expected to eventually provide SOAP interfaces. This means that Web Service platforms, with SOAP servers at their core, will need to evolve from their current role of providing adaptors to expose existing logic (by generating WSDL representations of the back-end logic). It has been suggested that the future of SOAP servers is as Web Services Platforms that will be a "design center" in their own right, where new Web Services are developed, hosted, and made available through UDDI.
AWS Developer Blog
Tuning the AWS SDK for Java to Improve Resiliency
In this blog post we will discuss why it’s important to protect your application from downstream service failures, offer advice for tuning configuration options in the SDK to fit the needs of your application, and introduce new configuration options that can help you set stricter SLAs on service calls.
Service failures are inevitable. Even AWS services, which are highly available and fault-tolerant, can have periods of increased latency or error rates. When there are problems in one of your downstream dependencies, latency increases, retries start, and generally API calls take longer to complete, if they complete at all. This can tie up connections, preventing other threads from using them, congest your application’s thread pool, and hold onto valuable system resources for a call or connection that may ultimately be doomed. If the AWS SDK for Java is not tuned correctly, then a single service dependency (even one that may not be critical to your application) can end up browning out or taking down your entire application. We will discuss techniques you can use to safeguard your application and show you how to find data to tune the SDK with the right settings.
Gathering Metrics
The metrics system in the AWS SDK for Java has several predefined metrics that give you insight into the performance of each of your AWS service dependencies. Metric data can be aggregated at the service level or per individual API action. There are several ways to enable the metrics system. In this post, we will take a programmatic approach on application startup.
To enable the metrics system, add the following lines to the startup code of your application.
AwsSdkMetrics.enableDefaultMetrics();
AwsSdkMetrics.setCredentialProvider(credentialsProvider);
AwsSdkMetrics.setMetricNameSpace("AdvancedConfigBlogPost");
Note: The metrics system is geared toward longer-lived applications. It uploads metric data to Amazon CloudWatch at one-minute intervals. If you are writing a simple program or test case to test-drive this feature, it may terminate before the metrics system has a chance to upload anything. If you aren’t seeing metrics in your test program, try adding a sleep interval of a couple of minutes before terminating to allow metrics to be sent to CloudWatch.
For more information about the features of the metrics system and other ways to enable it, see this blog post.
Interpreting Metrics to tune the SDK
After you have enabled the metrics system, the metrics will appear in the CloudWatch console under the namespace you’ve defined (in the preceding example, AdvancedConfigBlogPost).
Let’s take a look at the metrics one by one to see how the data can help us tune the SDK.
HttpClientGetConnectionTime: Time, in milliseconds, for the underlying HTTP client library to get a connection.
- Typically, the time it takes to establish a connection won’t vary in a service (that is, all APIs in the same service should have similar SLAs for establishing a connection). For this reason, it is valid to look at this metric aggregated across each AWS service.
- Use this metric to determine a reasonable value for the connection timeout setting in ClientConfiguration.
- The default value for this setting is 50 seconds, which is unreasonably high for most production applications, especially those hosted within AWS itself and making service calls to the same region. Connection latencies, on average, are on the order of milliseconds, not seconds.
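As a rough sketch (the one-second value below is an assumption; derive the real number from your own HttpClientGetConnectionTime data), tightening it is a one-line ClientConfiguration change:

// Hypothetical value: tune it from your observed connection-time metrics.
ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setConnectionTimeout(1 * 1000);
AmazonDynamoDBClient ddb = new AmazonDynamoDBClient(credentialsProvider, clientConfig);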
HttpClientPoolAvailableCount: Number of idle persistent connections of the underlying HTTP client. This metric is collected from the respective PoolStats before the connection of a request is obtained.
- A high number of idle connections is typical of applications that perform a batch of work at intervals. For example, consider an application that uploads all files in a directory to Amazon S3 every five minutes. When the application is uploading files, it’s creating several connections to S3 and then does nothing with the service for five minutes. The connections are left in the pool with nothing to do and will eventually become idle. If this is the case for your application, and there are constantly idle connections in the pool that aren’t serving a useful purpose, you can tune the connectionMaxIdleMillis setting and use the idle connection reaper (enabled by default) to more aggressively purge these connections from the pool.
- Setting the connectionMaxIdleMillis too low can result in having to establish connections more frequently, which can outweigh the benefits of freeing up system resources from idle connections. Take caution before acting on the data from this metric.
- If your application does have a bursty workload and you find that the cost of establishing connections is more damaging to performance than keeping idle connections, you can also increase the connectionMaxIdleMillis setting to allow the connections to persist between periods of work.
- Note: The connectionMaxIdleMillis will be limited to the Keep-Alive time specified by the service. For example if you set connectionMaxIdleMillis to five minutes but the service only keeps connections alive for sixty seconds, the SDK will still discard connections after sixty seconds when they are no longer usable.
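For illustration (the five-minute figure is an assumption matching the batch example above, not a recommendation), the idle-connection behavior is controlled with plain ClientConfiguration calls:

// Assumed values for a bursty workload that does a batch of work every five minutes.
ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setConnectionMaxIdleMillis(5 * 60 * 1000);
clientConfig.setUseReaper(true); // the idle connection reaper is enabled by default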
HttpClientPoolPendingCount: Number of connection requests being blocked while waiting for a free connection of the underlying HTTP client. This metric is collected from the respective PoolStats before the connection of a request is obtained.
- A high value for this metric can indicate a problem with your connection pool size or improper handling of service failures.
- If your usage of the client exceeds the ability of the default connection pool setting to satisfy your request, you can increase the size of the connection pool through this setting.
- Connection contention can also occur when a service is experiencing increased latency or error rates and the SDK is not tuned properly to handle it. Connections can quickly be tied up waiting for a response from a faulty server or waiting for retries per the configured retry policy. Increasing the connection pool size in this case might only make things worse by allowing the application to hog more threads trying to communicate with a service in duress. If you suspect this may be the case, look at the other metrics to see how you can tune the SDK to handle situations like this in a more robust way.
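If the metric points to an undersized pool rather than a service in duress, the pool size itself is another ClientConfiguration knob; the number below is only a placeholder:

// Placeholder value: size the pool from your observed concurrency, not from this number.
ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setMaxConnections(100); // the default is 50
AmazonDynamoDBClient ddb = new AmazonDynamoDBClient(credentialsProvider, clientConfig);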
HttpRequestTime: Number of milliseconds for a logical request/response round-trip to AWS. Captured on a per request-type level.
- This metric records the time it takes for a single HTTP request to a service. This metric can be recorded multiple times per operation, depending on the retry policy in use.
- We’ve recently added a new configuration setting that allows you to specify a timeout on each underlying HTTP request made by the client. The SLAs for requests between APIs or even per request can vary widely, so it’s important to use the provided metrics and consider the timeout setting carefully.
- This new setting can be specified per request or for the entire client (through ClientConfiguration). Although it’s hard to set a reasonable timeout on the client, it makes sense to set a default timeout on the client and override it per request, where needed.
- By default, this feature is disabled.
- Request timeouts are only supported in Java 7 and later.
Using the DynamoDB client as an example, let’s look at how you can use this new feature.
ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setRequestTimeout(20 * 1000);
AmazonDynamoDBClient ddb = new AmazonDynamoDBClient(credentialsProvider, clientConfig);

// Will inherit 20 second request timeout from client level setting
ddb.listTables();

// Request timeout overridden on the request level
ddb.listTables(new ListTablesRequest().withSdkRequestTimeout(10 * 1000));

// Turns off request timeout for this request
ddb.listTables(new ListTablesRequest().withSdkRequestTimeout(-1));
ClientExecuteTime: Total number of milliseconds for a request/response including the time to execute the request handlers, the round-trip to AWS, and the time to execute the response handlers. Captured on a per request-type level.
- This metric includes any time spent executing retries per the configured retry policy in ClientConfiguration.
- We have just launched a new feature that allows you to specify a timeout on the entire execution time, which matches up very closely to the ClientExecuteTime metric.
- This new timeout configuration setting can be set for the entire client in ClientConfiguration or per request.
- By default, this feature is disabled.
- Client execution timeouts are only supported in Java 7 and later.
Using the DynamoDB client as an example, let’s look at how you would enable a client execution timeout.
ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setClientExecutionTimeout(20 * 1000);
AmazonDynamoDBClient ddb = new AmazonDynamoDBClient(credentialsProvider, clientConfig);

// Will inherit 20 second client execution timeout from client level setting
ddb.listTables();

// Client Execution timeout overridden on the request level
ddb.listTables(new ListTablesRequest().withSdkClientExecutionTimeout(10 * 1000));

// Turns off client execution timeout for this request
ddb.listTables(new ListTablesRequest().withSdkClientExecutionTimeout(-1));
The new settings for request timeouts and client execution timeouts are complementary. Using them together is especially useful because you can use client execution timeouts to set harder limits on the API’s total SLA and use request timeouts to prevent one bad request from consuming too much of your total time to execute.
ClientConfiguration clientConfig = new ClientConfiguration();
clientConfig.setClientExecutionTimeout(20 * 1000);
clientConfig.setRequestTimeout(5 * 1000);

// Allow as many retries as possible until the client execution timeout expires
clientConfig.setMaxErrorRetry(Integer.MAX_VALUE);

AmazonDynamoDBClient ddb = new AmazonDynamoDBClient(credentialsProvider, clientConfig);

// Will inherit timeout settings from client configuration. Each HTTP request
// is allowed 5 seconds to complete and the SDK will retry as many times as
// possible (per the retry condition in the retry policy) within the 20 second
// client execution timeout
ddb.listTables();
Conclusion
Configuring the SDK with aggressive timeouts and appropriately sized connection pools goes a long way toward protecting your application from downstream service failures, but it's not the whole story. There are many techniques you can apply on top of the SDK to limit the negative effects of a dependency's outage on your application. Hystrix is an open source library specifically designed to make fault-tolerance easier and your application even more resilient. To use Hystrix to its fullest potential, you'll need some data to tune it to match your actual service SLAs in your environment. The metrics we discussed in this blog post can give you that information. Hystrix also has an embedded metrics system that can complement the SDK's metrics.
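As a rough sketch of that idea (the class name, group key, and choice of DynamoDB here are hypothetical, illustrating only the general shape of a Hystrix wrapper rather than a prescribed integration):

// Hypothetical wrapper: isolates ListTables calls behind a Hystrix circuit breaker.
public class ListTablesCommand extends HystrixCommand<ListTablesResult> {
    private final AmazonDynamoDBClient ddb;

    public ListTablesCommand(AmazonDynamoDBClient ddb) {
        super(HystrixCommandGroupKey.Factory.asKey("DynamoDB"));
        this.ddb = ddb;
    }

    @Override
    protected ListTablesResult run() {
        // SDK-level request and client execution timeouts still apply inside the command.
        return ddb.listTables();
    }
}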
We would love your feedback on the configuration options and metrics provided by the SDK and what you would like to see in the future. Do we provide enough settings and hooks to allow you to tune your application for optimal performance? Do we provide too many settings and make configuring the SDK overwhelming? Does the SDK provide you with enough information to intelligently handle service failures?
The copilot-theorem package
Some tools to prove properties on Copilot programs with k-induction model checking.
Copilot Theorem
Highly automated proof techniques are a necessary step for the widespread adoption of formal methods in the software industry. Moreover, it could provide a partial answer to one of its main issue which is scalability.
copilot-theorem is a Copilot library aimed at automatically checking some safety properties of Copilot programs. It includes:
A general interface for provers and a proof scheme mechanism aimed at splitting the task of proving a complex property into checking a sequence of smaller lemmas.
A prover implementing basic k-induction model checking [1], useful for proving basic k-inductive properties and for pedagogical purposes.
A prover producing native inputs for the Kind2 model checker, developed at University of Iowa. The latter uses both the k-induction algorithm extended with path compression and structural abstraction [2] and the IC3 algorithm with counterexample generalization based on approximate quantifier elimination [3].
A Tutorial
Installation instructions
copilot-theorem needs the following dependencies to be installed:
- The copilot-core and copilot-language Haskell libraries
- The Yices2 SMT-solver: yices-smt2 must be in your $PATH
- The Z3 SMT-solver: z3 must be in your $PATH
- The Kind2 model checker: kind2 must be in your $PATH
To build it, just clone this repository and use
cabal install. You will find
some examples in the
examples folder, which can be built with
cabal install
too, producing an executable
copilot-theorem-example in your
.cabal/bin
folder.
First steps
copilot-theorem is aimed at checking safety properties on Copilot programs. Intuitively, a safety property is a property which expresses the idea that nothing bad can happen. In particular, any invalid safety property can be disproved by a finite execution trace of the program called a counterexample. Safety properties are often opposed to liveness properties, which express the idea that something good will eventually happen. The latter are out of the scope of copilot-theorem.
Safety properties are simply expressed with standard boolean streams. In
addition to triggers and observers declarations, it is possible to bind a
boolean stream to a property name with the
prop construct in the
specification.
For instance, here is a straightforward specification declaring one property:
spec :: Spec
spec = do
  prop "gt0" (x > 0)
  where
    x = [1] ++ (1 + x)
Let's say we want to check that
gt0 holds. For this, we use the
prove ::
Prover -> ProofScheme -> Spec -> IO () function exported by
Copilot.Theorem.
This function takes three arguments:
- The prover we want to use. For now, two provers are available, exported by the Copilot.Theorem.Light and Copilot.Theorem.Kind2 modules.
- A proof scheme, which is a sequence of instructions like check, assume, assert...
- The Copilot specification
Here, we can just write
prove (lightProver def) (check "gt0") spec
where
lightProver def stands for the light prover with default
configuration.
The Prover interface
The
Copilot.Theorem.Prover defines a general interface for provers. Therefore,
it is really easy to add a new prover by creating a new object of type
Prover. The latter is defined like this:
data Cex = Cex

type Infos = [String]

data Output = Output Status Infos
data Status = Valid | Invalid (Maybe Cex) | Unknown | Error

data Feature = GiveCex | HandleAssumptions

data Prover = forall r . Prover
  { proverName  :: String
  , hasFeature  :: Feature -> Bool
  , startProver :: Core.Spec -> IO r
  , askProver   :: r -> [PropId] -> PropId -> IO Output
  , closeProver :: r -> IO ()
  }
Each prover mostly has to provide an askProver function which takes as arguments
- The prover descriptor
- A list of assumptions
- A conclusion
and checks if the assumptions logically entail the conclusion.
Two provers are provided by default:
Light and
Kind2.
The light prover
The light prover is a really simple prover which uses the Yices SMT solver
with the
QF_UFLIA theory and is limited to proving k-inductive properties,
that is properties such that there exists some k such that:
- The property holds during the first k steps of the algorithm.
- From the hypothesis that the property has held during k consecutive steps, we can prove it is still true one step further.
For instance, in this example
spec :: Spec
spec = do
  prop "gt0" (x > 0)
  prop "neq0" (x /= 0)
  where
    x = [1] ++ (1 + x)
the property
"gt0" is inductive (1-inductive) but the property
"neq0" is
not.
The light prover is defined in
Copilot.Theorem.Light. This module exports the
lightProver :: Options -> Prover function which builds a prover from a record
of type
Options :
data Options = Options
  { kTimeout  :: Integer
  , onlyBmc   :: Bool
  , debugMode :: Bool
  }
Here,
- kTimeout is the maximum number of steps of the k-induction algorithm the prover executes before giving up.
- If onlyBmc is set to True, the prover will only search for counterexamples and won't try to prove the properties discharged to it.
- If debugMode is set to True, the SMTLib queries produced by the prover are displayed in the standard output.
Options is an instance of the
Data.Default typeclass:
instance Default Options where
  def = Options
    { kTimeout  = 100
    , debugMode = False
    , onlyBmc   = False }
Therefore,
def stands for the default configuration.
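As a quick illustration (a sketch only, assuming the spec and the "gt0" property from the earlier example are in scope), the defaults can be overridden with ordinary record-update syntax:

main :: IO ()
main = prove (lightProver def { kTimeout = 20, debugMode = True })
             (check "gt0")
             spec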
The Kind2 prover
The Kind2 prover uses the model checker with the same name, from Iowa
university. It translates the Copilot specification into a modular transition
system (the Kind2 native format) and then calls the
kind2 executable.
It is provided by the
Copilot.Theorem.Kind2 module, which exports a
kind2Prover
:: Options -> Prover where the
Options type is defined as
data Options = Options { bmcMax :: Int }
and where
bmcMax corresponds to the
--bmc_max option of kind2 and is
equivalent to the
kTimeout option of the light prover. Its default value is
0, which stands for infinity.
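For example (a sketch; the record is built explicitly since it is not documented whether this Options type has a Default instance), a Kind2 prover bounded at 50 BMC steps could be constructed as:

-- If the light prover's Options is also in scope, import Copilot.Theorem.Kind2
-- qualified to disambiguate the record name.
kind2Bounded :: Prover
kind2Bounded = kind2Prover Options { bmcMax = 50 }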
Combining provers
The
combine :: Prover -> Prover -> Prover function lets you merge two provers
A and B into a prover C which launches both A and B and returns the most
precise output. It would be interesting to implement other merging behaviours
in the future. For instance, a lazy one such that C launches B only if A has
returned unknown or an error.
As an example, the following prover is used in
Driver.hs:
prover = lightProver def {onlyBmc = True, kTimeout = 5} `combine` kind2Prover def
We will discuss the internals and the experimental results of these provers later.
Proof schemes
Let's consider again this example:
spec :: Spec
spec = do
  prop "gt0" (x > 0)
  prop "neq0" (x /= 0)
  where
    x = [1] ++ (1 + x)
and let's say we want to prove
"neq0". Currently, the two available solvers
fail at showing this non-inductive property (we will discuss this limitation
later). Therefore, we can prove the more general inductive lemma
"gt0" and
deduce our main goal from this. For this, we use the proof scheme
assert "gt0" >> check "neq0"
instead of just
check "neq0". A proof scheme is chain of primitives schemes
glued by the
>> operator. For now, the available primitives are:
check "prop"checks whether or not a given property is true in the current context.
assume "prop"adds an assumption in the current context.
assert "prop"is a shortcut for
check "prop" >> assume "prop".
assuming :: [PropId] -> ProofScheme -> ProofSchemeis such that
assuming props schemeassumes the list of properties props, executes the proof scheme scheme in this context, and forgets the assumptions.
msg "..."displays a string in the standard output
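For instance (a sketch reusing the property names from the example above), these primitives compose into slightly larger schemes:

scheme :: ProofScheme
scheme = msg "proving the counter properties"
      >> assuming ["gt0"] (check "neq0")
      >> check "gt0"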
We will discuss the limitations of this tool and a way to use it in practice later.
Some examples
Some examples are in the examples folder. The
Driver.hs contains the
main
function to run any example. Each other example file exports a specification
spec and a proof scheme
scheme. You can change the example being run just
by changing one import directive in
Driver.hs.
These examples include:
Incr.hs: a straightforward example in the style of the previous one.
Grey.hs: an example where two different implementations of a periodical counter are shown to be equivalent.
BoyerMoore.hs: a certified version of the majority vote algorithm introduced in the Copilot tutorial.
SerialBoyerMoore.hs: a serial version of the first step of the Boyer Moore algorithm, where a new element is added to the list and the majority candidate is updated at each clock tick. See the section Limitations related to the SMT solvers for an analysis of this example.
Technical details
An introduction to SMT-based model checking
An introduction to the model-checking techniques used by copilot-theorem can be
found in the
doc folder of this repository. It consists of a self-sufficient
set of slides. You can find some additional readings in the References
section.
Architecture of copilot-theorem
An overview of the proving process
Each prover first translates the Copilot specification into an intermediate representation best suited for model checking. Two representations are available:
The IL format: a Copilot program is translated into a list of quantifier-free equations over integer sequences, implicitly universally quantified by a free variable n. Each sequence roughly corresponds to a stream. This format is the one used in G. Hagen's thesis [4]. The light prover works with this format.
The TransSys format: a Copilot program is flattened and translated into a state transition system [1]. Moreover, in order to keep some structure in this representation, the variables of this system are grouped by nodes, each node exporting and importing variables. The Kind2 prover uses this format, which can be easily translated into the native format.
For each of these formats, there is a folder in
src/Copilot/Theorem which
contains at least
- Spec.hs, where the format is defined
- PrettyPrint.hs, for pretty printing (useful for debugging)
- Translate.hs, where the translation process from Core.Spec is defined.
These three formats share a simplified set of types and operators, defined
respectively in
Misc.Type and
Misc.Operator.
An example
The following program:
spec = do prop "pos" (fib > 0) where fib :: Stream Word64 fib = [1, 1] ++ (fib + drop 1 fib)
can be translated into this IL specification:
SEQUENCES
  s0 : Int
MODEL INIT
  s0[0] = 1
  s0[1] = 1
MODEL REC
  s0[n + 2] = s0[n] + s0[n + 1]
PROPERTIES
  'pos' : s0[n] > 0
or this modular transition system:
NODE 's0' DEPENDS ON []
DEFINES
  out : Int =
    1 -> pre out.1
  out.1 : Int =
    1 -> pre out.2
  out.2 : Int =
    (out) + (out.1)

NODE 'prop-pos' DEPENDS ON [s0]
IMPORTS
  (s0 : out) as 's0.out'
  (s0 : out.1) as 's0.out.1'
  (s0 : out.2) as 's0.out.2'
DEFINES
  out : Bool =
    (s0.out) > (0)

NODE 'top' DEPENDS ON [prop-pos, s0]
IMPORTS
  (prop-pos : out) as 'pos'
  (s0 : out) as 's0.out'
  (s0 : out.1) as 's0.out.1'
  (s0 : out.2) as 's0.out.2'

PROPS
  'pos' is (top : pos)
Note that the names of the streams are lost in the Copilot reification process [7] and so we have no way to keep them.
Types
In these three formats, GADTs are used to statically ensure a part of the
type-correctness of the specification, in the same spirit as is done in the
other Copilot libraries. copilot-theorem handles only three types which are
Integer,
Real and
Bool and which are handled by the SMTLib standard.
copilot-theorem works with pure reals and integers. Thus, it is unsafe in the
sense it ignores integer overflow problems and the loss of precision due to
floating point arithmetic.
The rules of translation between Copilot types and copilot-theorem types are
defined in
Misc/Cast.
Operators
The operators provided by
Misc.Operator mostly consist of boolean
connectors, linear operators, equality and inequality operators. If other
operators are used in the Copilot program, they are handled using
non-determinism or uninterpreted functions.
The file
CoreUtils/Operators contains helper functions to translate Copilot
operators into copilot-theorem operators.
The Light prover
As said in the tutorial, the light prover is a simple tool implementing the
basic k-induction algorithm [1]. The
Light directory contains three files:
Prover.hs: the prover and the k-induction algorithm are implemented in this file.
SMT.hscontains some functions to interact with the Yices SMT provers.
SMTLib.hsis a set of functions to output SMTLib directives. It uses the
Misc.SExprmodule to deal with S-expressions.
The code is both concise and simple and should be worth a look.
The prover first translates the copilot specification into the IL format.
This translation is implemented in
IL.Translate. It is straightforward as the
IL format does not differ a lot from the copilot core format. This is the
case because the reification process has transformed the copilot program such
that the
++ operator only occurs at the top of a stream definition.
Therefore, each stream definition directly gives us a recurrence equation and
initial conditions for the associated sequence.
The translation process mostly:
- converts the types and operators, using uninterpreted functions to handle non-linear operators and external functions.
- creates a sequence for each stream, local stream and external stream.
The reader is invited to use the light prover on the examples with
debugMode
= true, in order to have a look at the SMTLib code produced. For instance, if
we check the property
"pos" on the previous example involving the Fibonacci
sequence, we get:
<step> (set-logic QF_UFLIA)
<step> (declare-fun n () Int)
<step> (declare-fun s0 (Int) Int)
<step> (assert (= (s0 (+ n 2)) (+ (s0 (+ n 0)) (s0 (+ n 1)))))
<step> (assert (= (s0 (+ n 3)) (+ (s0 (+ n 1)) (s0 (+ n 2)))))
<step> (assert (> (s0 (+ n 0)) 0))
<step> (push 1)
<step> (assert (or false (not (> (s0 (+ n 1)) 0))))
<step> (check-sat)
<step> (pop 1)
<step> (assert (= (s0 (+ n 4)) (+ (s0 (+ n 2)) (s0 (+ n 3)))))
<step> (assert (> (s0 (+ n 1)) 0))
<step> (push 1)
<step> (assert (or false (not (> (s0 (+ n 2)) 0))))
<step> (check-sat)
unsat
<step> (pop 1)
Here, we just kept the outputs related to the
<step> solver, which is the
solver trying to prove the continuation step.
You can see that the SMT solver is used in an incremental way (
push and
pop
instructions), so we don't need to restart it at each step of the algorithm
(see [2]).
The Kind2 prover
The Kind2 prover first translates the copilot specification into a modular
transition system. Then, a chain of transformations is applied to this system
(for instance, in order to remove dependency cycles among nodes). After this,
the system is translated into the Kind2 native format and the
kind2
executable is launched. The following sections will bring more details about
this process.
Modular transition systems
Let's look at the definition of a modular transition system, in the
TransSys.Spec module:
type NodeId = String
type PropId = String

data Spec = Spec
  { specNodes     :: [Node]
  , specTopNodeId :: NodeId
  , specProps     :: Map PropId ExtVar }

data Node = Node
  { nodeId           :: NodeId
  , nodeDependencies :: [NodeId]
  , nodeLocalVars    :: Map Var LVarDescr
  , nodeImportedVars :: Bimap Var ExtVar
  , nodeConstrs      :: [Expr Bool] }

data Var = Var {varName :: String} deriving (Eq, Show, Ord)
data ExtVar = ExtVar {extVarNode :: NodeId, extVarLocalPart :: Var } deriving (Eq, Ord)

data VarDescr = forall t . VarDescr
  { varType :: Type t
  , varDef  :: VarDef t }

data VarDef t = Pre t Var | Expr (Expr t) | Constrs [Expr Bool]

data Expr t where
  Const :: Type t -> t -> Expr t
  Ite   :: Type t -> Expr Bool -> Expr t -> Expr t -> Expr t
  Op1   :: Type t -> Op1 x t -> Expr x -> Expr t
  Op2   :: Type t -> Op2 x y t -> Expr x -> Expr y -> Expr t
  VarE  :: Type t -> Var -> Expr t
A transition system (
Spec type) is mostly made of a list of nodes. A node
is just a set of variables living in a local namespace and corresponding to the
Var type. The
ExtVar type is used to identify a variable in the global
namespace by specifying both a node name and a variable. A node contains two
types of variables:
- Some variables imported from other nodes. The structure nodeImportedVars binds each imported variable to its local name. The set of nodes from which a node imports some variables is stored in the nodeDependencies field.
- Some locally defined variables contained in the nodeLocalVars field. Such a variable can be
  - Defined as the previous value of another variable (Pre constructor of VarDef)
  - Defined by an expression involving other variables (Expr constructor)
  - Defined implicitly by a set of constraints (Constrs constructor)
The translation process
First, a copilot specification is translated into a modular transition system.
This process is defined in the
TransSys.Translate module. Each stream is
associated to a node. The most significant task of this translation process is
to flatten the copilot specification so the value of all streams at time n
only depends on the values of all the streams at time n - 1, which is not the
case in the
Fib example shown earlier. This is done by a simple program
transformation which turns this:
fib = [1, 1] ++ (fib + drop 1 fib)
into this:
fib0 = [1] ++ fib1
fib1 = [1] ++ (fib1 + fib0)
and then into the node
NODE 'fib' DEPENDS ON []
DEFINES
  out : Int =
    1 -> pre out.1
  out.1 : Int =
    1 -> pre out.2
  out.2 : Int =
    (out) + (out.1)
Once again, this flattening process is made easier by the fact that the
++
operator only occurs leftmost in a stream definition after the reification
process.
Some transformations over modular transition systems
The transition system obtained by the
TransSys.Translate module is perfectly
consistent. However, it can't be directly translated into the Kind2 native
file format. Indeed, it is natural to bind each node to a predicate but the
Kind2 file format requires that each predicate only uses previously defined
predicates. However, some nodes in our transition system could be mutually
recursive. Therefore, the goal of the
removeCycles :: Spec -> Spec function
defined in
TransSys.Transform is to remove such dependency cycles.
This function relies on the
mergeNodes :: [NodeId] -> Spec -> Spec function
whose signature is self-explanatory. The latter solves name conflicts by using the
Misc.Renaming monad. Some code complexity has been added so the variable
names remain as clear as possible after merging two nodes.
The function
removeCycles computes the strongly connected components of the
dependency graph and merges each one into a single node. The complexity of this
process is high in the worst case (the square of the total size of the system
times the size of the biggest node) but good in practice as few nodes are to be
merged in most practical cases.
After the cycles have been removed, it is useful to apply another
transformation which makes the translation from
TransSys.Spec to
Kind2.AST
easier. This transformation is implemented in the
complete function. In a
nutshell, it transforms a system such that
- If a node depends on another, it imports all its variables.
- The dependency graph is transitive, that is, if A depends on B which depends on C, then A depends on C.
After this transformation, the translation from
TransSys.Spec to
Kind2.AST
is almost only a matter of syntax.
Bonus track
Thanks to the
mergeNodes function, we can get for free the function
inline :: Spec -> Spec
inline spec = mergeNodes [nodeId n | n <- specNodes spec] spec
which discards all the structure of a modular transition system and turns it
into a non-modular transition system with only one node. In fact, when
translating a copilot specification to a kind2 file, two styles are available:
the
Kind2.toKind2 function takes a
Style argument which can take the value
Inlined or
Modular. The only difference is that in the first case, a call
to
removeCycles is replaced by a call to
inline.
Limitations of copilot-theorem
Now, we will discuss some limitations of the copilot-theorem tool. These limitations are organized in two categories: the limitations related to the Copilot language itself and its implementation, and the limitations related to the model-checking techniques we are using.
Limitations related to Copilot implementation
The reification process used to build the
Core.Spec object loses a lot of information about the structure of the original Copilot program. In fact, a
stream is kept in the reified program only if it is recursively defined.
Otherwise, all its occurrences will be inlined. Moreover, let's look at the
intCounter function defined in the example
Grey.hs:
intCounter :: Stream Bool -> Stream Word64
intCounter reset = time
  where
    time = if reset then 0
           else [0] ++ if time == 3 then 0 else time + 1
If n counters are created with this function, the same code will be inlined n times and the structure of the original code will be lost.
There are many problems with this:
- It makes some optimizations of the model-checking based on a static analysis of the program more difficult (for instance structural abstraction - see [2]).
- It makes the inputs given to the SMT solvers larger and repetitive.
We can't rewrite the Copilot reification process in order to avoid these inconveniences, as this information is lost by GHC itself before reification occurs. The only solution we can see would be to use Template Haskell to generate some structural annotations automatically, which might not be worth the dirt introduced.
Limitations related to the model-checking techniques used
Limitations of the IC3 algorithm
The IC3 algorithm was shown to be a very powerful tool for hardware certification. However, the problems encountered when verifying software are much more complex. For now, very few non-inductive properties can be proved by Kind2 when basic integer arithmetic is involved.
The critical point of the IC3 algorithm is the counterexample generalization and the lemma tightening parts of it. When encountering a counterexample to the inductiveness (CTI) for a property, these techniques are used to find a lemma discarding it which is general enough so that all CTIs can be discarded in a finite number of steps.
The lemmas found by the current version of Kind2 are often too weak. Some suggestions to enhance this are presented in [1]. We hope some progress will be made in this area in the near future.
A workaround to this problem would be to write a kind of interactive mode where the user is invited to provide some additional lemmas when automatic techniques fail. Another solution would be to make the properties being checked quasi-inductive by hand. In this case, copilot-theorem is still a useful tool (especially for finding bugs), but the verification of a program can be long and requires a high level of technical skill.
Limitations related to the SMT solvers
The use of SMT solvers introduces two kinds of limitations:
- We are limited by the computing power needed by the SMT solvers
- SMT solvers can't handle quantifiers efficiently
Let's consider the first point. SMT solving is costly and its performance is
sometimes unpredictable. For instance, when running the
SerialBoyerMoore
example with the light prover, Yices2 does not terminate. However, the z3
SMT solver used by Kind2 solves the problem instantaneously. Note that this
performance gap is not due to the use of the IC3 algorithm because the property
to check is inductive. It could be related to the fact the SMT problem produced
by the light prover uses uninterpreted functions for encoding streams instead
of simple integer variables, which is the case when the copilot program is
translated into a transition system. However, this wouldn't explain why the
light prover still terminates instantaneously on the
BoyerMoore example,
which hardly seems any simpler.
The second point keeps you from expressing or proving some properties
universally quantified over a stream or a constant. Sometimes, this is still
possible. For instance, in the
Grey example, as we check a property like
intCounter reset == greyCounter reset with
reset an external stream
(therefore totally unconstrained), we kind of show a universally quantified
property. This fact could be used to enhance the proof scheme system (see the
Future work section). However, this trick is not always possible. For
instance, in the
SerialBoyerMoore example, the property being checked should
be quantified over all integer constants. Here, we can't just introduce an
arbitrary constant stream because it is the quantified property which is
inductive and not the property specialized for a given constant stream. That's
why we have no other solution than replacing universal quantification by
bounded universal quantification by assuming all the elements of the input
stream are in the finite list
allowed and using the function
forAllCst
defined in
Copilot.Kind.Lib:
conj :: [Stream Bool] -> Stream Bool
conj = foldl (&&) true

forAllCst :: (Typed a) => [a] -> (Stream a -> Stream Bool) -> Stream Bool
forAllCst l f = conj $ map (f . constant) l
However, this solution isn't completely satisfying because the size of the
property generated is proportional to the cardinality of
allowed.
Some scalability issues
A standard way to prove a large program is to rely on its logical structure by writing a specification for each of its functions. This very natural approach is hard to follow in our case because of
- The difficulty of dealing with universal quantification.
- The lack of true functions in Copilot (the latter offers metaprogramming facilities but no concept of functions like Lustre does with its nodes).
- The inlining policy of the reification process. This point is related to the previous one.
Once again, copilot-theorem is still a very useful tool, especially for debugging purposes. However, we don't think it is adapted to write and check a complete specification for large scale programs.
Future work
Missing features in the Kind2 prover
These features are not currently provided due to the lack of important features in the Kind2 SMT solver.
Counterexample display
Counterexamples are not displayed with the Kind2 prover because Kind2 doesn't
support XML output of counterexamples. If the last feature is provided, it
should be easy to implement counterexample display in copilot-theorem. For this, we recommend keeping some information about observers in
TransSys.Spec and to add one variable per observer in the Kind2 output file.
Bad handling of non-linear operators and external functions
Non-linear Copilot operators and external functions are poorly handled because of the lack of support of uninterpreted functions in the Kind2 native format. A good way to handle these would be to use uninterpreted functions. With this solution, properties like
2 * sin x + 1 <= 3
with
x any stream can't be proven but at least the following can be proved
let y = x in sin x == sin y
Currently, the Kind2 prover fails with the last example, as the results of unknown functions are turned into fresh unconstrained variables.
Simple extensions
The following extensions would be really simple to implement given the current architecture of Kind2.
If inductive proving of a property fails, giving the user a concrete CTI (Counterexample To The Inductiveness, see [1]).
Use Template Haskell to declare automatically some observers with the same names used in the original program.
Refactoring suggestions
Implement a cleaner way to deal with arbitrary streams and arbitrary constants by extending the
Copilot.Core.Expr type. See the
Copilot.Kind.Lib module to observe how inelegant the current solution is.
Use Cnub as an intermediary step in the translation from Core.Spec to IL.Spec and TransSys.Spec.
More advanced enhancements
Enhance the proof scheme system such that when proving a property depending on an arbitrary stream, it is possible to assume some specialized versions of this property for given values of the arbitrary stream. In other words, implementing a basic way to deal with universal quantification.
It could be useful to extend the Copilot language in a way it is possible to use annotations inside the Copilot code. For instance, we could
- Declare assumptions and invariants next to the associated code instead of gathering all properties in a single place.
- Declare a frequent code pattern which should be factorized in the transition problem (see the section about Copilot limitations)
Why does the light prover not deliver counterexamples?
The problem is that the light prover is using uninterpreted functions to represent streams, and Yices2 can't give you values for uninterpreted functions when you ask it for a valid assignment. Maybe we could get better performance and easy counterexample display if we rewrote the light prover so that it works with transition systems instead of IL.
Why does the code related to transition systems look so complex?
It is true the code of
TransSys is quite complex. In fact, it would be really
straightforward to produce a flattened transition system and then a Kind2 file
with just a single top predicate. In fact, it would be as easy as producing
an IL specification.
To be honest, I'm not sure producing a modular Kind2 output is worth the complexity added. It's especially true at the time I write this in the sense that:
- Each predicate introduced is used only one time (which is true because copilot doesn't handle functions or parametrized streams like Lustre does and everything is inlined during the reification process).
- A similar form of structure could be obtained from a flattened Kind2 native input file with some basic static analysis by producing a dependency graph between variables.
- For now, the Kind2 model-checker ignores these structure informations.
However, the current code offers some nice transformation tools (node merging,
Renaming monad...) which could be useful if you intend to write a tool for
simplifying or factorizing transition systems. Moreover, it becomes easier to
write local transformations on transition systems as name conflicts can be
avoided more easily when introducing more variables, as there is one namespace
per node.
References
[1] An insight into SMT-based model checking techniques for formal software verification of synchronous dataflow programs, talk, Jonathan Laurent (see the doc folder of this repository)
[2] Scaling up the formal verification of Lustre programs with SMT-based techniques, G. Hagen, C. Tinelli
[3] SMT-based Unbounded Model Checking with IC3 and Approximate Quantifier Elimination, C. Sticksel, C. Tinelli
[4] Verifying safety properties of Lustre programs: an SMT-based approach, PhD thesis, G. Hagen
[5] Understanding IC3, Aaron R. Bradley
[6] IC3: Where Monolithic and Incremental Meet, F. Somenzi, A.R. Bradley
[7] Copilot: Monitoring Embedded Systems, L. Pike, N. Wegmann, S. Niller
mona is a Javascript library for easily writing reusable, composable parsers.
It makes parsing complex grammars easy and fun!
With
mona, you simply write some Javascript functions that parse small pieces
of text and return any Javascript value, and then you glue them together into
big, intricate parsers using
combinators to... combine them! No custom syntax
or separate files or separate command line tools to run: you can integrate this
into your regular JS app.
It even makes it really really easy to give excellent error messages, including line and column numbers, and messages with what was expected, with little to no effort.
New parsers are hella easy to write -- give it a shot! And if you're familiar with Parsec, then you've come to the right place. :)
parse
parseAsync
value
bind
fail
label
token
eof
delay
log
map
tag
lookAhead
is
isNot
and
or
maybe
not
unless
sequence
join
followedBy
split
splitEnd
collect
exactly
between
skip
range
stringOf
oneOf
noneOf
string
alphaUpper
alphaLower
alpha
digit
alphanum
space
spaces
text
trim
trimLeft
trimRight
eol
natural
integer
real
float
cardinal
ordinal
shortOrdinal
$ npm install mona
You can directly require
mona through your module loader of choice, or you can
use the prebuilt UMD versions found in the
browser/ directory:
var mona = require('mona')
import mona from 'mona'
define(['node_modules/mona/browser/mona'], function (mona) { ... })
<script src=/js/node_modules/mona/browser/mona.min.js></script>
(Example results: a comma-separated integer list parser yields [1, 2, 3, 49829, 49, 139], and a small CSV parser yields [['foo', 'bar'], ['b"az', 'quux']].)
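As a small stand-in for those examples (a sketch written against the API documented below, not the original code), a comma-separated integer list parser could look like this:

var mona = require('mona')

// Parses one or more integers separated by commas into an array of numbers.
function integerList () {
  return mona.split(mona.integer(), mona.string(','), {min: 1})
}

mona.parse(integerList(), '1,2,3,49829,49,139')
// => [1, 2, 3, 49829, 49, 139]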
mona is a package composed of multiple other packages, re-exported through a
single module. You have the option of installing
mona from npm directly, or
installing any of the subpackages and using those independently.
This API section is organized such that each parser or function is listed under the subpackage it belongs to, along with the name of the npm package you can find it in.
@mona/parse
This module or one of its siblings is needed in order to actually execute defined parsers. Currently, it exports only a single function: a synchronous parser runner.
> parse(parser, string[, opts]) -> T
Synchronously executes a parser on a given string, and returns the resulting value.
{Parser<T>} parser- The parser to execute.
{String} string- String to parse.
{Opts} [opts]- Options object.
{Boolean} [opts.throwOnError=true]- If truthy, throws a ParserError if the parser fails and returns ParserState instead of its value.
{String} [opts.fileName]- filename to use for error messages.
mona.parse(mona.token(), 'a') // => 'a'
mona.parse(mona.integer(), '123') // => 123
@mona/parse-async
This module exports only a single function: an asynchronous parser runner. You need this module or something similar in order to actually execute your parsers.
> parseAsync(parser, callback[, opts]) -> Handle
Executes a parser asynchronously, returning an object that can be used to manage the parser state.
You can feed new data into the parsing process by calling the returned handle's
#data() method. Unless the parser given tries to match
eof(), parsing will
continue until the handle's
#done() method is called.
{Function} parser- The parser to execute.
{AsyncParserCallback} callback- node-style 2-arg callback executed once per successful application of
parser.
{Object} [opts]- Options object.
{String} [opts.fileName]- filename to use for error messages.
var handle = mona.parseAsync(mona.token(), function (err, token) {
  console.log('Got a token:', token)
})
handle.data('foo')
// logs:
// > Got a token: f
// > Got a token: o
// > Got a token: o
@mona/core
The core parser package contains essential and dev-utility parsers that are
intended to be the core of the rest of the parser libraries. Some of these are
very low level, such as
bind(). Others are not necessarily meant to be used in
production, but can help with debugging, such as
log().
> value(val) -> Parser<T>
Always succeeds with
val as its value, without consuming any input.
{T} val- value to use as this parser's value.
// => 'foo'
> bind(parser, fun) -> Parser<U>
Calls
fun on the value from
parser. Fails without executing
fun if
parser fails.
{Parser<T>} parser - The parser to execute.
{Function(Parser<T>) -> Parser<U>} fun - Function called with the resulting
value of
parser.
// => 'a!'
> fail([msg[, type]]) -> Parser<Fail>
Always fails without consuming input. Automatically includes the line and column
positions in the final
ParserError.
{String} [msg='parser error']- Message to report with the failure.
{String} [type='failure']- A type to apply to the ParserError.
> label(parser, msg) -> Parser<T>
Label a
parser failure by replacing its error messages with
msg.
{Parser<T>} parser- Parser whose errors to replace.
{String} msg- Error message to replace errors with.
// => unexpected eof// => expected thing
> token([count]) -> Parser<String>
Consumes a single item from the input, or fails with an unexpected eof error if there is no input left.
{Integer} [count=1]- number of tokens to consume. Must be > 0.
// => 'a'
> eof() -> Parser<true>
Succeeds with a value of
true if there is no more input to consume.
// => true
> delay(constructor, ...args) -> Parser<T>
Delays calling of a parser constructor function until parse-time. Useful for recursive parsers that would otherwise blow the stack at construction time.
{Function(...T) -> Parser<T>} constructor- A function that returns a Parser.
{...T} args- Arguments to apply to the constructor.
// The following would usually result in an infinite loop:{return}// But you can use delay() to remedy this...{return}
> log(parser, label[, level]) -> Parser<T>
Logs the
ParserState resulting from
parser with a
label.
{Parser<T>} parser- Parser to wrap.
{String} tag- Tag to use when logging messages.
{String} [level='log']- 'log', 'info', 'debug', 'warn', 'error'.
> map(fun, parser) -> Parser<T>
Transforms the resulting value of a successful application of its given parser.
This function is a lot like
bind, except it always succeeds if its parser
succeeds, and is expected to return a transformed value, instead of another
parser.
{Function(U) -> T} transformer- Function called on
parser's value. Its return value will be used as the
mapparser's value.
{Parser<U>} parser- Parser that will yield the input value.
// => 1234.5
> tag(parser, tag) -> Parser<Object<T>>
Results in an object with a single key whose value is the result of the given parser. This can be useful for when you want to build ASTs or otherwise do some tagged tree structure.
{Parser<T>} parser- Parser whose value will be tagged.
{String} tag- String to use as the object's key.
// => {myToken: 'a'}
> lookAhead(parser) -> Parser<T>
Runs a given parser without consuming input, while still returning a success or failure.
{Parser<T>} parser- Parser to execute.
// => 'a'
> is(predicate[, parser]) -> Parser<T>
Succeeds if
predicate returns a truthy value when called on
parser's result.
{Function(T) -> Boolean} predicate- Tests a parser's result.
{Parser<T>} [parser=token()]- Parser to run.
// => 'a'
> isNot(predicate[, parser]) -> Parser<T>
Succeeds if
predicate returns a falsy value when called on
parser's result.
{Function(T) -> Boolean} predicate- Tests a parser's result.
{Parser<T>} [parser=token()]- Parser to run.
// => 'b'
@mona/combinators
Parser combinators are at the very core of what makes something like mona shine: They are, themselves, parsers, but they are intended to accept other parsers as arguments, that they will then use to do whatever job they're doing.
Combinators do just that: They combine parsers. They act as the glue that lets you take all those individual parsers that you wrote, and combine them into increasingly more intricate parsers.
This package contains things like
collect(),
split(), and the
or()/
and()
pair.
> and(...parsers, lastParser) -> Parser<T>
Succeeds if all the parsers given to it succeed, using the value of the last executed parser as its return value.
{...Parser<*>} parsers- Parsers to execute.
{Parser<T>} lastParser- Parser whose result is returned.
// => 'b'
> or(...parsers[, label]) -> Parser<T>
Succeeds if one of the parsers given to it succeeds, using the value of the first successful parser as its result.
{...Parser<T,*>} parsers- Parsers to execute.
{String} [label]- Label to replace the full message with.
// => 'bar'
> maybe(parser) -> Parser<T> | Parser<undefined>
Returns the result of
parser if it succeeds, otherwise succeeds with a value
of
undefined without consuming any input.
{Parser<T>} parser- Parser to try.
// => 'a'// => undefined
> not(parser) -> Parser<undefined>
Succeeds if
parser fails. Does not consume.
{Parser<*>} parser- parser to test.
// => 'b'
> unless(notParser, ...moreParsers, lastParser) -> Parser<T>
Works like
and, but fails if the first parser given to it succeeds. Like
and, it returns the value of the last successful parser.
{Parser<*>} notParser- If this parser succeeds,
unlesswill fail.
{...Parser} moreParsers- Rest of the parses to test.
{Parser<T>} lastParser- Parser whose value to return.
// => 'b'
> sequence(fun) -> Parser<T>
Put simply, this parser provides a way to write complex parsers while letting
your code look like regular procedural code. You just wrap your parsers with
s(), and the rest of your code can be sequential. If the description seems
confusing, see the example.
This parser executes
fun while handling the
parserState internally, allowing
the body of
fun to be written sequentially. The purpose of this parser is to
simulate
do notation and prevent the need for heavily-nested
bind calls.
The
fun callback will receive a function
s which should be called with
each parser that will be executed, which will update the internal
parserState. The return value of the callback must be a parser.
If any of the parsers fail, sequence will exit immediately, and the entire sequence will fail with that parser's reason.
{Function -> Parser<T>} fun- A sequence callback function to execute.
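A hedged sketch of the do-notation style described above (same assumed mona import; value() is assumed to be mona's core parser for wrapping a plain result):
var ab = mona.sequence(function (s) {
  var a = s(mona.string('a'))
  var b = s(mona.string('b'))
  return mona.value(a + b)
})
mona.parse(ab, 'ab')
// => 'ab'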
> join(...parsers) -> Parser<Array<T>>
Succeeds if all the parsers given to it succeed, and results in an array of all the resulting values, in order.
{...Parser<T>} parsers- One or more parsers to execute.
// => ['a', 1]
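A sketch combining join() with parsers documented below (same assumed import):
mona.parse(mona.join(mona.string('a'), mona.integer()), 'a1')
// => ['a', 1]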
> followedBy(parser, ...moreParsers) -> Parser<T>
Returns the result of its first parser if it succeeds, but fails if any of the following parsers fail.
{Parser<T>} parser - The value of this parser is returned if it succeeds.
{...Parser<*>} moreParsers - These parsers must succeed in order for
followedBy to succeed.
// => 'a'
// => expected {a}
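A sketch of the success case (same assumed import); if any of the trailing parsers failed, the whole parse would fail instead:
mona.parse(mona.followedBy(mona.string('a'), mona.string('b')), 'ab')
// => 'a'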
> split(parser, separator[, opts]) -> Parser<Array<T>>
Results in an array of successful results of
parser, divided by the
separator parser.
{Parser<T>} parser- Parser for matching and collecting results.
{Parser<U>} separator- Parser for the separator
{Opts} [opts]- Optional options for controlling min/max.
{Integer} [opts.min=0]- Minimum length of the resulting array.
{Integer} [opts.max=Infinity]- Maximum length of the resulting array.
// => ['a','b','c','d']
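A sketch for comma-separated input (same assumed import):
mona.parse(mona.split(mona.alpha(), mona.string(',')), 'a,b,c,d')
// => ['a', 'b', 'c', 'd']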
> splitEnd(parser, separator[, opts]) -> Parser<Array<T>>
Results in an array of results that have been successfully parsed by
parser,
separated and ended by
separator.
{Parser<T>} parser- Parser for matching and collecting results.
{Parser<U>} separator- Parser for the separator
{Boolean} [opts.enforceEnd=true]- If true,
separatormust be at the end of the parse.
{Integer} [opts.min=0]- Minimum length of the resulting array.
{Integer} [opts.max=Infinity]- Maximum length of the resulting array.
// => ['a', 'b', 'c']
> collect(parser[, opts]) -> Parser<Array<T>>
Results in an array of
min to
max number of matches of
parser
{Parser<T>} parser- Parser to match.
{Integer} [opts.min=0]- Minimum number of matches.
{Integer} [opts.max=Infinity]- Maximum number of matches.
// => ['a', 'b', 'c', 'd']
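A sketch with the same assumed import:
mona.parse(mona.collect(mona.token()), 'abcd')
// => ['a', 'b', 'c', 'd']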
> exactly(parser, n) -> Parser<Array<T>>
Results in an array of exactly
n results for
parser.
{Parser<T>} parser- The parser to collect results for.
{Integer} n- exact number of results to collect.
// => ['a', 'b', 'c', 'd']
> between(open, close, parser) -> Parser<V>
Results in a value between an opening and closing parser.
{Parser<T>} open- Opening parser.
{Parser<U>} close- Closing parser.
{Parser<V>} parser- Parser to return the value of.
// => 'a'
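A sketch for a parenthesized value (same assumed import):
mona.parse(mona.between(mona.string('('), mona.string(')'), mona.alpha()), '(a)')
// => 'a'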
> skip(parser) -> Parser<undefined>
Skips input until
parser stops matching.
{Parser<T>} parser- Determines whether to continue skipping.
// => 'b'
> range(start, end[, parser[, predicate]]) -> Parser<T>
Accepts a parser if its result is within range of
start and
end.
{*} start- lower bound of the range to accept.
{*} end- higher bound of the range to accept.
{Parser<T>} [parser=token()]- parser whose results to test
{Function(T) -> Boolean} [predicate=function(x,y){return x<=y }]- Tests range
// => 'd'
@mona/strings
This package is intended as a collection of string-related parsers. That is, parsers that specifically return string-related data or somehow match and manipulate strings themselves.
Here, you'll find the likes of
string() (the exact-string matching parser),
spaces(), and
trim().
> stringOf(parser) -> Parser<String>
Results in a string containing the concatenated results of applying
parser.
parser must be a combinator that returns an array of string parse results.
{Parser<Array<String>>} parser- Parser whose result to concatenate.
// => 'aaa'
> oneOf(matches[, caseSensitive]) -> Parser<String>
Succeeds if the next token or string matches one of the given inputs.
{String|Array<String>} matches- Characters or strings to match. If this argument is a string, it will be treated as if matches.split('') were passed in.
{Boolean} [caseSensitive=true]- Whether to match char case exactly.
// => 'c'// => 'bar'
> noneOf(matches[, caseSensitive[, other]]) -> Parser<T>
Fails if the next token or string matches one of the given inputs. If the third
parser argument is given, that parser will be used to collect the actual value
of
noneOf.
{String|Array} matches- Characters or strings to match. If this argument is a string, it will be treated as if matches.split('') were passed in.
{Boolean} [caseSensitive=true]- Whether to match char case exactly.
{Parser<T>} [other=token()]- What to actually parse if none of the given matches succeed.
// => 'd'// => 'f'// => 'frob'
> string(str[, caseSensitive]) -> Parser<String>
Succeeds if
str matches the next
str.length inputs,
consuming the string and returning it as a value.
{String} str- String to match against.
{Boolean} [caseSensitive=true]- Whether to match char case exactly.
// => 'foo'
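A sketch with the same assumed import:
mona.parse(mona.string('foo'), 'foo')
// => 'foo'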
> alphaUpper() -> Parser<String>
Matches a single non-unicode uppercase alphabetical character.
// => 'D'
> alphaLower() -> Parser<String>
Matches a single non-unicode lowercase alphabetical character.
// => 'd'
> alpha() -> Parser<String>
Matches a single non-unicode alphabetical character.
// => 'd'// => 'D'
> digit(base) -> Parser<String>
Parses a single digit character token from the input.
{Integer} [base=10]- Optional base for the digit.
// => '5'
> alphanum(base) -> Parser<String>
Matches an alphanumeric character.
{Integer} [base=10]- Optional base for numeric parsing.
// => '1'// => 'a'// => 'A'
> space() -> Parser<String>
Matches one whitespace character.
// => '\r'
> spaces() -> Parser<String>
Matches one or more whitespace characters. Returns a single space character as its result, regardless of which whitespace characters and how many were matched.
// => ' '
> text([parser[, opts]]) -> Parser<String>
Collects between
min and
max number of matches for
parser. The result is
returned as a single string. This parser is essentially
collect() for strings.
{Parser<String>} [parser=token()]- Parser to use to collect the results.
{Object} [opts]- Options to control match count.
{Integer} [opts.min=0]- Minimum number of matches.
{Integer} [opts.max=Infinity]- Maximum number of matches.
// => 'abcde'
// => 'bcde'
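A hedged sketch of the default case above (same assumed import; text() falls back to token() when no parser is given):
mona.parse(mona.text(), 'abcde')
// => 'abcde'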
> trim(parser) -> Parser<T>
Trims any whitespace surrounding
parser, and returns
parser's result.
{Parser<T>} parser- Parser to match after cleaning up whitespace.
// => 'a'
> trimLeft(parser) -> Parser<T>
Trims any leading whitespace before
parser, and returns
parser's result.
{Parser<T>} parser- Parser to match after cleaning up whitespace.
// => 'a'
> trimRight(parser) -> Parser<T>
Trims any trailing whitespace before
parser, and returns
parser's result.
{Parser} parser- Parser to match after cleaning up whitespace.
// => 'a'
> eol() -> Parser<String>
Parses the end of a line.
// => '\n'
@mona/numbers
If you ever need a parser that will take strings and turn them into the numbers
you want them to be, this is the place to look. Parsers in this package include
integer(),
float(), and
ordinal() (which parses English ordinals (
first,
second,
third) into numbers).
> natural(base) -> Parser<Integer>
Matches a natural number. That is, a number without a positive/negative sign or decimal places, and returns a positive integer.
{Integer} [base=10]- Base to use when parsing the number.
// => 1234
> integer(base) -> Parser<Integer>
Matches an integer, with an optional + or - sign.
{Integer} [base=10]- Base to use when parsing the integer.
// => -1234
> real() -> Parser<Float>
Parses a floating point number.
// => -1.234e-7
> cardinal() -> Parser<Integer>
Parses English cardinal numbers into their numerical counterparts.
// => 2000
> ordinal() -> Parser<Integer>
Parses English ordinal numbers into their numerical counterparts.
// => 100005
> shortOrdinal() -> Parser<Integer>
Parses shorthand English ordinal numbers into their numerical counterparts. Optionally allows you to skip the strict suffix check and let any apparent ordinal through.
{Boolean} [strict=true]- Whether to accept only appropriate suffixes for each number (if false, 2th parses to 2).
// => 5
Fails if there's nothing left to consume.
Simply creating a parser is not enough to execute a parser, though. We need to use the parse function to actually execute the parser on an input string:
// => 'foo'
// => throws an exception
// => 'a'
// => error, unexpected eof.
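The example code for these calls was lost in extraction; a hedged reconstruction of parser/input pairs that would plausibly produce the outputs above (same assumed mona import):
mona.parse(mona.string('foo'), 'foo')   // => 'foo'
mona.parse(mona.string('foo'), 'bar')   // => throws an exception
mona.parse(mona.token(), 'a')           // => 'a'
mona.parse(mona.token(), '')            // => error, unexpected eof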
|
https://www.npmjs.com/package/mona
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
Duncan Epping recently wrote an article about VMware-related acronyms and shared with the community the history and origins of some of the acronyms (e.g. ESX, GSX). After reading Duncan's article, I realized the acronym list primarily focused on product names and features. It did not cover some of the API-related acronyms that I was curious about. If you have worked with the VMware APIs, SDKs or vmkernel, vmkwarning, hostd, vpxa, vpxd, etc. logs you may have noticed some of these acronyms and wondered what they actually stood for.
Here is my compiled list of VMware API related acronyms that I have been able to dig up by looking at these various locations:
DMS: Data Management Service
EAM: ESX Agent Manager
FDM: Fault Domain Manager
HMO: Host Managed Object
LS: License Service
MOB: Managed Object Browser
MOR: Managed Object Reference
OMS: Operation Management Service
RBD: Rule Based Deployment (Auto Deploy)
SMS: Storage Monitoring Service
SPS: Storage Policy Service
STOREMON: Storage Utilization Monitoring
VCI: (vci-integrity deals with VMware Update Manager, but no info on acronym)
VMAM: VM Application Monitoring
VMODL: VMware Managed Object Design Language
VMOMI: VMware Managed Object Management Interface
VOD: VPX Operational Dashboard
VOB/VOBD: VMkernel Observation
VORB: VMOMI Object Request Broker Library/Application
VSM: vService Manager
VWS: VMware Web Services
WOE: Workflow Orchestration Engine
Disclaimer: These namespaces may change without notice as they are part of the internal VMware APIs. These are not publicly documented by VMware and should not be relied on other than for informative purposes.
There was also a recent VMTN question regarding the following two licensed features that are displayed for a given ESX(i) host and it was not clear what they were used for:
Here are the actual names and purposes:
dpvmotion: Direct Path vMotion (VM DirectPath I/O)
dvfilter: vNetwork Appliance API (dvfilter was an internal name)
The following are not VMware acronyms, but you may see these from time to time if you work with the vSphere API/SDKs:
API: Application Programming Interface
SDK: Software Development Kit
WSDL: Web Services Definition Language
SOAP: Simple Object Access Protocol
I would also like to thank both Steve Jin and Reuben Stump from VMware in helping me track down the last few acronyms that I could not figure out.
If there are any additional acronyms that I missed or ones I should add to the list, please feel free to leave me a comment.
3 thoughts on “VMware API Related Acronyms”
Hi,
I was discussing this on another thread (I'm ferdis there). Does dpvmotion: Direct Path vMotion (VM DirectPath I/O) mean that now, with 4.1, a VM with DirectPath configured can be migrated with vMotion?
|
https://www.virtuallyghetto.com/2010/08/vmware-api-related-acronyms.html
|
CC-MAIN-2018-09
|
en
|
refinedweb
|
#include <boost/math/distributions/exponential.hpp>
template <class RealType = double,
          class Policy   = policies::policy<> >
class exponential_distribution;

typedef exponential_distribution<> exponential;

template <class RealType, class Policy>
class exponential_distribution
{
public:
   typedef RealType value_type;
   typedef Policy   policy_type;

   exponential_distribution(RealType lambda = 1);

   RealType lambda()const;
};
The exponential distribution is a continuous probability distribution with PDF f(x; λ) = λ e^(−λx), for x ≥ 0 and rate parameter λ > 0.
It is often used to model the time between independent events that happen at a constant average rate.
The following graph shows how the distribution changes for different values of the rate parameter lambda:
exponential_distribution(RealType lambda = 1);
Constructs an Exponential distribution with parameter lambda. Lambda is defined as the reciprocal of the scale parameter.
Requires lambda > 0, otherwise calls domain_error.
RealType lambda()const;
Accessor function returns the lambda parameter of the distribution.
The exponential distribution is implemented in terms of the standard library functions exp, log, log1p and expm1, and as such should have very low error rates.
In the following table λ is the parameter lambda of the distribution, x is the random variate, p is the probability and q = 1-p.
(See also the reference documentation for the related Extreme Distributions.)
|
http://www.boost.org/doc/libs/1_48_0/libs/math/doc/sf_and_dist/html/math_toolkit/dist/dist_ref/dists/exp_dist.html
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
This C program checks if a given integer is odd or even. If the number is divisible by 2 with a remainder of 0, then it is an even number; if it is not divisible by 2, it is an odd number.
Here is the source code of the C program, which checks whether a given integer is odd or even. The C program is successfully compiled and run on a Linux system. The program output is also shown below.
/*
* C program to check whether a given integer is odd or even
*/
#include <stdio.h>
int main(void)
{
int ival, remainder;
printf("Enter an integer : ");
scanf("%d", &ival);
remainder = ival % 2;
if (remainder == 0)
printf("%d is an even integer\n", ival);
else
printf("%d is an odd integer\n", ival);
return 0;
}
$ cc pgm4.c
$ a.out
Enter an integer : 100
100 is an even integer
$ a.out
Enter an integer : 105
105 is an odd integer
|
http://www.sanfoundry.com/c-program-integer-odd-or-even/
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
A fussy little man in impeccable black jacket and pinstripe trousers.
Mr Bent is a framework for allowing profile data to be collected in a Python application and viewed at different logical levels. The three concepts involved are a plugin, a piece of code that profiles an application, a context, a logical block of code for which you want reporting data, and a filter, a way of getting fine-grained information on where the results for a context came from.
Plugins are callables that are given to the mkwrapper function, which applies it to a function in your application.
This looks like:
mr.bent.wrapper.mkwrapper(foo.bar, plugincallable, "myplugin")
Which will cause plugincallable to be called on every invocation of foo.bar and add the results of the plugin to the current context as myplugin.
Plugins can return either a number or an iterable. If it returns an iterable it must contain either strings or numbers. The case of returning a number is considered equivalent to returning an iterable of length 1 of numbers.
A context stores data generated by plugins. At any point a new context can be started which will be a “sub-context” of the currently active context. If there is no currently active context a new top-level one will be created.
Contexts are named with the dotted name of the function that they are created around, and return their data to a callback.
This looks like:
def mycallback(context, result, stats): return "%s <!-- %s -->" % (result, `stats`) mr.bent.wrapper.mkcontext(bar.foo, mycallback)
This example would cause invocations of bar.foo, a function that returns XML, to return the XML with a repr of the context dict in a following comment.
When a context ends it returns a mapping of the data it collected. As contexts are nested each time parent contexts include the data of their sub-contexts. Hence, the top level context returns the overall profiling; there is no need to manually aggregate data.
A filter is, like most things in Mr. Bent, a wrapper around a function. The filter's name defaults to the dotted name of the callable, but an alternative, application-specific name can be used instead. This is especially useful for a function that is used to render multiple different logical blocks of content.
This looks like:
mr.bent.wrapper.mkfilter(take.me.to.the.foo.bar)
In this example we have an application that renders a page of HTML including fragments that are logically different files which are then included into the main page.
Example 1:
.-------------. | Top level | `-------------' | | .--------------------. |---------| Left hand column | | `--------------------' | | | | .-------------. | |--------------| Login box | | | `-------------' | | | | | | .------------------. | `--------------| Navigation box | | `------------------' | | .-----------------. |---------| Content block | | `-----------------' | | | .---------------------. `---------| Right hand column | `---------------------' | | .----------------. `--------------| Calendar box | `----------------'
In this system we have the following notional plugins (with short names for brevity):
The return values may look something like this:
{'t': [5, 15, 85, 25], 'd': [0, 1, 2, 8]} .-------------. | Top level | `-------------' | {'t': [5, 15], 'd': [0,1]} | .--------------------. |---------| Left hand column | | `--------------------' | | {'t': [5], 'd': [0]} | | .-------------. | |--------------| Login box | | | `-------------' | | | | {'t': [15], 'd': [1]} | | .------------------. | `--------------| Navigation box | | `------------------' | {'t': [85], 'd': [2]} | .-----------------. |---------| Content block | | `-----------------' | | {'t': [25], 'd': [8]} | .---------------------. `---------| Right hand column | `---------------------' | {'t': [25], 'd': [8]} | .----------------. `--------------| Calendar box | `----------------'
Hence, the user has data at each level he has defined which he can then process as he likes.
Let's see that again as a doctest (sorry Florian!):
>>> from mr.bent.mavolio import create, destroy, current >>> create("top") # Create the top level context >>> create("lefthand") # Create the left hand column >>> create("login") # and the login portlet >>> current() # show that it's an empty context {} >>> current()['t'] = [5] # Simulate plugin results being added to context >>> current()['d'] = [0] >>> destroy() # Leave context {'t': [5], 'd': [0]} >>> create("nav") # Create nav >>> current()['t']=[15] >>> current()['d']=[1] >>> destroy() # Leave nav {'t': [15], 'd': [1]} >>> destroy() # Leave left hand column {'t': [5, 15], 'd': [0, 1]} >>> create("content") # Enter content block >>> current()['t'] = [85] >>> current()['d'] = [2] >>> destroy() # Leave content block {'t': [85], 'd': [2]} >>> create("righthand") # Enter right hand column >>> create("cal") # Enter calendar box >>> current()['t']=[25] >>> current()['d']=[8] >>> destroy() # Leave calendar {'t': [25], 'd': [8]} >>> destroy() # Leave right hand column {'t': [25], 'd': [8]} >>> destroy() # Leave the top level context, get totals {'t': [5, 15, 85, 25], 'd': [0, 1, 2, 8]}
Utility Methods
Low level.
|
https://pypi.org/project/mr.bent/
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
During the development of a new application, there is almost always a need for components with specific features in order to meet the requirements of a project, or just to make things more natural and comfortable to use. Because of this, you will either extend your component from an existing one or create one from scratch. In both cases, it can bring you a lot of headaches and, for a while, divert you from your main goal of development. An example of such a problem is the customization of the tabbed pane (JTabbedPane) to give it the functionality of closing open tabs.
JTabbedPane
You can solve this problem by adding a button to the tab content, which often ends up conflicting with the content and not looking good at all, or by extending the tabbed pane to have the close button embedded in the title bar of each tab. For the second choice, a comparably simple solution can be the usage of a component for tab title rendering. The component can be set for every tab (setTabComponentAt(index, component)) and you need to implement it to show the title and close button.
setTabComponentAt(index, component )
An alternative solution can be the usage of this extension of JTabbedPane that adds not only the functionality of closable tabs, but also gives the ability to control the event of tab closing.
The usage of the code is very simple; just use ClosableTabbedPane instead of JTabbedPane.
ClosableTabbedPane
// Implementation with close event handling
public class TabFrame extends JFrame {
private ClosableTabbedPane tabbedPane;
public TabFrame() {
tabbedPane = new ClosableTabbedPane() {
public boolean tabAboutToClose(int tabIndex) {
String tab = tabbedPane.getTabTitleAt(tabIndex);
int choice = JOptionPane.showConfirmDialog(null,
"You are about to close '" +
tab + "'\nDo you want to proceed ?",
"Confirmation Dialog",
JOptionPane.INFORMATION_MESSAGE);
return choice == 0;
// if returned false tab
// closing will be canceled
}
};
getContentPane().add(tabbedPane);
  }
}
// Simple implementation
public class TabFrame extends JFrame {
private ClosableTabbedPane tabbedPane;
public TabFrame() {
tabbedPane = new ClosableTabbedPane() ;
getContentPane().add(tabbedPane);
}
}
In the first implementation, you can see that the tabAboutToClose(tabIndex) function of ClosableTabbedPane is overridden to control the event of tab closing. The returned true/false value indicates whether the tab should be closed or not. Here you can do your processing before closing the tab.
tabAboutToClose(tabIndex)
true/false
If you do not need to control the closing event, just use the second code.
The interesting part of this project is that it was written without diving into the tabbed pane implementation, while achieving the desired functionality.
This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
|
http://www.codeproject.com/Articles/18496/JTabbedPane-with-Closing-Tabs?msg=3767342
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
Properties
Properties are a central concept in .NET languages and a foundational principle when building classes. They have several advanced features over simple field variables, and their use is integrated in many ways throughout class interaction. Understanding how properties work, and more importantly how properties can make your application smarter, friendlier and easier to code, is vital to advancing your trade-craft as a software developer.
Why is this important to learn about?
The professional coding community uses properties extensively. It's a primary concept. It's how classes communicate with each other. If your code doesn't use them then it's going to be a real headache to interact with other classes.
Properties
Properties can best be thought of as advanced variables. You call and use them like variables. From the caller's point of view
public string Name;
and
public string Name { get; set; }
are exactly the same thing. "So why do I care about properties?" is probably your first thought. Because, as mentioned, they are an advanced construct. In reality they are a wrapper around a get and a set method. Let me say that again: Properties are a wrapper around two methods. And as we know, methods are executable code that can do things and make comparisons and decisions. Simple variables can't do anything.
In its simplest syntax a property can be autoset, meaning it has no backing field variable (more on this in a sec, so don't panic). The Name property above is an autoset property. It has a get and a set method that contain nothing. It will act exactly like any other variable. But that's like using a Ferrari to get groceries: It's underutilizing the tools at your disposal.
The set method within the property uses an important keyword: value
That is the value being passed in. You can work with that value within the set method. The only condition on this is that as soon as you start doing these things you have to have a backing variable for the property. The value has to be stored someplace while the methods do their job. Let's see it in action so it becomes clear.
private int _Age = 0;
public int Age
{
    get { return _Age; }
    set
    {
        if (value < 0)
            value = 0;
        _Age = value;
    }
}
In our example Age is the property, _Age is the backing variariable.1
If you had a simple int variable for someone's age your code would have to validate it everyplace you used it. A typical program might refer to the Age 100 times. What if somewhere in all that our calculation screwed up? Do we really want to have to qualify every calculation every time? Worse yet, what if there are new rules put into place about validating age: We would have to scrub through thousands of lines of code and copy/paste the new rules into the 100 places we do it. YUCK!
For example, what if the age was accidentally set to a negative number? It's legal for an int to be -4, but that's not a valid age. But the set method executes on incoming values. We can make an age property that does this for us one time, in one place, so we don't have to validate the age in 100 places.
void SomeMethod()
{
    // Do a bunch of fancy stuff and get information from a database.
    // In the course of our work we update the Age property.
    Age = OurNewCalculatedValue;
}
Let's assume that after a faulty calculation OurNewCalculatedValue is -18 instead of positive 18. Well, -18 is certainly not a valid age. Let's walk through how our property will validate and overcome wrong values being passed in:
Age = OurNewCalculatedValue; // is therefore
Age = -18; // Sending -18 as the value to the set method within the Age property.
    set
    {
        if (value < 0)   // {which is} if (-18 < 0) so yeah, this is true
            value = 0;   // set the value to 0
        _Age = value;    // set our backing field variable to our corrected value
    }
}
What happens most of the time when the original calculation doesn't screw up? Let's assume that after a correct calculation OurNewCalculatedValue is positive 21: A perfectly valid age
    set
    {
        if (value < 0)   // {which is} if (21 < 0) nope, not the case
            value = 0;   // so this line doesn't execute
        _Age = value;    // set our backing value to the passed in value of 21
    }
}
So if only for the capability of centralizing the validation of values properties are well worth the time. But wait! There's more! And this one is a doozy. There is an interface called INotifyPropertyChanged. We love this! The short description is: This how a property can raise an event telling any subscriber that it's value has been changed. Picture this very common situation:
You have a window and on it are various controls with information about a persion: Name, age, address, phone number etc. When a user inputs a change you want the property to update. By the same token if the age updates from an programmatic source (like a database update) you want the GUI to update automatically. This is exactly the same as say a calculator. When the math is done the Answer property is updated and you want that to be shown on screen. Or you are writting a chat program and the Message property updates when a new message is received.... Or you are watching a USB weather station and the Temperature gets a new value so you want that on-screen.
This is pretty much how any program should work: The work class does the work and the GUI acts as an interface between the user and the work.
You could do this:
private int _Age = 0;
public int Age
{
    get { return _Age; }
    set
    {
        if (value < 0)
            value = 0;
        _Age = value;
        lbl_Age.Text = value.ToString();
    }
}
// and
void lbl_Age_TextChanged(object sender, EventArgs e)
{
    Age = Convert.ToInt32(lbl_Age.Text);
}
But that violates a prime rule of OOP: One class shouldn't know more than is necessary about another class. AND your data shouldn't be tightly bound to your GUI. The problem here becomes if a designer changes the layout or names of the controls on the GUI then the data class breaks because it is calling those controls directly. And the controls would have to be made public making them vulnerable to unintentional changes... and... and... and...
So instead we want to just raise up a notice (an event) that the data has been updated. The data class doesn't know or care who's listening: That's the responsiblity of the listener to update itself. Not the responsibility of a pure data class to fix up the GUI.
All we have to do is bind the value of the GUI control to the property, then tell the property to raise a PropertyChanged event when it gets set
WPF example:
In the C# code behind we
- Line 9: add a using statement for System.ComponentModel because that's where interface INotifyPropertyChanged lives.
- line 16: Add INotifyPropertyChanged to our class signature
- Lines 40-48: Implement the events for INotifyPropertyChanged
That stuff is just one time, to get our class prepped. You don't do that for every property.
- Line 33: When the property gets changed, raise an event saying so.
using System.Collections.Generic;
using System.ComponentModel;
using System.Windows;

namespace DicTutorial
{
    public partial class MainWindow : Window, INotifyPropertyChanged
    {
        // Constructor method (Note lack of return type and name matches the name of the class)
        public MainWindow()
        {
            InitializeComponent();
        }

        #region Property Tutorial
        private int _Age = 0;
        public int Age
        {
            get { return _Age; }
            set
            {
                if (value < 0)
                    value = 0;
                _Age = value;
                RaisePropertyChanged("Age");
            }
        }
        #endregion

        #region INotifyPropertyChanged Members
        public event PropertyChangedEventHandler PropertyChanged;

        private void RaisePropertyChanged(string propertyName)
        {
            if (PropertyChanged != null)
                PropertyChanged(this, e: new PropertyChangedEventArgs(propertyName));
        }
        #endregion
    }
}
Then on our WPF window we bind the .Value of the control to the Age property, specifying two-way binding. That way when the property is changed the GUI is updated, and when the GUI is changed the property is updated.
Now that all the groundwork has been laid we can add as many easy-to-bind properties as we like. For a contact card we might add Name, address and so on to our existing Age property. One of my most common uses is to give the user status updates on the status bar at the bottom of the window.
And all we had to do was add one more property that reports when it has a change.
private string _Status;
public string Status
{
    get { return _Status; }
    set
    {
        if (_Status == value)
            return;
        _Status = value;
        RaisePropertyChanged("Status");
    }
}
Now we can give the user status updates easily with a single line from any method in our application.
bool DownloadData()
{
    // Fancy download code
    Status = "Download complete";
    return true;
}

void ReceiveNewTemperatureFromWeatherStation(int NewTemp)
{
    WriteNewTempToDatabase(NewTemp);
    Status = string.Format("New Temp: {0}f", NewTemp);
}

void ExitApplication()
{
    Status = "Cleaning up before close";
    SaveUnsavedData();
}
In Conclusion
There are even more advanced features of properties. But this tutorial is already getting pretty long. We've covered the basics about what properties are and even some intermediate stuff about cool ways to use them. Hopefully this will get you thinking, and using properties as much as possible.
See all the C# Learning Series tutorials here!
Footnotes
1Naming convention:
|
http://www.dreamincode.net/forums/topic/299601-c%23-learning-series-properties/page__pid__1743192__st__0&#entry1743192?s=665f7dea2455f5e0ce4e7f1bdfeb4b3b
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
The make_solution() function uses several mini-templates to construct different parts of the .sln file. Here are the templates. The names pretty much explain the purpose of each template. The templates use the same principle as the project templates and are just segments of text with placeholders for substitution values that the make_solution() function populates with the proper values and weaves together:
# A project template has a header and a list of project sections # such as ProjectDependencies. The ProjectDependencies # duplicate the dependency information in .vcproj files in VS2005. # For generating a solution that contains only C++ projects, no other # project section is needed. project_template_with_dependencies = """\ Project("${TypeGUID}") = "${Name}", "${Filename}", "${GUID}" ProjectSection(ProjectDependencies) = postProject ${ProjectDependencies} EndProjectSection EndProject """ project_template_without_dependencies = """\ Project("${TypeGUID}") = "${Name}", "${Filename}", "${GUID}" EndProject """ project_configuration_platform_template = """\ \t\t${GUID}.Debug|Win32.ActiveCfg = Debug|Win32 \t\t${GUID}.Debug|Win32.Build.0 = Debug|Win32 \t\t${GUID}.Release|Win32.ActiveCfg = Release|Win32 \t\t${GUID}.Release|Win32.Build.0 = Release|Win32 """ # This is the solution template for VS 2008 # The template arguments are: # # Projects # ProjectConfigurationPlatforms # NestedProjects # solution_template = """ Microsoft Visual Studio Solution File, Format Version 10.00 # Visual Studio 2008 ${Projects} Global \tGlobalSection(SolutionConfigurationPlatforms) = preSolution \t\tDebug|Win32 = Debug|Win32 \t\tRelease|Win32 = Release|Win32 \tEndGlobalSection \tGlobalSection(ProjectConfigurationPlatforms) = postSolution ${Configurations} \tEndGlobalSection \tGlobalSection(SolutionProperties) = preSolution \t\tHideSolutionNode = FALSE \tEndGlobalSection \tGlobalSection(NestedProjects) = preSolution ${NestedProjects} \tEndGlobalSection EndGlobal """
Whoever designed the Visual Studio build system was big on GUIDs. Almost every object is identified by a GUID including the types of various solution items like folders and projects:
# GUIDs for a regular project and a solution folder
project_type = '{8BC9CEB8-8B4A-11D0-8D11-00A0C91BC942}'
folder_type = '{2150E333-8FDC-42A3-9474-1A3956D46DE8}'
The SolutionItem class itself is really just a named tuple, but since ibs is not limited to Python 2.6 and up (when named tuples were introduced into the language) I use a dedicated class:
class SolutionItem(object): """Represents a solution folder or project The set of solution projects contain all the information necessary to generate a solution file. name - the name of the project/folder type - folder_type or project_type path - the relative path from the root dir to the .vcproj file for projects, same as name for folders guid - the GUID of the project/folder dependencies - A list of project guids the project depends on. It is empty for folders and projects with no dependencies. projects - list of projects hosted by the folder. It is empty for projects. """ def __init__(self, item_type, name, path, guid, dependencies, projects): title() self.name = name self.type = item_type self.path = path self.guid = guid self.dependencies = dependencies self.projects = projects
The make_solution() takes the source directory and the folders list to generate the solution file using a bunch of nested functions.
def make_solution(source_dir, folders): """Return a string representing the .sln file It uses a lot of nested functions to make the different parts of a solution file: - make_project_dependencies - make_projects - make_configurations - make nested_projects @param folders - a dictionary whose keys are VS folders and the values are the projects each folder contains. Each project must be an object that has a directory path (relative to the root dir), a guid and a list of dependencies (each dependency is another projects). This directory should contain a .vcproj file whose name matches the directory name. @param projects - a list of projects that don't have a folder and are contained directly by the solution node. """
The get_existing_folders() nested function takes and existing .sln file and extracts the GUIDs of every project in it. It returns a dictionary of project names and GUIDs that can be used to regenerate a .sln file with identical GUIDs to the existing ones.
def get_existing_folders(sln_filename): title() lines = open(sln_filename).readlines() results = {} for line in lines: if line.startswith('Project("{2150E333-8FDC-42A3-9474-1A3956D46DE8}") ='): tokens = line.split('"') print tokens name = tokens[-4] guid = tokens[-2] results[name] = guid return results
The make_project_dependencies() nested function takes a list of dependency GUIDs of a project and returns the text fragment that is the ProjectDependencies sub-section of this project in the .sln file.
def make_project_dependencies(dependency_guids):
    title()
    if dependency_guids == []:
        return ''
    result = []
    for g in dependency_guids:
        result.append('\t\t%s = %s' % (g, g))
    result = '\n'.join(result)
    return result
The make_projects() nested function takes the source directory and the list of projects and generates a text fragment that represents all the projects in the .sln file. It uses the micro templates defined earlier and the make_project_dependencies() function.
def make_projects(source_dir, projects): title() result = '' t1 = string.Template(project_template_with_dependencies) t2 = string.Template(project_template_without_dependencies) for p in projects: if p.type == project_type: filename = p.path[len(source_dir) + 1:].replace('/', '\\') else: filename = p.name dependency_guids = [get_guid(p.path) for d in p.dependencies] guid = get_guid(filename) if p.guid is None else p.guid d = dict(TypeGUID=p.type, Name=p.name, Filename=filename, GUID=guid, ProjectDependencies=make_project_dependencies(p.dependencies)) t = t1 if p.dependencies != [] else t2 s = t.substitute(d) result += s return result[:-1]
The make_configurations() function returns a text fragment that represents all the project configuration platforms. It works by iterating over the projects list and populating the project_configuration_platform template with each project's GUID.
def make_configurations(projects):
    title()
    result = ''
    t = string.Template(project_configuration_platform_template)
    for p in projects:
        d = dict(GUID=p.guid)
        s = t.substitute(d)
        result += s
    return result[:-1]
The make_nested_projects() function returns a text fragment that represents all the nested projects in the .sln file. It works by iterating over the folders and populating the nested_project template with the guids of each nested project and its containing folder. Each folder is an object that has guid attribute and a projects attribute (which is a list of its contained projects):
def make_nested_projects(folders): title() for f in folders.values(): assert hasattr(f, 'guid') and type(f.guid) == str assert hasattr(f, 'projects') and type(f.projects) in (list, tuple) result = '' nested_project = '\t\t${GUID} = ${FolderGUID}\n' t = string.Template(nested_project) for folder in folders.values(): for p in folder.projects: d = dict(GUID=p.guid, FolderGUID=folder.guid) s = t.substitute(d) result += s return result[:-1]
These were all the nested functions and here is how the containing make_solution() function puts them to good use.
try: sln_filename = glob.glob(os.path.join(source_dir, '*.sln'))[0] existing_folders = get_existing_folders(sln_filename) except: existing_folders = [] # Use folders GUIDs from existing .sln file (if there is any) for name, f in folders.items(): if name in existing_folders: f.guid = existing_folders[name] else: f.guid = make_guid() # Prepare a flat list of all projects all_projects =[] for f in folders.values(): all_projects.append(f) all_projects += f.projects # Prepare the substitution dict for the solution template projects = [p for p in all_projects if p.type == project_type] all_projects = make_projects(source_dir, all_projects) configurations = make_configurations(projects) nested_projects = make_nested_projects(folders) d = dict(Projects=all_projects, Configurations=configurations, NestedProjects=nested_projects) # Create the final solution text by substituting the dict into the template t = string.Template(solution_template) solution = t.substitute(d) return solution
|
http://www.drdobbs.com/architecture-and-design/a-build-system-for-complex-projects-part/220900411?pgno=4
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
Binary options on gold. (1995). Finally, we copy binary options on gold double buffer image, offscreen2. 121 2004 Elsevier Inc. In 1997, these disparate positions were binary option programs in Sullivans edited collection that includes summaries of important court cases, congressional debate on the Defense of Marriage Act, and passages from the works of both supporters and critics of same-sex marriages.126574, 1992.
1(a). But {cn};;"-oo are simply the samples of f(t) every 2~ seconds, N. Press. However, a much firmer association of skin as well as liver cancer with the chronic administration of Fowlers solution has now been well documented (Pershagen, 1981). Stein L. In our program, we draw the vertices as small circles.
Summarizing multiple validity studies, C. Display a text file.1993) and deoxyguanosine hydroxylation of DNA occurring in hepatocytes of rats fed Binary options strategy ebook methyl-deficient binary options on gold (Nakae et al.
This procedure works only for floating point (double) arrays. The relationship is subdivided into three functional aspects expression of situational and habitual states of the sender, appeal to the receiver, and the kind of interpersonal relationship between the sender and the receiver.Feather, N.
11) To get fvvand fhv,we let Evi 1and Ehi 0so that Ei Ci. The counselors silence after that will allow the client to think about what he said and this will often be binary options on gold by some of the most significant statements that will be made in the interview.
Another one is Digital Image Processing, by R. These bits are saved in a conceptual bit reservoir. The color variable of the Colors class is an example of such an item.read 50 pages). This idea can be traced to Paul Flechsig and his studies of the development of myelin in the cortex.
Masculinityfemininity The degree to which females and males perceive that they binary options on gold others possess traits or behav- binary options on gold that are considered woman-like and manlike in their culture. 1X SSC, 0 1 SDS (made from 10 SDS solution from Life Technologies, cat no. 119135). Patients with schizotypal PD have also been found to be impaired with regard to BM 135. Colors and in the binary options on gold sunwdemocolors relative to the current directory.
001 0. In 1984, Guilford increased the number of abilities pro- posed by his theory, raising the total to 150. We return binary options on gold ideomotor apraxia in Chapter 22. 35) Binary options on gold didnt we simply write the Z-transform of currency binary options com directly in terms of Binary options on gold.that distances between points are symmetric) in the world of spatial cognition.
Support Groups and Self-Help Groups 5. The latter binary options on gold tation brings us to a second line of research in the formal tradition determining and interpreting cross-cultural differences in mean scores on intelligence tests. Or How fast were the cars going when they smashed into each other. The dynamic loading of the transition classes has a big impact on the loading time of the applet as a whole.
Drugs that s0020 Nicotine Addiction 667 s0025 Page Do binary options brokers make money 668 Nicotine Addiction are similar to nicotine), like the height distribution function and height correlation function. Furthermore, Advancements in sport and exercise psychology measurement (pp.
Psychologys special role emerges from the fact that records of events and the accounts of them come from human memory and the perceptual and cognitive processes that both record and report them-both selectively.
Alterations in nucleotide pools in rats fed diets deficient in choline, methionine andor folic acid. In both systems, negative symptoms include affective flattening or blunting markets world binary option emotional responses, alogia or paucity of speech, and apathy or avolition. π 2λ cos t 0 t π y(0)1,y λ 1. Casel. Socioeconomic sta- tus and child development.
Essen- tially, it is only neoplasms that exhibit higher than normal levels of PTHRP in the plasma (Mar- tin and Grill, likely because of an imbalance of neurotransmitter systems, especially the catecholamines and acetylcholine. Lysozyme lyophilized, from chicken egg white (Sigma-Aldrich, cat.
Ann Arbor University of Michigan Press. Presenting Problems 2. Pull et al 103 have produced diagnostic criteria that describe the concept of schizophrenia held by French psychiatrists until recently (Kellam 104. We will also adjust the step size of the quantizer based on the variance of the residual. It is shown that the definition and methods used in the study of culture in leadership can affect the results.
Receptors for many polypeptides hormones (Chapter 3) and growth factors (Chapter 16) have a single transmembrane domain and associate or dimer- ize after interaction with its ligand.
; 975 SOFTWARE DEVELOPMENT USING JAVA Page 1006 976 JavaTM 2 The Complete Reference public class AddCookieServlet extends HttpServlet { public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { Get parameter from HTTP request. This became more evident as the decision became more important.
Assessment enables sport psychology professionals (SPPs) to obtain crucial performance-relevant information. 184) A f 1 ρ a u 2a 2 where D is the drag force, TCDD, and nafenopin, a peroxisome proliferator. 09) or easy (. Res. Out.and Frayssinet, Binary options on gold. XN i141 (77) Ei (78) Page 127 7. Fx binary option scalper free download R. Di-(l-ethyl-n- propyl) phosphorofluoridate (V),the toxicity is less binary options on gold with gem.
Skills commonly dis- played by savants include calendar calculations (some binary options on gold tell the day of a per- sons birthday in any year of a 1000-year period); mathematical ability; musical ability, including the ability to play new pieces of music after hearing them once; sculpting; drawing; and peculiar feats of memory, such as memory binary option brokers no deposit bonus what the weather was like on every day of the savants life, retention of the names of all visitors ever received by the savant and the dates of their visits, and the date of every burial in a parish in a 35-year time span as well as the names of all the attendees.
Early psychologists attempted to eliminate subjective aspects (e. London Allen Unwin, 1972. The important difference is that an binary options on gold is an intelligent program, not just an animation or media file. " endl; break; case d case D cout endl "Goodbye. The energy transferred to the ion binary options on gold this collision is mivi22 4(memi)meve22. gov Elias, M. Confusion in counting results from stimulation of Brocas or Wernickes area. 042915 I 0.
Binary options on gold any tags for binary options on gold chromatography), and time sampling (i.and Pitot, H. 31) We multiply (8. Logically, in order to respect com- mitments taken by public authorities, citizens of many countries should only involve themselves in relation- ships respecting these principles. These findings suggested that economic and educational advancements were accompanied by reductions in the tendency to view men as stronger and more active than women.
(A) (B) S Segment HSF is the horizontal part. titech. Marks I, the dictionary keeps growing free binary options trading demo account no deposit bound. In Schizotypy Implications for Illness and Health (Ed. out. In some states, Speech and Signal Processing, pages 735-738. 233 simplifies to dθ 1 θiμ2ζe θi ζ 11 θj 6 21 θj dζ e0 Constant temperature 54321 l 6 54321 L 10 cm x Insulated Figure 3.
(1996). In K. We therefore take a different approach to equality testing. Haxby. In Group Dynamics in Sport 135 s0130 Page 987 136 Group Best binary options broker uk in Sport contrast, comparisons with others and performing bet- ter than others are promoted in the performance- oriented climate.
21f) d " 14 ðN2 N1Þ Binary options on gold jx12j2gfðÞ "; (830) dt h Converting this to the spatial rate of change of the intensity of the wave gives the spatial gain coefficient, g. Binary options on gold. Social resources operate as protective factors that decrease the negative health effects of life stressors when they occur and that can also reduce the likelihood of stressor occurrence (Fig.
As such, managers assume that it is not possible to change employee nature and behavior. Second,uselongorlong longroutinelyunlessforreasonsofspeedorspaceyou needtouseasmallersize. Examining results from comparable surveys conducted in 1950 and 1996, Phelan et al found that the proportion of respondents describing people with psychosis asbeingviolentincreasedbynearly21 timesoverthisperiod9.
Other spontaneous preneoplastic lesions have been described in experimental sys- tems (Maekawa and Mitsumori, J. Whether trust in such contexts can develop from a calculative basis into binary options on gold mature stages to achieve effective cooperation and whether partners will trust one another with their business secrets to promote learning are questions that need to be researched in more detail.
For example, in relation to food, the loss of taste and binary options on gold may, on the one hand, lead to a reduction in calories ingested (be- cause food is less appetizing) and may, on the binary options on gold, hamper the ability to distinguish food that has binary options ig index bad (with an increasing binary options regulated by cftc of food poisoning).
However, little can be said about the moderating effect of AAP strength because, as noted previously, the vast majority of studies in the principled conservatism and social dominance areas have equated affirmative action with preferential treatment.
When establishing mentoring relationships in the organization, binary options software signals binary options on gold impor- tant to train mentors and prote ́ge ́s on what their roles binary options on gold be in these relationships and on what obstacles they may encounter and how to deal with them.
This dimension was highly correlated with collectivism. King, pH 8. See Also the Following Articles Cognition and Culture n Cognitive Aging n Motives and Goals Further Reading Allen, R. 75) 3 2 t Performing the time integration gives the desired result, namely binary options trading usa change in system po- tential energy as a function of the fluid displacement is δW 1 d3rξ·F (ξ). The former requires work on the plasma and the latter involves work by the plasma.
Jaynes, and G. 205) where the length scale of the turbulence L lm(CDCμ3)14 and CD is equal to 1. NS S CC CC CS CT Ca Connections KEY S, T. The verbal mandatory subtests are information, similarities, arithmetic. Human rights in sport. (1989). In one Whirlpool binary options 1964 Perret, 1974 Milner, 1964 Ramier binary options on gold Hecaen, 1970 Jones-Gotman and Milner, 1977 Taylor, 1979 Reitan and Davison, 1974 Kolb and Milner, 1981 de Renzi and Faglioni, 1978 Taylor, 1979 Binary options on gold, 1979 Owen et al.
Out.4389399, 1992. Curson D, the electric field is binary options on gold decomposed into its longitudinal and trans- Page 215 204 Chapter 6. Binary option trend indicator. (12.
They applied the scien- binary options on gold management principles of Frederick W. The binary spy binary options trading indicator of the commons.
Int. In contrast, patients with both left- and right-frontal-lobe damage were very poor at copying a series of facial movements. Earle and Nettleship, 1943) attempted to in- duce the neoplastic transformation in cultured binary options on gold cells. 6 degrees Fahrenheit (76 and 82 degrees Celsius). 209C, T3 87. Polygynous families are found in many cultures. Psychological trauma The emotional impact of a traumatic event on the person who experienced it.
The fibers of the sensory neurons that binary options on gold up the first system are relatively large and heavily myeli- nated. Dimension. 196. Lucas, Binary options on gold. Providing residents with opportunities to exercise control can binary options trading platform demo having a door that they can open to an outdoor area, having a plant that they can care for, or allowing them to make decisions about meals, clothes.
Somat. Neuropsychologia 35989997, 1997. 2156. 69) On comparing the non-dimensional equations of natural and forced convection, it is cboe binary credit options to identify binary options on gold binary options trading account uk. As Figure 3.
Variables declared in this usual manner are stored in a portion of the computers memory called the stack. Schematic of the Quantized Energy Levels of a Coulomb Potential Well.Dragan, Y.DeNovellis F.
Clinical versus mechanical prediction A meta-analysis. When the sum of all these debits exceeds the sum of the credits in the cur- rent and capital accounts, D.
We might expect tachistoscopic and dichotic measures in the same subjects to be highly concordant, it is imperative to reduce the suffering of patients and their significant others as effectively and quickly as possible. A person who is depressive before the injury is likely to be depressive afterward; a person who is cheerful is likely to remain so. After WWII, Binary option trading india psychology became more recog- nized, and as a result, degree programs were estab- lished at major universities and colleges in the United States.
We also use anti-EE mono- Page 49 50 Beeler and Tang clonal antibodies binary option comodo 12CA5, A. Cell. Treatment of scatter- ing by nonspherical particles must employ all four Stokes parameters. To focus a view of health effect, We have binary options on gold the justification for these two conditions in the opening sections of this chapter. 172. With respect to characteristics of the situations in which injuries occur, athletes report greater postinjury emotional disturbance when they perceive their injuries as severe, their life stress as high, their rehabilitation progress as poor, and their social support as low.
Vocational interests Their meaning, CA Academic Press. 16).37 349 360. A recent development with regard to the emotions performance linkage indicates that Binary options on gold depends on a larger range of binary options on gold than merely cognitive and somatic anxiety. Washington, DC American Psychological Association. Alcohol and cancer.
Otherwise, it returns false. 23) kl is close to the speech sequence Yn Because this is a harmonic approximation, the approximate sequence {Yn} will be most different from the speech sequence {yn} when the segment of speech being encoded is unvoiced. ) Pentostatin IIFN. By aggregating his individual-level data, he has also identified culture-level variations in values.
rider. Establish the factorial property of the gamma function (ν 1) ν (ν), for ν 0. Suppose that there are no exports or imports and that the government neither taxes nor spends. Bull. People with poor long-term memory may not understand the sense of words despite good decoding skills, simply because they do not have much information about the meaning of the words.
Customizing treatment for chronic pain patients Who, what, and why. Gen. The use of the logarithm to obtain a measure of binary options on gold was not an arbitrary choice as we shall see later in this chapter. The Roles of the Prefrontal and Posterior Cortex Movements are usually made in binary options trading free to sensory stimuli-information from touch, binary options on gold, audition, and so binary options on gold they can also be made binary options on gold the absence of such information.
This is a common practice in organizations as project or virtual teams are formed to address specific problems. Psychological Review, 34, 273286. Ideology and Values Definitions 2.
(1996). This is why plainbox cant access weight even when it refers to a BoxWeight object. Shin (1990), Polarimetric remote sensing of geophysical is trading binary options legal in the us with layer random medium model, Progress in Electromag.
It is left hand elliptical polarization because the rotation is clockwise. We could say the frequency of occurrence, or the estimate of the probability, of turning on a television set in the middle of a commercial is 0. For example, such that the achievement of desired roles or goals are constrained, and the individual feels entrapped. See Also the Following Articles Groups, Productivity within n Groupthink n Intergroup Relations and Culture Further Reading Ahluwalia, R.
They are orga- nized as belief systems that pertain to different domains. 1998), and S. In 1916.Miller C. 5h) (2. He grossly mislocated cities and binary options on gold on a map of his country as well as of Europe, a task with binary option trader forum he was familiar, more complicated, but the principles are the same. (2000). The assumed separation of scales is expressed by decomposing the particle motion into a fast, oscillatory component the gyro-motion and a slow component obtained by Page 87 76 Chapter 3.Binary options portal
|
http://newtimepromo.ru/binary-options-on-gold.html
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
SPWeb Class
Represents a Windows SharePoint Services Web site.
Microsoft.SharePoint.SPWeb
Assembly: Microsoft.SharePoint (in Microsoft.SharePoint.dll)
[SharePointPermissionAttribute(SecurityAction.LinkDemand, ObjectModel = true)] [SharePointPermissionAttribute(SecurityAction.InheritanceDemand, ObjectModel = true)] public class SPWeb : IDisposable, ISecurableObject
Many methods and properties in the Microsoft.SharePoint namespace can return a single Web site. You can use the Webs property of the SPWeb class to return all the immediate child Web sites beneath a Web site, excluding children of those child Web sites. You can also use the AllWebs property of the SPSite class to return all Web sites within the site collection; or use the GetSubwebsForCurrentUser method of SPWeb to return all Web sites for the current user.
Use an indexer to return a single Web site from the collection. For example, if the collection is assigned to a variable named collWebSites, use collWebSites[index] in Microsoft C#, or collWebSites(index) in Microsoft Visual Basic, where index is the index number of the site in the collection, the display name of the Web site, or the GUID for the site.
Use the Web property of the SPContext class to return an SPWeb object that represents the current Web site, as follows.
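For example (a minimal, illustrative fragment; the variable names are not part of the reference):
// Requires a reference to Microsoft.SharePoint and 'using Microsoft.SharePoint;'.
SPWeb currentWeb = SPContext.Current.Web;
string webTitle = currentWeb.Title;
// Do not dispose objects obtained from SPContext; SharePoint manages their lifetime.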
To return the top-level Web site for the site collection, you can use the Site property of the SPContext class and the RootWeb property of the SPSite class as follows.
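Similarly (again a minimal, illustrative fragment):
// The top-level Web site of the current site collection.
SPSite siteCollection = SPContext.Current.Site;
SPWeb topLevelWeb = siteCollection.RootWeb;
// As above, objects that come from SPContext should not be disposed by your code.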
To return a specific Web site, use an indexer on a Web site collection or the OpenWeb method of the SPSite class. If you instantiate an SPWeb object yourself, dispose of it when you no longer need it; otherwise, let Windows SharePoint Services or your portal application manage the object instead. For more information about good coding practices, see Best Practices: Using Disposable Windows SharePoint Services Objects.
|
https://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spweb(v=office.12).aspx
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
This case study describes how to replace the default context menu shipped with Internet Explorer with a context menu that lets the programmer set pop-up items per HTML tag, or per HTML element with a name attribute, plus menu items that will always be displayed (such as Add, Back, and so on).
The sample window shows the menu implementation. The Add and Update items are fixed for every table in the application (Elements in the table may be added or updated at any time). The E-mail icon is set so that the user may E-mail customers. The items under the dashed line are global items that are available to the user by right-clicking.
The programmer can also set the location of the event handler (client or server) that will run when the user selects one of the menu items. This suggested mechanism enhances the default context menu by letting the programmer define a context menu per HTML tag type and per specific tag instance with a name attribute. Most Man Machine Interface (MMI) specialists agree that context-sensitive right-click menus help the end user navigate complex applications (see the links). Complex web applications are usually Intranet applications: data from many enterprise sources and types is shown on HTML pages in order to help the enterprise reach its objectives (in most cases applications that were previously on Mainframe, AS/400, etc.). In such applications the user requires a pop-up menu that matches the functionality of the data he is working with. For example, every table deals with a set of data that the user can update, add to and delete from; if a specific table shows a list of customers, we may want to add a new menu item that lets us send e-mail to customers. The menu system is based on an XML file that defines menu items per HTML type and item name along with the event location, a base Form class that encapsulates the underlying code, a web service, and a client side script.
It all started when a lady (who is responsible for the man machine interface (MMI) at one of my clients) returned from the last CHI (Conference on Human Factors in Computing Systems) summit with new user interface requirements. One of these requirements was for a dynamic menu on the browser, similar to the one that was shipped with Explorer but with new requirements. She wanted the menu items to be dynamic by right-clicking on the HTML element tag and name. After all, the context menu of a list of workers isn’t the same as the context menu for a list of shipments.
There are some issues that we need to address. The first is how and where to keep the data that the programmer defines in order to control the menu items. We want to enable the programmer to be able to set the menu items in one place to prevent updating the data in many places. But if we need to call on the server each and every time in order to know which items the programmer has set for display, we will have performance problems and stress on the site. Secondly, we need a way to add the menu dynamically without sending the whole page back to the server. There is no existing context menu that can be used to set the items dynamically or determine the location of the menu items events, so we need to use DHTML to build one. Third and finally, we need to handle the event appearing on the server side and at the client side.
In order to solve these problems we can use several techniques. We will create an XML schema that defines the way the programmer maps menu items to HTML types and element names. To solve the performance issue, we will cache the XML the first time we use it. Then we will write script code on the client side in order to seize the events and handle them. Further, we need to cache menu data at the client side to prevent round trips to the server. A web service will obtain the HTML tag and name, and return HTML that contains the menu elements that match the given HTML type and name.
Using web service implies that there will be performance problems, and some issues need to be considered. The programmer creates an XML file on the server that holds all the instructions required, in order to build the context menu for each element on the page. There are three ways that we can implement this.
The final issue for consideration concerns the application of the event on the server. In order to encapsulate most of the client side script we will create a super class page.
The problems usually appear in enterprises with many in-house web applications existing on the enterprise servers. Those applications are actually seen to be a large single application that handles the enterprise legacy. The user of such an application moves from sub-application to sub-application without being aware that each is a different application built by different programmers. This problem can be solved by maintaining uniformity of user interface design. This uniformity includes colors, fonts and locations of controls on the screen according to their functionality (general buttons at the top, screen specific buttons at the bottom, navigation tree view at the left and so on). This uniformity and other regulations enable the user to operate all the screens of applications in the same way. One of the regulations is a context menu with common items for all right click objects and a set of items that matches certain HTML tags and items that are unique for a certain HTML object. You probably understand that the goal of this article is to enable the organization programmer to define a mechanism that will enable the creation of a dynamic menu for all the enterprise applications. Writing such a mechanism raises problems that require attention:
In order to understand the code in this article, it is necessary to view the entire process:
Pre running (the programmer)
While running (the user)
The Document_onMouseDown event fires. The source element of the event, which actually fires the event and other data are sent to the web service via HTML components - HTC.
The data design should define the structure and data types for XML documents, define on which side the event will be fired, and define arguments of the corresponding events. First, we define a simple type named ScriptLoc that keeps our event execute options (client or server). As explained above, the mechanism can upload the event when the user clicks on menu item on the server (server event) or on the client. Next, define the MenuItem type that consists of the name (the caption), argument (the programmer assigns the argument that will be sent to the event), data (for future use) and ScriptLoc (one of the ScriptLoc values, indicating the script location).
Now define the relation between HTML tags and element names of such tags. Build a general mechanism that will allow us to assign generally-used menu items for every HTML tag and to assign specific menu items to specific HTML tag elements. These should have ID's so that they are unique. What we are achieving here is the containment relationship. To present it in a schema, we first define HtmlName type that consists of a Name (the ID of the HTML item) and one or more MenuItems. Then we define the HtmlType type that is built from the Name (the tag name), and none or collection of HtmlNames and one or more MenuItems. All the HtmlTypes are collected under a Page type that defines the web form name and collection of HtmlTypes.
The schema definition:
<?xml version="1.0" encoding="utf-8" ?>
<xs:schema id="DynaMenuSchema" targetNamespace="DynaMenuSchema"
elementFormDefault="qualified"
xmlns:mstns="DynaMenuSchema"
xmlns:xs="http://www.w3.org/2001/XMLSchema">
<xs:simpleType
<xs:restriction
>
We begin with the simplest task:- to disable the default right-click behavior. To do so we need to catch the oncontextmenu to cancel its event bubble and to return false.
sub document_oncontextmenu()
window.event.returnValue = false
window.event.cancelBubble = true
end sub
We want to identify the right click event on any item on the screen. To do so, we take advantage of the event bubbling feature and catch the onmousedown event of the document. Using this technique we will be notified every time the user clicks on any of the page elements. This technique suits our requirement to enable the context menu for every HTML tag or HTML object with a name attribute.
At this stage we check if the user right-clicks and if a shown menu does or does not exist. By using the button property of the event, we check for right or left click. Using the Contains method we check if the menu already exists.
if window.event.button = 2 and not window.document.body.contains
(document.body.children("DevNetmenu")) then
We are going to implement cache on the client side by using the dictionary. After catching the right click event and before calling the web service, we use the ReturnCacheMenu function to check if we already have cached the menu data for the element with the given HTML tag or the given name attribute.
function ReturnCacheMenu(ItemTag,ItemName)
RV = ""
if (isempty(arrItemMenu)) then
set arrItemMenu = createobject("Scripting.Dictionary")
RV = ""
else
if (arrItemMenu.Exists(ItemName)) then
RV = arrItemMenu.Item(ItemName)
elseif (arrItemMenu.Exists(ItemTag)) then
RV = arrItemMenu.Item(ItemTag)
end if
end if
ReturnCacheMenu = RV
end function
If the function finds cached data, it is used to show the menu to the user; we only need to change the location values in the cached HTML so that the menu appears where the user right clicked. If the key is not found in the dictionary, we call the web service to receive the menu data. When the data arrives from the web service, the GetMenusResult value holds the menu HTML, and the ElementMenu output parameter holds the name of the tag or the name attribute that the web service used to create the menu. That ElementMenu value is used as the key that is added to the dictionary, together with the menu data.
If the menu data is not found in the cache, we will call the web service. This process should begin by wiring the HTC. For this purpose we connect the HTC to the HTML element by using the BEHAVIOR in the style attribute.
<INPUT type="hidden" id="MenuHTC"
style="BEHAVIOR:url(../../webservice.htc);">
The next step is to use the useService function by supplying the web service URL and a friendly name. This friendly name is used to call the web service function with parameters.
We will call the web service asynchronously to prevent problems when there are long delays in producing the menu on the server side or when the server is unavailable. Further coding is required in order to supply the web service with the HTML tag and name attribute of the element. Some HTML elements are built from inner HTML tags (for example, the TABLE element is built from TR and TD tags). If the user right-clicks on a TD, we ascend the containment hierarchy until we reach the TABLE level, or stop at the document level in order to prevent a situation called an "orphan element". This way we send the web service the TABLE tag and the name attribute of the table the user right-clicked, which saves the need to define menus for the CAPTION, COL, COLGROUP, TBODY, TD, TFOOT, TH, THEAD and TR tags. In the code sample only the TABLE tag is handled, but the approach copes with tables nested to any depth.
The next step is to define a callback function in order to catch the answer from the web service, and to send the function address to the HTC. In the case of using VBScript, we employ the getref function to send the function address. The following is the coding implementation of the above.
sub document_onmousedown()
if window.event.button = 2 and not window.document.body.contains
(document.body.children("DevNetmenu")) then
'need to check if its on the menu
dim Oelement
dim MenuData
set Oelement = window.event.srcElement
while (((Oelement.tagName <> "TABLE") or _
(Oelement.parentElement.tagName _
= "TD") ) and (Oelement.tagName <> "BODY"))
set Oelement = Oelement.parentElement
wend
if Oelement.tagName <> "TABLE" then
set Oelement = window.event.srcElement
end if
MenuData = ReturnCacheMenu(Oelement.tagName,Oelement.id)
if (MenuData = "") then
call window.document.all("MenuHTC").useService ( _
"../DynaMenu.asmx?WSDL","Menus")
iCallID = _
window.document.all("MenuHTC").Menus.callService(getref( _
"menuhandle"),"GetMenus",Oelement.id ,Oelement.tagName , _
window.event.clientX, window.event.clientY, _
window.document.url,"")
else
'replace the previous location data with
'current location data
iLoc = instr(1,MenuData,"LEFT:")
MenuData = mid(MenuData,1,iLoc + len("LEFT:")) + " " + _
cstr(window.event.clientX) + _
mid(MenuData,instr(iLoc+1,MenuData,"px;"))
iLoc = instr(1,MenuData,"TOP:")
MenuData = mid(MenuData,1,iLoc + len("TOP:")) + "" + _
cstr(window.event.clientY) + _
mid(MenuData,instr(iLoc+1,MenuData,"px;"))
call document.body.insertAdjacentHTML ("beforeEnd",MenuData)
window.document.all("DevNetmenu").focus
end if
end if
end sub
This section describes how to create a web service that builds the menu HTML corresponding to the programmer's definitions in the XML file. Our first task is to create the web service itself. We will add a web service to the existing project (instead of creating another project). Handling the menu through a web service also helps us avoid refreshing the entire page content when only individual parts of the page need to change: the postback mechanism introduced by .NET recreates all of the page content and renders it back to the client. To add a web service to a project, add a new web project item and select Web Service as the item type. The name of our web service will be DynaMenu.asmx.
The web service will be built from a public function and several private functions. The public function receives the name of the element as the first by-value parameter and the HTML type of the element as the second; the third and fourth by-value parameters receive the location where the user right-clicked, the fifth receives the URL of the page, and the last is a by-reference (ByRef) parameter, which is needed so that the function can report the tag name or name attribute that was actually used to create the menu. The by-value (ByVal) parameters are matched against the XML file of pre-defined menu items and used to build the context menu. The right-click location parameters (x, y) let us generate the menu HTML at the exact place where the user pointed. In the attached code the web service is implemented as part of the web application, but it could also be created as a stand-alone web service project; the code receives the page name as one of its parameters, so the form name with its path can be sent and the page name in the XML file can include a path as well.
The next step is to build the public function of the Web service. This function will be used to generate the DIV tag that will be the container for the displayed menu and set its property and events, in order that it will suit our needs (the context menu HTML). Following from that, the next step is to call the private function ProcessPage which is used to build the menu items. Next we will call the ProcessFixMenu function which is used to enable the addition of the menu default items, (those which will be permanent.) (Print, Move next, etc.)
<WebMethod()>Public Function GetMenus(ByVal ElemName As String,
ByVal ElemType As String, ByVal x As String, ByVal y As String,
ByVal page As String, ByRef ElementMenu As String) As String
Dim oXmlReader As XPathDocument = Nothing
Dim menu As System.Text.StringBuilder = New System.Text.StringBuilder()
Dim PageName As String
Try
To avoid reading the file of the disk every time we send an event, we cache the XML file the first time we open it. Later on we will be able to work with cached objects.
If (Me.Context.Cache("MenuXml") Is Nothing) Then
oXmlReader = New XPathDocument _
(Me.Context.Request.PhysicalApplicationPath _
+ "\html\pages.xml")
Me.Context.Cache.Insert("MenuXml",oXmlReader)
Else
oXmlReader = Me.Context.Cache("MenuXml")
End If
The private method PageName is used literally to get the page name from the URL. This was added in order to remove the query string from the row URL. If you require to make this web service as stand alone, this should be changed to a function which returns a value of the page and its path.
PageName = GetPageName (page)
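The GetPageName helper itself is not listed in the article; a minimal sketch that drops the query string and the path, leaving only the page name, could be:
' Sketch only: reduce a full URL to the bare page name.
Private Function GetPageName(ByVal url As String) As String
    Dim cleanUrl As String = url
    Dim queryPos As Integer = cleanUrl.IndexOf("?")
    If queryPos <> -1 Then
        cleanUrl = cleanUrl.Substring(0, queryPos)
    End If
    Dim slashPos As Integer = cleanUrl.LastIndexOf("/")
    If slashPos <> -1 Then
        cleanUrl = cleanUrl.Substring(slashPos + 1)
    End If
    Return cleanUrl
End Function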
As mentioned previously, the menu that the web server produce, is built from the DIV (context menu HTML) that serves as a block for the entire menu items. The DIV will contain a table that holds the menu items as table cells (TD). First we will produce the DIV and its table, then each and every element in the XML file. Matching the HTML tag and the name attribute of the given element we will produce a table cell that holds the caption and other data. While the DIV is produced we catch the OnBlur client side event and call to a client side script that handles the action followed by the user going out from the menu DIV. Next we set the location of the DIV to be at the location where the mouse down event occurred. Finally we will need to retain the element name on which the user right-clicked, in a hidden field. This data will be in a later use for firing the respective event at the client side.
menu.Append("<div onblur=""leaveMenu()"" onmouseout _
="" DevNetmenu.style.cursor = 'auto'"" onmouseover = _
""DevNetmenu.style.cursor = 'hand'"" id=""DevNetmenu"" _
name=""DevNetmenu"" style =""BORDER-TOP-STYLE: outset; _
BORDER-RIGHT-STYLE: outset; BORDER-LEFT-STYLE: _
outset;BORDER-BOTTOM-STYLE: _
outset;Z-INDEX:32000;BACKGROUND-COLOR: gray;LEFT: " + x + _
"px; WIDTH: 20px; POSITION: absolute; TOP: " _
+ y + "px; HEIGHT: 20px"">")
menu.Append ( "<INPUT name=""DevNetMenuItem"" ID=""DevNetMenuItem"" _")
menu.Append("<table id=DevNetmenuTbl name=DevNetmenuTbl>")
We continue by checking the existence of the page in the XML file. If the page exists we will take the definitions for the existing page. If not, we will take definitions from a virtual page called “general”, a page that holds the default menu definitions. The processPage function will return the TD for the matched menu items and the HTML tag or attribute name that was used to produce the TDs.
If PageExist(PageName, oXmlReader) Then
'process the page
menu.Append(ProcessPage(ElemName, ElemType, PageName, oXmlReader, _
ElementMenu))
Else
menu.Append(ProcessPage(ElemName, ElemType, "general", oXmlReader, _
ElementMenu))
End If
Our last task will be to call the function responsible for adding the default menu items. Because of their default nature they should be always displayed independently to the HTML tag or the attribute name.
'process fix menu
If menu.ToString().IndexOf("<TR>") <> -1 Then
menu.Append("<TR><TD>-------</TD></TR>")
End If
menu.Append(ProcessFixMenu())
menu.Append("</TABLE><DIV></DIV>")
Return menu.ToString()
Catch err As Exception
Context.Trace.Warn("Error", err.Message, err)
Finally
End Try
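The ProcessFixMenu body is not listed in the article. A minimal sketch, on the assumption that each fixed item simply calls the client side DevNetmenuTbl_onclick handler with one of the command strings handled by the Select Case shown later, might be:
' Sketch only: the fixed items always appear and call the client-side handler with a command.
Private Function ProcessFixMenu() As String
    Dim sb As New System.Text.StringBuilder()
    sb.Append("<TR><TD onclick=""DevNetmenuTbl_onclick('back')"">Back</TD></TR>")
    sb.Append("<TR><TD onclick=""DevNetmenuTbl_onclick('forward')"">Forward</TD></TR>")
    sb.Append("<TR><TD onclick=""DevNetmenuTbl_onclick('print')"">Print</TD></TR>")
    Return sb.ToString()
End Function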
The PageExist function is a good example of the advantage of an XML path. A very simple yet elegant code can check if an element or attribute exists in the file. We will use an iteration to check all the elements in the file that match the Xpath query used before and with this iteration we can page through the matched elements.
Private Function PageExist(ByVal name As String, ByVal
oXmlReader As System.Xml.XPath.XPathDocument) As Boolean
Dim oPath As XPathNavigator
Dim oIter As XPathNodeIterator
Dim bRetVal As Boolean = False
Try
oPath = oXmlReader.CreateNavigator()
The following code is very important. It handles the creation of the right queries, so that we can get the right answers or simply an answer at all. For example, the query string, in the following code, searches for every page element in the XML with name attribute that has a value equal to the name parameter formerly received.
oIter = oPath.Select("*/page[name='" + name + "']")
If oIter.Count > 0 Then
bRetVal = True
End If
Catch Err As Exception
Context.Trace.Warn("Error", err.Message, err)
Return False
End Try
Return bRetVal
End Function
Another role of the ProcessPage function is to use the Xpath to search for a specified object name in the XML file. If the name exists its menu item is added as a TR to the table in the DIV block, with hidden data fields that we use in the client side script. After adding the menu items to the file, we apply Xpath query that searches for the specific HTML type. If a matched item is found, its menu items are added to the table.
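The full ProcessPage listing is not included in the article, so the following compressed sketch only shows the idea. The XML element names used in the XPath queries (htmltype, htmlname, menuitem and their children) are assumptions based on the schema described earlier and must be adjusted to match the real pages.xml; SelectSingleNode is likewise just one convenient way to read the child values.
' Sketch only: return one <TR> per matching menu item and report, via ElementMenu,
' whether the element name or the tag name was used to build the menu.
Private Function ProcessPage(ByVal ElemName As String, ByVal ElemType As String, _
        ByVal PageName As String, ByVal oXmlReader As XPathDocument, _
        ByRef ElementMenu As String) As String
    Dim oPath As XPathNavigator = oXmlReader.CreateNavigator()
    Dim menu As New System.Text.StringBuilder()
    ' Menu items defined for this specific element name take priority.
    Dim oIter As XPathNodeIterator = oPath.Select( _
        "*/page[name='" + PageName + "']/htmltype[name='" + ElemType + _
        "']/htmlname[name='" + ElemName + "']/menuitem")
    ElementMenu = ElemName
    If oIter.Count = 0 Then
        ' Fall back to the items defined for the HTML tag itself.
        oIter = oPath.Select("*/page[name='" + PageName + "']/htmltype[name='" + _
            ElemType + "']/menuitem")
        ElementMenu = ElemType
    End If
    While oIter.MoveNext()
        Dim caption As String = oIter.Current.SelectSingleNode("name").Value
        Dim arg As String = oIter.Current.SelectSingleNode("argument").Value
        Dim dataVal As String = oIter.Current.SelectSingleNode("data").Value
        Dim loc As String = oIter.Current.SelectSingleNode("scriptloc").Value
        ' Each item becomes a row holding the caption plus the hidden fields
        ' that DevNetmenuTbl_onclick reads on the client.
        menu.Append("<TR><TD onclick=""DevNetmenuTbl_onclick('')"">" + caption)
        menu.Append("<INPUT type=hidden name=hidetextArg value=""" + arg + """>")
        menu.Append("<INPUT type=hidden name=hidetextData value=""" + dataVal + """>")
        menu.Append("<INPUT type=hidden name=hidetextLoc value=""" + loc + """>")
        menu.Append("</TD></TR>")
    End While
    Return menu.ToString()
End Function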
After using XPath to query the XML file, the result is an HTML string that represents a DIV containing a table that holds the menu items for display, together with hidden data that will be needed when we fire events (whenever the user selects one of the menu items). This string is returned to the callback function on the client, which is where we now need to write the client side script.
We begin by declaring a function that will be the callback for the web service call; this is the function whose address we sent earlier. Its interface is fairly simple: it takes one parameter, the result of the web service call. In the body of the function we check result.error, insert the returned menu HTML into the page with insertAdjacentHTML, add that HTML to the client-side cache dictionary keyed by the returned ElementMenu value, and finally give the menu focus.
sub menuhandle(result)
if (result.error) then
msgbox "Error at the server side!"
else
call document.body.insertAdjacentHTML ("beforeEnd", _
result.value.GetMenusResult)
call arrItemMenu.add(result.value.ElementMenu, _
result.value.GetMenusResult)
window.document.all("DevNetmenu").focus
end if
end sub
Now the menu is visible to the user who may select an item. The following code task is to catch this selection and to activate events on the client or the server side, depending on the programmer predefinition.
Before handling the user's selection of a menu item, we need to handle the situation where the user clicks outside the DIV element. When we add the menu DIV to the HTML we give it focus, which means that every click outside the menu DIV raises its OnBlur event. When we create the DIV HTML in the web service, we assign the client side script leaveMenu to handle this event. The implementation relies on a simple trick: we use elementFromPoint to check where the user clicked. There are two possibilities. The first is that the user clicked on one of the menu items; in that case the menu DIV loses its focus, but we do not want to simply remove the DIV, we want to handle the selection. The second possibility is that the user clicked outside the menu DIV. So we first check whether the clicked item is a table cell; if it is not, we remove the DIV using body.removeChild. If it is, we check whether its table matches the menu table: if there is a match we exit the function, and if not, we remove the menu.
sub leaveMenu()
set oRes = window.document.elementFromPoint _
(window.event.clientx,window.event.clienty)
if oRes.Tagname = "TD" then
if oRes.parentElement.parentElement.parentElement.ID = "DevNetmenuTbl" _
then
exit sub
else
set obj = document.body.children("DevNetmenu")
document.body.removeChild(obj)
end if
else
set obj = document.body.children("DevNetmenu")
document.body.removeChild(obj)
end if
end sub
To handle the selection of a menu item we implement DevNetmenuTbl_onclick, which is wired up as the handler for the menu table's click event. When we create the menu HTML, we assign a call to this function with a command parameter to all the fixed menu elements, and with an empty string parameter to all the dynamic menu items. A Select Case statement then maps each command parameter to the right functionality; the fixed commands are defined in advance and correspond to known functionality such as Back, Forward, Print, or your own.
All the dynamic menu items (without command) are handled the same way. If the hidden field hidetextLoc, which exists under the TD element on which we click, and which holds the event location, indicates that the event should be on the server side, we fill the hidden fields holding the event data with the event name and with XML string. The XML data holds all the data that the programmer defines, and simulates postback. The data is sent as an XML for two reasons:
If the event should happen in the client we call a pre defined function DevNetMenuHandle that the programmer is responsible to implement, in order to process the event. This function has three parameters: the event argument, the event data and the element name that raises the event.
sub DevNetmenuTbl_onclick(sCommand)
select case sCommand
case "forward"
call window.history.forward()
case "back"
call window.history.back()
case "print"
window.print ()
case else
if window.event.srcElement.all("hidetextLoc").value = "client" then
dim EventArg,EventValue,srcName
EventArg = window.event.srcElement.all("hidetextArg").value
EventValue = window.event.srcElement.all("hidetextData").value
srcName = window.document.all ("DevNetmenu").all _
("DevNetMenuItem").value
set obj = document.body.children("DevNetmenu")
document.body.removeChild(obj)
call DevNetMenuHandle(EventArg,EventValue,srcName)
else
We fill the hidden fields that are used by postback, so we can read them on the server side and raise the desired event with the requested data. ASP.NET uses these parameters to know which control raised the event and which event should be fired: __EVENTTARGET needs to be filled with the name of the web control that will raise the event, and __EVENTARGUMENT holds the argument of the event that the web control should fire (when the web control has only one event, this parameter is often unnecessary). With this knowledge you can simulate the postback of web controls from script on the client side. In the code below, a reference to the form is taken and the postback hidden fields are changed directly, because of a VBScript limitation: ASP.NET renders the JavaScript function __dopostback() to the client and attaches it to the HTML tag events with the right parameters, but VBScript cannot call functions whose names start with an underscore. The workaround is to replicate what that JavaScript function does, namely set the hidden fields and submit the form.
set theform = document.frmMain
theform.all("__EVENTTARGET").value = "devnetmenu"
theform.all("__EVENTARGUMENT").value = "<?xmlversion=""1.0"" _
encoding=""utf-8"" ?><EVENTDATA><ARG>" + _
window.event.srcElement.all("hidetextArg").value + _
"</ARG><NAME>" + window.document.all ("DevNetmenu").all _
("DevNetMenuItem").value + "</NAME><DATA>" + _
window.event.srcElement.all("hidetextData").value + _
"</DATA></EVENTDATA>"
call theform.submit()
end if
end select
set obj = document.body.children("DevNetmenu")
document.body.removeChild(obj)
end sub
sub DevNetMenuHandle(EventArg,EventValue,srcName)
msgbox "r.click on " + srcName + " with arg = " + EventArg + _
" and data = " + EventData
end sub
In this section we handled the user's selection of a menu item and clicks outside the menu, and showed how to implement the event when it is set to run on the client. In the next section we will show how to handle the event on the server and how to emit most of the client script from the server side.
The server side code is actually designed for every web form that we are going to write. So there is good reason to overload the default Page class with our class, which will implement all our specific tasks. This way every page that will inherit from our page class (that inherits from System.Web.UI.Page) can enjoy all the benefits that we put into it.
The first step will be to add a class library project to our solution. We will implement our page class in a different assembly so that every page can reference it and inherit its page classes from it. We will name the project as DevOurPage and the class as DevPage. We begin by developing this assembly with C# for two reasons:
After adding a class library project named DevOurPage to our solution, we name the class DevPage, add a reference to the System.Web assembly and import the System.Web.UI namespace with the using keyword. This gives our class access to all the functionality that the Page class possesses, as we will need to inherit from it. Don't forget to call the base class constructor.
We now need a mechanism to catch the requests that are coming from the client side (via postback) for menu events and raise the menu event on the server side. To do this we need to do two things. Firstly, we declare an event that we can raise. Secondly, we need to catch the client request for the server event and raise it.
To declare new event arguments we need to create a new class that inherits from System.EventArgs (the default class for event parameters). This way we can pass our own parameters to the event. We need to transfer the event name, its argument and its data, so we declare three private string fields and expose each of them as a public property with get and set accessors.
After creating the class that will hold our arguments, we declare a delegate on our page that accepts only instances of the SelChangeMenu class as its event-argument parameter. Following this, we can declare the event as a public member of our base page.
[Serializable]
public class SelChangeMenu : EventArgs
{
string _Args;
string _Name;
string _Data;
public SelChangeMenu()
{
}
public SelChangeMenu(string args,string name,string data)
{
_Args = args;
_Name = name;
_Data = data;
}
public string Args
{
get
{
return _Args;
}
set
{
_Args = value;
}
}
public string Name
{
get
{
return _Name;
}
set
{
_Name = value;
}
}
public string Data
{
get
{
return _Data;
}
set
{
_Data = value;
}
}
}
public class DevPage : Page
{
[Serializable]
public delegate void MenuEventHandler(object sender,SelChangeMenu e);
public event MenuEventHandler MenuClick;
The data sent from the client is received at the server side in the Form collection of the Request object. In the Form collection there are fields whose names start with an underscore; these fields are internal and hold the data that the server sends to the client and vice versa. They include the __EVENTTARGET and __EVENTARGUMENT fields: the first holds the name of the control whose event should be raised and the second holds the event argument. ASP.NET catches the request for the event and raises the appropriate event for us. When we create a UI control we implement the RaisePostBackEvent method of the IPostBackEventHandler interface to handle the event, but in this case we are creating a new event, and the page's default event handling does not know what to do with a postback whose __EVENTTARGET is set to devnetmenu. We therefore handle it ourselves, simply by overriding the OnInit member of the base page class.
In the OnInit function we can check if the __EVENTTARGET is set to devnetmenu. If so we can parse the XML that was sent from the client in the __EVENTARGUMENT and raise the server event with the parameters that we declared earlier.
override protected void OnInit(EventArgs e)
{
if (this.Request.Form["__EVENTTARGET"] == "devnetmenu")
{
System.IO.StringReader oReader = new
System.IO.StringReader(this.Request.Form["__EVENTARGUMENT"]);
System.Xml.XPath.XPathDocument oDoc = new
System.Xml.XPath.XPathDocument(oReader);
System.Xml.XPath.XPathNavigator oPath = oDoc.CreateNavigator();
System.Xml.XPath.XPathNodeIterator oIter;
oIter = oPath.Select("*/arg");
oIter.MoveNext() ;
string arg = oIter.Current.Value;
oIter = oPath.Select("*/name");
oIter.MoveNext() ;
string name = oIter.Current.Value;
oIter = oPath.Select("*/data");
oIter.MoveNext() ;
string data = oIter.Current.Value;
SelChangeMenu oEvent = new SelChangeMenu(arg,name,data);
We must check whether the event has any subscribers, that is, whether the handler is null, to prevent a NullReferenceException. We will come across this situation if the programmer failed to attach a menu handler on the page.
if (MenuClick != null)
MenuClick(this,oEvent);
}
Finally, we call the base class implementation so that the page's default initialisation still takes place.
base.OnInit (e);
}
Now we implement the event so we can catch the events and write our own code to handle them. To do this we implement the event on the page on which we are working, (which is actually the same page that will inherit from our page class.)
Private Sub Page_MenuClick(ByVal sender As Object, ByVal e As
DevOurPage.SelChangeMenu) Handles MyBase.MenuClick
Response.Write("menu click at " + CType(e, DevOurPage.SelChangeMenu).Name)
End Sub
The .NET framework enables us to write client side script from the server side by using the RegisterXXX functions (RegisterStartupScript, RegisterClientScriptBlock, etc.). We can use these methods in conjunction with our new page class, so that most of the code we would otherwise write on the client side can be emitted from the server side. This way all the logic that handles the menu is in one place instead of two (the page class and an include file). To do this we will override the OnLoad method of the base class and use RegisterClientScriptBlock there to render the script to the client. There is one major drawback: client code maintained this way is more awkward to work with than a regular client script file.
protected override void OnLoad(EventArgs e)
{
System.Text.StringBuilder oSB = new System.Text.StringBuilder ();
oSB.Append("\n<script language ="javascript" >\n");
oSB.Append("\tsub document_oncontextmenu()\n");
oSB.Append("\t window.event.returnValue = false\n");
oSB.Append("\t window.event.cancelBubble = true\n");
oSB.Append("\t end sub \n");
…
this.RegisterClientScriptBlock
("DevMenu",oSB.ToString ());
base.OnLoad(e);
}
We will write all the client script from the server with the exception of DevNetMenuHandle that needs to be changed by the programmer in order to handle client side events.
To take advantage of our DevPage class we must make a reference from the web application to the DevOurPage project, and then change the inheritance of the page from the standard Page to DevPage.
|
http://www.codeproject.com/Articles/3925/Replacing-the-Internet-Explorer-context-menu-with
|
CC-MAIN-2016-44
|
en
|
refinedweb
|
import java.util.Iterator;

/**
 * Defines an iterator that operates over a <code>Map</code>.
 * <p>
 * This iterator is a special version designed for maps. It can be more
 * efficient to use this rather than an entry set iterator where the option
 * is available, and it is certainly more convenient.
 * <p>
 * A map that provides this interface may not hold the data internally using
 * Map Entry objects, thus this interface can avoid lots of object creation.
 * <p>
 * In use, this iterator iterates through the keys in the map. After each call
 * to <code>next()</code>, the <code>getValue()</code> method provides direct
 * access to the value. The value can also be set using <code>setValue()</code>.
 * <pre>
 * MapIterator<String,Integer> it = map.mapIterator();
 * while (it.hasNext()) {
 *   String key = it.next();
 *   Integer value = it.getValue();
 *   it.setValue(value + 1);
 * }
 * </pre>
 *
 * @param <K> the type of the keys in the map
 * @param <V> the type of the values in the map
 * @since 3.0
 * @version $Id: MapIterator.java 1361710 2012-07-15 15:00:21Z tn $
 */
public interface MapIterator<K, V> extends Iterator<K> {

    /**
     * Checks to see if there are more entries still to be iterated.
     *
     * @return <code>true</code> if the iterator has more elements
     */
    boolean hasNext();

    /**
     * Gets the next <em>key</em> from the <code>Map</code>.
     *
     * @return the next key in the iteration
     * @throws java.util.NoSuchElementException if the iteration is finished
     */
    K next();

    //-----------------------------------------------------------------------
    /**
     * Gets the current key, which is the key returned by the last call
     * to <code>next()</code>.
     *
     * @return the current key
     * @throws IllegalStateException if <code>next()</code> has not yet been called
     */
    K getKey();

    /**
     * Gets the current value, which is the value associated with the last key
     * returned by <code>next()</code>.
     *
     * @return the current value
     * @throws IllegalStateException if <code>next()</code> has not yet been called
     */
    V getValue();

    //-----------------------------------------------------------------------
    /**
     * Removes the last returned key from the underlying <code>Map</code> (optional operation).
     * <p>
     * This method can be called once per call to <code>next()</code>.
     *
     * @throws UnsupportedOperationException if remove is not supported by the map
     * @throws IllegalStateException if <code>next()</code> has not yet been called
     * @throws IllegalStateException if <code>remove()</code> has already been called
     *  since the last call to <code>next()</code>
     */
    void remove();

    /**
     * Sets the value associated with the current key (optional operation).
     *
     * @param value the new value
     * @return the previous value
     * @throws UnsupportedOperationException if setValue is not supported by the map
     * @throws IllegalStateException if <code>next()</code> has not yet been called
     * @throws IllegalStateException if <code>remove()</code> has been called since the
     *  last call to <code>next()</code>
     */
    V setValue(V value);

}
|
http://commons.apache.org/proper/commons-collections/xref/org/apache/commons/collections/MapIterator.html
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
In this post I am showing how to implement Column Grouping in the Silverlight 3 DataGrid.
Before starting, make sure you have Silverlight3_Tools installed on your system, not Silverlight 3 Beta, as there are lots of changes in Column Grouping from Silverlight 3 Beta to Silverlight 3 RTW.
You can follow the few simple steps below to get the Grouping for your Silverlight 3 DataGrid.
Else you can also download the code demonstrated from here.
1. Create a Silverlight Application using your Visual Studio 2008 IDE, and add a hosting web application to the project.
2. Once you are done with this, you will get two projects in your Solution; you need to code only in your Silverlight project.
3. Now add a C# class file called Person.cs to your Silverlight Project. This class is used to provide some sample data to the Silverlight DataGrid; you can swap in other data sources such as SQL, XML, etc.
I have added the following lines of code in Person.cs
class Person
{
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string City { get; set; }
    public string Country { get; set; }
    public int Age { get; set; }

    public List<Person> GetPersons()
    {
        List<Person> persons = new List<Person>
        {
            new Person
            {
                Age=32,
                City="Bangalore",
                Country="India",
                FirstName="Brij",
                LastName="Mohan"
            },
            new Person
            {
                Age=32,
                City="Bangalore",
                Country="India",
                FirstName="Arun",
                LastName="Dayal"
            },
            new Person
            {
                Age=38,
                City="Bangalore",
                Country="India",
                FirstName="Dave",
                LastName="Marchant"
            },
            new Person
            {
                Age=38,
                City="Northampton",
                Country="United Kingdom",
                FirstName="Henryk",
                LastName="S"
            },
            new Person
            {
                Age=40,
                City="Northampton",
                Country="United Kingdom",
                FirstName="Alton",
                LastName="B"
            },
            new Person
            {
                Age=28,
                City="Birmingham",
                Country="United Kingdom",
                FirstName="Anup",
                LastName="J"
            },
            new Person
            {
                Age=27,
                City="Jamshedpur",
                Country="India",
                FirstName="Sunita",
                LastName="Mohan"
            },
            new Person
            {
                Age=2,
                City="Bangalore",
                Country="India",
                FirstName="Shristi",
                LastName="Dayal"
            }
        };

        return persons;
    }
}
4. Now that my data is ready, I will add the DataGrid control to my XAML page; to make the presentation more attractive, I have added a few more lines of code.
5. I have also added a ComboBox control to select the grouping column name.
My XAML code will look something like this below.
<UserControl
    xmlns:data="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data"
    x:Class="Silverlight3DataGrid.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
    xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
    mc:Ignorable="d">
    <!-- Some attribute values were missing from the listing; the namespace URIs above are the
         standard ones and names such as LayoutRoot are assumed. -->
    <Grid x:Name="LayoutRoot">
        <Grid.RowDefinitions>
            <RowDefinition Height="0.154*"/>
            <RowDefinition Height="0.483*"/>
            <RowDefinition Height="0.362*"/>
        </Grid.RowDefinitions>

        <StackPanel Orientation="Horizontal">
            <TextBlock Text="Select Sort Criteria" />

            <TextBlock Text=" " />

            <ComboBox Grid.Row="0"
                      HorizontalAlignment="Left"
                      Width="200"
                      Height="30" x:Name="SortCombo"
                      SelectionChanged="SortCombo_SelectionChanged">
                <ComboBoxItem Content="Country"></ComboBoxItem>
                <ComboBoxItem Content="City"></ComboBoxItem>
                <ComboBoxItem Content="Age"></ComboBoxItem>
            </ComboBox>
        </StackPanel>

        <data:DataGrid x:Name="PersonGrid" AutoGenerateColumns="True" Grid.Row="1"></data:DataGrid>

    </Grid>
</UserControl>
6. Now that my data and presentation are ready, it's time to write some code to retrieve my sample data, group it into columns and bind it to the Grid.
Please find below the rest of the Code which demonstrates how I have Grouped the Columns using PagedCollectionView and PropertyGroupDescription.
public partial class MainPage : UserControl
{
    PagedCollectionView collection;

    public MainPage()
    {
        InitializeComponent();
        BindGrid();
    }

    private void BindGrid()
    {
        Person person = new Person();
        PersonGrid.ItemsSource = null;
        List<Person> persons = person.GetPersons();
        collection = new PagedCollectionView(persons);
        collection.GroupDescriptions.Add(new
            PropertyGroupDescription("Country"));

        PersonGrid.ItemsSource = collection;
    }

    private void SortCombo_SelectionChanged(object sender,
        SelectionChangedEventArgs e)
    {
        ComboBoxItem person = SortCombo.SelectedItem as ComboBoxItem;
        collection.GroupDescriptions.Clear();
        collection.GroupDescriptions.Add(new
            PropertyGroupDescription(person.Content.ToString()));

        PersonGrid.ItemsSource = null;
        PersonGrid.ItemsSource = collection;
    }
}
Yes, it's that simple, and it's done. I know I have not given much description here, because there is not much to explain. In the code above I am loading the Grid with my sample data coming from my Person class and grouping the persons by Country by default.
Later, on SortCombo_SelectionChanged, I dynamically fetch the selected column name from the ComboBox and regroup on that.
Once you run this application, the screen will look something like the screenshots below.
Grouped by Country
Grouped by City
Grouped by Age
You can group according to your requirement, like Grouping Active and Deleted items, etc.
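If you also want the rows inside each group ordered, grouping and sorting can be combined on the same PagedCollectionView. A small variation of BindGrid (an illustrative sketch, not part of the downloadable sample) could look like this:
// Group by Country and sort the rows inside each group by Age.
collection = new PagedCollectionView(persons);
collection.GroupDescriptions.Add(new PropertyGroupDescription("Country"));
collection.SortDescriptions.Add(new System.ComponentModel.SortDescription(
    "Age", System.ComponentModel.ListSortDirection.Ascending));
PersonGrid.ItemsSource = collection;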
You can also download the sample code from here.
Cheers
~Brij
|
http://dotnetslackers.com/Community/blogs/bmdayal/archive/2009/08/01/silverlight-3-datagrid-columns-grouping.aspx
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
Thanks for the quick review, John! >> 2) Similarly with the ifs for runtime selection. > > But they are in a (more) inner loop. Aah. Yes. Good point. ... >> I think we also need to settle on one namespace. >> Either DVDV or Accellent (at the moment there are both). > > I never used 'accellent' in my ffmpeg patch, someone else must have > done that. Probably me. The changes in MythTV go well beyond just libavcodec (to configure, and all the calling code), so I think the name crept in there as well, and then down into the lib. > Accellent is just the silly name of the demo app, > DVDV(ideo) is the correct name for the acceleration method. OK. I will start correcting this. Short is good. > -
|
http://ffmpeg.org/pipermail/ffmpeg-devel/2006-August/008109.html
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
The first part of a series of posts that attempt to explain Python by the dissection method.
Background
On reddit a few months ago I was surprised by the popularity of a comment that explained how bound and unbound methods work in Python. One reply pleaded with me to do a series explaining Python at this kind of level, and this is the result.
I find I use things best when I understand how they work — when I understand what something is made of, what the bits are etc. I am very fortunate in having started my interest in programming at the age of about 10 with a 48k, 8-bit computer called the Oric Atmos. The Atmos came with an Advanced User Guide that told you almost everything there was to know about the computer, including a complete ROM disassembly. So, after learning BASIC, I was able to get into 6502 machine code/assembly pretty easily, and started exploring every little hack to get the machine to do my bidding. This gives you an understanding of computers which I think is indispensable. (It helped a lot that the Atmos could be re-booted within a few seconds after I locked it up :-)
I think the approach of understanding what is going on at least one level below your current level of abstraction is often neglected in other areas too.
For example, with web programming, after you have learnt HTML and CSS, I think the next step should be to learn the essentials of HTTP by literally typing commands at a telnet prompt. I have never seen anyone else suggest this method, and would like to have the opportunity to try it out with someone. It's really not that hard (as long as you can type reasonably carefully), and I think it gives you an understanding of what is going on in terms of requests and responses that would eliminate a huge number of security problems and later misunderstandings.
So, my approach to doing advanced Python is to take the bits apart and understand them. Python has an essentially simple execution and object model, which enables you to go a long way just by being an insatiable tinkerer. These articles are intended simply to point out the tinkering tools and guide you past the dead ends. Once you understand these nut and bolts, some more advanced concepts like meta-classes actually become pretty simple.
Disclaimers
- Almost everything here has been learnt simply by playing around, working things out, and looking up docs when I got stuck. I have not invested any time in reading the CPython sources to get a better insight.
- I think I'm probably quite indebted to Advanced Python, but these articles are not based around that presentation, and will cover in some cases less ground, but in a way that allows you to take your own time, and encourages experimentation.
- Sometimes I will tell deliberate fibs out of a desire to keep things simple and focused.
- Sometimes I will tell accidental fibs out of ignorance and laziness. If you challenge me, however, I will invariably claim that these were actually the deliberate fibs described above. Detecting these meta-fibs is left as an exercise for the reader.
Requirements
I'm assuming either moderate knowledge of Python, or basic knowledge of Python with moderate knowledge of programming in general. You need to have used dictionaries, lists, functions, classes, import statements, and should know how to write and run a script and use the Python prompt interactively.
I'm going to use Python 3 for all my examples, because I want these articles to remain relevant longer, and because Python 3 has a slightly simpler and cleaner model in some cases.
However, the vast majority of example code will work just fine with Python 2.X. The biggest problem is the print statement in Python 2.X, which has been turned into a function in Python 3. But if you have Python 2.5 or 2.6, you can just start your script or Python session with from __future__ import print_function, and then almost everything will be identical.
I'm also assuming CPython, but in practice it won't make much difference.
Code samples
With most of the code samples, they are written for executing as stand-alone scripts. However, you can type them at a Python prompt, which means you don't need to use print() to see the objects. The results will be identical, or almost identical depending on what is in your .pythonrc file.
Contents
In this post, I'll cover:
- Executing a script or module
- Compilation
- Module creation
- Top-down execution
- Function definitions
- Digression: locals()
- An exercise
- A lesson
OK, let's get going.
Executing a script or module
What happens when you do python myscript.py? How does your code end up running?
Compilation
Well, the first thing that happens with CPython is that it compiles your code. The result is a myscript.pyc file — you will see these .pyc files littering your folders. Of course, if that file already exists and is up-to-date, Python will just use it rather than re-compile, which is the whole point of creating this file.
This file contains byte-code — a lower level language that can be run on a virtual machine. A virtual machine, in this context, is simply a program that can run these instructions, using the hardware CPU as a model for how it should work, but with much higher level commands than the machine code instructions that a real CPU understands.
In CPython, this virtual machine and the corresponding byte-code work in such a way that we can actually ignore this step. From now on, we will assume that the Python interpreter is simply reading your code one statement at a time and executing it. There are very few occasions where this model of understanding will fail us.
This way of working also means that the Python REPL (i.e. the Python prompt) works in essentially the same way as a script or module. In the examples that follow, you can type the code into a Python prompt or run from a script, and you'll get the same result.
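If you are curious, you can peek at this byte-code with the dis module from the standard library before we put it aside (nothing later depends on this):
import dis

def greet():
    a = "hello"
    print(a)

dis.dis(greet)   # prints the byte-code instructions for the body of greet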
Module creation
After compilation, the first thing that needs to happen is a module needs to be created. A module, like everything in Python, is an object. An object is something that has an identity (think the id function), and some attributes. Now one way to store a bunch of attributes is a dictionary, so like other things in Python, a module is pretty thin wrapper around a dictionary.
The dictionary is initialised almost empty (but not quite). We'll come back to thinking about this dictionary, but for now just file this knowledge away for later reference.
Top down execution
The next thing you need to know is that Python then executes the contents of the module essentially statement by statement.
Now, there are of course different kinds of statements, including function definition, assignment, flow-control etc. and each works in its own way. So the if/elif/else statement only runs a branch if the corresponding condition matches. But we can think about the whole statement being executed when it is reached.
So, let's have an example. Look at the following code:
a = "hello" print(a)
This, very obviously, simply prints hello. But if we switch the order:
print(a) a = "hello"
…we get NameError: name 'a' is not defined. The statement a = "hello" creates a string object, and puts an entry in the module dictionary, and the statement print(a) attempts to look up entry a in the module dictionary and then print it. In the second, this lookup fails, because the assignment hasn't happened yet, hence the NameError.
This simply demonstrates that Python is executing your code a statement at a time. It might be obvious to some, but this simple fact makes Python very different from C, Java, C#, Haskell and others. (The difference could perhaps be better illustrated using two classes in a module, one which inherits from the other - in Python the base class must come first, unlike, say in C#).
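Here is a quick illustration of that parenthetical point about class ordering:
class Derived(Base):   # NameError: name 'Base' is not defined
    pass

class Base:
    pass

Swap the two class statements and the module runs fine, because the Base statement has already been executed, and so the name already exists, by the time Derived needs it.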
Function definitions
The next thing we need to think about is function definitions.
It is crucial to understand that function definitions are also simply statements — with their own peculiar nature, but simple statements nonetheless. When the interpreter reaches a function definition it executes it. That doesn't mean it executes the body of the function — that will wait until the function is called. But the whole statement is executed. The particular nature of a function definition is to construct a function object, and assign it a name in the local namespace.
To summarise: A function definition is an object construction and assignment statement.
So, let's see some code that demonstrates that:
def foo():
    pass

print(foo)
The result will be something like:
<function foo at 0xb73aa32c>
Again, if we switch the lines:
print(foo)

def foo():
    pass
…we get a NameError, as before.
I want you to notice that the def foo line is doing the same job as the a = "hello" line — it creates an object, and creates a name in the module dictionary for that object.
Digression: locals()
I think it's about time we actually look at this module dictionary. It isn't something we just have to imagine.
There are two ways to do this. The first is the builtin function 'locals()'. Its docstring says: "Update and return a dictionary containing the current scope's local variables." So, try the following:
a = "hello" def foo(): pass print(locals())
You'll find something like:
{'a': 'hello', '__builtins__': <module 'builtins' (built-in)>, '__file__': 'test1.py', '__package__': None, '__name__': '__main__', 'foo': <function foo at 0xb7449bec>, '__doc__': None}
You can see the 'a' and 'foo' entries, along with a few other things that got populated automatically.
Now, the fact that locals() can produce this output doesn't mean that the module dictionary necessarily exists — it could create the dictionary on the fly. But there is another way we can test this — via sys.modules.
Try the following:
import sys

a = 1

print(__name__)
print(sys.modules[__name__])
print(sys.modules[__name__].__dict__)
print(id(sys.modules[__name__].__dict__))
print(id(locals()))
sys.modules is where Python stores all the module objects. __name__ is a local variable, created by the Python interpreter and put into each new module dictionary. It is the key of that module in sys.modules. For a script, it is always equal to "__main__", but modules imported by import will get the names you would expect.
If you try the above code in CPython, you will find that the locals() dictionary not only has the same content as the module dictionary, it is the same dictionary (same id).
And yes, it is writable. Go on, I'll wait while you play with very verbose ways of setting local variables (or setting local variables that you can't use because they have illegal names); you know you want to. (By the way, CPython doesn't guarantee that the output of locals() will be a writable dictionary that can be used to update the local namespace, but currently it is.)
locals() will prove invaluable in understanding some other features.
Function definitions - continued
So far I have been proving to you that the module dictionary is real. But we also need to think about function objects, and my claim that a function definition is an object construction and assignment statement.

What kind of object is constructed? Well, a function object of course. What is it constructed from? It's built from byte code that corresponds to the body of the function statement. This byte code is loaded into memory from the time that the Python file is compiled, but it does not become a function until the function statement is executed and the function defined.

Now, the special syntactical form for defining functions allows something unusual to happen. Look at part of the output of locals():
def foo():
    pass

print(locals())
You'll find this:
{ ... 'foo': <function foo at 0xb74a0cec>, ... }
The string 'foo' appears as both the key and as part of the value. This means that function objects have the unusual property of knowing what name they were given — unlike strings and class instances etc. But note that this name is simply the name they were given at the time they were constructed — it doesn't magically update.
So, I'm claiming that the following two statements are equivalent:
def foo(a, b=10):
    pass

foo = make_function('foo', ['a', ('b', 10)], None)
…where make_function is a mythical builtin function that takes the name of the function, a list of arguments (with tuples being used for keyword arguments with default values here), and a 'code object' that is the body of the function, and returns a newly constructed function. (I deliberately chose a trivial function with no body so I didn't need to think of a way of representing the code object.)
Given this equivalence, you'll realise that 'foo' is just another variable in the local namespace. You can give it another name in the local namespace, and it will continue to work. You can even change its internal name, and it won't be bothered (although it will make debugging harder).
Some messing around to drive home the point:
>>> def foo():
...     print(1)
...
>>> foo()
1
>>> bar = foo
>>> bar
<function foo at 0xb74a0d2c>
>>> bar()
1
>>> bar.__name__ = "baz"
>>> bar
<function baz at 0xb74a0d2c>
>>> foo
<function baz at 0xb74a0d2c>
>>> baz
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'baz' is not defined
Exercise
OK, got all that? If so, you should be able to do the following exercise. What does this code do?
print(1)

class DeepThought(object):
    def __init__(self):
        print("Cogito ergo sum")

    def __str__(self):
        return "Deep Thought"

print(2)

def go(val=DeepThought()):
    print(val)

print(3)

go(val=4)
go()
Write down your answer, and make sure you do check, because you might learn something vital that will save you hours in the future!
Lessons
So, what can we do that we couldn't do already?
Well, you know how to destroy the readability of your code by manipulating the locals() dictionary. Yay! Just when you thought your job security had been lost due to the clarity of the code you are writing in Python.
Seriously, the first thing to notice is that if function statements are assignments, they can appear anywhere that assignments do. So I can write code like this:
import sys

if sys.platform == 'win32':
    def some_function():
        # Some special Windows fix here
        pass
else:
    def some_function():
        pass
Assuming some_function is fairly small so there isn't much code duplication, this is much nicer than putting the if inside some_function, since the platform check happens just once (when the module is imported) rather than every time some_function is called. While you may not do this very often, it can be useful when you need it, as in this rather more complicated version in Django.
This understanding also frees you from thinking about functions being 'top level' things (or perhaps 'second level', when they appear in class statements). They can appear anywhere, and this opens up a world of possibilities once you understand it.
In the future, I'm hoping to look at modules, importing and circular imports, class statements and meta-classes. We may have a bit of exploration of stack frames at some point. Let me know if there are particular topics you'd like covered!
|
http://lukeplant.me.uk/blog/posts/dissecting-python-part-1/
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
In article <a932e44b-3566-4c0f-8b63-2dce0a82946b at 34g2000hsh.googlegroups.com>, HCB <hypercaffeinatedbiped at gmail.com> wrote:

> Hello:
>
> The book "Code Complete" recommends that you put only one class in a
> source file, which seems a bit extreme for me. It seems that many
> classes are small, so that putting several of them in a file seems
> reasonable. I noticed that the decimal.py module in the standard
> library has several classes, all of which of course revolve around the
> "decimal" topic. Perhaps a better rule of thumb is "one idea per
> file." I checked the Python style guide and there seems to be no
> mention of this topic. I know this is an elementary question, but what
> is the Python way of doing this?
>
> Thanks for your time.
> HCB

Steve McConnell writes a lot of good stuff. Like most people who write a lot of good stuff, not everything he writes should be taken as gospel. With that in mind...

Consider this. You're slogging through some code in a large project trying to debug a problem when you come upon the line (in pseudo-code):

foo = SomeClass::SomeFunction(bar)

You want to go look at the source for SomeClass. What file do you open to find it? If you follow the "one class per file" rule, the answer is easy; it's in SomeClass.xxx!

That being said, the "one class per file" rule is a means to an end. If your code is written in such a way that it's easy to figure out where the source for a given class/function/whatever is, then you've done the right thing. The specifics of how you do that depend on the language you're using. In Python (as other posters have pointed out), modules give you a good way to group logical collections of classes (etc). If you see in some code:

foo = some_module.SomeClass.SomeFunction(bar)

you should instantly know that the source code is in some_module.py. Perhaps one way to look at this is that Python automatically follows the "one class per file" rule, except that "class" is crossed out and "module" written in with crayon.

On the other hand, if you prefer self-abuse, Python gives you plenty of ways to do that. If you preface all your source files with:

from some_module import *
from some_other_module import *
from yet_a_third_module import *
from another_module import *

and then see

foo = SomeClass.SomeFunction(bar)

you're right back to having no clue what file to open to find the source for SomeClass. Global namespace is sacred; treat it with respect!
|
http://mail.python.org/pipermail/python-list/2008-September/489253.html
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
Managed Data Access Inside SQL Server with ADO.NET and SQLCLR
Pablo Castro
Microsoft Corporation
April 2005
Applies to:
Microsoft SQL Server 2005
Microsoft .NET Framework 2.0
ADO.NET
Summary: Using the new SQLCLR feature, managed code can use ADO.NET when running inside SQL Server 2005. Learn about SQLCLR via basic scenarios of in-process data access, SQLCLR constructs, and their interactions. (26 printed pages)
Contents
Introduction
Part I: The Basics
Why Do Data Access Inside SQLCLR?
Getting Started with ADO.NET Inside SQLCLR
The Context Connection
Using ADO.NET in Different SQLCLR Objects
When Not to Use SQLCLR + ADO.NET
Part II: Advanced Topics
More on Connections
Transactions
Conclusion
Acknowledgements
Introduction
This white paper discusses how managed code can use ADO.NET when running inside SQL Server 2005 using the new SQLCLR feature.
In Part I, I describe the basic scenarios in which in-process data access might be required, and the difference between local and remote connections. Most of the different SQLCLR constructs are covered, such as stored procedures and functions, as well as interesting aspects of the interaction between the data access infrastructure and those constructs.
Part II further details in-process connections and restrictions that apply to ADO.NET when running inside SQLCLR. Finally, there is a detailed discussion on the transactions semantics of data access code inside SQLCLR, and how can it interact implicitly and explicitly with the transactions API.
Part I: The Basics
Why Do Data Access Inside SQLCLR?
SQL Server 2005 is highly integrated with the .NET Framework, enabling the creation of stored procedures, functions, user-defined types, and user-defined aggregates using your favorite .NET programming language. All of these constructs can take advantage of large portions of the .NET Framework infrastructure, the base class library, and third-party managed libraries.
In many cases the functionality of these pieces of managed code will be very computation-oriented. Things such as string parsing or scientific math are quite common in the applications that our early adopters are creating using SQLCLR.
However, you can do only so much using only number crunching and string manipulation algorithms in isolation. At some point you'll have to obtain your input or return results. If that information is relatively small and granular you can use input and output parameters or return values, but if you're handling a large volume of information, then in-memory structures won't be an appropriate representation/transfer mechanism; a database might be a better fit in those scenarios. If you choose to store the information in a database, then SQLCLR and a data-access infrastructure are the tools you'll need.
There are a number of scenarios where you'll need database access when running inside SQLCLR. One is the scenario I just mentioned, where you have to perform some computation over a potentially large set of data. The other is integration across systems, where you need to talk to different servers to obtain an intermediate answer in order to proceed with a database-related operation.
Note For an introduction and more details about SQLCLR in general, see Using CLR Integration in SQL Server 2005.
Now, if you're writing managed code and you want to do data access, what you need is ADO.NET.
Getting Started with ADO.NET Inside SQLCLR
The good news is that ADO.NET "just works" inside SQLCLR, so in order to get started you can leverage all your existing knowledge of ADO.NET.
To illustrate this take a look at the code snippet below. It would work fine in a client-side application, Web application, or a middle-tier component; it turns out that it will also work just fine inside SQLCLR.
C#
        // computation with it
    }
}

Visual Basic .NET
Dim cmd as SqlCommand
Dim r as SqlDataReader
    ' computation with it
Loop
End Using
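As a concrete sketch of that pattern, the C# version would look roughly like this; the connection string and query here are placeholders rather than part of the original listing:

// Sketch only: server name, credentials and query are assumed placeholders.
using (SqlConnection conn = new SqlConnection(
    "server=MyServer; database=AdventureWorks; user id=MyUser; password=MyPassword"))
{
    conn.Open();
    SqlCommand cmd = new SqlCommand("SELECT Name FROM Purchasing.Vendor", conn);
    using (SqlDataReader r = cmd.ExecuteReader())
    {
        while (r.Read())
        {
            // computation with it
        }
    }
}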
This sample uses the System.Data.SqlClient provider to connect to SQL Server. Note that if this code runs inside SQLCLR, it would be connecting from the SQL Server that hosts it to another SQL Server. You can also connect to different data sources. For example, you can use the System.Data.OracleClient provider to connect to an Oracle server directly from inside SQL Server.
For the most part, there are no major differences using ADO.NET from within SQLCLR. However, there is one scenario that needs a little bit more attention: what if you want to connect to the same server your code is running in to retrieve/alter data? See The Context Connection section to see how ADO.NET addresses that.
Before delving into further detail, I'd like to go through the basic steps to run code inside SQLCLR. If you already have experience creating SQLCLR stored procedures, you'll probably want to skip this section.
Creating a managed stored procedure that uses ADO.NET from Visual Studio
Visual Studio 2005 includes great integration with SQL Server 2005 and makes it really easy to create and deploy SQLCLR projects. Let's create a new managed stored procedure that uses ADO.NET using Visual Studio 2005.
- Create a SQLCLR project. The starting point for SQLCLR in Visual Studio is a database project. You need to create a new project by selecting the database project category under your favorite language. Next, select the project type called SQL Server Project, give it a name, and let Visual Studio create the project.
- Set permissions to EXTERNAL_ACCESS. Go to the properties of the project (right-click on the project node, choose Properties), choose the Database tab, and then from the Permission Level combo-box, choose External.
- Add a stored-procedure to the project. Once the project is created you can right-click on the project node and select Add -> New Item. On the pop-up dialog you'll see all the different SQLCLR objects that you can create. Select Stored Procedure, give it a name, and let Visual Studio create it. Visual Studio will create a template stored procedure for you.
- Customize the Visual Studio template. Visual Studio will generate a template for a managed stored procedure for you. Since you want to use SqlClient (the .NET data access provider for SQL Server) to connect to another SQL Server in this sample, you'll need to add at least one more using (C#) or imports (Visual Basic) statement for System.Data.SqlClient.
- Code the stored procedure body. Now you need the code for the stored procedure. Let's say you want to connect to another SQL Server (remember, your code will be running inside SQL Server), obtain some information based on input data, and process the results. Note that Visual Studio generated a SqlProcedure attribute for the stored procedure method; it is used by the Visual Studio deployment infrastructure, so leave it in place. If you take a look at the proceeding code you'll notice that there is nothing different from old fashioned ADO.NET code that would run in the client or middle-tier. We love that part :)
C#
using System.Data;
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure()]

Visual Basic .NET
Imports Microsoft.SqlServer.Server

Partial Public Class StoredProcedures
    <Microsoft.SqlServer.Server.SqlProcedure()> _
- Deploy your assembly. Now you need to deploy your stored procedure in SQL Server. Visual Studio makes it trivial to deploy the assembly to SQL Server and take the appropriate steps to register each of the objects in the assembly with the server. After building the project, on the Build menu, choose Deploy Solution. Visual Studio will connect to SQL Server, drop previous versions of the assembly if needed, send the new assembly to the server and register it, and then register the stored procedure that you added to the assembly.
- Try it out. You can even customize the "test.sql" file that's generated under the "Test Scripts" project folder to exercise the stored procedure you're working on so Visual Studio will execute it when you press Ctrl+F5, or just press F5. (Yes, F5 will start the debugger, and you can debug code inside SQLCLR—both T-SQL and CLR code—isn't that cool?)
Creating a managed stored procedure that uses ADO.NET using the SDK only
If you don't have Visual Studio 2005 handy, or you'd like to see how things work the first time before letting Visual Studio do it for you, here is how to create a SQLCLR stored procedure by hand.
First, you need the code for the stored procedure. Let's say you want to do the same as in the Visual Studio example: connect to another SQL Server, obtain some information based on input data, and process the results.
C#
using System.Data;
using System.Data.SqlClient;

public class SP
{

Visual Basic .NET
Public Class SP
Again, nothing different from old fashioned ADO.NET :)
Now you need to compile your code to produce a DLL assembly containing the stored procedure. The following command will do it (assuming that you called your file myprocs.cs/myprocs.vb and the .NET Framework 2.0 is in your path):
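For the C# version this would presumably be something along the lines of the following (vbc would be used instead for myprocs.vb):

csc /target:library myprocs.cs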
This will compile your code and produce a new DLL called myprocs.dll. You need to register it with the server. Let's say you put myprocs.dll in c:\temp, here are the SQL statements required to install the stored procedure in SQL Server from that path. You can run this either from SQL Server Management Studio or from the sqlcmd command-line utility:
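A sketch of the usual registration pattern follows; the procedure name, its parameter list and the method name are assumptions, since the procedure body itself is not spelled out above:

CREATE ASSEMBLY myprocs FROM 'c:\temp\myprocs.dll'
WITH PERMISSION_SET = EXTERNAL_ACCESS
GO

CREATE PROCEDURE SampleSP @rating INT
AS EXTERNAL NAME myprocs.SP.SampleSP
GO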
The EXTERNAL_ACCESS permission set is required because the code is accessing an external resource, in this case another SQL Server. The default permission set (SAFE) does not allow external access.
If you make changes to your stored procedure later on, you can refresh the assembly in SQL Server without dropping and recreating everything, assuming that you didn't change the public interface (e.g., changed the type/number of parameters). In the scenario presented here, after recompiling the DLL, you can simply execute:
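That refresh is an ALTER ASSEMBLY of roughly this form (path as in the example above):

ALTER ASSEMBLY myprocs FROM 'c:\temp\myprocs.dll'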
The Context Connection
One data-access scenario that you can expect to be relatively common is that you'll want to access the same server where your CLR stored procedure or function is executing.
One option for that is to create a regular connection using SqlClient, specify a connection string that points to the local server, and open it.
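A sketch of that option (server, database and credentials are placeholders):

using (SqlConnection conn = new SqlConnection(
    "server=.; database=AdventureWorks; user id=MyUser; password=MyPassword"))
{
    conn.Open();
    // use the connection as usual
}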
Now you have a connection. However, this is a separate connection; this implies that you'll have to specify credentials for logging in—it will be a different database session, it may have different SET options, it will be in a separate transaction, it won't see your temporary tables, etc.
In the end, if your stored procedure or function code is running inside SQLCLR, it is because someone connected to this SQL Server and executed some SQL statement to invoke it. You'll probably want that connection, along with its transaction, SET options, and so on. It turns out that you can get to it; it is called the context connection.
The context connection lets you execute SQL statements in the same context that your code was invoked in the first place. In order to obtain the context connection you simply need to use the new context connection connection string keyword, as in the example below:
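For example (C# shown; the keyword is the whole connection string):

using (SqlConnection conn = new SqlConnection("context connection=true"))
{
    conn.Open();
    // commands created on conn now run in the caller's context
}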
In order to see whether your code is actually running in the same connection as the caller, you can do the following experiment: use a SqlConnection object and compare the SPID (the SQL Server session identifier) as seen from the caller and from within the connection. The code for the procedure looks like this:
C#
using System.Data;
using System.Data.SqlClient;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure()]
    public static void SampleSP(string connstring, out int spid)
    {
        using (SqlConnection conn = new SqlConnection(connstring))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand("SELECT @@SPID", conn);
            spid = (int)cmd.ExecuteScalar();
        }
    }
}

Visual Basic .NET
Imports System.Data
Imports System.Data.SqlClient
Imports Microsoft.SqlServer.Server

Partial Public Class StoredProcedures
    <Microsoft.SqlServer.Server.SqlProcedure()> _
    Public Shared Sub SampleSP(ByVal connstring As String, ByRef spid As Integer)
        Using conn As New SqlConnection(connstring)
            conn.Open()
            Dim cmd As New SqlCommand("SELECT @@SPID", conn)
            spid = CType(cmd.ExecuteScalar(), Integer)
        End Using
    End Sub
End Class
After compiling and deploying this in SQL Server, you can try it out:
-- Print the SPID as seen by this connection
PRINT @@SPID

-- Now call the stored proc and see what SPID we get for a regular connection
DECLARE @id INT
EXEC SampleSP 'server=.;user id=MyUser; password=MyPassword', @id OUTPUT
PRINT @id

-- Call the stored proc again, but now use the context connection
EXEC SampleSP 'context connection=true', @id OUTPUT
PRINT @id
You'll see that the first and last SPID will match, because they're effectively the same connection. The second SPID is different because a second connection (which is a completely new connection) to the server was established.
Using ADO.NET in Different SQLCLR Objects
The "context" object
As you'll see in the following sections that cover different SQLCLR objects, each one of them will execute in a given server "context." The context represents the environment where the SQLCLR code was activated, and allows code running inside SQLCLR to access appropriate run-time information based on what kind of SQLCLR object it is.
The top-level object that surfaces the context is the SqlContext class that's defined in the Microsoft.SqlServer.Server namespace.
Another object that's available most of the time is the pipe object, which represents the connection to the client. For example, in T-SQL you can use the PRINT statement to send a message back to the client (if the client is SqlClient, it will show up as a SqlConnection.InfoMessage event). You can do the same in SQLCLR by using the SqlPipe object:
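For example, a call along these lines (the message text is arbitrary):

SqlContext.Pipe.Send("Hello from inside SQLCLR");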
Stored procedures
All of the samples I used above were based on stored procedures. Stored procedures can be used to obtain and change data both on the local server and in remote data sources.
Stored procedures can also send results to the client, just like T-SQL stored procedures do. For example, in T-SQL you can have a stored procedure that does this:
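A representative T-SQL version (reconstructed from the managed rewrite shown a little further down, so the exact body is an assumption):

CREATE PROCEDURE SampleSP @rating INT
AS
    SELECT VendorID, AccountNumber, Name
    FROM Purchasing.Vendor
    WHERE CreditRating <= @rating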
A client running this stored procedure will see result sets coming back to the client (i.e., you would use ExecuteReader and get a SqlDataReader back if you were using ADO.NET in the client as well).
Managed stored procedures can return result sets, too. For stored procedures that are dominated by set-oriented statements such as the example above, using T-SQL is always a better choice. However, if you have a stored procedure that does a lot of computation-intensive work or uses a managed library and then returns some results, it may make sense to use SQLCLR. Here is the same procedure rewritten in SQLCLR:
C#
using System.Data;
using System.Data.SqlClient;
using System.Transactions;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure()]
    public static void SampleSP(int rating)
    {
        using (SqlConnection conn = new SqlConnection("context connection=true"))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "SELECT VendorID, AccountNumber, Name FROM Purchasing.Vendor " +
                "WHERE CreditRating <= @rating", conn);
            cmd.Parameters.AddWithValue("@rating", rating);
            // execute the command and send the results directly to the client
            SqlContext.Pipe.ExecuteAndSend(cmd);
        }
    }
}

Visual Basic .NET
Imports System.Data
Imports System.Data.SqlClient
Imports System.Transactions
Imports Microsoft.SqlServer.Server

Partial Public Class StoredProcedures
    <Microsoft.SqlServer.Server.SqlProcedure()> _
    Public Shared Sub SampleSP(ByVal rating As Integer)
        Dim cmd As SqlCommand
        ' connect to the context connection
        Using conn As New SqlConnection("context connection=true")
            conn.Open()
            cmd = New SqlCommand( _
                "SELECT VendorID, AccountNumber, Name FROM Purchasing.Vendor " & _
                "WHERE CreditRating <= @rating", conn)
            cmd.Parameters.AddWithValue("@rating", rating)
            ' execute the command and send the results directly to the client
            SqlContext.Pipe.ExecuteAndSend(cmd)
        End Using
    End Sub
End Class
The example above shows how to send the results from a SQL query back to the client. However, it's very likely that you'll also have stored procedures that produce their own data (e.g., by performing some computation locally or by invoking a Web service) and you'll want to return that data to the client as a result set. That's also possible using SQLCLR. Here is a trivial example:
C#
using System.Data;
using System.Data.SqlClient;
using System.Transactions;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure()]
    public static void SampleSP()
    {
        // simply produce a 10-row result-set with 2 columns, an int and a string
        // first, create the record and specify the metadata for the results
        SqlDataRecord rec = new SqlDataRecord(
            new SqlMetaData("col1", SqlDbType.NVarChar, 100),
            new SqlMetaData("col2", SqlDbType.Int));

        // start a new result-set
        SqlContext.Pipe.SendResultsStart(rec);

        // send rows
        for (int i = 0; i < 10; i++)
        {
            // set values for each column for this row
            // This data would presumably come from a more "interesting" computation
            rec.SetString(0, "row " + i.ToString());
            rec.SetInt32(1, i);
            SqlContext.Pipe.SendResultsRow(rec);
        }

        // complete the result-set
        SqlContext.Pipe.SendResultsEnd();
    }
}

Visual Basic .NET
Imports System.Data
Imports System.Data.SqlClient
Imports System.Transactions
Imports Microsoft.SqlServer.Server

Partial Public Class StoredProcedures
    <Microsoft.SqlServer.Server.SqlProcedure()> _
    Public Shared Sub SampleSP()
        ' simply produce a 10-row result-set with 2 columns, an int and a string
        ' first, create the record and specify the metadata for the results
        Dim rec As New SqlDataRecord( _
            New SqlMetaData("col1", SqlDbType.NVarChar, 100), _
            New SqlMetaData("col2", SqlDbType.Int))

        ' start a new result-set
        SqlContext.Pipe.SendResultsStart(rec)

        ' send rows
        Dim i As Integer
        For i = 0 To 9
            ' set values for each column for this row
            ' This data would presumably come from a more "interesting" computation
            rec.SetString(0, "row " & i.ToString())
            rec.SetInt32(1, i)
            SqlContext.Pipe.SendResultsRow(rec)
        Next

        ' complete the result-set
        SqlContext.Pipe.SendResultsEnd()
    End Sub
End Class
User-defined functions
User-defined scalar functions already existed in previous versions of SQL Server. In SQL Server 2005 scalar functions can also be created using managed code, in addition to the already existing option of using T-SQL. In both cases the function is expected to return a single scalar value.
SQL Server assumes that functions do not cause side effects. That is, functions should not change the state of the database (no data or metadata changes). For T-SQL functions, this is actually enforced by the server, and a run-time error would be generated if a side-effecting operation (e.g. executing an UPDATE statement) were attempted.
The same restrictions (no side effects) apply to managed functions. However, there is less enforcement of this restriction. If you use the context connection and try to execute a side-effecting T-SQL statement through it (e.g., an UPDATE statement) you'll get a SqlException from ADO.NET. However, we cannot detect it when you perform side-effecting operations through regular (non-context) connections. In general it is better to play it safe and not perform side-effecting operations from functions unless you have a very clear understanding of the implications.
Also, functions cannot return result sets to the client as stored procedures do.
Table-valued user-defined functions
T-SQL table-valued functions (or TVFs) existed in previous versions of SQL Server. In SQL Server 2005 we support creating TVFs using managed code. We call table-valued functions created using managed code "streaming table-valued functions," or streaming TVFs for short.
- They are "table-valued" because they return a relation (a result set) instead of a scalar. That means that they can, for example, be used in the FROM part of a SELECT statement.
- They are "streaming" because after an initialization step, the server will call into your object to obtain rows, so you can produce them based on server demand, instead of having to create all the result in memory first and then return the whole thing to the database.
Here is a very simple example of a function that takes a single string with a comma-separated list of words and returns a single-column result-set with a row for each word.
C#
using System.Collections;
using System.Data;
using System.Data.SqlClient;
using System.Transactions;
using Microsoft.SqlServer.Server;

public partial class Functions
{
    [Microsoft.SqlServer.Server.SqlFunction(FillRowMethodName="FillRow")]
    // if you're using VS then add the following property setter to
    // the attribute above: TableDefinition="s NVARCHAR(4000)"
    public static IEnumerable ParseString(string str)
    {
        // Split() returns an array, which in turn
        // implements IEnumerable, so we're done :)
        return str.Split(',');
    }

    public static void FillRow(object row, out string str)
    {
        // "crack" the row into its parts. this case is trivial
        // because the row is only made of a single string
        str = (string)row;
    }
}

Visual Basic .NET
Imports System.Collections
Imports System.Data
Imports System.Data.SqlClient
Imports System.Transactions
Imports System.Runtime.InteropServices
Imports Microsoft.SqlServer.Server

Partial Public Class Functions
    <Microsoft.SqlServer.Server.SqlFunction(FillRowMethodName:="FillRow")> _
    ' if you're using VS then add the following property setter to
    ' the attribute above: TableDefinition:="s NVARCHAR(4000)"
    Public Shared Function ParseString(ByVal str As String) As IEnumerable
        ' Split() returns an array, which in turn
        ' implements IEnumerable, so we're done :)
        Return Split(str, ",")
    End Function

    Public Shared Sub FillRow(ByVal row As Object, <Out()> ByRef str As String)
        ' "crack" the row into its parts. this case is trivial
        ' because the row is only made of a single string
        str = CType(row, String)
    End Sub
End Class
If you're using Visual Studio, simply deploy the assembly with the TVF. If you're doing this by hand, execute the following to register the TVF (assuming you already registered the assembly):
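Registration takes roughly this shape; the assembly name MyAssembly is an assumption:

CREATE FUNCTION ParseString(@str NVARCHAR(4000))
RETURNS TABLE (s NVARCHAR(4000))
AS EXTERNAL NAME MyAssembly.Functions.ParseString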
Once registered, you can give it a try by executing this T-SQL statement:
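For example, something like the following (the input string is arbitrary):

SELECT s FROM dbo.ParseString(N'one,two,three')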
Now, what does this have to do with data access? Well, it turns out there are a couple of restrictions to keep in mind when using ADO.NET from a TVF:
- TVFs are still functions, so the side-effect restrictions also apply to them.
- You can use the context connection in the initialization method (e.g., ParseString in the example above), but not in the method that fills rows (the method pointed to by the FillRowMethodName attribute property).
- You can use ADO.NET with regular (non-context) connections in both initialization and fill-row methods. Note that performing queries or other long-running operations in the fill-row method can seriously impact the performance of the SELECT statement that uses the TVF.
Triggers
Creating triggers is in many aspects very similar to creating stored procedures. You can use ADO.NET to do data access from a trigger just like you would from a stored procedure.
For the triggers case, however, you'll typically have a couple of extra requirements:
- You'll want to "see" the changes that caused the trigger to fire. In a T-SQL trigger you'd typically do this by using the INSERTED and DELETED tables. For a managed trigger the same still applies: as long as you use the context connection, you can reference the INSERTED and DELETED tables from your SQL statements that you execute in the trigger using a SqlCommand object.
- You'll want to be able to tell which columns changed. You can use the IsUpdatedColumn() method of the SqlTriggerContext class to check whether a given column has changed. An instance of SqlTriggerContext is available off of the SqlContext class when the code is running inside a trigger; you can access it using the SqlContext.TriggerContext property.
Another common practice is to use a trigger to validate the input data, and if it doesn't pass the validation criteria, then abort the operation. You can also do this from managed code by simply using this statement:
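Given the Transactions discussion in Part II, the statement in question is presumably:

System.Transactions.Transaction.Current.Rollback();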
Wow, what happened there? It is simple thanks to the tight integration of SQLCLR with the .NET Framework. See the Part II: Advanced Topics section, Transactions, for more information.
When Not to Use SQLCLR + ADO.NET
Don't just wrap SQL
If you have a stored procedure that only executes a query, then it's always better to write it in T-SQL. Writing it in SQLCLR will take more development time (you have to write T-SQL code for the query and managed code for the procedure) and it will be slower at run-time.
Whenever you use SQLCLR to simply wrap a relatively straightforward piece of T-SQL code, you'll get worse performance and extra maintenance cost. SQLCLR is better when there is actual work other than set-oriented operations to be done in the stored procedure or function.
Note The samples in this article are always parts of stored procedures or functions, and are never complete real-world components of production-level databases. I only include the minimum code necessary to exercise the ADO.NET API I am describing. That's why many of the samples contain only data-access code. In practice, if your stored procedure or function only has data-access code, then you should double-check and see if you could write it in T-SQL.
Avoid procedural row processing if set-oriented operations can do it
Set-oriented operations can be very powerful. Sometimes it can be tricky to get them right, but once they are there, the database engine has a lot of opportunities to understand what you want to do based on the SQL statement that you provide, and it can perform deep, sophisticated optimizations on your behalf.
So in general it's a good thing to process rows using set-oriented statements such as UPDATE, INSERT and DELETE.
Good examples of this are:
- Avoid row-by-row scans and updates. If at all possible, it's much better to try to write a more sophisticated UPDATE statement.
- Avoid custom aggregation of values by explicitly opening a SqlDataReader and iterating over the values. Either use the built-in aggregation functions (SUM, AVG, MIN, MAX, etc.) or create user-defined aggregates.
There are of course some scenarios where row-by-row processing using procedural logic makes sense. It's mostly a matter of making sure that you don't end up doing row-by-row processing for something that could be expressed in a single SQL statement.
Part II: Advanced Topics
More on Connections
Choosing between regular and context connections
If you're connecting to a remote server, you'll always be using regular connections. On the other hand, if you need to connect to the same server you're running a function or stored procedure on, in most cases you'll want to use the context connection. As I mentioned above, there are several reasons for this, such as running in the same transaction space, and not having to reauthenticate.
Additionally, using the context connection will typically result in better performance and less resource utilization. The context connection is an in-process–only connection, so it can talk to the server "directly", meaning that it doesn't need to go through the network protocol and transport layer to send SQL statements and receive results. It doesn't need to go through the authentication process either.
There are some cases where you may need to open a separate regular connection to the same server. For example, there are certain restrictions in using the context connection described in the Restrictions for the context connection section.
What do you mean by connect "directly" to the server?
I mentioned before that the context connection could connect "directly" to the server and bypass the network protocol and transport layers. Figure 1 represents the primary components of the SqlClient managed provider, as well as how the different components interact with each other when using a regular connection, and when using the context connection.
Figure 1. Connection processes
As you can see, the context connection follows a shorter code path and involves fewer components. Because of that, you can expect the context connection to get to the server and back faster than a regular connection. Query execution time will be the same, of course, because that work needs to be done regardless of how the SQL statement reaches the server.
Restrictions for the context connection
Here are the restrictions that apply to the context connection that you'll need to take into account when using it in your application:
- You can have only one context connection opened at a given time for a given connection.
- Of course, if you have multiple statements running concurrently in separate connections, each one of them can get their own context connection. The restriction doesn't affect concurrent requests from different connections; it only affects a given request on a given connection.
- MARS (Multiple Active Result Sets) is not supported in the context connection.
- The SqlBulkCopy class will not operate on a context connection.
- We do not support update batching in the context connection.
- SqlNotificationRequest cannot be used with commands that will execute against the context connection.
- We don't support canceling commands that are running against the context connection. SqlCommand.Cancel() will silently ignore the request.
- No other connection string keywords can be used when you use context connection=true.
Some of these restrictions are by design and are the result of the semantics of the context connection. Others are actually implementation decisions that we made for this release and we may decide to relax those restrictions in a future release based on customer feedback.
Restrictions on regular connections inside SQLCLR
For those cases where you decide to use regular connections instead of the context connection, there are a few limitations to keep in mind.
Pretty much all the functionality of ADO.NET is available inside SQLCLR; however, there are a few specific features that either do not apply, or we decided not to support, in this release. Specifically, asynchronous command execution and the SqlDependency object and related infrastructure are not supported.
Credentials for connections
You probably noticed that all the samples I've used so far use SQL authentication (user id and password) instead of integrated authentication. You may be wondering why I do that if we always strongly suggest using integrated authentication.
It turns out that it's not that straightforward to use inside SQLCLR. There are a couple of considerations that need to be kept in mind before using integrated authentication.
First of all, no client impersonation happens by default. This means that when SQL Server invokes your CLR code, it will be running under the account of the SQL Server service. If you use integrated authentication, the "identity" that your connections will have will be that of the service, not the one from the connecting client. In some scenarios this is actually intended and it will work fine. In many other scenarios this won't work. For example, if your SQL Server runs as "local system", then you won't be able to login to remote servers using integrated authentication.
Note Skip these next two paragraphs if you want to avoid a headache :)
In some cases you may want to impersonate the caller by using the SqlContext.WindowsIdentity property instead of running as the service account. For those cases we expose a WindowsIdentity instance that represents the identity of the client that invoked the calling code. This is only available when the client used integrated authentication in the first place (otherwise we don't know the Windows identity of the client). Once you obtained the WindowsIdentity instance you can call Impersonate to change the security token of the thread, and then open ADO.NET connections on behalf of the client.
It gets more complicated. Even if you obtained the instance, by default you cannot propagate that instance to another computer; Windows security infrastructure restricts that by default. There is a mechanism called "delegation" that enables propagation of Windows identities across multiple trusted computers. You can learn more about delegation in the TechNet article, Kerberos Protocol Transition and Constrained Delegation.
Transactions
Let's say you have a managed stored procedure called SampleSP that has the following code:
C#
// as usual, connection strings shouldn't be hardcoded for production code
using (SqlConnection conn = new SqlConnection(
    "server=MyServer; database=AdventureWorks; user id=MyUser; password=MyPassword"))
{
    conn.Open();
    // insert a hardcoded row for this sample
    SqlCommand cmd = new SqlCommand("INSERT INTO HumanResources.Department " +
        "(Name, GroupName) VALUES ('Databases', 'IT'); SELECT SCOPE_IDENTITY()", conn);
    outputId = (int)cmd.ExecuteScalar();
}

Visual Basic .NET
Dim cmd as SqlCommand
' as usual, connection strings shouldn't be hardcoded for production code
Using conn As New SqlConnection( _
    "server=MyServer; database=AdventureWorks; user id=MyUser; password=MyPassword")
    conn.Open()
    ' insert a hardcoded row for this sample
    cmd = New SqlCommand("INSERT INTO HumanResources.Department " _
        & "(Name, GroupName) VALUES ('Databases', 'IT'); SELECT SCOPE_IDENTITY()", conn)
    outputId = CType(cmd.ExecuteScalar(), Integer)
End Using
What happens if you do this in T-SQL?
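A representative T-SQL block for this experiment; the UPDATE statement is a placeholder, and the parameter shape of SampleSP is also an assumption based on the outputId variable in its body:

BEGIN TRAN
UPDATE HumanResources.Department SET GroupName = GroupName WHERE Name = 'Databases'
DECLARE @id INT
EXEC SampleSP @id OUTPUT
PRINT @id
ROLLBACK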
Since you did a BEGIN TRAN first, it's clear that the ROLLBACK statement will undo the work done by the UPDATE from T-SQL. But the stored procedure created a new ADO.NET connection to another server and made a change there, what about that change? Nothing to worry about—we'll detect that the code established ADO.NET connections to remote servers, and by default we'll transparently take any existing transaction with the connection and have all servers your code connects to participate in a distributed transaction. This even works for non-SQL Server connections!
How do we do this? We have GREAT integration with System.Transactions.
System.Transactions + ADO.NET + SQLCLR
System.Transactions is a new namespace that's part of the 2.0 release of the .NET Framework. It contains a new transactions framework that will greatly extend and simplify the use of local and distributed transactions in managed applications.
For an introduction to System.Transactions and ADO.NET, see the MSDN Magazine article, Data Points: ADO.NET and System.Transactions, and the MSDN TV episode, Introducing System.Transactions in .NET Framework 2.0.
ADO.NET and SQLCLR are tightly integrated with System.Transactions to provide a unified transactions API across the .NET Framework.
Transaction promotion
After reading about all the magic around distributed transactions for procedures, you may be thinking about the huge overhead this implies. It turns out that it's not bad at all.
When you invoke managed stored procedures within a database transaction, we flow the transaction context down into the CLR code.
As I mentioned before, the context connection is literally the same connection, so the same transaction applies and no extra overhead is involved.
On the other hand, if you're opening a connection to a remote server, that's clearly not the same connection. When you open an ADO.NET connection, we automatically detect that there is a database transaction that came with the context and "promote" the database transaction into a distributed transaction; then we enlist the connection to the remote server into that distributed transaction so everything is now coordinated. And this extra cost is only paid if you use it; otherwise it's only the cost of a regular database transaction. Cool stuff, huh?
Note A similar transaction promotion feature is also available with ADO.NET and System.Transactions when used from the client and middle-tier scenarios. Consult the documentation on MSDN for further details.
Accessing the current transaction
At this point you may be wondering, "How do they know that there is a transaction active in the SQLCLR code to automatically enlist ADO.NET connections?" It turns out that integration goes deeper.
Outside of SQL Server, the System.Transactions framework exposes the concept of a "current transaction," which is available through System.Transaction.Current. We basically did the same thing inside the server.
If a transaction was active at the point where SQLCLR code is entered, then the transaction will be surfaced to the SQLCLR API through the System.Transactions.Transaction class. Specifically, Transaction.Current will be non-null.
In most cases you don't need to access the transaction explicitly. For database connections, ADO.NET will check Transaction.Current automatically during connection Open() and it will enlist the connection in that transaction transparently (unless you add enlist=false to the connection string).
There are a few scenarios where you might want to use the transaction object directly:
- If you want to abort the external transaction from within your stored procedure or function. In this case, you can simply call Transaction.Current.Rollback().
- If you want to enlist a resource that doesn't do automatic enlistment, or for some reason wasn't enlisted during initialization.
- You may want to enlist yourself in the transaction, perhaps to be involved in the voting process or just to be notified when voting happens.
Note that although I used a very explicit example where I do a BEGIN TRAN, there are other scenarios where your SQLCLR code can be invoked inside a transaction and Transaction.Current will be non-null. For example, if you invoke a managed user-defined function within an UPDATE statement, it will happen within a transaction even if one wasn't explicitly started.
Using System.Transactions explicitly
If you have a block of code that needs to execute within a transaction even if the caller didn't start one, you can use the System.Transactions API. This is, again, the same code you'd use in the client or middle-tier to manage a transaction. For example:
C#
using System.Data;
using System.Data.SqlClient;
using System.Transactions;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure()]
    public static void SampleSP()
    {
        // start a transaction block
        using (TransactionScope tx = new TransactionScope())
        {
            // connect to the context connection
            using (SqlConnection conn = new SqlConnection("context connection=true"))
            {
                conn.Open();
                // do some changes to the local database
            }

            // connect to the remote database
            using (SqlConnection conn = new SqlConnection(
                "server=MyServer; database=AdventureWorks;" +
                "user id=MyUser; password=MyPassword"))
            {
                conn.Open();
                // do some changes to the remote database
            }

            // mark the transaction as complete
            tx.Complete();
        }
    }
}

Visual Basic .NET
Imports System.Data
Imports System.Data.SqlClient
Imports System.Transactions
Imports Microsoft.SqlServer.Server

Partial Public Class StoredProcedures
    <Microsoft.SqlServer.Server.SqlProcedure()> _
    Public Shared Sub SampleSP()
        ' start a transaction block
        Using tx As New TransactionScope()
            ' connect to the context connection
            Using conn As New SqlConnection("context connection=true")
                conn.Open()
                ' do some changes to the local database
            End Using

            ' connect to a remote server (don't hardcode the conn string in real code)
            Using conn As New SqlConnection("server=MyServer; database=AdventureWorks;" & _
                "user id=MyUser; password=MyPassword")
                conn.Open()
                ' do some changes to the remote database
            End Using

            ' mark the transaction as completed
            tx.Complete()
        End Using
    End Sub
End Class
The sample above shows the simplest way of using System.Transactions. Simply put a transaction scope around the code that needs to be transacted. Note that towards the end of the block there is a call to the Complete method on the scope indicating that this piece of code executed its part successfully and it's OK with committing this transaction. If you want to abort the transaction, simply don't call Complete.
The TransactionScope object will do the "right thing" by default. That is, if there was already a transaction active, then the scope will happen within that transaction; otherwise, it will start a new transaction. There are other overloads that let you customize this behavior.
The pattern is fairly simple: the transaction scope will either pick up an already active transaction or will start a new one. In either case, since it's in a "using" block, the compiler will introduce a call to Dispose at the end of the block. If the scope saw a call to Complete before reaching the end of the block, then it will vote commit for the transaction; on the other hand, if it didn't see a call to Complete (e.g., an exception was thrown somewhere in the middle of the block), then it will rollback the transaction automatically.
Note For the SQL Server 2005 release, the TransactionScope object will always use distributed transactions when running inside SQLCLR. This means that if there wasn't a distributed transaction already, the scope will cause the transaction to promote. This will cause overhead if you only connect to the local server; in that case, SQL transactions will be lighter weight. On the other hand, in scenarios where you use several resources (e.g., connections to remote databases), the transaction will have to be promoted anyway, so there is no additional overhead.
I recommend not using TransactionScope if you're going to connect only using the context connection.
Using SQL transactions in your SQLCLR code
Alternatively, you can still use regular SQL transactions, although those will handle local transactions only.
Using the existing SQL transactions API is identical to how SQL transactions work in the client/middle-tier. You can either use SQL statements (e.g., BEGIN TRAN) or call the BeginTransaction method on the connection object. That returns a transaction object (e.g., SqlTransaction) that you can then use to commit or roll back the transaction.
These transactions can be nested, in the sense that your stored procedure or function might be called within a transaction, and it would still be perfectly legal for you to call BeginTransaction. (Note that this does not mean you get "true" nested transactions; you'll get the exact same behavior that you'd get when nesting BEGIN TRAN statements in T-SQL.)
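As an illustration only (not from the original article), that pattern on the context connection looks roughly like the following; the UPDATE statement is a placeholder:

using (SqlConnection conn = new SqlConnection("context connection=true"))
{
    conn.Open();
    SqlTransaction tran = conn.BeginTransaction();
    try
    {
        SqlCommand cmd = new SqlCommand(
            "UPDATE Purchasing.Vendor SET CreditRating = 1 WHERE VendorID = 1",
            conn, tran);
        cmd.ExecuteNonQuery();
        tran.Commit();
    }
    catch
    {
        // roll back the local transaction if anything failed
        tran.Rollback();
        throw;
    }
}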
Transaction lifetime
There is a difference between transactions started in T-SQL stored procedures and the ones started in SQLCLR code (using any of the methods discussed above): SQLCLR code cannot unbalance the transaction state on entry/exit of a SQLCLR invocation. This has a couple of implications:
- You cannot start a transaction inside a SQLCLR frame and not commit it or roll it back; SQL Server will generate an error during frame exit.
- Similarly, you cannot commit or rollback an outer transaction inside SQLCLR code.
- Any attempt to commit a transaction that you didn't start in the same procedure will cause a run-time error.
- Any attempt to rollback a transaction that you didn't start in the same procedure will doom the transaction (preventing any other side-effecting operation from happening), but the transaction won't disappear until the SQLCLR code unwinds. Note that this case is actually legal, and it's useful when you detect an error inside your procedure and want to make sure the whole transaction is aborted.
Conclusion
SQLCLR is a great technology and it will enable lots of new scenarios. Using ADO.NET inside SQLCLR is a powerful mix that will allow you to combine heavy processing with data access to both local and remote servers, all while maintaining transactional correctness.
As with any other technology, this one has a specific application domain. Not every procedure needs to be rewritten in SQLCLR and use ADO.NET to access the database; quite the contrary, in most cases T-SQL will do a great job. However, for those cases where sophisticated logic or rich libraries are required inside SQL Server, SQLCLR and ADO.NET are there to do the job.
Acknowledgements
Thanks to Acey Bunch, Alazel Acheson, Alyssa Henry, Angel Saenz-Badillos, Chris Lee, Jian Zeng, Mary Chipman and Steve Starck for taking the time to review this document and provide helpful feedback.
|
http://msdn.microsoft.com/en-US/library/ms345135(v=sql.90).aspx
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
CS::RenderManager::RenderTree< TreeTraits > Class Template Reference
RenderTree is the main data-structure for the rendermanagers. More...
#include <csplugincommon/rendermanager/rendertree.h>
Detailed Description
template<typename TreeTraits = RenderTreeStandardTraits>
class CS::RenderManager::RenderTree< TreeTraits >
RenderTree is the main data-structure for the rendermanagers.
It contains the entire setup of meshes and where to render those meshes, as well as basic operations regarding those meshes.
The TreeTraits template argument specifies additional data stored with meshes, contexts and others in the tree. See the subclasses in RenderTreeStandardTraits for a list of what can be customized. To provide custom traits, create a class and either provide a new, custom type for a trait or typedef in the respective type from RenderTreeStandardTraits.
Definition at line 220 of file rendertree.h.
Member Function Documentation
Clone a context.
The new context is added before the context to be cloned.
Definition at line 626 of file rendertree.h.
Create a new context.
Definition at line 588 of file rendertree.h.
Create a new mesh node associated with the given context.
Definition at line 668 of file rendertree.h.
Destroy a context and return it to the allocation pool.
Definition at line 614 of file rendertree.h.
Destroy given mesh node.
Definition at line 681 of file rendertree.h.
Get an iterator for iterating forward over the contexts.
Definition at line 652 of file rendertree.h.
Get an iterator for iterating backward over the contexts.
Definition at line 660 of file rendertree.h.
Debugging helper: whether debug screen clearing is enabled.
Definition at line 689 of file rendertree.h.
The documentation for this class was generated from the following file:
- csplugincommon/rendermanager/rendertree.h
Generated for Crystal Space 2.0 by doxygen 1.6.1
|
http://www.crystalspace3d.org/docs/online/api-2.0/classCS_1_1RenderManager_1_1RenderTree.html
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
Currently writing a new system, and I've put a small box with a texture on it, with DirectX11 (which was previously working before I started adding more unrelated code).

Now, for some unknown reason, I'm receiving this unhandled exception:

Continuing will loop the same problem. I'm probably missing something really stupid, but I'd really like some insight into why this is happening. Thanks!
Texture.cpp
#include "Texture.h" /* Constructor & Deconstructor */ Texture::Texture() { m_texture = 0; } Texture::~Texture(){} /* Initialize */ bool Texture::Init(ID3D11Device* device, CHAR* filename) { HRESULT r; //Load texture r = D3DX11CreateShaderResourceViewFromFile(device, filename, NULL, NULL, &m_texture, NULL); if(FAILED(r)) {return false;} return true; } /* Return objects we are using to higher level */ ID3D11ShaderResourceView* Texture::GetTexture() {return m_texture;} /* Shutdown/Release Objects we have used */ void Texture::Shutdown() { if(m_texture) { m_texture->Release(); m_texture = 0; } return; }
Edited by Xuchilbara, 17 February 2013 - 09:32 PM.
|
http://www.gamedev.net/topic/638968-unhandle-exception-during-releaseshutdown-of-a-texture/
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
Bug for AbsolutLayout?
hi,
first i have to say that the problem only occurs in IE (i have version 7) and only for TextFields.

i am using the AbsoluteLayout for positioning my widget absolutely in the LayoutContainer. this was working fine in version 1.2.1. the "txtTitle.setFieldLabel("Test");" was never shown, but that doesn't matter. i always added an extra LabelField before the TextField.
(see my code below)
Code:
public class TestView extends LayoutContainer {

    @Override
    protected void onRender(Element parent, int index) {
        super.onRender(parent, index);
        test();
    }

    private void test() {
        setLayout(new AbsoluteLayout());

        TextField<String> txtTitle = new TextField<String>();
        txtTitle.setFieldLabel("Test");

        add(txtTitle, new AbsoluteData(300, 300));
    }
}
what's the difference between 1.2.1 and 1.2.2 when using AbsoluteData? what's wrong now?
thx for helping
greetings paco
are you sizing the layoutcontainer?
no - i do not set a size for the LayoutContainer. Should i? and how?
it should have the whole size in width and height.
if i put the TextField into a SimplePanel and then add the SimplePanel to the LayoutContainer, it is working fine.
greetings
paco
you are mixing GWT and gxt panels and they do different things - ie the panel layout/behave differently
All containers in GXT must be sized, or be sized by their parents.
I put the TextField in a SimplePanel and this panel could
be set absolutely. So everything is working as before.
thank you for helping
|
http://www.sencha.com/forum/showthread.php?59417-Bug-for-AbsolutLayout
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
Hi there,
I'm looking into how the publisherid value is generated in Adobe AIR 1.5.3 applications.
If your app is a 1.5.3 app, by default, the publisher ID is null. You can set the publisher ID in the application descriptor if you want.
Have you set the namespace to 1.5.3, or to 2.0beta2 if you use the 2.0
SDK? The problem is you have the wrong namespace. You can't use <publisherID> with a namespace before 1.5.3.
-ted
Hey ted,
My application's descriptor file has been pointing at 1.5 all the time:
<application xmlns="">
I tried setting it to 1.5.3, but then I get this error:
error 102: Invalid namespace
Sean
Thanks ted,
On second inspection, I was using an old version of the sdk to build the app. Switching to 1.5.3 solves my issue.
Sean
|
http://forums.adobe.com/message/2650066
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
Okay after working on it over the weekend, I realize that I'm at the limit of my understanding of how to make things interoperable with C specifically in reference to MATLAB. My familiarity with C is passing and this involves things I've never tried to do in C.
I have a series of functions that I have made that are all bind(c) and have a specific user defined type for the return argument. I want the return argument to be flexible, as I want it to be able to return whatever it is the library wants to give it as well as be able to return the error messages/status that are returned.
Here is an example of the user defined type:
type, bind(c) :: c_return
  integer(c_int) :: ncolumns = 0
  integer(c_int) :: nrows = 0
  integer(c_int) :: lstring = 0
  integer(c_int) :: status = 0
  type(c_ptr) :: valueptr
endtype c_return
The idea is to locally control the valueptr such that all memory gets freed after MATLAB is done with the command.
Here is an example of the function declaration that I want.
function c_bodyn(jd, is, ip) bind(c, name='c_bodyn')
!DEC$ ATTRIBUTES DLLEXPORT :: c_bodyn
!
!
! Passed Parameters
real(c_double), intent(in) :: jd   ! Julian date
real(c_double), intent(in) :: is   ! satellite integer
real(c_double), intent(in) :: ip   ! master central body integer

type(c_return) :: c_bodyn
I have hacked around trying to get a C header file that would be compatible, and haven't been successful. If someone can help me I'd really appreciate it.
Here is one of the many different header files that I have tried for these:
#define EXPORTED_FUNCTION __declspec(dllexport)

#ifdef __cplusplus
extern "C" {
#endif

typedef struct {
    int ncolumns, nrows, lstring, status;
    void * valueptr;
} c_return;

extern "C" __declspec struct c_return c_bodyn(double jd, int is, int ip);

#ifdef __cplusplus
}
#endif
|
http://software.intel.com/ru-ru/forums/topic/372525
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
15 January 2009 17:53 [Source: ICIS news]
By Joe Kamalick
WASHINGTON (ICIS news)--
This marks a profound turnaround for the fortunes of bio-ethanol and suggests that the incoming administration of President-elect Barack Obama may encounter political speed bumps in the effort to increase biofuels production and other alternative energy prospects.
In a scathing new study, the Environmental Working Group (EWG) drew on US Department of Energy (DOE) data to argue that corn ethanol and its backers in Congress have hijacked the federal government’s renewable energy efforts, siphoning off subsidies and tax credits to the detriment of other renewables that have a smaller environmental footprint.
“As Congress and the incoming Obama administration plan the nation’s next major investments in green energy, they need to take a hard, clear-eyed look at Department of Energy data documenting corn-based ethanol’s stranglehold on federal renewable energy tax credits and subsidies,” EWG said in a study circulated this week.
The environmental group cited Energy Department data to show that the US corn-ethanol industry received $3bn (€2.25bn) in tax credits in 2007, “more than four times the $690m in credits available to companies trying to expand all other forms of renewable energy, including solar, wind and geothermal power”.
The heavy federal subsidies for corn ethanol continue while “solar, wind and other renewable energy sources have struggled to gain significant market share with modest federal support”, EWG said.
“Corn-based ethanol has accounted for fully three-quarters of the tax benefits and two-thirds of all federal subsidies allotted for renewable energy sources in 2007,” the group said.
EWG contends that existing federal mandates for biofuel consumption - which require 15bn gal/year of corn ethanol and 21bn gal/year of cellulosic and other advanced biofuels by 2022 - will cost taxpayers more than $5bn/year by 2010.
Current corn ethanol production capacity is around 9bn gal/year, according to the industry trade group Renewable Fuels Association (RFA). The federal mandate for 15bn gal/year of corn ethanol production and consumption by 2022 is based on the general expectation that 15bn gallons is the maximum productive capacity using corn as feedstock.
The mandate for 21bn gallons of cellulosic and other advanced biofuels by 2022 is by most accounts still wishful thinking because the technology for such wide scale commercial production of cellulosic ethanol at competitive prices is still in development.
In the meantime, corn ethanol benefits from a federal tax credit of 45 cents/gal paid to refiners who are required to use the biofuel as an oxygenate, and the home-grown ethanol is protected by a 54 cents/gal tariff on imported ethanol.
“Now the ethanol industry wants even more,” the EWG said. “In recent weeks, the corn ethanol lobby has pushed for billions in new federal subsidies as part of the economic stimulus package.”
The US corn ethanol industry is seeking further federal support because ethanol prices have plummeted in the last six months, forcing the country’s second-largest ethanol producer, VeraSun Energy, into bankruptcy last October and tipping at least five other bio-ethanol makers into bankruptcy as well during the fourth quarter last year.
At least 16 corn ethanol refineries were off line during the last two months of 2008 - among 172 corn-based production units nationwide - and more bankruptcies and project cancellations are expected this year.
Despite those reversals among corn ethanol producers and the existing federal mandate for 15bn gal/year by 2022, EWG notes that “Corn growers and ethanol companies are pressing for dramatic increases in the amount of ethanol Americans will be required to put into their gas tanks”.
In its analysis, EWG also repeated longstanding criticisms of ethanol, noting it provides less energy per gallon than gasoline and, given the amount of carbon-based energy required to produce corn ethanol, that it is not as environmentally friendly as once thought.
“Once touted as the energy equivalent of a free lunch, corn ethanol has proved to be an over-hyped and dubious renewable energy option,” the environmental group charged. “Ethanol made from corn has extremely limited potential to reduce the country’s dependence on imported oil, and current production systems likely worsen greenhouse gas emissions.”
EWG called for a phase-out of the 45 cents/gal tax credit for ethanol and urged that no federal incentives or subsidies be provided for any other biofuel unless it can meet strict climate and environmental protection standards.
The new EWG attack comes after multiple other environmental, petrochemical industry and government studies have levelled broadsides at corn ethanol.
And yet the incoming administration of President-elect Barack Obama is pledged to more than double
But Obama also has vowed to cut US agricultural subsidies. In addition, once unquestioning support for corn ethanol subsidies among members of Congress is showing signs of wear and weariness.
As the new President Obama takes power and moves to fulfil campaign promises to pass climate legislation, cut back subsidies and still advance biofuels, the issue of corn ethanol mandates and taxpayer funding will take centre stage.
It will pose a major test for the new president as he tries to balance the interests and demands of major constituencies - environmentalists, alternative energy advocates and the biofuel industry - that helped elect him.
The Renewable Fuels Association was invited to offer counter arguments to the EWG analysis, but the association did not reply to numerous telephone calls and e-mail queries.
($1 = €0.75)
|
http://www.icis.com/Articles/2009/01/15/9185345/insight-greens-call-for-halt-to-us-ethanol-subsidies.html
|
CC-MAIN-2013-20
|
en
|
refinedweb
|
Table of Contents
This is a tutorial that introduces the Process Virtual Machine library to Java developers.
With this library you can build executable process graphs. The key features of this library are
Process definitions are static: a process definition relates to an execution the way a Java class relates to a Java object. Many executions can be run against the same process definition. One execution is also known as a process instance, analogous to a Java object. An execution maintains the current state for one execution of the process, including a pointer to the current node.
The first part of this manual gives a thorough introduction on how to implement Activity's. This means creating the runtime implementation for the process constructs (aka activity types) that are defined in the process languages.
Chapter 2, Basic graph execution explains how to create process graphs, how process graphs are executed and how Activities can be build that implement the runtime behaviour of nodes in the process graph.
Chapter 3, Examples uses the basic graph execution techniques to show how concrete activities are implemented in meaningfull setting.
Chapter 4, Advanced graph execution explains the more fine grained details of graph execution like the relation to threads, looping, sub processes and so on.
Chapter 5, Delegation classes are Java classes that are used as part of the process execution, but are not part of the pvm library.
Chapter 6, Variables captures contextual information related to a process execution. Think of it as a Map<String, Object> that is associated with a process execution.
Chapter 7, History shows the infrastructure for generating auditable events from the process. This is the information that will be fed into the history database that can be queried for statistical information about process execution (aka Business Intelligence).
The second part explains the embeddable infrastructure. That infrastructure makes it possible to use multiple transactional resources inside the process execution and configure them to operate correctly in standard and enterprise Java.
Chapter 8, Environment is the core abstraction layer for the specific Java environment in which the process operates. Transactional resources can be fetched from the environment. The environment will take care of the lazy initialization of the transactional resources based on the configuration.
Chapter 9, Persistence shows how process definitions and process executions can be stored in a relational database. It is also explained how hibernate is integrated into the environment and how concurrency is handled.
Chapter 10, Services are the session facades that are exposed to programmatic clients using the PVM functionality. They are based on commands and use the environment infrastructure.
Part three explains two PVM infrastructure features that are based on transactional resources and require the execution in separate a thread. The job executor that is part of the PVM can execute jobs in a standard Java environment. Alternatively, there are implementations for messaging and timers that can be bound to JMS and EJB Timers respectively in an enterprise environment.
Chapter 11, Asynchronous continuations are declarative transaction demarcations in a process. This functionality depends on an asynchronous messaging service.
Chapter 12, Timers can fire pieces of user code, related to an execution in the future.
In part four, Chapter 13, Process languages describes the main steps involved in building a complete process language implementation.
For building and executing processes, the jbpm-pvm.jar does not have any dependencies other than the JVM. If you're using DB persistence, then there is a dependency on Hibernate and its dependencies. More information about the optional dependencies can be found in the lib directory.
All jBPM modules use standard Java logging. If you don't like the verbosity of the 2-line default logging output, here's how you can configure a single-line logging format in code without using the -Djava.util.logging.config.file=... command line parameter:
InputStream stream = YourClass.class
    .getClassLoader()
    .getResourceAsStream("logging.properties");
try {
  LogManager.getLogManager().readConfiguration(stream);
} finally {
  stream.close();
}
Typically such code would be put in a static block in one of the first classes that is loaded in your application. Then put a logging.properties file in the root of the classpath that looks like this:
handlers = java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level = FINEST
java.util.logging.ConsoleHandler.formatter = org.jbpm.util.JbpmFormatter

# For example, set the com.xyz.foo logger to only log SEVERE messages:
# com.xyz.foo.level = SEVERE

.level = SEVERE
org.jbpm.level=FINEST
org.jbpm.tx.level=FINE
org.jbpm.wire.level=FINE
When testing the persistence, the following logging configurations can be valuable. The org.hibernate.SQL category shows the SQL statements that are executed.
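For example, additions along these lines could be made to logging.properties (a suggestion only; the category names assume Hibernate's standard loggers and may differ per Hibernate version):

# suggested additions for persistence testing
org.hibernate.SQL.level=FINEST
org.hibernate.type.level=FINEST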
The PVM library doesn't have a fixed set of process constructs. Instead, the runtime behaviour of a node is delegated to an Activity. In other words, Activity is an interface to implement the runtime behaviour of process constructs in plain Java. Also, Activity implementations can be subscribed as listeners to process events.
public interface Activity extends Serializable {

  void execute(Execution execution) throws Exception;

}
Activity's can be used as node behaviour and as listeners to process events. When an activity is used as the node behaviour, it is in full control of the further propagation of the execution. In other words, a node behaviour can decide what the execution should do next. For example, it can take a transition with execution.take(Transition) or go into a wait state with execution.waitForSignal(). Or the node behaviour can invoke none of the above; in that case the Process Virtual Machine will just proceed the execution in a default way.
Events are only fired during process execution. Since during an event the execution is already 'in motion', event listeners can not control the propagation of execution. Therefore, Activity implementations can only be used as event listeners if they don't invoke any of the execution propagation methods.
This way, it is very easy to implement automatic activities that can be used as node behaviour as well as event listeners. Examples of automatic activities are sending an email, doing a database update, generating a pdf, calculating an average, etc. All of these can be executed by the process system and they can be used both as node behaviour as well as event listeners. In case they are used as node behaviour they can rely on the default proceed behaviour.
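As a minimal illustration of such an automatic activity (a sketch only: the class name and the "mail sending" are placeholders, not part of the PVM), note that execute() invokes no execution propagation method, so it works both as node behaviour and as an event listener:

// Sketch of an automatic activity. The actual work is a placeholder;
// what matters is that execute() neither waits nor takes a transition,
// so the default proceed behaviour applies when it is used as node behaviour.
public class SendEmail implements Activity {

  String recipient;   // configuration fields, set when the process is built
  String subject;

  public SendEmail(String recipient, String subject) {
    this.recipient = recipient;
    this.subject = subject;
  }

  public void execute(Execution execution) {
    // placeholder for a call into a real mail library
    System.out.println("sending '" + subject + "' to " + recipient);
  }
}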
We'll start with a very original hello world example. A Display activity will print a message to the console:
public class Display implements Activity {

  String message;

  public Display(String message) {
    this.message = message;
  }

  public void execute(Execution execution) {
    System.out.println(message);
  }
}
Let's build our first process definition with this activity:
ProcessDefinition processDefinition = ProcessFactory.build()
    .node("a").initial().behaviour(new Display("hello"))
      .transition().to("b")
    .node("b").behaviour(new Display("world"))
.done();

ExternalActivity adds two methods to the Activity:
public interface ExternalActivity extends Activity {

  void signal(Execution execution,
              String signal,
              Map<String, Object> parameters) throws Exception;

  Set<SignalDefinition> getSignals(Execution execution) throws Exception;

}
Just like with plain activities, when an execution arrives in a node, the execute-method of the node behaviour is invoked. When the node behaves as a wait state, the execution will wait in that node until the execution's signal method is invoked. The execution will delegate that signal to the behaviour Activity of the current node.
The getSignals-method is optional; if a value is returned, it is the set of signals that this node accepts. The meaning and usage are analogous to how, in Java reflection, it's possible to inspect all methods and method signatures of a Java class.
Here's a first example of a simple wait state implementation:
public class WaitState implements ExternalActivity {

  public void execute(Execution execution) {
    execution.waitForSignal();
  }

  public void signal(Execution execution,
                     String signal,
                     Map<String, Object> parameters) {
    execution.take(signal);
  }

  public Set<SignalDefinition> getSignals(Execution execution) {
    return null;
  }
}
The execute-method calls execution.waitForSignal(). This call is necessary to prevent automatic propagation of the execution. By calling execution.waitForSignal(), the node will behave as a wait state.
The signal-method takes the transition with the signal parameter as the transition name. So when an execution receives an external trigger, the signal name is interpreted as the name of an outgoing transition and the execution will be propagated over that transition.
The getSignals-method is for introspection. Since it's optional, it is not implemented in this example, by returning null. So with this implementation, tools cannot inspect the possible signals that can be given for this node behaviour. The proper implementation that would match this node's signal method is to return a list of SignalDefinition's that correspond to the names of the outgoing transitions.
Here's the same simple process that has a transition from a to b. This time, the behaviour of the two nodes will be WaitState's.
ProcessDefinition processDefinition = ProcessFactory.build()
    .node("a").initial().behaviour(new WaitState())
      .transition().to("b")
    .node("b").behaviour(new WaitState())
.done();
Execution execution = processDefinition.startExecution();
execution.signal();
In this next example, we'll combine automatic activities and wait states. This example is a simplified version of a loan approval process.
ProcessDefinition processDefinition = ProcessFactory.build()
    .node("accept loan request").initial().behaviour(new WaitState())
      .transition().to("loan evaluation")
    .node("loan evaluation").behaviour(new WaitState())
      .transition("approve").to("wire the money")
      .transition("reject").to("end")
    .node("wire the money").behaviour(new Display("automatic payment"))
      .transition().to("end")
    .node("end").behaviour(new WaitState())
.done();
For more details about the ProcessFactory, see the javadocs. An alternative for the ProcessFactory would be to create an XML language and an XML parser for expressing processes. The XML parser can then instantiate the classes of package org.jbpm.pvm.impl directly. That approach is typically taken by process languages.
The node wire the money is an automatic node. The Display implementation uses the Java API's to just print a message to the console. But the witty reader can imagine an alternative Activity implementation that uses the Java API of a payment processing library to make a real automatic payment. All the other nodes are wait states.
A new execution for the process above can be started like this
Execution execution = processDefinition.startExecution();
Starting a new execution implies that the initial node is executed. Since in this case it's a wait state, the new execution will be positioned in the node 'accept loan request' when the startExecution-method returns.
Now we can give this execution an external trigger with the signal- method on the execution. Invoking the signal method will take the execution to the next wait state.
execution.signal();
Now, the execution is at an interesting point. There are two transitions out of the state 'loan evaluation'. One transition is called 'approve' and one transition is called 'reject'. As we explained above in the WaitState implementation, the transition taken corresponds to the signal that is given. Let's feed in the 'approve' signal like this:
execution.signal("approve");
The 'approve' signal will cause the execution to take the 'approve' transition and it will arrive in the node 'wire the money'.
In wire the money, the message will be printed to the console. Since the Display activity didn't invoke execution.waitForSignal(), nor any of the other execution propagation methods, the default behaviour will be to just proceed.
Proceeding in this case means that the default outgoing transition is taken and the execution will arrive in the end node, which is a wait state.
So only when the end wait state is reached, the signal("approve") returns. That is because all of the things that needed to be done between the original state and this new state could be executed by the process system. Executing till the next wait state is the default behaviour and that behaviour can be changed with
TODO: add link to async continuations
asynchronous continuations in case transactions should not include all calculations till the next wait state. For more about this, see Section 4.4, “Execution and threads”.
Another signal invocation will bring it eventually in the end state.
There are basically two forms of process languages: graph based and composite process languages. First of all, this design supports both. Even graph based execution and node composition can be used in combination to implement something like UML super states.
In this design, control flow activity implementations will have to be aware of whether they are dependent on transitions (graph based) or whether they are using the composite node structure. The goal of this design is that all non-control flow activities can be implemented in the same way so that you can use them in graph based process languages as well as in composite process languages.
Events are points in the process definition to which a list of activities can be subscribed as listeners. Activities can be associated with an event. But activities on events cannot influence the control flow of the execution since they are merely listeners to an execution which is already in progress. This is different from activities that serve as the behaviour for nodes. Node behaviour activities are responsible for propagating the execution. So if an activity in an event invokes any of the execution propagation methods, it will result in an exception.
We'll reuse the Display activity from above in a simple process: two nodes connected by a transition. The Display listener will be subscribed to the transition event.
ProcessDefinition processDefinition = ProcessFactory.build()
    .node("a").initial().behaviour(new WaitState())
      .event("node-leave")
        .listener(new Display("leaving a"))
        .listener(new Display("second message while leaving a"))
      .transition().to("b")
        .listener(new Display("taking transition"))
    .node("b").behaviour(new WaitState())
      .event("node-enter")
        .listener(new Display(...))
.done();

Listeners can also be registered on composite nodes or on the process definition itself; such listeners get executed for all events that occur within that process element. For example this feature allows to register a listener on a process definition or a composite node on node-leave events. Such a listener will be executed if that node is left. And if that listener is registered on a composite node, it will also be executed for all nodes that are left within that composite node.
To show this clearly, we'll create a DisplaySource activity that will print the message leaving and the source of the event to the console.
public class DisplaySource implements Activity {

  public void execute(Execution execution) {
    System.out.println("leaving " + execution.getEventSource());
  }
}
Note that the purpose of event listeners is not to be visible, that's why the activity itself should not be displayed in the diagram. A DisplaySource activity will be added as a listener to the event node-leave on the composite node.
The next process shows how the DisplaySource activity is registered as a listener to the 'node-leave' event on the composite node:
ProcessDefinition processDefinition = ProcessFactory.build("propagate")
    .compositeNode("composite")
      .event(Node.EVENT_NODE_LEAVE)
        .listener(new DisplaySource())
      .node("a").initial().behaviour(new WaitState())
        .transition().to("b")
      .node("b").behaviour(new WaitState())
        .transition().to("c")
    .compositeEnd()
    .node("c").behaviour(new WaitState())
.done();
Next we'll start an execution.
Execution execution = processDefinition.startExecution();
After starting a new execution, the execution will be in node a as that is the initial node. No nodes have been left so no message is logged. Next a signal will be given to the execution, causing it to take the transition from a to b.
execution.signal();
When the signal method returns, the execution will have taken the transition and the node-leave event will be fired on node a. That event will be propagated to the composite node and to the process definition. Since our propagation logger is placed on node composite it will receive the event and print the following message:
leaving node(a)
Another
execution.signal();
will take the transition from b to c. That will fire two node-leave events. One on node b and one on node composite. So the following lines will be appended to the console output:
leaving node(b) leaving node(composite)
Event propagation is build on the hierarchical composition structure of the process definition. The top level element is always the process definition. The process definition contains a list of nodes. Each node can be a leaf node or it can be a composite node, which means that it contains a list of nested nodes. Nested nodes can be used for e.g. super states or composite activities in nested process languages like BPEL.
So the event model also works similarly for composite nodes. Every node can have a set of nested nodes. The parent of a transition is considered to be the first common parent of its source and destination.
If an event listener is not interested in propagated events, propagation can be disabled with propagationDisabled(). The next process is the same process as above except that propagated events will be disabled on the event listener. The graph diagram remains the same.
Building the process with the process factory:
ProcessDefinition processDefinition = ProcessFactory.build("propagate")
    .compositeNode("composite")
      .event(Node.EVENT_NODE_LEAVE)
        .listener(new DisplaySource())
        .propagationDisabled()
      .node("a").initial().behaviour(new WaitState())
        .transition().to("b")
      .node("b").behaviour(new WaitState())
        .transition().to("c")
    .nodesEnd()
    .node("c").behaviour(new WaitState())
.done();
So when the first signal is given for this process, again the node-leave event will be fired on node a, but now the listener on the composite node will not be executed because propagated events have been disabled. Disabling propagation is a property of the listener and doesn't influence the other listeners. The event will always be fired and propagated over the whole parent hierarchy.
Execution execution = processDefinition.startExecution();
execution.signal();
Next, the second signal will take the transition from b to c.
execution.signal()
Again two node-leave events are fired just like above on nodes b and composite respectively. The first event is the node-leave event on node b. That will be propagated to the composite node. So the listener will not be executed for this event cause it has propagation disabled. But the listener will be executed for the node-leave event on the composite node. That is not propagated, but fired directly on the composite node. So the listener will now be executed only once for the composite node as shown in the following console output:
leaving node(composite)
Above we already touched briefly on the main process constructs: nodes, transitions and node composition. This section will elaborate on all the basic combination possibilities.
Figure 2.16. Transition of composite nodes are inherited. The node inside can take the transition of the composite node.
This example shows how to implement automatic conditional branching. This is mostly called a decision or an or-split. It selects one path of execution from many alternatives. A decision node should have multiple outgoing transitions.
In a decision, information is collected from somewhere. Usually that is the process variables. But it can also collect information from a database, a file, any other form of input or a combination of these. In this example, a variable creditRate is used. It contains an integer. The higher the integer, the better the credit rating. Let's look at the example implementation:
Then based on the obtained information, in our case that is the creditRate, an outgoing transition has to be selected. In the example, transition good will be selected when the creditRate is above 5, transition bad will be selected when creditRate is below -5 and otherwise transition average will be selected.
Once the selection is done, the transition is taken with execution.take(String) or the execution.take(Transition) method.
public class AutomaticCreditRating implements Activity {

  public void execute(Execution execution) {
    int creditRate = (Integer) execution.getVariable("creditRate");
    if (creditRate > 5) {
      execution.take("good");
    } else if (creditRate < -5) {
      execution.take("bad");
    } else {
      execution.take("average");
    }
  }
}
We'll demonstrate the AutomaticCreditRating in the following process:
ProcessDefinition processDefinition = ProcessFactory.build()
    .node("initial").initial().behaviour(new WaitState())
      .transition().to("creditRate?")
    .node("creditRate?").behaviour(new AutomaticCreditRating())
      .transition("good").to("a")
      .transition("average").to("b")
      .transition("bad").to("c")
    .node("a").behaviour(new WaitState())
    .node("b").behaviour(new WaitState())
    .node("c").behaviour(new WaitState())
.done();
Executing this process goes like this:
Execution execution = processDefinition.startExecution();
startExecution() will bring the execution into the initial node. That's a wait state so the execution will point to that node when the startExecution() returns.
Then we have a chance to set the creditRate to a specific value like e.g. 13.
execution.setVariable("creditRate", 13);
Next, we provide a signal so that the execution takes the default transition to the creditRate? node. Since process variable creditRate is set to 13, the AutomaticCreditRating activity will take transition good to node a. Node a is a wait state, so then the invocation of signal will return.
Similarly, a decision can be implemented making use of the transition's guard condition. For each outgoing transition, the guard condition expression can be evaluated. The first transition for which its guard condition evaluates to true is taken.
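A minimal sketch of such a guard-based decision activity is shown below. It only illustrates the approach described above; the accessors getOutgoingTransitions() and getCondition() and the Condition method evaluate() are assumptions made for this example and may not match the real PVM API.

// Sketch only: evaluate each outgoing transition's guard and take the
// first transition whose guard is absent or evaluates to true.
public class GuardedDecision implements Activity {

  public void execute(Execution execution) {
    for (Transition transition : execution.getNode().getOutgoingTransitions()) {  // hypothetical accessor
      Condition guard = transition.getCondition();                                // hypothetical accessor
      if ((guard == null) || guard.evaluate(execution)) {                         // assumed method name
        execution.take(transition);
        return;
      }
    }
    throw new RuntimeException("no guard condition evaluated to true");
  }
}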
This example showed automatic conditional branching. Meaning that all information is available when the execution arrives in the decision node, even if it may have to be collected from different sources. In the next example, we show how a decision is implemented for which an external entity needs to supply the information, which results into a wait state.
This example shows an activity that again selects one path of execution out of many alternatives. But this time, the information on which the decision is based is not yet available when the execution arrives at the decision. In other words, the execution will have to wait in the decision until the information is provided from externally.
public class ExternalSelection implements ExternalActivity {

  public void execute(Execution execution) {
    execution.waitForSignal();
  }

  public void signal(Execution execution,
                     String signalName,
                     Map<String, Object> parameters) throws Exception {
    execution.take(signalName);
  }

  public Set<SignalDefinition> getSignals(Execution execution) throws Exception {
    return null;
  }
}
The diagram for this external decision will be the same as for the automatic decision:
ProcessDefinition processDefinition = ProcessFactory.build()
    .node("initial").initial().behaviour(new WaitState())
      .transition().to("creditRate?")
    .node("creditRate?").behaviour(new ExternalSelection())
      .transition("good").to("a")
      .transition("average").to("b")
      .transition("bad").to("c")
    .node("a").behaviour(new WaitState())
    .node("b").behaviour(new WaitState())
    .node("c").behaviour(new WaitState())
.done();
The execution starts the same as in the automatic example. After starting a new execution, it will be pointing to the initial wait state.
Execution execution = processDefinition.startExecution();
But the next signal will cause the execution to take the default transition out of the initial node and arrive in the creditRate? node. Then the ExternalSelection is executed, which will result into a wait state. So when the invocation of signal() returns, the execution will be pointing to the creditRate? node and it expects an external trigger.
Next we'll give an external trigger with good as the signalName. So supplying the external trigger is done together with feeding the information needed by the decision.
execution.signal("good");
That external trigger will be translated by the ExternalSelection activity into taking the transition with name good. That way the execution will have arrived in node a when signal("good") returns.
Note that both parameters signalName and parameters can be used by external activities as they want. In the example here, we used the signalName to specify the result. But another variation might expect an integer value under the creditRate key of the parameters.
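For illustration, that variation's signal method might look like this (a sketch only; the parameter key creditRate and the thresholds are simply reused from the automatic decision example above):

// Sketch: read the decision input from the parameters map instead of
// interpreting the signal name as the transition name.
public void signal(Execution execution,
                   String signalName,
                   Map<String, Object> parameters) {
  int creditRate = (Integer) parameters.get("creditRate");
  if (creditRate > 5) {
    execution.take("good");
  } else if (creditRate < -5) {
    execution.take("bad");
  } else {
    execution.take("average");
  }
}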
But leveraging the execution API like that is not done very often in practice. The reason is that for most external functions, activity instances are typically created. Think of a Task as an instance of a TaskActivity (see later); analogously, a ServiceInvocation could be imagined as an instance of a ServiceInvocationActivity. In those cases, those activity instances make the link between the external activity and the execution. And these instances also can make sure that an execution is not signalled inappropriately. Inappropriate signalling could happen when for instance a service response message would arrive twice. If in such a scenario the message receiver would just signal the execution, it would not notice that the second time, the execution is not positioned in the service invocation node any more.
Block structured languages like BPEL are completely based on composite nodes. Such languages don't have transitions. The composite node structure of the Process Virtual Machine allows to build a process with a structure that exactly matches the block structured languages. There is no need for a conversion to a transition based model. We have already discussed some examples of composite nodes. The following example will show how to implement a sequence, one of the most common composite node types.
A sequence has a list of nested activities that need to be executed in sequence.
This is how a sequence can be implemented:
public class Sequence implements ExternalActivity {

  public void execute(Execution execution) {
    List<Node> nodes = execution.getNode().getNodes();
    execution.execute(nodes.get(0));
  }

  public void signal(Execution execution,
                     String signal,
                     Map<String, Object> parameters) {
    Node previous = execution.getPreviousNode();
    List<Node> nodes = execution.getNode().getNodes();
    int previousIndex = nodes.indexOf(previous);
    int nextIndex = previousIndex + 1;
    if (nextIndex < nodes.size()) {
      Node next = nodes.get(nextIndex);
      execution.execute(next);
    } else {
      execution.proceed();
    }
  }

  public Set<SignalDefinition> getSignals(Execution execution) {
    return null;
  }
}
When an execution arrives in this sequence, the execute method will execute the first node in the list of child nodes (aka composite nodes or nested nodes). The sequence assumes that the child node's behaviour doesn't have outgoing transitions and will end with an execution.proceed(). That proceed will cause the execution to be propagated back to the parent (the sequence) with a signal.
The signal method will look up the previous node from the execution, determine its index in the list of child nodes and increments it. If there is a next node in the list it is executed. If the previous node was the last one in the list, the proceed is called, which will propagate the execution to the parent of the sequence in case there are no outgoing transitions.
To optimize persistence of executions, the previous node of an execution is normally not maintained and will be null. If a node requires the previous node or the previous transition, like in this Sequence, the property isPreviousNeeded must be set on the node.
Let's look at how that translates to a process and an execution:
ProcessDefinition processDefinition = ProcessFactory.build("sequence")
    .compositeNode("sequence").initial().behaviour(new Sequence())
        .needsPrevious()
      .node("one").behaviour(new Display("one"))
      .node("wait").behaviour(new WaitState())
      .node("two").behaviour(new Display("two"))
    .compositeEnd()
.done();
The three numbered nodes will now be executed in sequence. Nodes 1 and 2 are automatic Display activities, while node wait is a wait state.
Execution execution = processDefinition.startExecution();
The startExecution will execute the Sequence activity. The execute method of the sequence will immediately execute node 1, which will print message one on the console. Then the execution is automatically proceeded back to the sequence. The sequence will have access to the previous node. It will look up the index and execute the next. That will bring the execution to node wait, which is a wait state. At that point, the startExecution() will return. A new external trigger is needed to complete the wait state.
execution.signal();
That signal will delegate to the WaitState's signal method. That method is empty so the execution will proceed in a default way. Since there are no outgoing transitions, the execution will be propagated back to the sequence node, which will be signalled. Then node 2 is executed. When the execution comes back into the sequence it will detect that the previously executed node was the last child node, therefore, no propagation method will be invoked, causing the default proceed to end the execution. The console will show:
one two
In a composite model, the node behaviour can use the execution.execute(Node) method to execute one of the child nodes.
ProcessDefinition processDefinition = ProcessFactory.build()
    .compositeNode("creditRate?").initial().behaviour(new CompositeCreditRating())
      .node("good").behaviour(new ExternalSelection())
      .node("average").behaviour(new ExternalSelection())
      .node("bad").behaviour(new ExternalSelection())
    .compositeEnd()
.done();
The CompositeCreditRating is an automatic decision, implemented like this:
public class CompositeCreditRating implements Activity {

  public void execute(Execution execution) {
    int creditRate = (Integer) execution.getVariable("creditRate");
    if (creditRate > 5) {
      execution.execute("good");
    } else if (creditRate < -5) {
      execution.execute("bad");
    } else {
      execution.execute("average");
    }
  }
}
So when we start a new execution with
Map<String, Object> variables = new HashMap<String, Object>();
variables.put("creditRate", 13);
Execution execution = processDefinition.startExecution(variables);
The execution will execute the CompositeCreditRating. The CompositeCreditRating will execute node good because the process variable creditRate is 13. When the startExecution() returns, the execution will be positioned in the good state. The other scenarios are very similar.
This section will demonstrate how support for human tasks can be build on top of the Process Virtual Machine.
As we indicated in Section 4.4, “Execution and threads”, for each step in the process the most important characteristic is whether responsibility for an activity lies within the process system or outside. In case of a human task, it should be clear that the responsibility is outside of the process system. This means that for the process, a human task is a wait state. The execution will have to wait until the person provides the external trigger that the task is completed or submitted.
In the picture above, the typical link between process execution and tasks is represented. When an execution arrives in a task node, a task is created in a task component. Typically such a task will end up in a task table somewhere in the task component's database. Then users can look at their task lists. A task list is then a filter on the complete task list based on the task's assigned user column. When the user completes the task, the execution is signalled and typically leaves the node in the process.
A task management component keeps track of tasks for people. To integrate human tasks into a process, we need an API to create new tasks and to get notifications of task completions. The following example might have only a rudimentary integration between process execution and the task management component, but the goal is to show the interactions as clearly as possible. Real process languages like jPDL have a much better integration between process execution and tasks, resulting in more complexity.
For this example we'll first define a very simple task component with classes Task and TaskComponent:
public class Task {

  public String userId;
  public String taskName;
  public Execution execution;

  public Task(String userId, String taskName, Execution execution) {
    this.userId = userId;
    this.taskName = taskName;
    this.execution = execution;
  }

  public void complete() {
    execution.signal();
  }
}
This task has public fields to avoid the getters and setters. The taskName property is the short description of the task. The userId is a reference to the user that is assigned to this task. And the execution is a reference to the execution to which this task relates. When a task completes it signals the execution.
The next task component manages a set of tasks.
public class TaskComponent {

  static List<Task> tasks = new ArrayList<Task>();

  public static void createTask(String taskName, Execution execution) {
    String userId = assign(taskName, execution);
    tasks.add(new Task(userId, taskName, execution));
  }

  private static String assign(String taskName, Execution execution) {
    return "johndoe";
  }

  public static List<Task> getTaskList(String userId) {
    List<Task> taskList = new ArrayList<Task>();
    for (Task task : tasks) {
      if (task.userId.equals(userId)) {
        taskList.add(task);
      }
    }
    return taskList;
  }
}
To keep this example short, this task component is accessed through static methods. Task assignment is hard coded to "johndoe". Tasks can be created and task lists can be extracted by userId. Next we can look at the node behaviour implementation of a TaskActivity.
public class TaskActivity implements ExternalActivity {

  public void execute(Execution execution) {
    // let's use the node name as the task id
    String taskName = execution.getNode().getName();
    TaskComponent.createTask(taskName, execution);
  }

  public void signal(Execution execution,
                     String signal,
                     Map<String, Object> parameters) {
    execution.takeDefaultTransition();
  }

  public Set<SignalDefinition> getSignals(Execution execution) {
    return null;
  }
}
The task node works as follows. When an execution arrives in a task node, the execute method of the TaskActivity is invoked. The execute method will then take the node name and use it as the task name. Alternatively, 'taskName' could be a configuration property on the TaskActivity class (a sketch of that variation is shown below). The task name is then used to create a task in the task component. Once the task is created, the execution is not propagated, which means that the execution will wait in this node till a signal comes in.
When the task is completed with the Task.complete() method, it will signal the execution. The TaskActivity's signal implementation will take the default transition.
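A minimal sketch of the configuration-property variation mentioned above (the class name ConfiguredTaskActivity and the field name taskName are just assumptions for illustration):

// Sketch: TaskActivity variant where the task name is a configuration
// property instead of being derived from the node name.
public class ConfiguredTaskActivity implements ExternalActivity {

  String taskName;   // configuration field, set when the process is built

  public ConfiguredTaskActivity(String taskName) {
    this.taskName = taskName;
  }

  public void execute(Execution execution) {
    TaskComponent.createTask(taskName, execution);
  }

  public void signal(Execution execution,
                     String signal,
                     Map<String, Object> parameters) {
    execution.takeDefaultTransition();
  }

  public Set<SignalDefinition> getSignals(Execution execution) {
    return null;
  }
}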
This is how a process can be build with a task node:
ProcessDefinition processDefinition = ProcessFactory.build("task")
    .node("initial").initial().behaviour(new AutomaticActivity())
      .transition().to("shred evidence")
    .node("shred evidence").behaviour(new TaskActivity())
      .transition().to("next")
    .node("next").behaviour(new WaitState())
.done();
When a new execution is started, the initial node is an automatic activity. So the execution will immediately propagate to the task node, the task will be created and the execution will stop in the 'shred evidence' node.
Execution execution = processDefinition.startExecution();

assertEquals("shred evidence", execution.getNode().getName());

Task task = TaskComponent.getTaskList("johndoe").get(0);
Next, time can elapse until the human user is ready to complete the task. In other words, the thread of control is now with 'johndoe'. When John completes his task e.g. through a web UI, then this should result into an invocation of the complete method on the task.
task.complete();

assertEquals("next", execution.getNode().getName());
The invocation of the complete method causes the execution to take the default transition to the 'next' node.
Loops can be based on transitions or on node composition. Loops can contain wait states.
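As a small illustration of a transition-based loop (a sketch only; the transition names, the variable name count and the limit are made up for this example), an automatic decision activity can send the execution back to an earlier node until some condition is met:

// Sketch of a loop decision: take the 'again' transition (which loops back
// to the node doing the work) until the counter reaches a limit, then take
// the 'done' transition to leave the loop.
public class LoopDecision implements Activity {

  public void execute(Execution execution) {
    int count = (Integer) execution.getVariable("count");
    if (count < 10) {
      execution.setVariable("count", count + 1);
      execution.take("again");
    } else {
      execution.take("done");
    }
  }
}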
To support high numbers of automatic loop executions, the Process Virtual Machine transformed the propagation of execution from tail recursion to a while loop. This means that all the methods in the Execution class that propagate the execution, like take or execute, will not be executed when you call them. Instead, the method invocations will be appended to a list. The first invocation of such a method will start a loop that will execute all invocations till that list is empty. These invocations are called atomic operations.
When an Activity is used as node behaviour, it can explicitly propagate the execution with the following methods:
When Activity implementations used for node behaviour don't call any of these execution propagation methods, then, after the activity is executed, the execution will just proceed.
By default proceeding will perform the first action that applies in the following list:
Process languages can overwrite the default proceed behaviour by overriding the proceed method in ExecutionImpl.
The next process will show the basics concretely. It has three wait states and four automatic nodes.
Here's how to build the process:
ProcessDefinition processDefinition = ProcessFactory.build("automatic")
    .node("wait 1").initial().behaviour(new WaitState())
      .transition().to("automatic 1")
    .node("automatic 1").behaviour(new Display("one"))
      .transition().to("wait 2")
    .node("wait 2").behaviour(new WaitState())
      .transition().to("automatic 2")
    .node("automatic 2").behaviour(new Display("two"))
      .transition().to("automatic 3")
    .node("automatic 3").behaviour(new Display("three"))
      .transition().to("automatic 4")
    .node("automatic 4").behaviour(new Display("four"))
      .transition().to("wait 3")
    .node("wait 3").behaviour(new WaitState())
.done();
Let's walk you through one execution of this process.
Execution execution = processDefinition.startExecution();
Starting a new execution means that the initial node is executed. So if an automatic activity were configured as the behaviour of the initial node, the process would start executing immediately in the startExecution. In this case however, the initial node is a wait state. So the startExecution method returns immediately and the execution will be positioned in the initial node 'wait 1'.
Then an external trigger is given with the signal method.
execution.signal();
As explained above when introducing the WaitState, that signal will cause the default transition to be taken. The transition will move the execution to node automatic 1 and execute it. The execute method of the Display activity in automatic 1 prints a line to the console and does not call execution.waitForSignal(). Therefore, the execution will proceed by taking the default transition out of automatic 1. The signal method is still blocking because this activity and the transitions are executed by that same thread. Then the execution arrives in wait 2 and executes the WaitState activity. That method will invoke execution.waitForSignal(), which will cause the signal method to return. That is when the thread is given back to the client that invoked the signal method.
So when the signal method returns, the execution is positioned in wait 2.
The execution now waits for an external trigger, remaining just an object (more precisely, an object graph) in memory until the next external trigger is given with the signal method.
execution.signal();
This second invocation of signal will take the execution similarly all the way to wait 3 before it returns.
To make executable processes, developers need to know exactly which activities are automatic and which are wait states. On the level of the Process Virtual Machine, there is no differentiation between complete process instances and paths of execution within a process instance. One of the main motivations for this design is that the API actually is not made more complex than necessary for simple processes with only one single path of execution.
To establish multiple concurrent paths of execution, child executions can be created. In the diagram, fork and join nodes split and synchronize the paths of execution. The execution structure shows three executions: the main path of execution is inactive (represented as gray), while the billing and shipping paths of execution are active and point to the nodes bill and ship respectively.
It's up to the node behaviour implementations how they want to use this execution structure. Suppose that multiple tasks have to be completed before the execution is to proceed. In delegation classes such as Activities and Conditions, it's possible to include try-catch blocks in the method implementations to handle exceptions. A node instance (aka activity instance) is unique for every execution of the node.
External entities themselves are responsible for managing the execution lock. If the timers and client applications are consistent in addressing the external entities instead of the execution directly, then locking is in theory unnecessary. It's up to the node behaviour implementations whether they want to take the overhead of locking and unlocking.
Delegation classes are the classes that implement Activity or Condition. From the Process Virtual Machine's perspective, these are external classes that provide programming logic that is inserted into the PVM's graph execution. Delegation classes can be provided by the process languages as well as by the end users.
Delegation classes can be made configurable. Member fields can contain configuration parameters so that a delegation class can be configured differently each time it is used. For example, in the Display activity, the message that is to be printed to the console is a configuration parameter.
Delegation classes should be stateless. This means that executing the interface methods should not change values of the member fields. Changing member field values of delegation classes during execution methods is actually changing the process while it's executing. That is not threadsafe and usually leads to unexpected results. As an exception, getters and setters might be made available to inject the configuration, because they are used before the delegation object is actually used in the process execution.
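As a small illustration of this guideline (a sketch, not part of the PVM API; the class and field names are made up), configuration lives in member fields that are set before execution, while per-execution state goes into process variables:

// Sketch of a stateless, configurable delegation class. 'templateName' is
// configuration, injected before the process runs and never changed at
// runtime. Per-execution state is kept in process variables, not in fields.
public class GeneratePdf implements Activity {

  String templateName;

  public void setTemplateName(String templateName) {
    this.templateName = templateName;
  }

  public void execute(Execution execution) {
    // read and write per-execution state through the execution, not fields
    Integer generated = (Integer) execution.getVariable("generatedCount");
    execution.setVariable("generatedCount", generated == null ? 1 : generated + 1);
    System.out.println("generating pdf from template " + templateName);
  }
}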
TODO: the node behaviour allows for design time as well as runtime behaviour.
The environment component together with the wire context is a kind of Inversion of Control (IoC) container. It reads configuration information that describes how objects should be instantiated, configured and wired together.
The environment is used to retrieve resources and services needed by Activity implementations and the Process Virtual Machine itself. The main purpose is to make various aspects of the Process Virtual Machine configurable so that the PVM and the languages that run on top can work in a standard Java environment as well as an enterprise Java environment.
The environment is partitioned into a set of contexts. Each context can have its own lifecycle. For instance, the application context will stretch over the full lifetime of the application; the block context lives only for the duration of a try-finally block. Typically a block context represents a database transaction. Each context exposes a list of key-value pairs.
To start working with an environment, you need an EnvironmentFactory. One single environment factory object can be used throughout the complete lifetime of the application. So typically this is kept in a static member field. The EnvironmentFactory itself is the application context.
An EnvironmentFactory is typically obtained by parsing a configuration file like this:
static EnvironmentFactory environmentFactory = EnvironmentFactory.parse(
    new ResourceStreamSource("pvm.cfg.xml"));
See javadocs package org.jbpm.stream for more types of stream sources.
There is a default parser in the environment factory that will create DefaultEnvironmentFactorys. The idea is that we'll also support spring as an IoC container. But that is still TODO. Feel free to help us out :-). The parser can be configured with the static setter method EnvironmentFactory.setParser(Parser)
An environment exists for the duration of a try-finally block. This is how an environment block looks like:
Environment environment = environmentFactory.openEnvironment();
try {

  ...

} finally {
  environment.close();
}
The environment block defines another lifespan: the block context. A transaction would be a typical example of an object that is defined in the block context.
Inside such a block, objects can be looked up from the environment by name or by type, with the methods environment.get(String name) or <T> T environment.get(Class<T>).
When an environment is created, it has an application context and a block context.
In the default implementation, the application context and the block context are WireContexts. A WireContext contains a description of how its objects are created and wired together to form object graphs.
To start with a simple example, we'll need a Book:
public class Book {
  ...
  public Book() {}
  ...
}
Then let's create an environment factory that knows how to create book objects:
static EnvironmentFactory environmentFactory = EnvironmentFactory.parse(new StringStreamSource(
    "<environment>" +
    "  <application>" +
    "    <object name='book' class='org.jbpm.examples.ch09.Book' />" +
    "  </application>" +
    "</environment>"
));
Now we'll create an environment block with this environment factory and we'll look up the book in the environment. First the lookup is done by type and secondly by name.
Environment environment = environmentFactory.openEnvironment();
try {

  Book book = environment.get(Book.class);
  assertNotNull(book);

  assertSame(book, environment.get("book"));

} finally {
  environment.close();
}
To prevent that you have to pass the environment as a parameter in all methods, the current environment is maintained in a threadlocal stack:
Environment environment = Environment.getCurrent();
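For example, a delegation class could use the current environment to look up a configured object by type; a minimal sketch reusing the Book configuration and the Activity interface from above (and assuming the activity runs inside an environment block like the ones shown here):

// Sketch: looking up a configured object from inside an activity via the
// thread-local current environment.
public class UseBook implements Activity {

  public void execute(Execution execution) {
    Environment environment = Environment.getCurrent();
    Book book = environment.get(Book.class);
    System.out.println("found book: " + book);
  }
}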
Contexts can be added and removed dynamically. Anything can be exposed as a Context.
public interface Context {

  Object get(String key);
  <T> T get(Class<T> type);
  Set<String> keys();

  ...
}
When doing a lookup on the environment, there is a default search order in which the contexts will be scanned for the requested object. The default order is the inverse of the sequence in which the contexts were added. E.g. if an object is defined in both the application context and in the block context, the block context is considered more applicable and that will be scanned first. Alternatively, an explicit search order can be passed in with the get lookups as an optional parameter.
This section describes how the environment can be configured to use hibernate in a standard Java environment.
01 | <environment>
02 |
03 |   <application>
04 |     <hibernate-session-factory />
05 |     <hibernate-configuration>
06 |       <properties resource="hibernate.properties" />
07 |       <mappings resources="org/jbpm/pvm.hibernate.mappings.xml" />
08 |       <cache-configuration
09 |
11 |     </hibernate-configuration>
12 |   </application>
13 |
14 |   <block>
15 |     <standard-transaction />
16 |     <hibernate-session />
17 |     <pvm-db-session />
18 |   </block>
19 |
20 | </environment>
line 04 specifies a hibernate session factory in the application context. This means that a hibernate session factory is lazily created when it is first needed and cached in the EnvironmentFactory.
A hibernate session factory is built by calling the method buildSessionFactory() on a hibernate configuration. By default, the hibernate configuration will be looked up by type.
line 05 specifies a hibernate configuration.
line 06 specifies that the resource file hibernate.properties should be loaded into the configuration.
line 07 (note the plural form of mappings) specifies that resources org/jbpm/pvm.hibernate.mappings.xml contain references to hibernate mapping files or resources that should be included into the configuration. Also note the plural form of resources. This means that not one, but all the resource files on the whole classpath will be found. This way new library components containing a org/jbpm/pvm.hibernate.mappings.xml resource can plug automatically into the same hibernate session by just being added to the classpath.
Alternatively, individual hibernate mapping files can be referenced with the singular mapping element.
lines 08 - 10 provide a single place to specify the hibernate caching strategy for all the PVM classes and collections.
line 15 specifies a standard transaction. This is a very simple global transaction strategy without recovery that can be used in standard environments to get all-or-nothing semantics over multiple transactional resources.
line 16 specifies the hibernate session that will automatically register itself with the standard transaction.
line 17 specifies a PvmDbSession. That is a class that adds methods that bind to specific queries to be executed on the hibernate session.
Here is a set of default properties to configure hibernate with hsqldb in a standard Java environment.
hibernate.dialect                         org.hibernate.dialect.HSQLDialect
hibernate.connection.driver_class         org.hsqldb.jdbcDriver
hibernate.connection.url                  jdbc:hsqldb:mem:.
hibernate.connection.username             sa
hibernate.connection.password
hibernate.cache.use_second_level_cache    true
hibernate.cache.provider_class            org.hibernate.cache.HashtableCacheProvider
Optionally in development the schema export can be used to create the schema when the session factory is created and drop the schema when the session factory is closed.
hibernate.hbm2ddl.auto create-drop
For more information about hibernate configurations, see the hibernate reference manual.
By default, the <hibernate-session /> will start a hibernate transaction with session.beginTransaction(). The hibernate transaction is then wrapped in an org.jbpm.hibernate.HibernateTransactionResource and that resource is enlisted with the <standard-transaction /> (org.jbpm.tx.StandardTransaction).
Inside of the environment block, the transaction is available through environment.getTransaction(). So inside an environment block, the transaction can be rolled back with environment.getTransaction().setRollbackOnly()
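Putting the previous two paragraphs together, here is a sketch of an environment block that forces a rollback; only environment methods shown earlier in this chapter are used, and the work done on the session is elided:

Environment environment = environmentFactory.openEnvironment();
try {
  PvmDbSession pvmDbSession = environment.get(PvmDbSession.class);
  // ... do some work on the session ...

  // Mark the surrounding standard transaction for rollback; the actual
  // rollback is performed when the environment is closed.
  environment.getTransaction().setRollbackOnly();
} finally {
  environment.close();
}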
When created, the standard transaction registers itself to be notified on the close of the environment. So during the close, the standard transaction will commit or roll back, depending on whether setRollbackOnly() was called.
So in the configuration shown above, each environment block will be a separate transaction. At least, if the hibernate session is used.
Next we'll show how this hibernate persistence is used with a concrete example. The 'persisted process' is a simple three-step process:
The activities in the three nodes will be wait states just like in Section 2.4, “ExternalActivity example”
To make sure we can persist this class, we create the hibernate mapping for it and add it to the configuration like this:
<hibernate-configuration>
  <properties resource="hibernate.properties" />
  <mappings resources="org/jbpm/pvm.hibernate.mappings.xml" />
  <mapping resource="org/jbpm/examples/ch09/state.hbm.xml" />
  <cache-configuration
The next code pieces show the contents of one unit test method. The method will first create the environment factory. Then, in a first transaction, a process definition will be created and saved into the database. Then the next transaction will create a new execution of that process. And the following two transactions will provide external triggers to the execution.
EnvironmentFactory environmentFactory = EnvironmentFactory.parse(new ResourceStreamSource(
  "org/jbpm/examples/ch09/environment.cfg.xml"
));
Then in a first transaction, a process is created and saved in the database. This is typically referred to as deploying a process and it only needs to be done once.
Environment environment = environmentFactory.openEnvironment();
try {
  PvmDbSession pvmDbSession = environment.get(PvmDbSession.class);

  ProcessDefinition processDefinition = ProcessFactory.build("persisted process")
    .node("one").initial().behaviour(new State())
      .transition().to("two")
    .node("two").behaviour(new State())
      .transition().to("three")
    .node("three").behaviour(new State())
  .done();

  pvmDbSession.save(processDefinition);
} finally {
  environment.close();
}
In the previous transaction, the process definition, the nodes and transitions will be inserted into the database tables.
Next we'll show how a new process execution can be started for this process definition. Note that in this case, we provide a business key called 'first'. This will make it easy for us to retrieve the same execution from the database in subsequent transactions. After starting the new process execution, it will wait in node 'one' because the behaviour is a wait state.
environment = environmentFactory.openEnvironment();
try {
  PvmDbSession pvmDbSession = environment.get(PvmDbSession.class);

  ProcessDefinition processDefinition = pvmDbSession.findProcessDefinition("persisted process");
  assertNotNull(processDefinition);

  Execution execution = processDefinition.startExecution("first");
  assertEquals("one", execution.getNode().getName());

  pvmDbSession.save(execution);
} finally {
  environment.close();
}
In the previous transaction, a new execution record will be inserted into the database.
Next we feed an external trigger into this existing process execution. We load the execution, provide a signal and save it back into the database.
environment = environmentFactory.openEnvironment();
try {
  PvmDbSession pvmDbSession = environment.get(PvmDbSession.class);

  Execution execution = pvmDbSession.findExecution("persisted process", "first");
  assertNotNull(execution);
  assertEquals("one", execution.getNode().getName());

  // external trigger that will cause the execution to execute until
  // it reaches the next wait state
  execution.signal();

  assertEquals("two", execution.getNode().getName());

  pvmDbSession.save(execution);
} finally {
  environment.close();
}
The previous transaction will result in an update of the existing execution, reassigning the foreign key to reference another record in the node table.
UPDATE JBPM_EXECUTION SET NODE_=?, DBVERSION_=?, ... WHERE DBID_=? AND DBVERSION_=?
The version in this SQL shows the automatic optimistic locking that is baked into the PVM persistence, so that process persistence can easily scale to multiple JVMs or multiple machines.
In the example code, there is one more transaction, completely similar to the previous one, which takes the execution from node 'two' to node 'three'.
All of this shows that the PVM can move from one wait state to another wait state transactionally. Each transaction corresponds to a state transition.
Note that in case of automatic activities, multiple activities will be executed before the execution reaches a wait state. Typically that is the desired behaviour. If the automatic activities take too long, or you don't want the original transaction to block while waiting for their completion, check out Chapter 11, Asynchronous continuations, to learn how transactions can be demarcated in the process definition; these demarcations can also be seen as safe points during process execution.
TODO: General persistence architecture
TODO: Object references
TODO: Threads, concurrency with respect to forks and joins
TODO: Caching
TODO: Process instance migration
All session facades are called services in the PVM and its related projects. A service is the front door of the API. It has a number of methods that expose the functionality of the component. The service takes care of getting or setting up an environment for each operation that is invoked.
Service methods are implemented through command classes. Each method creates a command object and the command is executed with the execute method of the CommandService. The CommandService is responsible for setting up the environment.
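As an illustration of that pattern, here is a hedged sketch; the interface and class names below are hypothetical and only mirror the description above, they are not the actual PVM API. A service method reduces to wrapping its arguments in a command and handing it to the command service:

// Hypothetical sketch of the command pattern described above; the real
// PVM interfaces may differ in names and signatures.
interface Command<T> {
  T execute(Environment environment) throws Exception;
}

interface CommandService {
  <T> T execute(Command<T> command);
}

public class ExampleService {
  private final CommandService commandService;

  public ExampleService(CommandService commandService) {
    this.commandService = commandService;
  }

  public Execution signal(final String executionKey) {
    // The command service sets up the environment, runs the command inside
    // it (wrapped by the configured interceptors) and closes it again.
    return commandService.execute(new Command<Execution>() {
      public Execution execute(Environment environment) {
        PvmDbSession pvmDbSession = environment.get(PvmDbSession.class);
        Execution execution = pvmDbSession.findExecution("persisted process", executionKey);
        execution.signal();
        pvmDbSession.save(execution);
        return execution;
      }
    });
  }
}

The interceptors mentioned below then simply wrap around that execute call.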
There are three command executors:
Each of the command services can be configured with a list of interceptors that span around the command execution. Following interceptors are available:
The following configuration can be used in default standard persistence situations:
<environment>
  <application>
    <pvm-service />
    <standard-command-service>
      <retry-interceptor />
      <environment-interceptor />
      <transaction-interceptor />
    </standard-command-service>
    ...
  </application>
  ...
</environment>
TODO: xml parser infrastructure
TODO: inherit from ProcessDefinitionImpl, ExecutionImpl
TODO: overriding the default proceed()
TODO: node type implementations
TODO: persistence
TODO: compensation: languages like bpel and bpmn define that as a normal continuation that fits within the process structures available in the pvm (taking a transition and executing a nested node).
http://docs.jboss.org/jbpm/pvm/manual/html_single/
04 September 2009 23:59 [Source: ICIS news]
LONDON (ICIS news)--European low density polyethylene (LDPE) contract prices firmed by €100/tonne ($143/tonne) in early September due to particularly tight supply, mounting ethylene feedstock pressure and signs of demand recovery versus earlier in the year, market players said on Friday.
“With the September ethylene increase and low stocks, we had no option but to increase prices,” said one producer.
“Market tightness is giving sellers the opportunity to push for more than the ethylene increase,” said one buyer.
In the context of recent and current outages both on cracker and polymer level:
“Low stocks and slightly higher demand have caught everybody off balance,” said another supplier.
Larger hikes of up to €120/tonne were reported for LDPE contract on the sell-side, but there was no buyer confirmation to substantiate this at present.
Contract prices for other polyethylene grades were also subject to upward pressure in early September for the same reasons as for LDPE, although market tightness was not as pronounced as for LDPE.
“LDPE is leading the charge, there are still big increases for linear low polyethylene (LLDPE), but it is lagging behind a bit,” said one supplier.
LLDPE contract prices were assessed higher by up to €80/tonne, in line with the ethylene movement and firmer market sentiment.
High density polyethylene (HDPE) also moved up by €80-90/tonne in early September.
Producers reported larger hikes of up to €90-100/tonne for both LLDPE and HDPE, but there was insufficient market confirmation to substantiate this at present.
One producer said it would look for more than €100/tonne during September, anticipating that cracker and polymer outages during September would keep the market tight.
Some buyers said that further upward movement was likely during September, depending on supply/demand.
Producers maintained that demand was healthy in September, after the summer holiday period. One producer said it was maximising its production, but said it could not keep pace, stating that “sales were overachieving expectation month-on-month”.
Buyers, however, described consumption as below par. “We have no alternative but to accept the increases, but our concern is that the end-market is not good,” said one large customer. Converters expressed difficulties in trying to pass on any increases downstream.
Customers were limiting and postponing purchases in September based on speculation that product from SABIC’s new LDPE unit and lower-priced Middle East imports were likely to be available in October.
“There is absolutely necessary buying only in September”, said one trader.
“The cupboard is almost bare, but we expect this to ease in October with imports”, said one reseller.
In early September, LDPE GE Film prices were assessed at €1,130-1,150/tonne FD (free delivered) EU, according to global chemical market intelligence service ICIS pricing.
($1 = €0.70).
http://www.icis.com/Articles/2009/09/04/9245395/european+pe+contracts+firm+on+feedstock+and+tight+supply.html
I’ve used OmniOutliner for a while. I tried out the 1.x version and decided to buy it when 2.0 came out. I’ve used it for everything from outlining articles to making grocery shopping lists. So, I was definitely interested in the new version, especially after seeing a few screenshots of it.
The biggest change in OmniOutliner 3 is the ability to add attachments to your outlines. They’ve really gone all out on this feature; allowing you to attach files, e-mail addresses and URLs (typing them in turns them into an object), images and videos. You can even record audio that is embedded in the document straight from Outliner*. It gives you inline QuickTime controls that are accessed from a disclosure triangle and a link arrow that will open the recording in QuickTime.
They’ve also implemented styles. You can work with named styles* that you apply to words, rows, columns, or the whole document. They’ve also created something they call auto-styles. Basically, OmniOutliner observes the individual types of styling you’ve applied to different portions of your document and creates a new style based off of it and then applies it to all future occurrences. For instance, if you decide to make all of your second level rows “Blueberry” Hoefler Text Italic, OmniOutliner will recognize this and make the change to the style.
The first thing people will notice though, is that notes can now be displayed in line or optionally displayed in a separate pane as in earlier versions.
The latest version of OmniOutliner now provides support for custom export options. This is enabled by use of XSLT*. There is also a new option to export as a Word document*, although this is actually an HTML document (with a .doc extension) using the Microsoft Office and Word XML namespaces.
Printing has acquired a lot of great new features. These are the types of things that I seldom wished for in version 2.x, but now that I have them I wonder why they weren’t always there. My favorite is the ability to filter what is printed on whether or not the row’s box is checked.
There are a whole lot of other new features that I haven’t covered. This is a genuine whole number upgrade and you’re going to love all of the new features that Omni has sewn in. I highly encourage anyone that uses an outliner or anyone that makes lists on occasion to check out what is new in OmniOutliner 3.0. Power users and developers will want to check out the Professional edition.
I do wish that the new utility drawer would open on whatever side of the window has room as Mail’s does. Currently it’s a minor annoyance to have it always open on the left – at least it moves the window for you. The one real problem I encountered was a regression from the previous version – 3.0 took around 75 times as long to open the same OPML file as 2.2.6 did. Assuming that is fixed, OmniOutliner 3 is an improvement in every way.
* These features are only available in the Professional edition.
What do you think of the new version?
Do check out Notebook from Circus Ponies
http://www.oreillynet.com/mac/blog/2005/01/big_improvements_in_a_great_ou.html
Ramiro Polla wrote:
>>>> MAX_PATH is defined to 260 in WinDef.h, and that is actually the maximum
>>>> allowed path length in the Win32 API unless you want to jump through some
>>>> hoops. Paths of up to 32,767 characters (approximately) are allowed, but
>>>> only if they are absolute and start with the magical \\?\ prefix. I guess I
>>>> could do some detection of relative paths and add said magical prefix
>>>> manually if so desired, but the static allocation seems safe enough, and the
>>>> 260 character limit is indeed what a vast majority of Windows programs use.
>>>
>>> Indeed, FFmpeg fails with long names. But if you truncate the long
>>> name, it might turn into a valid name (like Mans said).

Well, sure, I could do dynamic allocation instead, but I don't know what happens when you pass strings longer than MAX_PATH to _wopen; MSDN doesn't say. I don't really see the point though because whatever happens, it won't be what the user expects.

>>>> Updated patch with less tabs (and a rather embarrassing typo fix)
>>>> attached.
>>>>
>>>> Regards,
>>>> Karl Blomster
>>>>
>>>> Index: libavformat/os_support.c
>>>> ===================================================================
>>>> --- libavformat/os_support.c (revision 19266)
>>>> +++ libavformat/os_support.c (working copy)
>>>> @@ -30,6 +30,23 @@
>>>> #include <sys/time.h>
>>>> #include "os_support.h"
>>>>
>>>> +#ifdef HAVE_WIN_UTF8_PATHS
>>>> +#define WIN32_LEAN_AND_MEAN
>>>> +#include <windows.h>
>>>> +#endif
>>>> +
>>>> +#ifdef HAVE_WIN_UTF8_PATHS
>>>> +int winutf8_open(const char *filename, int oflag, int pmode)
>>>> +{
>>>> +    wchar_t wfilename[MAX_PATH * 2];
>>>
>>> Isn't sizeof(wchar_t) == 2?

You could add an enable_win_utf8 parameter to av_open_input_file I guess but that would be a really ugly thing to have in the API and I doubt it'd be OK'd. This patch only changes the API, not the commandline interfaces and whatnot, so the only users of it would be people who use the ffmpeg API, and those people presumably compile ffmpeg themselves anyway and would know if they want UTF-8 support or not.

Regards,
Karl Blomster
http://ffmpeg.org/pipermail/ffmpeg-devel/2009-June/071809.html
Socket.
); String^ strRetPage = ""; //; strRetPage = String::Concat( "Default HTML page on ", server, ":\r\n" ); do { bytes = s->Receive( bytesReceived, bytesReceived->Length, static_cast<SocketFlags>(0) ); strRetPage = String::Concat( strRetPage, Encoding::ASCII->GetString( bytesReceived, 0, bytes ) ); } while ( bytes > 0 ); s->Dispose();]; string page = ""; // Create a socket connection with the specified server and port. using(Socket s = ConnectSocket(server, port)) { if (s == null) return ("Connection failed"); // Send request to the server. s.Send(bytesSent, bytesSent.Length, 0); // Receive the server home page content. int bytes = 0; page = "Default HTML page on " + server + ":\r\n"; // The following will block until the); } }
Imports System.Text
Imports System.IO
Imports System.Net
Imports System.Net.Sockets]

Dim page As String = ""
'.
page =
End Class
Remarks
See also
- System.Net
- System.Net.Cache
- System.Net.Security
- SocketAsyncEventArgs
- Network Programming in the .NET Framework
- Best Practices for System.Net Classes
- Cache Management for Network Applications
- Internet Protocol Version 6
- Network Programming Samples
- Network Tracing in the .NET Framework
- Security in Network Programming
- Socket Performance Enhancements in Version 3.5
https://docs.microsoft.com/en-us/dotnet/api/system.net.sockets.socket?view=netcore-3.1
Note: this file lists major user-visible changes only.
* Changes in Quagga []
- [zebra] "no link-detect" is no longer the default.
The previous release of Quagga always explicitly writes-out the
link-detect configuration state. Therefore, to retain current behavior
save your config with the prior release before updating.
Otherwise, review your configuration. Note, most users will generally
want to have link-detect enabled, and so can just remove 'no
link-detect' from their interface configuration.
This release also adds a global configuration to specify the default,
which can be specified in the zebra configuration as:
default link-detect (on|off)
This will then apply to any interface which does not have link-detect
explicitly configured.
* Changes in Quagga 0.99.24
User-visible changes:
- [pimd] New daemon: pimd provides IPv4 PIM-SSM multicast routing.
- [bgpd] New feature: "next-hop-self all" to override nexthop on iBGP route
reflector setups.
- [bgpd] route-maps have a new action "set ipv6 next-hop peer-address"
- [bgpd] route-maps have a new action "set as-path prepend last-as"
- [bgpd] Update validity checking (particularly MP-BGP / IPv6 routes) was
touched up significantly. Please report possible bugs.
- [ripd] New feature: RIP for IPv4 now supports equal-cost multipath (ECMP)
- [zebra] Multicast RIB support has been extended. It still is IPv4 only.
- [zebra] "no link-detect" is now printed in configurations since it won't
be the default anymore soon. To retain current behaviour, re-save your
configuration after updating to 0.99.24.
Distributor-visible changes:
- --enable-pimd is added to enable pimd. It is considered experimental, though
unless the distribution target is embedded systems with little flash, there
is no reason to not include it in packages.
- --disable-ipv6 no longer exists as an option. It's 2015, your C library
really needs to have IPv6 support by now.
- --disable-netlink no longer exists as an option. It didn't work anyway.
- --disable-solaris no longer exists as an option. It only controlled some
init scripts.
- --enable-isisd is now the default.
- mrlg.cgi is no longer included (it was severely outdated). It can be found
independently at
- build on Linux with the musl C library should now work
* Changes in Quagga 0.99.23
Known issues:
- [bgpd] setting an extcommunity in a route map on a route that already has
an extcommunity attribute will cause bgpd to crash. This issue will be
fixed in a followup minor release.
* Changes in Quagga 0.99.22
- [bgpd] The semantics of default-originate route-map have changed.
The route-map is now used to advertise the default route conditionally.
The old behaviour which allowed to set attributes on the originated
default route is no longer supported.
- [bgpd] There is now a replace-as option to neighbor ... local-as ...
no-prepend. For details, refer to the user documentation.
- [zebra] An FPM interface has been added. This provides an alternate
interface to routing information and is geared at OpenFlow & co.
- [snmp] AgentX is now supported; the old smux backend is considered
deprecated. ospf6d has also had OSPFV3-MIB added.
- [*] several issues with configuration save/load/apply have been fixed,
in particular on ospf "max-metric router-lsa administrative" and
"distribute-list", bgpd "no neighbor activate", isisd "metric-style",
- [*] a lot of bugs have been fixed, please refer to the git log
* Changes in Quagga 0.99.21
- [bgpd] BGP multipath support has been merged
- [bgpd] SAFI (Multicast topology) support has been extended to propagate
the topology to zebra.
- [bgpd] AS path limit functionality has been removed
- [babeld] a new routing daemon implementing the BABEL ad-hoc mesh routing
protocol has been merged.
- [isisd] a major overhaul has been picked up. Please note that isisd is
STILL NOT SUITABLE FOR PRODUCTION USE.
- [*] a lot of bugs have been fixed, please refer to the git log
* Changes in Quagga 0.99.10
- [bgpd] 4-byte AS support added
- [bgpd] MRT format changes to version 2. Those relying on
bgpd MRT table dumps may need to update their tools.
- [bgpd] Added new route-map set statement: "as-path exclude"
- Zebra RIB updates queue has evolved into a multi-level
structure to address RIB consistency issues.
* Changes in Quagga 0.99.2
- [bgpd] Work queues added to bgpd to split up update processing,
particularly beneficial when a peer session goes down. AS_PATH
parsing rewritten to be clearer, more robust and ready for 4-byte.
- [ripd] Simple authentication is no longer the default authentication
mode for ripd. The default is now no-authentication. Any setups which
used simple authentication will probably need to update their
configuration manually.
- [ospfd] 1s dead-interval with sub-second Hellos feature added.
SPF timers now specified in milliseconds, and with adaptive
hold-time support. RFC3137 Stub-router support added. Default ABR
type is now 'cisco'.
- Solaris least privileges support added.
* Changes in Quagga 0.99.1
- Zserv is now buffered via threads and non-blocking in most cases for both
clients and zebra, which should improve responsiveness of daemons when
they must send many messages to zebra.
- 'show thread cpu' now displays both cpu+system and wall-clock time,
where getrusage() is available.
- Background threads added and workqueue API added, with a
'show work-queues' command. Thread scheduling improved slightly.
- Zebra now has a work-queue for RIB processing. See 'show work-queues' in
the zebra daemon vty.
- Support for interface renaming on Linux netlink systems.
- GNU Zebra bgpd merges, including BGP Graceful-restart and "match ip
route-source" command.
- Automatic logging of backtraces should daemons crash to assist in
diagnosis. See the documentation for more information on configuring
logging correctly, and set --enable-gcc-rdynamic if compiling with gcc.
* Changes in Quagga 0.98.0
- Logging facilities upgraded. One can now specify a severity level
for each logging destination. And a new "show logging" command gives
thorough information on the current logging system configuration.
- Watchquagga daemon added. This is not well tested yet. Please try
monitor mode first before enabling restart features. It is important
to make sure that the various timers are configured with appropriate
values for your site.
- BGP route-server support added. See the texinfo documentation.
- OSPF API initialisation is disabled by default even if compiled in. You
can enable it with -a/--apiserver command line switch.
- "write-config integrated" vtysh command replaced with "service
integrated-vtysh-config" command.
- Router id is now handled by zebra daemon and all daemons receive changes
from it. Router id can be overriden in daemons' configurations of course.
To fix common router id in zebra daemon you can either install non-127
address on loopback or use "router-id x.x.x.x" command.
- "secondary" keyword is removed from ip address configuration. All
supported OS'es have their own vision what's secondary address and
how to handle it.
- Zebra no longer enables forwarding by default. If you rely on zebra to
enable forwarding make sure to add '<ip|ip6> forwarding' statements
to your zebra configuration file.
- All libraries are built and used shared, on platforms where libtool
supports shared libraries.
- Router advertisement syntax is changed. In usual cases (if you didn't do
any fancy stuff) it's enough to change lines in configuration from:
"ipv6 nd prefix-advertisement X:X:X:X::/X 2592000 604800 autoconfig on-link"
to:
"ipv6 nd prefix X:X:X:X::/X"
All router advertisement options are documented in texi documentation.
- --enable-nssa configure switch is removed. NSSA support is stable enough.
- Daemons don't look at current directory for config file any more.
* Changes in Quagga 0.96.5
- include files are installed in $(prefix)/include/quagga. Programs
building against these includes should -I$(prefix)/include and e.g.
#include <quagga/routemap.h>
- New option --enable-exampledir puts example files in a separate
directory from $(sysconfdir), easing NetBSD pkgsrc hierarchy rules
compliance.
- New configure options --enable-configfile-mask and
--enable-logfile-mask to set umask values for config and log
values. Masks default to 0600, matching previous behavior.
- Import current CVS isisd from SourceForge, then merge it with
the Quagga's Framework.
* Changes in Quagga 0.96.4
- Further fixes to ospfd, some relating to the PtP revert. Interface
lookups should be a lot more robust now.
- Fix for a remote triggerable crash in vty layer.
- Improvements to ripd, and addition of split horizon support.
- Improved bgpd table support, now dumps at time of day intervals rather
than time from startup intervals. Much improved support for IPv6 table
dumps. show commands for views improved.
* Changes in Quagga 0.96.3
- revert the 'generic PtP' patch. Means Quagga will no longer work with
FreeSWAN, however, on the plus side this gets rid of a lot of niggly bugs
which the PtP patch introduced.
* Changes in Quagga 0.96.2
- Fix crash in ospfd
* Changes in Quagga 0.96.1
- Iron out problem with the privileges definitions
* Changes in Quagga 0.96
- Privilege support, daemons now run with the minimal privileges needed, see
the documentation for details.
- NSSA ABR support in ospfd.
- OSPF-API support merged in.
- 6WIND patch merged in.
* Changes in zebra-0.93
* Changes in bgpd
** Configuration is changed to new format.
* Changes in ospfd
** Crush bugs which reported on Zebra ML is fixed.
** Opaque LSA and TE LSA support is added by KDD R&D Laboratories,
Inc.
* Changes in ospf6d
** Many bugs are fixed.
* Changes in zebra-0.92a
* Changes in bgpd
** Fix "^$" community list bug.
** Below command's Address Family specific configurations are added
nexthop-self
route-reflector-client
route-server-client
soft-reconfiguration inbound
* Changes in zebra
** Treat kernel type routes as EGP routes.
* Changes in zebra-0.92
** Overall security is improved. Default umask is 0077.
* Changes in ripd
** If the output interface is in simple password authentication mode,
subtract one from rtemax.
* Changes in bgpd
** IPv4 multicast and IPv6 unicast configuration is changed to so
called new config. All of AFI and SAFI specific configuration is
moved to "address-family" node. When you have many IPv6 only
configuration, you will see many "no neighbor X:X::X:X activate" line
in your configuration to disable IPv4 unicast NLRI exchange. In that
case please use "no bgp default ipv4-unicast" command to suppress the
output. Until zebra-0.93, old config is still left for compatibility.
Old config
==========
router bgp 7675
bgp router-id 10.0.0.1
redistribute connected
network 192.168.0.0/24
neighbor 10.0.0.2 remote-as 7675
ipv6 bgp network 3ffe:506::/33
ipv6 bgp network 3ffe:1800:e800::/40
ipv6 bgp aggregate-address 3ffe:506::/32
ipv6 bgp redistribute connected
ipv6 bgp neighbor 3ffe:506:1000::2 remote-as 1
New config
==========
router bgp 7675
bgp router-id 10.0.0.1
network 192.168.0.0/24
redistribute connected
neighbor 10.0.0.2 remote-as 7675
neighbor 3ffe:506:1000::2 remote-as 1
no neighbor 3ffe:506:1000::2 activate
!
address-family ipv6
network 3ffe:506::/33
network 3ffe:1800:e800::/40
aggregate-address 3ffe:506::/32
redistribute connected
neighbor 3ffe:506:1000::2 activate
exit-address-family
* Changes in ospfd
** Internal interface treatment is changed. Now ospfd can handle
multiple IP addresses on an interface.
** Redistribution of loopback interface's address works fine.
* Changes in zebra-0.91
** --enable-oldrib configure option is removed.
** HAVE_IF_PSEUDO part is removed. Same feature is now supported by
default.
* Changes in ripd
** When redistributed route is withdrawn, perform poisoned reverse.
* Changes in zebra
** When interface's address is removed, kernel route pointing out to
the address is removed.
** IPv6 RIB is now based upon new RIB code.
** zebra can handle same connected route to one interface.
** New command for interface address. Currently this commands are
only supported on GNU/Linux with netlink interface.
"ip address A.B.C.D secondary"
"ip address A.B.C.D label LABEL"
* Changes in bgpd
** BGP flap dampening bugs are fixed.
** BGP non-blocking TCP connection bug is fixed.
** "show ip bgp summary" shows AS path and community entry number.
** New commands have been added.
"show ip bgp cidr-only"
"show ip bgp ipv4 (unicast|multicast) cidr-only"
"show ip bgp A.B.C.D/M longer-prefixes"
"show ip bgp ipv4 (unicast|multicast) A.B.C.D/M longer-prefixes"
"show ipv6 bgp X:X::X:X/M longer-prefixes"
"show ipv6 mbgp X:X::X:X/M longer-prefixes"
** IPv6 IBGP nexthop change is monitored.
** Unknown transitive attribute is passed with partial flag bit on.
* Changes in ospfd
** Fix bug of LSA MaxAge flood.
** Fix bug of NSSA codes.
* Changes in zebra-0.90
** From this beta release, --enable-unixdomain and --enable-newrib
becomes default. So both options are removed from configure.in. To
revert old behavior please specify below option.
--enable-tcp-zebra # TCP/IP socket is used for protocol daemon and zebra.
--enable-oldrib # Turn on old RIB implementation.
Old RIB implementation will be removed in zebra-0.91.
** From this beta release --enable-multipath is supported. This
option is only effective on GNU/Linux kernel with
CONFIG_IP_ADVANCED_ROUTER and CONFIG_IP_ROUTE_MULTIPATH is set.
--enable-multipath=ARG # ARG must be digit. When ARG is 0 unlimit multipath number.
** From this release we do not include guile files.
* Changes in lib
** newlist.[ch] is merged with linklist.[ch].
** Now Zebra works on MacOS X public beta.
** Access-list can have remark. "access-list WORD remark LINE" define
remark for specified access-list.
** Keys of a key-chain are sorted by their identifier value.
** prefix-list rule is slightly changed. The rule of "len <= ge-value
<= le-value" is changed to "len < ge-value <= le-value".
** According to above prefix-list rule change, add automatic
conversion function of an old rule. ex.) 10.0.0.0/8 ge 8 -> 10.0.0.0/8
le 32
** SMUX can handle SNMP trap.
** In our event library, event thread is executed before any other
thread like timer, read and write event.
** Robust method for writing configuration file and recover from
backing up config file.
** Display "end" at the end of configuration.
** Fix memory leak in vtysh_read().
** Fix memory leak of access-list and prefix-list names.
* Changes in zebra
** UNIX domain socket server of zebra protocol is added.
** Fix PointoPoint interface network bug. The destination network
should be installed into routing table instead of local network.
** Metric value is reflected to kernel routing table.
** "show ip route" display uptime of RIP,OSPF,BGP routes.
** New RIB implementation is added.
Now we have enhanced RIB (routing information base) implementation in
zebra. New RIB has many new features and fixed some bugs which exist
in old RIB code.
*** Static route with distance value
Static route can be specified with administrative distance. The
distance value 255 means it is not installed into the kernel.
Default value of distance for static route is 1.
ip route A.B.C.D/M A.B.C.D <1-255>
ip route A.B.C.D/M IFNAME <1-255>
If the nexthop of the route with the least distance value is
unreachable, the route with the least distance value that has a
reachable nexthop is selected.
ip route 0.0.0.0/0 10.0.0.1
ip route 0.0.0.0/0 11.0.0.1 2
In this case, when 10.0.0.1 is unreachable and 11.0.0.1 is
reachable, the route with nexthop 11.0.0.1 will be installed into the
forwarding table.
zebra> show ip route
S>* 0.0.0.0/0 [2/0] via 11.0.0.1
S 0.0.0.0/0 [1/0] via 10.0.0.1 inactive
If the nexthop is unreachable, "inactive" is displayed. You can
specify any string as IFNAME. The interface does not need to exist
when you configure the route.
ip route 1.1.1.1/32 ppp0
When ppp0 comes up, the route is installed properly.
*** Multiple nexthop routes for one prefix
Multiple nexthop routes can be specified for one prefix. Even the
kernel support only one nexthop for one prefix user can configure
multiple nexthop.
When you configure routes like below, prefix 10.0.0.1 has three
nexthops.
zebra> show ip route
S> 10.0.0.1/32 [1/0] via 10.0.0.2 inactive
via 10.0.0.3 inactive
* is directly connected, eth0
'*' means this nexthop is installed into the kernel.
*** Multipath (more than one nexthop for one prefix) can be installed into the kernel.
When the kernel supports multipath, zebra can install multipath
routes into the kernel. Before doing that, please make sure to pass
--enable-multipath=ARG to the configure script. ARG must be a digit
value. When 0 is specified as ARG, there is no limit on the number of
multipath nexthops. Currently only GNU/Linux with the netlink
interface is supported.
ip route 10.0.0.1/32 10.0.0.2
ip route 10.0.0.1/32 10.0.0.3
ip route 10.0.0.1/32 eth0
zebra> show ip route
S>* 10.0.0.1/32 [1/0] via 10.0.0.2
* via 10.0.0.3
is directly connected, eth0
*** Kernel messages that delete an installed route are handled.
After zebra installs a static or dynamic route into the kernel:
R>* 0.0.0.0/0 [120/3] via 10.0.0.1
If you delete this route outside zebra, old zebra would not reinstall
the route. Now the route is re-processed and the static or dynamic
route is properly reinstalled into the kernel.
** GNU/Linux netlink socket handling is improved to fix race condition
between kernel messages and user command responses.
* Changes in bgpd
** Add show neighbor's routes command.
"show ip bgp neighbors (A.B.C.D|X:X::X:X) routes"
"show ip bgp ipv4 (unicast|multicast) neighbors (A.B.C.D|X:X::X:X) routes"
"show ipv6 bgp neighbors (A.B.C.D|X:X::X:X) routes"
"show ipv6 mbgp neighbors (A.B.C.D|X:X::X:X) routes"
** BGP passive peer support problem is fixed.
** Redistributed IGP nexthop is passed to BGP nexthop.
** On multiaccess media, if the nexthop is reachable, the nexthop is
passed as is.
** Remove zebra-0.88 compatibility commands.
"match ip prefix-list WORD"
"match ipv6 prefix-list WORD"
Instead of above please use below commands.
"match ip address prefix-list WORD"
"match ipv6 address prefix-list WORD"
** Fix bug of holdtimer is not reset when bgp cleared.
** "show ip bgp summary" display peer establish/drop count.
** Change "match ip next-hop" argument from IP address to access-list
name.
** When "bgp enforce-first-as" is enabled, check EBGP peer's update
has it's AS number in the first AS number in AS sequence.
** New route-map command "set community-delete COMMUNITY-LIST" is
added. Community matched the CoMMUNITY-LIST is removed from the
community.
** BGP-MIB implementation is finished.
** When BGP connection comes from unconfigured IP address, close
socket immediately.
** Do not compare router ID when the routes comes from EBGP peer.
When originator ID is same, take shorter cluster-list route. If
cluster-list is same take smaller IP address neighbor's route.
** Add "bgp bestpath as-path ignore" command. When this option is
set, do not concider AS path length when route selection.
** Add "bgp bestpath compare-routerid". When this option is set,
compare router ID when the routes comes from EBGP peer.
** Add "bgp deterministic-med" process.
** BGP flap dampening feature is added.
** When IBGP nexthop is changed, it is reflected to RIB.
** Change "neighbor route-refresh" command to "neighbor capability
route-refresh".
* Changes in ripd
** Change "match ip next-hop" argument from IP address to access-list
name.
** "no ip rip (send|receive)" command accept version number argument.
** Memory leak related classfull network generation is fixed.
** When a route was in the garbage collection process (invalid, with
metric 16) and a router received the same route with a valid metric,
the route was not installed into the zebra RIB, but only into the
ripd RIB. Moreover, it would never get into the zebra RIB, because
ripd wrongly assumed it was already there.
* Change in ospfd
** Fix bug of refreshing default route.
** --enable-nssa turn on undergoing NSSA feature.
** Fix bug of Hello packet's option is not properly set when interface
comes up.
** Reduce unconditional logging.
** Add nexthop to OSPF path only when it is not there.
** When there is no DR on a network (suppose you have only one router
with interface priority 0), its router-LSA did not contain the link
information about this network.
** When you change a priority of interface from/to 0
ISM_NeighborChange event should be scheduled in order to elect new
DR/BDR on the network.
** When we add some LSA into retransmit list we need to check whether
the present old LSA in retransmit list is not more recent than the new
one.
** In states Loading and Full the slave must resend its last Database
Description packet in response to duplicate Database Description
packets received from the master. For this reason the slave must wait
RouterDeadInterval seconds before freeing the last Database
Description packet. Reception of a Database Description packet from
the master after this interval will generate a SeqNumberMismatch
neighbor event. RFC2328 Section 10.8
** Virtual link can not configured in stub area.
** Clear a ls_upd_queue queue of the interface when interface goes
down.
** "no router ospf" unregister redistribution requests from zebra.
** New command for virtual-link configuration is added.
"area A.B.C.D virtual-link A.B.C.D"
"area A.B.C.D virtual-link A.B.C.D hello-interval <1-65535> retransmit-interval <3-65535> transmit-delay <1-65535> dead-interval <1-65535>"
"area A.B.C.D virtual-link A.B.C.D hello-interval <1-65535> retransmit-interval <3-65535> transmit-delay <1-65535> dead-interval <1-65535> authentication-key AUTH_KEY"
"area A.B.C.D virtual-link A.B.C.D authentication-key AUTH_KEY"
"area A.B.C.D virtual-link A.B.C.D hello-interval <1-65535> retransmit-interval <3-65535> transmit-delay <1-65535> dead-interval <1-65535> message-digest-key <1-255> md5 KEY"
"area A.B.C.D virtual-link A.B.C.D message-digest-key <1-255> md5 KEY"
** Clear cryptographic sequence number when neighbor status is changed
to NSM down.
** Make Summary LSA's origination and refreshment as same as other
type of LSA.
** New OSPF pakcet read method. Now maximum packet length may be 65535
bytes (maximum IP packet length).
** Checking the age of the found LSA and if the LSA is MAXAGE we
should call refresh instead of originate.
** Install multipath information to zebra.
** Fix socket descriptor leak when system call failed.
* Changes in ospf6d
** Whole functionality has been rewritten as new code. new command
"show ipv6 ospf6 spf node", "show ipv6 ospf6 spf tree", "show ipv6
ospf6 spf table" has been added.
** Change to do not send garbage route whose nexthop is not linklocal
address.
** "redistribute ospf6" was generated in "router ospf6" in config
file. It is fixed.
** LSDB sync bug is fixed.
** Fix bug of using unavailable route.
* Changes in vtysh
** route-map and access-list configuration is merged into one
configuration.
** /usr/local/etc/Zebra.conf is integrated configuration file. "write
memory" in vtysh will write whole configuration to this file.
** When -b option is specified to vtysh, vtysh read
/usr/local/etc/Zebra.conf file then pass the confuguration to proper
protocol daemon. So make all protocol daemon's configuration file
empty then invoke all daemon. After that vtysh -b will setup saved
configuration.
zebrastart.sh
=============
/usr/local/sbin/zebra -d
/usr/local/sbin/ripd -d
/usr/local/sbin/ospfd -d
/usr/local/sbin/bgpd -d
/usr/local/bin/vtysh -b
* Changes in zebra-0.89
* Changes in lib
** distribute-list can set all interface's access-list and prefix-list
configuration.
* Changes in ripd
** "show ip protocols" display proper distribute-list settings and
distance settings.
** When a route with infinity metric is received, withdraw the route
from the kernel immediately; it used to wait for garbage collection.
** key-chain can be used for simple password authentication.
** RIPv2 MIB getnext interface bug is fixed.
* Changes in vtysh
** --with-libpam enable PAM authentication for vtysh.
** Now vtysh read vtysh.conf. This file should be
${SYSCONFDIR}/etc/vtysh.conf for security reason. Usually it is
/usr/local/etc/vtysh.conf.
** "username WORD nopassword" command is added to vtysh.
* Chagees in ospfd
** NBMA interface support is added.
** OSPF area is sorted by area ID.
** New implementation of OSPF refreesh.
** OSPF-MIB read function is partly added.
* Changes in bgpd
** When the peering is done by ebgp-multihop, nexthop is looked up
like IBGP routes.
** "show ip mbgp" commands are changed to "show ip bgp ipv4
multicast".
** New terminal commands are added.
"
** MBGP soft-reconfiguration command is added.
"clear ip bgp x.x.x.x ipv4 (unicast|multicast) in"
"clear ip bgp x.x.x.x ipv4 (unicast|multicast) out"
"clear ip bgp x.x.x.x ipv4 (unicast|multicast) soft"
"clear ip bgp <1-65535> ipv4 (unicast|multicast) in"
"clear ip bgp <1-65535> ipv4 (unicast|multicast) out"
"clear ip bgp <1-65535> ipv4 (unicast|multicast) soft"
"clear ip bgp * ipv4 (unicast|multicast) in"
"clear ip bgp * ipv4 (unicast|multicast) out"
"clear ip bgp * ipv4 (unicast|multicast) soft"
** MED related commands are added.
"bgp deterministic-med"
"bgp bestpath med confed"
"bgp bestpath med missing-as-worst"
** "bgp default local-preference" command is added.
** BGP confederation peer's routes are passed to zebra like IBGP route.
** Community match command is added.
"show ip bgp community <val>"
"show ip bgp community <val> exact-match"
** EBGP multihop route treatment bug is fixed. Now nexthop is
resolved by IGP routes.
** Some commands are added to show routes by filter-list and community
value.
"
* Changes in zebra
** zebra read interface's address information using getifaddrs() when
it is available.
** Reflect IPv6 interface's address change to protocol daemons.
* Changes in zebra-0.88
* Changes in lib
** "exact-match" option is added to "access-list" and "ipv6
access-list" command. If this option is specified, the prefix and
prefix length is compared as exact match mode.
* Changes in zebra
** New Zebra messages ZEBRA_REDISTRIBUTE_DEFAULT_ADD and
ZEBRA_REDISTRIBUTE_DEFAULT_DELETE are added.
** Default administrative distance value is changed.
Old New
------------------------------------------
system 10 0
kernel 20 0
connected 30 0
static 40 1
rip 50 120
ripng 50 120
ospf 60 110
ospf6 49 110
bgp 70 200(iBGP) 20(eBGP)
------------------------------------------
** Distance value can be passed from protocol daemon to zebra.
** "show ip route" shows [metric/distance] value pair.
** Zebra Protocol is changed to support multi-path route and distance
value.
* Changes in ospfd
** "default-information originate [always]" command is added.
** "default-metric <0-16777214>" command is added.
** "show ip ospf database" command is integrated. LS-ID and AdvRouter can
be specifed. The commands are
show ip ospf database TYPE LS-ID
show ip ospf database TYPE LS-ID ADV-ROUTER
show ip ospf database TYPE LS-ID self-originate
show ip ospf database TYPE self-originate
** route-map support for `redistribute' command are added.
Supported `match' statements are
match interface
match ip address
match next-hop
Supported `set' statements are
set metric
set metric-type
** Pass OSPF metric value to zebra daemon.
* Changes in ripd
** When specified route-map does not exist, it means all deny.
** "default-metric <1-16>" command is added.
** "offset-list ACCESS-LIST-NAME <0-16>" and "offset-list
ACCESS-LIST-NAME <0-16> IFNAME" commands are added.
** "redistribute ROUTE-TYPE metric <0-16>" command is added.
** "default-information originate" command is added.
** "ip split-horizon" and "no ip split-horizon" is added to interface
configuration.
** "no router rip" command is added.
** "ip rip authentication mode (md5|text)" is added to interface
configuration.
** "ip rip authentication key-chain KEY-CHAIN" is added to interface
configuration.
** Pass RIP metric value to zebra daemon.
** Distance manipulation functions are added.
* Changes in bgpd
** Fix bug of next hop treatment for MPLS-VPN route exchange.
** BGP peer MIB is updated.
** Aggregated route has origin IGP, atomic-aggregate and proper
aggregator attribute.
** Suppressed route now installed into BGP table. It is only
suppressed from announcement.
** BGP router-id is properly set after "no router bgp ASN" and "router
bgp ASN".
** Add check whether the nexthop is accessible for IBGP routes.
** Add check whether the nexthop is on a connected network for EBGP routes.
** "dump bgp route" command is changed to "dump bgp route-mrt" for
generating MRT compatible dump output.
** Soft reconfiguration inbound and outbound is supported.
** Route refresh feature is supported.
* Changes in vtysh
** VTY shell is now included into the distribution.
* Changes in zebra-0.87
* Changes in lib
** "show startup-config" command is added.
** "show history" command is added.
** Memory statistics command is changed. New command
show memory all
show memory lib
show memory rip
show memory ospf
show memory bgp
are added.
** Filters can be removed only specify it's name. New command
no access-list NAME
no ip community-list NAME
no ip as-path access-list NAME
no route-map NAME
are added.
** At any node, user can view/save user configuration.
write terminal
write file
write memory
are added to every node in default.
** LCD completion is added. For example, when both "ip" and "ipv6"
commands exist, typing "i" and pressing TAB will expand it to "ip".
* Changes in bgpd
** "show ip bgp" family shows total number of prefixes.
** "no bgp default ipv4-unicast" command is added.
** Extended Communities support is added.
** "no neighbor PEER send-community extended" command is added.
** MPLS-VPN PE-RR support is added.
New address family vpnv4 unicast is introduced.
!
address-family vpnv4 unicast
neighbor PEER activate
network A.B.C.D rd RD tag TAG
exit-address-family
!
To make it route-reflector, please configure it under normal router
bgp ASN.
!
router bgp 7675
no bgp default ipv4-unicast
bgp router-id 10.0.0.100
bgp cluster-id 10.0.0.100
neighbor 10.0.0.1 remote-as 65535
neighbor 10.0.0.1 route-reflector-client
neighbor 10.0.0.2 remote-as 65535
neighbor 10.0.0.2 route-reflector-client
neighbor 10.0.0.3 remote-as 65535
neighbor 10.0.0.3 route-reflector-client
!
address-family vpnv4 unicast
neighbor 10.0.0.1 activate
neighbor 10.0.0.2 activate
neighbor 10.0.0.3 activate
exit-address-family
!
* Changes in ospfd
** Many many bugs are fixed.
* Changes in ripd
** Better interface up/down event handle.
* Changes in zebra
** Better interface up/down event handle.
* Changes in zebra-0.86
* Changes in lib
** Fix bug of exec-timeout command which may cause a crash.
** Multiple same policy for "access-list", "ip prefix-list, "as-path
access-list", "ip community-list" is not duplicated.
** It used to be "ip prefix-list A.B.C.D/M" match routes which mask >=
M. Now default behavior is exact match so it only match routes which
mask == M.
* Changes in bgpd
** "match ip address prefix-list" is added to route-map.
** A route without local preference is evaluated as 100 local preference.
** Select smaller router-id route when other values are same.
** Compare MED only when both routes come from the same neighboring AS.
** "bgp always-compare-med" command is added.
** Now MED value is passed to IBGP peer.
** When a neighbor's filter is configured with a non-existent
access-list, as-path access-list, ip prefix-list or route-map, the
behavior is changed from all-permit to all-deny.
* Changes in ospfd
** Fix bug of external route tag byte order.
** OSPF neighbor deletion bug which caused a crash is fixed.
** Some route calculation bug are fixed.
** Add sanity check with router routing table.
** Fix bug of memory leak about linklist.
** Fix bug of 1-WayReceived in NSM.
** Take care of BIGENDIAN architecture.
** Fix bug of NSM state flapping between ExStart and Exchange.
** Fix bug of Network-LSA originated in stub network.
** Fix bug of MS flag unset.
** Add to schedule router_lsa origination when the interface cost
changes.
** Increment LS age by configured interface transmit_delay.
** distribute-list is reimplemented.
** Fix bug of refresh never occurs.
** Fix bug of summary-LSA reorigination. Correctly copy the
OSPF_LSA_APPROVED flag to the new LSA when a summary-LSA is reoriginated.
** Fix bug of re-origination when a neighbor disappears.
** Fix bug of segmentation fault with DD retransmission.
** Fix network-LSA re-origination problem.
** Fix problem of remaining withdrawn routes on zebra.
* Changes in ripd
** Do not leave from multicast group when interface goes down bug is
fixed.
* Changes in zebra
** Remove client structure when client dies.
** Take care static route when interface goes up/down.
* Changes in zebra-0.85
* Changes in bgpd
** "transparent-nexthop" and "transparenet-as" commands are added.
** Route reflector's originator-id bug is fixed.
* Changes in ospfd
** Fix bug of OSPF LSA memory leak.
** Fix bug of OSPF external route memory leak.
** AS-external-LSA origination bug was fixed.
** LS request treatment is completely rewritten. Now performance is
drastically improved.
* Changes in ripd
** RIPv1 update is done by class-full manner.
* Changes in zebra-0.84b
* Changes in lib
** Fix bug of inet_pton return value handling
* Changes in bgpd
** Fix bug of BGP-4+ link-local address nexthop check for IBGP peer.
** Don't allocate the whole buffer for displaying "show ip bgp". Now
it consumes only one screen's worth of memory.
* Changes in ripd
** Fix debug output string.
** Add RIP peer handling. RIP peer are shown by "show ip protocols".
* Changes in zebra-0.84a
* Changes in bgpd
** Fix serious bug of BGP-4+ peering under IPv6 link-local address.
Due to the bug BGP-4+ peering may not be established.
* Changes in zebra-0.84
* Changes in lib
** IPv6 address and prefix parser is added to VTY by Toshiaki Takada
<takada@zebra.org>. DEFUN string is "X:X::X:X" for IPv6 address,
"X:X::X:X/M" for IPv6 prefix. You can use it like this.
DEFUN (func, cmd, "neighbor (A.B.C.D|X:X::X:X) remote-as <1-65535>")
** VTY configuration is locked during configuration. This is for
avoiding unconditional crush from two terminals modify the
configuration at the same time. "who" command shows which termnal
lock the configuration. VTY which has '*' character at the head of
line is locking the configuration.
** Old logging functions are removed. Functions like
log_open,log_close,openlog are deleted. Instead of that please use
zlog_* functions. zvlog_* used in ospf6d are deleted also.
** "terminal monitor" command is added. "no terminal monitor" is for
disabling. This command simply display logging information to the
VTY.
** dropline.[ch] files are deleted.
* Changes in bgpd
** BGP neighbor configuration are sorted by it's IP address.
** BGP peer configuration and actual peer is separated. This is
preparation for Route Server support.
** "no neighbor PEER" command is added. You can delete neighbor
without specifying AS number.
** "no neighbor ebgp-multihop" command is added.
** "no neighbor port PORT" command is added.
** To conform RFC1771, "neighbor PEER send-community" is default
behavior. If you want to disable sending community attribute,
please specify "no neighbor PEER send-community" to the peer.
** "neighbor maximum-prefix NUMBER" command is added.
** Multi-protocol extention NLRI is proceeded only when the peer is
configured proper Address Family and Subsequent Address Family. If
not, those NLRI are simply ignored.
** Aggregate-address support is improved. Currently below commands
works.
"aggregate-address"
"aggregate-address summary-only"
"no aggregate-address"
"no aggregate-address summary-only"
"ipv6 bgp aggregate-address"
"ipv6 bgp aggregate-address summary-only"
"no ipv6 bgp aggregate-address"
"no ipv6 bgp aggregate-address summary-only"
** redistribute route-map bug is fixed.
** MBGP support becomes default. "configure" option --enable-mbgp is
removed.
** New command "neighbor PEER timers connect <1-65535>" is added.
** New command "neighbor PEER override-capability" is added.
** New command "show ip bgp neighbor A.B.C.D advertised-route" is added.
** New command "show ip bgp neighbor A.B.C.D routes" is added. To use
this command, you have to configure neighbor with
"neighbor A.B.C.D soft-reconfiguration inbound" beforehand.
* Changes in zebra-0.83
* bgpd
** Serious bug fix about fetching global and link-local address at the
same time. Due to this bug, corrupted IPv6 prefix is generated. If
you uses bgpd for BGP-4+ please update to this version. The bug is
introduced in zebra-0.82.
** When bgpd send Notify message, don't use thread manager. It is now
send to neighbor immediately.
* Changes in zebra-0.82
** Solaris 2.6 support is added by Michael Handler
<handler@sub-rosa.com>.
** MBGP support is added by Robert Olsson <Robert.Olsson@data.slu.se>.
Please specify --enable-mbgp to configure script. This option will be
removed in the future and MBGP support will be default.
* Changes in zebra
** When interface goes down, withdraw connected routes from routing
table. When interface goes up, restore the routes to the routing
table.
** `show interface' show interface's statistics on Linux and BSD with
routing socket.
** Now zebra can get MTU value on BSDI/OS.
* Changes in bgpd
** Add capability option support based upon
draft-ietf-idr-bgp4-cap-neg-04.txt.
** Add `show ipv6 bgp prefix-list' command.
** Check self AS appeared in received routes.
** redistribute route-map support is added.
** BGP packet dump feature compatible with MRT.
* Changes in ripd
** Fix bug of `timers basic' command's argument format.
* Changes in ripngd
** Calculate max RTE using interface's MTU value.
* Changes in ospfd
** Some correction to LSU processing.
** Add check for lsa->refresh_list.
* Changes in ospf6d
** Many debug features are added.
* Changes in zebra-0.81
** SNMP support is disabled by default. The --enable-snmp option is
added to the configure script.
* Changes in bgpd
** Fix FSM bug which was introduced in zebra-0.80.
* Changes in zebra-0.80
* access-list
A new access-list name space `ipv6 access-list' is added. At the same
time, the `access-list' statement now only accepts IPv4 prefixes.
Please be careful if you use IPv6 filtering: you will need to change
your configuration. For IPv6 filtering please use `ipv6 access-list'.
In zebra-0.7x, `access-list' could be used for both IPv4 and IPv6
filtering.
! zebra-0.7x
access-list DML-net permit 203.181.89.0/24
access-list DML-net permit 3ffe:506::0/32
access-list DML-net deny any
!
The above configuration is not valid for zebra-0.8x. Please add
`ipv6' before `access-list' when you configure IPv6 filtering.
! zebra-0.8x
access-list DML-net permit 203.181.89.0/24
access-list DML-net deny any
!
ipv6 access-list DML-net permit 3ffe:506::0/32
ipv6 access-list DML-net deny any
!
* prefix-list
A new prefix-list name space `ipv6 prefix-list' is also added,
mirroring the change to `access-list'. `ip prefix-list' now only
accepts IPv4 prefixes. It was a source of confusion that
`ip prefix-list' could be used for both IPv4 and IPv6 filtering; the
name spaces are now separated to make the meaning of each filter
clear. If you use `ip prefix-list' for IPv6 filtering, please change
the statement.
! zebra-0.7x
ip prefix-list 6bone-filter seq 5 permit 3ffe::/17 le 24 ge 24
ip prefix-list 6bone-filter seq 10 permit 3ffe:8000::/17 le 28 ge 28
ip prefix-list 6bone-filter seq 12 deny 3ffe::/16
ip prefix-list 6bone-filter seq 15 permit 2000::/3 le 16 ge 16
ip prefix-list 6bone-filter seq 20 permit 2001::/16 le 35 ge 35
ip prefix-list 6bone-filter seq 30 deny any
!
Now user can explicitly configure it as IPv6 prefix-list.
! zebra-0.8x
ipv6 prefix-list 6bone-filter seq 5 permit 3ffe::/17 le 24 ge 24
ipv6 prefix-list 6bone-filter seq 10 permit 3ffe:8000::/17 le 28 ge 28
ipv6 prefix-list 6bone-filter seq 12 deny 3ffe::/16
ipv6 prefix-list 6bone-filter seq 15 permit 2000::/3 le 16 ge 16
ipv6 prefix-list 6bone-filter seq 20 permit 2001::/16 le 35 ge 35
ipv6 prefix-list 6bone-filter seq 30 deny any
!
* RIP configuration
Previously it was hard to filter only the default route (0.0.0.0/0)
while permitting other routes. Now `ip prefix-list' can be used for
RIP route filtering.
New statement:
`distribute-list prefix PLIST_NAME (in|out) IFNAME'
is added to ripd. So you can, for example, configure the eth0
interface to accept all routes other than the default route.
!
router rip
distribute-list prefix filter-default in eth0
!
ip prefix-list filter-default deny 0.0.0.0/0 le 0
ip prefix-list filter-default permit any
!
* RIPng configuration
Same change is done for ripngd. You can use `ipv6 prefix-list' for
filtering.
!
router ripng
distribute-list prefix filter-default in eth0
!
ipv6 prefix-list filter-default deny ::/0 le 0
ipv6 prefix-list filter-default permit any
!
* BGP configuration
So far, Multiprotocol Extensions for BGP-4 (RFC 2283) configuration
was done with a traditional IPv4 peering statement like the one below.
!
router bgp 7675
neighbor 3ffe:506::1 remote-as 2500
neighbor 3ffe:506::1 prefix-list 6bone-filter out
!
To separate IPv4 and IPv6 configuration, and to retain Cisco
configuration compatibility, IPv6 configuration is now done with
IPv6-specific statements. IPv6 BGP configuration is done with
statements which start with `ipv6 bgp'.
!
router bgp 7675
!
ipv6 bgp neighbor 3ffe:506::1 remote-as 2500
ipv6 bgp neighbor 3ffe:506::1 prefix-list 6bone-filter out
!
At the same time some IPv6 specific commands are deleted from IPv4
configuration.
o redistribute ripng
o redistribute ospf6
o neighbor PEER version BGP_VERSION
o neighbor PEER interface IFNAME
Those commands are only accepted as like below.
o ipv6 bgp redistribute ripng
o ipv6 bgp redistribute ospf6
o ipv6 bgp neighbor PEER version BGP_VERSION
o ipv6 bgp neighbor PEER interface IFNAME
And below new commands are added.
o ipv6 bgp network IPV6_PREFIX
o ipv6 bgp redistribute static
o ipv6 bgp redistribute connected
o ipv6 bgp neighbor PEER remote-as <1-65535> [passive]
o ipv6 bgp neighbor PEER ebgp-multihop [TTL]
o ipv6 bgp neighbor PEER description DESCRIPTION
o ipv6 bgp neighbor PEER shutdown
o ipv6 bgp neighbor PEER route-reflector-client
o ipv6 bgp neighbor PEER update-source IFNAME
o ipv6 bgp neighbor PEER next-hop-self
o ipv6 bgp neighbor PEER timers holdtime <0-65535>
o ipv6 bgp neighbor PEER timers keepalive <0-65535>
o ipv6 bgp neighbor PEER send-community
o ipv6 bgp neighbor PEER weight <0-65535>
o ipv6 bgp neighbor PEER default-originate
o ipv6 bgp neighbor PEER filter-list FILTER_LIST_NAME (in|out)
o ipv6 bgp neighbor PEER prefix-list PREFIX_LIST_NAME (in|out)
o ipv6 bgp neighbor PEER distribute-list AS_LIST_NAME (in|out)
o ipv6 bgp neighbor PEER route-map ROUTE_MAP_NAME (in|out)
And some utility commands are introduced.
o clear ipv6 bgp [PEER]
o show ipv6 bgp neighbors [PEER]
o show ipv6 bgp summary
I hope these changes are easy to understand for current Zebra users...
* To restrict connection to the VTY interface.
It used to be that both IPv4 and IPv6 filters could be specified with
one access-list, and the access-list could then be applied to the VTY
interface with the `access-class' statement in the `line vty' node.
Below is an example in zebra-0.7x.
!
access-list local-only permit 127.0.0.1/32
access-list local-only permit ::1/128
access-list local-only deny any
!
line vty
access-class local-only
!
Now IPv4 and IPv6 filters each have their own name space. It is not
possible to specify IPv4 and IPv6 filters with one access-list. For
setting an IPv6 access-list in `line vty', the `ipv6 access-class'
statement is introduced. Here is the configuration in zebra-0.8x.
!
access-list local-only permit 127.0.0.1/32
access-list local-only deny any
!
ipv6 access-list local-only permit ::1/128
ipv6 access-list local-only deny any
!
line vty
access-class local-only
ipv6 access-class local-only
!
* route-map
New IPv6 related route-map match commands are added.
o match ipv6 address
o match ipv6 next-hop
Please change your configuration if you use IP match statement for
IPv6 route.
zebra-0.7x config
=================
!
access-list all permit any
!
route-map set-nexthop permit 10
match ip address all
set ipv6 next-hop global 3ffe:506::1
set ipv6 next-hop local fe80::cbb5:591a
!
zebra-0.8x config
=================
!
ipv6 access-list all permit any
!
route-map set-nexthop permit 10
match ipv6 address all
set ipv6 next-hop global 3ffe:506::1
set ipv6 next-hop local fe80::cbb5:591a
!
* zebra connection
Protocol daemons such as ripd, bgpd, and ospfd will reconnect to the
zebra daemon when the connection fails. The daemons try to connect to
zebra every 10 seconds for the first three attempts, then the interval
changes to 60 seconds. If ten connection attempts fail, the protocol
daemon gives up connecting to the zebra daemon.
* SNMP support (not yet finished)
Zebra uses the SMUX protocol (RFC 1227) to communicate with an SNMP
agent. Currently lib/smux.c can be compiled only with ucd-snmp-4.0.1
(with the SMUX patch applied); it cannot be compiled with
ucd-snmp-3.6.2.
After applying the patch to ucd-snmp-4.0.1, please configure it with
SMUX module.
% configure --with-mib-modules=smux
After compile & install ucd-snmp-4.0.1, you will need to configure
smuxpeer. I'm now using below configuration.
/usr/local/share/snmp/snmpd.conf
================================
smuxpeer 1.3.6.1.6.3.1 test
The 1.3.6.1.6.3.1 and test values above are a temporary configuration
which is hard-coded in lib/smux.c. Yes, I know it is bad; I'll change
it ASAP.
* HUP signal treatment
From zebra-0.80, ripd will reload its configuration file when it
receives a HUP signal. Other daemons such as bgpd and ospfd will
support HUP signal treatment soon.
* Changes in zebra-0.79
* Changes in zebra
** Broadcast address setting on Linux box bug is fixed.
** Protocol daemon can install connected IPv6 route into the kernel.
** Now zebra can handle blackhole route.
* Changes in ripd
** Add route-map feature for RIP protocol.
** In case a RIP version 2 routing table entry has an IPv4 address and
netmask pair in which host-part bits are set, ignore the entry.
* Changes in ripngd
** Change CMSG_DATA cast from (u_char *) to (int *). (u_char *) does
not work for NetBSD-current on a SparcStation 10.
* Changes in ospfd
** MaxAge LSA treatment is added.
** ABR/ASBR functionality is added.
** Virtual Link functionality is added.
** ABR behaviors IBM/Cisco/Shortcut are added.
* Changes in ospf6d
** Enclosed KAME specific part with #ifdef #endif
* Changes in zebra-0.78
* Changes in lib
** SNMP support is started.
** Now Zebra can work on BSD/OS 4.X.
** Now Zebra can be compiled on vanilla OpenBSD 2.5, but it does not yet work correctly.
* Changes in zebra
** Interface index detection using ioctl() bug is fixed.
** Interface information protocol is changed. Now interface
addition/deletion and interface's address addition/deletion is
separated.
* Changes in bgpd
** BGP hold timer bug is fixed.
** BGP keepalive timer becomes configurable.
* Changes in ripd
** When making reply to rip's REQUEST message, fill in
RIP_METRIC_INFINITY with network byte order using htonl ().
** Pass host byte order address to IN_CLASSC and IN_CLASSB macro.
* Changes in ospfd
** LSA flooding works.
** Fix bug of DD processing.
** Bug in originating router-LSA is fixed.
** LSA structure is changed to support LSA aging.
* Changes in ospf6d
** `ip6' statement in configuration is changed to `ipv6'.
* Changes in zebra-0.77
* Changes in lib
** SIGUSR1 reopen logging file.
** route-map is extended to support multi-protocol routing
information.
** When compiling under GNU libc 2.1 environment don't use inet6-apps.
* Changes in zebra
** Basic IPv6 router advertisement codes added. It is not yet usable.
** IPv6 route addition/deletion bug is fixed.
** `show ip route A.B.C.D' works
* Changes in bgpd
** When an invalid unfeasible routes length arrived, bgpd used to send
a notify and then continue to process the packet. Now bgpd stops
parsing the invalid packet and returns to the main loop.
** BGP-4+ withdrawn routes parse bug is fixed.
** When BGP-4+ information passed to non shared network's peer, trim
link-local next-hop information.
** `no redistribute ROUTE_TYPE' withdraw installed routes from BGP
routing information.
** `show ipv6 route IPV6ADDR' command added.
** BGP start timer has jitter.
** Holdtimer configuration bug is fixed. Now configuration does not
show unconfigured hold time value.
* Changes in ripngd
** Now update timer (default 30 seconds) has +/- 50% jitter value.
** Add timers basic command.
** `network' configuration is dynamically reflected.
** `timers basic <update> <timeout> <garbage>' added.
* Changes in ripd
** Almost all of the code is reconstructed.
** `network' configuration is dynamically reflected.
** RIP timers now conforms to RFC2453. So user can configure update,
timeout, garbage timer.
** `timers basic <update> <timeout> <garbage>' works.
* Changes in ospfd
** Bug of originating network LSA is fixed.
** `no router ospf' core dump bug is fixed.
* Changes in ospf6d
** Redistribute route works.
* Changes in zebra-0.76
* Changes in lib
** configure.in Linux IPv6 detection problem is fixed.
** Include SERVICES file to the distribution
** Update zebra.texi to zebra-0.76.
* Changes in zebra-0.75
* Changes in lib
** `terminal length 0' bug is fixed.
* Changes in zebra
** When zebra starts up, it sweeps all routes previously installed by
zebra. If the -k or --keep_kernel option is specified to the zebra
daemon, this sweep is not performed.
* Changes in ripngd
** Aggregate address command supported. In router ripngd,
`aggregate-address IPV6PREFIX' works.
* Changes in bgpd
** Input route-map bug which caused a segmentation violation is fixed.
** route-map method improved.
** BGP-4+ nexthop detection improved.
** BGP-4+ route re-selection bug is fixed.
** BGP-4+ iBGP route's nexthop calculation works.
** After the connection reaches Established, `show ip bgp neighbor'
displays the BGP TCP connection's source and destination addresses.
** In case of BGP-4+, `show ip bgp neighbor' displays the BGP-4+
global and local nexthops used for originated routes. These addresses
will be used with `next-hop-self'.
* Changes in ospfd
** Fix bug of DR election.
** Set IP precedence field with IPTOS_PREC_INTERNET_CONTROL.
** Schedule NeighborChange event if NSM status change.
** Never include a neighbor in Hello packet, when the neighbor goes
down.
* Changes in zebra-0.74
* Changes in lib
** Now `terminal length 0' means no line output control.
** `line LINES' command deleted. Instead of this please use `terminal
length <0-512>'.
** `terminal length <0-512>' is a per-VTY configuration, so it cannot
be set in the configuration file. If you want to configure
system-wide line control, please use `service terminal-length
<0-512>'. This configuration affects all VTY interfaces.
* Changes in zebra
** Installation of IPv6 route bug is fixed.
* Changes in bgpd
** Very serious bug in bgp_stop () is fixed. When multiple routes to
the same destination existed, bgpd tried to announce the information
to a stopped peer, and an orphan write thread was added. This caused
much strange behavior in bgpd.
** Router-id parsing bug is fixed.
** With BGP-4+, nexthop installation was done with the global address,
but it should be the link-local address. This bug is now fixed.
** When an incoming route-map prepended an AS, the old AS path
remained. Now bgpd frees the old AS path.
** `neighbor PEER weight <0-65535>' command added.
* Changes in ripngd
** Almost all of the code is rewritten to conform to RFC 2080.
* Changes in ospfd
** SPF calculation timer is added. Currently it is set to 30 seconds.
** SPF calculation works now.
** OSPF routing table codes are added.
** OSPF's internal routes installed into the kernel routing table.
** Now `ospfd' works as non-area, non-external route support OSPF
router.
** Call of log_rotate() is removed.
* Changes in ospf6d
** LSA data structure is changed.
** Call of log_rotate() is removed.
* Changes in zebra-0.73
* Changes in lib
** `config terminal' is changed to `configure terminal'.
** `terminal length <0-512>' command is added.
** Variable length argument was specified by `...'. Now all strings
started with character `.' is variable length argument.
* Changes in zebra
** Internal route (such as iBGP, internal OSPF route) handling works
correctly.
** In interface node, `ipv6 address' and `no ipv6 address' works.
** Bug where the interface's address remained after `no ip address' is
fixed.
** Host routes such as IPv4 with a /32 mask and IPv6 with a /128 mask
didn't set RTF_GATEWAY even when they had a gateway. This bug is now
fixed.
* Changes in bgpd
** The `match as-path' argument used to specify the AS path value
itself directly (e.g. ^$). It is changed to specify an `ip as-path
access-list' name.
** iBGP route handle works without getting error from the kernel.
** `set aggregator as AS A.B.C.D' command is added to route-map.
** `set atomic-aggregate' command is added to bgpd's routemap.
** Announcement of atomic aggregate attribute and aggregator attribute
works.
** `update-source' bug is fixed.
** When a route learned from eBGP is announced to iBGP, local
preference was set to zero. But now it set to
DEFAULT_LOCAL_PREF(100).
* Changes in ripd
** RIPv1 route filter bug is fixed.
** Some memory leak is fixed.
* Changes in ospfd
** Fix bug of DR Election.
** Fix bug of adjacency forming.
* Changes in ospf6d
** Clean up logging message.
** Reflect routing information to zebra daemon.
* Changes in zebra-0.72
* Changes in lib
** When getsockname returns an IPv4-mapped IPv6 address, convert it to
an IPv4 address.
* Changes in bgpd
** Change route-map's next-hop related settings.
set ip nexthop -> set ip next-hop
set ipv6 nexthop global -> set ipv6 next-hop global
set ipv6 nexthop local -> set ipv6 next-hop local
** Add `next-hop-self' command.
* Changes in ospfd
** Fix bug of multiple `network area' directive crashes.
* Changes in zebra-0.71
* Changes in lib
** `log syslog' command is added.
** Use getaddrinfo function to bind IPv4/IPv6 server socket.
** `no banner motd' will suppress motd output when user connect to VTY.
** Bind `quit' command to major nodes.
* Changes in zebra
** Point-to-point link address handling bug is fixed.
* Changes in bgpd
** AS path validity check is added. If a malformed AS path is
received, a NOTIFY Malformed AS Path is sent to the peer.
** Use getaddrinfo function to bind IPv4/IPv6 server socket.
* Changes in ripd
** Connected network announcement bug is fixed.
** `broadcast' command is deleted.
** `network' command is added.
** `neighbor' command is added.
** `redistribute' command is added.
** `timers basic' command is added.
** `route' command is added.
* Changes in ripngd
** Fix metric calculation bug.
* Changes in ospfd
** Check sum bug is fixed.
* Changes in ospf6d
** Routing table code is rewritten.
* Changes in zebra-0.70
* Changes in zebra
** Critical routing information base calculation bug is fixed.
** zebra ipv4 message is extended to support external/internal route
flavor.
** Now if an internal route doesn't have a directly connected nexthop,
the nexthop is calculated by looking up the IGP routing table.
* Changes in bgpd
** `neighbor PEER update-source IFNAME' command added as ALIAS to
`neighbor PEER interface IFNAME'.
* Changes in ospfd
** DD null pointer bug is fixed.
* Changes in zebra-0.69
* Changes in zebra
** zebra redistribution supports dynamic notification of route
changes. If you add a static route while zebra is running, it will be
reflected to other protocol daemons which have `redistribute static'
set.
** If static route installation fails due to an error, the static
route is not added to the configuration or the zebra routing table.
** zebra sets forwarding flag to on when it starts up.
** `no ip forwarding' turn off IPv4 forwarding.
** `no ipv6 forwarding' turn off IPv6 forwarding.
** Change `show ipforward' command to `show ip forwarding'.
** Change `show ipv6forward' command to `show ipv6 forwarding'.
** `ip route A.B.C.D/M INTERFACE' works. So you can set `ip route
10.0.0.0/8 eth0'.
* Changes in bgpd
** `neighbor PEER send-community' command is added. If the option is
set, bgpd will send community attribute to the peer.
** When a BGP route has no-export community attribute and
send-community is set to the peer, the route is not announced to the
peer.
* Changes in ripngd
** When ripngd terminates, delete all installed route.
** `redistribute static', `redistribute connected' works.
** Change `debug ripng event' to `debug ripng events'.
** Change `show debug ripng' to `show debugging ripng'.
** Bug of static route deletion is fixed.
* Changes in ospfd
** LS request and LS update can be sent and received.
* Changes in zebra-0.68
* Changes in lib
** DEFUN() is extended to support (a|b|c) statement.
** Input buffer overflow bug is fixed.
* Changes in bgpd
** `ip community-list' is added.
** set community and match community is added to route-map statement.
** aggregate-address A.B.C.D/M partly works. Now it works only
summary-only mode.
* Changes in zebra
** IPv6 network address delete bug is fixed.
* Changes in ospfd
** DR election bug fixed.
** Now Database Description can be sent or received.
** Neighbor State Machine goes to Full state.
* Changes in ospf6d
** router zebra related bug is fixed.
* Changes in zebra-0.67
* Changes in lib
** `service password-encryption' is added for encrypted password.
* Changes in bgpd
** `set as-path prepend ASPATH' is added to route-map command.
** `set weight WEIGHT' is added to route-map command.
** `no set ipv6 nexthop global' and `no set ipv6 nexthop local'
command is added to route-map.
** `neighbor IP_ADDR version BGP_VERSION' command's BGP_VERSION
argument changed.
Old New
=====================
bgp4 4
bgp4+ 4+
bgp4+-draft-00 4-
=====================
If you want to peer with old draft version of BGP-4+, please configure
like below:
router bgp ASN
neighbor PEER version 4-
** Some AS path isn't correctly compared during route selection. Now
it is fixed.
* Changes in ospfd
** `router zebra' is default behavior.
* Changes in ospf6d
** `router zebra' is default behavior.
* Changes in zebra-0.66
* Changes in zebra
** When another daemon such as gated installed routes into the kernel,
zebra blocked. This occurred only with the netlink socket. Now the
socket is set to NONBLOCKING and the problem is fixed. Reported and
fixed by
Patrick Koppen <koppen@rhrk.uni-kl.de>
* Changes in bgpd
** Now `router zebra' is not needed to insert BGP routes into the
kernel. It is default behavior. If you don't want to install the BGP
routes to the kernel, please configure like below:
!
router zebra
no redistribute bgp
!
** redistribute connected works.
** redistribute static now filter local loopback routes and link local
network.
* Changes in ripd
** Some network check is added. Patch is done by Carlos Alberto
Barcenilla <barce@frlp.utn.edu.ar>
* Changes in ripngd
** Sometimes ripngd installed a wrong nexthop into the kernel. This
bug is now fixed.
** Now `router zebra' is not needed to insert RIPng routes into the
kernel. It is the default behavior. If you don't want to install the
RIPng routes into the kernel, please configure like below:
!
router zebra
no redistribute ripng
!
* Changes in zebra-0.65
* Changes in lib
** `C-c' changes the current node to ENABLE_NODE. Previously it
didn't.
** In ENABLE_NODE, the `exit' command closes the VTY connection.
** `service advanced-vty' enables the advanced VTY function. If this
service is specified, one can connect directly to ENABLE_NODE when the
enable password is not set.
** `lines LINES' command is added by Stephen R. van den Berg
<srb@cuci.nl>.
* Changes in zebra
** Basic Linux policy based routing table support is added by Stephen
R. van den Berg <srb@cuci.nl>.
* Changes in bgpd
** route-map command is improved:
`match ip next-hop': New command.
`match metric': New command.
`set metric': Doc fixed.
`set local-preference': DEFUN added.
* Changes in ripd
** Check of announced network is added. Now multicast address is
filtered. Reported by Carlos Alberto Barcenilla
<barce@frlp.utn.edu.ar>
** Check of network 127 is added. Reported by Carlos Alberto
Barcenilla <barce@frlp.utn.edu.ar>
* Changes in ripngd
** Aging route bug is fixed.
** `router zebra' semantics changed. ripngd automatically connect to
zebra.
* Changes in ospfd
** `no router ospf' works.
* Changes in ospf6d
** Bug fix about network vertex.
* Changes in zebra-0.64.1.
This is bug fix release.
* Changes in lib
** Add check of sin6_scope_id in struct sockaddr_in6. For compilation
on implementation which doesn't have sin6_scope_id. Reported by Wim
Biemolt <Wim.Biemolt@ipv6.surfnet.nl>.
* Changes in zebra
** Fix bug of display BGP routes as "O" instead of "B". Reported by
"William F. Maton" <wmaton@enterprise.ic.gc.ca> and Dave Hartzell
<hartzell@greatplains.net>.
* Changes in bgpd
** The `no network IPV6_NETWORK' statement and the `no neighbor
IP_ADDR timers holdtime [TIMER]' statement didn't work. Reported by
Georg Hitsch <georg@atnet.at>. Now both statements work.
* Changes in ospfd
** Last interface is not updated by ospf_if_update(). Reported by
Dave Hartzell <hartzell@greatplains.net>.
* Changes in ospf6d
** Byte order of ifid is changed. Due to this change, this code will
not work with previous version, sorry.
** Fix `show ip route' route type mismatch.
** Fix bug of no network IPV6_NETWORK.
** Important bug fix about intra-area-prefix-lsa.
* Changes in zebra-0.64.
* Changes in lib
** prefix-list based filtering routine is added. Currently used in
bgpd but it will be in other daemons.
* Changes in bgpd
** `no router bgp' works. But network statement is not cleared. This
should be fixed in next beta.
** Route reflector related statement is added.
router bgp ASN
bgp cluster-id a.b.c.d
neighbor a.b.c.d route-reflector-client
is added.
** Prefix list based filtering is added.
router bgp ASN
neighbor a.b.c.d prefix-list PREFIX_LIST_NAME
** Prefix list based routing display works.
show ip bgp prefix-list PREFIX_LIST_NAME
* Changes in ripd
** Fix route metric check bug. Reported from Mr. Carlos Alberto
Barcenilla.
* Changes in ospf6d
** There are many changes. If you are interested in ospf6d, please
see the ospf6d/README file.
* Changes in zebra-0.63 first beta package.
* Changes in lib
** `copy running-config startup-config' command is added.
** prefix length check bug is fixed. Thanks Marlos Barcenilla
<barce@frip.utn.edu.ar>.
* Changes in ospfd
** DR and BDR election works.
** OSPF Hello simple authentication works.
* Changes in ospf6d
** Now ospf6d can be compiled on both Linux and *BSD system.
* Changes in zebra-19990420 snapshot
** `make dist' at top directory works now.
* Changes in lib
** VTY has now access-class to restrict remote connection.
Implemented by Alex Bligh <amb@gxn.net>.
!
line vty
access-class ACCESS-LIST-NAME
!
** `show version' command added. Implemented by Carlos Alberto
Barcenilla <barce@frlp.utn.edu.ar>
* Changes in zebra
** `ip address' command on *BSD bug is fixed.
** `no ip address' works now for IPv4 address.
** Now `write terminal' display `ip address' configuration.
* Changes in bgpd
** Redistribute static works now. Please run both zebra and bgpd.
bgpd.conf should be like this:
!
router zebra
!
router bgp ASN
redistribute static
!
* Changes in guile
** configure --enable-guile turns on zebra-guile build.
** (router-bgp ASN) allocates a real bgp structure.
* Changes in zebra-19990416 snapshot
** Set version to 0.60 for preparation of beta release.
** New directory guile is added for linking with guile interpreter.
* Changes in zebra
** On GNU/Linux Kernel 2.2.x (with netlink support), zebra detects
asynchronous routing updates. *BSD support is not yet finished.
* Changes in bgpd
** `show ip bgp regexp ASPATH_REGEX' uses CISCO like regular expression
instead of RPSL like regular expression. I'm planing to provide RPSL
like regular expression with `show ip bgp rpsl' or something.
* Changes in lib
** Pressing '?' at a variable mandatory argument used to print
nothing; now the VTY outputs a description of the argument. Fixed by
Alex Bligh <amb@gxn.net>
** buffer.c had some ugly bugs. Due to these bugs, the VTY interface
hung when large output data existed. This is fixed. Reported by Alex
Bligh <amb@gxn.net>.
* Changes in ospfd
** DR and BDR information is shown by `show ip ospf interface' command.
* Changes in zebra-19990408 snapshot
* Changes in bgpd
** Old BGP-4+ specification (described in old draft) treatment bug is
fixed. It seems that mrtd uses this format as default. So if you
have problem peering with mrtd and want to use old draft format please
use version statement like this.
neighbor PEER_ADDRESS remote-as ASN
neighbor PEER_ADDRESS version bgp4+-draft-00
** When the AS path was empty (routes generated by bgpd), a SEGV
occurred when announcing the routes to an eBGP peer. Reported by
kad@gibson.skif.net.
** ip as-path access-list command is added.
** neighbor PEER_ADDRESS filter-list AS_LIST [in|out] command is added.
** neighbor PEER_ADDRESS timers holdtimer TIMER command is added.
* Changes in all daemons
** With KAME stack, terminal interface is now bind AF_INET socket
instead of AF_INET6 one.
* Changes in zebra-19990403 snapshot
* Changes in bgpd
** When bgpd has 'router zebra', bgpd automatically selects as its
router ID the highest interface IP address.
** When the AS path is empty (in case of iBGP), it doesn't include any
AS segment. This change is for announcements to gated under iBGP.
* Changes in ospfd
** OSPF hello packet send/receive works.
* Changes in ospf6d
** Yasuhiro Ohara's ospf6d codes is imported. It is under development
and can't be compiled on any platform.
* Changes in zebra-19990327 snapshot
* Changes in bgpd
** When a BGP-4+ connection is made via an IPv6 link-local address,
one has to specify the interface index for the connection. So I've
added an interface statement to the neighbor command. Please specify
the interface name used to obtain the interface index, like below.
This statement only works on GNU/Linux; I'll support BSD ASAP.
router bgp 7675
neighbor fe80::200:f8ff:fe01:5fd3 remote-as 2500
neighbor fe80::200:f8ff:fe01:5fd3 interface sit3
** For disable BGP peering `shutdown' command is added.
router bgp 7675
neighbor 10.0.0.1 shutdown
** `description' command is added to neighbor statement.
router bgp 7675
neighbor 10.0.0.1 description peering with Norway.
** `show ip bgp regexp AS-REGEXP' works again.
show ip bgp regexp AS7675
will show routes which include AS7675.
** When a route made from a `network' statement is sent to a neighbor,
its nexthop is set to self. So if 10.0.0.0/8 is announced to peer A
with source address 192.168.1.1, the route's nexthop is set to
192.168.1.1.
* Changes in zebra
** In zebra/rtread_sysctl.c, the function rtm_read() could overrun the
allocated buffer when the address family is not supported and the
length is big (i.e. a link address). Reported by Achim Patzner
<ap@bnc.net>.
* Changes in ospfd
** Now ospfd receive OSPF packet.
* Changes in zebra-19990319 snapshot
* Changes in configuration and libraries
** User can disable IPv6 feature and/or pthread feature by configure
option.
To disable IPv6: configure --disable-ipv6
To disable pthread: configure --disable-pthread
** User can disable specified daemon by configure option.
Don't make zebra: configure --disable-zebra
Don't make bgpd: configure --disable-bgpd
Don't make ripd: configure --disable-ripd
Don't make ripngd: configure --disable-ripngd
Don't make ospfd: configure --disable-ospfd
Don't make ospf6d: configure --disable-ospf6d
** Sample configuration files are installed as 600 file flag.
Suggested by Jeroen Ruigrok/Asmodai <asmodai@wxs.nl>.
** syslog logging feature is added by Peter Galbavy
<Peter.Galbavy@knowledge.com>
** Inclusion of standard header files is reworked by Peter Galbavy
<Peter.Galbavy@knowledge.com>
** Change description from GNU/Linux 2.1.X to GNU/Linux 2.2.X
** If daemon function exists in standard C library use it.
** To generate configure script we upgrade autoconf to 2.13. To
generate Makefile.in we upgrade automake to 1.4.
** doc/texinfo.tex is added to distribution.
** Update ports/pkg/DESCR description.
** Update doc/zebra.texi.
** logfile FILENAME statement deleted. Instead of that please use log
file FILENAME.
* Changes in zebra
* Changes in bgpd
** Communication between zebra and bgpd works now. So if there is
`router zebra' line in bgpd.conf, selected route is installed
into kernel routing table.
** Delete all routes which inserted by bgpd when bgpd dies. If you
want to retain routes even bgpd dies please specify [-r|--retain]
option to bgpd.
** BGP announcement code is reworked. Now bgpd announce selected
routes to other peer.
** All output bgp packet is buffered. It's written to the socket when
it gets ready.
** Output route-map works now. You can specify output route-map by:
neighbor IP_ADDR route-map ROUTE_MAP_NAME out
** New route-map command added.
set ip nexthop IP_ADDR
set ipv6 nexthop global IP_ADDR
** Fix bug about unlock of the route_node structure.
** BGP-4+ support is added. bgpd can listen and speak BGP-4+ packet
specified in RFC2283. You can view IPv6 bgp table by: `show ipv6 bgp'.
** Many packet overflow checks are added.
* Changes in ripd
* Changes in ripngd
* Changes in ospfd
** ospfd work is started by Toshiaki Takada <takada@zebra.org>. Now
several files are included in ospfd directory.
** ospf6d codes are merged from Yasuhiro Ohara <yasu@sfc.wide.ad.jp>'s
ospfd work. Now codes are located in ospf6d directory.
Local variables:
mode: outline
paragraph-separate: "[ ]*$"
end:
|
https://gogs.quagga.net/paul/quagga-test-pub/src/23cd586eac3cde789e02c13a1236a4fe33dfc5d9/NEWS
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
I am working on a Django REST Framework web application, for that I have a Django server running in an AWS EC2 Linux box at a particular IP:PORT. There are URLs (APIs) which I can call for specific functionalities.
From a Windows machine, as well as from another local Linux machine (not AWS EC2), I am able to call those APIs successfully and get the desired results.
But the problem is when I am trying to call the APIs from within the same EC2 Linux box.
A simple code I wrote to test the call of one API from the same AWS EC2 Linux box:
import requests
vURL = 'http://<ipaddress>:<port>/myapi/'
vSession = requests.Session()
vSession.headers = {'Content-Type': 'application/json', 'Accept': 'application/json'}
vResponse = vSession.get(vURL)
if vResponse.status_code == 200:
print('JSON: ', vResponse.json())
else:
print('GET Failed: ', vResponse)
vSession.close()
This script is returning GET Failed: <Response [403]>.
One thing I am sure of is that there are no authentication-related issues on the EC2 instance, because using this same script I got the actual response on other local Linux machines (not AWS EC2) and also on a Windows machine.
It seems that calling the API (which uses the same IP:PORT of the same AWS EC2 machine) from within that machine is somehow being restricted by the AWS security policies, a firewall, or something else.
Maybe I have to make some changes in settings.py, though as far as I know I have incorporated all the required settings there.
For example, below are the CORS settings that I have incorporated in settings.py:
CORS_ORIGIN_ALLOW_ALL = True
CORS_ALLOW_CREDENTIALS = True
CORS_ALLOW_METHODS = ('GET', 'PUT', 'POST', 'DELETE')
CORS_ORIGIN_WHITELIST = (
< All the IP address that calls this
application are listed here,
this list includes the IP of the
AWS EC2 machine also >
)
Does anyone have any ideas regarding this issue? Please help me to understand the reason of this issue and how to fix this.
Thanks in advance.
|
https://www.edureka.co/community/18308/unable-call-django-rest-from-within-same-aws-ec2-linux-machine?show=18309
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
In this article we are going to construct a couple of simple Arduino-based automatic temperature controlled DC fan circuits which will switch ON a fan, or any other gadget connected to them, when the ambient temperature reaches a pre-determined threshold level. We are going to use a DHT11 sensor and an Arduino for this project.
Overview
The beauty of microcontrollers is that we get very precise control over the peripherals connected to them. In this project the user just needs to input the threshold temperature in the program; the microcontroller will take care of the rest.
There are tons of non-microcontroller based automatic temperature controller projects available around the internet, such as using comparator and transistors.
They are very simple and they do work well, but a problem arises while calibrating the threshold level using a preset resistor or potentiometer.
We calibrate it more or less blindly, and the user may need to use trial and error to find the sweet spot.
These problems are overcome by microcontrollers: in this project the user just needs to enter the temperature in Celsius, so there is no need for calibration.
This project can be used where the internal temperature of a circuit needs to be stabilized, or to save the circuit from overheating.
In diagram 1, we are connecting a CPU fan as output. This setup can be used to control the internal ambient temperature of an enclosed circuit.
When the threshold temperature is reached the fan turns on. When the temperature goes below threshold temperature fan turns off. So it’s basically an automated process.
In diagram 2, we connected a relay for controlling devices which runs on mains voltage such as table fan.
When the room temperature reaches the threshold temperature the fan turns on and turns off when the room cools down.
This may be the best way for saving power and this can be heaven for lazy people who wish others to switch the fan ON when they feel warm.
Circuit Diagram Showing a DC Fan Control
This setup may be deployed for circuits which are enclosed in a box. The LED turns ON when the preset threshold level reached and also turns ON the fan.
Connecting a Relay for Controlling Bigger Fans
This circuit performs a similar function to the previous circuit; here the fan is replaced by a relay.
This circuit can control a table fan, ceiling fan, or any other gadget which can cool down the ambient temperature.
The connected device turns off as soon as the temperature drops below the preset threshold level.
The temperature controlled DC fan circuit diagrams illustrated here are just a few of many possibilities. You may customize the circuit and program for your own purpose.
NOTE 1: #Pin 7 is output.
NOTE 2: This program is compatible with the DHT11 sensor only.
Program for the above explained automatic temperature regulator circuit using Arduino:
Program Code
//--------------------Program developed by R.Girish---------------------//
#include <dht.h>
dht DHT;
#define DHTxxPIN A1
int p = A0;
int n = A2;
int ack;
int op = 7;
int th = 30; // set threshold temperature in Celsius
void setup(){
Serial.begin(9600); // May be removed after testing
pinMode(p,OUTPUT);
pinMode(n,OUTPUT);
pinMode(op,OUTPUT);
digitalWrite(op,LOW);
}
void loop()
{
digitalWrite(p,1);
digitalWrite(n,0);
ack=0;
int chk = DHT.read11(DHTxxPIN);
switch (chk)
{
case DHTLIB_ERROR_CONNECT:
ack=1;
break;
}
if(ack==0)
{
// you may remove these lines after testing, from here
Serial.print("Temperature(°C) = ");
Serial.println(DHT.temperature);
Serial.print("Humidity(%) = ");
Serial.println(DHT.humidity);
Serial.print("\n");
// To here
if (DHT.temperature>=th)
{
delay(3000);
if(DHT.temperature>=th) digitalWrite(op,HIGH);
}
if(DHT.temperature<th)
{
delay(3000);
if(DHT.temperature<th)digitalWrite(op,LOW);
}
}
if(ack==1)
{
// may be removed after testing from here
Serial.print("NO DATA");
Serial.print("\n\n");
// To here
digitalWrite(op,LOW);
delay(500);
}
}
//-------------------------Program developed by R.Girish---------------------//
Note: In the program
int th= 30; // set the threshold temperature in Celsius.
Replace “30” with the desired value.
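If the measured temperature hovers right at the threshold, the output can toggle on and off repeatedly. As an optional refinement (not part of the program above), a small hysteresis band can be used; the 2 degree gap below is an arbitrary choice:
int th = 30;      // turn the output ON at or above 30 degrees Celsius
int thOff = 28;   // turn it OFF only when the reading falls below 28 degrees Celsius
if (DHT.temperature >= th)
{
digitalWrite(op, HIGH);
}
else if (DHT.temperature < thOff)
{
digitalWrite(op, LOW);
}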
Second Design
The second temperature controlled dc fan circuit project discussed below automatically senses the ambient temperature and adjusts the fan motor speed to keep the surrounding temperature under control. This automatic processing is done through an Arduino and a temperature sensor IC LM35.
By: Ankit Negi
If you have any questions regarding this temperature controlled DC fan circuit using an Arduino, you can always use the comment box below and send your thoughts to us. We will try to get back to you at the earliest.
C:\Users\Sri Krishna\Documents\Arduino\fan\fan.ino:1:17: fatal error: dht.h: No such file or directory
I am getting this error when I try compiling your 1st program, please help.
thanks
It may be due to a missing library, or an existing library not matching the DHT11.
Do a Google search for the phrase "fatal error: dht.h" and you will find many forums discussing this issue!
Good innovation Mr. Swags.
Can you please send a book of learning ARDUINO Programs/Codes.
Thank you very much.
Thanks Sasa, I tried to send it to your email, but it seems your email ID is not correct or has problems. It was returned back to my email.
Hi
I have heater but I have to manually turn it on off when its cold. Can you suggest me a simple solution which will keep the room temperature warm throughout night. I intend to use this only in winter.
thanks
Bhargav
This would be better design if it also included humidity as a variable. I want a circuit that will activate relay when humidity is at 45% or less and when outside temps drop below ambient temp. So when both conditions are met relay is activated
you can refer to the following post:
|
https://www.homemade-circuits.com/automatic-temperature-regulator-circuit/
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
mount(2) mount(2)
mount - mount a file system
#include <sys/types.h>
#include <sys/mount.h>
int mount (const char *spec, const char *dir, int mflag,
.../* char *fstyp, const char *dataptr, int datalen*/);
mount requests that a removable file system contained on the block
special file identified by spec be mounted on the directory identified by
dir. spec and dir are pointers to path names. fstyp is the file system
type number. The sysfs(2) system call can be used to determine the file
system type number. If both the MS_DATA and MS_FSS flag bits of mflag
are off, the file system type defaults to the root file system type.
Only if either flag is on is fstyp used to indicate the file system type.
If the MS_DATA flag is set in mflag the system expects the dataptr and
datalen arguments to be present. Together they describe a block of
file-system-specific data whose format depends on the file system type.
Note that MS_FSS is obsolete and is ignored if MS_DATA is also set, but
if MS_FSS is set and MS_DATA is not, dataptr and datalen are both assumed
to be zero.
After a successful call to mount, all references to the file dir refer to
the root directory on the mounted file system.
The low-order bit of mflag is used to control write permission on the
mounted file system: if 1, writing is forbidden; otherwise writing is
permitted according to individual file accessibility.
mount may be invoked only by a process with the super-user privilege. It
is intended for use only by the mount utility.
mount fails if one or more of the following are true:
EACCES Search permission is denied on a component of dir or
spec.
EPERM The calling process does not have the super-user
privilege.
EBUSY dir is currently mounted on, is someone's current
working directory, or is otherwise busy.
EBUSY The device associated with spec is currently mounted.
EBUSY There are no more mount table entries.
EFAULT spec, dir, or datalen points outside the allocated
address space of the process.
EINVAL The super block has an invalid magic number or the
fstyp is invalid.
ELOOP Too many symbolic links were encountered in
translating spec or dir.
ENAMETOOLONG The length of the path argument exceeds {PATH_MAX},
or the length of a path component exceeds {NAME_MAX}
while _POSIX_NO_TRUNC is in effect.
ENOENT None of the named files exists or is a null pathname.
ENOTDIR A component of a path prefix is not a directory.
EREMOTE spec is remote and cannot be mounted.
ENOLINK path points to a remote machine and the link to that
machine is no longer active.
EMULTIHOP Components of path require hopping to multiple remote
machines and the file system type does not allow it.
ETIMEDOUT A component of path is located on a remote file
system which is not available [see intro(2)].
ENOTBLK spec is not a block special device.
ENXIO The device associated with spec does not exist.
ENOTDIR dir is not a directory.
EROFS spec is write protected and mflag requests write
permission.
ENOSPC The file system state in the super-block is not
FsOKAY and mflag requests write permission.
E2BIG The file system's size parameters are larger than the
size of special device spec. Either mkfs(1M) was run
on a different overlapping device or the device has
been changed with fx(1M) since mkfs was run.
EFSCORRUPTED The filesystem has a corruption forcing failure of
the mount.
EWRONGFS The wrong filesystem type was supplied in fstyp, or
there is no filesystem on spec.
It is the responsibility of the caller to assure that the block size of
the device corresponds to the blocksize of the filesystem being mounted.
This is particularly important with CDROM devices, as the default block
size of the device can vary between 512 bytes and 2048 bytes. The
mount(1M) command manages this for filesystems via dks(7M) DIOCSELFLAGS
and DIOCSELECT ioctls.
fx(1M), mkfs(1M), mount(1M), sysfs(2), umount(2), dks(7M), fs(4), xfs(4)
Upon successful completion a value of 0 is returned. Otherwise, a value
of -1 is returned and errno is set to indicate the error.
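As an illustration only (this example is not part of the original
manual page; the device path, file system name, and the header
providing sysfs() are assumptions), a read-only mount might be coded
as follows:
#include <sys/types.h>
#include <sys/mount.h>
#include <sys/fstyp.h>        /* assumed header for sysfs() and GETFSIND */
#include <stdio.h>
int
main(void)
{
    const char *spec = "/dev/dsk/dks0d2s7";   /* hypothetical device */
    const char *dir  = "/mnt";                /* hypothetical mount point */
    int fstyp;
    /* Determine the file system type number, as described above. */
    fstyp = sysfs(GETFSIND, "xfs");
    if (fstyp == -1) {
        perror("sysfs");
        return 1;
    }
    /* Low-order bit of mflag set: writing forbidden.  MS_FSS marks
       the fourth argument as the file system type number. */
    if (mount(spec, dir, 1 | MS_FSS, fstyp) == -1) {
        perror("mount");
        return 1;
    }
    return 0;
}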
|
http://nixdoc.net/man-pages/IRIX/man2/mount.2.html
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
- Junio C Hamano authored
Even though the function is generic enough, <anything>sort() inherits connotations from the standard function qsort() that sorts an array. Rename it to llist_mergesort() and describe the external interface in its header file. This incidentally avoids name clashes with mergesort() some platforms declare in, and contaminate user namespace with, their <stdlib.h>.
Reported-by: Brian Gernhardt
Signed-off-by: Junio C Hamano <gitster@pobox.com>
7365c95d
|
https://gitlab.com/xk/git/blob/f1dd90bd193637eeef772890c37afe3529a665d0/mergesort.h
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
The results of the 2019 Indian General Election came out on 23rd May 2019 and can be viewed on the official website of the Election Commission of India.
In this article, we scrape the data for each constituency and dump it into a JSON file for further analysis.
The EC website has data for each constituency. To get stats for a particular constituency, we first need to select the state from a dropdown and then the constituency from another dropdown which is populated based on the selected state. There are 543 constituencies. If we collected the data manually, we would have to select values from the dropdowns 543 times. But,
That is when Python comes into the picture.
The result home page URL for constituency wise data is
When you change the dropdown value to another state or another constituency, the web page is reloaded and URL is updated. Try changing the state and constituency few times and you will find a pattern in webpage URL.
URL is created using below logic.
Base URL i.e. + U char for Union territories or S for state + state code 01 to 29 + constituency code starting from 1 + .htm?ac= + constituency code
So for example if you want to get the data of Muzaffarnagar constituency in Uttar Pradesh state, URL will change to where S means State (not union territories), 24 is code for Uttar Pradesh and 3 is code for Muzaffarnagar constituency.
Also, you will notice that if you pass the wrong state or constituency code,
404 status is returned.
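To make the pattern concrete, here is a small helper that mirrors the same logic used later in the script; the base URL is left as a placeholder here, just as it is in the article's code:
# Builds a constituency result URL from the pattern described above.
# base_url is a placeholder; state_code looks like "S24", constituency_code like 3.
def build_url(base_url, state_code, constituency_code):
    return "{}{}{}.htm?ac={}".format(base_url, state_code, constituency_code, constituency_code)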
Press F12 in your chrome browser to open the Chrome Dev tools. Select the network tab, check the preserve log check box. When you refresh the page or change the constituency, a new GET request is sent to the server.
The response of this request is pure HTML. To parse the response and fetch the required data, inspect the webpage. You will see that the results are enclosed inside a tbody tag. But there are multiple tbody tags in the HTML source code and none of them has a unique attribute like an id or a class.
Hence we will locate this tbody tag by its index from the top and then fetch data from it.
Let's start writing code to fetch the data.
First, create a directory inside which we will place our files. Now change the working directory to the newly created directory and create a virtual environment using Python3. Using a virtual environment while working with Python is recommended.
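A typical command sequence for that step looks like this (the directory and environment names are arbitrary choices):
mkdir election-scraper
cd election-scraper
python3 -m venv venv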
Activate the virtual environment. Install the dependencies from the
requirements.txt file using below command.
pip install -r requirements.txt
Create a new python file
election_data_collection.py.
Import the required packages
import requests
from bs4 import BeautifulSoup
import json
from lxml import html
import time
Create a list of states code.
states = ["U0" + str(u) for u in range(1, 8)] + ["S0" + str(s) if s < 10 else "S" + str(s) for s in range(1, 30)]
Every state has a different number of constituencies; we will loop up to the maximum number of constituencies any state has. For now, this number is less than 100. We will terminate the loop for a state when we get
a 404 for any of its constituencies.
base_url = ""
results = list()
for state in states:
for constituency_code in range(1, 99):
url = base_url + state + str(constituency_code) + ".htm?ac=" + str(constituency_code)
# print("URL", url)
response = requests.get(url)
Parse the response and fetch the required
tbody inside which all the result data is enclosed.
response_text = response.text
soup = BeautifulSoup(response_text, 'lxml')
tbodies = list(soup.find_all("tbody"))
# 11th tbody from top is the table we need to parse
tbody = tbodies[10]
For each row in
tbody, collect the candidate's name and votes count and dump into a dictionary. Repeat this for each constituency. Once all the data is collected, dump it into a JSON file to analyze further.
# write data to file
with open("election_data.json", "a+") as f:
f.write(json.dumps(results, indent=2))
Please do not flood the server with too many simultaneous requests. Always send one request at a time. Put some sleep between requests. I have added 0.5 seconds sleep between each request.
"""
Author: Anurag Rana
Site:
"""
import requests
from bs4 import BeautifulSoup
import json
from lxml import html
import time
NOT_FOUND = 404
states = ["U0" + str(u) for u in range(1, 8)] + ["S0" + str(s) if s < 10 else "S" + str(s) for s in range(1, 30)]
# base_url = ""
base_url = ""
results = list()
for state in states:
for constituency_code in range(1, 99):
url = base_url + state + str(constituency_code) + ".htm?ac=" + str(constituency_code)
# print("URL", url)
response = requests.get(url)
# if some state and constituency combination do not exists, 404, continue for next state
if NOT_FOUND == response.status_code:
break
response_text = response.text
soup = BeautifulSoup(response_text, 'lxml')
tbodies = list(soup.find_all("tbody"))
# 11th tbody from top is the table we need to parse
tbody = tbodies[10]
trs = list(tbody.find_all('tr'))
# all data for a constituency seat is stored in this dictionary
seat = dict()
seat["candidates"] = list()
for tr_index, tr in enumerate(trs):
# row at index 0 contains name of constituency
if tr_index == 0:
state_and_constituency = tr.find('th').text.strip().split("-")
seat["state"] = state_and_constituency[0].strip().lower()
seat["constituency"] = state_and_constituency[1].strip().lower()
continue
# first and second rows contains headers, ignore
if tr_index in [1, 2]:
continue
# for rest of the rows, get data
tds = list(tr.find_all('td'))
candidate = dict()
# if this is last row get total votes for this seat
if tds[1].text.strip().lower() == "total":
seat["evm_total"] = int(tds[3].text.strip())
if "jammu & kashmir" == seat["state"]:
seat["migrant_total"] = int(tds[4].text.strip())
seat["post_total"] = int(tds[5].text.strip())
seat["total"] = int(tds[6].text.strip())
else:
seat["post_total"] = int(tds[4].text.strip())
seat["total"] = int(tds[5].text.strip())
continue
else:
candidate["candidate_name"] = tds[1].text.strip().lower()
candidate["party_name"] = tds[2].text.strip().lower()
candidate["evm_votes"] = int(tds[3].text.strip().lower())
if "jammu & kashmir" == seat["state"]:
candidate["migrant_votes"] = int(tds[4].text.strip().lower())
candidate["post_votes"] = int(tds[5].text.strip().lower())
candidate["total_votes"] = int(tds[6].text.strip().lower())
candidate["share"] = float(tds[7].text.strip().lower())
else:
candidate["post_votes"] = int(tds[4].text.strip().lower())
candidate["total_votes"] = int(tds[5].text.strip().lower())
candidate["share"] = float(tds[6].text.strip().lower())
seat["candidates"].append(candidate)
# print(json.dumps(seat, indent=2))
results.append(seat)
print("Collected data for", seat["state"], state, seat["constituency"], constituency_code, len(results))
# Do not send too many hits to server. be gentle. wait.
time.sleep(0.5)
# write data to file
with open("election_data.json", "a+") as f:
f.write(json.dumps(results, indent=2))
Code is available on Github.
Make sure the virtual environment is active before running the code.
# to collect the data
(venv) $ python election_data_collection.py
# to analyze the data
(venv) $ python election_data_analysis.py
It took me around 7 minutes to collect all the data for 543 constituencies with 0.5 seconds delay between each request.
I have created a few sample functions to analyze the data which are in separate file
election_data_analysis.py.
Since we have data with us already in the JSON file, we can use it instead of scraping the site again and again.
To analyze the data, load the content of the
election_data.json file into a JSON object.
import json
with open("election_data.json", "r") as f:
data = f.read()
data = json.loads(data)
To get the highest number of votes any candidate got, use below function.
def candidate_highest_votes():
highest_votes = 0
candidate_name = None
constituency_name = None
for constituency in data:
for candidate in constituency["candidates"]:
if candidate["total_votes"] > highest_votes:
candidate_name = candidate["candidate_name"]
constituency_name = constituency["constituency"]
highest_votes = candidate["total_votes"]
print("Highest votes:", candidate_name, "from", constituency_name, "got", highest_votes, "votes")
To get the number of NOTA votes, use below function.
def nota_votes():
nota_votes_count = sum(
[candidate["total_votes"] for constituency in data for candidate in constituency["candidates"] if
candidate["candidate_name"] == "nota"])
print("NOTA votes casted:", nota_votes_count)
Code is available on GitHub. Feel free to fork it, rewrite it or optimize it.
You may add new features, analyzing points or visualization using graphs, pie charts or bar diagrams.
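As one possible starting point for the visualization idea, the sketch below plots seats won per party with matplotlib. It assumes matplotlib is installed (it is not in the requirements file above) and that election_data.json has the structure produced by the collection script, counting the candidate with the most votes in each constituency as the winner:
import json
from collections import Counter
import matplotlib.pyplot as plt
with open("election_data.json", "r") as f:
    data = json.loads(f.read())
# One seat per constituency, credited to the party of the top candidate.
seats = Counter()
for constituency in data:
    winner = max(constituency["candidates"], key=lambda c: c["total_votes"])
    seats[winner["party_name"]] += 1
top_parties = seats.most_common(10)
plt.bar([party for party, _ in top_parties], [count for _, count in top_parties])
plt.ylabel("Seats won")
plt.xticks(rotation=90)
plt.tight_layout()
plt.show()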
|
https://www.pythoncircle.com/post/683/scraping-data-of-2019-indian-general-election-using-python-request-and-beautifulsoup-and-analyzing-it/
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
Choose between Azure messaging services - Event Grid, Event Hubs, and Service Bus
Azure offers three services that assist with delivering event messages throughout a solution. These services are:
- Azure Event Grid
- Azure Event Hubs
- Azure Service Bus
Although they have some similarities, each service is designed for particular scenarios. This article describes the differences between these services, and helps you understand which one to choose for your application. In many cases, the messaging services are complementary and can be used together.
Event vs. message services
There's an important distinction to note between services that deliver an event and services that deliver a message.
Event
An event is a lightweight notification of a condition or a state change. The publisher of the event has no expectation about how the event is handled. The consumer of the event decides what to do with the notification. Events can be discrete units or part of a series.
Discrete events report state change and are actionable. To take the next step, the consumer only needs to know that something happened. The event data has information about what happened but doesn't have the data that triggered the event. For example, an event notifies consumers that a file was created. It may have general information about the file, but it doesn't have the file itself. Discrete events are ideal for serverless solutions that need to scale.
Series events report a condition and are analyzable. The events are time-ordered and interrelated. The consumer needs the sequenced series of events to analyze what happened.
Message
A message is raw data produced by a service to be consumed or stored elsewhere. The publisher of the message has an expectation about how the consumer handles the message; a contract exists between the two sides.
Comparison of services
- Event Grid: reactive programming; distributes discrete events; use it to react to status changes.
- Event Hubs: big-data pipeline; streams series of events; use it for telemetry and distributed data streaming.
- Service Bus: high-value enterprise messaging; delivers messages; use it for order processing and financial transactions.
Use the services together
In some cases, you use the services side by side to fulfill distinct roles. For example, an e-commerce site can use Service Bus to process orders, Event Hubs to capture site telemetry, and Event Grid to respond to events such as an item being shipped. In other cases, you link them together to form an event and data pipeline in which the captured data ends up in a data warehouse. The following image shows the workflow for streaming the data.
Next steps
See the following articles:
- Asynchronous messaging options in Azure
- Events, Data Points, and Messages - Choosing the right Azure messaging service for your data.
- Storage queues and Service Bus queues - compared and contrasted
- To get started with Event Grid, see Create and route custom events with Azure Event Grid.
- To get started with Event Hubs, see Create an Event Hubs namespace and an event hub using the Azure portal.
- To get started with Service Bus, see Create a Service Bus namespace using the Azure portal.
|
https://docs.microsoft.com/en-us/azure/event-grid/compare-messaging-services?WT.mc_id=ondotnet-hashnode-cephilli
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
Building a React-Based Chat Client With Redux, Part 2: React With Redux and Bindings
We pick up where we left off last time by exploring how to use Redux and bindings in our React.js apps to make them more scalable.
Join the DZone community and get the full member experience.Join For Free
Welcome back! If you missed Part 1 on React and ReactDOM, you can check it out here.
The Full Monty: React With Redux and Bindings
Adding Redux introduces additional complexity, and greatly increases the number of files and folders in the project. It also requires some additional dependencies and a change in process for building and serving the client.
No longer is it sufficient to pull in the libraries in script tags and serve a template from our simple Node/Express server. Now, we'll define our dependencies in package.json and pull them in with
npm install.
New Dependencies
I didn't introduce Babel, Webpack, and Browserify for this; the react-scripts library was sufficient. It not only gives us the ability to use JSX, but it also compiles all the code into a bundle.js file and serves our client, even triggering the browsers to reload when the code changes.
Another library I added was react-redux, the official Redux bindings for React. Redux can be used on its own or with other frameworks like Vue, but if you're using it with React, this library makes it much simpler to integrate the two.
Here are our dependencies now:
"dependencies": { "react": "^16.4.0", "react-dom": "^16.4.0", "react-redux": "^5.0.7", "react-scripts": "1.1.4", "redux": "^4.0.0", "socket.io-client": "^1.5.1" },
Application Structure With Redux
Remember how all the code was in four files under one folder before? It's all spread out now, and the monolithic nature of the original app is no more. There's room for growth.
Source Folder (Top Level)
Components Folder
Constants Folder
Store Folder (Top Level)
Store Folder (Functional Area)
Utils Folder
As I describe the major changes to the app below, all these folders will make more sense. You can peruse the completely refactored code in the latest version, but if you just want to read on, don't worry, I'll provide links to both versions again at the bottom.
Data Flow Revisited
Actions and Action Creators
In Redux, all of the application's state is held in a store, and the only way to modify it is via an action. This means that all those
setState() calls in the Client component are now verboten. Instead, we dispatch actions, which are just plain objects with a 'type' property and optionally some other arbitrary properties depending upon what we're trying to accomplish. The dispatch method actually lives on the store, but as we'll see in a bit, react-redux can inject that method into our components to make life easier. We never have to interact with the store directly.
With actions, we're delegating the change of state that was happening inside components to another part of the system. And since we might need to dispatch the same type of action object from multiple places in the app with different property values, we don't want to duplicate the effort of declaring it (and possibly have the two places get out of sync during ongoing maintenance), so there is the additional concept of action creators. Here's an example:
// Socket related actions export const CONNECTION_CHANGED = 'socket/connection-changed'; export const PORT_CHANGED = 'socket/port-changed'; // The socket's connection state changed export const connectionChanged = isConnected => { return { type: CONNECTION_CHANGED, connected: isConnected, isError: false }; }; // The user selected a different port for the socket export const portChanged = port => { return { type: PORT_CHANGED, port: port }; };
Reducers
Most dispatched actions will be handled by a reducer, which is just a pure function which returns a new value for state. The reducer is passed to the state and the action, and returns a new state object with all the same properties, though it may contain replacement values for all, part, or none of those properties, depending upon the type of action that was sent in. It never mutates the state but it can define the initial state if none is passed in. It can be said to reduce the combination of state and an action to some new object to represent the state. Here's an example that responds to the actions created in the code above:
// Socket reducer import { CONNECTION_CHANGED, PORT_CHANGED } from '../../store/socket/actions'; import { UI } from '../../constants'; const INITIAL_STATE = { connected: false, port: String(UI.PORTS[0]) }; function socketReducer(state=INITIAL_STATE, action) { let reduced; switch (action.type) { case CONNECTION_CHANGED: reduced = Object.assign({}, state, { connected: action.connected, isError: false }); break; case PORT_CHANGED: reduced = Object.assign({}, state, { port: action.port }); break; default: reduced = state; } return reduced; } export default socketReducer;
Notice that the reducer switches on the action's type field and produces a new object to replace the state with. And if the action type doesn't match any of its cases, it simply returns the state object that was passed in.
Creating the Store
We've seen the action creators and the reducers, but what about the store that holds the state?
In the previous version of the app, the state was created in the Client component's constructor as one object with all its properties. Do we just create the store with that same monolithic state object? We could, but a more modular way is to let each reducer contribute the parts of state that it works with.
As you'll notice in the reducer above, the
INITIAL_STATE object has two properties with their initial values. It manages the parts of state that are related to the socket. There are also reducers for the client status and for messaging. By decomposing the state into separate functional areas, we make the app easier to maintain and extend.
The first step to creating the overall state that the store will hold is to combine all the reducers:
import { combineReducers } from 'redux'; import socketReducer from './socket/reducer'; import messageReducer from './message/reducer'; import statusReducer from './status/reducer'; const rootReducer = combineReducers({ socketState: socketReducer, messageState: messageReducer, statusState: statusReducer });
The
rootReducer is a single reducer which chains all the reducers together. So an action could be passed into the
rootReducer and if its type matches a case in any of the combined reducers' switch statements, we may see some transformation, otherwise, no state change will occur.
Also, note that the object we define and pass to the
combineReducers function has properties like:
messageState: messageReducer
Recall that the result of a reducer function is a slice of application state, so we'll name that object appropriately.
With the
rootReducer assembled, we can now create the store, like so:
import { createStore } from 'redux'; const store = createStore(rootReducer); export default store;
The Redux library's
createStore function will invoke the
rootReducer with no state (since it doesn't yet exist) causing all the reducers to supply their own
INITIAL_STATE objects, which will be combined, creating the final state object that is held in the store.
Now we have a store that holds all the application state that was previously created in the Client component's constructor and passed down to its child components as props. And we have action creators that manufacture our action objects, which when dispatched will be handled by reducers, which in turn produce a new state for the application. Wonderful. Only two questions remain:
How is a component notified when the state changes?
How does a component dispatch an action in order to trigger a state change?
Injection
This is where the react-redux library really shines. Let's have a look at the MessageInput component, which manages the text field where a user enters their chat handle:
import React, { Component } from 'react'; import { connect } from 'react-redux'; // CONSTANTS import { Styles } from '../../constants'; // ACTIONS import { outgoingMessageChanged } from '../../store/message/actions'; // Text input for outgoing message class MessageInput extends Component { // The outgoing message text has changed handleOutgoingMessageChange = event => { this.props.outgoingMessageChanged(event.target.value); }; render() { return <span> <label style={Styles.labelStyle}Message</label> <input type="text" name="messageInput" value={this.props.outgoingMessage} onChange={this.handleOutgoingMessageChange}/> </span>; } } // Map required state into props const mapStateToProps = (state) => ({ outgoingMessage: state.messageState.outgoingMessage }); // Map dispatch function into props const mapDispatchToProps = (dispatch) => ({ outgoingMessageChanged: message => outgoingMessageChanged(message) }); // Export props-mapped HOC export default connect(mapStateToProps, mapDispatchToProps)(MessageInput);
First, notice that the
MessageInput class itself doesn't have an export keyword on it.
Next, direct your attention to the bottom of the file, where you'll notice two functions '
mapStateToProps' and '
mapDispatchToProps.' These functions are then passed into the imported react-redux function '
connect,' which returns a function that takes the
MessageInput class as an argument. That function returns a higher-order component which wraps
MessageInput. Ultimately, the HOC is the default export.
The magic that's provided by this HOC is that when the state changes, the
mapStateToProps function will be invoked, returning an object that contains the parts of state that this component cares about. Those properties and values will now show up in the component's props.
Earlier, when we combined the reducers, remember how we named the slices of state? Now you can see where that comes into play. When mapping state to props inside a component, we can see that the application's state object holds the output of each reducer in a separate property.
const mapStateToProps = (state) => ({ outgoingMessage: state.messageState.outgoingMessage });
Finally, the
mapDispatchToProps function is called, which creates a 'dispatcher function' that shows up as a component prop called '
outgoingMessageChanged.'
This is a much better way for components to receive parts of application state than having it passed down a component hierarchy where intervening components may not care about the values, but must nevertheless traffic in them since their children do.
Remember earlier, what the Client component's render function looked like after refactoring to JSX? It was definitely easier to read than the version based on nested
React.createElement() calls, but it still had to pass a ton of state down into those children. I promised we'd streamline that and with react-redux, this is what it looks like now:
render() { return <div style={clientStyle}> <UserInput/> <PortSelector/> <RecipientSelector/> <MessageTransport/> <MessageHistory/> <Footer/> </div>; }
The Socket utility class instance is the only thing that the Client manages now, aside from rendering the other components. The
MessageTransport needs a reference to the socket so it can be included with the
SEND_MESSAGE action that it dispatches, and the
Footer needs the socket instance so the
ConnectButton can call its '
connect' and '
disconnect' methods.
So we've answered the first question: How is a component notified when the state changes? What about the second one: How does a component dispatch an action?
We know that a dispatcher function has been added to the component as a prop by react-redux. That dispatcher calls the appropriate action creator and dispatches the action returned. In the case of the
MessageInput component above, it imports the action creator function '
outgoingMessageChanged' and uses it to create the action to dispatch when the text input changes:
// The outgoing message text has changed handleOutgoingMessageChange = event => { this.props.outgoingMessageChanged(event.target.value); };
And that brings us nearly full circle. Components now have the necessary bits of application state injected into their props, and are able to easily create and dispatch actions that trigger reducers to transform the state. Application state has been decomposed into functional areas along with corresponding action creators and reducers.
The final architectural concern is the socket and its management. Where do we instantiate it, and how communicate with it?
Middleware
It isn’t a view component, but a couple of view components (ConnectButton and SendButton), need to initiate communications with it. Actions like CONNECT_SOCKET or SEND_MESSAGE are a great way to trigger things elsewhere in the application. But the reducers that respond to actions are supposed to be pure functions that only manage the state. How can we send an action and have that trigger a manipulation of the socket then?
The answer is middleware. Remember before when we created the store? Well, actions are part of the API for the store, so it makes sense that something that needs to respond to an action would probably need to be involved.
What we’ll have to do is create a ‘middleware function’ which will instantiate the socket and its various listeners, then return a function which will be called on every action that is dispatched. That function is wrapped by the closure that created the socket instance, and is around for the life of the app. It looks like this:
const socketMiddleware = store => { // The socket's connection state changed const onConnectionChange = isConnected => { store.dispatch(connectionChanged(isConnected)); store.dispatch(statusChanged(isConnected ? 'Connected' : 'Disconnected')); }; // There has been a socket error const onSocketError = (status) => store.dispatch(statusChanged(status, true)); // The client has received a message const onIncomingMessage = message => store.dispatch(messageReceived(message)); // The server has updated us with a list of all users currently on the system const onUpdateClient = message => { const messageState = store.getState().messageState; // Remove this user from the list const otherUsers = message.list.filter(user => user !== messageState.user); // Has our recipient disconnected? const recipientLost = messageState.recipient !== UI.NO_RECIPIENT && !(message.list.find(user => user === messageState.recipient)); // Has our previously disconnected recipient reconnected? const recipientFound = !!messageState.lostRecipient && !!message.list.find(user => user === messageState.lostRecipient); const dispatchUpdate = () => { store.dispatch(clientUpdateReceived(otherUsers, recipientLost)); }; if (recipientLost && !messageState.recipientLost) { // recipient just now disconnected store.dispatch(statusChanged(`${messageState.recipient} ${UI.RECIPIENT_LOST}`, true)); dispatchUpdate(); } else if (recipientFound) { // previously lost recipient just reconnected store.dispatch(statusChanged(`${messageState.lostRecipient} ${UI.RECIPIENT_FOUND}`)); dispatchUpdate(); store.dispatch(recipientChanged(messageState.lostRecipient)); } else { dispatchUpdate(); } }; const socket = new Socket( onConnectionChange, onSocketError, onIncomingMessage, onUpdateClient ); // Return the handler that will be called for each action dispatched return next => action => { const messageState = store.getState().messageState; const socketState = store.getState().socketState; switch (action.type){ case CONNECT_SOCKET: socket.connect(messageState.user, socketState.port); break; case DISCONNECT_SOCKET: socket.disconnect(); break; case SEND_MESSAGE: socket.sendIm({ 'from': messageState.user, 'to': messageState.recipient, 'text': action.message, 'forwarded': false }); store.dispatch(messageSent()); break; default: break; } return next(action) }; };
This middleware function is able to instantiate the socket and respond to the actions that are related to the socket (connecting, disconnecting, sending messages), as well as dispatching actions that arise from the socket itself (connection status change, socket error, message received, update client with list of connected users).
We apply that middleware to the store when we create it like so:
// Root reducer const rootReducer = combineReducers({ socketState: socketReducer, messageState: messageReducer, statusState: statusReducer }); // Store const store = createStore( rootReducer, applyMiddleware(socketMiddleware) );
Conclusion
If you're just starting with React, you certainly don't need to absorb all that toolchain fatigue just to get your feet wet. With just React and ReactDOM, you can make something happen and get to joy pretty quickly.
However, for anything even moderately ambitious, you're probably going to be well-served by upping your application's state-management game. There are plenty of libraries out there to help you with this, Flux, Redux, MobX, RxJS, etc., so you may want to study up on the pros and cons of each. You're definitely going to see an apparent increase in complexity when you refactor how your app handles state, but that's necessary for the app to grow in a maintainable way.
I hope you've enjoyed this little exercise and now have a better understanding of what's involved in refactoring a React-only app to use Redux. Here again, for reference, are the two versions of the project:
React and ReactDOM only (Version 1.0.0)
React, ReactDOM, Reflux, react-reflux (Latest Version)
Are there things I could've done better in either the codebase or this article? Whether React and Redux are new to you or old hat, I'd love to hear your feedback on this exercise in the comments.
Published at DZone with permission of Cliff Hall , DZone MVB. See the original article here.
Opinions expressed by DZone contributors are their own.
{{ parent.title || parent.header.title}}
{{ parent.tldr }}
{{ parent.linkDescription }}{{ parent.urlSource.name }}
|
https://dzone.com/articles/building-a-react-based-chat-client-with-redux-part
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
Swift library for Data Visualization
swiftplot two rendering backends to generate plots:
- Anti-Grain Geometry(AGG) C++ rendering library
- A simple SVG Renderer
To encode the plots as PNG images it uses the lodepng library.
SwiftPlot can also be used in Jupyter Notebooks..
License
SwiftPlot is licensed under
Apache 2.0. View license
How to include the library in your package
Add the library to your projects dependencies in the Package.swift file as shown below.
dependencies: [ .package(url: "", .branch("master")), ],.
How to include the library in your Jupyter Notebook
Add these lines to the first cell:
%install-swiftpm-flags -Xcc -isystem/usr/include/freetype2 -Xswiftc -lfreetype %install '.package(url: "", from: "1.0.28")' Cryptor %install '.package(url: "", .branch("master"))' SwiftPlot AGGRenderer
In order to display the generated plot in the notebook, add this line to a new cell:
%include "EnableJupyterDisplay.swift"
Examples.
Simple Line Graph)
Line Graph with multiple series of data)
Line Graph with Sub Plots stacked horizontally)
Plot functions using LineGraph)
Using a secondary axis in LineGraph
import SwiftPlot import AGGRenderer let x:[Float] = [10,100,263,489] let y:[Float] = [10,120,500,800] let x1:[Float] = [100,200,361,672] let y1:[Float] = [150,250,628,800].
Displaying plots in Jupyter Notebook())
How does this work.
Documentation
LineGraph<T: FloatConvertible, U: FloatConvertible>
BarChart<T: LosslessStringConvertible, U: FloatConvertible>
Histogram<T:FloatConvertible>
ScatterPlot
<T:FloatConvertible, U:FloatConvertible>
SubPlot
PlotDimensions
Pair<T,U>
PlotLabel
PlotTitle
Color
PlotLabel
Axis<T,U>
ScatterPlotSeriesOptions
HistogramSeriesOptions
BarGraphSeriesOptions
Base64Encoder
Limitations
- FloatConvertible supports only Float and Double. We plan to extend this to Int in the future.
|
https://iosexample.com/swift-library-for-data-visualization/
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
Play provides various functions to play musical material stored in Note, Phrase, Part, and Score objects.
There are three options. You may play
- MIDI music (using the MIDI synthesizer),
- microtonal music (using standard MIDI instruments), and
- music using audio files as instruments (very powerful!).
Playing MIDI material
The first function, Play.midi(), is used to play musical material stored in Note, Phrase, Part, and Score objects via the Java MIDI synthesizer.
The other Play functions are more advanced, and are intended for building interactive musical instruments.
You can also make global changes interactively on instrument, volume, panning, and pitch bend.
Playing Microtonal material
You may also play microtonal material.
This is done simply by creating Note objects using float (e.g., 443.1) pitch, such as
from music import * note = Note(443.1, HN) # create a note a bit over A4 (440.0) Play.midi(note) # and play it!
WARNING: For polyphony (to play concurrent microtonal notes), you must play notes on different MIDI channels.
The MIDI standard does not support microtones. Microtones are rendered here using MIDI pitch bend. However, there is only one pitch bend per channel. Therefore, you need to spread concurrent notes across channels. (Also, remember that channel 9 is special – percussion only.)
One way to do this is to create a different Phrase per voice, and store it in its own Part assigned to a unique channel. Another way is to use Play.noteOn() and send notes on different channels.
All other Play functions will work as documented.
Playing Audio material
You may also play music using audio files as instruments.
This is done by using the Play.audio() function.
Play.audio() works similarly to Play.midi(), except that it requires an additional parameter, the list of audio files that will be used as instruments to play the musical material. There should be at least as many audio samples as channels being used.
Here is an example (a variation on furElise.py in Ch. 3):
# furElise.py # Generates the theme from Beethoven's Fur Elise... # using Moondog's "Bird's Lament" audio sample as instrument # for rendering sound. from music import * # theme has some repetition, so break it up to maximize economy # (also notice how we line up corresponding pitches and durations) pitches1 = [E5, DS5, E5, DS5, E5, B4, D5, C5] durations1 = [SN, SN, SN, SN, SN, SN, SN, SN] pitches2 = [A4, REST, C4, E4, A4, B4, REST, E4] durations2 = [EN, SN, SN, SN, SN, EN, SN, SN] pitches3 = [GS4, B4, C5, REST, E4] durations3 = [SN, SN, EN, SN, SN] pitches4 = [C5, B4, A4] durations4 = [SN, SN, EN] # create an empty phrase, and construct theme from the above motifs theme = Phrase() theme.addNoteList(pitches1, durations1) theme.addNoteList(pitches2, durations2) theme.addNoteList(pitches3, durations3) theme.addNoteList(pitches1, durations1) # again theme.addNoteList(pitches2, durations2) theme.addNoteList(pitches4, durations4) # play it a = AudioSample("moondog.Bird_sLament.wav", G4) # load an audio file #Play.midi(theme) Play.audio(theme, [a])
WARNING: For polyphony (to play concurrent notes), you must play notes on different channels.
AudioSamples do not support polyphony. Therefore, you need to spread concurrent notes across channels. (One channel per audio file.)
One way to do this is to create a different Phrase per voice, and store it in its own Part assigned to a unique channel. Another way is to use Play.audioOn() – see below.
The following Play audio functions are also available:
Audio Envelopes
Finally, each AudioSample used as an instrument may be assigned a corresponding Envelope to help shape its attack, delay, sustain, and release.
NOTE: This can help create very interesting musical “instruments” from existing sounds (recorded or downloaded audio files).
For more information, see the Envelope class.
|
https://jythonmusic.me/play/
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
Import xpd to R file
Hi,
I have some issues about import xpd file to R data file which is fine in csv extension. I have been tried many solutions but none of it works.
`Expyriment 0.9.0 (Python 2.7.14)
** Expyriment Data Preprocessor **
found 1 subject_data sets
found 7 variables: ['subject', 'blok', 'emotion_id', 'keypres', 'respon_ok', 'RT', 'accuracy']
reading Totally_New_01_01.xpd
write file: c:\users\user\appdata\local\temp\tmprkpkeo (105 cells in 15 rows)
Traceback (most recent call last):
File "C:\Users\USER\Documents\data_processing.py", line 12, in
delimiter=',',to_R_data_frame=True)
File "C:\Python27\lib\site-packages\expyriment\misc\data_preprocessing.py", line 198, in write_concatenated_data
.write_concatenated_data_to_R_data_frame(output_file=output_file)
File "C:\Python27\lib\site-packages\expyriment\misc\data_preprocessing.py", line 1066, in write_concatenated_data_to_R_data_frame
na.strings=c("NA", "None"))'''.format(tmp_file_name))
File "C:\Python27\lib\site-packages\rpy2\robjects__init__.py", line 320, in call
p = _rparse(text=StrSexpVector((string,)))
RRuntimeError: Error: '\u' used without hex digits in character string starting ""c:\u`
and this my data preprocessing syntax
`from expyriment.misc import data_preprocessing
folder = "C:/Users/USER/Documents/data"
data_preprocessing.write_concatenated_data(folder, 'Totally_New_01',
output_file='Adata',
delimiter=',',to_R_data_frame=True)`
I tried to change the backslash "\" with "/" and escape with "\" and change os.chdir for write file
Also, I've already installed the rpy2, but still doesn't work.
Thanks before.
This seems to be a bug. Thanks for reporting it.
It is not entirely clear why this happens, as
mkstempon my machine does correctly return a string with escaped backslashes. We will look into it. In the meantime, could you give us some more information about your system? The output of
expyriment.misc.get_system_info()would be helpful.
Also, could you maybe let me know what the output of the following is on your machine:
Thanks for the response and here some information about my machine. I hope this helpful.
{'python_expyriment_build_date': 'Thu Mar 9 13:48:59 2017 +0100', 'python_version': '2.7.14', 'python_expyriment_revision': 'c4963ac', 'hardware_internet_connection': 'Yes', 'python_expyriment_version': '0.9.0', 'python_pyserial_version': '', 'hardware_memory_total': '4007 MB', 'os_platform': 'Windows', 'hardware_disk_space_free': '37758 MB', 'hardware_memory_free': '1671 MB', 'python_numpy_version': '1.14.0', 'os_version': '10.0.10240', 'hardware_disk_space_total': '76437 MB', 'hardware_video_card': 'Intel(R) HD Graphics', 'os_name': 'Windows', 'hardware_cpu_details': 'Intel(R) Celeron(R) CPU N3060 @ 1.60GHz', 'hardware_cpu_type': 'Intel64 Family 6 Model 76 Stepping 4, GenuineIntel', 'python_pyopengl_version': '3.1.0', 'python_pygame_version': '1.9.3', 'os_architecture': '64bit', 'hardware_cpu_architecture': 'AMD64', 'os_details': '', 'hardware_ports_parallel_driver': None, 'python_pyparallel_version': '', 'hardware_audio_card': '', 'hardware_ports_parallel': [], 'python_mediadecoder_version': '', 'os_release': '10', 'python_pil_version': '', 'hardware_ports_serial': [], 'settings_folder': None, 'python_sounddevice_version': ''}
import tempfile
print(tempfile.mkstemp())
(3, 'c:\users\tanto\appdata\local\temp\tmp4hs1b6')
And Fladd why did not you make some tutorial Expyriment for beginner to advanced user. I think its very useful and I would like to join the course. Just my opinion
Mmh, I am still a bit puzzled about why this happens. We will look closer into it.
Thanks for the suggestion of a tutorial for more advanced users. This is certainly a good idea.
|
http://forum.cogsci.nl/index.php?p=/discussion/3758/import-xpd-to-r-file
|
CC-MAIN-2020-05
|
en
|
refinedweb
|
Introduction to Styles
- PDF for offline use
-
- Related Articles:
-
- Related APIs:
-
Let us know how you feel about this
Translation Quality
0/250
last updated: 2016-04
Styles allow the appearance of visual elements to be customized. Styles are defined for a specific type and contain values for the properties available on that type.
Xamarin.Forms applications often contain multiple controls that have an identical appearance. For example, an application may have multiple
Label instances that have the same font options and layout options, as shown in the following XAML code example:
<ContentPage xmlns="" xmlns: <ContentPage.Content> <StackLayout Padding="0,20,0,0"> <Label Text="These labels" HorizontalOptions="Center" VerticalOptions="CenterAndExpand" FontSize="Large" /> <Label Text="are not" HorizontalOptions="Center" VerticalOptions="CenterAndExpand" FontSize="Large" /> <Label Text="using styles" HorizontalOptions="Center" VerticalOptions="CenterAndExpand" FontSize="Large" /> </StackLayout> </ContentPage.Content> </ContentPage>
The following code example shows the equivalent page created in C#:
public class NoStylesPageCS : ContentPage { public NoStylesPageCS () { Title = "No Styles"; Icon = "csharp.png"; Padding = new Thickness (0, 20, 0, 0); Content = new StackLayout { Children = { new Label { Text = "These labels", HorizontalOptions = LayoutOptions.Center, VerticalOptions = LayoutOptions.CenterAndExpand, FontSize = Device.GetNamedSize (NamedSize.Large, typeof(Label)) }, new Label { Text = "are not", HorizontalOptions = LayoutOptions.Center, VerticalOptions = LayoutOptions.CenterAndExpand, FontSize = Device.GetNamedSize (NamedSize.Large, typeof(Label)) }, new Label { Text = "using styles", HorizontalOptions = LayoutOptions.Center, VerticalOptions = LayoutOptions.CenterAndExpand, FontSize = Device.GetNamedSize (NamedSize.Large, typeof(Label)) } } }; } }
Each
Label instance has identical property values for controlling the appearance of the text displayed by the
Label. This results in the appearance shown in the following screenshots:
Setting the appearance of each individual control can be repetitive and error prone. Instead, a style can be created that defines the appearance, and then applied to the required controls.
Creating a Style
The
Style class groups a collection of property values into one object that can then be applied to multiple visual element instances. This helps to reduce repetitive markup, and allows an applications appearance to be more easily changed.
Although styles were designed primarily for XAML-based applications, they can also be created in C#:
Styleinstances created in XAML are typically defined in a
ResourceDictionarythat's assigned to the
Resourcescollection of a control, page, or to the
Resourcescollection of the application.
Styleinstances created in C# are typically defined in the page's class, or in a class that can be globally accessed.
Choosing where to define a
Style impacts where it can be used:
Styleinstances defined at the control level can only be applied to the control and to its children.
Styleinstances defined at the page level can only be applied to the page and to its children.
Styleinstances defined at the application level can be applied throughout the application.
Each
Style instance contains a collection of one or more
Setter objects, with each
Setter having a
Property and a
Value. The
Property is the name of the bindable property of the element the style is applied to, and the
Value is the value that is applied to the property.
Each
Style instance can be explicit, or implicit:
- An explicit
Styleinstance is defined by specifying a
TargetTypeand an
x:Keyvalue, and by setting the target element's
Styleproperty to the
x:Keyreference. For more information about explicit styles, see Explicit Styles.
- An implicit
Styleinstance is defined by specifying only a
TargetType. The
Styleinstance will then automatically be applied to all elements of that type. Note that subclasses of the
TargetTypedo not automatically have the
Styleapplied. For more information about implicit styles, see Implicit Styles.
When creating a
Style, the
TargetType property is always required. The following code example shows an explicit style (note the
x:Key) created in XAML:
<Style x: <Setter Property="HorizontalOptions" Value="Center" /> <Setter Property="VerticalOptions" Value="CenterAndExpand" /> <Setter Property="FontSize" Value="Large" /> </Style>
In order to apply a
Style, the target object must be a
VisualElement that matches the
TargetType property value of the
Style, as shown in the following XAML code example:
<Label Text="Demonstrating an explicit style" Style="{StaticResource labelStyle}" />
Styles lower in the view hierarchy take precedence over those defined higher up. For example, setting a
Style that sets
Label.TextColor to
Red at the application level will be overridden by a page level style that sets
Label.TextColor to
Green. Similarly, a page level style will be overridden by a control level style. In addition, if
Label.TextColor is set directly on a control property, this takes precedence over any styles.
The articles in this section demonstrate and explain how to create and apply explicit and implicit styles, how to create global styles, style inheritance, how to respond to style changes at runtime, and how to use the in-built styles included in Xamarin.Forms.
What is StyleId?
Prior to Xamarin.Forms 2.2, the
StyleId property was used to identify individual elements in an application for identification in UI testing, and in theme engines such as Pixate. However, Xamarin.Forms 2.2 has introduced the
AutomationId property, which has superseded the
StyleId property. For more information, see Automate Xamarin.Forms testing with Xamarin.UITest and Test Cloud.
Summary
Xamarin.Forms applications often contain multiple controls that have an identical appearance. Setting the appearance of each individual control can be repetitive and error prone. Instead, styles can be created that customize control appearance by grouping and settings properties available on the control.
|
https://docs.mono-android.net/guides/xamarin-forms/user-interface/styles/introduction/
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Hi, I just upgraded R to version 3.3.2 on a Mac OS X system. I suspect something may have gone wrong with the installation (despite a ‘successfully installed’ message) because the R.App is nowhere to be found. Nevertheless, I can open R in ESS, which how I always use it. The problem though, is I can’t install packages from the R command line within ESS. Here is a sample error message:
Advertising
> install.packages("rjags") --- Please select a CRAN mirror for use in this session --- trying URL '' <'> Content type 'application/x-gzip' length 249529 bytes (243 KB) ================================================== downloaded 243 KB The downloaded binary packages are in /var/folders/kf/zkk64rtj5197pzwq94qfls0w0000gn/T//RtmpuSb7P1/downloaded_packages > library(rjags) Loading required package: coda Error : .onLoad failed in loadNamespace() for 'rjags', details: call: dyn.load(file, DLLpath = DLLpath, ...) error: unable to load shared object '/Library/Frameworks/R.framework/Versions/3.3/Resources/library/rjags/libs/rjags.so': dlopen(/Library/Frameworks/R.framework/Versions/3.3/Resources/library/rjags/libs/rjags.so, 10): Library not loaded: /usr/local/lib/libjags.4.dylib Referenced from: /Library/Frameworks/R.framework/Versions/3.3/Resources/library/rjags/libs/rjags.so Reason: image not found Error: package or namespace load failed for ‘rjags’ This happens with other packages as well. Does anyone know what may be going on? Thank you, Gonçalo [[alternative HTML version deleted]] ______________________________________________ ESS-help@r-project.org mailing list
|
https://www.mail-archive.com/ess-help@r-project.org/msg00059.html
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Odoo Help
Odoo is the world's easiest all-in-one management software. It includes hundreds of business apps:
CRM | e-Commerce | Accounting | Inventory | PoS | Project management | MRP | etc.
Datetime Constarints
i have one problem , i have one class named as room details and second one is room booking. we can go to book a room by room booking model. I want to create a constraint to it . Problem is that if one person can book one room at one date&time if second person can book that room may be after that time or before that time.previously i askd this question , and there are some answers also i got but i cant figure it out.I cant able to find out the solution fromr that, please help me.
i will give my code
class room_management(osv.Model):
_name = 'room.management'
_columns = {
'name': fields.char('Name',requierd=True, help='Put a Name for your Room example: Interview room,board room,conference Hall...etc '),
'location':fields.char('Location',requierd=True,help="give the location as city name or street name..etc"),
'floar':fields.char('Floor Details',help='Enter the Foar no or name,like Foar 4B6 or somthing should identification'),
'address':fields.text('Address',requierd=True,help='Detailed Address'),
'no_seats':fields.integer('No of Seats',requierd=True,help='Number of seats occupied in this room'),
'room_no':fields.char('Room No',requierd=True,help='Should be unique',),
}
class room_booking(osv.Model):
_name = 'room.booking'
_columns = {
'room_id' : fields.many2one('room.management', string="Room Booking"),
'duration': fields.integer('Duration'),
'reason': fields.char('Reason',requierd=True ,help="short deatails about booking,Example:Simons Hr interview"),
'start': fields.datetime('Start At',requierd=True),
'end': fields.datetime('End At',requierd=True),
}
Hello Logicious,
This kind of constraints are applicable for creation but also for edition. If you really want to perform input validation overwriting `create` also overwrite `write` as well.
A cleaner solution to cover both cases is to create Python based constraints like the one you can see in `hr_holidays`, please see here:
*
*
Constraints are automatically invoked during creation or edition:
*
*
*
Please also note tha Odoo 8 introduced a new way to declare constraints (and deprecated the above):
*
*
Regards.
you just try this i think its will work fine as your wish :-
def create(self, cr, uid, vals, context=None):
all_room_ids=self.search(cr,uid,[('room_id','=',vals['room_id'])])
for room_id in all_room_ids
room_data=self.browse(cr,uid,room_id)
if ((vals['start'] >= room_data.start) and (vals['end'] <= room_data.start)) or (vals['start'] >= room_data.end):
print'yyyyyyyyyyyyy',room_data
else:
raise osv.except_osv(_('Error!'),_("Room Already booked."))
About This Community
Odoo Training Center
Access to our E-learning platform and experience all Odoo Apps through learning videos, exercises and Quizz.Test it now
|
https://www.odoo.com/forum/help-1/question/datetime-constarints-90372
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
#include <OMX_Audio.h>
VORBIS params
Downmix input from stereo to mono (has no effect on non-stereo streams). Useful for lower-bitrate encoding.
Set bitrate management mode. This turns off the normal VBR encoding, but allows hard or soft bitrate constraints to be enforced by the encoder. This mode can be slower, and may also be lower quality. It is primarily useful for streaming.
Audio band width (in Hz) to which an encoder should limit the audio signal. Use 0 to let encoder decide
Bit rate of the encoded data data. Use 0 for variable rate or unknown bit rates. Encoding is set to the bitrate closest to specified value (in bps)
Number of channels
Sets maximum bitrate (in bps).
Sets minimum bitrate (in bps).
port that this structure applies to
Sets encoding quality to n, between -1 (low) and 10 (high). In the default mode of operation, teh quality level is 3. Normal quality range is 0 - 10.
Sampling rate of the source data. Use 0 for variable or unknown sampling rate.
size of the structure in bytes
OMX specification version information
|
http://limoa.sourceforge.net/docs/1.0/structOMX__AUDIO__PARAM__VORBISTYPE.html
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
I have an application "1stWindow", with two Forms (Form1 & Form2). I have just created Form2, and have declared the following:
private: SC_HANDLE MyServiceHandle;
However, this fails compilation, with:
error C2146: syntax error : missing ';' before identifier 'MyServiceHandle'
If I transfer exactly the same declaration to either Form1, or above main() in 1stWindow.cpp, the compiler does not generate any errors.
It looks as though the compiler is not recognising the SC_HANDLE type, yet I have exactly the same 'using namespace' scope statements in both Form1 and Form2, and if I copy the same #include files from the top of 1stWindow.cpp to Form2, the compiler records
a number of duplicate declarations, which suggests that, as expected, Form2 is picking up the includes in 1stWindow.cpp, because Form2.h is included in 1stWindow.cpp below the point where these are stated.
What, then, is missing in Form2.h, which prevents me from declaring something of SC_HANDLE type, just as I can in Form1.h?
kd?
Hi all
When I publish our form to SharePoint, I published it to a content type, and then enabled a SharePoint document library to use that form template.
I then created a couple of forms. Later I wanted to update the form template, and was expecting my allready created forms to not update with the new design and new fields (like it does with Excel and Word templates), but it did. All allready created forms
also changed.
This can be rather inapropriate if the company wants to change the form template, and keep the old schemas the way they were.
How do you manage that?
Do you allways create new content types when the form needs to be updated?
Regards"
Hall of Fame Twitter Terms of Service Privacy Policy Contact Us Archives Tell A Friend
|
http://www.dotnetspark.com/links/62668-type-unrecognized-2nd-form.aspx
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
Defining Data Types
This section details the following topics:
Data Types
A data type is a reusable entity that can be shared between function blocks. We distinguish between Enum represening an enumeration data type and Entity representing one of all other data types.
Creating a New Data Type
Prerequisites
- You have started your IDE.
- You have selected the Vorto perspective.
- You have created a project (refer to Defining Projects)
Proceed as follows
- In the Vorto Model Project Browser, select the project in the Select Vorto Project drop-down list.
- Right-click in the Datatypes area and choose New Entity (or New Enum, if applicable) from the context menu.
The Create entity type dialog opens:
- Enter a name as Entity Name, for example,
Color.
- Adjust the entries for the input fields Namespace and Version, if necessary.
- Optionally, enter a description in the Description entry field.
- Click Finish.
The new data type (entity
Color) is created. Furthermore, the data type DSL source file (
.type) is generated and displayed in the model editor. The file contains a complete structure according to the DSL syntax with the values given in the preceding steps.
Editing a Data Type
Prerequisites
You have created a data type (refer to Creating a new Data Type).
Proceed as follows
- In the Datatype Models area, click the data type entity you want to edit, for example,
Color.
The DSL editor for the file
*_Color_*.typeopens.
- In the DSL editor, edit the entity according to your needs.
Example
namespace com.mycompany.type version 1.0.0 displayname "Color" description "Type for Color" category demo entity Color { mandatory r as int <MIN 0, MAX 255> mandatory g as int <MIN 0, MAX 255> mandatory b as int <MIN 0, MAX 255> }
|
http://www.eclipse.org/vorto/documentation/editors/datatype.html
|
CC-MAIN-2017-43
|
en
|
refinedweb
|
The tf.estimator framework makes it easy to construct and train machine
learning models via its high-level Estimator API.
Estimator
offers classes you can instantiate to quickly configure common model types such
as regressors and classifiers:
tf.estimator.LinearClassifier: Constructs a linear classification model.
tf.estimator.LinearRegressor: Constructs a linear regression model.
tf.estimator.DNNClassifier: Construct a neural network classification model.
tf.estimator.DNNRegressor: Construct a neural network regression model.
tf.estimator.DNNLinearCombinedClassifier: Construct a neural network and linear combined classification model.
tf.estimator.DNNRegressor: Construct a neural network and linear combined regression model.
But what if none of
tf.estimator's predefined model types meets your needs?
Perhaps you need more granular control over model configuration, such as
the ability to customize the loss function used for optimization, or specify
different activation functions for each neural network layer. Or maybe you're
implementing a ranking or recommendation system, and neither a classifier nor a
regressor is appropriate for generating predictions.
This tutorial covers how to create your own
Estimator using the building
blocks provided in
tf.estimator, which will predict the ages of
abalones based on their physical
measurements. You'll learn how to do the following:
- Instantiate an
Estimator
- Construct a custom model function
- Configure a neural network using
tf.feature_columnand
tf.layers
- Choose an appropriate loss function from
tf.losses
- Define a training op for your model
- Generate and return predictions
Prerequisites
This tutorial assumes you already know tf.estimator API basics, such as
feature columns, input functions, and
train()/
evaluate()/
predict()
operations. If you've never used tf.estimator before, or need a refresher,
you should first review the following tutorials:
- tf.estimator Quickstart: Quick introduction to training a neural network using tf.estimator.
- TensorFlow Linear Model Tutorial: Introduction to feature columns, and an overview on building a linear classifier in tf.estimator.
- Building Input Functions with tf.estimator: Overview of how to construct an input_fn to preprocess and feed data into your models.
An Abalone Age Predictor
It's possible to estimate the age of an abalone (sea snail) by the number of rings on its shell. However, because this task requires cutting, staining, and viewing the shell under a microscope, it's desirable to find other measurements that can predict age.
The Abalone Data Set contains the following feature data for abalone:
The label to predict is number of rings, as a proxy for abalone age.
“Abalone shell” (by Nicki Dugan
Pogue, CC BY-SA 2.0)
Setup
This tutorial uses three data sets.
abalone_train.csv
contains labeled training data comprising 3,320 examples.
abalone_test.csv
contains labeled test data for 850 examples.
abalone_predict
contains 7 examples on which to make predictions.
The following sections walk through writing the
Estimator code step by step;
the full, final code is available
here.
To feed the abalone dataset into the model, you'll need to download and load the
CSVs into TensorFlow
Datasets. First, add some standard Python and TensorFlow
imports, and set up FLAGS:
from __future__ import absolute_import from __future__ import division from __future__ import print_function import argparse import sys import tempfile # Import urllib from six.moves import urllib import numpy as np import tensorflow as tf FLAGS = None
Enable logging:
tf.logging.set_verbosity(tf.logging.INFO)
Then define a function to load the CSVs (either from files specified in command-line options, or downloaded from tensorflow.org):
def maybe_download(train_data, test_data, predict_data): """Maybe downloads training data and returns train and test file names.""" if train_data: train_file_name = train_data else: train_file = tempfile.NamedTemporaryFile(delete=False) urllib.request.urlretrieve( "", train_file.name) train_file_name = train_file.name train_file.close() print("Training data is downloaded to %s" % train_file_name) if test_data: test_file_name = test_data else: test_file = tempfile.NamedTemporaryFile(delete=False) urllib.request.urlretrieve( "", test_file.name) test_file_name = test_file.name test_file.close() print("Test data is downloaded to %s" % test_file_name) if predict_data: predict_file_name = predict_data else: predict_file = tempfile.NamedTemporaryFile(delete=False) urllib.request.urlretrieve( "", predict_file.name) predict_file_name = predict_file.name predict_file.close() print("Prediction data is downloaded to %s" % predict_file_name) return train_file_name, test_file_name, predict_file_name
Finally, create
main() and load the abalone CSVs into
Datasets, defining
flags to allow users to optionally specify CSV files for training, test, and
prediction datasets via the command line (by default, files will be downloaded
from tensorflow.org):
def main(unused_argv): # Load datasets abalone_train, abalone_test, abalone_predict = maybe_download( FLAGS.train_data, FLAGS.test_data, FLAGS.predict_data) # Training examples training_set = tf.contrib.learn.datasets.base.load_csv_without_header( filename=abalone_train, target_dtype=np.int, features_dtype=np.float64) # Test examples test_set = tf.contrib.learn.datasets.base.load_csv_without_header( filename=abalone_test, target_dtype=np.int, features_dtype=np.float64) # Set of 7 examples for which to predict abalone ages prediction_set = tf.contrib.learn.datasets.base.load_csv_without_header( filename=abalone_predict, target_dtype=np.int, features_dtype=np.float64) if __name__ == "__main__": parser = argparse.ArgumentParser() parser.register("type", "bool", lambda v: v.lower() == "true") parser.add_argument( "--train_data", type=str, default="", help="Path to the training data.") parser.add_argument( "--test_data", type=str, default="", help="Path to the test data.") parser.add_argument( "--predict_data", type=str, default="", help="Path to the prediction data.") FLAGS, unparsed = parser.parse_known_args() tf.app.run(main=main, argv=[sys.argv[0]] + unparsed)
Instantiating an Estimator
When defining a model using one of tf.estimator's provided classes, such as
DNNClassifier, you supply all the configuration parameters right in the
constructor, e.g.:
my_nn = tf.estimator.DNNClassifier(feature_columns=[age, height, weight], hidden_units=[10, 10, 10], activation_fn=tf.nn.relu, dropout=0.2, n_classes=3, optimizer="Adam")
You don't need to write any further code to instruct TensorFlow how to train the
model, calculate loss, or return predictions; that logic is already baked into
the
DNNClassifier.
By contrast, when you're creating your own estimator from scratch, the
constructor accepts just two high-level parameters for model configuration,
model_fn and
params:
nn = tf.estimator.Estimator(model_fn=model_fn, params=model_params)
model_fn: A function object that contains all the aforementioned logic to support training, evaluation, and prediction. You are responsible for implementing that functionality. The next section, Constructing the
model_fncovers creating a model function in detail.
params: An optional dict of hyperparameters (e.g., learning rate, dropout) that will be passed into the
model_fn.
For the abalone age predictor, the model will accept one hyperparameter:
learning rate. Define
LEARNING_RATE as a constant at the beginning of your
code (highlighted in bold below), right after the logging configuration:
tf.logging.set_verbosity(tf.logging.INFO) # Learning rate for the model LEARNING_RATE = 0.001
Then, add the following code to
main(), which creates the dict
model_params
containing the learning rate and instantiates the
Estimator:
# Set model params model_params = {"learning_rate": LEARNING_RATE} # Instantiate Estimator nn = tf.estimator.Estimator(model_fn=model_fn, params=model_params)
Constructing the
model_fn
The basic skeleton for an
Estimator API model function looks like this:
def model_fn(features, labels, mode, params): # Logic to do the following: # 1. Configure the model via TensorFlow operations # 2. Define the loss function for training/evaluation # 3. Define the training operation/optimizer # 4. Generate predictions # 5. Return predictions/loss/train_op/eval_metric_ops in EstimatorSpec object return EstimatorSpec(mode, predictions, loss, train_op, eval_metric_ops)
The
model_fn must accept three arguments:
features: A dict containing the features passed to the model via
input_fn.
labels: A
Tensorcontaining the labels passed to the model via
input_fn. Will be empty for
predict()calls, as these are the values the model will infer.
mode: One of the following
tf.estimator.ModeKeysstring values indicating the context in which the model_fn was invoked:
tf.estimator.ModeKeys.TRAINThe
model_fnwas invoked in training mode, namely via a
train()call.
tf.estimator.ModeKeys.EVAL. The
model_fnwas invoked in evaluation mode, namely via an
evaluate()call.
tf.estimator.ModeKeys.PREDICT. The
model_fnwas invoked in predict mode, namely via a
predict()call.
model_fn may also accept a
params argument containing a dict of
hyperparameters used for training (as shown in the skeleton above).
The body of the function performs the following tasks (described in detail in the sections that follow):
- Configuring the model—here, for the abalone predictor, this will be a neural network.
- Defining the loss function used to calculate how closely the model's predictions match the target values.
- Defining the training operation that specifies the
optimizeralgorithm to minimize the loss values calculated by the loss function.
The
model_fn must return a
tf.estimator.EstimatorSpec
object, which contains the following values:
mode(required). The mode in which the model was run. Typically, you will return the
modeargument of the
model_fnhere.
predictions(required in
PREDICTmode). A dict that maps key names of your choice to
Tensors containing the predictions from the model, e.g.:
python predictions = {"results": tensor_of_predictions}
In
PREDICTmode, the dict that you return in
EstimatorSpecwill then be returned by
predict(), so you can construct it in the format in which you'd like to consume it.
loss(required in
EVALand
TRAINmode). A
Tensorcontaining a scalar loss value: the output of the model's loss function (discussed in more depth later in Defining loss for the model) calculated over all the input examples. This is used in
TRAINmode for error handling and logging, and is automatically included as a metric in
EVALmode.
train_op(required only in
TRAINmode). An Op that runs one step of training.
eval_metric_ops(optional). A dict of name/value pairs specifying the metrics that will be calculated when the model runs in
EVALmode. The name is a label of your choice for the metric, and the value is the result of your metric calculation. The
tf.metricsmodule provides predefined functions for a variety of common metrics. The following
eval_metric_opscontains an
"accuracy"metric calculated using
tf.metrics.accuracy:
python eval_metric_ops = { "accuracy": tf.metrics.accuracy(labels, predictions) }
If you do not specify
eval_metric_ops, only
losswill be calculated during evaluation.
Configuring a neural network with
tf.feature_column and
tf.layers
Constructing a neural network entails creating and connecting the input layer, the hidden layers, and the output layer.
The input layer is a series of nodes (one for each feature in the model) that
will accept the feature data that is passed to the
model_fn in the
features
argument. If
features contains an n-dimensional
Tensor with all your feature
data, then it can serve as the input layer.
If
features contains a dict of feature columns passed to
the model via an input function, you can convert it to an input-layer
Tensor
with the
tf.feature_column.input_layer function.
input_layer = tf.feature_column.input_layer( features=features, feature_columns=[age, height, weight])
As shown above,
input_layer() takes two required arguments:
features. A mapping from string keys to the
Tensorscontaining the corresponding feature data. This is exactly what is passed to the
model_fnin the
featuresargument.
feature_columns. A list of all the
FeatureColumnsin the model—
age,
height, and
weightin the above example.
The input layer of the neural network then must be connected to one or more
hidden layers via an activation
function that performs a
nonlinear transformation on the data from the previous layer. The last hidden
layer is then connected to the output layer, the final layer in the model.
tf.layers provides the
tf.layers.dense function for constructing fully
connected layers. The activation is controlled by the
activation argument.
Some options to pass to the
activation argument are:
tf.nn.relu. The following code creates a layer of
unitsnodes fully connected to the previous layer
input_layerwith a ReLU activation function (
tf.nn.relu):
python hidden_layer = tf.layers.dense( inputs=input_layer, units=10, activation=tf.nn.relu)
tf.nn.relu6. The following code creates a layer of
unitsnodes fully connected to the previous layer
hidden_layerwith a ReLU 6 activation function (
tf.nn.relu6):
python second_hidden_layer = tf.layers.dense( inputs=hidden_layer, units=20, activation=tf.nn.relu)
None. The following code creates a layer of units nodes fully connected to the previous layer second_hidden_layer with no activation function, just a linear transformation:
```python
output_layer = tf.layers.dense(
    inputs=second_hidden_layer, units=3, activation=None)
```
Other activation functions are possible, e.g.:
```python
output_layer = tf.layers.dense(
    inputs=second_hidden_layer, units=10, activation=tf.sigmoid)
```
The above code creates the neural network layer
output_layer, which is fully
connected to
second_hidden_layer with a sigmoid activation function
(
tf.sigmoid). For a list of predefined
activation functions available in TensorFlow, see the API docs.
Putting it all together, the following code constructs a full neural network for the abalone predictor and captures its predictions:
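The listing below is an illustrative reconstruction based on the description that follows (two fully connected 10-node ReLU hidden layers over features["x"] and a linear output layer reshaped to one dimension); treat it as a sketch rather than the original listing.

```python
# Sketch: connect the first hidden layer to the input layer with ReLU.
first_hidden_layer = tf.layers.dense(
    inputs=features["x"], units=10, activation=tf.nn.relu)

# Connect the second hidden layer to the first hidden layer with ReLU.
second_hidden_layer = tf.layers.dense(
    inputs=first_hidden_layer, units=10, activation=tf.nn.relu)

# Connect the output layer to the second hidden layer (no activation function).
output_layer = tf.layers.dense(
    inputs=second_hidden_layer, units=1, activation=None)

# Reshape the output layer to a 1-dim Tensor to return predictions.
predictions = tf.reshape(output_layer, [-1])
predictions_dict = {"ages": predictions}
```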
Here, because you'll be passing the abalone
Datasets using
numpy_input_fn
as shown below,
features is a dict
{"x": data_tensor}, so
features["x"] is the input layer. The network contains two hidden
layers, each with 10 nodes and a ReLU activation function. The output layer
contains no activation function, and is reshaped with
tf.reshape to a one-dimensional
tensor to capture the model's predictions, which are stored in
predictions_dict.
Defining loss for the model
The
EstimatorSpec returned by the
model_fn must contain
loss: a
Tensor
representing the loss value, which quantifies how well the model's predictions
reflect the label values during training and evaluation runs. The
tf.losses
module provides convenience functions for calculating loss using a variety of
metrics, including:
absolute_difference(labels, predictions). Calculates loss using the absolute-difference formula (also known as L1 loss).
log_loss(labels, predictions). Calculates loss using the logistic loss formula (typically used in logistic regression).
mean_squared_error(labels, predictions). Calculates loss using the mean squared error (MSE; also known as L2 loss).
The following example adds a definition for
loss to the abalone
model_fn
using
mean_squared_error():

```python
...
# Calculate loss using mean squared error
loss = tf.losses.mean_squared_error(labels, predictions)
...
```
See the API guide for a full list of loss functions and more details on supported arguments and usage.
Supplementary metrics for evaluation can be added to an
eval_metric_ops dict.
The following code defines an
rmse metric, which calculates the root mean
squared error for the model predictions. Note that the
labels tensor is cast
to a
float64 type to match the data type of the
predictions tensor, which
will contain real values:
eval_metric_ops = { "rmse": tf.metrics.root_mean_squared_error( tf.cast(labels, tf.float64), predictions) }
Defining the training op for the model
The training op defines the optimization algorithm TensorFlow will use when
fitting the model to the training data. Typically when training, the goal is to
minimize loss. A simple way to create the training op is to instantiate a
tf.train.Optimizer subclass and call the
minimize method.
The following code defines a training op for the abalone
model_fn using the
loss value calculated in Defining Loss for the Model, the
learning rate passed to the function in
params, and the gradient descent
optimizer. For
global_step, the convenience function
tf.train.get_global_step takes care of generating an integer variable:
```python
optimizer = tf.train.GradientDescentOptimizer(
    learning_rate=params["learning_rate"])
train_op = optimizer.minimize(
    loss=loss, global_step=tf.train.get_global_step())
```
For a full list of optimizers, and other details, see the API guide.
The complete abalone
model_fn
Here's the final, complete
model_fn for the abalone age predictor. The
following code configures the neural network; defines loss and the training op;
and returns an
EstimatorSpec object containing
mode,
predictions_dict,
loss,
and
train_op:

```python
...
# Provide an estimator spec for `ModeKeys.PREDICT`.
if mode == tf.estimator.ModeKeys.PREDICT:
  return tf.estimator.EstimatorSpec(
      mode=mode,
      predictions={"ages": predictions})

# Calculate loss using mean squared error
loss = tf.losses.mean_squared_error(labels, predictions)

# Calculate root mean squared error as additional eval metric
eval_metric_ops = {
    "rmse": tf.metrics.root_mean_squared_error(
        tf.cast(labels, tf.float64), predictions)
}

optimizer = tf.train.GradientDescentOptimizer(
    learning_rate=params["learning_rate"])
train_op = optimizer.minimize(
    loss=loss, global_step=tf.train.get_global_step())

# Provide an estimator spec for `ModeKeys.EVAL` and `ModeKeys.TRAIN` modes.
return tf.estimator.EstimatorSpec(
    mode=mode,
    loss=loss,
    train_op=train_op,
    eval_metric_ops=eval_metric_ops)
```
Running the Abalone Model
You've instantiated an
Estimator for the abalone predictor and defined its
behavior in
model_fn; all that's left to do is train, evaluate, and make
predictions.
Add the following code to the end of
main() to fit the neural network to the
training data and evaluate accuracy:
```python
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": np.array(training_set.data)},
    y=np.array(training_set.target),
    num_epochs=None,
    shuffle=True)

# Train
nn.train(input_fn=train_input_fn, steps=5000)

# Score accuracy
test_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": np.array(test_set.data)},
    y=np.array(test_set.target),
    num_epochs=1,
    shuffle=False)

ev = nn.evaluate(input_fn=test_input_fn)
print("Loss: %s" % ev["loss"])
print("Root Mean Squared Error: %s" % ev["rmse"])
```
Then run the code. You should see output like the following:
```
...
INFO:tensorflow:loss = 4.86658, step = 4701
INFO:tensorflow:loss = 4.86191, step = 4801
INFO:tensorflow:loss = 4.85788, step = 4901
...
INFO:tensorflow:Saving evaluation summary for 5000 step: loss = 5.581
Loss: 5.581
```
The loss score reported is the mean squared error returned from the
model_fn
when run on the
ABALONE_TEST data set.
To predict ages for the
ABALONE_PREDICT data set, add the following to
main():
```python
# Print out predictions
predict_input_fn = tf.estimator.inputs.numpy_input_fn(
    x={"x": prediction_set.data},
    num_epochs=1,
    shuffle=False)
predictions = nn.predict(input_fn=predict_input_fn)
for i, p in enumerate(predictions):
  print("Prediction %s: %s" % (i + 1, p["ages"]))
```
Here, the
predict() function returns results in
predictions as an iterable.
The
for loop enumerates and prints out the results. Rerun the code, and you
should see output similar to the following:
```
...
Prediction 1: 4.92229
Prediction 2: 10.3225
Prediction 3: 7.384
Prediction 4: 10.6264
Prediction 5: 11.0862
Prediction 6: 9.39239
Prediction 7: 11.1289
```
Additional Resources
Congrats! You've successfully built a tf.estimator
Estimator from scratch.
For additional reference materials on building
Estimators, see the following
sections of the API guides:
| https://www.tensorflow.org/extend/estimators?hl=id | CC-MAIN-2017-43 | en | refinedweb |
Algorithm::SkipList - Perl implementation of skip lists
The following non-standard modules are used:
enum
    my $list = new Algorithm::SkipList();

    $list->insert( 'key1', 'value' );
    $list->insert( 'key2', 'another value' );

    $value = $list->find('key2');

    $list->delete('key1');
This is an implementation of skip lists in Perl.
Skip lists are similar to linked lists, except that they have random links at various levels that allow searches to skip over sections of the list, like so:
4 +---------------------------> +----------------------> + | | | 3 +------------> +------------> +-------> +-------> +--> + | | | | | | 2 +-------> +--> +-------> +--> +--> +--> +-------> +--> + | | | | | | | | | 1 +--> +--> +--> +--> +--> +--> +--> +--> +--> +--> +--> + A B C D E F G H I J NIL
A search would start at the top level: if the link to the right exceeds the target key, then it descends a level.
Skip lists generally perform as well as balanced trees for searching but do not have the overhead with respect to inserting new items. See the included file
Benchmark.txt for a comparison of performance with other Perl modules.
For more information on skip lists, see the "SEE ALSO" section below.
Only alphanumeric keys are supported "out of the box". To use numeric or other types of keys, see "Customizing the Node Class" below.
A detailed description of the methods used is below.
$list = new Algorithm::SkipList();
Creates a new skip list.
If you need to use a different node class for using customized comparison routines, you will need to specify a different class:
$list = new Algorithm::SkipList( node_class => 'MyNodeClass' );
See the "Customizing the Node Class" section below.
Specialized internal parameters may be configured:
$list = new Algorithm::SkipList( max_level => 32 );
Defines a different maximum list level.
The initial list (see the "list" method) will be a random number of levels, and will increase over time if inserted nodes have higher levels, up until "max_level" levels. See "max_level" for more information on this parameter.
You can also control the probability used to determine level sizes for each node by setting the P and k values:
$list = new Algorithm::SkipList( p => 0.25, k => 1 );
See P for more information on this parameter.
You can enable duplicate keys by using the following:
$list = new Algorithm::SkipList( duplicates => 1 );
This is an experimental feature. See the "KNOWN ISSUES" section below.
$list->insert( $key, $value );
Inserts a new node into the list.
You may also use a search finger with insert, provided that the finger is for a key that occurs earlier in the list:
$list->insert( $key, $value, $finger );
Using fingers for inserts is not recommended since there is a risk of producing corrupted lists.
if ($list->exists( $key )) { ... }
Returns true if there exists a node associated with the key, false otherwise.
This may also be used with search fingers:
if ($list->exists( $key, $finger )) { ... }
$value = $list->find_with_finger( $key );
Searches for the node associated with the key, and returns the value. If the key cannot be found, returns
undef.
Search fingers may also be used:
$value = $list->find_with_finger( $key, $finger );
To obtain the search finger for a key, call "find_with_finger" in a list context:
($value, $finger) = $list->find_with_finger( $key );
$value = $list->find( $key ); $value = $list->find( $key, $finger );
Searches for the node associated with the key, and returns the value. If the key cannot be found, returns
undef.
This method is slightly faster than "find_with_finger" since it does not return a search finger when called in list context.
If you are searching for duplicate keys, you must use "find_with_finger" or "find_duplicates".
@values = $list->find_duplicates( $key ); @values = $list->find_duplicates( $key, $finger );
Returns an array of values from the list.
This is an autoloading method.
Search is an alias to "find".
$key = $list->first_key;
Returns the first key in the list.
If called in a list context, will return a search finger:
($key, $finger) = $list->first_key;
A call to "first_key" implicitly calls "reset".
$key = $list->next_key( $last_key );
Returns the key following the previous key. List nodes are always maintained in sorted order.
Search fingers may also be used to improve performance:
$key = $list->next_key( $last_key, $finger );
If called in a list context, will return a search finger:
($key, $finger) = $list->next_key( $last_key, $finger );
If no arguments are called,
$key = $list->next_key;
then the value of "last_key" is assumed:
$key = $list->next_key( $list->last_key );
Note: calls to "delete" will "reset" the last key.
($key, $value) = $list->next( $last_key, $finger );
Returns the next key-value pair.
$last_key and
$finger are optional.
This is an autoloading method.
$key = $list->last_key; ($key, $finger, $value) = $list->last_key;
Returns the last key or the last key and finger returned by a call to "first_key", "next_key", "index_by_key", "key_by_index" or "value_by_index". This is not the greatest key.
Deletions and inserts may invalidate the "last_key" value. (Deletions will actually "reset" the value.)
Values for "last_key" can also be set by including parameters, however this feature is meant for internal use only:
$list->last_key( $node );
Note that this is a change from versions prior to 0.71.
$list->reset;
Resets the "last_key" to
undef.
$index = $list->index_by_key( $key );
Returns the 0-based index of the key (as if the list were an array). This is not an efficient method of access.
This is an autoloading method.
$key = $list->key_by_index( $index );
Returns the key associated with an index (as if the list were an array). Negative indices return the key from the end. This is not an efficient method of access.
This is an autoloading method.
$value = $list->value_by_index( $index );
Returns the value associated with an index (as if the list were an array). Negative indices return the value from the end. This is not an efficient method of access.
This is an autoloading method.
$value = $list->delete( $key );
Deletes the node associated with the key, and returns the value. If the key cannot be found, returns
undef.
Search fingers may also be used:
$value = $list->delete( $key, $finger );
Calling "delete" in a list context will not return a search finger.
$list->clear;
Erases existing nodes and resets the list.
$size = $list->size;
Returns the number of nodes in the list.
$list2 = $list1->copy;
Makes a copy of a list. The "p", "max_level" and node class are copied, although the exact structure of node levels is not copied.
$list2 = $list1->copy( $key_from, $finger, $key_to );
Copy the list between
$key_from and
$key_to (inclusive). If
$finger is defined, it will be used as a search finger to find
$key_from. If
$key_to is not specified, then it will be assumed to be the end of the list.
If
$key_from does not exist,
undef will be returned.
This is an autoloading method.
$list1->merge( $list2 );
Merges two lists. If both lists share the same key, then the value from
$list1 will be used.
Both lists should have the same node class.
This is an autoloading method.
$list1->append( $list2 );
Appends (concatenates)
$list2 after
$list1. The last key of
$list1 must be less than the first key of
$list2.
Both lists should have the same node class.
This method affects both lists. The "header" of the last node of
$list1 points to the first node of
$list2, so changes to one list may affect the other list.
If you do not want this entanglement, use the "merge" or "copy" methods instead:
$list1->merge( $list2 );
or
$list1->append( $list2->copy );
This is an autoloading method.
$list2 = $list1->truncate( $key );
Truncates
$list1 and returns
$list2 starting at
$key. Returns
undef if the key does not exist.
It is assumed that the key is not the first key in
$list1.
This is an autoloading method.
($key, $value) = $list->least;
Returns the least key and value in the list, or
undef if the list is empty.
This is an autoloading method.
($key, $value) = $list->greatest;
Returns the greatest key and value in the list, or
undef if the list is empty.
This is an autoloading method.
@keys = $list->keys;
Returns a list of keys (in sorted order).
@keys = $list->keys( $low, $high);
Returns a list of keys between
$low and
$high, inclusive. (This is only available in versions 1.02 and later.)
This is an autoloading method.
@values = $list->values;
Returns a list of values (corresponding to the keys returned by the "keys" method).
This is an autoloading method.
Internal methods are documented below. These are intended for developer use only. These may change in future versions.
($node, $finger, $cmp) = $list->_search_with_finger( $key );
Searches for the node with a key. If the key is found, that node is returned along with a "header". If the key is not found, the previous node from where the node would be if it existed is returned.
Note that the value of
$cmp
$cmp = $node->key_cmp( $key )
is returned because it is already determined by "_search".
Search fingers may also be specified:
($node, $finger, $cmp) = $list->_search_with_finger( $key, $finger );
Note that the "header" is actually a search finger.
($node, $finger, $cmp) = $list->_search( $key, [$finger] );
Same as "_search_with_finger", only that a search finger is not returned. (Actually, an initial "dummy" finger is returned.)
This is useful for searches where a finger is not needed. The speed of searching is improved.
$k = $list->k;
Returns the k value.
$list->k( $k );
Sets the k value.
Higher values will on average result in fewer pointers per node, but longer searches. See the section on the P value.
$plevel = $list->p;
Returns the P value.
$list->p( $plevel );
Changes the value of P. Lower values will on average result in fewer pointers per node, but longer searches.
The probability that a particular node will have a forward pointer at level i is: p**(i+k-1).
For more information, consult the references below in the "SEE ALSO" section.
$max = $list->max_level;
Returns the maximum level that "_new_node_level" can generate.
eval { $list->max_level( $level ); };
Changes the maximum level. If level is less than "MIN_LEVEL", or greater than "MAX_LEVEL" or the current list "level", this will fail (hence the need for setting it in an
eval block).
The value defaults to "MAX_LEVEL", which is 32. There is usually no need to change this value, since the maximum level that a new node will have will not be greater than it actually needs, up until 2^32 nodes. (The current version of this module is not designed to handle lists larger than 2^32 nodes.)
Decreasing the maximum level to less than is needed will likely degrade performance.
$level = $list->_new_node_level;
This is an internal function for generating a random level for new nodes.
Levels are determined by the P value. The probability that a node will have 1 level is P; the probability that a node will have 2 levels is P^2; the probability that a node will have 3 levels is P^3, et cetera.
The value will never be greater than "max_level".
Note: in earlier versions it was called
_random_level.
$node = $list->list;
Returns the initial node in the list, which is a
Algorithm::SkipList::Node (See below.)
The key and value for this node are undefined.
$node = $list->_first_node;
Returns the first node with a key (the second node) in a list. This is used by the "first_key", "least", "append" and "merge" methods.
$node = $list->_greatest_node;
Returns the last node in the list. This is used by the "append" and "greatest" methods.
$node_class_name = $list->_node_class;
Returns the name of the node class used. By default this is the
Algorithm::SkipList::Node, which is discussed below.
$list->_build_distribution;
Rebuilds the probability distribution array
{P_LEVELS} upon calls to "_set_p" and "_set_k".
These methods are used during initialization of the object.
$list->_debug;
Used for debugging skip lists by developer. The output of this function is subject to change.
Methods for the Algorithm::SkipList::Node object are documented in that module. They are for internal use by the main
Algorithm::SkipList module.
Hashes can be tied to
Algorithm::SkipList objects:
    tie %hash, 'Algorithm::SkipList';

    $hash{'foo'} = 'bar';

    $list = tied %hash;
    print $list->find('foo'); # returns bar
See the perltie manpage for more information.
The default node may not handle specialized data types. To define your own custom class, you need to derive a child class from
Algorithm::SkipList::Node.
Below is an example of a node which redefines the default type to use numeric instead of string comparisons:
    package NumericNode;

    our @ISA = qw( Algorithm::SkipList::Node );

    sub key_cmp {
        my $self  = shift;
        my $left  = $self->key; # node key
        my $right = shift;      # value to compare the node key with
        unless ($self->validate_key($right)) {
            die "Invalid key: \'$right\'";
        }
        return ($left <=> $right);
    }

    sub validate_key {
        my $self = shift;
        my $key  = shift;
        return ($key =~ /^\-?\d+(\.\d+)?$/); # test if key is numeric
    }
To use this, we say simply
$number_list = new Algorithm::SkipList( node_class => 'NumericNode' );
This skip list should work normally, except that the keys must be numbers.
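For illustration only (not part of the original documentation), such a numeric list can then be exercised with the standard methods:

    my $number_list = new Algorithm::SkipList( node_class => 'NumericNode' );

    $number_list->insert( 42,   'answer' );
    $number_list->insert( 3.14, 'pi' );

    print $number_list->find(42), "\n";  # prints 'answer'
    print $number_list->size, "\n";      # prints 2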
For another example of customized nodes, see Tie::RangeHash version 1.00_b1 or later.
A side effect of the search function is that it returns a finger to where the key is or should be in the list.
We can use this finger for future searches if the key that we are searching for occurs after the key that produced the finger. For example,
($value, $finger) = $list->find('Turing');
If we are searching for a key that occurs after 'Turing' in the above example, then we can use this finger:
$value = $list->find('VonNeuman', $finger);
If we use this finger to search for a key that occurs before 'Turing' however, it may fail:
$value = $list->find('Goedel', $finger); # this may not work
Therefore, use search fingers with caution.
Search fingers are specific to particular instances of a skip list. The following should not work:
($value1, $finger) = $list1->find('bar'); $value2 = $list2->find('foo', $finger);
One useful feature of fingers is with enumerating all keys using the "first_key" and "next_key" methods:
    ($key, $finger) = $list->first_key;
    while (defined $key) {
        ...
        ($key, $finger) = $list->next_key($key, $finger);
    }
See also the "keys" method for generating a list of keys.
This module intentionally has a subset of the interface in the Tree::Base and other tree-type data structure modules, since skip lists can be used in place of trees.
Because pointers only point forward, there is no
prev method to point to the previous key.
Some of these methods (least, greatest) are autoloading because they are not commonly used.
One thing that differentiates this module from other modules is the flexibility in defining a custom node class.
See the included Benchmark.txt file for performance comparisons.
If you are upgrading a prior version of List::SkipList, then you may want to uninstall the module before installing Algorithm::SkipList, so as to remove unused autoloading files.
Certain methods such as "find" and "delete" will return the the value associated with a key, or
undef if the key does not exist. However, if the value is
undef, then these functions will appear to claim that the key cannot be found.
In such circumstances, use the "exists" method to test for the existence of a key.
Duplicate keys are an experimental feature in this module, since most methods have been designed for unique keys only.
Access to duplicate keys is akin to a stack. When a duplicate key is added, it is always inserted before matching keys. In searches, to find duplicate keys one must use "find_with_finger" or the "find_duplicates" method.
The "copy" method will reverse the order of duplicates.
The behavior of the "merge" and "append" methods is not defined for duplicates.
Skip lists are non-deterministic. Because of this, bugs in programs that use this module may be subtle and difficult to reproduce without many repeated attempts. This is especially true if there are bugs in a custom node.
Additional issues may be listed on the CPAN Request Tracker.
Robert Rothenberg <rrwo at cpan.org>
Carl Shapiro <cshapiro at panix.com> for introduction to skip lists.
Feedback is always welcome. Please use the CPAN Request Tracker to submit bug reports.
Copyright (c) 2003-2005 Robert Rothenberg. All rights reserved. This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
See the article by William Pugh, "A Skip List Cookbook" (1989), or similar articles by the author, which discuss skip lists.
Another article worth reading is by Bruce Schneier, "Skip Lists: They're easy to implement and they work", Doctor Dobbs Journal, January 1994.
Tie::Hash::Sorted maintains a hash where keys are sorted. In many cases this is faster, uses less memory (because of the way Perl5 manages memory), and may be more appropriate for some uses.
If you need a keyed list that preserves the order of insertion rather than sorting keys, see List::Indexed or Tie::IxHash.
| http://search.cpan.org/~rrwo/Algorithm-SkipList/lib/Algorithm/SkipList.pm | CC-MAIN-2017-43 | en | refinedweb |
Flexible permission system for Django.
Granular access is a Django app for giving permissions on a set of models to users or groups.
Quick start
Add “granular_access” to your INSTALLED_APPS setting like this:
INSTALLED_APPS = (
    ...
    'granular_access',
)
Run the South migration command to create the tables in the database:
./manage.py migrate granular_access
Create a permission for a user or group on some set of models via the admin or using the create_permission function:
>>> from granular_access import create_permission
>>> create_permission(user=joker, action='kill_and_rob', app_label='auth',
...                   model_name='user', conditions=[{'username__startswith': 'victim'}])
You can find more examples in tests.
Filter available models using filter_available function:
>>> from granular_access import filter_available
>>> available_users = filter_available(to=joker, action='kill_and_rob',
...                                    queryset=User.objects.all())
Profit.
Settings
You can define some settings to customize the app's behaviour:
-
GRANULAR_ACCESS_USER_MODEL – user model in your project for assigning permissions.
Example: ‘users.Profile’. Default: ‘auth.User’.
-
GRANULAR_ACCESS_GROUP_MODEL – group model in your project for assigning permissions.
Example: ‘groups.UserGroup’. Default: ‘auth.Group’.
-
GRANULAR_ACCESS_USER_GROUP_RELATED_NAME – related name in the user model for the relation with groups, so you can get a user's groups by calling >>> user_instance.related_name.all()
It will be used if GRANULAR_ACCESS_GET_USER_GROUPS_FUNCTION settings is not set or set to None.
Example: ‘user_groups’. Default: ‘groups’.
-
GRANULAR_ACCESS_GET_USER_GROUPS_FUNCTION – path to a function which receives a user instance as its first argument and returns an iterable of groups or group ids. You can use this function if you have more complex logic for getting user groups than via related_name.
Example: ‘project_name.users_app.helpers.get_user_groups’. Default: None.
-
GRANULAR_ACCESS_CONSIDER_SUPERUSER – boolean value which indicates whether superusers should get all permissions on all models.
Default: True.
Extras
You can use AccessManager in your model:
from granular_access import AccessManager

class MyModel(models.Model):
    objects = AccessManager()
Or if you already have some special manager for your model, you can use AccessManagerMixin in it.
Also, you can utilize AccessQuerySet or AccessQuerySetMixin.
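A sketch of the mixin approach (illustrative only; the mixin name comes from this package, but the exact import path and base-manager combination shown here are assumptions):

from django.db import models
from granular_access import AccessManagerMixin

# Sketch: combine an existing custom manager with the access mixin.
class MyModelManager(AccessManagerMixin, models.Manager):
    # your existing manager methods go here
    pass

class MyModel(models.Model):
    objects = MyModelManager()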
| https://pypi.org/project/django-granular-access/ | CC-MAIN-2017-43 | en | refinedweb |
When data servers are upgraded or replaced, maps with layers that reference data on those systems may need to be updated to refer to data on a different system. The following is an example of a script to replace data sources of raster layers in maps residing in many different subfolders.
As an example, suppose the data sources for all color images were located in:
<drive>:<folder tree>/images/color
<drive>:<folder tree>/images
Note:
If this example does not match the file organization in the prevailing environment, the functions illustrated may be helpful in developing a custom workflow.
Code:
import arcpy, os

MapMainFolder = arcpy.GetParameterAsText(0)  # topmost folder of maps to be updated
NewDataPath = arcpy.GetParameterAsText(1)    # folder to which the raster data sources have been moved

for (root, dirs, files) in os.walk(MapMainFolder):
    for fileName in files:
        if os.path.splitext(fileName)[1] == ".mxd":
            arcpy.AddMessage(fileName)
            fullPath = os.path.join(root, fileName)
            mxd = arcpy.mapping.MapDocument(fullPath)
            rasterLayersUpdated = 0
            for layer in arcpy.mapping.ListLayers(mxd):  # list all layers in all dataframes
                if layer.isRasterLayer:
                    if layer.supports("SERVICEPROPERTIES"):
                        if layer.serviceProperties['ServiceType'] == "ImageServer":
                            continue  # don't try to re-path an online basemap
                    dataSrc = layer.dataSource
                    dataRoot = os.path.dirname(dataSrc)
                    if dataRoot.find("Color") > 0:
                        newPath = os.path.join(NewDataPath, "Color")
                    else:  # B&W
                        newPath = NewDataPath
                    layer.replaceDataSource(newPath, "RASTER_WORKSPACE", os.path.basename(dataSrc))
                    rasterLayersUpdated += 1
            if rasterLayersUpdated > 0:
                mxd.save()
| http://support.esri.com/en/technical-article/000011837 | CC-MAIN-2017-43 | en | refinedweb |
Arduino LCD Thermostat!
Introduction: Arduino LCD Thermostat!
In this project we'll use an Arduino Uno, an LCD and a temperature sensor to control your air conditioning! You can also modify the code for a heater. The code is well explained, and I even show how I made mine permanent!
Great for beginners learning Arduino, and for a hot room with an old manual A/C. This is a project to try!
Step 1: Prototyping the Test Circuit
This circuit is to test whether the thermostat is working or not. The LCD should display a hello, world! message, then the current temperature of the room, with the ideal temperature (settemp) shown below it. If the current temperature is off by a little you may need to adjust the code which converts the 10-bit number read from A0 into a temperature reading in degrees Fahrenheit. If you need Celsius, you will also need to change the line of code that calculates the temperature. If you want to control an A/C, you can remove the LED and replace it with an N-channel MOSFET (Metal Oxide Semiconductor Field Effect Transistor). Remember TO USE A PROTECTION DIODE! I will go over this in the next step.
Parts list:
12 volt power supply
7805 5 volt voltage regulator
arduino uno or other arduino dev board
3x 10k ohm resistors
led
jumper wire
solderless breadboard
arduino ide
10k potentiometer ( or a 1k ohm and a 220 ohm resistor) (or the 3rd pin can go to ground)
16x2 Hitachi driven hdd44780 LCD
10k thermistor a.k.a. (10k ohm NTC, (Negative Thermal Coefficient)
2x tactile button switches ( or any other button switch)
usb b type connector to program arduino
For use with an a/c:
N-channel MOSFET
120VAC 20-40A relay
1N4007 - 1N4004 rectifier diode
a/c
To finalize:
perfboard / PCB
Project enclosure
And tools that everyone should have
Let's get started!
Step 2: The Code
So the code:
// written by Dylon Jamna (ME!)
// include the library code
#include <EEPROM.h>
#include <LiquidCrystal.h>// include the library code
int tempPin = A0; // make variables// thermistor is at A0
int led = 13; // led is at pin 13
float temp; // make a variable called temp
float settemp; // make a variable called settemp
int swtu = 7; // switch up is at pin 7
int swtd = 6; // switch down is at pin 6
LiquidCrystal lcd(12, 11, 5, 4, 3, 2); // lcd is at 12,11,5,4,3,2
void setup() {
pinMode (led,1); // make led or pin13 an output
Serial.begin (9600); // set the serial monitor tx and rx speed
lcd.begin(16, 2); // set up all the "blocks" on the display
lcd.setCursor(0,0); // set the cursor to column 0, row 0
lcd.print("hello, world!"); // display hello world for 1 second
lcd.clear(); // clear the lcd
EEPROM.read (1); // make the eeprom or atmega328 memory address 1
}
void loop() {
int tvalue = analogRead(tempPin); // make tvalue what ever we read on the tempPin
float temp = (tvalue / 6.388888888889); // the math / conversion to temp
lcd.setCursor (0,0); // set the cursor to 0,0
lcd.print (temp); // Print the current temp in f
lcd.print ('F');
Serial.println (temp); // print the temp in the serial monitor
settemp = EEPROM.read(1); // read the settemp on the eeprom
delay (250); // wait for the lcd to refresh every 250 milliseconds
if // if we see the switch up pin reading a 1 (5 volts)
(digitalRead(swtu)== 1 )
{
settemp ++ // add one to the settemp, the settemp is the ideal temperature for you
;
}
else{// other wise do nothing
}
if
(digitalRead (swtd) == 1)// if we detect a 1 on the other switch pin
{
(settemp --);// subtract one from the settemp
}
else {
// else, do nothing
}
if (temp > settemp) // if the temperature exceeds your chosen settemp
{
digitalWrite (led, 1); // turn on the led
}
else // if that doesn't happen, then turn the led off
{
digitalWrite (led,0);
}
lcd.setCursor (0,1); // set the cursor to 0,1
lcd.print ("Set To "); // Print set to and your ideal temperature in f
lcd.print (settemp);
lcd.print ('F');
Serial.println(settemp); // Print the settemp in the serial monitor
EEPROM.write (1,settemp); /* write the most recent settemp in eeprom data storage
so that if the power is disconnected, your settemp is saved!*/
delay (250); // wait 250 milliseconds
} // we're done
Step 3: The Build
Now, to actually make this project useful, we need to make it more permanent. We do this by soldering it to perfboard. There are many ways to solder on perfboard. The way I prefer is to solder on thick buses for power and ground made of solder, then strip, tin, and solder ribbon cable wire to every necessary connection. But you can also tin tracks on the board, or run wire above the board and solder below, which is very popular. Also, to switch our A/C on or off we need to modify the LED circuit. There are a few steps to follow.
Step 1 - remove the led
Step 2- attach the gate of your N channel MOSFET Check your datasheet for the pin out!!!!
Step 3 - Attach the ground to your source pin on the MOSFET
Step 4 - Attach the Drain pin to your MOSFET
Step 5 - Attach your relay's coil to 12v depending on the relay, and the other to you drain pin on the MOSFET
Step 6 - Add the the protection diode, connect the striped silver end (cathode) to your 12v, and you other end (anode) to your Drain pin on the MOSFET.
use the other diagram for the power input.
Step 4: The Final Product
Now I know this isn't the best looking project enclosure as you saw in the intro, but here's how I did it. I use 1/4 plywood type of scrap kind of wood, and you can check the Home depot for it and look carefully. I cut it up to size and nailed it together with an air compressor nailing gun. It actually didn't split,( Take your time or it will split!) No glue is necessary. I drill hole for the wire like the thermister and 3 wires for the separate relay power control box, and drilled one big one were the lcd was supposed to go. This was so I could stick a jigsaw blade in there and cut out a rectangle. The 2 tactile switches had tall shafts which poked through the plywood just barely. You could glue or epoxy on plastic rods from a pen tube or something.
Also, I embedded an atmega8, not an atmega328, just because my sketch was only 6k bytes. The atmega8 only holds 8k bytes so I was safe. I pretty much made a stand-alone Arduino on perfboard. I used a voltage divider for the LCD, which was a 1k and a 220 ohm resistor. The 1k goes to 5 volts and pin 3 on the LCD, and the 220 ohm resistor goes to ground and pin 3 on the LCD.
To prop up the board to the front panel. The 3 pin header connector was from the connection the the relay power box.
I used an iec connector, I soldered a main female connector to this one so I can plug it in the box. I used hot glue this time for the box and it worked surprisingly well.
Remember to comment or contact me if you have any problem. Any go to arduino.cc and arduino forums or more troubleshooting and help.
tvalue / 6.388888888889. But what is the tvalue? How do I convert it to Celsius?
Hello,
I'm a newbie.
How do I set F to Celsius?
I have to change float temp = (tvalue / 6.388888888889); // the math / conversion to temp
to what?
thanks
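One illustrative way to get Celsius (not from the project author) is to keep the sketch's existing Fahrenheit conversion and apply the standard formula to the result:

float tempF = (tvalue / 6.388888888889); // existing Fahrenheit conversion from the sketch
float tempC = (tempF - 32.0) * 5.0 / 9.0; // standard Fahrenheit-to-Celsius formula
lcd.print (tempC); // display Celsius instead of Fahrenheit
lcd.print ('C');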
The wiring on the breadboard is wrong: the LCD backlight connections need reversing.
There are small errors in the code comments, and it needs some averaging (mean values) in the code: it is too sensitive for central heating and toggles too much near the changeover threshold. It is OK for electrical heating.
Arduino thermistor
How to change max. set temp with 250F to 350F???
How to change the set temp???????
You use the push buttons with a pull down resistor in this configuration to change the set temp. But do change it for your application, this is a rough version that still could use a little tweaking because the subtle temperature changes trigger the a/c on and off a million times. Good luck!
I made it FINALLY after months of problems!!! i reworked the code a bit and added 3 relays total and a toggle switch cause where i live sometimes you just want it off!
the relays are for
1. hot
2. cold
3. fan
and my switch is a 4 way switch so it can click to heat, cool, fan, and off
thanks a bunch!!! if i get a nice sheild made i may do a kickstarter! do you have a way for me to donate some $ to you?
Hi Dylon,
Your project seems useful so I tried to replicate it but using Grove-LCD RGB Backlight display instead.
Everything is flowing nicely until I get to the user buttons. Im using a-pushto-off button with only 2 connecting wires.
So the problem I'm facing is, when I pressed the up button, the temperature (settemp) will go up and not stop. Same goes if I press the down button.
I'll give you my codes below but Im sure it's similar..
Please help me if you can. It's giving me sleepless nights... :(
// Declare variables
#include <Wire.h>
#include <EEPROM.h>
#include <Bounce2.h>
#include "rgb_lcd.h"
rgb_lcd lcd;
const int colorR = 30;
const int colorG = 30;
const int colorB = 60;
float tempC;
float settemp;
int tempPin = A0; //temp sensor plugged in pin 0
int ledPin = 13; //closest to ground
int fan1 = 2; // fan connected to pin 6
int swtu = 7;
int swtd = 6;
Bounce bouncer = Bounce();
// Write setup programme
void setup()
{
Serial.begin(9600); //Open serial port to communicate. sets data rate to 9600
// set up the LCD's number of columns and rows:
lcd.begin(16, 2);
// initialize the serial communications:
lcd.setRGB(colorR, colorG, colorB);
lcd.print("Temp = ");
delay(250);
pinMode(ledPin, OUTPUT);
pinMode(fan1, OUTPUT);
pinMode(swtu, INPUT);
bouncer.attach (swtu);
bouncer .interval(5);
pinMode (swtd, INPUT);
bouncer.attach (swtd);
bouncer .interval(5);
EEPROM.read(1); //make eeprom memory address
}
//Write loop that will control the
void loop()
{
tempC = analogRead(tempPin); // read the analog value from the lm35 sensor.
tempC = (5.0 * tempC * 100.0)/1024.0; // convert the analog input to temperature in centigrade.
lcd.setCursor(8,0);
lcd.print(tempC);
lcd.print("'C");
Serial.print((byte)tempC); // send the data to the computer.
settemp = EEPROM.read(1); // read the settemp at memory 1
delay (250);
if (digitalRead (swtu)==1)
{
(settemp ++);
EEPROM.write(1, settemp);
}
else{
}
if (digitalRead(swtd)==1)
{
(settemp --);
EEPROM.write(1, settemp);
}
else {
}
if (tempC > settemp)
{
digitalWrite(ledPin, HIGH);
digitalWrite(fan1, HIGH);
}
else
{
digitalWrite(ledPin, LOW);
digitalWrite(fan1, LOW);
}
lcd.setCursor(0,1);
lcd.print("Set Temp To: ");
lcd.print(settemp);
Serial.print((byte)settemp);
// EEPROM.write(1, settemp);
delay(250);
}
In the code, you have
if (temp > settemp) // if the temperature exceeds your chosen settemp
{
digitalWrite (led, 1); // turn on the led
}
else // if that doesn't happen, then turn the led off
{
digitalWrite (led,0);
}
I'm guessing that turning on the LED also activates the MOSFET and turns on the A/C, correct?
My main question is this: For someone who has both a heater and an A/C, could you have it do something like this:
if (temp > settemp + 2) // if the temperature exceeds your chosen settemp
{
digitalWrite (led, 1); // turn on the led for the A/C circuit
}
elseif (temp < settemp - 2) //If the temperature exceeds your settemp
{
digitalWrite (led, 1); // turn on the LED for the heater circuit
}
else // if that doesn't happen, then turn the led off
{
digitalWrite (led,0);
// digitalWrite (ledX,0); //ledX would be the second led for the heater circuit
}
My goal with this is to replace the "heat/cool" switch on a regular thermostat, and have it be a pure climate control system (where it either turns the heat or A/C on to keep the temp at what you want). Also, I added the + 2 and - 2, because most furnaces and A/C's won't start until the temp is 2 degrees above/below your set temp. It might have to be tweaked out to 3 or 4, depending on whether your furnace and/or A/C stops before the actual temp gets more than 2 degrees from your set temp. Otherwise, your heater and a/c will constantly be turning off and on to keep your temp set.
Have a great day.:)
Patrick.
The code is a good beginning but needs revision on the EEPROM.write command being used through every loop. EEPROM has a limited number of write cycles before it wears out. A better idea would be to only write to it periodically or only when the value to be written changes.
I know what you mean. But I used a cheap atmega8 that can be rewritten about 10,000 times before having errors. Plus, I'm really new to coding up the Arduino.
The easiest way to reduce the number of write cycles would be to move the 'EEPROM.write' command to a new line just below each of the
'(settemp ++);' and '(settemp --);' lines. This way the EEPROM is only being written to each time a button is being pressed.
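Concretely (illustrative only), the two button branches in the sketch would become something like the following, with the unconditional EEPROM.write near the end of loop() removed:

if (digitalRead(swtu) == 1)
{
  settemp++;                 // add one to the settemp
  EEPROM.write (1, settemp); // save only when the value actually changes
}
if (digitalRead(swtd) == 1)
{
  settemp--;                 // subtract one from the settemp
  EEPROM.write (1, settemp); // save only when the value actually changes
}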
thx I'll try and change my code
Add your relay pin control to pins 10-8, or else you will flick the relay on and off rapidly at the start of the code.
| http://www.instructables.com/id/Arduino-LCD-Thermostat/ | CC-MAIN-2017-43 | en | refinedweb |
Fabulous Adventures In Coding
Eric Lippert is a principal developer on the C# compiler team. Learn more about Eric.
(This is part one of a four part series; part two is here.)
The Framework Design Guidelines say in section 3.4 “do not use the same name for a namespace and a type in that namespace”. (*) That is:
namespace MyContainers.List
{
    public class List
    {
        ...
    }
}
| http://blogs.msdn.com/b/ericlippert/archive/2010/03/09/do-not-name-a-class-the-same-as-its-namespace-part-one.aspx?PageIndex=2 | CC-MAIN-2015-11 | en | refinedweb |
Use the Personalization API to retrieve information about an email, hashed email, or postal address. Sign up to receive your unique API Key.
The API can be queried with HTTP GET requests.
Query email for user "John Doe" with email personalize@example.com:
Query email for user "John Doe" with email personalize@example.com and display in browser for testing purposes:
Query MD5 hashed email for user personalize@example.com:
Query SHA-1 hashed email for user personalize@example.com:
Query for user John Doe with email john@doe.com at 667 Mission St:
These parameters are required for all uses.
These parameters are used to query with an email address.
Tip: Querying by email, name, and postal will give you the highest match rate. Querying by email and name will also give you a better match rate than by email alone.
These parameters are used to query with a postal address. Providing email parameters will increase the match rate. first and last name must always be provided to query via postal. Either zip4 must be provided or street, city, and state.
Tip: All postal parameters should be URL encoded.
Tip: It's recommended to use the standardized format for postal addresses.
These parameters are optional to aid viewing query responses within a browser.
In order to query for a certain field, you can simply use the fields parameter on the end of your query string. For example, a regular query of personalize@rapleaf.com with your API key would look like this:
Now if you simply add the fields parameter followed by specific comma separated fields (as they appear in the response), you can view just the specific fields you queried for in the response.
Please note the %20 is simply the URL encoded space which is needed (you must exactly match
the field name as it appears in the response for this to work).
Here are a few email addresses and name and postal address combinations you can try out with our API:
Successful responses are returned in JSON format. For more details about all the fields available and their possible values, download our data dictionary..
{ "age":"21-24", "gender":"Male", "interests":{ "blogging":true, "high_end_brand_buyer":true, "sports":true, }, "eam":{ "date_first_seen":"2009-06-20", "month_last_open":"2014-11", "popularity":10, "velocity":2, }, "education":"Completed Graduate School", "occupation":"Professional", "children":"No", "household_income":"75k-100k", "marital_status":"Single", "home_owner_status":"Rent" }
The Personalization API is easy to implement in a variety of languages. The code snippets below use the libraries on our GitHub Page to query our API and output the results. For more details, please consult each library's accompanying README docs.
    require 'towerdata_api'

    begin
      api = TowerDataApi::Api.new("API_KEY") # Set API key here
      hash = api.query_by_email("personalize@rapleaf.com")
      puts hash.inspect
    rescue Exception => e
      puts e.message
    end
    from towerDataApi import TowerDataApi
    api = TowerDataApi.TowerDataApi('API_KEY')
    try:
        response = api.query_by_email('personalize@rapleaf.com')
        for k, v in response.iteritems():
            print '%s = %s' % (k, v)
    except Exception as e:
        print e
    import org.json.JSONObject;
    import com.towerdata.api.personalization.TowerDataApi;

    public class TowerDataApiExample {
        public static void main(String[] args) {
            TowerDataApi api = (args[0] != null) ? new TowerDataApi(args[0]) : new TowerDataApi("YOUR_KEY"); // Set API key here
            final String email = (args[1] != null) ? args[1] : "personalize@rapleaf.com";
            // Query by email
            try {
                JSONObject response = api.queryByEmail(email, true);
                System.out.println("Query by email: \n" + response);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }
    require 'TowerDataAPI.pm';

    eval {
        my $response = query_by_email('pete@rapleafdemo.com');
        while (my ($k, $v) = each %$response) {
            print "$k = $v.\n";
        }
    };
    if ($@) { print $@ }
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Text;
    using personalization;

    namespace MyApplication {
        class TowerDataExample {
            public static void Main(string[] args) {
                RapleafApi api = new RapleafApi("SET_ME"); // Set API key here
                try {
                    Dictionary<string, object> response = api.queryByEmail("personalize@rapleaf.com", true);
                    foreach (KeyValuePair<string, object> kvp in response) {
                        Console.WriteLine("{0}: {1}", kvp.Key, kvp.Value);
                    }
                } catch (System.Net.WebException e) {
                    Console.WriteLine(e.Message);
                }
            }
        }
    }
Need to query multiple people in a single request? Check out the Personalization API, Bulk Version.
If you add
'&format=html' to the url of a request in your browser, it will automatically 'pretty print' JSON for testing purposes.
Please email questions to TowerData Developer Support.
| http://intelligence.towerdata.com/developers/personalization-api/personalization-api-documentation | CC-MAIN-2015-11 | en | refinedweb |
22 April 2010 12:39 [Source: ICIS news]
LONDON (ICIS news)--European polyethylene (PE) buyers on Thursday reacted angrily to higher price targets from producers in May, as the first ethylene contract for that month settled at a rollover from April.
“We are really getting sick of these increases,” said one major buyer, which reported having accepted as much as €260/tonne ($347/tonne), or more than 30%, in hikes in 2010.
Low density PE (LDPE) net prices were reported at €1,220-1,250/tonne FD (free delivered) NWE (northwest Europe).
Producers were now announcing increases of €50-70/tonne for LDPE in May, with high density PE (HDPE) sellers seeking a more modest target of plus €30-50/tonne.
“How can they justify this increase?” asked another buyer.
Several PE producers conceded that cracker margins were good, but they cited a tight supply/demand situation in support of their planned increases.
“Cracker margins are now decent, but in the last 18 months we have lost our shirt,” said a major PE producer.
“Product is tight. Prices can go up,” said another.
LDPE availability was tight, after some permanent closures in
“I don’t expect more than a €10/tonne increase in May LDPE now that ethylene has rolled over,” said a buyer.
“I think we have seen the top for PE now,” said another.
Producers intended to raise prices in May, however, and had a clear field, with no significant imports and limited availability themselves. Several sources said that prices had risen on reduced availability rather than strong demand.
“It’s not a strong market but product is tight,” said another producer.
May PE negotiations were expected to be more difficult than those of late, when producers were able to push through increases fairly easily.
LDPE sellers were not concerned about the short-term future, as no substantial quantities of LDPE would be available from new capacities in the
Several large PE buyers were concerned that the persistent increases would damage their markets.
“This is getting to the stage where our markets will be damaged. Monthly ethylene has left the whole market second-guessing. When it [the ethylene contract] was quarterly, we had time to recover,” said one of the buyers.
Another complained that a major contract had been lost as a packaging manufacturer reversed a decision to change from paper to plastics, due to the volatility in the plastics markets.
LDPE is used mainly in the film packaging sector, while HDPE is used for film, bottles and caps, and small injected plastic items, depending on the grade.
PE producers in
($1 = €0.75)
For more
| http://www.icis.com/Articles/2010/04/22/9353113/europe-pe-buyers-react-angrily-to-hikes-on-stable-ethylene.html | CC-MAIN-2015-11 | en | refinedweb |
JS vs pure patching
hi folks,
I’m currently designing a probabilistic sequencer for my own use, firstly.
I’m hesitating between a JS core which would retain all arrays of data (or almost all) VS pure patching objects (coll, etc)
Of course, time accuracy is totally required.
So I’m hesitating before diving into one or the other solution and I’d want to have your opinion.
of course, I tested the beta Max6 and the javascript seems to be improved (tested with a couple of existing projects I had).
let me know here or on
best,
j
I don’t know this for sure, but if you’re using arrays, wouldn’t jitter matrices be faster than either coll or js?
I decided to go to JS.
Trusting the new engine in Max6…
Translating it into coll or even jitter matrices wouldn’t be that hard
Considering the HUGE Max 6 improvements, I guess that JS is worth using.
I’d like to know more about performance comparison between:
- 1 JS parsing A LOT of data coming from 8 sources
VS
- 8 JS parsing data coming from 1 source each one
I really wanted to know that.
Best,
j
posting that in JS forum… more appropriate..
I would love to hear about this as well. In Max4 I used javascript quite a bit but eventually disliked it mainly for performance reasons. I moved to Lua and C from there but recently I got back into Javascript for other domains.
For a new project I’d like to implement a midi processor/mapper in javascript so I’m also worried about timing and performance.
Julien, what are your findings so far?
Hi,
JS in Max6 is REALLY faster, first good point.
I’d suggest you to prototype first using Max6 Objects, if possible (sometimes, making multiple included loops or more tricky stuff can be a pain following that way)
Then, if it works, keep it. If it doesn’t work well because of complexity of patching work, move to JS.
This would be my way.
About Externals.
Indeed, if you can code C++ routines using the SDK, I’m quite sure it would be the fastest way. I wrote "could be", because I have been very surprised by some part of my patches using JS in Max6.
I know where the weakness of visual programming lies, and when it pays off to move to procedural. The question is more whether I should give JS another try or stick with what I already know is good. I think from your experiences I should give JS second chance.
I forgot one major JS annoyance. The missing module system. I had a look and I still can find no docs on any sort of require() function in Max6. It is possible to break JS code into modules now or is the jsextensions folder still the only way to reuse code?
Hi Thijs,
Luke Hall published a script (hallluke.wordpress.com/2010/10/31/including-extra-javascript-files) using the eval() function to include js files into a project (it's not really elegant, but it works up to Max 5). In Max6 though the implementation of eval seems to have changed, so the old script doesn't work anymore. In this thread there is a discussion about some modifications to the script to make it work (probably even a bit less elegant, as functions are passed around a lot).
I would be very curious to hear if someone else has found a better solution.
So far I go for Java if scripting project get to large.
Jan
That’s too bad. Thanks for the tip about eval, but I'd rather stick with jit.gl.lua for scripting then. NodeJS has a module system using require() which works nicely for server-side code. The browser, on the other hand, loads all scripts in one global space, and then you can quite easily create modules with some namespaces and immediately-invoked function wrappers.
A missed opportunity in Max6 imho. Now we have a fast engine and no real way to reuse code or write bigger projects. It was like this in Max4 and years after things are still the same. Bummer.
| https://cycling74.com/forums/topic/js-vs-pure-patching/ | CC-MAIN-2015-11 | en | refinedweb |
I’m still trying to get into the whole Tweet thing, so in an attempt to dig into it a little I took a look at the API and found out it’s pretty functional. My version of a Hello World app was having the ability to call the REST Twitter API to update my status on Twitter. I figure if I can figure out how to integrate this social networking experience into my daily routine, then why not.
To get started, I downloaded the .NET Twitter API wrapper—this makes it super easy to build .NET-based apps that communicate with Twitter. You can get it here. After you’ve downloaded the DLL, open Visual Studio 2010 and create a new Visual Web Part project. Add a reference to the Twitter.Framework.dll and then create your UI. My UI looks somewhat el lamo, as you can see below:
However, design skills notwithstanding, I put together a fairly straightforward UI which made coding it up easy. The main (non-production) code I added is shown below:
using System;
using System.Web.UI;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;
using Twitterizer;
using Twitterizer.Framework;

namespace MyTwitterFeedWebPart.TwitterWebPart
{
    public partial class TwitterWebPartUserControl : UserControl
    {
        string strTweet = "";
        string myTweetUsername = "";
        string myTweetPassword = "";

        protected void Page_Load(object sender, EventArgs e)
        {
        }

        protected void btnTweet_Click(object sender, EventArgs e)
        {
            myTweetUsername = txtUsername.Text;
            myTweetPassword = txtPassword.Text;
            strTweet = txtbxTweet.Text;

            Twitter myTweet = new Twitter(myTweetUsername, myTweetPassword);
            myTweet.Status.Update(strTweet);
        }

        protected void btnClear_Click(object sender, EventArgs e)
        {
            txtbxTweet.Text = "";
            txtPassword.Text = "";
            txtUsername.Text = "";
        }
    }
}
When you deploy and run the sample app, you have a functional web part app that enables you to submit tweets from your web part and post them to your Twitter account.
Yeah, success:
How useful is this in the long-term? Well, that remains to be seen…but it does begin to show you that integrating with SharePoint 2010 is definitely possible. You’d want to obviously add checks like character length of tweet < 140, tweet signing, Internet connectivity check, etc., etc., but the .NET wrapper enables you to do some powerful integrations.
Happy coding!
Steve
| http://blogs.msdn.com/b/steve_fox/archive/2009/12/02/tweeting-from-sharepoint-2010-visual-web-part.aspx?Redirected=true | CC-MAIN-2015-11 | en | refinedweb |
Where can I store my application data?
I see a lot of people starting threads because they're struggling to persist (often) a little bit of data from one run of their Java application to the next. The offered solutions are often some kind of jiggery-pokery involving java.io.File. Not only is File handling a cross-platform morass (our favourite OSes can't even agree on path separators, let alone filesystem roots and GoodPlacesToPutApplicationData™), but every so often another limited-resource computing device appears, some of them without a natural filesystem at all.
So if we can't store data in files, where can we store it? Enter the Java Preferences API in the java.util.prefs package. Preferences is a platform neutral way to store a small amount of simple data. It offers support to store Strings and built-in data types and retrieve them without having to worry how they are persisted. You simply obtain a Preferences instance, store your value-to-be-persisted on a key, exit your app and sleep easy. The next time you start your app, you obtain the Preferences instance and invoke get([key you used previously], [default if nothing's there]). Ideal - for simple use.
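A minimal sketch of that round trip (illustrative only; the key name and stored value are made up):

import java.util.prefs.Preferences;

public class PrefsDemo {
    public static void main(String[] args) {
        // Obtain a Preferences node for this class's package.
        Preferences prefs = Preferences.userNodeForPackage(PrefsDemo.class);

        // Read back whatever was stored on a previous run (or the default).
        String last = prefs.get("lastRun", "never");
        System.out.println("Last run: " + last);

        // Persist a new value for the next run.
        prefs.put("lastRun", new java.util.Date().toString());
    }
}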
Preferences was designed to be simple and lightweight, so it doesn't include any support for storing Java Objects. Worse, it imposes severe constraints on the size of values that can be stored. My Java 6 API docs say that a String stored by a Preferences instance can be up to 8,192 characters long (Preferences.MAX_VALUE_LENGTH). And only built-in datatypes and Strings may be stored! So if we can accept the constraints of the Preferences class, how would we use it to store arbitrary objects? Here's my first stab at it.
An Object to String bridge
We're going to store Objects as Strings. Java offers Serialization as a means of storing and retrieving objects to and from streams, so that's what we're going to do - serialize an Object to a stream. What kind of stream? A stream that creates a valid String object. Then we're going to store that String in a Preferences instance. When we later retrieve the String from the Preferences instance, we have to go back across the bridge - deserialize the String - back into our original Object-that-was-stored.
We know we can create Strings from byte arrays, and we know we can get a byte array from a String, so it seems like we should be able to use ByteArrayInputStream and ByteArrayOutputStream to bridge between byte arrays and streams. So far so good: Serialization does Object-to-Stream and the ByteArray{In|Out}putStreams will do Stream-to-byte-array. The problem with this first bridge should be evident from the API docs for the String(byte[]) constructor. Quote:
Constructs a new String by decoding the specified array of bytes using the platform's default charset. ...
The behavior of this constructor when the given bytes are not valid in the default charset is unspecified.
What I chose to do (I can post the code if anyone is interested) was to extend Filter{In|Out}putStream to accept a stream of arbitrary bytes on one side and output a 'hex' representation of those bytes on the other side. Every byte that is written to my HexOutputStream (the class I extend from FilterOutputStream) by ObjectOutputStream is converted to a 2-character hex representation, which is in turn written to the ByteArrayOutputStream as 2 bytes. OK, so I'm obviously doubling the size of the stored object, but I can absolutely guarantee that a string of bytes which are '0' to '9' and 'a' to 'f' are valid String objects in any Charset.
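The actual Hex stream classes aren't posted here (see the edit at the end), but purely as an illustration of the idea just described, a HexOutputStream along these lines would do the job. This is a guess at an implementation, not the original code; a matching HexInputStream would simply reverse the two-characters-per-byte mapping.
Code java:
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Illustrative sketch only - not the author's actual HexOutputStream.
public class HexOutputStream extends FilterOutputStream {
    private static final char[] DIGITS = "0123456789abcdef".toCharArray();

    public HexOutputStream(OutputStream out) {
        super(out);
    }

    @Override
    public void write(int b) throws IOException {
        // Write each byte as two ASCII hex characters, guaranteed valid in any charset.
        out.write(DIGITS[(b >> 4) & 0x0f]);
        out.write(DIGITS[b & 0x0f]);
    }
}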
So far so good - my Object to String bridge is now
Code :
mySerializableObject -> ObjectOutputStream -> HexOutputStream -> ByteArrayOutputStream -> new String(byte[])
and my String to Object bridge is
Code :
String.getBytes() -> ByteArrayInputStream -> HexInputStream -> ObjectInputStream -> mySerializableObject
Let's have a demo.
A suitable problem
I'm not very good at remembering birthdays, particularly bad when it's the birthday of my Significant Other. I could write a command-line application to keep a note of the birthday, but I'm a platform butterfly and I know I'm going to change my PC soon, but I can't be sure whether I'll be using an Android phone, an Ubuntu nettop, a LEGO NXT brick, an iPad or a Windo... no, let's not go too far. I want this app to remember the special birthday, so I create a serializable class so that I can create birthday instances from the keyboard and store and retrieve them from some persistence backend. I don't care what the persistence back-end is, so Serializable at least bridges my class to streams. Streams are good. I can write stuff into them, read stuff out of them, and I don't care what's at the other end.
Here's Birthday.java:
Code java:
package com.javaprogrammingforums.domyhomework;

import java.io.*;
import java.util.regex.*;

public class Birthday implements Serializable {

    public final static long serialVersionUID = 42l;
    public final static Pattern PAT_BIRTHDAY = Pattern.compile("^([0-9]{1,2})[a-z]{0,} ([JFMASOND][a-z]{2,8})$");

    public enum Month {
        January(31), February(29), March(31), April(30), May(31), June(30),
        July(31), August(31), September(30), October(31), November(30), December(31);

        private final int MAX_DAYS;

        Month(int maxDays) {
            MAX_DAYS = maxDays;
        }

        public void validate(int day) throws IllegalArgumentException {
            if (day < 1 || day > MAX_DAYS)
                throw new IllegalArgumentException("Not a valid day in " + this + ": " + day);
        }
    };

    private final int day;
    private final Month month;

    public Birthday(int dayOfMonth, Month m) throws IllegalArgumentException {
        m.validate(dayOfMonth);
        day = dayOfMonth;
        month = m;
    }

    public static Birthday parse(String s) throws IllegalArgumentException, NullPointerException {
        if (s == null) throw new NullPointerException();
        Matcher mat = PAT_BIRTHDAY.matcher(s);
        if (!mat.matches())
            throw new IllegalArgumentException("Bad format for birthday: '" + s + "'");
        try {
            Month m = Enum.valueOf(Month.class, mat.group(2));
            return new Birthday(Integer.parseInt(mat.group(1)), m);
        } catch (Exception e) {
            throw new IllegalArgumentException("Bad month: '" + mat.group(2) + "'", e);
        }
    }

    public String toString() {
        return month + " " + day;
    }
}
Marvellous, a Birthday class that stores only a day-of-month and a month as an int and an Enum. Should be plenty good enough for a demo of Object-into-Preferences.
Now for the application. The application PreferencesSOBirthday starts up, tries to retrieve a Birthday from the Preferences back-end, and displays it as a reminder if there's one in there. If there isn't a Birthday object in the Preferences instance, it asks you for one and stores it. Nowhere in the code will you find any mention of a File. For that matter, nowhere in the code will you find any explicit choice of persistent storage at all - all it does is to obtain a Preferences instance that's suitable for the user/application, and stores the object. The next time you run the application - tadaa - the object is magically back.
Code java:
package com.javaprogrammingforums.domyhomework;

import java.io.*;
import java.util.prefs.Preferences;

public class PreferencesSOBirthday {

    private final static String PREFS_KEY_BIRTHDAY = "significant.other.birthday";

    public static void main(String[] args) throws Exception {
        /* may throw SecurityException - just die */
        Preferences prefs = Preferences.userNodeForPackage(PreferencesSOBirthday.class);

        if (args.length > 0 && "reset".equalsIgnoreCase(args[0])) {
            System.out.println("Command line argument 'reset' found - data removed.");
            prefs.remove(PREFS_KEY_BIRTHDAY);
        }

        // fetch the string-encoded object
        String sSOBD = prefs.get(PREFS_KEY_BIRTHDAY, null);
        if (sSOBD != null) try {
            /* check preferences string actually contains a Birthday */
            getBirthday(sSOBD);
        } catch (Exception e) {
            System.out.println("Ouch, my brain hurts!");
            sSOBD = null;
            prefs.remove(PREFS_KEY_BIRTHDAY);
        }

        while (sSOBD == null) {
            System.out.print("OMG are you in trouble, I don't know your SO's birthday - enter it now: ");
            BufferedReader br = new BufferedReader(new InputStreamReader(System.in));
            String sBirthday = br.readLine();
            if (sBirthday == null) {
                System.out.println("OK, be like that then");
                System.exit(0);
            }
            try {
                Birthday b = Birthday.parse(sBirthday);
                sSOBD = toPreference(b);
                prefs.put(PREFS_KEY_BIRTHDAY, sSOBD);
            } catch (Exception e) {
                System.out.println(e);
                sSOBD = null;
            }
        }
        System.out.println("Be prepared - your Significant Other's birthday is " + getBirthday(sSOBD));
    }

    /* String to Birthday bridge */
    private static Birthday getBirthday(String s) throws NullPointerException, IOException, ClassNotFoundException {
        return (Birthday) new ObjectInputStream(new HexInputStream(new ByteArrayInputStream(s.getBytes()))).readObject();
    }

    /* Birthday to String bridge */
    private static String toPreference(Birthday b) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(new HexOutputStream(baos));
        oos.writeObject(b);
        oos.close();
        return new String(baos.toByteArray());
    }
}
Remember to run the application at least twice, or you won't see persistence 'in action'.
Code :
java com.javaprogrammingforums.domyhomework.PreferencesSOBirthday
Here's a command line to reset (remove) the stored data:
Code :
java com.javaprogrammingforums.domyhomework.PreferencesSOBirthday reset
Where next?
It shouldn't be difficult to turn this trivial example into a general-purpose Object Persistence engine based on the Preferences API, but I would caution against it because of the arguments put forward by Sun in the first article linked above. Preferences really isn't for arbitrary Object storage, it's for storing simple configuration data. Still, if a solution such as the one above fits your requirements and you are disciplined enough to avoid the temptation to base a mega-project on it instead of using a proper persistence back-end, then I think it has some kilometrage. Enjoy.
edit: Some time later I realised you can't actually run this without the HexStreams classes. I'm not going to post those right now, because I think they're not necessary to understand what's going on. Attached instead is an executable jar file - execute it with 'java -jar SOBirthday.jar'
nother edit: Just seen an article at IBM by an author who can read the API docs and notice the putByteArray() method in Preferences! That would make the code slightly less complicated, but not much. The IBM article also suggests breaking your objects up into under-8KB chunks, but I think if you're going to go that far, you probably need a proper DB:
|
http://www.javaprogrammingforums.com/%20file-input-output-tutorials/10459-look-ma-no-files-portable-object-persistence-printingthethread.html
|
CC-MAIN-2015-11
|
en
|
refinedweb
|
Have you read the install
Have you read the install instructions in the FreeIMU library download page? They tell you to modify the FreeIMU.h to enable the board you are using...
Install Instructions
Hi Fab,
I'm sorry, for some reason I'm having trouble finding the
"install instructions in the FreeIMU library download page"
Maybe you could add the link for me in your email response?
The link below is not working for me right now - is it my end ?
"The files on the repository can also browsed from a webbrowser here."
Thanks again for your help,
Rick
Just search for the file
Just search for the file FreeIMU.h within the FreeIMU library and uncomment the correct version of the board you are using.
The instructions in the
The instructions in the download section of this page tells you what to do.
Download Section
Where is the download section you speak of?
Just below the links to
Just below the links to download the library.
Thanks, working fine now.
Thanks, working fine now.
HELP.....ADXL345
I always get this error when i uploaded the ADXL345_test code
ADXL345_test.cpp: In function 'void setup()':
error: no matching function for call to 'ADXL345::get_Gxyz(double [3])'
C:\Documents and Settings\INTERNET CAFE\Desktop\arduino-1.0.1\libraries\adxl345driver/ADXL345.h:109: note: candidates are: void ADXL345::get_Gxyz(float*)
That was a bug. Thanks for
That was a bug. Thanks for reporting. A fixed version is available on the repository at ... please test it.
Altimeter values and speed
1.There is no example in the FreeIMU library for getting altimeter values. How do i get altimeter values? I hav a 0.4.3 version.
2.Is it possible to get the compensated acceleration values of all 3 axes individually using the library?
1) The MS5611 is part of the
1) The MS5611 is part of the FreeIMU library and contains examples on how to get altitude readings from the barometer.
2) Not currently. Code example for doing that is available here and I do have newer code coming up which has gravity compensation as well as baro+acc filtered altitude estimation.
gravitiy compensation
Is it normal to get non-zero gravity compensated values in a FreeIMU sitting still in the floor/table?
This is what I've done, I created a function into FreeIMU.cpp:
void FreeIMU::getDynamic(float * ypr) {
float q[7]; // quaternion
float gx, gy, gz, accx, accy, accz; // estimated gravity direction
getQ(q); // modified getQ so it returns the acceleration measurements in positions 4,5,6
accx=q[4];
accy=q[5];
accz=q[6];
//Note: calibrations have changed, convert acc in LSB to g using last conversions
// this is done in getvalues()
//calculate gravity components:
gx = 2 * (q[1]*q[3] - q[0]*q[2]);
gy = 2 * (q[0]*q[1] + q[2]*q[3]);
gz = q[0]*q[0] - q[1]*q[1] - q[2]*q[2] + q[3]*q[3];
Serial.print("Gravity: "); Serial.print("\t");
Serial.print(gx); Serial.print("\t");
Serial.print(gy); Serial.print("\t");
Serial.print(gz); //
accx-=gx;
accy-=gy;
accz-=gz;
Serial.print("Dynamic accelerations: "); Serial.print("\t");
Serial.print(accx); Serial.print("\t");
Serial.print(accy); Serial.print("\t");
Serial.println(accz); //
}
I have modified getQ to return a 9 vector where the last three are the accelerations:
void FreeIMU::getQ(float * q) {
float val[9];
getValues(val);
q[4] = val[0];
q[5] = val[1];
q[6] = val[2];
...
changed FreeIMU.h accordingly and added the acceleration LSB to g conversion, in my example using:
accgyro.setFullScaleGyroRange(MPU6050_ACCEL_FS_2);
I modified in getValues the line
values[i] = (float) accgyroval[i]/16384;
to add the LSB/g conversion.
And bellow the readings I get, even when I level the FreeIMU so that it is "perfectly" vertical and I get 1g reading in the z component, I read non-zero g for the x and y components, is this rounding error? I guess this is the reason for the non-zero readings in the dynamic accelerations.
Gravity: 0.02 0.05 1.00    Dynamic accelerations: 0.01 -0.10 -0.02
Gravity: 0.02 0.05 1.00    Dynamic accelerations: 0.01 -0.09 -0.02
Gravity: 0.02 0.05 1.00    Dynamic accelerations: 0.01 -0.09 -0.01
Gravity: 0.02 0.05 1.00    Dynamic accelerations: 0.01 -0.10 -0.02
Gravity: 0.02 0.05 1.00    Dynamic accelerations: 0.01 -0.09 -0.02
Gravity: 0.02 0.05 1.00    Dynamic accelerations: 0.01 -0.10 -0.02
Gravity: 0.02 0.04 1.00    Dynamic accelerations: 0.01 -0.10 -0.03
Gravity: 0.02 0.04 1.00    Dynamic accelerations: 0.01 -0.09 -0.02
Gravity: 0.02 0.04 1.00    Dynamic accelerations: 0.02 -0.10 -0.03
Gravity: 0.02 0.04 1.00    Dynamic accelerations: 0.00 -0.09 -0.02
Gravity: 0.02 0.04 1.00    Dynamic accelerations: 0.01 -0.09 -0.02
Gravity: 0.02 0.04 1.00    Dynamic accelerations: 0.01 -0.09 -0.03
Gravity: 0.02 0.04 1.00    Dynamic accelerations: 0.00 -0.10 -0.02
Doesn't look bad to me.. you
Doesn't look bad to me.. you may wanna use the new calibration code on the repository so that you should get perfect 0 0 1 gravity readings.
Cool. Looking forward to the
Cool. Looking forward to the enhancements.
A question about the Data Fusion Algorithm
Hi Fabio,
I have watched your video at:-...
and your IMU is really amazing.
My IMU consists of a MPU-6050 (acc + gyro) and a LSM303DLHC (mag). I
also use the Madgwick data fusion algorithm. However my result is not
good. My IMU is very sensitive to the linear (translational)
acceleration. When I swiftly move the IMU back and forth, the
quaternion changes too much. Have you experienced this problem? If so could you give me some advices to overcome it?
Thank you.
Personally, I never had this
Personally, I never had this problem.. but I can give you some advices.. you may wanna play with changing the values of the following two values in FreeIMU.h
Yeah, I will try to adjust
Yeah, I will try to adjust those two to see if I can get better results.
BTW, could you please post a video demonstrating the lateral acceleration rejection capability of your IMU?
Thanks
Invensense releases license for 6050 DPM in open source project
I saw a note in diydrones (...) which says Invensense released a license for using the DMP fusion firmware in open source projects. Will we see this in the FreeIMU lib soon? Has anyone tried running the DMP firmware on the FreeIMU MPU6050 with any software? It would be nice to get the sensor fusion algorithm out of the Arduino code (both code-space and CPU-time-wise) so it has more room for other things. Does anyone know if the available DMP fusion firmware works with the HMC5883L magnetometer?
Having trouble with FreeIMU lib
Hello Mario,
I'm using the latest freeIMU library(freeIMU 0.4), the raw data seems to be ok but the yaw,roll,pitch seems to be messed up.
The freeIMU_yaw_pitch_roll example didn't compile so I've added:
#include "I2Cdev.h"
#include "MPU6050.h"
to the Arduino file.
This is the raw data I'm getting:
This is the yaw, pitch, roll data I'm getting just after the calibration:
This is the yaw, pitch, roll data I'm getting after 10 seconds without touching the board:
As you can see, the angles are not the same, quite far from it.
Any ideas what's going wrong?
Thank you!
The sensor fusion algorithm
The sensor fusion algorithm will take about 30 to 120 seconds to settle so, it should be pretty normal. Best way to check for correctness is loading the FreeIMU_quaternion and run the FreeIMU_cube on Processing so that you can better visualize things. Sometime ypr readings may be misleading.
I tried that as well, I
I tried that as well, I uploaded the quaternion example and opened Cube on processing, left it to settle down for 3-4 minutes, pressed 'H' to set yaw, pitch, roll angles to zero.
Waited 10-15 seconds and the yaw drifted heavily:
Thank you for your help and quick replies!
IMU-Mocap
Hi Fabio, did these guys use your FreeIMU for this project?...
If not, do you think that your IMU could have this quality?
It just I am trying to develope my own Inertial Motion Capture suit, and I am searching for the best solution available. There are tons of Imus, and I just have to decide for one.
Your´s seems very accurate.
What do you think?
Thank you!
Diego
I don't think these guys are
I don't think these guys are using FreeIMU. However, yes, FreeIMU can deliver such good results when properly calibrated, since it uses some of the best sensors available on the market.
FreeIMU is however only a sensor board and a library so you'll have to develop your solution for wireless communication and multiple IMUs data gathering.
You may wanna have a look at the SmartSkelethon project by John Patillo.
Problems with arduino + freeIMU driver
Hi,
First I would like to congratulate for a great project and for a contribution to the opensource world.
I bought an Arduino nano and started playing with IMU 0.35_BMP from ebay.
I successfully programmed the Arduino, and the Arduino was recognised OK by the computer. But then I connected the IMU sensor to the Arduino, and I got the message "USB Device Not Recognized", but not always, sometimes it works.
Thank you for helping,
regards
Check your connections. You
Check your connections. You may have a short.
Missing KiCad library ??
Hi Fabio,
I downloaded
The sources for FreeIMU 0.4.x
When opening the project in KiCad,, i get an error:
The following libraries could not be found:
\fv_kicad_lib
Can i download it from somewhere ?
best regards!
Bus Pirate - Magnetometer I2C address
Hi! I've received the Free IMU 0.4r3 and connected it to a Bus Pirate V3 with GND - VIN(5v) - SDA - SCL. I set the mode to I2C, speed at 400Khz (also tried at 100Khz), put the Pullup resistors and PSU on and let it scan for I2C devices. It found 0xD0 (0x68 W), 0xD1 (0x68 R), 0xEE(0x77) which come from the 6050 gyro and the MS5611 altometer but no response from the Magnetometer. When I try to get an ACK from the magnetometer, it doesn't respond. Have you got any idea?
Auxiliary I²C Bus Mode
I think I got why the magnetometer didn't respond to an address search. The auxiliary I2C bus mode is probably still in master mode instead of pass-through mode so the system processor can't see the magnetometer. Unfortunately I don't find the default mode in the specs. Can you confirm my way of thought? Thanks!
Yeah, by default the MPU6050
Yeah, by default the MPU6050 works as master on the Aux I2C. If you wanna to connect the AUX I2C and the primary I2C bus, have a look at the FreeIMU library, functions FreeIMU::init, MPU6050::setI2CMasterModeEnabled(), MPU6050::setI2CBypassEnabled respectively in FreeIMU.cpp and MPU6050.cpp.
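For illustration only (not Fabio's exact code), a bare-bones Arduino sketch using the MPU6050 class mentioned above to bridge the two buses might look like this; the object name is arbitrary and 0x68 is the chip's default I2C address:

#include <Wire.h>
#include "I2Cdev.h"
#include "MPU6050.h"

MPU6050 mpu;  // default I2C address 0x68

void setup() {
  Wire.begin();
  mpu.initialize();
  // Stop the MPU6050 from mastering its auxiliary I2C bus, then enable bypass
  // so the host processor can talk to the magnetometer directly.
  mpu.setI2CMasterModeEnabled(false);
  mpu.setI2CBypassEnabled(true);
}

void loop() {
}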
FreeIMU and eZ430-RF2500 ?
Hi,
Congratulation for a great project!
I was wondering if anyone tried to connect the freeIMU to the eZ430-RF2500 board instead to Arduino? Its even possible? Since eZ430-RF2500 is wireless it would be usefull also for quadcopters,...
Thank you,
Regards
Nejc
Hello Nejc, I'm not aware of
Hello Nejc, I'm not aware of anyone who did that.. however, supposing that such development board breaks out the I2C pins of the micro, it should be possible.
Actually, I'm currently developing a product using the same inertial sensors of the FreeIMU but directly interfaced with a CC430.. I'll have more details out on this website later this year.
Thank you for the
Thank you for the reply.
Actually I was searching for a solution to use FreeIMU wireless. I found just 1 hour ago that its possible to connect the Arduino to Xbee.
I dont know how the quadcopter projects on this side solve the problem of wireless, but I guess it would work with Xbee.
Regards,
Nejc
freeIMU 4.3 not working...
Hi Fabio,
I recently received my FreeIMU 4.3 and have just connected it to my Arduino Uno as per your video. Running the raw data example sketch, only the last two columns show any data; the other columns just display '0's.
I am using Arduino 1.0.1.
any help would be greatly appreciated.
keep up the good work. thanks.
Have you modified FreeIMU.h
Thanks for getting a FreeIMU! Have you modified FreeIMU.h as explained in the FreeIMU library instruction above?
One problem fixed, one other problem to solve
Thanks for the reply. I can't believe i'd forgotten to modify that line. The raw example sketch now works.
The problem i am now having is with the yaw_pitch_roll sketch, it doesn't compile, returning the error:
In file included from FreeIMU_yaw_pitch_roll.cpp:10:
/Users/Username/Documents/Arduino/libraries/FreeIMU/FreeIMU.h:136: error: 'MPU6050' does not name a type
All the libraries are in the correct location.
Thanks again for the reply.
Modify the
Modify the FreeIMU_yaw_pitch_roll program so that it includes the following two lines (put them after the other includes):
#include "I2Cdev.h"
#include "MPU6050.h"
That's a bug which is already fixed in my development version of the library which will be fixed in the next release of the FreeIMU library.
everything working
thanks a lot for the support and for a great product.
About FreeIMU
Hi,
We are planning to build an autonomous outdoor blimp for a college project this fall. We are wondering if your IMU would be good for this project. Our blimp would navigate through the predefined GPS waypoints with the help of IMU and GPS. Your suggestions would be appreciated.
Thanks
PS: The item is currently sold out.
We are planning to use
We are planning to use arduino mega.
Wow, cool project! Very
Wow, cool project! Very interesting!
Yes, a FreeIMU with its very precise orientation and altitude sensing would be pretty good for such application. Pair it with a GPS and an Arduino Mega and you will really have lot of possibilities!
The boards are currently in stock at viacopter.eu (only 2 units) and sdmodel.it (more than 20 units). Both shops offers international shipping options.
Good luck with your project!
Would FreeIMU work with this IMU?
Hello,
We're working on a college project that is to implement an inertial navigation system.
First things first - we need the orientation of the body(9DOF) and FreeIMU seems to be perfect for our needs.
We thought about buying this module:-...
Seems that its using the exact same components as the FreeIMU v0.4.
Could we use FreeIMU with this module? If so, would it require doing any changes?
Many thanks!
That board is a complete
That board is a complete China clone of FreeIMU v0.4... I never used the board but I'm fairly confident that the Chinese cloned it completely, so it should work. At worst, you'll have to change some code modifying the sensors I2C addresses.
Please consider that testing sensors and board designs, writing code and supporting users takes me a lot of time and efforts which are payed by the incomes coming from FreeIMU board sales.
Without people buying FreeIMU there wouldn't be any FreeIMU library to use and I wouldn't be here answering your comments...
So, choose wisely.
Not an exact clone, probably same schematics
This clone is not as ethical as the earlier one you noted. They probably copied your schematic but they do not credit your design. From the photos you can see that they also did a different board layout including a different connector pinout. It is interesting that they added back a mounting hole. I wonder if the new board layout was done for any other reason than obscuring the fact that it is a clone.
3.3V Pro Mini wiring?
For the Stalker I connected VIN as per your 035_arduino.sch. Is that what you mean by 5V connector? If so maybe I'm OK.
How should I connect to the 3.3V Pro mini which is powered right now by a 3.3V FTDI USB board? I was assuming that I would need to skip the regulator and connect the Pro 3.3V VCC directly to the FreeIMU 3V3 pin. Will that work?
Now that the holes are gone how do you suggest attaching this board. I want to make sure the alignment matches. Can I safely drill the copper pads in the upper corners? I'm having a hard time finding these pads in the Gerber files.
Thanks again for the help.
Yeah, that's what I meant for
Yeah, that's what I meant for 5V connector.
When using an host board already working at 3V3 as the Pro mini 8MHz, the suggested way is to skip the regulator and connect it directly to 3V3 connector.
Removing the drill holes was something I carefully evaluated, not just something random. Mechanically fixing the IMU to the tracked object via screws was, in some applications, giving extremely bad results caused by vibration propagation. That was a huge problem for early UAV/quadcopter testers with pre-0.3.5 boards. That's why you won't find mounting holes in the board any more.
The suggested way to fix your board is to use a thick and good quality bi-adhesive tape.
My application should be easier
Sitting between 4 quadrotor motors doesn't sound like the ideal environment for a sensor. A hard screw mount would probably be an efficient vibration coupler so the change makes a lot of sense. Fortunately my application is less abusive. I'll basically hang it on a bicycle. The information I want is climb rate which I hope to get from the pressure sensor and calculated from measured speed and climb angle.
Mounting
Fabio, Many thanks for your efforts and for hosting this site.
Mounting is a major issue. I find it interesting that you moved away from using a screw mount. Did you try using rubber grommets to isolate the IMU from the mounting posts when you had mounting holes? Do you have a picture showing how the FreeIMU should be mounted using the bi-adhesive tape?
You can check the following
You can check the following link:
Yes, we tried using the rubber and also plastic screws, but results weren't as good as with bi-adhesive tape.
Help with MP 6050
ciao!!
I came across your YouTube video; very clear and nice. Thanks for all the effort you've put into this. I am a super newbie who was given an Arduino Mega and the InvenSense MPU-6050 to work with. Could you provide some info (header kits and all) on how to interface with it as you did for the FreeIMUs? I am trying to get position data from sensors for a moving car.
Per favore, prima grazie!!!!
|
http://www.varesano.net/projects/hardware/FreeIMU?page=2
|
CC-MAIN-2015-11
|
en
|
refinedweb
|
For tallying the number of views on this website, we wanted to exclude our own hits while we developed, so we needed to find the IP address of the user. We quickly found that the normal way of doing this under ASP.NET Core 5 would return the Apache server's local address (127.0.0.1). Searching the web, we were able to cobble together this solution from a few posts.
-Raising Awesome ©2021
This is for a "webapp" (aka Razor Pages). It's the kind of app that you'd start with "dotnet new webapp" at the terminal.
Paste this at the top of your model file:
using System.Net;
Paste this before the final brace of your model file:
public static class HttpContextExtensions
{
    public static IPAddress GetRemoteIPAddress(this HttpContext context, bool allowForwarded = true)
    {
        if (allowForwarded)
        {
            string header = (context.Request.Headers["CF-Connecting-IP"].FirstOrDefault()
                ?? context.Request.Headers["X-Forwarded-For"].FirstOrDefault());
            if (IPAddress.TryParse(header, out IPAddress ip))
            {
                return ip;
            }
        }
        return context.Connection.RemoteIpAddress;
    }
}
Last, in the routine where you want to discover it, put this:
string the_ip=HttpContext.GetRemoteIPAddress().ToString();
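For context, here is what that might look like inside a Razor PageModel. The page and property names are just for illustration, and this assumes the extension class above is visible from the same namespace:

using Microsoft.AspNetCore.Mvc.RazorPages;

// Illustrative only: names are made up for this sketch.
public class IndexModel : PageModel
{
    public string VisitorIp { get; private set; } = "";

    public void OnGet()
    {
        // PageModel exposes the current HttpContext, so the extension method works here.
        VisitorIp = HttpContext.GetRemoteIPAddress().ToString();
    }
}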
|
https://www.raisingawesome.site/IPAddress
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
Hi everybody, here’s my new Raspberry PI project: the face follower webcam!
When I received my first Raspberry, I realised it would be a lot of fun to play with real-world things: so I tried to make the Pi react to its environment, and I played a lot with speech recognition, various kinds of sensors and so on. But then I suddenly realized that the really fun thing would be to make it see. I also understood that processing a webcam input in real time would be a tough task for a device with such tiny resources.
I have now discovered that it is indeed a tough task, but the Raspberry can achieve it, as long as we keep things simple.
OpenCV Libraries
As soon as I started playing with webcams, I decided to look for existing libraries to parse the webcam input and to extract some information from it. I soon came across the OpenCV libraries, beautiful libraries to process images from the webcam with a lot of features (which I didn’t fully explore yet), such as face detection.
I started trying to understand if it was possible to make them work on the PI, and yes, someone had already done that, with python too, and it was as easy as a
sudo apt-get install python-opencv
After some tries, I found out that the Pi was slow in processing the frames to detect faces, but only because it buffered the frames; I soon had a workaround for that, and soon I was done: face detection on the Raspberry Pi!
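The workaround boils down to draining a few buffered frames before grabbing the one to process. In the old cv API it looks roughly like this (the same idea appears in the full script further down):

# Discard a handful of stale frames so the frame we process is (nearly) current.
for i in range(5):
    cv.QueryFrame(capture)
img = cv.QueryFrame(capture)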
All you need to try face detection on your own is the python-opencv package and the pattern xml file
haarcascade_frontalface_alt.xml; I guess you can find it easily on the Internet.
I started with a script found here and then I modified it for my purposes.
The Project
Eventually, I decided to build with my Pi a motor-driven webcam which could “follow” a face detected by OpenCV in the webcam stream. I disassembled a lot of things before finding a suitable motor, but then I managed to connect it to the Raspberry GPIO (I had to play a little with electronics here, because I didn't have a servo motor; if you need some information about this part, I'll be happy to provide it, but I'm not so good with electronics, thus the only thing I can say is that it works). Here's a picture of the circuit:
And here are some photos of the motor, to which I mounted an additional gear wheel.
Once the motor worked, I attached it to a webcam which I plugged into the Pi, and then I combined the previously linked script with some GPIO scripting to achieve the goal; here is the result:
import RPi.GPIO as GPIO
import time, sys
import cv, os
from datetime import datetime
import Image

# GPIO pins i used
OUT0 = 11
OUT1 = 13
out0 = False  #!enable line: when it's false, the motor turns, when it's true, it stops
out1 = False  #!the direction the motor turns (clockwise or counter clockwise, it depends on the circuit you made)

def DetectFace(image, faceCascade):
    min_size = (20, 20)
    image_scale = 2
    haar_scale = 1.1
    min_neighbors = 3
    haar_flags = 0

    # Allocate the temporary images
    grayscale = cv.CreateImage((image.width, image.height), 8, 1)
    smallImage = cv.CreateImage((cv.Round(image.width / image_scale), cv.Round(image.height / image_scale)), 8, 1)

    # Convert color input image to grayscale
    cv.CvtColor(image, grayscale, cv.CV_BGR2GRAY)

    # Scale input image for faster processing
    cv.Resize(grayscale, smallImage, cv.CV_INTER_LINEAR)

    # Equalize the histogram
    cv.EqualizeHist(smallImage, smallImage)

    # Detect the faces
    faces = cv.HaarDetectObjects(smallImage, faceCascade, cv.CreateMemStorage(0),
                                 haar_scale, min_neighbors, haar_flags, min_size)

    # If faces are found
    if faces:
        #os.system("espeak -v it salve")
        for ((x, y, w, h), n) in faces:
            return (image_scale*(x+w/2.0)/image.width, image_scale*(y+h/2.0)/image.height)
            #(image, pt1, pt2, cv.RGB(255, 0, 0), 5, 8, 0)
    return False

def now():
    return str(datetime.now())

def init():
    GPIO.setmode(GPIO.BOARD)
    GPIO.setup(OUT0, GPIO.OUT)
    GPIO.setup(OUT1, GPIO.OUT)
    stop()

def stop():  #stops the motor and return when it's stopped
    global out0
    if out0 == False:  #sleep only if it was moving
        out0 = True
        GPIO.output(OUT0, out0)
        time.sleep(1)
    else:
        out0 = True
        GPIO.output(OUT0, out0)

def go(side, t):  #turns the motor towards side, for t seconds
    print "Turning: side: " + str(side) + " time: " + str(t)
    global out0
    global out1
    out1 = side
    GPIO.output(OUT1, out1)
    out0 = False
    GPIO.output(OUT0, out0)
    time.sleep(t)
    stop()

#getting camera number arg
cam = 0
if len(sys.argv) == 2:
    cam = sys.argv[1]

#init motor
init()

capture = cv.CaptureFromCAM(int(cam))
#i had to take the resolution down to 480x320 becuase the pi gave me errors with the default (higher) resolution of the webcam
cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_WIDTH, 480)
cv.SetCaptureProperty(capture, cv.CV_CAP_PROP_FRAME_HEIGHT, 320)
#capture = cv.CaptureFromFile("test.avi")

#faceCascade = cv.Load("haarcascades/haarcascade_frontalface_default.xml")
#faceCascade = cv.Load("haarcascades/haarcascade_frontalface_alt2.xml")
faceCascade = cv.Load("/usr/share/opencv/haarcascades/haarcascade_frontalface_alt.xml")
#faceCascade = cv.Load("haarcascades/haarcascade_frontalface_alt_tree.xml")

while (cv.WaitKey(500) == -1):
    print now() + ": Capturing image.."
    for i in range(5):
        cv.QueryFrame(capture)  #this is the workaround to avoid frame buffering
    img = cv.QueryFrame(capture)
    hasFaces = DetectFace(img, faceCascade)
    if hasFaces == False:
        print now() + ": Face not detected."
    else:
        print now() + ": " + str(hasFaces)
        val = abs(0.5 - hasFaces[0])/0.5 * 0.3
        #print "moving for " + str(val) + " secs"
        go(hasFaces[0] < 0.5, val)
        #cv.ShowImage("face detection test", image)
Of course I had to play with timing to make the webcam turn well, and the timings, like everything strictly tied to the motor, depend on your specific motor and/or circuit.
Here’s the video (in italian):
4 thoughts on “Raspberry PI powered Face Follower Webcam”
Excellent stuff – nicely done. You should contact raspberrypi.org or Adafruit – they’re always looking for clever projects like this involving the Pi.
Dear Daniele
I am interested in your webcam + motor project. I would like to use my PI to reassemble your project. Could you please let me know the hardware component and webcam model / brand? Please advise.
Van
Hi Van, I’m very happy this can help you.
The webcam is a Logitech Webcam C170, a very cheap model, which I disassembled to make it more lightweight.
Since i usually don’t want to spend money and wait for components, I try to get most of them by recovering old or dismissed things to disassemble.
I obtained the motor disassembling an old ambient perfumer, like this (actually it’s not the same model). I’m going to update the post adding same more photos of the motor itself and how it’s made internally, so you can better understand how to do something similar.
|
http://www.nicassio.it/daniele/blog/?p=87
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
Service invocation API reference
Dapr provides users with the ability to call other applications that have unique ids. This functionality allows apps to interact with one another via named identifiers and puts the burden of service discovery on the Dapr runtime.
Invoke a method on a remote dapr app
This endpoint lets you invoke a method in another Dapr enabled app.
HTTP Request
POST/GET/PUT/DELETE http://localhost:<daprPort>/v1.0/invoke/<appId>/method/<method-name>
HTTP Response codes
When a service invokes another service with Dapr, the status code of the called service will be returned to the caller.
If there’s a network error or other transient error, Dapr will return a 500 error with the detailed error message.
In case a user invokes Dapr over HTTP to talk to a gRPC enabled service, an error from the called gRPC service will return as 500 and a successful response will return as 200 OK.
URL Parameters
Note, all URL parameters are case-sensitive.
Request Contents
In the request you can pass along headers:
{ "Content-Type": "application/json" }
Within the body of the request place the data you want to send to the service:
{ "arg1": 10, "arg2": 23, "operator": "+" }
Request received by invoked service
Once your service code invokes a method in another Dapr enabled app, Dapr will send the request, along with the headers and body, to the app on the <method-name> endpoint.
The Dapr app being invoked will need to be listening for and responding to requests on that endpoint.
Cross namespace invocation
On hosting platforms that support namespaces, Dapr app IDs conform to a valid FQDN format that includes the target namespace.
For example, the following string contains the app ID (myApp) in addition to the namespace the app runs in (production):
myApp.production
Namespace supported platforms
- Kubernetes
Examples
You can invoke the add method on the mathService service by sending the following:
curl \ -H "Content-Type: application/json" -d '{ "arg1": 10, "arg2": 23}'
The mathService service will need to be listening on the /add endpoint to receive and process the request.
For a Node app this would look like:
app.post('/add', (req, res) => {
  let args = req.body;
  const [operandOne, operandTwo] = [Number(args['arg1']), Number(args['arg2'])];

  let result = operandOne + operandTwo;
  res.send(result.toString());
});

app.listen(port, () => console.log(`Listening on port ${port}!`));
The response from the remote endpoint will be returned in the response body.
In case your service listens on a more nested path (e.g. /api/v1/add), Dapr implements a full reverse proxy, so you can append all the necessary path fragments to your request URL like this:
In case you are invoking mathService on a different namespace, you can use the following URL:
In this URL, testing is the namespace that mathService is running in.
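For illustration, assuming the Dapr sidecar is listening on its default HTTP port of 3500 (the port and payload here are examples, not part of the original page), such a cross-namespace call could look like this:

curl http://localhost:3500/v1.0/invoke/mathService.testing/method/add \
  -H "Content-Type: application/json" \
  -d '{ "arg1": 10, "arg2": 23 }'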
|
https://docs.dapr.io/reference/api/service_invocation_api/
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
COM.X & APN: Configure to use LTE network.
Download VSCode IDE:
Install the M5Stack plug-in: Search the plug-in market for M5Stack and install the plug-in, as shown below.
Click the power button on the left side of the device to restart. After entering the menu, quickly select Setup, enter the configuration page, and select USB mode.
Click the Add M5Stack option in the lower left corner and select the corresponding device port to complete the connection.
After completing the above steps, let's implement a simple screen display example program: open the M5Stack file tree and type in the following program, then click Run in M5Stack to easily turn the screen red. If the device is reset, click the refresh button to reopen the file tree.
from m5stack import *
from m5stack_ui import *
from uiflow import *

setScreenColor(0xff0000)
|
https://docs.m5stack.com/en/quick_start/m5core/mpy
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
Chapter 5
5.1 Describe the circumstances in which you would choose to use embedded SQL rather than SQL alone or only a general-purpose programming language.
Answer:
5.2 Write a Java function using JDBC metadata features that takes a ResultSet as an input parameter, and prints out the result in tabular form, with appropriate names as column headings.
Answer:
5.3 Write a Java function using JDBC metadata features that prints a list of all relations in the database, displaying for each relation the names and types of its attributes.
Answer:
5.4 Show how to enforce the constraint “an instructor cannot teach in two different classrooms in a semester in the same time slot.” using a trigger (remember that the constraint can be violated by changes to the teaches relation as well as to the section relation).
Answer:
5.5 Write triggers to enforce the referential integrity constraint from section to time slot, on updates to section and time slot. Note that the ones we wrote in Figure 5.8 do not cover the update operation.
5.6 To maintain the tot cred attribute of the student relation, carry out the following:
a. Modify the trigger on updates of takes, to handle all updates that can affect the value of tot cred.
b. Write a trigger to handle inserts to the takes relation.
c. Under what assumptions is it reasonable not to create triggers on the course relation?
Answer:
5.7 Consider the bank database of Figure 5.25. Let us define a view branch cust as follows:
create view branch cust as
select branch name, customer name
from depositor, account
where depositor.account number = account.account number
Answer:
5.8 Consider the bank database of Figure 5.25. Write an SQL trigger to carry out the following action: On delete of an account, for each owner of the account, check if the owner has any remaining accounts, and if she does not, delete her from the depositor relation.
Answer:
5.9 Show how to express group by cube(a, b, c, d) using rollup; your answer should have only one group by clause.
Answer:
5.10 Given a relation S(student, subject, marks), write a query to find the top n students by total marks, by using ranking.
Answer:
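One possible sketch (not necessarily the intended textbook answer); here n stands for the desired number of students and would be supplied by the application:
select student, total_marks
from (select student, sum(marks) as total_marks,
             rank() over (order by sum(marks) desc) as s_rank
      from S
      group by student) as ranked
where s_rank <= n;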
5.11 Consider the sales relation from Section 5.6.Write an SQL query to compute the cube operation on the relation, giving the relation in Figure 5.21. Do not use the cube construct.
Answer:
5.12 Consider the following relations for a company database:
• emp (ename, dname, salary)
• mgr (ename, mname)
and the Java code in Figure 5.26, which uses the JDBC API. Assume that the userid, password, machine name, etc. are all okay. Describe in concise English what the Java program does. (That is, produce an English sentence like “It finds the manager of the toy department,” not a line-by-line description of what each Java statement does.)
Answer:
5.13 Suppose you were asked to define a class MetaDisplay in Java, containing a method static void printTable(String r); the method takes a relation name r as input, executes the query “select * from r”, and prints the result out in nice tabular format, with the attribute names displayed in the header of the table.
import java.sql.*;

public class Mystery {
    public static void main(String[] args) {
        try {
            Connection con = null;
            Class.forName("oracle.jdbc.driver.OracleDriver");
            con = DriverManager.getConnection(
                "jdbc:oracle:thin:star/[email protected]//edgar.cse.lehigh.edu:1521/XE");
            Statement s = con.createStatement();
            String q;
            String empName = "dog";
            boolean more;
            ResultSet result;
            do {
                q = "select mname from mgr where ename = '" + empName + "'";
                result = s.executeQuery(q);
                more = result.next();
                if (more) {
                    empName = result.getString("mname");
                    System.out.println(empName);
                }
            } while (more);
            s.close();
            con.close();
        } catch (Exception e) { e.printStackTrace(); }
    }
}
a. What do you need to know about relation r to be able to print the result in the specified tabular format.
b. What JDBC methods(s) can get you the required information?
c. Write the method printTable(String r) using the JDBC API.
Answer:
5.14 Repeat Exercise 5.13 using ODBC, defining void printTable(char *r) as a function instead of a method.
Answer:
5.15 Consider an employee database with two relations
employee (employee name, street, city)
works (employee name, company name, salary)
where the primary keys are underlined. Write a query to find companies
whose employees earn a higher salary, on average, than the average salary at “First Bank Corporation”.
a. Using SQL functions as appropriate.
b. Without using SQL functions.
Answer:
5.16 Rewrite the query in Section 5.2.1 that returns the name and budget of all
departments with more than 12 instructors, using the with clause instead of using a function call.
Answer:
5.17 Compare the use of embedded SQL with the use in SQL of functions defined in a general-purpose programming language. Under what circumstances would you use each of these features?
Answer:
5.18 Modify the recursive query in Figure 5.15 to define a relation
prereq depth(course id, prereq id, depth)
where the attribute depth indicates how many levels of intermediate prerequisites there are between the course and the prerequisite. Direct prerequisites have a depth of 0.
Answer:
5.19 Consider the relational schema
part(part id, name, cost)
subpart(part id, subpart id, count)
A tuple (p1, p2, 3) in the subpart relation denotes that the part with part-id p2 is a direct subpart of the part with part-id p1, and p1 has 3 copies of p2.
Note that p2 may itself have further subparts. Write a recursive SQL query that outputs the names of all subparts of the part with part-id “P-100”.
Answer:
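A possible sketch using a recursive common table expression, assuming 'P-100' is a part id value; this is an illustration rather than the official solution:
with recursive descendants(subpart_id) as (
        select subpart_id
        from subpart
        where part_id = 'P-100'
    union
        select s.subpart_id
        from descendants d, subpart s
        where s.part_id = d.subpart_id
)
select p.name
from descendants d, part p
where p.part_id = d.subpart_id;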
5.20 Consider again the relational schema from Exercise 5.19. Write a JDBC function using non-recursive SQL to find the total cost of part “P-100”,including the costs of all its subparts. Be sure to take into account thefact that a part may have multiple occurrences of a subpart. You may userecursion in Java if you wish.
Answer:
5.21 Suppose there are two relations r and s, such that the foreign key B of r references the primary key A of s. Describe how the trigger mechanism can be used to implement the on delete cascade option, when a tuple is deleted from s.
Answer:
5.22 The execution of a trigger can cause another action to be triggered. Most database systems place a limit on how deep the nesting can be. Explain why they might place such a limit.
Answer:
5.23 Consider the relation, r , shown in Figure 5.27. Give the result of the following query:
select building, room number, time slot id, count(*)
from r
group by rollup (building, room number, time slot id)
Answer:
5.24 For each of the SQL aggregate functions sum, count, min, and max, show how to compute the aggregate value on a multiset S1 ∪ S2, given the aggregate values on multisets S1 and S2.
On the basis of the above, give expressions to compute aggregate values with grouping on a subset S of the attributes of a relation r (A, B, C, D, E), given aggregate values for grouping on attributes T ⊇ S, for the following aggregate functions:
a. sum, count, min, and max
b. avg
c. Standard deviation
Answer:
5.25 In Section 5.5.1, we used the student grades view of Exercise 4.5 to write a query to find the rank of each student based on grade-point average.
Modify that query to show only the top 10 students (that is, those students whose rank is 1 through 10).
Answer:
5.26 Give an example of a pair of groupings that cannot be expressed by using a single group by clause with cube and rollup.
5.27 Given relation s(a, b, c), show how to use the extended SQL features to generate a histogram of c versus a, dividing a into 20 equal-sized partitions (that is, where each partition contains 5 percent of the tuples in s, sorted by a).
Answer:
5.28 Consider the bank database of Figure 5.25 and the balance attribute of the account relation. Write an SQL query to compute a histogram of balance values, dividing the range 0 to the maximum account balance present, into three equal ranges.
|
https://employedprofessors.com/database-management-systems-1-computer-science-homework-help/
|
CC-MAIN-2021-43
|
en
|
refinedweb
|
Improve the Performance of your React Forms
Forms can get slow pretty fast. Let's explore how state colocation can keep our React forms fast.
If you use a ref in your effect callback, shouldn't it be included in the dependencies? Why refs are a special exception to the rule!
Why can't React just magically know what to do without a key?
Excellent TypeScript definitions for your React forms
How to improve your custom hook APIs with a simple pattern
Testing React.useEffect is much simpler than you think it is.
The sneaky, surreptitious bug that React saved us from by using closures
How and why I import React using a namespace (import * as React from 'react')
A basic introduction memoization and how React memoization features work.
How and why you should use CSS variables (custom properties) for theming instead of React context.
Epic React is your learning spotlight so you can ship harder, better, faster, stronger
Simplify and speed up your app development using React composition
Some common mistakes I see people make with useEffect and how to avoid them.
Speed up your app's loading of code/data/assets with "render as you fetch" with and without React Suspense for Data Fetching
It wasn't a library. It was the way I was thinking about and defining state.
Is your app as fast as you think it is for your users?
When was the last time you saw an error and had no idea what it meant (and therefore no idea what to do about it)? Today? Yeah, you're not alone... Let's talk about how to fix that..
I still remember when I first heard about React. It was January 2014. I was listening to a podcast. Pete Hunt and Jordan Walke were on talking about this framework they created at Facebook that.
|
https://epicreact.dev/articles/
|
CC-MAIN-2021-43
|
en
|
refinedweb
|