text
stringlengths
20
1.01M
url
stringlengths
14
1.25k
dump
stringlengths
9
15
lang
stringclasses
4 values
source
stringclasses
4 values
This is the second release with the new Java based graphical user interface (GUI) server for newLISP. Since newLISP-GS appeared for the first time in release 9.2 new functions have been added, many bugs where fixed, and the newLISP-GS based multi tab editor has much improved behavior when launching applications. The monitor area at the bottom of the editor now works as an interactive newLISP shell. The new way of doing Graphics and GUIs in newLISP using the Java based newLISP-GS module has been well received by the community. newLISP can also still be used with GTK-Server or Tcl/Tk. As in previous releases, most new functionality and changes where asked for or contributed by the community of newLISP users. This version has a new set of functions for managing nested association lists. Nested association lists are frequently the result from converting XML into Lisp s-expressions. XML has established itself as a format for data interchange on the internet. The new functions permit access, modification and deletion of associations in nested association lists. These functions also work together with FOOP, which represents objects as associations. Using the : colon operator to implement polymorphism in method application and using context namespaces to encapsulate all methods of an object class, an object oriented programming system for newLISP has been designed. Thanks to Michael Michaels from neglOOk for designing most of this new way of object oriented programming in newLISP and for creating the training video series: “Towards FOOP”. FOOP melts contexts, the new colon operator : and functional programming into a simple and efficient way of object oriented programming in newLISP. The system features polymorphism and inheritance and can have anonymous objects, which are memory managed automatically by newLISP. A very short description of FOOP can be found in the Users Manual. For a more in depth treatment of this topic see Michael Michaels's video series accessible from the newLISP documentation page and the neglOOk website. assoc-set, set-assoc, pop-assoc – change or delete an association, handle nested associations. bind – the function binds expressions to symbols from an association list and is normally used in logic programming together with unify. This function was already present in earlier versions, but was not documented. destroy – destroys a process addressed by the process id returned by yhr newLISP functions process or fork. dostring – iterates over a string with the character value in the loop variable. On UTF-8 compiled newLISP UTF8 character values are returned by the loop variable. NEWLISPDIR - this environment variable will be registered on startup as /usr/share/newlisp on Unix and Unix like OSs and as %PROGRAMFILES%/newlisp on Win32. This allows writing platform independent code for loading modules. An already existing definition of NEWLISPDIR will not be overwritten during startup. ref-set, set-ref, ref-set-all – these functions modify one or all elements in a list searched by a key. The key can be any type of expression and unify, match or a user-defined function can be specified as comparison function. Like the nth, set-nth and nth-set family of functions these new list search-and-replace functions can pass lists by reference using a context name which is interpreted as a default functor. when – works like if without the else clause evaluating a block conditionally and without the necessity of begin. The : (colon) now also works as a function and can be attached to a symbol following it. The operator forms a context symbol from the symbol following it and the context symbol found as the first element of the list contained in the next argument. The colon operator is used to implement polymorphism in FOOP. assoc – now handles nested multilevel associations. count – has been rewritten to be many times faster on Unix and Unix like OSs. dup - without the repetition parameter will assume 2. find – when used with a comparison functor puts the last found expression into $0. find-all – can now be used on lists too. get-url, post-url, put-url, delete-url - have been extended and reworked to return more error information as supplied by the server and have improved debugging support. last, nth, nth-set, set-nth – now have last element speed optimization as previously only present in push and pop. nth, set-nth, nth-set, push, pop – when used on lists, indices overshooting the beginning or the end of the list now will cause an error to be thrown. Before, out of range indices would pick the first or last element in a list. The new behavior is consistent with the behavior of indexing arrays. pack – now can take lists for data. process – this reworked function now creates the new process without the previous time and memory overhead on Unix. No extra newLISP fork will be created to launch the new process. In most cases the full path must now be given for the command in process. rand – integer random number generation with better statistical quality on Win32. ref – when used with a comparison function puts the last found expression into $0. set-assoc – renamed replace-assoc changes an association, handles nested associations. set-nth, nth-set – now return the old list or value when the second argument is not present. They behave now like set without the value argument. Before both, set-nth and nth-set returned nil when the value argument was missing. signal – the nil flag now specifies SIG_IGN and the true flag SIG_DFL. Before 9.3 nil would specify an empty newLISP handler and the true flag was not available. This allows to reset the signal handler for a specific signal to its OS default. newlisp.vim – the syntax highlighting and editing control file for the VIM text editor has been much improved by Cyril Slobin. crypto.lsp – this module has been expanded to offer hmac encryption using MD5 or SHA-1 hashing. amazon.lsp – this new module interfaces to the Amazon Web Services, S3 storage API. newlispdoc – this utility has been improved to handle indices and links to external module collections. wordnet.lsp - this new module interfaces to the WordNet 3.o datebase The amazon.lsp and wordet.lsp modules are not part of the distribution, but can be accessed in the new section. Many bugs have been fixed in this release stabilizing some of the new features in the previous 9.2 release and fixing older previously undetected bugs. For more detail on bug fixes and changes see the CHANGES file in the source distribution of newLISP v.9.3.0 in newlisp-9.3.0/doc/CHANGES. This file details fixes changes for the development versions between 9.2.0 and to 9.3.0. §
http://www.newlisp.org/downloads/previous-release-notes/newLISP-9.3-Release.html
CC-MAIN-2015-18
en
refinedweb
Learn how easy it is to sync an existing GitHub or Google Code repo to a SourceForge project! See Demo You can subscribe to this list here. Showing 4 results of 4 Sam Steingold wrote: >> gcc -shared -oregshared.o regex.o regexi.o >> ../lisp.run -M ../lispinit.mem >> (load"preload.lisp") ; preload >> (sys::dynload-modules "regshared.so" '("regexp")) >> (load"regexp.fas") ; post-load >try the same with the i18n module (see src/TODO for expected errors). I tried it on the Linux box. It worked. The tiny i18n testsuite passed. I didn't even use -fPIC (IIRC, somebody explained whether -fPIC is needed long time ago in this list). src/TODO mentions i18n.dll. So obviously, failure to load the code has to do with cygwin or some such on MS-Windows. My wild guess is that dlopen() emulation is lacking there. It looks like it fails to resolve symbols present and exported in the current binary. This would not surprise me that much since I believe a .dll is like an executable on its own, i.e. fully linked, with a set of entry points, whereas a .so is just an object file, waiting to be linked against a binary. E.g. I'd even expect dlsym("_main",NULL-handle-means-local-code) to fail, however the FFI seems to be able to locate symbols?!? As to a work-around for cygwin, don't ask me now. Probably Bruno would mention libtld or some such (if tld was created specifically because of limitations or absence of dlsym), but I don't know if that's the answer, or just an answer among many others. Regards, Jorg Hohle. Hi, >actually, I don't see why clisp should exit if dynload-modules fails. >could you please debug this? There's nothing to debug. SYS::dynload-modules invokes functionality normally only used when clisp starts up. In the startup code, all clisp does upon encountering an error is to call exit(). >> module regexp requires packages REGEXP >> -- and CLISP exits :-( Maybe SF still holds the patch I once submitted, where packages needed by modules were created on the fly? However, perhaps more is needed than just this patch to make the scenario work. It needs a bit more thought. >> It's not so nice that using dynload precludes creating images :-( >I think this can be fixed in a fairly straightforward way. This use-case also needs more thought. Regards, Jorg Hohle. Hello people, On Monday 19 December 2005 16:11, Sam Steingold wrote: > > Yet Debian might then still want to let CLISP depend on all packages. > > You'd need to install postgres to be able to install clisp?!? > > this is for the debian people to handle. > they have the notion of "weak dependencies" ("recommendation"). If the program loads the library at runtime I can use Suggests or Recommends, but if the clisp library is linked to the library the dependency will be generated automaticly. In general, if there is something that can be improved in the debian packages, just yell. Groetjes, Peter -- signature -at- pvaneynd.mailworks.org "God, root, what is difference?" Pitr | "God is more forgiving." Dave Aronson| configure,1.93,1.94 (Jörg Höhle) 2. clisp/src ChangeLog,1.5184,1.5185 stream.d,1.549,1.550 (Sam Steingold) 3. clisp/src stream.d,1.550,1.551 (Sam Steingold) --__--__-- Message: 1 From: Jörg Höhle <hoehle@...> To: clisp-cvs@... Subject: clisp configure,1.93,1.94 Date: Mon, 19 Dec 2005 10:55:02 +0000 Reply-To: clisp-devel@... Update of /cvsroot/clisp/clisp In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv7880 Modified Files: configure Log Message: configure: fix patch 20051121, need quotes around if [ ... = "set" ] Index: configure =================================================================== RCS file: /cvsroot/clisp/clisp/configure,v retrieving revision 1.93 retrieving revision 1.94 diff -u -d -r1.93 -r1.94 --- configure 21 Nov 2005 18:08:50 -0000 1.93 +++ configure 19 Dec 2005 10:55:00 -0000 1.94 @@ -650,7 +650,7 @@ cannot be provided. Please do this: EOF - if [ ${CC+set} = set ]; then + if [ "${CC+set}" = "set" ]; then echo " CC='$CC'; export CC" 1>&2 fi cat <<EOF 1>&2 --__--__-- Message: 2 From: Sam Steingold <sds@...> To: clisp-cvs@... Subject: clisp/src ChangeLog,1.5184,1.5185 stream.d,1.549,1.550 Date: Mon, 19 Dec 2005 14:53:03 +0000 Reply-To: clisp-devel@... Update of /cvsroot/clisp/clisp/src In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv25287 Modified Files: ChangeLog stream.d Log Message: (find_open_file): fix compilation with gcc4 Index: stream.d =================================================================== RCS file: /cvsroot/clisp/clisp/src/stream.d,v retrieving revision 1.549 retrieving revision 1.550 diff -u -d -r1.549 -r1.550 --- stream.d 18 Dec 2005 17:10:22 -0000 1.549 +++ stream.d 19 Dec 2005 14:53:00 -0000 1.550 @@ -7683,8 +7683,10 @@ var object stream = Car(tail); tail = Cdr(tail); if (TheStream(stream)->strmtype == strmtype_file && TheStream(stream)->strmflags & flags - && file_id_eq(fid,&ChannelStream_file_id(stream))) - return (void*)&(pushSTACK(stream)); + && file_id_eq(fid,&ChannelStream_file_id(stream))) { + pushSTACK(stream); + return (void*)&STACK_0; + } } return NULL; } Index: ChangeLog =================================================================== RCS file: /cvsroot/clisp/clisp/src/ChangeLog,v retrieving revision 1.5184 retrieving revision 1.5185 diff -u -d -r1.5184 -r1.5185 --- ChangeLog 18 Dec 2005 17:10:33 -0000 1.5184 +++ ChangeLog 19 Dec 2005 14:53:00 -0000 1.5185 @@ -1,3 +1,7 @@ +2005-12-19 Sam Steingold <sds@...> + + * stream.d (find_open_file): fix compilation with gcc4 + 2005-12-18 Sam Steingold <sds@...> base check_file_re_open() on unique file ID, not truenames that --__--__-- Message: 3 From: Sam Steingold <sds@...> To: clisp-cvs@... Subject: clisp/src stream.d,1.550,1.551 Date: Mon, 19 Dec 2005 15:18:48 +0000 Reply-To: clisp-devel@... Update of /cvsroot/clisp/clisp/src In directory sc8-pr-cvs1.sourceforge.net:/tmp/cvs-serv30066/src Modified Files: stream.d Log Message: Index: stream.d =================================================================== RCS file: /cvsroot/clisp/clisp/src/stream.d,v retrieving revision 1.550 retrieving revision 1.551 diff -u -d -r1.550 -r1.551 --- stream.d 19 Dec 2005 14:53:00 -0000 1.550 +++ stream.d 19 Dec 2005 15:18:42 -0000 1.551 @@ -7657,10 +7657,10 @@ return stream; } -# UP: add a stream to the list of open streams O(open_files) -# add_to_open_streams() -# <> stream -# can trigger GC +/* UP: add a stream to the list of open streams O(open_files) + add_to_open_streams() + <> stream + can trigger GC */ local maygc object add_to_open_streams (object stream) { pushSTACK(stream); var object new_cons = allocate_cons(); @@ -7674,7 +7674,7 @@ > struct file_id fid = file ID to match > uintB* data = open flags to filter < pointer to the stream saved on STACK or NULL - i.e., on success, addes 1 element to STACK */ + i.e., on success, adds 1 element to STACK */ global void* find_open_file (struct file_id *fid, void* data); global void* find_open_file (struct file_id *fid, void* data) { var object tail = O(open_files); @@ -7691,28 +7691,27 @@ return NULL; } -# */ global maygc object make_file_stream (direction_t direction, bool append_flag, bool handle_fresh) { var decoded_el_t eltype; --__--__-- _______________________________________________ clisp-cvs mailing list clisp-cvs@... End of clisp-cvs Digest
http://sourceforge.net/p/clisp/mailman/clisp-devel/?viewmonth=200512&viewday=20
CC-MAIN-2015-18
en
refinedweb
#include <zzip/fseeko.h> The zzip_entry_fread_file_header functions read the correspoding struct zzip_file_header from the zip disk of the given "entry". The returned off_t points to the end of the file_header where the current fseek pointer has stopped. This is used to immediatly parse out any filename/extras block following the file_header. The return value is null on error. The zzip. Copyright (c) 2003,2004 Guido Draheim All rights reserved, use under the restrictions of the Lesser GNU General Public License or alternatively the restrictions of the Mozilla Public License 1.1
http://www.makelinux.net/man/3/Z/zzip_entry_strdup_name
CC-MAIN-2015-18
en
refinedweb
- service can be started by calling startService(), which allows the service to run indefinitely, should.. even - AIDL (Android Interface Definition Language) performs all the work to decompose objects into primitives that the operating system can understand and marsh mB, mConnection, Context.BIND_AUTO_CREATE); } @Override protected void onStop() { super.onStop(); // Unbind from the service if (mBound) { unbindService(mConnection); mConnection =:();: public class ActivityMessenger extends Activity { /** Messenger for communicating with the service. */ Messenger mService = null; /** Flag indicating whether we have called bind on the service. */ boolean mBound; /** *); mBound = true; } public void onServiceDisconnected(ComponentName className) { // This is called when the connection with the service has been // unexpectedly disconnected -- that is, its process crashed. mService = null; mBound = false; } }; public void sayHello(View v) { if (!mBound) (mBound) { unbindService(mConnection); mBound = false; } } } Notice that this example does not show how the service can respond to the client. If you want the service to respond, then you need to also create a Messenger in the client. Then. bindService() returns immediately and does not return cannot bind to a service from a broadcast receiver. So, to bind to a service from your client, you must: -. - should usually pair the binding and unbinding during matching bring-up and tear-down moments of the client's lifecycle. For example: - If you only need to interact with the service should usually not onStartCommand()). (instead of receiving a call to onBind())..
https://developer.android.com/guide/components/bound-services.html
CC-MAIN-2015-18
en
refinedweb
I have made a class here that has two methods. As you guys can notice, in my two methods that I made, I have listed some arugments in there with parameters. My question is that the variables im using in first method, can they be identical on my second method? Is this ok to do? Code : public class StudentScore { private int math; private int science; private int calc; private int history; private int pe; //Make a constructor public StudentScore() { math=40; science=40; calc=40; history=40; pe=40; } public void getscore(int m_score,int s_score,int c_score,int h_score,int P_score) { math=m_score; science=s_score; calc=c_score; history=h_score; pe=P_score; } public int Scores(int m_score,int s_score,int c_score, int h_score,int P_score)
http://www.javaprogrammingforums.com/%20whats-wrong-my-code/35751-using-same-parameters-but-different-functiosn-allowed-not-printingthethread.html
CC-MAIN-2015-18
en
refinedweb
In configure.ac add: AC_PATH_PROGS([html_browser_open], [xdg-open htmlview]) if test -z $html_browser_open; then AC_MSG_ERROR([cannot find utility to launch browser]) fi AC_SUBST(html_browser_open) Somewhere in your code that is processed by config.status add something like this: web_browser_launcher = '@html_browser_open@' In the spec file add: %if 0%{?fedora} BuildRequires: xdg-utils Requires: xdg-utils %endif %if 0%{?rhel} BuildRequires: htmlview Requires: htmlview %endif What all this does is the following: AC_PATH_PROGS finds the full pathof xdg-open if its installed otherwise htmlview if its installed, and stores the result in the config shell variable html_browser_open. If neither is found the variable will be set to the empty string. You may or may not want to abort in this situation, the code above does abort, if you don't want to abort don't use the 3 lines following AC_PATH_PROGS. Then config.status will substitute that value in your source code or config file. The spec file does a conditional requires depending on whether its fedora or rhel. You need the BuildRequires in addition to the Requires because configure is run on the build host and that is where the AC_PATH_PROGS expansion is evaluated. HTH, John
https://www.redhat.com/archives/fedora-devel-list/2007-October/msg00092.html
CC-MAIN-2015-18
en
refinedweb
To address this question we need to realize that Outlook is a client application which is only a front end and local cache for a mail server (typically exchange server). Most web applications are server applications and thus run on servers which usually don't have client applications, such as Outlook, installed on them. Perhaps a better question would be "How do I access my Exchange contacts from my web application?". With that in mind lets explore a couple possible solutions for accessing data (not really Outlook data) from the Exchange server. -) These are just three common technologies for accessing data on a Exchange server, for more details see the Exchange Server Development Center or Overview of Programming Features for Exchange Server. To provide a more complete example I've included some code samples and links for each of the three technologies. The code examples demonstrate how to do find contacts using each of the three methods. Keep in mind these samples can be extended to other data types such as mail and appointments, with the exception of ADSI. All the code samples are written in C# and write the data to the console, they can easily be converted to be used by an ASP.Net web application. Also remember to substitute <ExchangeServer>, <username>, <password>, and <domain> with the appropriate values for your Exchange server. WebDAV The sample retrieves all contacts from an exchange store which have a first name beginning with 'wes'. Keep in mind before making any Exchange WebDAV requests you must make an forms authorization request first (assuming the Exchange server has forms authorization enabled, which seems to generally be the case). You need to make the authorization request to get the set of authorization cookies to use with any subsequent WebDAV requests. using System; using System.IO; using System.Net; using System.Text; using System.Xml; // When Exchange is setup for Forms Authentication you need to do the login separately public static CookieCollection GetAuthCookies(string server, NetworkCredential credentials) { // URI to OWA authorization dll string authURI = string.Format("{0}/exchweb/bin/auth/owaauth.dll", server, credentials.UserName); // Get byte stream of the post request); // Get response cookies - keep in mind this may throw exceptions using (HttpWebResponse response = request.GetResponse() as HttpWebResponse) return response.Cookies; } public static void PrintContactsUsingExchangeWebDAV() { string server = "http<s>://<ExchangeServer>"; NetworkCredential credentials = new NetworkCredential("<username>", "<password>", "<domain>"); string uri = string.Format("{0}/exchange/{1}", server, credentials.UserName); // Create a byte stream of the SQL query to run against the server // This query searches for contacts with the givenName the begins with 'wes' // Link to Exchange store property names byte[] contents = Encoding.UTF8.GetBytes(string.Format( @"<?xml version=""1.0""?> <g:searchrequest xmlns: <g:sql> ""urn:schemas:contacts:sn"", ""urn:schemas:contacts:givenName"", ""urn:schemas:contacts:email1"", ""urn:schemas:contacts:telephoneNumber"" FROM Scope('SHALLOW TRAVERSAL OF ""{0}/exchange/{1}/contacts""') WHERE ""urn:schemas:contacts:givenName"" LIKE 'wes%' </g:sql> </g:searchrequest>", server, credentials.UserName)); HttpWebRequest request = HttpWebRequest.Create(uri) as HttpWebRequest; request.Credentials = credentials; request.Method = "SEARCH"; request.ContentLength = contents.Length; request.ContentType = "text/xml";) using (Stream responseStream = response.GetResponseStream()) { // Process the response as an XML document XmlDocument document = new XmlDocument(); document.Load(responseStream); foreach (XmlElement element in document.GetElementsByTagName("a:prop")) { // Do work with data returned for each contact Console.WriteLine("Name: {0} {1}\nEmail: {2}\nPhone: {3}", (element["d:givenName"] != null ? element["d:givenName"].InnerText : ""), (element["d:sn"] != null ? element["d:sn"].InnerText : ""), (element["d:email1"] != null ? element["d:email1"].InnerText : ""), (element["d:telephoneNumber"] != null ? element["d:telephoneNumber"].InnerText : "")); } } } Helpful WebDAV Links: Infinitec Exchange posts - C# and JavaScript forms authentication examples; // Generate Exchange Web Services proxy classes by adding a VS web // reference to http<s>://<ExchangeServer>/EWS/Services.wsdl // then add a using reference to your proxy classes public static void PrintContactsUsingExchangeWebServices() { ExchangeServiceBinding esb = new ExchangeServiceBinding(); esb.Credentials = new NetworkCredential("<username>", "<password>", "<domain>"); esb.Url = @"http<s>://<ExchangServer>/EWS/Exchange.asmx"; // Tell it you want all the item properties ItemResponseShapeType itemProperties = new ItemResponseShapeType(); itemProperties.BaseShape = DefaultShapeNamesType.AllProperties; // Tell it you only want to look in the contacts folder DistinguishedFolderIdType[] folderIDArray = new DistinguishedFolderIdType[1]; folderIDArray[0] = new DistinguishedFolderIdType(); folderIDArray[0].Id = DistinguishedFolderIdNameType.contacts; PathToUnindexedFieldType field = new PathToUnindexedFieldType(); field.FieldURI = UnindexedFieldURIType.contactsGivenName; ConstantValueType fieldValue = new ConstantValueType(); fieldValue.Value = "wes"; // Look for contacts which have a given name that begins with 'wes' ContainsExpressionType expr = new ContainsExpressionType(); expr.ContainmentModeSpecified = true; expr.ContainmentMode = ContainmentModeType.Prefixed; expr.ContainmentComparisonSpecified = true; expr.ContainmentComparison = ContainmentComparisonType.IgnoreCase; expr.Constant = fieldValue; expr.Item = field; RestrictionType restriction = new RestrictionType(); restriction.Item = expr; // Form the FindItem request FindItemType findItemRequest = new FindItemType(); findItemRequest.Traversal = ItemQueryTraversalType.Shallow; findItemRequest.ItemShape = itemProperties; findItemRequest.ParentFolderIds = folderIDArray; findItemRequest.Restriction = restriction; //( This is a very simplified .Net sample of doing a search in the default active directory for any users that have a first name beginning with 'wes'. Keep in mind that this uses the default active directory on the machine you are running on, see DirectorySearcher.SearchRoot for more details. If you need to specify a particular machine/directory to start your search from then you need to create a DirectoryEntry and pass that to the DirectorySearcher. using System; using System.DirectoryServices; // Get user information from your default active directory repository public static void PrintUsersFromADSI() { // Find any user that has a name that begins with 'wes' string filter = "(&(objectCategory=person)(objectClass=user)(givenName=wes*))"; DirectorySearcher search = new DirectorySearcher(filter); foreach(SearchResult result in search.FindAll()) { // Do work with data returned for each address entry DirectoryEntry entry = result.GetDirectoryEntry(); Console.WriteLine("Name: {0} {1}\nEmail: {2}\nPhone: {3}", entry.Properties["givenName"].Value, entry.Properties["sn"].Value, entry.Properties["mail"].Value, entry.Properties["telephonenumber"].Value); } } Useful ADSI/LDAP Links: InfiniTec - How to get the Global Address List programmatically Amit Zinman - Creating a list of Users and their e-mail addresses Steve Schofield - .NET sample LDAP query looking for a specific user name of (smith) and System.DirectoryServices namespace Mow - PowerShell and Active Directory Part 1, Part 2, Part 3, Part 4, Part 5 Disclaimer: The sample code described here is provided on an “as is” basis, without warranty of any kind, to the fullest extent permitted by law. It is for demonstration purposes only and is by no means production quality.
http://weblogs.asp.net/whaggard/how-do-i-access-my-outlook-contacts-from-my-web-application
CC-MAIN-2015-18
en
refinedweb
Ahh i dont know i will go read more about codes.Thanks anyway :) Ahh i dont know i will go read more about codes.Thanks anyway :) I just wached tutorial in youtube and i fix all bug and still nothing.Im novice in java programing so i need little help. Okay i edit it.Can you help me now ?Тhanks in advance :) What is wrong with my project?When run it gives me gray screen :confused: First Class-------------- package com.mime.minefront; import java.awt.Canvas; import...
http://www.javaprogrammingforums.com/search.php?s=ac85f8dd99aa0d1ab869ad86326de9ab&searchid=1514950
CC-MAIN-2015-18
en
refinedweb
in reply to Re^2: How to de-reference a coderef? in thread How to de-reference a coderef? What about other namespaces? What about multiple matches? A sub is a little like an inode on a unix filesystem: it may have several names, or none; and several handles, or none. ....
http://www.perlmonks.org/?node_id=413598
CC-MAIN-2015-18
en
refinedweb
Metadata Specifications Index Page Brief WSDL provides an extensible mechanism for defining the base messaging description and metadata for a Web service. The Web Services Policy Framework provides a general-purpose model and corresponding syntax to describe and communicate the policies of a Web service. Web Services Metadata Exchange defines messages to retrieve specific types of metadata associated with an endpoint. Specifications WSDL Web Services Description Language (WSDL) 1.1 WSDL defines an XML-based grammar for describing network services as a set of endpoints that accept messages containing either document-oriented or procedure-oriented information. The operations and messages are described abstractly, which are bound to a concrete network protocol and message format to define an endpoint. Related concrete endpoints are combined into abstract endpoints (services). WSDL is extensible to allow the description of endpoints and their messages regardless of what message formats or network protocols are being used to communicate. However, this document only describes how to use WSDL in conjunction with SOAP 1.1, HTTP GET/POST, and MIME. WSDL 1.1 Binding Extension for SOAP 1.2 WSDL 1.1 Binding Extension for SOAP 1.2 (PDF file) Get the Adobe Reader here. WSDL 1.1 Binding Extension for SOAP 1.2 describes how to indicate in a WSDL 1.1 document that a service uses SOAP 1.2. WS-Policy Web Services Policy Framework (WS-Policy) (PDF file) WS-Policy defines a base set of constructs that can be used and extended by other Web services specifications to describe a broad range of service requirements, preferences, and capabilities. WS-Policy was submitted to the W3C in April 2006. WS-Policy specification at the W3C WS-PolicyAttachment Web Services Policy Attachment (WS-PolicyAttachment) (PDF file) WS-PolicyAttachment specifies three specific attachment mechanisms for using policy expressions with existing XML Web service technologies. Specifically, we define how to associate policy expressions with WSDL type definitions and UDDI entities. We also define how to associate implementation-specific policy with all or part of a WSDL portType when exposed from a specific implementation. WS-PolicyAttachment was submitted to the W3C in April 2006. WS-PolicyAttachment specification at the W3C WS-MetadataExchange Web Services Metadata Exchange (WS-MetadataExchange) (PDF file) Get the Adobe Reader here. WS-MetadataExchange defines how metadata associated with a Web service endpoint can be represented as WS-Transfer resources, how metadata can be embedded in WS-Addressing Endpoint References, and how metadata could be retrieved from a Web service endpoint. WS-Discovery Web Services Dynamic Discovery (WS-Discovery) (PDF file) Get the Adobe Reader here. This specification defines a multicast discovery protocol to locate services. By default, probes are sent to a multicast group, and target services that match return a response directly to the requestor. To scale to a large number of endpoints, the protocol defines the multicast suppression behavior if a discovery proxy is available on the network. To minimize the need for polling, target services that wish to be discovered send an announcement when they join and leave the network. WS-MTOMPolicy MTOM Serialization Policy Assertion (PDF file) Get the Adobe Reader here. This specification describes a domain-specific policy assertion that indicates endpoint support of the optimized MIME multipart/related serialization of SOAP messages defined in section 3 of the SOAP Message Transmission Optimization Mechanism [MTOM] specification. Status Feedback Workshops for the WS-Policy family of specifications have already been held, and Interoperability Workshops are being organized shortly. WSDL 1.1 was submitted to the W3C and became a W3C Note 15 March 2001. WS-Policy was published as a public specification on 9 March 2006. This is the third joint publication of the specification. WS-PolicyAttachment was published as a public specification on 9 March 2006. This is the third joint publication of the specification. WS-PolicyAssertions was published as a public specification on 18 December 2002. This is the first joint BEA/IBM/Microsoft/SAP publication of the specification. WS-MetadataExchange was published as a public specification on 16 August 2006. This is the third joint publication of the specification. WS-Discovery was published as a public specification on 22 April 2005. This is the third joint publication of the specification. WSDL 1.1 Binding Extension for SOAP 1.2 was updated in March 2006 (previously titled: WSDL 1.1 Binding for SOAP 1.2). This is the second publication of the specification. Interoperability testing for WSDL was conducted in the SoapBuilders group. Web Service Metadata Implementation Details Schemas WSDL 1.1 Binding Extension for SOAP 1.2 WSDL HTTP GET & POST binding WSDL Superseded Specifications Web Services Policy Framework (WS-Policy) –September 2004 Web Services Policy Attachment (WS-PolicyAttachment) – September 2004 Web Services Policy Assertion Language (WS-PolicyAssertions) –May 2003 Web Services Metadata Exchange (WS-MetadataExchange) – September 2004 Web Services Dynamic Discovery (WS-Discovery) - October 2004 Web Services Dynamic Discovery (WS-Discovery) - February 2004 WSDL Binding for SOAP 1.2 Superseded Schemas WS-Policy – September 2004 WS-MetadataExchange – September 2004 WS-Discovery - October 2004 WS-Discovery - February 2004 WSDL Binding for SOAP 1.2 Superseded WSDL WS-MetadataExchange – September 2004 WS-Discovery - October 2004 WS-Discovery - February 2004 AppNotes Introduction SOAP defines a message as an Envelope and allows users to define specific Headers and Body formats using XML. XML Schemas (XSD) provides a mechanism for describing an XML format, but cannot describe a message or endpoint. The Web Services Description Language (WSDL) is an XML-based document format that introduces an extensible grammar for describing message endpoints while leveraging XSD for defining message content. Goals and Non-Goals Goals - Transport and encoding extensibility: New transports and encodings can be added to the base specification without having to revise it. - Abstract definitions: Endpoints and messages can be described abstractly, and then mapped onto one or more concrete transports or encodings. - Reuse of definitions: Existing endpoint definitions can be used to create new definitions. Non-goals - Flow language: WSDL describes four basic message flow patterns (one-way, request-response, solicit-response, and notification) and leaves description of more complex flows to other specifications that extend the base patterns. - Expose implementation details: WSDL focuses on describing wire formats, not on describing implementation details of an endpoint. - Exchange of documents: WSDL defines a document format for describing message endpoints but leaves the exchange of such documents to other specifications (such as UDDI). Details The WSDL grammar contains the following elements that are used together to describe endpoints: - Message: References to XML Schemas defining the different parts of the message (for example, Headers and Body). - Operation: Lists the messages involved in one message flow of the endpoint. For example, a request-response operation would refer to two messages. - PortType: The set of message flows (operations) expected by a particular endpoint type, without any details relating to transport or encoding. - Binding: The transport and encoding particulars for a portType. - Port: The network address of an endpoint and the binding it adheres to. - Service: A collection of related endpoints. Figure 1. How WSDL grammar elements are used to define endpoints Example The following listings illustrate each of the WSDL definitions that make up a fictitious stock quote endpoint that implements a single request-response operation. The request is a stock name message, and the response is a stock price message. A message is defined for the request and response. The format of the XML that appears in the body of the message is specified by linking to element definitions in an existing XSD schema: A portType (abstract endpoint definition) specifies a single request-response operation and refers to the previously defined request and response message definitions. A binding (concrete endpoint definition) for the portType specifies that SOAP should be used with a particular action value. <binding name='quoteBinding' type='quote-wsdl-ns:quotePortType'> <operation name='getQuote'> <soap:operation <input> <soap:body </input> <output> <soap:body </output> </operation> </binding> A service defines a stock quote endpoint instance at a particular address. Implications. Related Specifications - WSDL builds on XML, XML namespaces, and XML Schemas. - WSDL is extensible, allowing other specifications that define new protocols to introduce WSDL-specific grammar for conveying information about those protocols. - WSDL deliberately does not define complex flow information, but rather leaves this to flow languages. - WSDL does not define how WSDL documents are exchanged, but instead leaves this to inspection and directory specifications such as UDDI.
https://msdn.microsoft.com/en-us/library/ms951266.aspx
CC-MAIN-2015-18
en
refinedweb
kaiso 0.14.3 A queryable object persistence and relationship framework based on the Neo4j graph database. A graph based queryable object persistance framework built on top of Neo4j. Example In addition to objects, Kaiso also stores the class information in the graph. This allows us to use cypher to query instance information, but also to answer questions about our types. Let’s define some basic classes from kaiso.attributes import Integer, Outgoing, String, Uuid from kaiso.types import Entity, Relationship # define a simple type hierarchy class Knows(Relationship): pass class Animal(Entity): id = Uuid(unique=True) name = String() knows = Outgoing(Knows) class Carnivore(Animal): pass class Herbivore(Animal): pass class Penguin(Herbivore): favourite_ice_cream = String() class Lion(Carnivore): n_siblings = Integer() As with any orm, we can make some instances and persist them in our graph from kaiso.persistence import Manager manager = Manager("") # create some instances fred = Penguin(name="Fred") tom = Lion(name="Tom") relation = Knows(fred, tom) manager.save(fred) manager.save(tom) manager.save(relation) Using the Neo4j web interface to explore our graph, we find Tom and Fred: However, in addition, we can also see the type information in the graph: We can make use of the type information in our queries, e.g. to find all herbivores who know a carnivore START Herbivore=node:persistabletype(id="Herbivore"), Carnivore=node:persistabletype(id="Carnivore") MATCH Carnivore <-[:ISA*]-()<-[:INSTANCEOF]-(carnivore), Herbivore <-[:ISA*]-()<-[:INSTANCEOF]-(herbivore), (herbivore)-[:KNOWS]->(carnivore) RETURN "The herbivore", herbivore.name, "knows the carnivore", carnivore.name; ==> +---------------------------------------------------------------------+ ==> | "The herbivore" | "Fred" | "knows the carnivore" | "Tom" | ==> +---------------------------------------------------------------------+ - Downloads (All Versions): - 178 downloads in the last day - 508 downloads in the last week - 2536 downloads in the last month - Author: onefinestay - License: Apache License, Version 2.0 - Categories - Package Index Owner: onefinestay - Package Index Maintainer: davidszotten, junkafarian - DOAP record: kaiso-0.14.3.xml
https://pypi.python.org/pypi/kaiso/0.14.3
CC-MAIN-2015-18
en
refinedweb
Learn how easy it is to sync an existing GitHub or Google Code repo to a SourceForge project! See Demo You can subscribe to this list here. Showing 3 results of 3 Hi, I've been trying to get DW2 EH exception handling working on windows targets with general success, but have hit a snag. If a C library function that takes a callback arg is compiled using stdcall convention, C++ exception thrown by the callback result in abort. If the C libarary function uses normal (cdelc) calling convention than exceptions thrown by callback are handled. Example: ============================================== /* callback.c library functions */ /* compile this with c */ void caller_c (void (*callback)(void)) { callback(); } void __attribute__((stdcall)) caller_s (void (*callback)(void)) { callback(); } ============================================== /* main.cc */ #include <stdio.h> extern "C" { typedef void (*foo)(void); extern void caller_c (foo); extern void __attribute__ ((__stdcall__)) caller_s (foo); } void callback (void) { printf ("Hello from callback\n"); fflush (stdout); throw 1; } int main () { try { printf ("Testing cdecl\n"); caller_c (callback); } catch (int) { printf ("Caught cdecl\n"); } try { printf ("\nTesting stdcall\n"); caller_s (callback); } catch (int) { printf ("Caught stdcall\n"); } return 0; } ================================================== Compiling with sjlj enabled compiler: gcc -c -o callback.o callback.c g++-sjlj -o main-sjlj.exe main.cc callback.o Both throws are caught. However, if using DW2 EH enabled compiler g++-dw2 -omain-dw2.exe main.cc callback.o I get this: > Testing cdecl > Hello from callback > Caught cdecl > Testing stdcall > Hello from callback > > abnormal program termination. This is significant for windows targets, since the OS win32 api uses stdcall almost exclusively. Before I pull any more hair out trying, does any one have any hints on how to use an MD_FALLBACK_FRAME_STATE_FOR to workaround, or is this a blocker for the use of DW2 EH on windows targets. Danny Earnie Boyd wrote: > Ok. We need a good definition of publicly available documentation. MSDN > is only the preferred reference. Question to other MinGW developers, do > we consider documentation available only within the freely downloadable > SDK "publicly available documentation"? Perhaps someone could find a lawyer who does pro bono to answer a few of our questions. Harvard's Berkman Center granted pro bono legal help to Boost.org to write their software license. I know that Harvard in particular has a pro bono requirement for law students. I think this issue is complicated. Notice that most traditional Unix hackers don't consider "headers" or other interface specifications to be within the domain of copyright anyway. For example, GCC's fixincludes contains small snips of code from proprietary headers that apparently are not considered derived works. I have heard that, in general, the law is fair to people attempting to acheive compatibility. In addition, Microsoft's web MSDN documentation has equal copyright standing as the documentation and headers within the SDK. If documentation on the web is acceptable, why would downloadable resources not be? On the other hand, anything coming from an NDA probably is not acceptable. Some of the click-through agreements for its downloadable SDKs are actually click-through NDAs. Despite the suspicious legal standing of such an agreement, I do not think it would be wise to play here. This is a scary issue, because the danger is very real. MinGW forms part of a gateway for GNU to Windows, and makes a political statement as much as any technical statement. In the present legal climate, I do not think Microsoft would hesitate to attempt to shut MinGW down, SLAPP-style or otherwise, if it suited their fancy and business plan. Aaron W. LaFramboise <quote who="Steven Edwards"> > Hi Earnie, > > --- Earnie Boyd <earnie@...> wrote: >>. > > The issue is not one of the license in fact I like the public domain > license. The issue is that I have had some of the Wine developers to > relicense headers in the past as PD only to have them be rejected due > to the fact that the interfaces were only documented inside of a > Microsoft SDK rather than directly published on MSDN. The example I > gave was the FDI/FCI headers I submitted last year. They were not > documented except in the cabinet SDK and were rejected from w32api. > Ok. We need a good definition of publicly available documentation. MSDN is only the preferred reference. Question to other MinGW developers, do we consider documentation available only within the freely downloadable SDK "publicly available documentation"? > MSDN has now published the cabinet specs directly on the website so > those headers can now be merged. My concern is that MSDN has been known > to be wrong quite a few times and if I have another source (Wine > Project) that proves something in MSDN might be wrong then under the > current rules my change would still be rejected by w32api. > Oh, yes I know MSDN can be wrong as well as contradictory and confusing. Bug reports and patches can resolve those issues. We need to train our users on the construction of a proper bug report by asking them to provide an example, the expected result and the actual result. Any mention though of the SDK code will be grounds for automatic rejection of any patch. Earnie --
http://sourceforge.net/p/mingw/mailman/mingw-dvlpr/?viewmonth=200411&viewday=23
CC-MAIN-2015-18
en
refinedweb
PS PSAMControlLibrary; directive to your code. using PSAMControlLibrary;::.
http://www.codeproject.com/Articles/87329/PSAM-Control-Library?fid=1575460&df=90&mpp=10&sort=Position&spc=None&tid=3960355
CC-MAIN-2015-18
en
refinedweb
. Operator (C# Reference) Visual Studio 2010 The dot operator (.) is used for member access. The dot operator specifies a member of a type or namespace. For example, the dot operator is used to access specific methods within the .NET Framework class libraries: For example, consider the following class: The variable s has two members, a and b; to access them, use the dot operator: The dot is also used to form qualified names, which are names that specify the namespace or interface, for example, to which they belong. The using directive makes some name qualification optional: But when an identifier is ambiguous, it must be qualified: namespace Example2 { class Console { public static void WriteLine(string s){} } } namespace Example1 { using System; using Example2; class C { void M() { // Console.WriteLine("hello"); // Compiler error. Ambiguous reference. System.Console.WriteLine("hello"); //OK Example2.Console.WriteLine("hello"); //OK } } } For more information, see the C# Language Specification. The language specification is the definitive source for C# syntax and usage. Reference Concepts Other Resources Show:
https://msdn.microsoft.com/en-us/library/6zhxzbds(v=vs.100).aspx
CC-MAIN-2015-18
en
refinedweb
Hi, > I posted new ones. > - > (changes in page_cgroup) > (I'm not sure this gets Ack or Nack. but direction will not change.) > > Then, please tell me if you have new troubles with new ones. > Or if you have requests. > Major changes are > > - page_cgroup.h is added. > - lookup_page_cgroup(struct page*), lock_page_cgroup() etc.. is exported. > - All page_cgroup are allocated at boot. > - you can use atomic operation to modify page_cgroup->flags. Good new! > One concern from me to this bio_cgroup is that this increases size of > +#ifdef CONFIG_CGROUP_BIO > + struct list_head blist; /* for bio_cgroup page list */ > + struct bio_cgroup *bio_cgroup; > +#endif > struct page_cgroup...more 24bytes per 4096bytes. > Could you reduce this ? I think 8bytes per memcg is reasonable. > Can you move lru to bio itself ? I have a plan on getting rid of the blist after your work is done, whose design will depend on that all page_cgroups are preallocated. I also think the size of bio_cgroup can be reduced if making bio_cgroup contain a bio-cgroup ID instead of the pointer. Just wait! :) > This makes page_cgroup to be 64bytes from 40bytes and makes it larger > than mem_map.... > After bio_cgroup, page_cgroup, allocated at boot, size on my 48GB box > will jump up. > 480MB -> 760MB. > > Thanks, > -Kame
http://www.redhat.com/archives/dm-devel/2008-September/msg00195.html
CC-MAIN-2015-18
en
refinedweb
#include <X11/extensions/Xcomposite.h> The composite extension provides several related mechanisms: Per-hierarchy storage may be created for individual windows or for all children of a window. Manual shadow update may be selected by only a single application for each window; manual update may also be selected on a per-window basis or for each child of a window. Detecting when to update may be done with the Damage extension. The off-screen storage includes the window contents, its borders and the contents of all descendants. For example, version 1.4.6 would be encoded as the integer 10406. The root window may not be redirected. Doing so results in a BadMatch error. Specifying an invalid window id will result in a BadWindow error. The X server must support at least version 0.2 of the Composite Extension for XCompositeNameWindowPixmap.. The X server must support at least version 0.3 of the Composite Extension for XCompositeReleaseOverlayWindow. Table of Contents
http://www.x.org/releases/X11R7.5/doc/man/man3/XCompositeVersion.3.html
CC-MAIN-2015-18
en
refinedweb
#include <playerc.h> List of all members. Device info; must be at the start of all device structures. Robot geometry in robot cs: pose gives the position3d and orientation, size gives the extent. These values are filled in by playerc_position3d_get_geom(). Device position (m). Device orientation (radians). Linear velocity (m/s). Angular velocity (radians/sec). Stall flag [0, 1].
http://playerstage.sourceforge.net/doc/Player-1.6.5/player-html/structplayerc__position3d__t.php
CC-MAIN-2015-18
en
refinedweb
There are a lot of articles about progress bars (about a dozen in CodeProject [^] ) and they all have different properties. My aim in writing BusyBar was to make all the different styles available in one control. I have coded about twenty-five styles, but the real beauty of BusyBar is that it is easily extendable. I hope that people will give me new styles as they write them, to include in this project. BusyBar First, you must add BusyBar to your project. I have packaged BusyBar as a single C# source file (BusyBar.cs) and its associated RESX file (BusyBar.resx). Visual Studio is a bit temperamental, so please follow these instructions in order: Here are detailed instructions for the "Add your assembly to your toolbox" step (it's not that obvious): I have also now packaged it in a control library, as requested for VB.NET developers. You should now have the BusyBar control and some Painter components in your toolbox. Note that, they will only be enabled when you have a form open in design view. You are now ready to add a BusyBar control to your form. First, a quick diagram to help you to understand how BusyBar works: The BusyBar class is a custom control, and handles the control stuff, like the CornerRadius, and the data properties, like the minimum, maximum and current values. It is also responsible for drawing the border, but it delegates the rest of the painting to an instance of IPainter. I have written some implementations of IPainter, each of which has various Preset settings. CornerRadius IPainter Preset You are now ready to add an instance of BusyBar to your form. This will draw the border, but you need to select an IPainter to draw the client area. You can do this in one of three ways: Component PainterObject PainterPreset PainterWorker And that's it. You can use the demo to test various settings, and to see the different styles available so far. You will notice that the BusyBar control and the Painter components have default bitmaps in your toolbox. If you want pretty bitmaps, you have to follow a few more steps: ResFinder DefaultNamespace You should now have shiny new bitmaps showing in your Toolbox This section will help you to choose a Painter, and explain the properties specific to each. I haven't included images of the different presets as there are too many. You can run the demo to see examples of what is possible. This was the first painter I wrote, just to test the BusyBar control. So it's quite simple, but may be of some use. It only has one preset, "Bar," which sets the width of the line to 50, so it appears as a block of colour. This is an attempt to replicate the .NET ProgressBar control. This Painter introduces the concept of "Blocks." When the BlockLineWidth property is greater than 0, a series of lines are drawn over the bar. When the BlockLineColor property is set to the BackgroundColor of the control, this splits the bar up into blocks. ProgressBar BlockLineWidth BlockLineColor BackgroundColor It has two presets: "System," which sets the Bar.Pin property to BusyBar.Pins.Start, which anchors the bar on the left or top sides; and "Startup," which is a copy of the bar XP shows at startup. Bar.Pin BusyBar.Pins.Start This is one of the most useful and configurable Painters. It uses a PathGradientBrush to paint the bar, which means you can set the colour gradient in two dimensions. Set the Shape property to one of the enumeration values to specify the base shape, and then set the Color properties to define the brush. PathGradientBrush Shape Color It has five presets: "Kitt," "Circle," "Startup," "Startup2003" and "Noise." These show the large range of effects that this Painter can produce. You get points if you know where the "Kitt" preset comes from This Painter works best for a larger, square or slightly rectangular BusyBar control. It draws 12 marks round the edges, and two "hands" that rotate. It has two presets: "Watch" and "Circle." Note that the "Watch" preset is in CP colours! This Painter is supposed to imitate an oscilloscope. The line is drawn as a GraphicsPath. This is defined by the Shape and Points properties. If the Shape property is set to Line, then path.AddLines( points ) is called to create the path. This just results in straight lines connecting the points. GraphicsPath Points Line path.AddLines( points ) If the Shape property is set to Curve, then path.AddCurve( points, 1, points.Length - 3, tension / 100f ) is called to create the path. This results in curves connecting the middle points. The two end points are not drawn, they are just there to define the curvature at the start and end of the path. So this Shape requires at least four points. Curve path.AddCurve( points, 1, points.Length - 3, tension / 100f ) If the Shape property is set to Bezier, then path.AddBeziers( points ) is called to create the path. This results in curves connecting the points. This Shape requires one point for the start position, and then three more points to describe each segment. So it must have 4, 7, 10, ... (1 + 3n) ... points. Bezier path.AddBeziers( points ) Note that the "size" of the path described by the points array does not matter. The path is scaled according to the HorizontalScale and VerticalScale properties, which are percentages of the BusyBar control client area. Also, if the control is in Vertical mode, the path is rotated 90 degrees clockwise, so the points array must always be set horizontally. HorizontalScale VerticalScale There are a few presets in this Painter. "Triangle", "Square" and "Saw" are based on the Line shape; "Sine" is based on the Curve shape; and "Bezier" is based on the Bezier shape. The other two presets, "Circle" and "Heartbeat" are just for fun This Painter is a copy of the progress bar shown during an OS installation. I wrote it from memory; I think it's about right. It has two presets: "Install," which is the classic install bar, and "LED," which imitates a row of (blue) LEDs. This is a joke (I hope). It displays a bar that looks like the one in IE. It displays the log of the value, so it seems to go fast at first, and then slows down. Then it resets itself when it reaches a specified percentage, so it never gets to 100%. This was requested (honest)! All Painters must implement the IPainter interface : public interface IPainter : ICloneable { IPainter CreateCopy(); BusyBar BusyBar { get; set; } void Reset(); void Paint( Graphics g, Region r ); } You probably want to inherit from one of the existing classes. The derivation structure looks like this : IPainter PainterBase PainterLine PainterBlockBase PainterXP PainterFrustratoBar PainterPathGradient PainterClock PainterSillyscope PainterInstall If you derive from PainterBase, you just have to override the CreateCopy and Paint methods. PainterBase holds a reference to the BusyBar control, which it makes available through the protected Bar property. It also handles the ICloneable interface. PainterBase CreateCopy Paint Bar ICloneable The Paint method is the most interesting. It is called from the OnPaint method of the BusyBar control, after it has drawn the border. The Graphics parameter enables you to do your drawing, and the Region parameter defines the client area available to you. The clip region of the Graphics object is already set to this Region. OnPaint Graphics Region The best way to understand this design is to have a look at the existing concrete Painter implementations. Note: To test your Painter in the demo, alter the Painter property of Form1 to return an instance of your Painter (it's at the end of Form1.cs ). You will then be able to play with the settings from the PropertyGrids. Painter Form1 PropertyGrid Have fun Getting the design-time bitmaps to work was a real pain. When Visual Studio embeds a resource, it pretends the default namespace to the name, and this is not configurable ( someone thought this was a good idea! ). Because BusyBar.cs is designed to be included in any project, the default namespace cannot be known. So I added a class called ResFinder in the default namespace, but you have to set the DefaultNamespace constant manually. I could not find a way around this :( Bob Powell's website was a great help, in particular his article "How to find the elusive ToolboxBitmap icon" [^]. BusyBar provides a lot of functionality out of the box, but I hope that people will write more extensions, and email them to me to be included in updates of this article. Appropriate credit will of course be given. I have had fun writing this, especially the "Kitt" and "Heartbeat" presets were quite gratifying. I hope you like what I have done PainterXP PainterPathGradient PainterInstall Minimum PainterFrustratoBar This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) this.painterSillyscope1.Points = new System.Drawing.PointF[] { ((System.Drawing.PointF)(resources.GetObject("resource"))), ((System.Drawing.PointF)(resources.GetObject("resource1"))), ((System.Drawing.PointF)(resources.GetObject("resource2"))), ((System.Drawing.PointF)(resources.GetObject("resource3")))}; using Common; ... PainterXP pxp = busyBar1.PainterWorker as PainterXP; if ( pxp != null ) pxp.ColorDark = Color.Blue; PerformStep General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/10432/BusyBar?msg=1117582
CC-MAIN-2015-18
en
refinedweb
Using .NET Framework, it is very easy to check for valid and broken links in C#. The main namespace required is System.Net 7/5/11 Update: use HttpWebResponse instead of WebResponse In the article describing how to download a file in C# we described how to connect to the internet to retrieve a file. Using the same technique we can connect to a URL and download a webpage (which is also a file) to test whether it is available or not. In fact, HTTP status codes let us know a lot more than that. Setting up the web connection is simple: Uri urlCheck = new Uri(url); HttpWebRequest request = (HttpWebRequest)WebRequest.Create(urlCheck); request.Timeout = 15000; The HttpWebRequest class automatically handles everything required to request data from the given URL. You can also manually specify for how long a request should attempt to connect to the link. In this case we set a timeout of 15 seconds (set in milliseconds). HttpWebResponse response; try { response = (HttpWebResponse)request.GetResponse(); } catch (Exception) { return false; //could not connect to the internet (maybe) } The GetResponse command actually goes and tries to access the website. An exception is thrown if the class can't connect to the link (usually because the computer is not connected to the internet). This is a good place to make the distinction between not being able to connect and the webpage not existing. However there are a few wrinkles. For example, a 403 status code (forbidden access) will throw an exception instead of simply setting a response code. If otherwise the connection when through okay, the HttpWebResponse class will give us access to a status code of the response. This status code tells us the state of the URL. Note that we had to explicitly cast WebResponse to HttpWebResponse to gain access to the status code. There are many status codes and each have their own meaning. The most common one is 200, which means the URL was found. 404 means the page was not found, 302 means the page is redirected somewhere else, etc. You can check out the complete status code definitions. Luckily for us, the HttpStatusCode enum encapsulates the most common status codes and their meaning. So for our example, we might just want to check if the status code is 200 (the page was found) and return false otherwise. return response.StatusCode == HttpStatusCode.Found; Go ahead and download the C# source code. The CheckURL function takes in a webpage address as a parameter and returns a simple boolean value indicating whether the link is valid or broken.
http://www.vcskicks.com/check-website.php
CC-MAIN-2015-18
en
refinedweb
Array Size!!! Hi people, My Question is to: "Read integers from the keyboard until zero is read, storing them in input order in an array A. Then copy them to another array B doubling each integer.Then print B.". If you know any other solution other than this then please do tell me. Please help me, im stuck in this program for the past 3 days but no one has given me any proper solution for this question. Here goes my code: I just don't know how to initialize the array when you don't know the size of the array. Just to get the answer i have initialized arrays A and B to 20 in the program below. import java.io.*; public class CopynDouble1 { public static void main(String[] param){ try{ BufferedReader n = new BufferedReader(new InputStreamReader(System.in)); System.out.print("Input number: "); String userInput = n.readLine(); int numbers = Integer.parseInt(userInput); int[] A = new int[20]; int[] B = new int[20]; int i =0; while(numbers != 0){ A[i] = numbers; B[i] = 2 * A[i]; System.out.println(B[i]); i++; System.out.print("Input number: "); userInput = n.readLine(); numbers = Integer.parseInt(userInput); } } catch(Exception e){ } } } Its just a small question as to how to intialize the array when u don't have the size of it but No one has come with a good answer/code yet with which i can sort out my code. Hi iverlinka, Thanks for replying. Well i am done with that program and i have used ArrayList for that. I have had never used ArrayList before so it took me a week, first to find out the solution i.e ArrayList and then to apply it which took me 2 more days because i didn't know how to exactly use this Arraylist as there are only methods in it and didn't know that how to double the numbers in the Array. So that was quite frustating for this kind of a small program. But anyways All is well that ends well-:) Hello! I came up with some idea I think you can try it, if you haven't found a solution yet. So I see that you read the inputted number as string here: String userInput = n.readLine(); Why don't you make a quick string manipulation to count the number of symbols until 0 in the string? I can give you a hint - you find the indexOf("0") and then you can get the substring that is the actual number from the beginning to the end which is symbol 0 and then you can get length of that string and see what will be the size of your array and set it to the array. After that you can continue with the Integer.parseInt of the string which will now contain only the number without the end mark, you will go through it in the while (since you know how many symbols there are in the string). Did you get my idea? I won't write you the code because you should do it by your self ;) Ivelina
https://www.java.net/node/684314
CC-MAIN-2015-18
en
refinedweb
Understanding Struts Controller Understanding Struts Controller In this section I will describe you the Controller.... It is the Controller part of the Struts Framework. ActionServlet is configured software making software making hello sir , sir you post me answer... in c and c++.sir i regueast you please you guid me. i hope you will help me make..., There are so things you can do with C and C++. Please let's know the description software making software making hello sir you asked me language.... how i will do coding, and how i will give the graphical structure. and how i... software. You can learn java at Thanks   Thanks - Java Beginners Thanks Hi Rajnikant, Thanks for reply..... I am not try for previous problem becoz i m busy other work... please tell me what is the advantage of interface and what is the use of interface... Thanks Reply Me - Struts visit for more information. Thanks...Reply Me Hi Friends, I am new in struts please help me... file,connection file....etc please let me know its very urgent   software making accounting software.how i will do coding . sir i have not difficulty about c and c++. please you give me guidness about making software. sir you tell me its...software making software hello sir sir i have learned thanks - Development process . please help me. thanks in advance...thanks thanks for sending code for connecting jsp with mysql. I have completed j2se(servlet,jsp and struts). I didn't get job then i have learnt do this for me plzz - Java Interview Questions do this for me plzz Given the no of days and start day of a month... to print in the same line i.e., without printing a newline. 2)You do...)); } } ----------------------------------------- Visit for more informaton Thanks - Java Beginners and send me... Thanks once again...for sending scjp link Hi friend...Thanks Hi, Thanks ur sending url is correct..And fullfill...) { e.printStackTrace(); } %> For read more information and details Struts - Struts . thanks and regards Sanjeev Hi friend, For more information.../struts/ Thanks...Struts Dear Sir , I m very new to struts how to make a program-config.xml is used for making connection between view & controller.... The one more responsibility of the controller is to check.../struts/ Thanks Struts - Struts for more information. Thanks...Struts Hi, I m getting Error when runing struts application. i... /WEB-INF/struts-config.xml :// Thanks...Struts Is Action class is thread safe in struts? if yes, how it is thread safe? if no, how to make it thread safe? Please give me with good Reply Me - Struts to provide a better solution for you.. Thanks making a web application using Web-Logic Server - Struts making a web application using Web-Logic Server Hello , I am a beginner to the java platform so i am facing some problem in making a web aplication using a struts framework.Hence i referred the docs given in the link http Answer me ASAP, Thanks, very important Answer me ASAP, Thanks, very important Sir, how to fix this problem in mysql i have an error of "Too many connections" message from Mysql server,, ASAP please...Thanks in Advance Error - Struts these two files. Do I have to do any more changes in the project? Please.... If you can please send me a small Struts application developed using eclips. My...Error Hi, I downloaded the roseindia first struts example struts validations - Struts --------------------------------- Visit for more information. Thanks...struts validations hi friends i an getting an error in tomcat while best Struts material - Struts best Struts material hi , I just want to learn basic Struts.Please send me the best link to learn struts concepts Hi Manju...:// Thanks java - Struts in which one can add modules and inside those modules some more options please give me idea how to start with Hi Friend, Please clarify what do you want in your project. Do you want to work in java swing? Thanks does anybody could tell me who's the author - Struts the author's hard-working !! thanks roseindia you could send me mail...does anybody could tell me who's the author does anyone would tell me who's the author of Struts2 tutorial( Reply Me - Java Beginners use this i don't know... please tell me what is the use of this ....and also solve my previous problem.... Thanks Hello Ragini MVC... with databse. Controller are servlet,jsp which are act as mediater. If u want book for struts 1.2.9 - Struts book for struts 1.2.9 Can any body suggest me book for struts 1.2.9...? Thanks in advance, Hi friend, I think, Jakarta Struts...:// Thanks Thanks - Java Beginners Thanks Hi, thanks This is good ok this is write code but i... either same page or other page. once again thanks hai frnd.. all data means...?wat do u mean by that? that code will display all and send me... Thanks once again...for sending scjp link...Thanks Hi, Thanks ur sending url is correct..And fullfill requirement.. I want this.. I have two master table and form vendor Custom Validations - Struts into Validator ? Can anybody tell me how to do this with a sample example. Thanks in advance Vikrant Hi friend, For more information on struts 1...Custom Validations Hi, I am trying to do the custom validations thanks - Java Beginners thanks Sir , i am very glad that u r helping me a lot in all... to understood...can u please check it and tell me..becoz it says that any two... it to me .....thank you vary much Thanks Thanks This is my code.Also I need code for adding the information on the grid and the details must be inserted in the database. Thanks in advance help - Struts studying on struts2.0 ,i have a error which can't solve by myself. Please give me... Hi friend, Do Some changes in struts.xml... attribute "namespace" in Tag For read more information to visit this Doubts on Struts 1.2 - Struts visit for more information. Thanks...Doubts on Struts 1.2 Hi, I am working in Struts 1.2. My requirement... anyone suggest me how to proceed... Thanx in advance Hi friend Is Singleton Default in Struts - Struts (called a Controller in Spring MVC). If you have any problem then send me...:// Thanks..., Is Singleton default in Struts ,or should we force any use of Struts - Struts a link. This link will help you. Please visit for more information with running example. Thanks...use of Struts Hi, can anybody tell me what is the importance i have a problem to do this question...pls help me.. i have a problem to do this question...pls help me.. Write a program that prompts the user to input an integer and the output the number...(); System.out.println("Reversed Number: "+reverse(num)); } } Thanks do the combinations in java, pls help me its urhent - Development process do the combinations in java, pls help me its urhent import... one help me: action when condition1 and condition3 action when condition1... again : Thanks plz Help me - Java Beginners :// Thanks...plz Help me Hi, I want learn struts,I dont have any idea about... my personal id plz tell me that whose software installed.and give me brief validations in struts - Struts }. ------------------------------- Visit for more information. in struts hi friends plz give me the solution its urgent I an getting an error in tomcat while running the application in struts , sun, and sunw etc. For more information on Struts visit to : Thanks...Struts Tag Lib Hi i am a beginner to struts. i dont have Doubt in struts - Struts know how to do. Please help me in this regard. It will be helpful, if explained...Doubt in struts I am new to Struts, Can anybody say how to retrieve... : Thanks Struts 2.0.6 Released Struts 2.0.6 Released Download Struts 2.0.6 from This release is another grate step towards making Struts 2 more robust and usable.   2.0- Deployment - Struts Struts 2.0- Deployment Exception starting filter struts2... /* For more information on Struts2 visit to : Thanks I placed correct web.xml only. For first time I Struts for Java Struts for Java What do you understand by Struts for Java? Where Struts for Java is used? Thanks Hi, Struts for Java is programming... information and tutorials then visit our Struts Tutorials section. Thanks Integrating Struts 2 with hybernate - Struts Integrating Struts 2 with hybernate Can anyone please help me how... JDBC call? please help me thanks in advance Hi friend, Read for more information. Java Struts - Hibernate Java Struts I am trying to do a completed project so that I can keep it in my resume. So can anyone please send me a completed project in struts.... thanks in advance. Hi friend, Read for more information Money Transfer Applications ? Making Your Life That Much Easier Money Transfer Applications – Making Your Life That Much Easier... decade the industry has undergone an astounding amount of advancements thanks... certain tasks to help you manage your money. You can do just about everything What do you understand by Virtual Hosting? about the hosting terms. Just explain me about Virtual Hosting. Thanks  ... more at the following page: Check the tutorial Virtual Hosting. Thanks...What do you understand by Virtual Hosting? What do you understand file uploading - Struts Struts file uploading Hi all, My application I am uploading files using Struts FormFile. Below is the code. NewDocumentForm.... If the file size is more then 1 mb. I need to read the file by buffer MVC - Struts MVC CAN ANYONE GIVE ME A REAL TIME IMPLEMENTATION OF M-V-C ARCHITECTURE WITH A SMALL EXAMPLE...... Hi friend, Read for more information. Thanks struts - Struts struts how the database connection is establised in struts while using ant and eclipse? Hi friend, Read for more information. Business Intelligence for Decision Making Business Intelligence for Decision Making How do business intelligence helps in better decision making Hi - Struts know it is possible to run struts using oracle10g....please reply me fast its...:// Thanks. Hi Soniya, We can use oracle too in struts...Hi Hi friends, must for struts in mysql or not necessary Thanks for fast reply - Java Beginners Thanks for fast reply Thanks for response I am already use html... oh well... do not get confused with all that! these are very simple...(); } Just read the tutorial completely, and u'll able to do it. Good luck the struts action. Hi friend, For more information,Tutorials and Examples on Checkbox in struts visit to : Thanks Struts - Struts . Please visit for more information. Thanks java - Struts : In Action Mapping In login jsp For read more information on struts visit to : Thanks... friend. what can i do. In Action Mapping In login jsp   in jsp page/ How to to do the whole configuration. thanks Arat Hi...multiple configurstion file in struts Hi, Please tell me the solution. I have three configuration file as 'struts-config.xml','struts validation - Struts single time only. thank you Hi friend, Read for more information, Thanks...validation Hi, Can you give me the instructions about Hello - Struts :// Thanks Amarde... will abort request processing. For more information on struts visit to : Thanks Hi.. - Struts Hi.. Hi, I am new in struts please help me what data write in this file ans necessary also... struts-tiles.tld,struts-beans.tld,struts.../struts/ Thanks. struts-tiles.tld: This tag library provides tiles making trees - Java Beginners making trees pls i do't know what this yerms mean in terms of java trees and stacks? Inorder traverse Switch statement Doing something to the stack thanks in advance Struts - Struts . Struts1/Struts2 For more information on struts visit to : Hello I like to make a registration form in struts inwhich java - Struts java what do u mean by rendering a response in Struts2.0 Hi friend, Read for more information. is running code, read for more information. configuration - Struts /struts/ Thanks...configuration Can you please tell me the clear definition of Action class,ActionForm,Model in struts framework. What we will write in each Java making a method deprecated Java making a method deprecated java making a method deprecated In Java how to declare a method depricated? Is it correct to use depricated... to anyone who is using that method. Please tell me how to achieve that?   Tell me - Struts Struts tutorial to learn from beginning Tell me, how can i learn the struts from beginning Struts - Framework , Struts : Struts Frame work is the implementation of Model-View-Controller... of any size. Struts is based on MVC architecture : Model-View-Controller... are the part of Controller. For read more information,examples and tutorials Tell me - Struts Directory Structure for Struts Tell me the Directory Structure for Struts Advertisements If you enjoyed this post then why not add us on Google+? Add us to your Circles
http://www.roseindia.net/tutorialhelp/comment/19444
CC-MAIN-2015-18
en
refinedweb
django-uturn 0.2.0 Overriding redirects in Django, to return where you came from Provides the HTTP redirect flexibility of Django’s login view to the rest of your views. Here’s what happens when you –as an anonymous user– try to access a view requiring you to log in: - Django redirects you to /login?next=/page-you-wanted-to-see - You log on - Django’s login view notices the next parameter and redirects you to /page-you-wanted-to-see rather than /. With Uturn, you’ll be able to use the same feature by simply changing some template code and adding middleware or decorators to your views. Installation django-uturn is available on Pypi: pip install django-uturn Uturn is currently tested against Django versions 1.2, 1.3 and 1.4. Typical use cases From master to detail and back again You’ve got a list of… let’s say fish. All kinds of fish. To enable users to find fish by species, you’ve added a filter. Enter bass and your list is trimmed to only contain the Australian Bass, Black Sea Bass, Giant Sea Bass, Bumble Bass… Wait a minute! Bumble Bass isn’t a species you’ve ever heard of - it’s probably the European Bass. So you hit the edit link of the Bumble Bass, change the name and save the form. Your view redirects you to the list. The unfiltered list. Aaargh! If you’d just used the Uturn redirect tools, you would have been redirected to the filtered list. Much better (in most cases). Multiple origins This is basically a more general application of the previous use case. Suppose you have a form to create a new ticket that you can reach from both the project page and the ticket list page. When the user adds a new ticket, you want to make sure she’s redirected to the project page when she came from the project page and the ticket list page when she reached the form from the ticket list page. Enter Uturn. How to use Uturn Redirecting in views A typical form processing view function probably looks a bit like this: from django.shortcuts import redirect, render from forms import TicketForm def add_ticket(request): if request.method == 'POST': form = TicketForm(request.POST) if form.is_valid(): form.save() return redirect('ticket-list') else: form = TicketForm() context = {'form': form} return render(request, 'tickets/ticket_list.html', context) This view always redirects to the ticket list page. Add Uturn redirects: from django.shortcuts import render from uturn.decorators import uturn from forms import TicketForm @uturn def add_ticket(request): if request.method == 'POST': form = TicketForm(request.POST) if form.is_valid(): form.save() return redirect('ticket-list') else: form = TicketForm() context = {'form': form} return render(request, 'tickets/ticket_list.html', context) We simply add the uturn decorator to the view which will check the request for a valid next parameter and - if present - use that value as the target url for the redirect instead of the one you specified. If you want to apply Uturn’s redirect logic to all requests, add the uturn.middleware.UturnMiddleware class to your middleware instead. Passing the next page along How do you add that next parameter to the URL in your project page? Here’s what you’d normally use: <a href="{% url ticket-add %}">Add ticket</a> This would render, depending on your url conf of course, a bit like this: <a href="/tickets/add/">Add ticket</a> Here’s what you’d use with Uturn: {% load uturn %} <a href="{% uturn ticket-add %}">Add ticket</a> The uturn template tag will first determine the actual URL you want to link to, exactly like the default url template tag would. But the uturn tag will also add the current request path as the value for the next parameter: <a href="/tickets/add/?next=%2Fprojects%2F">Add ticket</a> Clicking this link on the project page and adding a ticket will get you redirected to the /projects/ URL if you add the correct field to your form. Passing through forms The easy way to add the parameter to your forms is by adding the uturn_param template tag inside your form tags. If you’re using Django’s builtin CSRF protection, you’ll already have something like this: <form action="." method="post"> {{ form.as_p }} {% csrf_token %} <input type="submit" value="Save"> </form> Change that to this: <form action="." method="post"> {{ form.as_p }} {% csrf_token %} {% uturn_param %} <input type="submit" value="Save"> </form> Note: if you’re using Django 1.2, you will have to pass the request: <form action="." method="post"> {{ form.as_p }} {% csrf_token %} {% uturn_param request %} <input type="submit" value="Save"> </form> Don’t worry if you don’t want to use next as the parameter. You can specify a custom parameter name with the UTURN_REDIRECT_PARAM setting. And if you want to redirect to other domains, you can specify those domains with the UTURN_ALLOWED_HOSTS setting. Otherwise requests to redirect to other domains will be ignored. Overriding URLs in templates There’s just one more thing we need to change: the cancel link on your form: <form action="." method="post"> {{ form.as_p }} {% csrf_token %}{% uturn_param %} <input type="submit" value="Save"> or <a href="{% url ticket-list %}">cancel</a> </form> That link should point to the project page when applicable. Use the defaulturl tag to accomplish this: {% load uturn %} <form action="." method="post"> {{ form.as_p }} {% csrf_token %}{% uturn_param %} <input type="submit" value="Save"> or <a href="{% defaulturl ticket-list %}">cancel</a> </form> The defaulturl tag will default to standard url tag behavior and use the next value when available. Here’s what your form would look like from the ticket list page (with or without the next parameter): <form action="." method="post"> ... <input type="submit" value="Save"> or <a href="/tickets/">cancel</a> </form> And here’s what that same form would look like when you reached it from the project page: <form action="." method="post"> ... <input type="submit" value="Save"> or <a href="/projects/">cancel</a> </form> Thanks to django-cms for the backported implementation of RequestFactory. - Downloads (All Versions): - 40 downloads in the last day - 122 downloads in the last week - 574 downloads in the last month - Author: Kevin Wetzels - License: BSD licence, see LICENCE - Categories - Package Index Owner: roam - DOAP record: django-uturn-0.2.0.xml
https://pypi.python.org/pypi/django-uturn/0.2.0
CC-MAIN-2015-18
en
refinedweb
24 August 2010 06:26 [Source: ICIS news] By Pearl Bantillo SINGAPORE (ICIS)--Southeast Asia’s oil and gas giant PTT is optimistic that all its frozen petrochemical projects in Mab Ta Phut, save one, will be released from suspension by the courts, a company spokesperson said on Tuesday. The Thai Cabinet is currently assessing the list drafted by the National Environmental Board (NEB) late on Monday, which exempted downstream petrochemical projects among those deemed harmful to the environment. A number of Mab Ta Phut projects were ordered stopped by a Thai court in September 2009 on environmental grounds, invoking Article 67 of the country's constitution. “It (cabinet) should approve what the environmental board said,” said the PTT spokesperson. The ?xml:namespace> Expected to be released from suspension is PTT’s sixth gas separation plant, along with the three projects of its affiliate PTT Chemical, namely, the 50,000 tonne/year high-density polyethylene (HDPE) expansion, another 250,000 tonne/year HDPE expansion and its 50,000 tonne/year ethanolamine project. PTT could easily comply with the requirements of conducting a health and environment impact assessment on the projects since the process was underway, the company spokesperson said. “We are going to finish it anyway. It’s going to come between September to November,” she added. PTT Chemical's 95,000 tonne/year monoethylene glycol (MEG) project however, would remain suspended since it falls under the category of “intermediate stream” projects, which are deemed environmentally harmful based on Nonetheless, PTT intended to complete the health and environmental impact assessment of this plant and submit them to the court for appraisal, she added. Asked about PTT’s foregone gains because of the stalled projects, the company spokesperson said: “I don’t have the number in my hand at this moment.” Meanwhile, the Siam Cement Group (SCG), the biggest conglomerate in SCG officials could not be immediately reached for comment. The projects – a 350,000 tonne/year linear low density polyethytlene (LLDPE) unit, a 220,000 tonne/year specialty elastomers plant, a 90,000 tonne/year methyl methacrylate (MMA) plant and a 20,000 tonne/year cast sheet plant – were expected to be completed by mid-2011, the brokerage said in a note to clients. “It is a very big step forward in the Mab Ta Phut case. The only thing is we have to wait for the official document, which will be released in two days,” said Chantaraserekul. “What happens next is that the companies that have projects under suspension would have to go to court and ask [the projects] to be removed from the suspension list,” he added. The process could take two to three months to complete, which meant that commercial production at PTT’s stalled projects would likely happen by the year-end, the DBS Vickers analyst said. “This is sooner than the market expected,” he said, citing that the consensus forecast was for production for the projects to be possible only in the first quarter of 2011. While the At 12.11 hours ($1 = €0.79)
http://www.icis.com/Articles/2010/08/24/9387208/thailands-ptt-optimistic-on-mab-ta-phut-projects-restart.html
CC-MAIN-2015-18
en
refinedweb
08 May 2012 19:25 [Source: ICIS news] WASHINGTON (ICIS)--The ?xml:namespace> In its semi-annual economic forecast, the Institute for Supply Management (ISM) said that two-thirds of its member increase in raw materials costs expected for the balance of 2012, “manufacturers are poised to grow revenues and contain costs through the remainder of the year”. Holcomb said that 16 of the 18 manufacturing industries tracked by the institute predicted growth this year ahead of the 2011 pace, including chemicals producers and the plastics and rubber sector. Perhaps. Among the 17 non-manufacturing industries tracked by ISM are information services, finance, agriculture, forestry, retail trade, entertainment and recreation, real estate, health care, and hotel and food services sectors. The institute’s report of a more positive outlook for the The Commerce Department reported that In addition, But the institute’s survey suggests that those declining numbers in workforce expansion may be temporary. Holcomb said the ISM survey indicates that manufacturers will increase hiring by 1.4% in the balance of this year while the non-manufacturing sector is expected to boost employment by nearly
http://www.icis.com/Articles/2012/05/08/9557512/us-manufacturing-services-to-boost-revenues-this-year-ism.html
CC-MAIN-2015-18
en
refinedweb
In this hands-on lab, we will install Helm and configure the standard repository. Once that is complete, we will release a chart to ensure everything is working properly. After the release is verified, we will use Helm to clean up our cluster and remove the resources we created. Learning Objectives Successfully complete this lab by achieving the following learning objectives: - Install and Configure Helm On the Kubnernetes primary server, download and install Helm from its packaged binary. Once it is installed, configure the repo and ensure it is named stableand is up-to-date. - Create a Helm Release Ensure Helm is working by creating a Helm release in the default namespace. The release should be named testand the WordPress should be the application that is released. - Verify the Release and Clean Up Check the status of the release to ensure the release succeeded, then verify in kubectlthat the resources were created. Once you are satisfied that resources were created, remove the resources from the cluster. Then, confirm there are no resources present in the default namespace.
https://acloudguru.com/hands-on-labs/installing-helm
CC-MAIN-2022-33
en
refinedweb
Content uploaded by Elizabeth Theiss-Morse Author content All content in this area was uploaded by Elizabeth Theiss-Morse on Dec 03, 2017 Content may be subject to copyright. 99© Springer International Publishing Switzerland 2016 E. Shockley et al. (eds.), Interdisciplinary Perspectives on Trust, DOI 10.1007/978-3-319-22261-5_6 Examining the Relationship Between Interpersonal and Institutional Trust in Political and Health Care Contexts Celeste Campos-Castillo , Benjamin W. Woodson , Elizabeth Theiss-Morse , Tina Sacks , Michelle M. Fleig-Palmer , and Monica E. Peek Scholars commonly portray institutional and interpersonal trust as instrumental for social order. Many depict institutional trust—trust in institutions, such as health care and government—as necessary for fostering interactions free from malfea- sance (e.g., Giddens, 1990 ; Gilson, 2003 ; Zucker, 1986 ). Similarly, scholars portray interpersonal trust—trust in individuals who represent these institutions—as crucial for sustaining trust in institutions (Kramer, 1999 ; Maguire & Phillips, 2008 ; Rousseau, Sitkin, Burt, & Camerer, 1998 ; Shamir & Lapidot, 2003 ). A relationship between individual and institutional trust is intuitive, yet scholars have rarely exam- ined the existence of this relationship explicitly (e.g., Rousseau et al., 1998 ; Schoorman, Mayer, & Davis, 2007 ). For purposes of this chapter, we adopt Mayer, Davis, and Schoorman’s (1995/ 2006 ) defi nition of trust as a willingness to be vulnerable to another party. C. Campos-Castillo (*) Department of Sociology , University of Wisconsin–Milwaukee , Milwaukee , WI 53202 , USA B. W. Woodson Department of Political Science , University of Missouri–Kansas City , Kansas City , MO 64110 , USA E. Theiss-Morse Department of Political Science , University of Nebraska–Lincoln , Lincoln , NE 68588 , USA T. Sacks School of Social Welfare, University of California , Berkeley , CA 94720 , USA M. M. Fleig-Palmer Management Department , University of Nebraska–Kearney , Kearney , NE 68849 , USA M. E. Peek Section of General Internal Medicine, University of Chicago , Chicago , IL 60637 , USA 100 Schoorman, Wood, and Breuer ( 2015 ) suggest that this defi nition of trust, although initially developed for the micro-level of analysis, is applicable to the macro-level of analysis as well. We also focus our discussion on situations where the trustor (trusting party) is an individual member of the public and the trustee (party being trusted) is an institution or an individual representing the institution. In our approach, we explore how different researchers have defi ned what constitutes the latter (an “institution”), which reveals unique insights into whether a relationship exists between interpersonal trust and institutional trust. Put differently, while others have approached the problem through discussing the defi nition of “trust,” we highlight that researchers have taken for granted that a shared consensus exists about what is an institution. We examine the evidence concerning the reciprocal relationship between inter- personal and institutional trust. Our discussion focuses on two institutional con- texts: the political arena and health care. Examining these two specifi c contexts permits a better understanding of the relationship between individual and institu- tional trust. First, we consider the differences in how research has conceptualized what constitutes an “institution.” Whereas there is a consensus that interpersonal trust occurs between two people, there is little agreement about where institutional trust is directed. These differences in what constitutes the “institution” carry impor- tance in determining the type of relationship between interpersonal and institutional trust. Second, we examine a trustor’s individual-level characteristics and how these infl uence the relationship between interpersonal and institutional trust. Specifi cally, we analyze how the trustor’s individual-level characteristics have an impact on the direction and strength of the relationship between interpersonal and institutional trust. Further, evidence suggests that certain characteristics infl uence the level of trust directed at any trustee, but we elaborate on these fi ndings by considering how researchers defi ne an institution. D e fi ning the Institution Across the range of defi nitions for what is an institution, two patterns emerge. First, an institution is not only a brick-and-mortar organization, but can be a role or a specifi c type of person (e.g., physicians and judges). In each case, the institution is robust to the turnover of the individuals who compose it, indicating that the institu- tion is persistent and stable. Second, researchers have examined institutions that range in their proximity to the trustor. We characterize a “local” institution as one where the trustor has direct contact with the individuals who are members of the institution and a “remote” institution where such direct contact is minimal or absent. By locating the institution under investigation in this range, we can better under- stand the nature of the relationship between interpersonal trust and institutional trust. Of course, a trustor can have frequent, mediated contact with an institution (e.g., following relevant news reports about the institution), but the absence of direct contact maintains the remoteness of the institution. Further, a trustor can have an C. Campos-Castillo et al. 101 experience with an institutional artifact, such as its Web site, which can infl uence trust in the institution. These cases, however, are outside of our focus because they circumvent the role of interpersonal trust. Because of our focus on the political and healthcare contexts as well as different levels of analysis, we defi ne an institution, using Barley and Tolbert ( 1997 ), as “shared rules and typifi cations that identify categories of social actors and their appropriate activities or relationships” (p. 100). This defi nition permits us to con- sider an institution that is a brick-and-mortar organization as well as one that is a role or a specifi c group of persons. The notion of an institution also being viewed as a role is consistent with the Oxford English Dictionary ( 2015 ) defi nition of an insti- tution as the “establishment in a charge or position.” Thus, our discussion of trust in institutions could address a specifi c institution such as the Supreme Court or a hos- pital or a specifi c group of people such as judges and physicians. There are two characteristics of institutions that are pertinent to our discussion. First, regardless of the level of analysis, an institution should be robust to the turn- over of its members, indicating that the institution is persistent and stable. Second, the spatial location of the institution in relationship to the trustor is an important consideration (Gössling, 2004 ). Geographic proximity will vary at the level of the trustor, depending on his/her personal circumstances. Local institutions are those in which the trustor has frequent contact directly with members of the institution. Remote institutions are those in which direct contact is minimal or absent. A trustor can have frequent, mediated contact with an institution (e.g., following relevant news reports about the institution), but the absence of direct contact maintains the remoteness of the institution. The permanence and the proximity of an institution will infl uence the nature of the relationship between interpersonal and institutional trust. Politics In the context of political institutions, interpersonal trust involves trust in those individuals who compose the institution—e.g., a member of Congress, a justice of the U.S. Supreme Court, or the police offi cer walking the beat in a person’s neigh- borhood. Institutional trust involves the two aspects discussed above: trust in the institution regardless of the people involved (what we refer to as the “ brick-and- mortar ” institution, for lack of a better term) and trust in the institutional roles and types of individuals. When the institution is a brick-and-mortar organization, the literature is fairly straightforward—people can trust or distrust the U.S. Congress, U.S. Supreme Court, or federal government—because the boundaries of the institu- tion (where it starts and ends) are straightforward. These institutions are persistent and stable, which results in trustors possessing political attitudes toward those insti- tutions that differ from their attitudes toward members of those institutions. For example, people’s feelings about the members of Congress are usually much Examining the Relationship Between Interpersonal and Institutional Trust… 102 different from how they feel about the U.S. Congress as an institution (Hibbing & Theiss- Morse, 1995 ). Another line of research concerns institutions that are roles , or specifi c types o f individual (Caldeira, 1986 ; Richardson, Houston, & Hadjiharalambous, 2001 ). The dividing line between interpersonal and institutional trust, in this case, is less clear. Nonetheless, because roles are persistent and stable even when the individuals occu- pying the role change, trust in the role (or type of individual) may be defi ned as institutional trust, while trust in specifi c individuals who occupy the role, such as Representative Alcee Hastings or Senator Susan Collins, is interpersonal trust. The role of being a member of Congress exists long after Hastings and Collins leave Congress. The following discussion will lay out the current research that elucidates the relationship between these different levels of trust—trust in specifi c individuals, trust in roles or specifi c types of individuals, and trust in brick-and-mortar institu- tions—for the three major forms of political institutions: the judicial, legislative, and executive. Judicial . For judicial institutions the relationship between trust in a specifi c judge and trust in a court depends on the type of court. For some courts—like a local criminal court—people interact directly with the individuals composing that court. Conversely, people rarely interact directly with the individuals who compose more remote courts like the U.S. Supreme Court. These differences change not only the relationship between interpersonal and institutional trust, but also the nature of interpersonal trust. An interpersonal trust based on direct experiences will be much different from an interpersonal trust based on other, indirect information. For institutions that are more local to the trustor, such as lower level courts, research consistently shows that interactions with the individuals composing that institution affect perceptions of the legitimacy of an institution, an attitude that con- tributes to trust (see Tyler, 2006a for a review). Tyler ( 2006b ) found that when people perceive that judges are using a fair decision-making process, they are more likely to perceive the institution as legitimate and trust the institution. Crucially, the evidence from Tyler ( 2006b ) involves people’s perception of their direct interac- tions with the courts. Based on Tyler’s work, we propose that this effect operates through interpersonal trust: people develop interpersonal trust through their interac- tions with an individual and their perceptions of a “proper process” being followed in an institution, and this interpersonal trust then affects their trust in the institution. The relationship between interpersonal trust and trust in institutions that are more remote is less established. In the case of judicial institutions, the most salient remote institution is the U.S. Supreme Court. One reason for the lack of evidence on the nature of cross-level trust is that surveys rarely include questions about indi- vidual justices. Further, research has not (to our knowledge) tied attitudes toward specifi c individual justices to attitudes toward the institution. The closest approxi- mation to a question about individual justices are the popular items that assess peo- ple’s confi dence in the “leaders” of the U.S. Supreme Court (without specifying the names of the specifi c occupiers of the bench) or that ask about the procedures and C. Campos-Castillo et al. 103 process that justices generally use to make decisions (e.g., Casey, 1974 ; Gibson & Caldeira, 2011 ; Scheb & Lyons, 2000 ). However, aside from the fact that these items do not measure trust as we are defi ning it, they focus more on the institutional rather than the interpersonal because they ask about a person’s perception of a role—U.S. Supreme Court justices—rather than a specifi c individual. While this research cannot directly answer the question of the relationship between interpersonal trust in specifi c individual justices and institutional trust in the U.S. Supreme Court, a debate between Tyler and Rasinski ( 1991 ) and Gibson ( 1989 , 1991 ) over the relationship between procedural fairness and institutional legitimacy perceptions provides some theoretical guidance. In this debate, Gibson ( 1991 ) made a distinction between remote institutions like the U.S. Supreme Court where people often rely on indirect information about how the institution actually functions and local institutions where people interact directly with the individuals who compose the institution. When people directly interact with a local institution, Gibson ( 1991 ) concedes that their experience with the individuals composing that institution affects their institutional trust, but for the remote institutions the opposite occurs—trust in the institution affects their views of how the institution operates because people have no direct experience with, and thus no direct information on, the decision-making process . 1 While Gibson ( 1991 ) examined procedural fairness perceptions rather than interpersonal trust, the same distinction likely applies to the relationship between interpersonal and institutional trust for remote and local insti- tutions. For local institutions, people interact with the individuals within the institu- tion and develop a sense of interpersonal trust that can then affect institutional trust. Meanwhile for remote institutions, they never interact with the individuals and thus cannot develop a fi rm sense of interpersonal trust. Instead, their institutional trust affects their views of how the individuals operate and thus their interpersonal trust. Executive . Distinguishing among the institution, the specifi c type of person, and an individual is less straightforward with the executive branch . Because of this dif- fi culty, the Presidency would seem to be the prime case to fi nd a strong relationship between interpersonal trust in the current president and institutional trust in the Presidency. However, in one study less than half (46 %) approved of the current president, despite near consensus of approval (96 %) of the presidential institution, suggesting that distrusting the individual occupying the offi ce does not lead to dis- trusting the institution (Hibbing & Theiss-Morse, 1995 ). Political scientists, how- ever, rarely ask about the institution of the executive branch, focusing instead on the specifi c president or on the leadership of the executive branch. Associating the president with the executive branch makes sense since the president “is the embodi- ment of the executive branch to most people” (Moy & Pfau, 2000 , p. 13), and approval of the president is signifi cantly and positively related to trust in govern- ment, although the direction of causation has been debated (Citrin, 1974 ; Hetherington, 2005 ; Williams, 1985 ). While some people will develop a sense of interpersonal trust with a salient and prominent person such as the president, the research suggests that whatever inter- 1 See Mondak ( 1993 ) for experimental evidence supporting Gibson’s ( 1991 ) argument. Examining the Relationship Between Interpersonal and Institutional Trust… 104 personal trust is developed does not affect institutional trust. However, we expect that much of the interpersonal trust developed toward the president is highly infl u- enced by institutional trust. While the president is a part of the institution of the Presidency, the president is also a member of many other role-type institutions such as “politician” or, to be more specifi c in the case of the current president, Barack Obama, a “Democratic politician.” The fact that one of the largest predictors of any individual’s approval rating of a president is political party affi liation (Bond & Fleisher, 2001 ; Gilens, 1988 ) provides ample support that most of the interpersonal trust developed toward a president is a result of that president fulfi lling the role of a “Democratic politician” or a “Republican politician” and actually has little to do with the individual himself. Like the distinction between local courts and the Supreme Court, few people have direct or unmediated interactions with the president, but they do with individu- als who work for the executive branch, such as their mail carriers and other federal workers. At the same time that people distrust the government and have negative views of the federal bureaucracy, they report positive experiences with federal employees (Rein & O’Keefe, 2010 ) and positive assessments of various bureau- cratic agencies (Pew Research Center, 2013 ). People can trust their mail carrier to do a good job delivering the mail; yet, because they do not equate the United States Postal Service with the executive branch of government, this does not produce insti- tutional trust. Consequently, in the case of the executive branch, it appears that trus- tors do not make the connection between interpersonal trust and institutional trust. Legislative . The most explicit discussion of the relationship between interper- sonal and institutional trust occurs within research on Congress. People clearly make a distinction between their own member of Congress (an individual), mem- bers of Congress as a whole (a specifi c type of person), and the institution of Congress. Hibbing and Theiss-Morse ( 1995 ), using approval rather than trust, found that almost 90 % of Americans approved of the institution of Congress (88 %), two- thirds approved of their own member of Congress (67 %), and less than a quarter approved of the members of Congress as a whole (24 %). People are taught to appreciate the role of the institution of Congress in the con- stitutional design of the American government but are encouraged to distrust mem- bers of Congress in general. In other words, they trust the brick-and-mortar institution of Congress but do not trust Congress members as a specifi c type of person or role. Fenno ( 1975 ) provides perhaps the best explanation for the juxtaposition between trusting one’s own specifi c member of Congress and distrusting Congress members more generally. Individual members of Congress spend a great deal of time in their districts working to develop trust with their constituents. They do this through their self-presentations: by emphasizing their qualifi cations and their ability to get things done in Washington; identifying with their constituents; and displaying empathy, especially when constituents are experiencing diffi culties. At the same time, they actively disparage Congress when they run for reelection. All of the negatives asso- ciated with Congress—such as special interest infl uence, unwarranted perquisites of offi ce, ineffi ciency, corruption, and scandals—are due to the undifferentiated mass of other members, not to the representative himself or herself. C. Campos-Castillo et al. 105 The distrust people have toward Congress members is also related to the institu- tion of Congress. Congress is the most transparent institution in the federal government; all of its dirty laundry gets aired in public. In contrast, much of the work of the Supreme Court and the president is conducted behind closed doors. This distinction in transparency helps explain why trust of Congress members is low compared to the Supreme Court justices and the president and bureaucrats (Hibbing & Theiss- Morse, 1995 ). Health Care Regardless of the referent, there are two common themes in defi nitions of trust in health care research: risk and vulnerability (e.g., Abelson, Miller, & Giacomini, 2009 ; Gilson, 2003 , 2006 ; Hall, Dugan, Zheng, & Mishra, 2001 ; Mechanic, 1996 ). Patients who are ill are vulnerable because they do not have the knowledge or skills to cure themselves but must depend upon the expertise and good will residing in healthcare institutions (c.f., Gilson, 2003 ). The risk is that they will not be cured, or even may suffer further injury or harm. Changes to the structure of health care delivery have altered the image that comes to mind when an individual thinks about a health care institution (for detailed discussions, see Scott, Ruef, Mendel, & Caronna, 2000 ; Mechanic, 1996 ; Rao & Hellander, 2014 ), making it more diffi cult than in the case of politics to defi ne “institution.” In health care, research into institutional trust has examined various referents including health systems and medical institutions such as hospi- tals and clinics (e.g., Abelson et al., 2009 ; Cook & Stepanikova, 2008 ; Gilson, 2003 , 2006 ; Hall et al., 2001 ). Therefore, for purposes of this chapter, we defi ne a health care institution broadly as an organization established for the purpose of treating, managing, and preventing disease. Such organizations could be a hospital, an outpatient clinic, or a health plan (Cook & Stepanikova, 2008 ; Hall et al., 2001 ; Mechanic, 1996 ). The question remains, however, whether interpersonal trust and institutional trust infl uence one another. The management of health care delivery has become more remote to the trustor (e.g., Swetz, Crowley, & Maines, 2013 ), yet individuals still have direct contact with their physicians and health care team. Exploring the framework that commonly informs trust in health care—Mayer and colleagues’ model—helps us understand institutional trust (Schoorman et al., 2015 ) and how interpersonal trust helps develop institutional trust (Schilke & Cook, 2013 ). Since many of the scales that measure institutional trust in health care have conceptual roots in this model, a reasonable working claim is that interpersonal trust and insti- tutional trust can infl uence one another in health care. Researchers should exercise caution with our tentative claim, however, since those who constructed these trust models deliberately paid little attention to context in an effort to develop the most generalizable model; research will need to consider whether health care poses an interesting contingency in the extent that these models generalizes across settings. Examining the Relationship Between Interpersonal and Institutional Trust… 106 Lastly, just as with politics we consider the existence of cross-level interaction when the institution is a role or a specifi c type of person. The existence of this relationship within health care is less clear than in politics. In one study, researchers modeled their measure of trust in the physician profession after a similar measure of trust in a specifi c physician (Hall, Camacho, Dugan, & Balkrishnan, 2002 ). Unlike the latter measure, the validated items that resulted in the former did not refl ect a dimension of trust in confi dentiality. The lack of isomorphism in the structure of the two measures raises the question of whether cross-level interaction can occur. Moreover, the study found that, while there was a signifi cant correlation between trust in a specifi c physician and trust in the physician profession, the correlation was only moderate and was lower than correlations with the other measures they used to determine validity. Just as in the political arena, however, research fi nds that trust in a specifi c physician remains high even in the face of declining trust in the medical profession (Blendon & Benson, 2001 ; Hall, Camacho et al., 2002 ; Pescosolido, Tuch, & Martin, 2001 ). In examining the role of physicians, trustors develop perceptions of physicians in general not only through interpersonal interactions but also through portrayals in books or the media (c.f., Hall, Camacho et al., 2002 ), thus the institution of physicians as a role is more remote because it is more impersonal. If we extrapo- late this characterization to the role of medical professionals as more remote to the patient than a more specifi c health institution (e.g., a specifi c hospital or clinic), then we can develop preliminary claims as we did for the political context. As posited earlier, a relationship between interpersonal trust and institutional trust is more likely to occur as the institution becomes more local to the trustor. Indeed, research fi nds that patients’ trust in a specifi c physician is associated with their trust in their local health care team (Kaiser et al., 2011 ) and insurance plan (Zheng, Hall, Dugan, Kidd, & Levine, 2002 ). A relationship between interpersonal and institutional trust in the cases where the institution is a role, however, is likely weaker because the institution is remote. Accordingly, we expect a weaker rela- tionship between trust in one’s physician and trust in medical professional roles than when the institution is more local. Trustors’ Characteristics Politics Demographics . One moderator that likely affects the relationship between interper- sonal and institutional trust is the trustor’s demographics. Specifi cally, whenever a trustor shares demographic traits with the decision makers within an institution, the type of interpersonal trust developed through shared demographics can translate into greater institutional trust, but this only occurs when the institution as a whole is representative of the trustor’s demographics. In political science this phenomenon C. Campos-Castillo et al. 107 is known as descriptive representation. For example, a woman might feel greater trust in her female legislator than in her male legislator. Generalizing to the whole legislature, women might have greater trust in the legislature when it consists of a representative number of women (about 50 %) than when it is dominated by men. The cause of the descriptive representation phenomenon is complex. Both Mansbridge ( 1999 ) and Williams ( 1998 ) argue that, theoretically, the positives that come with being represented by someone who shares one’s race or gender—includ- ing feeling better able to communicate with the representative and better repre- sented in terms of their shared interests—contribute to more interpersonal trust between the representative and the constituent, and this subsequently leads to greater trust in the institution. Empirical work provides a less clear picture than the theoretical argument. In terms of race and ethnicity, whites are more likely to respond favorably to same-race representatives than African Americans (Gay, 2002 ). Whites are more likely to remember what their legislators have accomplished, to approve of their job perfor- mance, and to view them as resources when their representatives are white. African Americans do not have the same positive responses to their African American rep- resentatives. Contrary to expectations, then, it is white constituents who react most positively to having a same-race representative. However, this deals with attitudes toward individual members of Congress rather than attitudes toward the institution. Approval of Congress as an institution is not related to descriptive representation (Gay, 2002 ). Support for the importance of descriptive representation increases when attention shifts from the federal to the local level. African Americans are more likely to trust their local government when they have an African American mayor than a white mayor (Abney & Hutcheson, 1981 ; Howell & Fagan, 1988 ). Latinos also feel less alienated from the political system when they are descriptively repre- sented (Pantoja & Segura, 2003 ), likely because they feel less excluded from the political system (Abramson, 1972 ; Bobo & Gilliam, 1990 ). The theoretical argument is better supported when the focus turns to women. As the proportion of female legislators increases, women view the legislature as more legitimate (Norris & Franklin, 1997 ). Schwindt-Bayer and Mishler ( 2005 ) fi nd, in a cross-national study, that women’s descriptive representation is signifi cantly related to perceived legitimacy of the government. As with race and ethnicity, the descrip- tive representation of women has a more pronounced effect at the local rather than the national level. Focusing on people with a moderate amount of political aware- ness, Ulbig ( 2007 ) found that women living in municipalities with more female representation had signifi cantly more trust in the municipal government than women who experienced more male representation. Interestingly, men reacted in the oppo- site way, becoming much less trusting the more women representatives there were in local government. Familiarity . We conceive of familiarity with the political institution as political knowledge, which we expect to be a key moderator affecting whether interpersonal trust can affect institutional trust within politics. One requirement that must occur before interpersonal trust can affect institutional trust is knowing the people who compose the institution. In the case of political institutions, this basic requirement Examining the Relationship Between Interpersonal and Institutional Trust… 108 is not met by much of the American public. Political science is rife with studies bemoaning Americans’ lack of knowledge concerning politics (Gaziano, 1997 ; Gilens, Vavreck, & Cohen, 2007 ; Prior, 2005 ). In Delli Carpini and Keeter’s ( 1993 ) highly infl uential study on political knowledge, only 29 % of the sample could name their own member of the House of Representatives. If someone cannot name his/her own House member, he/she probably also does not have a sense of interpersonal trust toward that member; lacking that, his/her interpersonal trust cannot affect insti- tutional trust. While a certain level of political knowledge is required for interpersonal trust to affect institutional trust, political knowledge may also provide a buffer that prevents interpersonal trust from affecting institutional trust. Those with more political knowl- edge should be better at separating their feelings concerning what Hibbing and Theiss-Morse ( 1995 ) called the Constitutional and the Washington system, or alterna- tively what Easton ( 1965 ) called the regime and the current authorities. In both cases, the former involve political institutions while the latter involve the individuals who compose those institutions. Only those with an adequate understanding of the politi- cal system can separate the disagreeable actions of the current occupants of an institu- tion from their feelings about the institution itself. Both McCloskey and Zaller ( 1984 ) and Delli Carpini and Keeter ( 1993 ), for example, show that those with more political knowledge were more likely to support democratic values, a key component of which is supporting the political system even when they dislike the current political authori- ties. A survey by Hibbing and Theiss-Morse ( 1995 ) fi nds more direct evidence that knowledgeable citizens are more likely than the average citizen to separate their feel- ings about the people who compose an institution from the institution itself. In their survey, while all groups of people disliked members of Congress and liked Congress as an institution, political involvement, which often is a proxy for political knowl- edge, increased the disconnect between these two types of evaluations. Those who are more involved in politics are more likely to disapprove of members of Congress but also more likely to approve of the institution of Congress. Thus, it appears that know- ing more about politics, and presumably more about the individual members compos- ing an institution, does not necessarily lead to a greater relationship between interpersonal and institutional trust but instead may inhibit that relationship. Health Care Demographics The focus within the health care literature has been primarily on trust in a specifi c physician, but this research sheds light on whether interpersonal and institutional trust infl uence one another in this context. Research documents that trust in one’s own physician infl uences patient satisfaction, adherence to treatment, continuity with a provider, disclosure of medically relevant information, and seeking health- care services (Calnan & Rowe, 2006 ; Saha, Jacobs, Moore, & Beach, 2010 ). The C. Campos-Castillo et al. 109 fact that trust has been found to vary based on individual-level patient factors including gender, race, and education has raised numerous questions about whether trust explains health and health care disparities. Indeed, the literature suggests whites, women, and those with more education are generally more trusting than their counterparts. The evidence suggests African Americans and Latinos are less likely than whites to trust their physician, even after controlling for socioeconomic status, health status, and healthcare access (Armstrong, Ravenell, McMurphy, & Putt, 2007 ; Boulware, Cooper, Ratner, LaVeist, & Powe, 2003 ; Johnson, Saha, Arbelaez, Beach, & Cooper, 2004 ; LaVeist, Nickerson, & Bowie, 2000 ; Peek et al., 2013 ; Schnittker, 2004 ). With regard to gender, the evidence suggests men are less likely to trust their health care provider compared to women (Armstrong et al., 2007 ; Schnittker, 2004 ). Race moderates this relationship for women, in that black women generally report lower trust than white women (Armstrong et al., 2007 ). Lastly, studies have found people with less education, particularly less than a high school diploma, report lower trust than those with a high school diploma and/or a college degree (Schnittker, 2004 ). The persistence of these fi ndings speaks to the relationship between interper- sonal and institutional trust within health care. A confl uence of factors predisposes certain subgroups not to trust in physicians or in health care more generally. Consider, for example, the residue of historic and contemporary racial discrimina- tion in health care. Although the 1932 Tuskegee Syphilis Study is often the most well-known example of racial discrimination in medical research, experimentation and poor treatment occurred before and after this oft-cited historical event (Gamble, 1997 ). A signifi cant body of literature documents more recent instances of experi- mentation and substandard medical care (Dittmer, 2009 ; Washington, 2006 ). All told, historical and contemporary social forces likely affect racial minorities’ trust in the health care context (Gamble, 1997 ; Washington, 2006 ). These issues are particularly relevant in the context of minority, female, and low- income patients who may be more likely to experience discrimination, both in gen- eral and in health care settings. For example, the literature suggests a lack of trust may not necessarily be focused on a single provider. Rather, negative experience with one provider may lead to lower trust of the health care sector in general (LaVeist, Isaac, & Williams, 2009 ; Peek, Sayad, & Markwardt, 2008 ). Consistent with this, other research fi nds that black women tend to have low trust in primary care provid- ers, which is often associated with lower trust in their health care team (Kaiser et al., 2011 ). Based on extant research in the medical fi eld, interpersonal and institutional trust are interdependent such that a person’s assessment of one level or domain is likely to affect the other and demographic variables infl uence this relationship. Familiarity The persistence of demographic differences in trust raises the question of how familiarity contributes to the relationship between institutional and interpersonal trust in health care. Just as in politics, the lack of familiarity with a key institutional Examining the Relationship Between Interpersonal and Institutional Trust… 110 representative—in this case, a specifi c health care professional—hinders the trans- lation of interpersonal to institutional trust. Many studies document that utiliza- tion—i.e., experience with specifi c healthcare professionals—is positively associated with trust (O’Malley, Sheppard, Schwartz, & Mandelblatt, 2004 ; Whetten et al., 2006 ). In the case of racial and ethnic minorities, however, the issue is com- plex because racial and ethnic minorities may cite lower trust precisely because of their experience, and even with a lack of recent or regular experience with a physi- cian. One study (Campos-Castillo, in press ) fi nds that racial and ethnic differences in trust in health care professionals (the role, not a specifi c person who occupies the role) are equivalent between those who are more familiar (i.e., had a recent health care experience) and less familiar (i.e., did not have a recent health care expe- rience). Thus, further research is needed. Discussion Whether interpersonal trust and institutional trust infl uence one another is a common question raised within the literature. Whereas others have focused on defi ning “trust,” we considered how the defi nition of an “institution” impacts the answer to this question. Our defi nition supports the conceptualization of an institution as an organization of “ bricks and mortar” or as a group of people in an identifi ed role. Two characteristics of institutions, as we have defi ned them, are that they are robust to the turnover of individuals and that they vary with respect to proximity to a trustor. Whether an institution is local or remote to the trustor infl uences the type of relation- ship between interpersonal and institutional trust. In the context of these features, we also considered how the individual-level characteristics of the trustor, demographics and familiarity, factor into the relationship between interpersonal and institutional trust. A close examination of research questions that scholars have asked within the two illustrative institutional contexts—health care and the political arena—reveals many differences and agreements. While the approach to the problem has differed, there is much that each institutional context can learn from the other. Many within health care, for example, lament that progress in understanding trust falls behind the progress in other fi elds (e.g., Gilson, 2003 ; Ozawa & Sripad, 2013 ). Indeed, even a cursory overview of the literature reveals stark differences. Whereas the literature in politics can easily be organized based on which specifi c institution is the focus, within health care it is very diffi cult to develop such clear organization. Part of this, as we stated earlier, has to do with the changes in the structure of healthcare delivery, which blur the boundaries of where an institution starts and ends. We noted, however, that in the few instances in which researchers have clearly defi ned what an “institution” is, respondents were able to differentiate among the numerous referents. Future research on institutional trust within health care should defi ne carefully what comprises the referent. One consistent trend across both domains is that racial minorities are less likely to trust institutions than whites. Reducing this gap may be easier for local than for C. Campos-Castillo et al. 111 remote institutions. If the members of the local institution provide a positive experi- ence for those interacting with them, the interpersonal trust developed will likely transfer into institutional trust. The same may not hold for remote institutions. Even if the people composing the remote institution provide positive experiences, the interpersonal trust may not transfer into institutional trust. This can make it diffi cult to bridge the racial trust gap for remote institutions. Even when traditionally disen- franchised groups perceive an individual within an institution providing benefi cial services—whether as a representative in Congress or as a health care provider— they may separate that individual’s actions from the institution. This dynamic can be seen in the impact of descriptive representation, which increases trust for local institutions but not for remote national institutions. Some other strategy besides individuals providing positive services may be required to increase trust in remote institutions. In both contexts we noted a paucity of research that examined explicitly the direction of causality. We relied on peripheral but relevant research to develop a claim that the extent to which an institution is remote or local to the trustor impacts whether interpersonal trust affects institutional trust, or the reverse. Such causal claims are best examined through longitudinal research or controlled laboratory environments, methods rarely used by researchers in politics and health care to examine trust (for some exceptions, see Hall, Dugan, Balkrishnan, & Bradley, 2002 ; Pearson, Kleinman, Rusinak, & Levinson, 2006 ; Scherer & Curry, 2010 ). Given the heightened cynicism many Americans feel toward a variety of institutions and the individuals composing those institutions, determining whether interpersonal trust can increase institutional trust or vice versa is of the utmost importance. Lastly, current changes in the health care and political arenas may potentially complicate the delineation of what constitutes an institution. For example, contem- porary changes stemming from the passage of the Affordable Care Act (ACA) stand to open the door for greater prominence of existing actors that the public tradition- ally does not consider to be members of the health care fi eld (e.g., lawyers, drug courts) and of the rise of brand new actors (e.g., community health workers) needed to fi ll new roles (Kellogg, 2014 ; Peek et al., 2012 ). The recent push by the federal government to incentivize the adoption of electronic health records (EHRs) also complicates the fi eld. A recent study, for example, found that patients’ trust in gov- ernment impacts their acceptance of federal involvement in the push for EHR adop- tion (Herian, Shank, & Abdel-Monem, 2014 ). These two institutional contexts—the political arena and health care—while examined separately thus far will increas- ingly need to be examined jointly by researchers. References Abelson, J., Miller, F. A., & Giacomini, M. (2009). What does it mean to trust a health system?: A qualitative study of Canadian health care values. Health Policy, 91 , 63–70. Abney, F. G., & Hutcheson, J. (1981). Race, representation, and trust: Changes in attitudes after the election of a black mayor. Public Opinion Quarterly, 45 , 91–101. Examining the Relationship Between Interpersonal and Institutional Trust… 112 Abramson, P. (1972). Political effi cacy and political trust among black schoolchildren: Two expla- nations. The Journal of Politics, 34 , 1243–1275. Armstrong, K., Ravenell, K. L., McMurphy, S., & Putt, M. (2007). Racial/ethnic differences in physician distrust in the United States. American Journal of Public Health, 97 , 1283–1289. Barley, S. R., & Tolbert, P. S. (1997). Institutionalization and structuration: Studying the links between actions and institution. Organization Studies, 18 , 93–117. Blendon, R. J., & Benson, J. M. (2001). Americans’ views on health policy: A fi fty-year historical perspective. Health Affairs, 20 , 33–46. Bobo, L., & Gilliam, F. (1990). Race, sociopolitical participation, and Black empowerment. The American Political Science Review, 84 , 377–393. Bond, J. R., & Fleisher, R. (2001). The polls: Partisanship and presidential performance evalua- tions. Presidential Studies Quarterly, 31 , 529–540. Boulware, L. E., Cooper, L. A., Ratner, L. E., LaVeist, T. A., & Powe, N. R. (2003). Race and trust in the health care system. Public Health Reports, 118 , 358. Caldeira, G. A. (1986). Neither the Purse Nor the Sword: Dynamics of Public Confi dence in the Supreme Court. American Political Science Review, 80 , 1209–1226. Calnan, M., & Rowe, R. (2006). Researching trust relations in health care. Journal of Health Organization and Management, 20 , 349–358. Campos-Castillo, C. (in press). Racial and ethnic differences in trust in sources of health informa- tion: A generalized distrust in physicians? Research in the Sociology of Health Care. Casey, G. (1974). The Supreme Court and myth: An empirical investigation. Law and Society Review, 8 , 385–420. Citrin, J. (1974). Comment: The political relevance of trust in government. The American Political Science Review, 68 , 973–988. Cook, K. S., & Stepanikova, I. (2008). The health care outcomes of trust: A review of empirical evidence. In J. Brownlie, A. Greene, & A. Howson (Eds.), Researching trust and health (pp. 194–214). New York: Routledge. Delli Carpini, M. X., & Keeter, S. (1993). Measuring political knowledge: Putting fi rst things fi rst. American Journal of Political Science, 37 , 1170–1206. Dittmer, J. (2009). The good doctors: The medical committee for human rights and the struggle for social justice in health care . New York: Bloomsbury. Easton, D. (1965). A systems analysis of political life . New York: Wiley. Fenno, R. F., Jr. (1975). If, as Ralph Nader says, Congress is ‘the broken branch’, how come we love our congressmen so much? In N. J. Edstein (Ed.), Congress in change: Evolution and reform (pp. 277–287). New York: Praeger. Gamble, V. N. (1997). Under the shadow of Tuskegee: African Americans and health care. American Journal of Public Health, 87 , 1773–1778. Gay, C. (2002). Spirals of trust? The effect of descriptive representative on the relationship between citizens and their government. American Journal of Political Science, 46 , 717–732. Gaziano, C. (1997). Forecast 2000: Widening knowledge gaps. Journalism & Mass Communication, 74 , 237–264. Gibson, J. L. (1989). Understandings of justice: Institutional legitimacy, procedural justice, and political tolerance. Law and Society Review, 23 , 469–496. Gibson, J. L. (1991). Institutional legitimacy, procedural Justice, and compliance with Supreme Court decisions: A question of causality. Law and Society Review, 25 , 631–636. Gibson, J. L., & Caldeira, G. A. (2011). Has legal realism damaged the legitimacy of the U.S. Supreme Court. Law and Society Review, 45 , 195–219. Giddens, A. (1990). Consequences of modernity . Cambridge, MA: Polity. Gilens, M. (1988). Gender and support for Reagan: A comprehensive model of presidential approval. American Journal of Political Science, 32 , 19–49. Gilens, M., Vavreck, L., & Cohen, M. (2007). The mass media and the public’s assessments of presidential candidates, 1952–2000. The Journal of Politics, 69 , 1160–1175. Gilson, L. (2003). Trust and the development of health care as a social institution. Social Science & Medicine, 56 , 1453–1468. C. Campos-Castillo et al. 113 Gilson, L. (2006). Trust in health care: Theoretical perspectives and research needs. Journal of Health Organization and Management, 20 , 359–375. Gössling, T. (2004). Proximity, trust, and morality in networks. European Planning Studies, 12 (5), 675–689. Hall, M. A., Camacho, F., Dugan, E., & Balkrishnan, R. (2002). Trust in the medical profession: Conceptual and measurement issues. Health Services Research, 37 , 1419–1439. Hall, M. A., Dugan, E., Balkrishnan, R., & Bradley, D. (2002). How cisclosing HMO physician incentives affects trust. Health Affairs, 21 , 197–206. Hall, M. A., Dugan, E., Zheng, B., & Mishra, A. K. (2001). Trust in physicians and medical institu- tions: What is it, can it be measured, and does it matter? The Milbank Quarterly, 79 , 613–639. Herian, M. N., Shank, N. C., & Abdel-Monem, T. L. (2014). Trust in government and support for governmental regulation: The case of electronic health records. Health Expectations, 17 , 784–794. Hetherington, M. (2005). Why trust matters: Declining political trust and the decline of American liberalism . Princeton, NJ: Princeton University Press. Hibbing, J. R., & Theiss-Morse, E. (1995). Congress as public enemy . Cambridge, MA: Cambridge University Press. Howell, S., & Fagan, D. (1988). Race and trust in government: Testing the political reality model. Public Opinion Quarterly, 52 , 343–350. Johnson, R. L., Saha, S., Arbelaez, J. J., Beach, M. C., & Cooper, L. A. (2004). Racial and ethnic differences in patient perceptions of bias and cultural competence in health care. Journal of General Internal Medicine, 19 , 101–110. Kaiser, K., Rauscher, G., Jacobs, E., Strenski, T., Estwing Ferrans, C., & Warnecke, R. (2011). The import of trust in regular providers to trust in cancer physicians among White, African American, and Hispanic breast cancer patients. Journal of General Internal Medicine, 26 , 51–57. Kellogg, K. C. (2014). Brokerage professions and implementing reform in an age of experts. American Sociological Review, 79 , 912–941. Kramer, R. M. (1999). Trust and distrust in organizations: Emerging perspectives, enduring ques- tions. Annual Review of Psychology, 50 , 569–598. LaVeist, T. A., Isaac, L. A., & Williams, K. P. (2009). Mistrust of health care organizations is associated with underutilization of health services. Health Services Research, 44 , 2093–2105. LaVeist, T. A., Nickerson, K. J., & Bowie, J. V. (2000). Attitudes about racism, medical mistrust, and satisfaction with care among African American and White cardiac patients. Medical Care Research and Review, 57 (Supplement 1), 146–161. Maguire, S., & Phillips, N. (2008). ‘Citibankers’ at citigroup: A study of the loss of institutional trust after a merger. Journal of Management Studies, 45 , 372–401. Mansbridge, J. (1999). Should Blacks represent Blacks and women represent women? A contin- gent ‘yes’. The Journal of Politics, 61 , 628–657. Mayer, R. C., Davis, J. H., & Schoorman, F. D. (2006). An integrative model of organizational trust. The Academy of Management Review, 20 , 709–734 (Original work published 1995). McCloskey, H., & Zaller, J. (1984). The American ethos: Public attitudes toward capitalism and democracy . Cambridge, MA: Harvard University Press. Mechanic, D. (1996). Changing medical organization and the erosion of trust. The Milbank Quarterly, 74 , 171–189. Mondak, J. (1993). Institutional legitimacy and procedural justice: Reexamining the question of causality. Law and Society Review, 27 , 599–608. Moy, P., & Pfau, M. (2000). With malice toward all? The media and public confi dence in demo- cratic institutions . Westport, CT: Praeger. Norris, P., & Franklin, M. (1997). Social representation. European Journal of Political Research, 32 , 185–210. O’Malley, A. S., Sheppard, V. B., Schwartz, M., & Mandelblatt, J. (2004). The role of trust in use of preventive services among low-income African–American women. Preventive Medicine, 38 , 777–785. Examining the Relationship Between Interpersonal and Institutional Trust… 114 Oxford English Dictionary. (2015). “Institution,” noun . Oxford, England: Oxford University Press. Retrieved from Institution Ozawa, S., & Sripad, P. (2013). How do you measure trust in the health system? A systematic review of the literature. Social Science & Medicine, 91 , 10–14. Pantoja, A., & Segura, G. (2003). Does ethnicity matter? Descriptive representation in legislatures and political alienation among Latinos. Social Science Quarterly, 84 , 441–460. Pearson, S. D., Kleinman, K., Rusinak, D., & Levinson, W. (2006). A trial of disclosing physi- cians’ fi nancial incentives to patients. Archives of Internal Medicine, 166 , 623–628. Peek, M. E., Gorawara-Bhat, R., Quinn, M. T., Odoms-Young, A., Wilson, S. C., & Chin, M. H. (2013). Patient trust in physicians and shared decision-making among African–Americans with diabetes. Health Communication, 28 , 616–623. Peek, M., Sayad, J., & Markwardt, R. (2008). Fear, fatalism and breast cancer screening in low- income African–American women: The role of clinicians and the health care system. Journal of General Internal Medicine, 23 , 1847–1853. Peek, M. E., Wilkes, A. E., Roberson, T. S., Goddu, A. P., Nocon, R. S., Tang, H., et al. (2012). Early lessons from an initiative on Chicago’s south side to reduce disparities in diabetes care and outcomes. Health Affairs, 31 , 177–186. Pescosolido, B. A., Tuch, S. A., & Martin, J. K. (2001). The profession of medicine and the public: Examining Americans’ changing confi dence in physician authority from the beginning of the ‘health care crisis’ to the era of health care reform. Journal of Health and Social Behavior, 42 , 1–16. Pew Research Center. (2013, October 18). Trust in government nears record low, but most federal agencies are viewed favorably. Retrieved from trust-in-government-nears-record-low-but-most-federal-agencies-are-viewed-favorably/ Prior, M. (2005). News v entertainment: How increasing media choice widens gaps in political knowledge and turnout. American Journal of Political Science, 49 , 577–592. Rao, B., & Hellander, I. (2014). The widening US health care crisis three years after the passage of ‘Obamacare’. International Journal of Health Services, 44 , 215–232. Rein, L., & O’Keefe, E. (2010, October 18). New post poll fi nds negativity toward federal workers. Washington Post . Retrieved from Richardson, L., Houston, D., & Hadjiharalambous, C. S. (2001). Public confi dence in the leaders of American governmental institutions. In J. R. Hibbing & E. Theiss-Morse (Eds.), What is it about government that Americans dislike? (pp. 83–97). New York: Cambridge University Rousseau, D. M., Sitkin, S. B., Burt, R. S., & Camerer, C. (1998). Introduction to special topic forum: Not so different after all: A cross-discipline view of trust. The Academy of Management Review, 23 , 393–404. Saha, S., Jacobs, E. A., Moore, R. D., & Beach, M. C. (2010). Trust in physicians and racial dis- parities in HIV care. AIDS Patient Care and STDs, 24 , 415–420. Scheb, J. M., & Lyons, W. (2000). The myth of legality and public evaluation of the Supreme Court. Social Science Quarterly, 81 , 928–940. Scherer, N., & Curry, B. (2010). Does descriptive race representation enhance institutional legiti- macy? The case of the U.S. courts. Journal of Politics, 72 , 90–104. Schilke, O., & Cook, K. S. (2013). A cross-level process theory of trust development in interorga- nizational relationships. Strategic Organization, 11 , 281–303. Schnittker, J. (2004). Social distance in the clinical encounter: Interactional and sociodemographic foundations for mistrust in physicians. Social Psychology Quarterly, 67 , 217–235. Schoorman, F. D., Mayer, R. C., & Davis, J. H. (2007). An integrative model of organizational trust: Past, present, and future. The Academy of Management Review, 32 , 344–354. Schoorman, F. D., Wood, M. M., & Breuer, C. (2015). Would trust by any other name smell as sweet? Refl ections on the meanings and uses of trust across disciplines and context. In B. H. Bornstein & A. J. Tomkins (Eds.), Motivating cooperation and compliance with authority: The C. Campos-Castillo et al. 115 role of institutional trust ( Vol. 62nd Nebraska Symposium on Motivation , pp. 13–35). New York: Springer. Schwindt-Bayer, L., & Mishler, W. (2005). An integrated model of women’s representation. Journal of Politics, 67 , 407–428. Scott, W. R., Ruef, M., Mendel, P. J., & Caronna, C. A. (2000). Institutional change and healthcare organizations: From professional dominance to managed care . Chicago: University of Chicago Shamir, B., & Lapidot, Y. (2003). Trust in organizational superiors: Systemic and collective con- siderations. Organization Studies, 24 , 463–491. Swetz, K. M., Crowley, M. E., & Maines, T. D. (2013). What makes a Catholic hospital “Catholic” in an age of religious-secular collaboration? The case of the Saint Mary’s Hospital and the Mayo Clinic. HEC Forum, 25 , 95–107. Tyler, T. R. (2006a). Psychologtical perspectives on legitimacy and legitimation. Annual Review of Psychology, 57 , 375–400. Tyler, T. R. (2006b). Why people obey the law . Princeton, NJ: Princeton University Press. Tyler, T. R., & Rasinski, K. (1991). Procedural justice, institutional legitimacy, and the acceptance of unpopular US Supreme Court decisions: A reply to Gibson. Law and Society Review, 25 , 621–630. Ulbig, S. (2007). Gendering municipal government: Female descriptive representation and feel- ings of political trust. Social Science Quarterly, 88 , 1106–1123. Washington, H. A. (2006). Medical apartheid: The dark history of medical experimentation on Black Americans from colonial times to the present . New York: Doubleday Books. Whetten, K., Leserman, J., Whetten, R., Ostermann, J., Thielman, N., Swartz, M., et al. (2006). Exploring lack of trust in care providers and the government as a barrier to health service use. American Journal of Public Health, 96 , 716–721. Williams, J. (1985). Systematic infl uences on political trust: The importance of perceived institu- tional performance. Political Methodology, 11 , 125–142. Williams, M. (1998). Voice, trust, and memory . Princeton, NJ: Princeton University Press. Zheng, B., Hall, M. A., Dugan, E., Kidd, K. E., & Levine, D. (2002). Development of a scale to measure patients’ trust in health insurers. Health Services Research, 37 , 185–200. Zucker, L. G. (1986). Production of trust: Institutional sources of economic structure, 1840–1920. Research in Organizational Behavior, 8 , 53–111. Examining the Relationship Between Interpersonal and Institutional Trust…
https://www.researchgate.net/publication/295257316_Examining_the_Relationship_Between_Interpersonal_and_Institutional_Trust_in_Political_and_Health_Care_Contexts
CC-MAIN-2022-33
en
refinedweb
How to Use Namespaces for Advanced Android Configuration Today, we would like to introduce a feature that we call Custom Configuration. With Custom Config, Esper customers can more effectively finetune Android devices in every way possible. A custom config uses JSON to define changes or actions. These are categorized into sections we like to call Namespaces. Each namespace serves a special purpose. Recently, we introduced 3 new namespaces, the “settings,” “dpcParams,” and “scripts” namespaces: - The “settings” namespace allows you to make over-the-air changes to nearly all Android settings - The “dpcParams” exposes certain tunable parameters within the Esper device Agent. - The “scripts” namespace allows you to perform arbitrary launch actions on the device – whether that be an explicit/implicit intent, a service or a broadcast. To use these namespaces, you will need to perform a POST request using Curl, Postman, or a similar tool to an API. POST <endpoint>-api.esper.cloud/api/v0/enterprise/<enterprise-id>/command/ e.g: Add an Authorization token and Content-Type header to call this API. Authorization: Bearer <your-token-here> Content-Type: application/json Please follow the instructions here to generate an access token. Now, let’s look at the individual namespaces to understand which BODY needs to be sent with this POST request. The ‘settings’ namespace: Android supports three major categories of Settings – System Settings, Global Settings and Secure Settings. These categories collectively comprise over 500 total settings for single-purpose Android experiences. Most of the important System Settings can be controlled seamlessly through Esper’s cloud console UI. But, modifying any of the Secure Settings requires an enabled Supervisor plugin for your devices. Modifying global settings also generally requires a Supervisor plugin, with a few exceptions that can now be modified using JSON. You’ll need to send the following POST body in order to make use of this namespace. { "command_type": "DEVICE", "command": "UPDATE_DEVICE_CONFIG", "command_args": { "custom_settings_config": { "settings": { "global": [ { "key": "data_roaming", "value": "1" } ], "secure": [ { "key": "doze_enabled", "value": "0" } ], "system": [ { "key": "screen_off_timeout", "value": "60000" } ] } } }, "devices": [ "" ], "device_type": "all" } You’ll need to substitute the device’s UUID to which you need to send the command: devices": [ "<device-uuid-here>" ], The UUID of the device can be found out in your browser’s address bar when you view a device. See the screenshot below: Here 77204919-2f56-40df-85c8-f7740bff93ca is the UUID. The area of focus in the above mentioned POST body is: "settings": { "global": [ { "key": "data_roaming", "value": "1" } ], "secure": [ { "key": "doze_enabled", "value": "0" } ], "system": [ { "key": "screen_off_timeout", "value": "60000" } ] } In the code example above, we have specified three different settings in three different categories. The “global”, “secure” and “system” sub namespaces accept arrays, which means you can send multiple settings across all three categories in a single payload. All settings that require secure access, including global settings, will require a supervisor plugin. After navigating to the official Android documentation page, there are a few things to note: - Most of the Settings have been deprecated - The majority of the Secure settings are undocumented, including the settings used in the example above - To explore more Settings you can update, navigate to the AOSP code for the Settings Provider If you need any help finding a particular setting, please don’t hesitate to contact us. It is also important to note that Secure & Global Settings must be used with extreme caution as these can cause unexpected issues if not handled properly. Once you POST this request, you can go and check the ‘Event Feed’ of your device to know the status of the operation. The ‘dpcParams’ namespace: Now that we’ve seen how we can practically tune any Android Settings, it’s time to talk about some other settings which determine how Esper can work best for you. We introduced the dpcParams namespace to allow you to control how the Esper Agent’s different parameters can be optimized to your business needs. We currently support five different DPC parameters and we are actively working to bring more tunable settings for you. Let us look at what these Fabulous Five can do! Now that we know what all the five parameters can do, let’s look at a few examples on how to utilize these. The POST body for the above namespace would be as follows: { "command_type": "DEVICE", "command": "UPDATE_DEVICE_CONFIG", "command_args": { "custom_settings_config": { "dpcParams": [ { "key": "wifiRevertTimeout", "value": "150000" } ] } }, "devices": [ "" ], "device_type": "all" } } The point of focus is: "dpcParams": [ { "key": "wifiRevertTimeout", "value": "150000" } ] The namespace accepts an array, hence you can specify multiple parameters in a single payload. It’s important to note that, no matter the data type of the parameter, the value of the parameter must always be a String e.g “value”: “false”, “value”: “true” Here’s an example of how you might use the nonSuspendableLaunchersList "dpcParams": [ { "key": "nonSuspendableLaunchersList", "value": "com.android.launcher2" } ] Note that you can specify multiple launchers separated by a comma. The ‘scripts’ namespace: The scripts namespace enables the execution of arbitrary components within or outside the DPC. A script, as the name suggests, is a snippet which defines an action. Here’s a preview of how this namespace appears: { "command_type": "DEVICE", "command": "UPDATE_DEVICE_CONFIG", "command_args": { "custom_settings_config": { "scripts": [ { "action": "LAUNCH", "actionParams": { "intentAction": "android.settings.SETTINGS", "extras": { "test1": 12, "test2": "Hello" } } } ] } }, "devices": [ "<device-uuid-here>" ], "device_type": "all" } The point of focus is: { "scripts": [ { "action": "LAUNCH", "actionParams": { "intentAction": "android.settings.SETTINGS", "extras": { "test1": 12, "test2": "Hello" } } } ] } The namespace accepts an array, which means you can enter in multiple scripts to execute. The “actionParams” host the dictionary of parameters needed to define what kind of launch this is. An intent, as Android defines it, “is an abstract description of an operation to be performed”. An intent can be used to launch an activity, start a service, or send a broadcast. Let us look at the different actionParams that we support and what their purpose is. You can access all testing materials from the Esper device sample Applications repository. In this repository, you’ll find example JSONs to perform different kinds of launches. To learn more about Esper’s developer tools, sign up and start using Esper free up to 100 Android devices today.
https://blog.esper.io/introducing-esper-advanced-custom-config-namespaces/
CC-MAIN-2022-33
en
refinedweb
In this hands-on lab, you will be tasked with accessing a persistent volume from a pod in order to view the available volumes inside the Kubernetes cluster. By default, pods cannot access volumes directly, so you will also need to create a cluster role to provide authorization to the pod. Additionally, you cannot access the API server directly without authentication, so you will need to run kubectl in proxy mode to retrieve information about the volumes. Learning Objectives Successfully complete this lab by achieving the following learning objectives: - View the Persistent Volume - Use one command that will list the persistent volumes within the cluster. - Create a ClusterRole and ClusterRoleBinding - Use one command that will create a new ClusterRole with the verb getand listto the resource persistentvolumes. - Use one command that will create a new ClusterRoleBinding to the ClusterRole, in the webnamespace and using the defaultservice account. - Create a pod to access the PV - Create the YAML file including the two containers, using the two images curlimages/curland linuxacademycontent/kubectl-proxy. - Issue a command to the curl container to sleep for 1 hour (3600 seconds). - Apply the YAML to the Kubernetes cluster to run the pod. - Request access to the PV from the pod - Open a shell inside the container. - From the container shell prompt, issue the curlcommand to request persistent volumes from the API server.
https://acloudguru.com/hands-on-labs/creating-a-clusterrole-to-access-a-pv-in-kubernetes
CC-MAIN-2022-33
en
refinedweb
Tag:player Swift: audiotoolbox development C program local player IOS local player, using avaudioplayer is very simple, Take the audio resource file path, create avaudioplayer, thenprepareToPlay, you canplayYes Using audiotoolbox to develop C program local player, the routine is also very simple Playback is divided into three steps: 1. Get the audio file and create an audio output queue Given a local resource path […] Another idea of wav player is to buffer avaudiopcmbuffer This article continues to discuss those things about audiotoolbox and wav player Play the routine in three steps: Read the data first and restore the sampling data from the file For audio resource files, use audio file services, and audio file stream services This step, the next two blogs, are focused on, From the wav […] Installing flash/shockwave player under Linux Now, more and more flash works are applied to websitesDesignMedium. However, many browsers used in the Linux platform do not have flash/shockwave player plug-ins due to older versions or other reasons, so they cannot enjoy wonderful flash works, which has become a great pity for Linux enthusiasts. However, Linux enthusiasts can now download the dedicated […] HTML5 custom player core code Web page HTML Copy code The code is as follows: <body style=”background-color:#8EEE5EE;”> <section> <video width=”640″ height=”360″> <source src=”videos/Introduction.mp4″> </video> <nav> <div> <button type=”button”>Play</button> </div> <div> <div></div> </div> <div style=”clear:both”></div> </nav> </section> </body> CSS Style Copy code The code is as follows: body{ text-align:center; } header,section,footer,aside,nav,article,hgroup{ display:block; } #skin{ width:700px; margin:10px auto; padding:5px; background:red; border:4px solid […] [Mini-Mechanisms#1] State-Driven Camera Motion == Intro == use case: Suppose a room for chess and card games. After entering the room, the player is in a standing perspective. After clicking the seat, the player is seated, the perspective is switched, and the game starts Mechanism: State-Driven Camera Motion == Problems == [1] Locate UI Place the […] How can Apple view the version number of flash player for Mac How to check the version number of flash player in MAC system? When you watch a movie on the web, you suddenly find that your Flash player can’t be used, or you want to check the version number of flash player on your Mac. What should you do? Do you have to look in the […] Android video player plug-in – mxvideo MXVideo introduce Android open source player based on dumpling player and kotlin, out of the box. Welcome to issue and pull requestProject address:MXVideo Small screen Full screen live broadcast Functional characteristics Any player kernel (including open source ijk, Google exo, Alibaba cloud, etc.) For single case playback, only one program can be played at the […] Using QuickTime player to realize screen video graphic tutorial in MAC system I believe many people don’t know that the built-in QuickTime player of the MAC system also has the function of on-screen video recording. Seriously, I didn’t know before (MARS). I just saw it on macstories todayThis article, I found this function, and it is very powerful under lion. In fact, QuickTime player has the function […] 21000 star! An open source, free and powerful video player library The following article comes from the attacking coder, author Cui Qingcai Recently, I was developing a front-end project, which uses the function of playing video, so I checked what front-end video player library can be used. Today, let’s share it with you. The name of this library is plyr. As the name suggests, it is […] Take you to realize audio and video playback with avplayer Project overview The following items are based on the actual application of AVPlayer, which can achieve the full play effect of audio playing, vertical and vertical screen video switching, and tiktok vertical screen.Project address:AVPlayerAudioVideoIf articles and projects are helpful to you, please give a star ⭐ Hello, your star ⭐ It is the driving force […] IOS technology sharing source code (just enter the pit of IOS? Don’t be afraid, come here and learn with me.) IOS technology sharing (APP icon production, apple purchase, payment, imitation WeChat friends circle, imitation WeChat image viewer, add post case, anti Sina @ people, imitation Alipay password box, imitation hair ring, tag, JS interactive +wk, loading web pages, adaptive cell height, TableView embedded player, anti Caton, custom watch box, select address, select time, select color, […] [Ubuntu] DLNA platform construction (build your own video and audio platform in your home and bedroom) 0. Build your own audio-visual learning platform in your home and bedroom I downloaded a lot of public classes and ESL podcast audio at home and abroad and slept in the mobile hard disk. Recently, I made a small tablet, so I almost thought of building a NAS on the bedroom soft routing Ubuntu system. […]
https://developpaper.com/tag/player/
CC-MAIN-2022-33
en
refinedweb
Iterators - Loading data¶ In this tutorial, we focus on how to feed data into a training or inference program. Most training and inference modules in MXNet accept data iterators, which simplifies this procedure, especially when reading large datasets. Here we discuss the API conventions and several provided iterators. Prerequisites¶ To complete this tutorial, we need: - MXNet. See the instructions for your operating system in Setup and Installation. - OpenCV Python library, Python Requests, Matplotlib and Jupyter Notebook. $ pip install opencv-python requests matplotlib jupyter - Set the environment variable MXNET_HOMEto the root of the MXNet source folder. $ git clone ~/mxnet $ export MXNET_HOME='~/mxnet' MXNet Data Iterator¶ Data Iterators in MXNet are similar to Python iterator objects. In Python, the function iter allows fetching items sequentially by calling next() on iterable objects such as a Python list. Iterators provide an abstract interface for traversing various types of iterable collections without needing to expose details about the underlying data source. In MXNet, data iterators return a batch of data as DataBatch on each call to A DataBatch often contains n training examples and their corresponding labels. Here n is the batch_size of the iterator. At the end of the data stream when there is no more data to read, the iterator raises StopIteration exception like Python iter. The structure of DataBatch is defined here. Information such as name, shape, type and layout on each training example and their corresponding label can be provided as DataDesc data descriptor objects via the provide_data and provide_label properties in DataBatch. The structure of DataDesc is defined here. All IO in MXNet is handled via mx.io.DataIter and its subclasses. In this tutorial, we’ll discuss a few commonly used iterators provided by MXNet. Before diving into the details let’s setup the environment by importing some required packages: import mxnet as mx %matplotlib inline import os import subprocess import numpy as np import matplotlib.pyplot as plt import tarfile import warnings warnings.filterwarnings("ignore", category=DeprecationWarning) Reading data in memory¶ When data is stored in memory, backed by either an NDArray or numpy ndarray, we can use the NDArrayIter to read data as below: import numpy as np data = np.random.rand(100,3) label = np.random.randint(0, 10, (100,)) data_iter = mx.io.NDArrayIter(data=data, label=label, batch_size=30) for batch in data_iter: print([batch.data, batch.label, batch.pad]) Reading data from CSV files¶ MXNet provides CSVIter to read from CSV files and can be used as below: #lets save `data` into a csv file first and try reading it back np.savetxt('data.csv', data, delimiter=',') data_iter = mx.io.CSVIter(data_csv='data.csv', data_shape=(3,), batch_size=30) for batch in data_iter: print([batch.data, batch.pad]) Custom Iterator¶ When the built-in iterators do not suit your application needs, you can create your own custom data iterator. An iterator in MXNet should - Implement next()in Python2or __next()__in Python3, returning a DataBatchor raising a StopIterationexception if at the end of the data stream. - Implement the reset()method to restart reading from the beginning. - Have a provide_dataattribute, consisting of a list of DataDescobjects that store the name, shape, type and layout information of the data (more info here). - Have a provide_labelattribute consisting of a list of DataDescobjects that store the name, shape, type and layout information of the label. When creating a new iterator, you can either start from scratch and define an iterator or reuse one of the existing iterators. For example, in the image captioning application, the input example is an image while the label is a sentence. Thus we can create a new iterator by: - creating a image_iterby using ImageRecordIterwhich provides multithreaded pre-fetch and augmentation. - creating a caption_iterby using NDArrayIteror the bucketing iterator provided in the rnn package. next()returns the combined result of image_iter.next()and caption_iter.next() The example below shows how to create a Simple iterator. class SimpleIter(mx.io.DataIter): def __init__(self, data_names, data_shapes, data_gen, label_names, label_shapes, label_gen, num_batches=10): self._provide_data = zip(data_names, data_shapes) self._provide_label = zip(label_names, label_shapes) self.num_batches = num_batches self.data_gen = data_gen self.label_gen = label_gen self.cur_batch = 0 def __iter__(self): return self def reset(self): self.cur_batch = 0 def __next__(self): return self.next() @property def provide_data(self): return self._provide_data @property def provide_label(self): return self._provide_label def next(self): if self.cur_batch < self.num_batches: self.cur_batch += 1 data = [mx.nd.array(g(d[1])) for d,g in zip(self._provide_data, self.data_gen)] label = [mx.nd.array(g(d[1])) for d,g in zip(self._provide_label, self.label_gen)] return mx.io.DataBatch(data, label) else: raise StopIteration We can use the above defined SimpleIter to train a simple MLP program below: import mxnet as mx num_classes = 10 net = mx.sym.Variable('data') net = mx.sym.FullyConnected(data=net, name='fc1', num_hidden=64) net = mx.sym.Activation(data=net, name='relu1', act_type="relu") net = mx.sym.FullyConnected(data=net, name='fc2', num_hidden=num_classes) net = mx.sym.SoftmaxOutput(data=net, name='softmax') print(net.list_arguments()) print(net.list_outputs()) Here, there are four variables that are learnable parameters: the weights and biases of FullyConnected layers fc1 and fc2, two variables for input data: data for the training examples and softmax_label contains the respective labels and the softmax_output. The data variables are called free variables in MXNet’s Symbol API. To execute a Symbol, they need to be bound with data. Click here learn more about Symbol. We use the data iterator to feed examples to a neural network via MXNet’s module API. import logging logging.basicConfig(level=logging.INFO) n = 32 data_iter = SimpleIter(['data'], [(n, 100)], [lambda s: np.random.uniform(-1, 1, s)], ['softmax_label'], [(n,)], [lambda s: np.random.randint(0, num_classes, s)]) mod = mx.mod.Module(symbol=net) mod.fit(data_iter, num_epoch=5) Record IO¶ Record IO is a file format used by MXNet for data IO. It compactly packs the data for efficient read and writes from distributed file system like Hadoop HDFS and AWS S3. You can learn more about the design of RecordIO here. MXNet provides MXRecordIO and MXIndexedRecordIO for sequential access of data and random access of the data. MXRecordIO¶ First, let’s look at an example on how to read and write sequentially using MXRecordIO. The files are named with a .rec extension. record = mx.recordio.MXRecordIO('tmp.rec', 'w') for i in range(5): record.write('record_%d'%i) record.close() We can read the data back by opening the file with an option r as below: record = mx.recordio.MXRecordIO('tmp.rec', 'r') while True: item = record.read() if not item: break print (item) record.close() MXIndexedRecordIO¶ MXIndexedRecordIO supports random or indexed access to the data. We will create an indexed record file and a corresponding index file as below: record = mx.recordio.MXIndexedRecordIO('tmp.idx', 'tmp.rec', 'w') for i in range(5): record.write_idx(i, 'record_%d'%i) record.close() Now, we can access the individual records using the keys record = mx.recordio.MXIndexedRecordIO('tmp.idx', 'tmp.rec', 'r') record.read_idx(3) You can also list all the keys in the file. record.keys Packing and Unpacking data¶ Each record in a .rec file can contain arbitrary binary data. However, most deep learning tasks require data to be input in label/data format. The mx.recordio package provides a few utility functions for such operations, namely: pack, unpack, pack_img, and unpack_img. Packing/Unpacking Binary Data¶ pack and unpack are used for storing float (or 1d array of float) label and binary data. The data is packed along with a header. The header structure is defined here. # pack data = 'data' label1 = 1.0 header1 = mx.recordio.IRHeader(flag=0, label=label1, id=1, id2=0) s1 = mx.recordio.pack(header1, data) label2 = [1.0, 2.0, 3.0] header2 = mx.recordio.IRHeader(flag=3, label=label2, id=2, id2=0) s2 = mx.recordio.pack(header2, data) # unpack print(mx.recordio.unpack(s1)) print(mx.recordio.unpack(s2)) Packing/Unpacking Image Data¶ MXNet provides pack_img and unpack_img to pack/unpack image data. Records packed by pack_img can be loaded by mx.io.ImageRecordIter. data = np.ones((3,3,1), dtype=np.uint8) label = 1.0 header = mx.recordio.IRHeader(flag=0, label=label, id=0, id2=0) s = mx.recordio.pack_img(header, data, quality=100, img_fmt='.jpg') # unpack_img print(mx.recordio.unpack_img(s)) Image IO¶ In this section, we will learn how to preprocess and load image data in MXNet. There are 4 ways of loading image data in MXNet. - Using mx.image.imdecode to load raw image files. - Using mx.img.ImageIterimplemented in Python which is very flexible to customization. It can read from .rec( RecordIO) files and raw image files. - Using mx.io.ImageRecordIterimplemented on the MXNet backend in C++. This is less flexible to customization but provides various language bindings. - Creating a Custom iterator inheriting mx.io.DataIter Preprocessing Images¶ Images can be preprocessed in different ways. We list some of them below: - Using mx.io.ImageRecordIterwhich is fast but not very flexible. It is great for simple tasks like image recognition but won’t work for more complex tasks like detection and segmentation. - Using mx.recordio.unpack_img(or cv2.imread, skimage, etc) + numpyis flexible but slow due to Python Global Interpreter Lock (GIL). - Using MXNet provided mx.imagepackage. It stores images in NDArrayformat and leverages MXNet’s dependency engine to automatically parallelize processing and circumvent GIL. Below, we demonstrate some of the frequently used preprocessing routines provided by the mx.image package. Let’s download sample images that we can work with. fname = mx.test_utils.download(url='', dirname='data', overwrite=False) tar = tarfile.open(fname) tar.extractall(path='./data') tar.close() mx.image.imdecode lets us load the images. imdecode provides a similar interface to OpenCV. Note: You will still need OpenCV(not the CV2 Python library) installed to use mx.image.imdecode. img = mx.image.imdecode(open('data/test_images/ILSVRC2012_val_00000001.JPEG', 'rb').read()) plt.imshow(img.asnumpy()); plt.show() Before we see how to read data using the two built-in Image iterators, lets get a sample Caltech 101 dataset that contains 101 classes of objects and converts them into record io format. Download and unzip fname = mx.test_utils.download(url='', dirname='data', overwrite=False) tar = tarfile.open(fname) tar.extractall(path='./data') tar.close() Let’s take a look at the data. As you can see, under the root folder (./data/101_ObjectCategories) every category has a subfolder(./data/101_ObjectCategories/yin_yang). Now let’s convert them into record io format using the im2rec.py utility script. First, we need to make a list that contains all the image files and their categories: os.system('python %s/tools/im2rec.py --list=1 --recursive=1 --shuffle=1 --test-ratio=0.2 data/caltech data/101_ObjectCategories'%os.environ['MXNET_HOME']) The resulting list file (./data/caltech_train.lst) is in the format index\t(one or more label)\tpath. In this case, there is only one label for each image but you can modify the list to add in more for multi-label training. Then we can use this list to create our record io file: os.system("python %s/tools/im2rec.py --num-thread=4 --pass-through=1 data/caltech data/101_ObjectCategories"%os.environ['MXNET_HOME']) The record io files are now saved at here (./data) Using ImageRecordIter¶ ImageRecordIter can be used for loading image data saved in record io format. To use ImageRecordIter, simply create an instance by loading your record file: data_iter = mx.io.ImageRecordIter( path_imgrec="./data/caltech.rec", # the target record file data_shape=(3, 227, 227), # output data shape. An 227x227 region will be cropped from the original image. batch_size=4, # number of samples per batch resize=256 # resize the shorter edge to 256 before cropping # ... you can add more augumentation options as defined in ImageRecordIter. ) data_iter.reset() batch = data_iter.next() data = batch.data[0] for i in range(4): plt.subplot(1,4,i+1) plt.imshow(data[i].asnumpy().astype(np.uint8).transpose((1,2,0))) plt.show() Using ImageIter¶ ImageIter is a flexible interface that supports loading of images in both RecordIO and Raw format. data_iter = mx.image.ImageIter(batch_size=4, data_shape=(3, 227, 227), path_imgrec="./data/caltech.rec", path_imgidx="./data/caltech.idx" ) data_iter.reset() batch = data_iter.next() data = batch.data[0] for i in range(4): plt.subplot(1,4,i+1) plt.imshow(data[i].asnumpy().astype(np.uint8).transpose((1,2,0))) plt.show()
https://mxnet.apache.org/versions/0.12.1/tutorials/basic/data.html
CC-MAIN-2022-33
en
refinedweb
There are many functions in OpenCV that allow you to manipulate your input image. cv2.Gaussianblur() is one of them. It allows you to blur images that are very helpful while processing your images. In this entire tutorial you will know how to blur an image using the OpenCV python module. Why We blur the image? Just like preprocessing is required before making any machine learning model. In the same way, removing noise in the image is required for further processing of the image. Gaussian Blurring the image makes any image smooth and remove the noises. In the next section, you will know all the steps to do the Gaussian blur using the cv2 Gaussianblur method. Steps to Blur the image in Python using cv2.Gaussianblur() Step 1: Import all the required libraries In the entire tutorial, I am using two libraries. One is OpenCV and another is matplotlib. The latter will be used for displaying the image in the Jupyter notebook. import cv2 import matplotlib.pyplot as plt %matplotlib inline In our tutorial, I am displaying all the images inline. That’s why I am telling the python interpreter to display images inline using %matplotlib inline. Step 2: Read the image file Before blurring the image you have to first read the image. In OpenCV, you can read the image using the cv2.imread() method. Let’s read the image. # read image img = cv2.imread("owl.jpg") plt.imshow(img) Output You can see the original image is not blurred. In the next step, I will perform the Gaussian Blur on the image. Step 3: Blur the image using the cv2.Gaussianblur method Before applying the method first learns the syntax of the method. The cv2.Gaussianblur() method accepts the two main parameters. The first parameter will be the image and the second parameter will the kernel size. The OpenCV python module use kernel to blur the image. And kernel tells how much the given pixel value should be changed to blur the image. For example, I am using the width of 5 and a height of 55 to generate the blurred image. You can read more about it on Blur Documentation. Execute the below lines of code and see the output. blur = cv2.GaussianBlur(img,(5,55),0) plt.imshow(blur) Output Now You can see the blurred image. These are the steps to perform Gaussian Blur on an image. Hope you have loved this article. If you have any queries then you can contact us for getting more help. Join our list Subscribe to our mailing list and get interesting stuff and updates to your email inbox.
https://gmailemail-login.email/blur-an-image-in-python-using-opencv/
CC-MAIN-2022-33
en
refinedweb
#include "inthash.c" Go to the source code of this file. Definition at line 36 of file inthash.h. Definition at line 43 of file inthash.h. hash table top level data structure delete an string from the hash table, given its string name Definition at line 182 of file inthash.c. destroy the hash table completely, deallocate memory Definition at line 231 of file inthash.c. Referenced by close_lammps_read, close_lammpsyaml_read, and delete_pdbxParser. return the number of entries in the hash table Definition at line 222 of file inthash.c. Referenced by read_lammps_timestep, and read_lammpsyaml_timestep. initialize hash table for first use Definition at line 62 of file inthash.c. Referenced by parseStructure, read_lammps_structure, read_lammpsyaml_structure, and rebuild_table_int. insert a string into the hash table, along with an integer key Definition at line 150 of file inthash.c. Referenced by parseStructure, read_lammps_structure, read_lammps_timestep, read_lammpsyaml_structure, and read_lammpsyaml_timestep. lookup a string key in the hash table returning its integer key Definition at line 126 of file inthash.c. Referenced by inthash_insert, read_lammps_structure, read_lammps_timestep, read_lammpsyaml_structure, read_lammpsyaml_timestep, readAngleBonds, and readRMSDBonds. print hash table vital stats Definition at line 277 of file inthash.c. Referenced by close_lammps_read, and close_lammpsyaml_read.
http://www.ks.uiuc.edu/Research/vmd/plugins/doxygen/inthash_8h.html
CC-MAIN-2022-33
en
refinedweb
Module java.logging Package java.util.logging Class ErrorManager java.lang.Object java.util.logging.ErrorManager public class ErrorManager extends Object ErrorManager objects can be attached to Handlers to process any error that occurs on a Handler during Logging. When processing logging output, if a Handler encounters problems then rather than throwing an Exception back to the issuer of the logging call (who is unlikely to be interested) the Handler should call its associated ErrorManager. Field Summary Constructor Summary Method Summary Field Details GENERIC_FAILUREpublic static final int GENERIC_FAILUREGENERIC_FAILURE is used for failure that don't fit into one of the other categories. - See Also: - Constant Field Values WRITE_FAILUREpublic static final int WRITE_FAILUREWRITE_FAILURE is used when a write to an output stream fails. - See Also: - Constant Field Values FLUSH_FAILUREpublic static final int FLUSH_FAILUREFLUSH_FAILURE is used when a flush to an output stream fails. - See Also: - Constant Field Values CLOSE_FAILUREpublic static final int CLOSE_FAILURECLOSE_FAILURE is used when a close of an output stream fails. - See Also: - Constant Field Values OPEN_FAILUREpublic static final int OPEN_FAILUREOPEN_FAILURE is used when an open of an output stream fails. - See Also: - Constant Field Values FORMAT_FAILUREpublic static final int FORMAT_FAILUREFORMAT_FAILURE is used when formatting fails for any reason. - See Also: - Constant Field Values Constructor Details ErrorManagerpublic ErrorManager() Method Details errorThe error method is called when a Handler failure occurs. This method may be overridden in subclasses. The default behavior in this base class is that the first call is reported to System.err, and subsequent calls are ignored. - Parameters: msg- a descriptive string (may be null) ex- an exception (may be null) code- an error code defined in ErrorManager
https://docs.oracle.com/en/java/javase/15/docs/api/java.logging/java/util/logging/ErrorManager.html
CC-MAIN-2022-33
en
refinedweb
A frame descriptor for a YCbCr 4:2:2 packed frame type #include <camera/camera_api.h> typedef struct { uint32_t height; uint32_t width; uint32_t stride; } camera_frame_ycbycr_t; Stride is often called pitch. Use this frame descriptor when CAMERA_FRAMETYPE_YCBYCR is reported as the camera_frametype_t. Each set of two pixel values in the YCbYCr frame makes up a macro-pixel. Each macro-pixel is made up of four components in the following byte order: a Y (luma) component, a Cb (blue-difference chroma) component, a Y (luma) component, and a Cr (red-difference chroma) component. Each macro-pixel is stored contiguously on the same line and each component consumes eight bits.
https://www.qnx.com/developers/docs/7.1/com.qnx.doc.camera/topic/structcamera__frame__ycbycr__t.html
CC-MAIN-2022-33
en
refinedweb
Funergy mythbusterma Thank you both very much. This has helped so much :D I am wondering how do I get what a player has said? So then i can match it to a specific word or phrase Thanks Thank you! Basically the title XD I just wanna know how this would be done... I just have no idea... Thanks for the help "yes" Well that helps I'm good now pal. Thanks! :) You fixed it :) I changed it to a one word command as you said and now it works! :) It's working now. I've just gotta bug fix :) Thank you so much!! bobthefish Here's the second video to help [media] Code: package sototallyroary.superbank.assassinhero; import java.util.logging.Logger;... That worked but the other commands do not The video explains it all :) [media] Thanks for any help! If you wanna add colours just add public class AntiCurse extends JavaPlugin{ public boolean onCommand(CommandSender sender, Command cmd, String... I'm not sure but to be safe I would do them. I think I may have cracked it. Thank you. If you wanna check it then feel free package sototallyroary.superbank.assassinhero; import... So how do i do it so that the player types in the amount and that will deposited into the bank? This is my code so far: package... Separate names with a comma.
https://dl.bukkit.org/members/forge_user_48269466.90852192/recent-content
CC-MAIN-2022-33
en
refinedweb
Windows Forms ("WinForms" for short) is a GUI class library included with the .NET Framework. It is a sophisticated object-oriented wrapper around the Win32 API, allowing the development of Windows desktop and mobile applications that target the .NET Framework. WinForms is primarily event-driven. An application consists of multiple forms (displayed as windows on the screen), which contain controls (labels, buttons, textboxes, lists, etc.) that the user interacts with directly. In response to user interaction, these controls raise events that can be handled by the program to perform tasks. Like in Windows, everything in WinForms is a control, which is itself a type of window. The base Control class provides basic functionality, including properties for setting text, location, size, and color, as well as a common set of events that can be handled. All controls derive from the Control class, adding additional features. Some controls can host other controls, either for reusability ( Form, UserControl) or layout ( TableLayoutPanel, FlowLayoutPanel). WinForms has been supported since the original version of the .NET Framework (v1.0), and is still available in modern versions (v4.5). However, it is no longer under active development, and no new features are being added. According to 9 Microsoft developers at the Build 2014 conference: Windows Forms is continuing to be supported, but in maintenance mode. They will fix bugs as they are discovered, but new functionality is off the table. The cross-platform, open-source Mono library provides a basic implementation of Windows Forms, supporting all of the features that Microsoft's implementation did as of .NET 2.0. However, WinForms is not actively developed on Mono and a complete implementation is considered impossible, given how inextricably linked the framework is with the native Windows API (which is unavailable in other platforms). This example will show you how to create a Windows Forms Application project in Visual Studio. Start Visual Studio. On the File menu, point to New, and then select Project. The New Project dialog box appears. In the Installed Templates pane, select "Visual C#" or "Visual Basic". Above the middle pane, you can select the target framework from the drop-down list. In the middle pane, select the Windows Forms Application template. In the Name text box, type a name for the project. In the Location text box, choose a folder to save the project. Click OK. The Windows Forms Designer opens and displays Form1 of the project. From the Toolbox palette, function. Type the following code: C# MessageBox.Show("Hello, World!"); VB.NET MessageBox.Show("Hello, World!") Press F5 to run the application. When your application is running, click the button to see the "Hello, World!" message. Close the form to return to Visual Studio. Open a text editor (like Notepad), and type the code below: using System; using System.ComponentModel; using System.Drawing; using System.Windows.Forms; namespace SampleApp { public class MainForm : Form { private Button btnHello; // The form's constructor: this initializes the form and its controls. public MainForm() { // Set the form's caption, which will appear in the title bar.). btnHello.Click += new EventHandler(btnHello_Click); // Add the button to the form's control collection, // so that it will appear on the form. this.Controls.Add(btnHello); } // When the button is clicked, display a message. private void btnHello_Click(object sender, EventArgs e) { MessageBox.Show("Hello, World!"); } // This is the main entry point for the application. // All C# applications have one and only one of these methods. [STAThread] static void Main() { Application.EnableVisualStyles(); Application.Run(new MainForm()); } } } X:\MainForm.cs. Run the C# compiler from the command line, passing the path to the code file as an argument: %WINDIR%\Microsoft.NET\Framework64\v4.0.30319\csc.exe /target:winexe "X:\MainForm.cs" Note: To use a version of the C# compiler for other .NET framework versions, take a look in the path, %WINDIR%\Microsoft.NET and modify the example above accordingly. For more information on compiling C# applications, see Compile and run your first C# program. MainForm.exewill be created in the same directory as your code file. You can run this application either from the command line or by double-clicking on it in Explorer. Open a text editor (like Notepad), and type the code below: Imports System.ComponentModel Imports System.Drawing Imports System.Windows.Forms Namespace SampleApp Public Class MainForm : Inherits Form Private btnHello As Button ' The form's constructor: this initializes the form and its controls. Public Sub New() ' Set the form's caption, which will appear in the title bar.). AddHandler btnHello.Click, New EventHandler(AddressOf btnHello_Click) ' Add the button to the form's control collection, ' so that it will appear on the form. Me.Controls.Add(btnHello) End Sub ' When the button is clicked, display a message. Private Sub btnHello_Click(sender As Object, e As EventArgs) MessageBox.Show("Hello, World!") End Sub ' This is the main entry point for the application. ' All VB.NET applications have one and only one of these methods. <STAThread> _ Public Shared Sub Main() Application.EnableVisualStyles() Application.Run(New MainForm()) End Sub End Class End Namespace X:\MainForm.vb. Run the VB.NET compiler from the command line, passing the path to the code file as an argument: %WINDIR%\Microsoft.NET\Framework64\v4.0.30319\vbc.exe /target:winexe "X:\MainForm.vb" Note: To use a version of the VB.NET compiler for other .NET framework versions, take a look in the path %WINDIR%\Microsoft.NET and modify the example above accordingly. For more information on compiling VB.NET applications, see Hello World. MainForm.exewill be created in the same directory as your code file. You can run this application either from the command line or by double-clicking on it in Explorer.
https://riptutorial.com/winforms/topic/1018/getting-started-with-winforms
CC-MAIN-2019-30
en
refinedweb
Constructors in C++ Programming Submitted by Sagun Shrestha Constructors are the member functions that are executed automatically when an object is created. That means no explicit call is necessary to call a constructor. The name constructor is given because it constructs the value of the data member of class. Constructors are mainly used or initializing data. Syntax of Constructors classname ([parameter]) { body of constructor } Types of constructor - Default Constructor: The constructor that accepts no parameters and reserves memory for an object is called default constructor. If no any constructor are explicitly defined then the compiler itself supplies a default constructor. - User defined constructor:The constructor that accepts parameter to initialize the data members of an object are called user defined constructor. Programmer has to explicitly define such constructor in the program. - Copy Constructor: Constructor having object reference as parameter is called copy constructor. A copy constructor when called, creates an object as an exact copy of another object in terms of its attribute. Copy constructor can be called by passing object as parameter or using assignment operator. Characteristics of constructor - Constructor name is same as class name. - They should be declared or defined in public section of class. - They don't have any return type like void, int, etc and therefore can't return values. - They can't be inherited. - They are the first member function of a class to be executed. An example to demonstrate how different types of constructors are created and called: #include <iostream> #include <cstring> #include <conio.h> using namespace std; class student { private: int roll; char name[50]; public: student() // default constructor { roll = 0; strcpy(name," "); } student(char n[50], int r) // user defined constructor { roll = r; strcpy(name,n); } student(student &s) // copy constructor { roll = s.roll; strcpy(name,s.name); } void display() { cout <<"Name : "<<name<<endl; cout <<"Roll : "<<roll<<endl; } }; int main() { student s1; // call default constructor student s2(5,"John"); // call user defined consructor student s3(s1); // call copy constructor cout <<"Display value of s1"<<endl; s1.display(); cout <<"Display value of s2"<<endl; s2.display(); cout <<"Display value of s3"<<endl; s3.display(); s3=s2; // call copy constructor cout <<"Display value of s3"<<endl; s3.display(); getch(); return 0; }
https://www.programtopia.net/cplusplus/docs/constructors
CC-MAIN-2019-30
en
refinedweb
Provided by: libroar-dev_1.0~beta11-7_amd64 NAME roar_vs_stream - Set up stream parameters for VS object SYNOPSIS #include <roaraudio.h> int roar_vs_stream(roar_vs_t * vss, const struct roar_audio_info * info, int dir, int * error); DESCRIPTION This function asks a VS object opened by roar_vs_new_from_con(3) or roar_vs_new(3) to open the data connection using the audio parameters info and the stream direction dir. This function needs to be called before data is read or written if one of the above functions is used to create the VS object. This function is also used to provide parameters for the file mode (which is started by using roar_vs_file(3) or roar_vs_file_simple(3)). To play back a file this is not needed in a common case as the VS API tries to find correct parameters. It is required for all other stream directions. See roar_vs_file(3) and roar_vs_file_simple(3) for more information. On failture this function can be called again with different parameters. PARAMETERS vss The VS object to be updated. info This is a pointer to the roar_audio_info structure storing the audio format parameters. The structure contains the following memebers: rate (sample rate), bits (bits per sample), channels (channels per frame) and codec. struct roar_audio_info info; int err; if ( roar_profile2info(&info, "isdn-eu") == -1 ) { // error handling. } if ( roar_vs_stream(vss, &info, ROAR_DIR_PLAY, &err) == -1 ) { // error handling. } SEE ALSO roar_vs_file(3), roar_vs_file_simple(3), roarvs(7), libroar(7), RoarAudio(7).
http://manpages.ubuntu.com/manpages/xenial/man3/roar_vs_stream.3.html
CC-MAIN-2019-30
en
refinedweb
50013/create-an-ec2-instance-using-boto Hi @Neel, try this script: reservations = conn.get_all_instances(instance_ids=[sys.argv[1]]) instances = [i for r in reservations for i in r.instances] for i in instances: key_name = i.key_name security_group = i.groups[0].id instance_type = i.instance_type print "Now Spinning New Instance" subnet_name = i.subnet_id reserve = conn.run_instances(image_id=ami_id,key_name=key_name,instance_type=instance_type,security_group_ids=[security_group],subnet_id=subnet_name) down voteacceptedFor windows: you could use winsound.SND_ASYNC to play them ...READ MORE I think you should try: I used %matplotlib inline in ...READ MORE Slicing is basically extracting particular set of ...READ MORE You don't have to. The size of ...READ MORE For some reason, the pip install of ...READ MORE Check if the FTP ports are enabled ...READ MORE To connect to EC2 instance using Filezilla, ...READ MORE I had a similar problem with trying ...READ MORE Hi @Neha, try something like thus: import boto3 import ...READ MORE Yes of course that's possible with just ...READ MORE OR
https://www.edureka.co/community/50013/create-an-ec2-instance-using-boto
CC-MAIN-2019-30
en
refinedweb
]) } }, Environment } class HelloModule( environment: Environment, configuration: Configuration) extends AbstractModule { def configure() = { // Expect configuration like: // hello.en = "myapp.EnglishHello" // hello.de = "myapp.GermanHello" val helloConfiguration: Configuration = configuration.getConfig(String(l).get Guice’s eager singleton binding. import com.google.inject.AbstractModule import com.google.inject.name.Names class HelloModule extends AbstractModule { def configure() = {(extra ++.
https://www.playframework.com/documentation/tr/2.4.x/ScalaDependencyInjection
CC-MAIN-2019-30
en
refinedweb
How to use ~/.bashrc aliases on IPython 3.2.0? I need to use my aliases from ~/.bashrc on IPython. First I've tried but it didn't work %%bash source ~/.bashrc According to this post we should do %%bash . ~/.bashrc f2py3 -v It takes 20 sec to run on Jupiter and I get: bash: line 2: f2py3: command not found My ~/.bashrc file looks like alias f2py3='$HOME/python/bin/f2py' bash: line 2: type: f2py3: not found Neither alias, source, nor %rehashx% work %%bash alias f2py3='$HOME/python/bin/f2py' I actually found that the problem is Python, who can't execute alias command neither with sh nor bash. Can I use alias with IPython magics? Answers You can parse your bashrc file in the ipython config and add any custom aliases you have defined: import re import os.path c = get_config() with open(os.path.expanduser('~/.bashrc')) as bashrc: aliases = [] for line in bashrc: if line.startswith('alias'): parts = re.match(r"""^alias (\w+)=(['"]?)(.+)\2$""", line.strip()) if parts: source, _, target = parts.groups() aliases.append((source, target)) c.AliasManager.user_aliases = aliases This should be placed in ~/.ipython/profile_default/ipython_config.py. %rehashx makes system commands available in the alias table so is also very useful if you want to use ipython as a shell. Need Your Help CakePhp 2.0 .htaccess .htaccess cakephp mod-rewrite cakephp-2.0Having trouble configuring CakePhp on an Apache server. It worked fine in my dev environment, and now I'm looking to deploy. Drupal 7, Get values of an Entity Reference in a Field Collection drupal-7 entity field entityreferenceI've got a field collection, which contains
http://www.brokencontrollers.com/faq/31410683.shtml
CC-MAIN-2019-30
en
refinedweb
There are two primary functions for sending data to a server formatted as a HTML form. If you are migrating over from the WWW system, see Using WWWForm, below. To provide greater control over how you specify your form data, the UnityWebRequest system contains a user-implementable IMultipartFormSection interface. For standard applications, Unity also provides default implementations for data and file sections: MultipartFormDataSection and MultipartFormFileSection. An overload of UnityWebRequest.POST accepts, as a second parameter, a List argument, whose members must all be IMultipartFormSections. The function signature is: WebRequest.Post(string url, List<IMultipartFormSection> formSections); UnityWebRequestand sets the target URL to the first string parameter. It also sets the Content-Type header of the UnityWebRequestappropriately for the form data specified in the list of IMultipartFormSectionobjects. DownloadHandlerBufferto the UnityWebRequest. This is for convenience - you can use this to check your server’s replies. WWWForm POSTfunction, this HLAPI function calls each supplied IMultipartFormSectionin turn and formats them into a standard multipart form as specified in RFC 2616. UploadHandlerRawobject, which is then attached to the UnityWebRequest. As a result, changes to the IMultipartFormSectionobjects performed after the UnityWebRequest.POSTcall are not reflected in the data sent to the server. using UnityEngine; using UnityEngine.Networking; using System.Collections; public class MyBehavior : MonoBehaviour { void Start() { StartCoroutine(Upload()); } IEnumerator Upload() { List<IMultipartFormSection> formData = new List<IMultipartFormSection>(); formData.Add( new MultipartFormDataSection("field1=foo&field2=bar") ); formData.Add( new MultipartFormFileSection("my file data", "myfile.txt") ); UnityWebRequest www = UnityWebRequest.Post("", formData); yield return; if( ||) { Debug.Log(); } else { Debug.Log("Form upload complete!"); } } } To help migrate from the WWW system, the UnityWebRequest system permits you to use the old WWWForm object to provide form data. In this case, the function signature is: WebRequest.Post(string url, WWWForm formData); UnityWebRequestand sets the target URL to the first string argument’s value. It also reads any custom headers generated by the WWWFormargument (such as Content-Type) and copies them into the UnityWebRequest. DownloadHandlerBufferto the UnityWebRequest. This is for convenience - you can use this to check your server’s replies. WWWForm objectand buffers it in an UploadHandlerRawobject, which is attached to the UnityWebRequest. Therefore, changes to the WWWFormobject after calling UnityWebRequest.POSTdo not alter the contents of the UnityWebRequest. using UnityEngine; using System.Collections; public class MyBehavior : public MonoBehaviour { void Start() { StartCoroutine(Upload()); } IEnumerator Upload() { WWWForm form = new WWWForm(); form.AddField("myField", "myData"); UnityWebRequest www = UnityWebRequest.Post("", form); yield return; if( ||) { Debug.Log(); } else { Debug.Log("Form upload complete!"); } } } Did you find this page useful? Please give it a rating:
https://docs.unity3d.com/Manual/UnityWebRequest-SendingForm.html
CC-MAIN-2019-30
en
refinedweb
Secure Your Sensitive Data with Kubernetes Secrets Kubernetes secrets are objects that store and manage sensitive data inside your Kubernetes cluster. One mistake developers often make is storing sensitive information like database passwords, API credentials, etc in a settings file in their codebase. This is very bad practice (hopefully for obvious reasons). Most developers know this, but still choose the option because it's easy. Fortunately, if you're running your application in a Kubernetes cluster, managing that sensitive data the right way is easy. This guide will provide an overview to Kubernetes Secrets, including how to create, store, and use secrets in your applications. Create a Kubernetes secret Creating secrets, like most Kubernetes operations, is accomplished using the kubectl command. Fortunately, there are a few ways to create secrets, and each are useful in different circumstances. Let's first look at the secret we want to create. Remember that the secret is an object that contains one or more pieces of sensitive data. For our example, let's imagine we want to create a secret, called database, that contains our database credentials. It will be constructed like this: database - username - password Create a secret from files Suppose you have the following files: username and password. They might have been created like this: echo -n 'databaseuser' > ./username echo -n '1234567890' > ./password We can use these files to construct our secret: kubectl create secret generic database --from-file=./username --from-file=./password Create a secret from string literals If you'd prefer, you can skip the files altogether and create the secret from string literals: kubectl create secret generic database --from-literal=username=databaseuser --from-literal=password=databaseuser Examine the new secret Both of the above examples will create identical secrets that look like this: $ kubectl get secret database NAME TYPE DATA AGE database Opaque 2 1h And let's example the secret: $ kubectl describe secret database Name: database Namespace: default Labels: <none> Annotations: Type: Opaque Data ==== username: 12 bytes password: 10 bytes Copy secrets from another cluster or namespace While this is directly applicable, I'll add this as a note because it could be useful. Sometimes we'll need to copy secrets from one cluster or namespace to another. Here's a quick example: kubectl get secret database --context source_context --export -o yaml | kubectl apply --context destination_context -f - For an explanation and more details, see our guide on copying Kubernetes secrets from one cluster to another. Attach secrets to your pod Secrets aren't all that helpful until they're attached to a pod. In order to actually use the secrets they must be configured in the pod definition. There are two primary ways two use secrets: as files and as environment variables. Attaching secrets as files See the following pod config: apiVersion: v1 kind: Pod metadata: name: web spec: containers: - name: web image: web:1.0.0 volumeMounts: - name: database-volume mountPath: "/etc/secrets/database" readOnly: true volumes: - name: database-volume secret: secretName: database There are two important blocks to take note of. First, let's look at the volumes block. We set the name of the volume and specify which secret we want to use. Note that this is set at the pod level, so it could be used in multiple containers if the pod were to define them. volumes: - name: database-volume secret: secretName: database Next we'll look at how the volume is mounted onto the container using volumeMounts. We'll specify which volume we want to use, and set the mount path to /etc/secrets/database. volumeMounts: - name: database-volume mountPath: "/etc/secrets/database" readOnly: true Inside of the container, we can run an ls on /etc/secrets/database and find that both the username and password files exist. Attaching secrets as environment variables Secrets can also be used inside of containers as environment variables. Check out the same config but with secrets attached as environment variables instead of volumes: apiVersion: v1 kind: Pod metadata: name: web spec: containers: - name: web image: web:1.0.0 env: - name: DATABASE_USERNAME valueFrom: secretKeyRef: name: database key: username - name: DATABASE_PASSWORD valueFrom: secretKeyRef: name: database key: password Summary Both volumes and environment variables are perfectly acceptable ways to access secrets from inside your containers. The major difference is that environment variables can only hold a single value, while volumes can hold any number of files—even nested directories. So if your application requires access to many secrets, a volume is a better choice for organization and to keep the configs manageable. Accessing secrets from within the container (using Python) I know some readers will not be using Python containers, but the purpose of this step is to provide a conceptual understanding of how secrets can be used from within the container. Assuming you've followed the first two steps, you should now have a database secret that contains a username and Reading secrets from a volume If we've mounted the secret as a volume, we can read the secret like this: with open('/etc/secrets/database/password, 'r') as secret_file: database_password = secret_file.read() Grabbing the secret file is as easy reading from a file. Of course, you'd probably abstract this code and add error handling and defaults. After all, this is much more pleasant: get_secret('database/password'). Reading secrets from environment variables This is even more straight forward, at least in Python. You can read the secret just as you would any other environment variable: import os database_password = os.environ.get('DATABASE_PASSWORD') Conclusion I hope this overview of Kubernetes secrets was helpful. By now, you should have a good understand of what Kubernetes secrets are and how to use them. If you have questions, please ask in the comments below or head over to the Kubernetes secrets documentation.
https://howchoo.com/g/nmu3mje4zjz/secure-your-sensitive-data-with-kubernetes-secrets
CC-MAIN-2019-30
en
refinedweb
@Incubating @NotExtensible public interface MockitoSession MockitoSessionis an optional, highly recommended feature that helps driving cleaner tests by eliminating boilerplate code and adding extra validation. If you already use MockitoJUnitRunneror MockitoRule*you don't need* MockitoSessionbecause it is used by the runner/rule. MockitoSession is a session of mocking, during which the user creates and uses Mockito mocks. Typically the session is an execution of a single test method. MockitoSession initializes mocks, validates usage and detects incorrect stubbing. When the session is started it must be concluded with finishMocking() otherwise UnfinishedMockingSessionException is triggered when the next session is created. MockitoSession is useful when you cannot use MockitoJUnitRunner or MockitoRule. For example, you work with TestNG instead of JUnit. Another example is when different JUnit runner is in use (Jukito, Springockito) and it cannot be combined with Mockito's own runner. Framework integrators are welcome to use MockitoSession and give us feedback by commenting on issue 857. Example: public class ExampleTest { @Mock Foo foo; //Keeping session object in a field so that we can complete session in 'tear down' method. //It is recommended to hide the session object, along with 'setup' and 'tear down' methods in a base class / runner. //Keep in mind that you can use Mockito's JUnit runner or rule instead of MockitoSession and get the same behavior. MockitoSession mockito; @Before public void setup() { //initialize session to start mocking mockito = Mockito.mockitoSession() .initMocks(this) .strictness(Strictness.STRICT_STUBS) .startMocking(); } @After public void tearDown() { //It is necessary to finish the session so that Mockito // can detect incorrect stubbing and validate Mockito usage //'finishMocking()' is intended to be used in your test framework's 'tear down' method. mockito.finishMocking(); } // test methods ... } Why to use MockitoSession? What's the difference between MockitoSession, MockitoJUnitRunner, MockitoRule and traditional MockitoAnnotations.initMocks(Object)? Great questions! There is no need to use MockitoSession if you already use MockitoJUnitRunner or MockitoRule. If you are JUnit user who does not leverage Mockito rule or runner we strongly recommend to do so. Both the runner and the rule support strict stubbing which can really help driving cleaner tests. See MockitoJUnitRunner.StrictStubs and MockitoRule.strictness(Strictness). If you cannot use Mockito's JUnit support (for example, you are on TestNG) MockitoSession exactly is for you! You can automatically take advantage of strict stubbing ( Strictness), automatic initialization of annotated mocks ( MockitoAnnotations), and extra validation ( Mockito.validateMockitoUsage()). If you use Mockito annotations with MockitoAnnotations.initMocks(Object) but not Mockito runner/rule please try out Mockito's JUnit support (runner or rule) or start using MockitoSession. You'll get cleaner tests and better productivity. Mockito team would really appreciate feedback about MockitoSession API. Help us out by commenting at issue 857. @Incubating void setStrictness(Strictness strictness) MockitoSession. The new strictness will be applied to operations on mocks and checks performed by finishMocking(). This method is used behind the hood by MockitoRule.strictness(Strictness)method. In most healthy tests, this method is not needed. We keep it for edge cases and when you really need to change strictness in given test method. For use cases see Javadoc for PotentialStubbingProblemclass. strictness- new strictness for this session. @Incubating void finishMocking() UnnecessaryStubbingExceptionor emit warnings ( MockitoHint) depending on the Strictnesslevel. The method also detects incorrect Mockito usage via Mockito.validateMockitoUsage(). In order to implement Strictness Mockito session keeps track of mocking using MockitoListener. This method cleans up the listeners and ensures there is no leftover state after the session finishes. It is necessary to invoke this method to conclude mocking session. For more information about session lifecycle see MockitoSessionBuilder.startMocking(). This method is intended to be used in your test framework's 'tear down' method. In the case of JUnit it is the "@After" method. For example, see javadoc for MockitoSession. finishMocking(Throwable) @Incubating void finishMocking(Throwable failure) finishMocking(). This method is intended to be used by framework integrations. When using MockitoSession directly, most users should rather use finishMocking(). MockitoRule uses this method behind the hood. failure- the exception that caused the test to fail; passing nullis permitted finishMocking()
https://static.javadoc.io/org.mockito/mockito-core/3.0.0/org/mockito/MockitoSession.html
CC-MAIN-2019-30
en
refinedweb
Description Class for clusters of 'clone' particles, that is many rigid objects with the same shape and mass. This can be used to make granular flows, where you have thousands of objects with the same shape. In fact, a single ChParticlesClones object can be more memory-efficient than many ChBody objects, because they share many features, such as mass and collision shape. If you have N different families of shapes in your granular simulations (ex. 50% of particles are large spheres, 25% are small spheres and 25% are polyhedrons) you can simply add three ChParticlesClones objects to the ChSystem. This would be more efficient anyway than creating all shapes as ChBody. #include <ChParticlesClones.h> Member Function Documentation If this physical item contains one or more collision models, add them to the system's collision engine. Reimplemented from chrono::ChPhysicsItem. Add a new particle to the particle cluster, passing a coordinate system as initial state. NOTE! Define the sample collision shape using GetCollisionModel()->... before adding particles! Implements chrono::ChIndexedParticles. When this function is called, the speed of particles is clamped into limits posed by max_speed and max_wvel - but remember to put the body in the SetLimitSpeed(true) mode. Tell if the object is subject to collision. Only for interface; child classes may override this, using internal flags. Reimplemented from chrono::ChPhysicsItem. Access the collision model for the collision engine: this is the 'sample' collision model that is used by all particles. To get a non-null pointer, remember to SetCollide(true), before.. Computes x_new = x + Dt , using vectors at specified offsets. By default, when DOF = DOF_w, it does just the sum, but in some cases (ex when using quaternions for rotations) it could do more complex stuff, and children classes might overload it. particle cluster. Also clear the state of previously created particles, if any. NOTE! Define the sample collision shape using GetCollisionModel()->... before adding particles! Implements chrono::ChIndexedParticles. Enable/disable the collision for this cluster of particles. After setting ON, remember RecomputeCollisionModel() before anim starts (it is not automatically recomputed here because of performance issues.) Trick. Set the maximum linear speed (beyond this limit it will be clamped). This is useful in virtual reality and real-time simulations, because it reduces the risk of bad collision detection. The realism is limited, but the simulation is more stable. Trick. Set the maximum linear speed (beyond this limit it will be clamped). This is useful in virtual reality and real-time simulations, because it reduces the risk of bad collision detection. This speed limit is active only if you set SetLimitSpeed(true); Trick. Set the maximum angular speed (beyond this limit it will be clamped). This is useful in virtual reality and real-time simulations, because it reduces the risk of bad collision detection. This speed limit is active only if you set SetLimitSpeed(true); Set the amount of time which must pass before going automatically in sleep mode when the body has very small movements. After you added collision shapes to the sample coll.model (the one that you access with GetCollisionModel() ) you need to call this function so that all collision models of particles will reference the sample coll.model..
http://api.projectchrono.org/classchrono_1_1_ch_particles_clones.html
CC-MAIN-2019-30
en
refinedweb
cc . . . -lc #include <fcntl.h> int open (const char *path, int oflag, ... ); When opening a FIFO with O_RDONLY or O_WRONLY set: When opening a file associated with a communication line: The open returns without waiting for carrier. The open blocks until carrier is present. When opening a STREAMS file, oflag may be constructed from O_NONBLOCK or-ed with either O_RDONLY, O_WRONLY or O_RDWR. Other flag values are not applicable to STREAMS devices and have no effect on them. The value of O_NONBLOCK affects the operation of STREAMS drivers and certain system calls (see read(S), getmsg(S), putmsg(S), and write(S)). For drivers, the implementation of O_NONBLOCK is device-specific. Each STREAMS device driver may treat this option differently. Certain flag values can be set following open as described in fcntl(S). The file pointer used to mark the current position within the file is set to the beginning of the file. The new file descriptor is set to remain open across exec system calls (see fcntl(S)). The named file is opened unless one or more of the following is true: X/Open Portability Guide, Issue 3, 1989 ; Intel386 Binary Compatibility Specification, Edition 2 (iBCSe2) ; IEEE POSIX Std 1003.1-1990 System Application Program Interface (API) [C Language] (ISO/IEC 9945-1) ; and NIST FIPS 151-1 .
http://osr507doc.xinuos.com/en/man/html.S/open.S.html
CC-MAIN-2019-30
en
refinedweb
- .6.) In this MT application, we execute three separate recursive functions—first in a single-threaded fashion, followed by the alternative with multiple threads. 1 #!/usr/bin/env python 2 3 from myThread import MyThread 4 from time import ctime, sleep 5 6 def fib(x): 7 sleep(0.005) 8 if x < 2: return 1 9 return (fib(x-2) + fib(x-1)) 10 11 def fac(x): 12 sleep(0.1) 13 if x < 2: return 1 14 return (x * fac(x-1)) 15 16 def sum(x): 17 sleep(0.1) 18 if x < 2: return 1 19 return (x + sum(x-1)) 20 21 funcs = [fib, fac, sum] 22 n = 12 23 24 def main(): 25 nfuncs = range(len(funcs)) 26 27 print '*** SINGLE THREAD' 28 for i in nfuncs: 29 print 'starting', funcs[i].__name__, 'at:', 30 ctime() 31 print funcs[i](n) 32 print funcs[i].__name__, 'finished at:', 33 ctime() 34 35 print '\n*** MULTIPLE THREADS' 36 threads = [] 37 for i in nfuncs: 38 t = MyThread(funcs[i], (n,), 39 funcs[i].__name__) 40 threads.append(t) 41 42 for i in nfuncs: 43 threads[i].start() 44 45 for i in nfuncs: 46 threads[i].join() 47 print threads[i].getResult() 48 49 print 'all DONE' 50 51 if __name__ == '__main__': 52 781600 78 all DONE
http://www.informit.com/articles/article.aspx?p=1850445&seqNum=6
CC-MAIN-2019-30
en
refinedweb
Best In Place The Unobtrusive in Place editing solution Description Best in Place is a jQuery based AJAX Inplace-Editor that takes profit of RESTful server-side controllers to allow users to edit stuff with no need of forms. If the server has standard defined REST methods, particularly those to UPDATE your objects (HTTP PUT), then by adding the Javascript file to the application it allows all the fields with the proper defined classes to become user in-place editable. The editor works by PUTting the updated value to the server and GETting the updated record afterwards to display the updated value. Installation Rails Installing best_in_place is very easy and straight-forward. Just begin including the gem in your Gemfile: gem 'best_in_place', '~> 3.0.1' After that, specify the use of the jquery and best in place javascripts in your application.js, and optionally specify jquery-ui if you want to use jQuery UI datepickers: //= require jquery //= require best_in_place //= require jquery-ui //= require best_in_place.jquery-ui If you want to use jQuery UI datepickers, you should also install and load your preferred jquery-ui CSS file and associated assets. Then, just add a binding to prepare all best in place fields when the document is ready: $(document).ready(function() { /* Activating Best In Place */ jQuery(".best_in_place").best_in_place(); }); You are done! Features - Compatible with text inputs - Compatible with textarea - Compatible with select dropdown with custom collections - Compatible with custom boolean values (same usage of checkboxes) - Compatible with jQuery UI Datepickers - Sanitize HTML and trim spaces of user's input on user's choice - Displays server-side validation errors - Allows external activator - Allows optional, configurable OK and Cancel buttons for inputs and textareas - ESC key destroys changes (requires user confirmation) - Autogrowing textarea with jQuery Autosize - Helper for generating the best_in_place field only if a condition is satisfied - Provided test helpers to be used in your integration specs - Custom display methods using a method from your model or an existing rails view helper Usage of Rails 3 Gem best_in_place best_in_place object, field, OPTIONS Params: - object (Mandatory): The Object parameter represents the object itself you are about to modify - field (Mandatory): The field (passed as symbol) is the attribute of the Object you are going to display/edit. Options: - :as It can be only [:input, :textarea, :select, :checkbox, :date] or if undefined it defaults to :input. - :collection: If you are using the :select type then you must specify the collection of values it takes as a hash where values represent the display text and keys are the option's value when selected. If you are using the :checkbox type you can specify the two values it can take, or otherwise they will default to Yes and No. - :url: URL to which the updating action will be sent. If not defined it defaults to the :object path. - :place_holder:. - :ok_button: (Inputs and textareas only) If set to a string, then an OK button will be shown with the string as its label, replacing save on blur. - :ok_button_class: (Inputs and textareas only) Specifies any extra classes to set on the OK button. - :cancel_button: (Inputs and textareas only) If set to a string, then a Cancel button will be shown with the string as its label. - :cancel_button_class: (Inputs and textareas only) Specifies any extra classes to set on the Cancel button. - :sanitize: True by default. If set to false the input/textarea will accept html tags. - :html_attrs: Hash of html arguments such as maxlength, default-value, etc. that will be set on the rendered input not the best_in_place span. - :inner_class: Class that is set to the rendered input. - :display_as: A model method which will be called in order to display this field. Cannot be used when using display_with. - :display_with: A helper method or proc will be called in order to display this field. Cannot be used with display_as. - :helper_options: A hash of parameters to be sent to the helper method specified by display_with. - :data: Hash of custom data attributes to be added to span. Can be used to provide data to the ajax:success callback. - :class: Additional classes to apply to the best_in_place span. Accepts either a string or Array of strings - :value: Customize the starting value of the inline input (defaults to to the field's value) - :id: The HTML id of the best_in_place span. If not specified one is automatically generated. - :param: If you wish to specific the object explicitly use this option. - :confirm: If set to true displays a confirmation message when abandoning changes (pressing the escape key). - :skip_blur: If set to true, blurring the input will not cause changes to be abandoned in textareas. HTML Options: If you provide an option that is not explicitly a best_in_place option it will be passed through when creating the best_in_place span. So, for instance, if you want to add an HTML tab index to the best_in_place span just add it to your method call: <%= best_in_place @user, :name, tabindex: "1" %> best_in_place_if best_in_place_if condition, object, field, OPTIONS see also best_in_place_unless It allows us to use best_in_place only if the first new parameter, a condition, is satisfied. Specifically: - Will show a normal best_in_place if the condition is satisfied - Will only show the attribute from the instance if the condition is not satisfied Say we have something like <%= best_in_place_if condition, @user, :name, :as => :input %> In case condition is satisfied, the outcome will be just the same as: <%= best_in_place @user, :name, :as => :input %> Otherwise, we will have the same outcome as: <%= @user.name %> It is a very useful feature to use with, for example, Ryan Bates' CanCan, so we only allow BIP edition if the current user has permission to do it. Examples Examples (code in the views): Input <%= best_in_place @user, :name, :as => :input %> <%= best_in_place @user, :name, :as => :input, :place_holder => "Click me to add content!" %> Textarea <%= best_in_place @user, :description, :as => :textarea %> <%= best_in_place @user, :favorite_books, :as => :textarea, :ok_button => 'Save', :cancel_button => 'Cancel' %> <%= best_in_place @user, :country, :as => :select, :collection => {"1" => "Spain", "2" => "Italy", "3" => "Germany", "4" => "France"} %> <%= best_in_place @user, :country, :as => :select, :collection => { es: 'Spain', it: 'Italy', de: 'Germany', fr: 'France' } %> <%= best_in_place @user, :country, :as => :select, :collection => %w(Spain Italy Germany France) %> <%= best_in_place @user, :country, :as => :select, :collection => [[1, 'Spain'], [3, 'Germany'], [2, 'Italy'], [4, 'France']] %> Of course it can take an instance or global variable for the collection, just remember the structure is a hash. The value will always be converted to a string for display. Checkbox <%= best_in_place @user, :receive_emails, as: :checkbox, collection: ["No, thanks", "Yes, of course!"] %> <%= best_in_place @user, :receive_emails, as: :checkbox, collection: {false: "Nope", true: "Yep"} %> If you use array as a collection, the first value is always the negative boolean value and the second the positive. Structure: ["false value", "true value"]. If not defined, it will default to Yes and No options. Default true and false values are stored in locales t(:best_in_place.yes', default: 'Yes') t(:best_in_place.no', default: 'No') Date <%= best_in_place @user, :birth_date, :as => :date %> With the :date type the input field will be initialized as a datepicker input. In order to provide custom options to the datepicker initialization you must prepare a $.datepicker.setDefaults call with the preferences of your choice. More information about datepicker and setting defaults can be found here Controller response with respond_with_bip Best in place provides a utility method you should use in your controller in order to provide the response that is expected by the javascript side, using the :json format. This is a simple example showing an update action using it: def update @user = User.find params[:id] respond_to do |format| if @user.update_attributes(params[:user]) format.html { redirect_to(@user, :notice => 'User was successfully updated.') } format.json { respond_with_bip(@user) } else format.html { render :action => "edit" } format.json { respond_with_bip(@user) } end end end Custom display methods Using display_as As of best in place 1.0.3 you can use custom methods in your model in order to decide how a certain field has to be displayed. You can write something like: = best_in_place @user, :description, :as => :textarea, :display_as => :mk_description Then instead of using @user.description to show the actual value, best in place will call @user.mk_description. This can be used for any kind of custom formatting, text with markdown, etc... Using display_with In practice the most common situation is when you want to use an existing helper to render the attribute, like number_to_currency or simple_format. As of version 1.0.4 best in place provides this feature using the display_with option. You can use it like this: = best_in_place @user, :money, :display_with => :number_to_currency If you want to pass further arguments to the helper you can do it providing an additional helper_options hash: = best_in_place @user, :money, :display_with => :number_to_currency, :helper_options => {:unit => "€"} You can also pass in a proc or lambda like this: = best_in_place @post, :body, :display_with => lambda { |v| textilize(v).html_safe } Ajax success callback Binding to ajax:success The 'ajax:success' event is triggered upon success. Use bind: $('.best_in_place').bind("ajax:success", function () {$(this).closest('tr').effect('highlight'); }); To bind a callback that is specific to a particular field, use the 'classes' option in the helper method and then bind to that class. <%= best_in_place @user, :name, :classes => 'highlight_on_success' %> <%= best_in_place @user, :mail, :classes => 'bounce_on_success' %> $('.highlight_on_success').bind("ajax:success", function(){$(this).closest('tr').effect('highlight');}); $('.bounce_on_success').bind("ajax:success", function(){$(this).closest('tr').effect('bounce');}); Providing data to the callback Use the :data option to add HTML5 data attributes to the best_in_place span. For example, in your view: <%= best_in_place @user, :name, :data => {:user_name => @user.name} %> And in your javascript: $('.best_in_place').bind("ajax:success", function(){ alert('Name updated for '+$(this).data('userName')); }); Non Active Record environments We are not planning to support other ORMs apart from Active Record, at least for now. So, you can perfectly consider the following workaround as the right way until a specific implementation is done for your ORM. Best In Place automatically assumes that Active Record is the ORM you are using. However, this might not be your case, as you might use another ORM (or not ORM at all for that case!). Good news for you: even in such situation Best In Place can be used! Let's setup an example so we can illustrate how to use Best In Place too in a non-ORM case. Imagine you have an awesome ice cream shop, and you have a model representing a single type of ice cream. The IceCream model has a name, a description, a... nevermind. The thing is that it also has a stock, which is a combination of flavour and size. A big chocolate ice cream (yummy!), a small paella ice cream (...really?), and so on. Shall we see some code? class IceCream < ActiveRecord::Base serialize :stock, Hash # consider the get_stock and set_stock methods are already defined end Imagine we want to have a grid showing all the combinations of flavour and size and, for each combination, an editable stock. Since the stock for a flavour and a size is not a single and complete model attribute, we cannot use Best In Place directly. But we can set it up with an easy workaround. In the view, we'd do: // @ice_cream is already available - flavours = ... // get them somewhere - sizes = ... // get them somewhere table tr - flavours.each do |flavour| th= flavour - sizes.each do |size| tr th= size - flavours.each do |flavour| - v = @ice_cream.get_stock(flavour: flavour, size: size) td= best_in_place v, :to_i, as: :input, url: set_stock_ice_cream_path(flavour: flavour, size: size) Now we need a route to which send the stock updates: TheAwesomeIceCreamShop::Application.routes.draw do ... resources :ice_creams, :only => :none do member do put :set_stock end end ... end And finally we need a controller: class IceCreamsController < ApplicationController::Base respond_to :html, :json ... def set_stock flavour = params[:flavour] size = params[:size] new_stock = (params["fixnum"] || {})["to_i"] @ice_cream.set_stock(new_stock, { :flavour => flavour, :size => size }) if @ice_cream.save head :ok else render :json => @ice_cream.errors.full_messages, :status => :unprocessable_entity end end ... end And this is how it is done! Configuration You can configure some global options for best_in_place. Currently these options are available: BestInPlace.configure do |config| config.container = :div config.skip_blur = true end Notification Sometimes your in-place updates will fail due to validation or for some other reason. In such case, you'll want to notify the user somehow. Best in Place supports doing so through the best_in_place:error event, and has built-in support for notification via jquery.purr, right out of the box. To opt into the jquery.purr error notification, just add best_in_place.purr to your javascripts, as described below. //= require jquery.purr //= require best_in_place.purr If you'd like to develop your own custom form of error notification, you can use best_in_place.purr as an example to guide you. Development Fork the project on github $ git clone <your fork> $ cd best_in_place $ bundle Run the specs $ appraisal $ appraisal rspec You many need to install appraisal: gem install appraisal Test Helpers Best In Place has also some helpers that may be very useful for integration testing. Since it might very common to test some views using Best In Place, some helpers are provided to ease it. As of now, a total of four helpers are available. There is one for each of the following BIP types: a plain text input, a textarea, a boolean input and a selector. Its function is to simulate the user's action of filling such fields. These four helpers are listed below: - bip_area(model, attr, new_value) - bip_text(model, attr, new_value) - bip_bool(model, attr) - bip_select(model, attr, name) The parameters are defined here (some are method-specific): - model: the model to which this action applies. - attr: the attribute of the model to which this action applies. - new_value (only bip_area and bip_text): the new value with which to fill the BIP field. - name (only bip_select): the name to select from the dropdown selector. Authors, License and Stuff Code by Bernat Farrero from Itnig Web Services (it was based on the original project of Jan Varwig) and released under MIT license. Many thanks to the contributors: Roger Campos, Jack Senechal and Albert Bellonch.
http://www.rubydoc.info/gems/best_in_place/frames
CC-MAIN-2017-17
en
refinedweb
* Signals a parse error. 32 * Parse errors when receiving a message will typically trigger 33 * {@link ProtocolException}. Parse errors that do not occur during 34 * protocol execution may be handled differently. 35 * This is an unchecked exception, since there are cases where 36 * the data to be parsed has been generated and is therefore 37 * known to be parseable. 38 * 39 * @since 4.0 40 */ 41 public class ParseException extends RuntimeException { 42 43 private static final long serialVersionUID = -7288819855864183578L; 44 45 /** 46 * Creates a {@link ParseException} without details. 47 */ 48 public ParseException() { 49 super(); 50 } 51 52 /** 53 * Creates a {@link ParseException} with a detail message. 54 * 55 * @param message the exception detail message, or <code>null</code> 56 */ 57 public ParseException(String message) { 58 super(message); 59 } 60 61 }
http://hc.apache.org/httpcomponents-core-4.2.x/httpcore/xref/org/apache/http/ParseException.html
CC-MAIN-2017-17
en
refinedweb
This class handles reading and storage for the NEXUS block UNALIGNED. More... #include <nxsunalignedblock.h> This class handles reading and storage for the NEXUS block UNALIGNED. It overrides the member functions Read and Reset, which are abstract virtual functions in the base class NxsBlock. > Below is a table showing the correspondence between the elements of an UNALIGNED block in a NEXUS file and the variables and member functions of the NxsUnalignedBlock class that can be used to access each piece of information stored. Items in parenthesis should be viewed as "see also" items. > NEXUS Command Data Member Command Atribute Member Functions --------------------------------------------------------------------- DIMENSIONS NEWTAXA newtaxa NTAX ntax GetNTax FORMAT DATATYPE datatype GetDataType RESPECTCASE respectingCase IsRespectCase MISSING missing GetMissingSymbol SYMBOLS symbols GetSymbols EQUATE equates GetEquateKey GetEquateValue GetNumEquates (NO)LABELS labels IsLabels TAXLABELS taxonLabels GetTaxonLabels MATRIX matrix GetState GetInternalRepresentation GetNumStates GetNumMatrixRows IsPolymorphic > Definition at line 68 of file nxsunalignedblock.h.
http://phylo.bio.ku.edu/ncldocs/v2.1/funcdocs/classNxsUnalignedBlock.html
CC-MAIN-2017-17
en
refinedweb
- Understanding SOAP - SOAP Basics - Messaging Framework - The SOAP Encoding - Transport Options - Summary Some technologies, such as MP3, serve a very specific and well-defined purpose. MP3 is an audio file format specific for audio information, whereas XML, on the other hand, is a versatile technology that is used in a variety of solutions, including audio, voice, and data. One of those solutions is the specific file format for application integration that is associated with Web services. As you will see, there have been several proposals to use XML in the field of Web services, but one of the most promising standards is SOAP, the Simple Object Access Protocol. This article introduces you to the SOAP protocol. History of SOAP SOAP connects two fields that were previously largely unrelated: application middleware and Web publishing. Consequently, depending on whether your background is in middleware or Web publishing, you might understand SOAP slightly differently. Yet it's important to realize that it is neither pure middleware nor is it pure Web publishing; it really is the convergence of the two. The best approach to understanding the dual nature of SOAP is a historical one. If you review the concepts and trends that led to the development of SOAP, you will be better prepared to study it. RPCs and Middleware One of the goals of SOAP is to use XML to enable remote procedure calls (RPCs) over HTTP. Originally, RPC was developed by the Open Group () as part of its Distributed Computing Environment (DCE). When writing distributed applications, programmers spend a disproportionate amount of time implementing network protocols: opening and closing sockets, listening on ports, formatting requests, decoding responses, and more. RPC offers an easier alternative. Programmers simply write regular procedure calls and a pre-compiler generates all the protocol-level code to call those procedures over a network. Even if you have never used RPC, you might be familiar with its modern descendants: CORBA (Common Object Request Broker Architecture), DCOM (Distributed Component Object Model), and RMI (Remote Method Invocation). Although the implementations differ (and they are mostly incompatible), CORBA, DCOM, and RMI offer what is best described as an enhanced, object-oriented mechanism of implementing RPC functionality. Listing 1 is the interface to a remote server object that uses RMI. As you can see, it's not very different from a regular interface. The only remarkable aspect is that it extends the java.rmi.Remote interface and every method can throw java.rmi.RemoteException exceptions. Listing 1RemoteBooking.java package com.psol.resourceful; import java.util.Date; import java.rmi.Remote; import java.rmi.RemoteException; public interface RemoteBooking extends Remote { public Resource[] getAllResources() throws RemoteException; public Resource[] getFreeResourcesOn(Date start, Date end) throws RemoteException; public void bookResource(int resource, Date start, Date end, String email) throws RemoteException; } Where's the network code? There is none beyond what's necessary to extend the Remote interface. That's precisely the beauty of middleware: All you have to do is designate certain objects as remote and the middleware takes care of all the networking and protocol aspects for you. The way you designate remote objects varies depending on the actual technology (CORBA, RMI, or DCOM) you're using. The Downside of Middleware It's not all rosy with middleware, though. It has been successfully implemented on private networks (LANs, intranets, and the like) but has not been so successful on the Internet at large. One of the issues is that middleware uses its own protocols and most firewalls are configured to block non-HTTP traffic. You have to reconfigure your firewall to authorize those communications. Oftentimes those changes prove incompatible with the corporate security policy. Another issue is that middleware successfully addresses only one half of the equation: programming. It's not as good with the other half: deployment. Middleware significantly reduces the burden on the programmer writing distributed applications, but it does little to ease the deployment. In practice, it is significantly easier to deploy a Web site than to deploy a middleware-based application. Most organizations have invested in Web site deployment. They have hired and trained system administrators that deal with the numerous availability and security issues. They are therefore reluctant to invest again in deploying another set of servers. As you will see in a moment, SOAP directly addresses both issues. It borrows many concepts from middleware and enables RPC, but it does so with a regular Web server, which lessens the burden on system administrators. RSS, RDF, and Web Sites In parallel, the World Wide Web has evolved from a simple mechanism to share files over the Internet into a sophisticated infrastructure. The Web is universally available, and it is well understood and deployed in almost every companysmall and large. The Web's success traces back to the ease with which you can join. You don't have to be a genius to build a Web site, and Web hosts offer a simple solution to deployment. Obviously, the Web addresses a different audience than middleware, because it is primarily a publishing solution that targets human readers. RPC calls are designed for software consumption. Gradually the Web has evolved from a pure human publishing solution into a mixed mode where some Web pages are geared toward software consumption. Most of those pages are built with XML. RSS Documents RSS is a good example of the using XML to build Web sites for software rather than for humans. RSS, which stands for RDF Site Summary format, was originally developed by Netscape for its portal Web site. An RSS document highlights the main URLs in a Web vocabulary. Listing 2 is a sample RSS document. Listing 2index.rss <?xml version="1.0"?> <rdf:RDF xmlns: <channel rdf: <title>marchal.com</title> <link></link> <description> Your source for XML, Java and e-commerce. </description> <image rdf:resource="> </rdf:RDF> As you can see, Listing 2 defines a channel with two items and one image. The two items are further defined with a link and a short description. The portal picks this document up and integrates it into its content. Other applications of RSS include distributing newsfeeds. The items summarize the news and link to articles that have more details. See for an example. Although they are hosted on Web sites, RSS documents differ from plain Web pages. RSS goes beyond downloading information for browser rendering. A server downloads the RSS file and most likely integrates it in a database. Making Requests: XML-RPC The next logical step is to merge middleware with XML and the Web. How to best characterize the result depends on your point of view. To the Web programmer, adding XML to Web sites is like enhancing Web publishing with a query/response mechanism. But to the middleware programmer, it appears as if middleware had been enhanced to be more compatible with the Web and XML. This is another illustration of XML being used to connect two fields (Web publishing and middleware) that were previously unrelated. One of the earliest such implementations is probably XML-RPC. From a bird's-eye view, XML-RPC is similar to regular RPC, but the binary protocol used to carry the request on the network has been replaced with XML and HTTP. Listing 3 illustrates an XML-RPC request. The client is remotely calling the getFreeResourcesOn(). The equivalent call in Java would have been written as: BookingService.getFreeResourcesOn(startDate,endDate); As you can see in Listing 3, XML-RPC packages the call in an XML document that is sent to the server through an HTTP POST request. Listing 3An XML-RPC Request POST /xmlrpc HTTP/1.0 User-Agent: Handson (Win98) Host: joker.psol.com Content-Type: text/xml Content-length: 468 <?xml version="1.0"?> <methodCall> <methodName>com.psol.resourceful.BookingService.> Without going into all the details, the elements in Listing 3 are methodCall, which is the root of the RPC call; methodName, which states which method is to be called remotely; params, which contains one param element for every parameter in the procedure call; param, to encode the parameters; value, an element that appears within param and holds its value; dateTime.iso8601, which specifies the type of the parameter value. XML-RPC defines a handful of other types, including: i4 or int for a four-byte signed integer; boolean, with the value of 0 (false) or 1 (true); string, a string; double, for double-precision signed floating point numbers; base64, for binary streams (encoded in base64). XML-RPC also supports arrays and structures (also known as records) through the array and struct elements. Note one major difference between Listing 3 and Listing 2: the former is a request made to a server. XML-RPC goes beyond downloading files; it provides a mechanism for the client to send an XML request to the server. Obviously, the server reply is also encoded in XML. It might look like Listing 4. Listing 4An XML-RPC Encoded Response HTTP/1.0 200 OK Content-Length: 485 Content-Type: text/xml Server: Jetty/3.1.4 (Windows 98 4.10 x86) <?xml version="1.0"?> <methodResponse> <params> <param> <value> <array> <data> <value><string>Meeting room 1</string></value> <value><string>Meeting room 2</string></value> <value><string>Board room</string></value> </data> </array> </value> </param> </params> </methodResponse> From XML-RPC to SOAP XML-RPC is simple and effective, but early on its developers (Microsoft, Userland, and Developmentor) realized that they could do better. Indeed XML-RPC suffers from four serious flaws: There's no clean mechanism to pass XML documents themselves in an XML-RPC request or response. Of course the request (or response) is an XML document, but what if you issue a call to, say, a formatter? How do you pass the XML document to the formatter? As you have seen, "XML document" is not a type for XML-RPC. In fact, to send XML documents, you would have to use strings or base64 parameters, which requires special encoding and is therefore suboptimal. There's no solution that enables programmers to extend the request or response format. For example, if you want to pass security credentials with the XML-RPC call, the only solution is to modify your procedure and add one parameter. XML-RPC is not fully aligned with the latest XML standardization. For example, it does not use XML namespaces, which goes against all the recent XML developments. It also defines its own data types, which is redundant with Part 2 of the XML schema recommendation; XML-RPC is bound to HTTP. For some applications, another protocol, such as Simple Mail Transfer Protocol (SMTP, the email protocol), is more sensible. With the help of IBM, the XML-RPC designers upgraded their protocol. The resulting protocol, SOAP, is not as simple as XML-RPC, but it is dramatically more powerful. SOAP also broadens the field to cover applications that are not adequately described as remote procedure calls. NOTE Does SOAP make XML-RPC irrelevant? Yes and no. Most recent developments take advantage of SOAP's increased flexibility and power, but some developers still prefer the simpler XML-RPC protocol. Listing 5 is the SOAP equivalent to Listing 3. Decoding the SOAP request is more involved than decoding an XML-RPC request, so don't worry if you can't read this document just yet. You learn how to construct SOAP requests in the next section. Listing 5A SOAP Request POST /soap/servlet/rpcrouter HTTP/1.0 Host: joker.psol.com Content-Type: text/xml; charset=utf-8 Content-Length: 569 SOAPAction: "" <?xml version='1.0' encoding='UTF-8'?> <SOAP-ENV:Envelope xmlns: <SOAP-ENV:Body> <ns1:getFreeResourcesOn xmlns: <start xsi:2001-01-15T00:00:00Z</start> <end xsi:2001-01-17T00:00:00Z</end> </ns1:getFreeResourcesOn> </SOAP-ENV:Body> </SOAP-ENV:Envelope> Listing 6 is the reply, so it's the SOAP equivalent to Listing 4. Again, don't worry if you don't understand this listing; you will learn how to decode SOAP requests and responses in a moment. Listing 6A SOAP Response HTTP/1.0 200 OK Server: Jetty/3.1.4 (Windows 98 4.10 x86) Servlet-Engine: Jetty/3.1 (JSP 1.1; Servlet 2.2; java 1.3.0) Content-Type: text/xml; charset=utf-8 Content-Length: 704 <?xml version='1.0' encoding='UTF-8'?> <env:Envelope xmlns: <env:Body> <ns1:getFreeResourcesOnResponse xmlns: <return xmlns: <item xsi:Meeting room 1</item> <item xsi:Meeting room 2</item> <item xsi:Board room</item> </return> </ns1:getFreeResourcesOnResponse> </env:Body> </env:Envelope>
http://www.informit.com/articles/article.aspx?p=26334&amp;seqNum=7
CC-MAIN-2017-17
en
refinedweb
Hispasec Security Advisory Name : PHP Threat level : LOW Class : Information Disclosure Discovered : 2009-08-07 Published : --- Credit : Martin Noga, Matthew "j00ru" Jurczyk, Gynvael Coldwind Vulnerable : PHP 5.2.10 / 5.3.0, tested on Windows XP SP3/Vista SP2 x86 (though supposed to work on. ==[ Memory Disclosure #1 ]== The first vulnerability described here allows a potential attacker to steal 4 junk bytes from the PHP server heap, because of the lack of an appropriate sanity check regarding the JPEG file being parsed by the internal exif_scan_JPEG_header function (file ext\exif\exif.c). To be more exact, the unsecure code is listed below: --- cut --- case M_SOF0: (...) case M_SOF15: exif_process_SOFn(Data, marker, &sof_info); ImageInfo->Width = sof_info.width; ImageInfo->Height = sof_info.height; if (sof_info.num_components == 3) { ImageInfo->IsColor = 1; } else { ImageInfo->IsColor = 0; } break; (...) static void exif_process_SOFn (uchar *Data, int marker, jpeg_sof_info *result) { /* 0xFF SOSn SectLen(2) Bits(1) Height(2) Width(2) Channels(1) 3*Channels (1) */ result->bits_per_sample = Data[2]; result->height = php_jpg_get16(Data+3); result->width = php_jpg_get16(Data+5); result->num_components = Data[7]; --- cut --- As one can see, there is no validation check whether the file size is big enough to contain the bits_per_sample/height/width/num_components fields present in the code. In particular, the section size could be set to 0x0002, thus containing only 2 bytes (the length WORD) with no real section data at all. As the code goes, four structure fields are being filled with some unitialized piece of memory, two of which are being passed back to the high-level PHP script. The programmers is able to easily obtain the values by referecing the $result['COMPUTED']['Height'] and $result['COMPUTED']['Width'] objects. What is more, the above vulnerability could potentially lead to Denial of Service conditions, in case the memory being accessed by the program would not exist (this could happen if the heap allocation of size 2 would lie close to a page-boundary. According to the author's research, such a situation takes place very rarely, thus it is not considered a reasonable DoS attack. ==[ Memory Disclosure #2 ]== The second vulnerability is based on a very similar bug existing in the code. This time, one byte of the stack memory can be passed back to the attacker's script due to a wrong assumption made by the source code author. The following php_read2 function code presents the essence of the issue (file ext\standard\image.c): --- cut --- static unsigned short php_read2(php_stream * stream TSRMLS_DC) { unsigned char a[2]; /* just return 0 if we hit the end-of-file */ if((php_stream_read(stream, a, sizeof(a))) <= 0) return 0; return (((unsigned short)a[0]) << 8) + ((unsigned short)a[1]); } (...) switch (marker) { case M_SOF0: (...) case M_SOF15: if (result == NULL) { /* handle SOFn block */ result = (struct gfxinfo *) ecalloc(1, sizeof(struct gfxinfo)); length = php_read2(stream TSRMLS_CC); result->bits = php_stream_getc(stream); result->height = php_read2(stream TSRMLS_CC); result->width = php_read2(stream TSRMLS_CC); result->channels = php_stream_getc(stream); --- cut --- To be more exact, the php_stream_read "if" statement makes sure there are *ANY* bytes read from the target file, but doesn't check the actual return value. This simply means that one is able to make the functon read only one byte from the file (leaving a[1] unitialized) and then return the entire a[] array, thus letting the user access one leaked byte from the server's stack. What should be noted is that this attack can not be extended to many php_read2 function calls, since the described conditions are met only in case of the EOF marker being met after sucessfully reading one byte from the file. Such a scenario may take place only once during the exported function's calls, because of obvious reasons. ==[ Memory Disclosure #3 ] == The last vulnerability makes it possible to read a maximum of two bytes out of the server's heap memory. The vulnerable piece of code is shown below: --- cut --- static void exif_process_APP12(image_info_type *ImageInfo, char *buffer, size_t length TSRMLS_DC) { size_t l1, l2=0; if ((l1 = php_strnlen(buffer+2, length-2)) > 0) { exif_iif_add_tag(ImageInfo, SECTION_APP12, "Company", TAG_NONE, TAG_FMT_STRING, l1, buffer+2 TSRMLS_CC); if (length > 2+l1+1) { l2 = php_strnlen(buffer+2+l1+1, length-2-l1+1); exif_iif_add_tag(ImageInfo, SECTION_APP12, "Info", TAG_NONE, TAG_FMT_STRING, l2, buffer+2+l1+1 TSRMLS_CC); } } --- cut --- Apparently, the author's intention was to restict the Info string length up to (SectionLength-sizeof(WORD)- LengthOfCompany-sizeof('\0')), but instead a (length-2-l1+1) expression was used. This mistake can result in reaveling 2 additional bytes present right after the 'Info' string being copied from the data file. However, such a situation can only take place in case the string being copied is not terminated with a NULL byte - the case of specially malformed files, only. ==[ Solution ]== As for the first two vulnerabilities, some new check statements should be added in order to make sure every memory byte being passed to the PHP script is really read from a file (existing checks could also be altered). When it comes to the last one, the only thing to change is the '+' sign present inside the php_strlen() parameter. Even though some of the vulnerabilities could theoretically result in Denial of Service conditions, it is very unlikely to take place in practise, according to the authors' research. ==[ Proof of Concept ]== The PoC file should be enclosed to this advisory file, in a ready-to-use form. ==[:
http://gynvael.coldwind.pl/?id=332
CC-MAIN-2017-17
en
refinedweb
Classes and Instances. Introduction Classes, Objects, Methods and Instance Variables Declaring a Class with a Method and Instantiating an Object of a Class Declaring a Method with a Parameter Instance Variables, set Methods and get Methods Primitive Types vs.. Classes and Instances Class Instance 1 Instance 4 Instance 2 Instance 3 All instances of the same class Student Mary Black Sean Smith Niamh Connor Eoin Murphy All instances of the same class class MyClass { //field, constructor, and method declarations } class Student { // fieldsString name;int studentNo;String course; //related to other Objects // methodsvoid register(){...} void completeYear(int year){...} void graduate(){...}} Class: StudentReg Fee: E600 changeFee(double) Mary Black0345678FT211 register()completeYear(int)graduate() Sean Smith05076543FT228 Niamh Connor04565656FT211 register()completeYear(graduate() Eoin Murphy05234567FT228 register()completeYear(int)graduate() register()completeYear(int)graduate() class Student { // instance fieldsString name;int studentNo;String course; // class fieldsstatic double regFee = 600; // class methodsstatic void changeFee(double fee){...} // instance methodsvoid register(){...} } class members declared using keyword static public static final double PI=3.14159; class type constructor instance objectvariable name class Student { // fieldsString name;int studentNo;String course;// constructorStudent (String n, int s, String c){name=n;studentNo=s;course=c;} // methods...} Simple Program class Classid{ // constructorClassid(){ Data and Control } // main method to start executionpublic static void main (String[] args) { new Classid();// instantiating } } The Java system calls the main method which instantiates the program via a new on the constructor for Classid. Execution of the program proceeds from the constructor and ends with the last statement in sequence has been reached. program control class public class HelloWorld1{ // constructorHelloWorld1(){ System.out.println("Hello World yet again..."); } // main method to start executionpublic static void main (String[] args) { new HelloWorld1();// instantiating } }
http://www.slideserve.com/hiroko/classes-and-instances
CC-MAIN-2017-17
en
refinedweb
Hello... I'm learning C with "The C Programming Language" book, and I'm stuck at exercise 1-10 which asks to: "Write a program to copy its input to its output, replacing each tab by \t, each backspace by \b, and each backslash by \\. This makes tabs and backspaces visible in an unambiguous way." Here is my code: #include <stdio.h> /* copy input to output, replacing each tab by \t, each backspace by \b, and each backslash by \\ */ main() { int c; int back; while ((c = getchar()) != EOF) { if (c == '\t') { putchar('\\'); putchar('t'); } if (c == '\b') { putchar('\\'); putchar('b'); } if (c == '\\') { putchar('\\'); putchar('\\'); } if (c != '\t') if (c != '\b') if (c != '\\') putchar(c); } } Everything is working fine, except that it won't print "\b" when the user presses the backspace key. Could anyone give me a hint on how to solve this?
https://www.daniweb.com/programming/software-development/threads/360365/k-r-exercise-1-10-problem
CC-MAIN-2017-17
en
refinedweb
How To Render Light Maps in XNA light map effect texture ldquo new basicmodel camera mapping Light mapping is a method for handling the lighting of a surface, including shadows, for static geometry. There are two mainstream techniques, (shadow mapping and shadow volumes) that can render dynamic models and static models in real-time, but they cost some performance. Ray-tracing is the highly-realistic method for calculating the lighting properties of objects, but the cost in time is very high and it is hard to render a complex scene in real-time. For static models, light mapping is a better method, because it requires just a light map texture and a second set of texture coordinates in the model file (such as a .X file, exported by Panda 3ds Max plug-in. Light mapping can be used with multiple light sources at no additional cost. For example, shadow mapping will add some extra rendering passes to the scene with multiple light sources). How can we generate a light map? We can use the "render to texture" function of 3ds Max to generate a light map and save the texture coordinates in UV Channel 2. The light map is storing the intensities of light on the object’s surface. It can be an RGB color texture. When we render a model with light map, each pixel from the usual color map is modulated by the color stored in the light map (access via the second texture coordinate stored in the FVFData block of a .X file). You can find the complete source code (for XNA Game Studio 3.0) for this article and 3ds Max files here W,S,A,D to move the camera, click and drag mouse middle button to rotate the camera Part 1: Create 3D models and their light maps Step1: (file: Content\step1, original material.max)Add Box and Cylinder - Assign textures to their Diffuse Maps Add Target Directional Light Shadows: - Lenable Shadows [V] on - select "Ray Traced Shadows" or "Shadow Map" Step 2: (file: Content\step2,render t texture.max) Selected Box and Cylinder Rendering -> Render to Texture Output Page: - push “Add” button and select ”LightingMap” in Add Texture Elements - set Target Map Slot as “Self-Illumination” - click the radio of “Output Into Source“ - Object==>set “Use Automatic Unwrap“ - set Channel as2 Delete the ”Target Directional Light” The light map will be generated and assigned automatically in the “Self-Illumination” map Push F9 or click “Quick Render”, the shadow will appear on Box01 and Cylinder01 Step3: (file: Content\step3,use shader.max)Select Box1 and Change Material to be Direct Shader and "Keep old material as sub-material" Select the Shader of "Content\LightMapEffect.fx" Assign the Diffuse Map and its Light Map and Select the Technique as "Technique1" Push F3 to show the Box01 Push F9 or click “Quick Render”, the shadow will appear on Box01 and Cylinder01(the scene without light object) Step4: Export 3D Models with Panda plug-in At last, you can export the model using the Panda plug-in; the second texture coordinate will be written into the FVFData block in the .X file. Remember to enable the "Include .fx file" and "Include .fx parameters" options. Step5: Show the light map resultUse the max file of Step2 If you want to see the light map in 3ds max with sharp shadow, you must re-assign the new materials (with DirectX Shader – LightMap) to Box01 and Cyliner01. Open Material Editor and click the new material slot DirectX Manager page: - Select LightMap and enable “Enable Plugin Material” - BaseTexture: RedB.jpg [V] - Mapping channel=1 - light map: Box01LightMap.tga [V] - Mapping channel=2 Part 2: XNA Project (LightMapWindows) Step 1 - Load model and measure its size - Add camera (reference Book: Learning XNA 3.0 chapter11 Flying Camera) namespace LightMapWindows { /// /// This is the main type for your game /// public class LightMap : Microsoft.Xna.Framework.Game { GraphicsDeviceManager graphics; SpriteBatch spriteBatch; //add ContentManager content; BasicModel basicModel; Camera camera; public LightMap() { graphics = new GraphicsDeviceManager(this); Content.RootDirectory = "Content"; // content = new ContentManager(Services); content.RootDirectory = "Content"; IsMouseVisible = true; } protected override void LoadContent() { // Create a new SpriteBatch, which can be used to draw textures. spriteBatch = new SpriteBatch(GraphicsDevice); // TODO: use this.Content to load your game content here camera = new Camera(this); Components.Add(camera); basicModel = new BasicModel(); basicModel.LoadModel(content, @"pillar"); basicModel.MeasureModel(); camera.SetSpeed(basicModel.GetModelRadius() / 100.0f); float modelRadius = basicModel.GetModelRadius(); Vector3 modelCenter = basicModel.GetModelCenter(); Vector3 eyePosition = modelCenter; eyePosition.Z += modelRadius * 2; eyePosition.Y += modelRadius; float aspectRatio = (float)Window.ClientBounds.Width / (float)Window.ClientBounds.Height; float nearClip = modelRadius / 100; float farClip = modelRadius * 100; camera.SetCamera(eyePosition, modelCenter, Vector3.Up); camera.SetFOV(MathHelper.PiOver4, aspectRatio, nearClip, farClip); } Step 2 - Render model with its effect instance protected override void Draw(GameTime gameTime) { GraphicsDevice.Clear(Color.CornflowerBlue); // TODO: Add your drawing code here basicModel.DrawLightMap(camera); //basicModel.Draw(camera);//basic effect base.Draw(gameTime); } namespace AkBasicModel { class BasicModel { ...... public void DrawLightMap(Camera camera) { //Set transfroms Matrix[] transforms = new Matrix[m_model.Bones.Count]; m_model.CopyAbsoluteBoneTransformsTo(transforms); //Loop through meshes and their effects foreach (ModelMesh mesh in m_model.Meshes) { foreach (Effect effect in mesh.Effects) { //Set Effect parameters effect.CurrentTechnique = effect.Techniques["Technique1"]; effect.Begin(); effect.Parameters["World"].SetValue(GetWorld() * transforms[mesh.ParentBone.Index]); effect.Parameters["View"].SetValue(camera.view); effect.Parameters["Projection"].SetValue(camera.projection); effect.End(); } //Draw mesh.Draw(); } } Note: GameDev.net moderates article comments.
https://www.gamedev.net/resources/_/creative/visual-arts/how-to-render-light-maps-in-xna-r2721
CC-MAIN-2017-17
en
refinedweb
before 13-47. Namespace getSubscriptionNamespace() const; Retrieves the name of the recipient of the notification. Possible return values are E-mail address, the HTTP url and the PL/SQL procedure, depending on the protocol. string getRecipientName() const; Retrieves the presentation mode in which the client receives notifications. Valid Presentation values are defined in Table 13-47. Presentation getPresentation() const; Retrieves the protocol in which the client receives notifications. Valid Protocol values are defined in Table 13-47. Protocol getProtocol() const; Returns TRUE if Subscription is NULL or FALSE otherwise. bool isNull() const; Assignment operator for Subscription. void operator=( const Subscription& sub); Registers a notification callback function when the protocol is PROTO_CBK, as defined in Table 13-47. Context registration is also included in this call. void setCallbackContext( void *ctx); Specifies the list of database server distinguished names from which the client receives notifications. void setDatabaseServerNames( const vector<string>& dbsrv); Sets the context that the client wants to get passed to the user callback. If the protocol is set to PROTO_CBK or not specified, this attribute must); Sets the presentation mode in which the client receives notifications. void setPresentation( Presentation pres); Sets the Protocol in which the client receives event notifications, as defined in Table 13-47. where the subscription is used. The subscription name must be consistent with its namespace. Default value is NS_AQ. void setSubscriptionNamespace( Namespace nameSpace); Sets the name of the recipient of the notification. void setRecipientName( const string& name);
http://docs.oracle.com/cd/E11882_01/appdev.112/e10764/reference032.htm
CC-MAIN-2017-17
en
refinedweb
> > . Assuming you are right, why do I see the same 1.6M profile with: > main = mapM2 (_scc_ "p" (\x -> print x)) ([1..102400] :: [Integer]) >> return () >))) Is >>= not lazy? Sengan
http://www.haskell.org/pipermail/haskell/2000-October/006151.html
CC-MAIN-2014-35
en
refinedweb
Tornado Unittesting with Generators This is the second installment of what is becoming an ongoing series on unittesting in Tornado, the Python asynchronous web framework. A couple months ago I shared some code called assertEventuallyEqual, which tests that Tornado asynchronous processes eventually arrive at the expected result. Today I’ll talk about Tornado’s generator interface and how to write even pithier unittests. Late last year Tornado gained the “gen” module, which allows you to write async code in a synchronous-looking style by making your request handler into a generator. Go look at the Tornado documentation for the gen module. I’ve extended that idea to unittest methods by making a test decorator called async_test_engine. Let’s look at the classic way of testing Tornado code first, then I’ll show a unittest using my new method. Classic Tornado Testing Here’s some code that tests AsyncMongo, bit.ly’s MongoDB driver for Tornado, using a typical Tornado testing style: def test_stuff(self): import sys; print >> sys.stderr, 'foo' db = asyncmongo.Client( pool_id='test_query', host='127.0.0.1', port=27017, dbname='test', mincached=3 ) def cb(result, error): self.stop((result, error)) db.collection.remove(safe=True, callback=cb) self.wait() db.collection.insert({"_id" : 1}, safe=True, callback=cb) self.wait() # Verify the document was inserted db.collection.find(callback=cb) result, error = self.wait() self.assertEqual([{'_id': 1}], result) # MongoDB has a unique index on _id db.collection.insert({"_id" : 1}, safe=True, callback=cb) result, error = self.wait() self.assertTrue(isinstance(error, asyncmongo.errors.IntegrityError)) Full code in this gist. This is the style of testing shown in the docs for Tornado’s testing module. Tornado Testing With Generators Here’s the same test, rewritten using my async_test_engine decorator: @async_test_engine(timeout_sec=2) def test_stuff(self): db = asyncmongo.Client( pool_id='test_query', host='127.0.0.1', port=27017, dbname='test', mincached=3 ) yield gen.Task(db.collection.remove, safe=True) yield gen.Task(db.collection.insert, {"_id" : 1}, safe=True) # Verify the document was inserted yield AssertEqual([{'_id': 1}], db.collection.find) # MongoDB has a unique index on _id yield AssertRaises( asyncmongo.errors.IntegrityError, db.collection.insert, {"_id" : 1}, safe=True) A few things to note about this code: First is its brevity. Most operations and assertions about their outcomes can coëxist on a single line. Next, look at the @async_test_engine decorator. This is my subclass of the Tornado-provided gen.engine. Its main difference is that it starts the IOLoop before running this test method, and it stops the IOLoop when this method completes. By default it fails a test that takes more than 5 seconds, but the timeout is configurable. Within the test method itself, the first two operations use remove to clear the MongoDB collection, and insert to add one document. For both those operations I use yield gen.Task, from the tornado.gen module, to pause this test method (which is a generator) until the operation has completed. Next is a class I wrote, AssertEqual, which inherits from gen.Task. The expression yield AssertEqual(expected_value, function, arguments, ...) pauses this method until the async operation completes and calls the implicit callback. AssertEqual then compares the callback’s argument to the expected value, and fails the test if they’re different. Finally, look at AssertRaises. This runs the async operation, but instead of examining the result passed to the callback, it examines the error passed to the callback, and checks that it’s the expected Exception. Full code for async_test_engine, AssertEqual, and AssertError are in this gist. The code relies on AsyncMongo’s convention of passing (result, error) to each callback, so I invite you to generalize the code for your own purposes. Let me know what you do with it, I feel like there’s a place in the world for an elegant Tornado test framework. (Note: Opinions expressed in this article and its replies are the opinions of their respective authors and not those of DZone, Inc.)
http://css.dzone.com/articles/tornado-unittesting-generators
CC-MAIN-2014-35
en
refinedweb
TODO Additional Thoughts Basically a TODO list Transient and Project docs in a single module project? Separate deployment of snapshots. Report categorisation. 1 Comment John Allen In regards to sites I just wanted to re-iterate my dev@ posting regarding the lack of proper URL to POM UID namespace management in the URL inheritence and extension schemes used in site generation. I may have a latest version of a project and this may have a nice site but i need to be able to continue to access, and critically link to (from other projects), older versions of the same project. The 'latest' version can always be made accessible via some nice URL mapping e.g. == For me a site is simply another 'view' onto a project's products and in the same way one can access different versions of those products based on specify the version one can not access the various different sites. Note this can be done by manually specifying the project.url and project.distributionManagement.site.url for every project such that the URLS include the group, artefact and versionId information but this is error prone and nasty. In a nutshell, for me a project site's is just another project artefact and therefore its identity and namespace should be managed in the same robust way the the other project products are (jars, pom, src-jars etc), i.e. stored in a unique federated namespace. See MNG-2679
http://docs.codehaus.org/display/MAVEN/best+practices+-+site+management
CC-MAIN-2014-35
en
refinedweb
J standards rules on your build systems. Here are some useful JArchitect features: CQLinq The cool and powerful feature of the JArchitect is the support for Code Query Linq (CQLinq). The CQLinq lets the developers to query the Java code using LINQ queries, for example CQlinq can answer to the following requests: - Which methods create objects of a particular class? from m in Methods where m.CreateA (“MyPackage.MyClass”) select m -Which methods assign a particular field? from m in Methods where m.AssignField(“MyNamespace.MyClass.m_Field”) select m -Which complex method is not enough commented? from m in Application.Methods where m.CyclomaticComplexity > 15 && m.PercentageComment < 10. Select new { m, m.CyclomaticComplexity, m.PercentageComment }. You can also be warned automatically when a CQLinq query returns a certain result. For example I don’t want that my User Interface layer to depend directly on the DataBase layer: warnif count > 0 from p in Packages where p.IsUsing(“DataLayer”) && (n.Name == @”UILayer”) select p JArchitect provides more than 80 metrics that are related to your code organization, code quality and code structure. These metrics could be used in CQLinq to create your coding custom rules, JArchitect could be integrated into your build system to enforce the quality of your codebase. Dependency graph The dependency graph is very useful to explore the existing codebase, we can go inside any project, package or class to discover dependencies between code elements. Dependency Matrix The DSM (Dependency Structure Matrix) is a compact way to represent and navigate across dependencies between components.. Metric view contain types - Types contains methods and fields With treemap rectangles represent code elements. The option level determines the kind of code element represented by unit rectangles. The option level can take the 5 values: project, package, type, method and field. The two screenshots below shows the same code base represented through type level on left, and namespace level on right. If a CQLinq query is currently edited, the set of code elements matched by the query is shown on the treemap as a set of blue rectangles. It’s very helpful to see visually the code elements concerned by a specific CQLinq request. Compare build In software development, products are constantly evolving. Hence, developers and architects must pay attention to modifications in code bases. Modern source code repositories handle incremental development. They can enumerate differences between 2 versions of source code files. JArchitect can tell you what has been changed between 2 builds but it does more than simple text comparison. It can distinguish between comment change and code change, between what has been added/removed and what has just been modified. With JArchitect, you can see how code metrics are evolving and you know if coupling between components is increasing or not. JArchitect can also continuously check modifications to warn you as soon as a breaking compatibility changes. Generate custom reports JArchitect can analyze source code and Java Projects through JArchitect.Console.exe. Each time it analyzes a code base, JArchitect. JArchitect provides a pro license to all open source Java contributors. It could be useful to analyze their code base. So, if you would like to give it a try, check here for more details. Happy coding!Related Whitepaper:
http://www.javacodegeeks.com/2013/03/jarchitect-became-free-for-java-open-source-contributors.html
CC-MAIN-2014-35
en
refinedweb
How to: Access Application Data Last modified: June 28, 2011 Applies to: InfoPath 2013 | InfoPath Forms Services | Office 2013 | SharePoint Server 2013 The InfoPath managed code object model provides objects and collections that can be used to gain access to information about the InfoPath application, including information related to a form's underlying XML document and the form definition (.xsf) file. This data is accessed through the top-level object in the InfoPath object model hierarchy, which is instantiated by using the Application class. In an InfoPath managed code form template project created using Visual Studio 2012, you can use the this (C#) or Me (Visual Basic) keyword to access an instance of the Application class that represents the current InfoPath application, which can then be used to access the properties and methods of the Application class. Displaying the Application Name, Version, and Language ID In the following example, the Name and Version properties of the Application class are used to return the name and version number of the running instance of InfoPath. The LanguageSettings property is then used to return a LanguageSettings object, which in turn is used to return the LCID (a four-digit number) for the language that is currently being used for the InfoPath user interface language. Finally, all of this information is displayed in a message box. This example requires a using or Imports directive for the Microsoft.Office.Core namespace in the declarations section of the form code module. string appName = this.Application.Name; string appVersion = this.Application.Version; LanguageSettings langSettings = (LanguageSettings)this.Application.LanguageSettings; int langID = langSettings.get_LanguageID(MsoAppLanaguageID.msoLanguageIDUI); MessageBox.Show( "Name: " + appName + System.Environment.NewLine + "Version: " + appVersion + System.Environment.NewLine + "Language ID: " + langID);
http://msdn.microsoft.com/en-us/library/aa943279.aspx
CC-MAIN-2014-35
en
refinedweb
Managed desktop apps and Windows Runtime Visual Studio has made it extremely easy to write managed Windows Store apps that consume Windows Runtime APIs. However, if you’re writing a managed desktop app, there are a few things you’ll need to do manually in order to consume the Windows Runtime APIs. This paper provides information about what you need to do so that your desktop app can consume the Windows Runtime APIs. It assumes that the reader is familiar with writing managed desktop apps. This information applies to Windows 8. Introduction Visual Studio has been enhanced to make Windows Store app development incredibly simple, including the ability to easily consume new Windows Runtime APIs. This extra simplicity gets activated when you choose to create one of the new “Windows Store” project types. However, if you create a new managed desktop app that consumes the new Windows Runtime APIs, there are some things you need to do manually. In this paper, I’ll list out some tips to help you do this. Targeting Windows 8 After you’ve created your new managed desktop app, you need to make one manual tweak in the project file to tell Visual Studio that you want to target Windows 8: This is required in order to add references to Windows Metadata files. For step-by-step directions, take a look at How to: Add or Remove References By Using the Reference Manager, and scroll down to Windows Tab -> Core Subgroup. Consuming standard Windows Runtime types As a reminder, the MSDN library documentation identifies each Windows API as being applicable to desktop apps, Windows Store apps, or both. Therefore, from your desktop apps, make sure that you only use APIs documented with this: - Applies to: desktop apps only or this: - Applies to: desktop apps | Windows Store apps Furthermore, each API might have its own set of documented caveats or dependencies (for example, some APIs only work when there is a UI framework created). So be sure to read through the documentation for each Windows Runtime class you intend to use, to make sure it will work with your desktop app. That said, your desktop app can’t consume much of anything from the Windows Runtime until you prepare your project with one essential reference. The Windows Runtime defines some standard classes and interfaces in System.Runtime, such as IEnumerable, that are used throughout the Windows Runtime libraries. By default, your managed desktop app won’t be able to find these types, and so you must manually reference System.Runtime before you can do anything meaningful with Windows Runtime classes. To create this manual reference: - 1. Navigate to your managed desktop app project in the Solution Explorer. - 2. Right-click the References node and click Add Reference. - 3. Click the Browse tab. - 4. Click Browse…. - 5. Navigate to the System.Runtime.dll façade. You can generally find this in a path similar to: %ProgramFiles(x86)%\Reference Assemblies\Microsoft\Framework\.NETFramework\v4.5\Facades\System.Runtime.dll Now you can add references to other Windows metadata files and use the applicable Windows Runtime APIs for your desktop app. Consuming Windows COM APIs Windows 8 contains some useful new APIs available via COM. This might seem out of scope here, but these new COM APIs are often used in conjunction with the Windows Runtime to work with some of the new Windows features around the Windows Store app model. For example, if you’re writing a managed desktop app that enumerates over installed Windows Store app packages, you might use the Windows Runtime PackageManager class to discover those installed Windows Store apps, in conjunction with the COM interface IAppxManifestReader to retrieve information about those Windows Store apps from their manifest. To consume these COM interfaces, you’ll need to generate a type library and then a managed assembly containing the definitions of those COM interfaces: - 1. Open a Developer Command Prompt for Visual Studio. - 2. Run a command similar to this to generate the type library: midl %ProgramFiles(x86)%\Windows Kits\8.0\Include\winrt\<IDL Filename> /out <path>\<TLB Filename> - 3. Run a command similar to this to convert the type library you just generated into a .NET assembly: tlbimp <path>\<TLB Filename> /namespace:<Namespace> /out:<path>\<Assembly filename> - 4. Now, inside Visual Studio, add a reference to the assembly you just created: - a. Navigate to your managed desktop app project in the Solution Explorer. - b. Right-click the References node and click Add Reference. - c. Click the Browse tab. - d. Click Browse…. Troubleshooting A couple of troubleshooting steps to be aware of. - In some cases you might be able to skip generating the .NET Assembly in step 3, and just directly add a reference to the type library you created in step 2 in Visual Studio’s Add Reference dialog box. However, this doesn’t always work, and Visual Studio might fail when you try to reference some type libraries you create this way. In such cases, you can usually get things working by generating and referencing a .NET assembly as in steps 3 and 4. - When you add the reference to the .NET assembly in step 4, by default Visual Studio sets Embed Interop Types to True. You can view this by right-clicking the newly-referenced assembly under the References folder in the Solution Explorer. This setting may cause errors later when you compile your project, such as error CS1752: Interop type '<type name>' cannot be embedded. Use the applicable interface instead. This is a pretty common problem, often with an easy solution. See Interop Type Cannot Be Embedded for more information. Related topics
http://msdn.microsoft.com/en-us/library/windows/apps/jj856306.aspx
CC-MAIN-2014-35
en
refinedweb
I have a simple code in Lejos : import lejos.nxt.*; public class simple{ public static void main(String[] args) throws Exception { Motor.A.rotate(500); } } Motor A can run and stop normally with the NXT cable, but when I plug this Converter cable, motor A can't stop run. How can I make this motor run normally with the converter cable ? Because I want to make 2 motors run together in 1 motor port. Please your response ..
http://www.lejos.org/forum/viewtopic.php?p=21864
CC-MAIN-2014-35
en
refinedweb
Integrating IM applications with Office This article describes how to configure an instant message (IM) client application so that it integrates with the social features in Office 2013, including displaying presence and sending instant messages from the contact card. Last modified: July 01, 2013 Applies to: Excel 2013 | Lync 2013 | Office 2013 | OneNote 2013 | Outlook 2013 | PowerPoint 2013 | Project 2013 | SharePoint Server 2013 | VBA | Visio 2013 | Word 2013 If you have any questions or comments about this technical article or the processes that it describes, you can contact Microsoft directly by sending an email to docthis@microsoft.com. Office 2013 provides rich integration with IM client applications, including Lync 2013. This integration provides users with IM capabilities from within Word 2013, Excel 2013, PowerPoint 2013, Outlook 2013, Visio 2013, Project 2013, and OneNote 2013 as well as providing presence integration on SharePoint 2013 pages. Users can see the photo, name, presence status, and contact data for people in their contacts list. They can start an IM session, video call, or phone call directly from the contact card (the UI element in Office 2013 that surfaces contact information and communication options). Office 2013 makes it easy to stay connected to your contacts without taking you outside of your email or documents. You can customize an IM client application so that it communicates with Office. Specifically, you can modify your IM application so that it displays the following information within the Office UI: Contact photo. Contact name. Contact personal status note. Contact presence status. Contact availability string (for example, "Available" or "Out of Office"). Contact capability string (for example, "Video Ready"). One-click IM launch. One-click video call launch. One-click phone call launch (including SIP, phone number, voice mail, and call new number). Contact management (add to IM group). Contact location and time zone. Contact data, phone number, email address, title, and company name. To enable this integration with Office, an IM client application must implement a set of interfaces that Office provides to connect to it. The APIs for this integration are included in the UCCollborationLib namespace that is contained in the Microsoft.Office.UC.dll file, which is installed with certain versions of Office 2013. The UCCollaborationLib namespace includes the interfaces that you must implement to integrate with Office. When an Office 2013 application starts, it goes through the following process to integrate with the default. Discovering the IM application The Office application looks for several specific keys and entries in the registry to discover the default IM client application. If it discovers a default IM client application, it then attempts to connect to it. The process that the Office application goes through to discover the default IM client application is as follows: The Office application looks to see if the HKEY_CURRENT_USER\Software\IM Providers\DefaultIMApp subkey in the registry is set and reads the application name listed there. The Office application then reads the HKEY_CURRENT_USER\Software\IM Providers\Application name\UpAndRunning key and monitors the value for changes. The Office application next reads the HKEY_LOCAL_MACHINE\Software\IM Providers\Application name registry key and gets the ProcessName and class ID (CLSID) values stored there. Once the IM client application has completed its start sequence successfully and registered all of the classes correctly for the presence integration, it sets the HKEY_CURRENT_USER\Software\IM Providers\Application name\UpAndRunning key to "2", indicating that the client application is running. When the Office application discovers that the HKEY_CURRENT_USER\Software\IM Providers\Application name\UpAndRunning key has been set to "2", it checks the list of running processes on the computer for the process name of the IM client application. Once the Office application finds the process that the IM client application uses, the Office application calls CoCreateInstance using the CLSID to establish a connection to the IM client application as an out-of-process COM server. Authenticating the connection to the IM application After the Office application establishes a connection to the IM client application, it then does the following: The Office application calls IUnknown::QueryInterface method to check for the IUCOfficeIntegration interface. The Office application then calls the IUCOfficeIntegration.GetAuthenticationInfo method, passing in the highest supported integration version (for example, "15.0.0.0"). If the IM client application supports the version of Office passed in as a parameter, the application returns the following hard-coded XML string to the calling code: <authenticationinfo> If the IM client application fails to return a value, the Office application calls the GetAuthenticationInfo method again with the next highest supported version of Office (for example, "14.0.0.0"). Once Office determines that the IM client application supports IM and presence integration, it connects to a required set of interfaces to finish initializing. (For more information, see Connecting to required interfaces.) If the Office application encounters an error on any of the steps above, it backs out and presence integration is not established again during the session of the Office application. Connecting to required interfaces After authenticating the connection to the IM client application, the Office application attempts to connect to a set of required interfaces that the IM client application must expose. The Office application accomplishes this by doing the following: The Office application gets an ILyncClient object by calling the IUCOfficeIntegration.GetInterface method, passing in the oiInterfaceLyncClient constant from the UCCollaborationLib.OIInterface enumeration. The Office application gets an IAutomation object by calling the IUCOfficeIntegration.GetInterface method, passing in the oiInterfaceAutomation constant from the OIInterface enumeration. The Office application sets up the _ILyncClientEvents event listener. The Office application sets up the _IUCOfficeIntegrationEvents event listener. The Office application gets the sign-in state from the IM client application by accessing the ILyncClient.State property. The Office application gets the capabilities of the IM client application by calling the IUCOfficeIntegration.GetSupportedFeatures method, which returns a flag from the UCCollaborationLib.OIFeature enumeration. The Office application accesses the ILyncClient.Self property to get a reference to an ISelf object. Retrieving the capabilities of the local user The Office application gets the capabilities of the local user by doing the following: If the IM client application supports the IClient2 interface, Office tries to get an IContactManager object by accessing the IClient2.PrivateContactManager property. If the IM application does not support the IClient2 interface, Office application gets an IContactManager object by accessing the ILyncClient.ContactManager property. The IM client application must successfully return an IContactManager object before any other IM capabilities can be established. The Office application accesses the ILyncClient.Uri property and then calls IContactManager.GetContactByUri to get the IContact object associated with the local user. The Office application then makes several calls to IContact.CanStart to establish the capabilities of the local user, passing in the values for ModalityTypes.ucModalityInstantMessage and ModalityTypes.ucModalityAudioVideo successively. Retrieving contact presence The Office application gets contact presence, including the local user, by doing the following: The Office application calls IContact.GetContactInformation to get a presence item from the contact. The Office application then subscribes to presence status changes from the contact. It calls IContactManager.CreateSubscription to get an IContactSubscription object. It then calls IContactSubscription.AddContact to add the contact to the subscription and then calls IContactSubscription.Subscribe to get changes in the contact's status. If the IM application supports IContact2, Office attempts to get presence information by calling IContact2.BatchGetContactInformation2. The Office application then retrieves the presence properties for the contact by calling IContact.BatchGetContactInformation. The Office application can get a second set of presence properties by accessing the IContact.Settings property. Finally, the Office application gets the contact's group membership by accessing the IContact.CustomGroups property. This returns an IGroupCollection collection that includes all of the IGroup objects that the contact belongs to. Disconnecting from the IM application When the Office 2013 application detects the OnShuttingDown event from the IM application, it disconnects silently. However, if the Office application shuts down before the IM application, the Office application does not guarantee that the connection is cleaned up. The IM application must handle client connection leaks. As mentioned previously, the IM-capable Office 2013 applications look for specific keys, entries, and values in the registry to discover the IM client application to connect to. These registry values provide the Office application with the process name and CLSID of the class that acts as the entry point to the IM client application's object model (that is, the class that implements the IUCOfficeIntegration interface). The Office application co-creates that class and connects as a client to the out-of-process COM server in the IM client application. Use Table 1 to identify the keys, entries, and values that must be written in the registry to integrate an IM client application with Office. There are three interfaces from the UCCollaborationLib namespace that the executable (or COM server) of an IM client application must implement so that it can integrate with Office. If these interfaces are not implemented, the Office application backs out during the initialization process and the connection with the IM client application is not established. The required interfaces are as follows: IUCOfficeIntegration—Although not required, the _IUCOfficeIntegrationEvents interface should also be implemented in the same derived class. ILyncClient—Although not required, the _ILyncClientEvents interface should also be implemented in the same derived class. - IUCOfficeIntegration interface The IUCOfficeIntegration interface provides the entry-point for an Office application to connect to the IM client application. The interface defines three methods that an Office application calls as part of the process of initiating a connection with the IM client application. The class that implements the IUCOfficeIntegration interface must be co-creatable so that Office can co-create an instance of it. In addition, it must expose the CLSID that is entered as the value for the GUID entry in the HKEY_LOCAL_MACHINE\Software\IM Providers\Application name registry key. The class that inherits from IUCOfficeIntegration should also implement the _IUCOfficeIntegrationEvents interface. The _IUCOfficeIntegrationEvents interface contains the members that expose the event handlers of the IUCOfficeIntegration interface. Table 2 shows the members that must be implemented in the class that inherits from IUCOfficeIntegration and _IUCOfficeIntegration. Use the following code to define a class that inherits from the IUCOfficeIntegration and _IUCOfficeIntegration interfaces within an IM client application. // An example of a class that can be co-created and can integrate // with Office as an IM provider. [ClassInterface(ClassInterfaceType.None)] [ComSourceInterfaces(typeof(_IUCOfficeIntegrationEvents))] [Guid("{CLSID value}"), ComVisible(true)] public class LitwareClientAppObject : IUCOfficeIntegration { // Implementation details omitted. } The GetAuthenticationInfo method takes a string as an argument for the version parameter. When the Office application calls this method, it passes in one of two strings for the argument, depending on the version of Office. When the Office application supplies the method with the version of Office that the IM client application supports (that is, supports the functionality), the GetAuthenticationInfo method returns a hard-coded XML string "<authenticationinfo>". Use the following code to implement the GetAuthentication method within the IM client application code. public string GetAuthenticationInfo(string _version) { // Define the version of Office that the IM client application supports. string supportedOfficeVersion = "15.0.0.0"; // Do a simple check for equivalency. if (supportedOfficeVersion == _version) { // If the version of Office is supported, this method must // return the string literal "<authenticationinfo>" exactly. return "<authenticationinfo>"; } else { return null; } } The GetInterface method shuttles references to classes to the calling code, depending on what is passed in as an argument for the interface parameter. When an Office application calls the GetInterface method, it passes in one of two values for the interface parameter: either the oiInterfaceILyncClient constant (1) or the oiInterfaceIAutomation constant (2) of the UCCollaborationLib.OIInterface enumeration. If the Office application passes in the oiInterfaceILyncClient constant, the GetInterface method returns a reference to a class that implements the ILyncClient interface. If the Office application passes in the oiInterfaceIAutomation constant, the GetInterface method returns a class that implements the IAutomation interface. Use the following code example to implement the GetInterface method within the IM client application code. public object GetInterface(string _version, OIInterface _interface) { // These objects implement the ILyncClient or IAutomation // interfaces respectively. There is no restriction on what these // classes are named. IMClient imClient = new IMClient(); IMClientAutomation imAutomation = new IMClientAutomation(); // Return different object references depending on the value passed in // for the _interface parameter. switch (_interface) { // The calling code is asking for an object that inherits // from ILyncClient, so it returns such an object. case OIInterface.oiInterfaceILyncClient: { return imClient; } // The calling code is asking for an object that inherits // from IAutomation, so it returns such an object. case OIInterface.oiInterfaceIAutomation: { return imAutomation; } default: { throw new NotImplementedException(); } } } The GetSupportedFeatures method returns information about the IM features that the IM client application supports. It takes a string for its only parameter, version. When the Office application calls the GetSupportFeatures method, the method returns a value from the UCCollaborationLib.OIFeature enumeration. The returned value specifies the capabilities of the IM client, where each capability of the IM client application is indicated to the Office application by adding a flag to the value. Use the following code example to implement the GetSupportFeatures method within the IM client application code. ILyncClient interface The ILyncClient interface maps to the capabilities of the IM client application itself. It exposes properties that refer to the person who is signed into the application (the local user, represented by the UCCollaborationLib.ISelf interface), the state of the application, the list of contacts for the local user, and several other settings. When it's trying to connect to the IM client application, the Office application gets a reference to an object that implements the ILyncClient interface. From that reference, Office can access much of the functionality of the IM client application. In addition, the class that implements the ILyncClient interface should also implement the _ILyncClientEvents interface. The _ILyncClientEvents interface exposes several of the events that are required for monitoring the state of the IM client application. Table 3 shows the members that must be implemented in the class that inherits from ILyncClient and _ILyncClientEvents. During the initialization process, Office accesses the ILyncClient.State property. This property needs to return a value from the UCCollaborationLib.ClientState enumeration. The State property stores the current status of the IM client application. It must be set and updated throughout the IM client application session. When the IM client application signs in, signs out, or shuts down, it should set the State property. It is best to set this property within the ILyncClient.SignIn and ILyncClient.SignOut methods, as the following example demonstrates. // This field is of a type that implements the // IAsynchronousOperation interface. private IMClientAsyncOperation _asyncOperation = new IMClientAsyncOperation(); // This field is of a type that implements the ISelf interface. private IMClientSelf _self; public IMClientAsyncOperation SignIn(string _userUri, string _domainAndUser, string _password, object _IMClientCallback, object _state) { ClientState _previousClientState = this._clientState; this._clientState = ClientState.ucClientStateSignedIn; // The IMClientStateChangedEventData class implements the // IClientStateChangedEventData interface. IMClientStateChangedEventData eventData = new IMClientStateChangedEventData(_previousClientState, this._clientState); if (_userUri != null) { // During the sign-in process, create a new contact with // the contact information of the currently signed-in user. this._self = new IMClientSelf(IMContact.BuildContact(_userUri)); } // Raise the _ILyncClientEvents.OnStateChanged event. OnStateChanged(this, eventData as UC.ClientStateChangedEventData); return this._asyncOperation; } } IAutomation interface The IAutomation interface automates features of the IM client application. It can be used to start conversations, join conferences, and provide extensibility window context. Table 4 shows the members that must be implemented in the class that inherits from IAutomation. In addition to the three required interfaces discussed previously, there are several other interfaces that are important for enabling contact presence functionality in Office. These include the following: The IContact or IContact2 interface. - The IContactManager and _IContactManagerEvents interfaces. The IGroup and IGroupCollection interfaces. The IContactSubscription interface. The IContactEndPoint interface. The ILocaleString interface IContact interface The IContact interface represents an IM client application user. The interface exposes presence, available modalities, group membership, and contact type properties for a user. To start a conversation with another user, you must provide that user instance of IContact. Table 5 shows the members that must be implemented in the class that inherits from IContact. During the initialization process, the Office application calls the IContact.CanStart method to determine the IM capabilities for the local user. The CanStart method takes a flag from the UCCollaborationLib.ModalityTypes enumeration as an argument for the _modalityTypes parameter. If the current user can engage in the requested modality (that is, the user is capable of instant messaging, audio and video messaging, or application sharing), the CanStart method returns true. public bool CanStart(ModalityTypes _modalityTypes) { // Define the capabilities of the current IM client application // user by using flags from the ModalityTypes enumeration. ModalityTypes userCapabilities = ModalityTypes.ucModalityInstantMessage | ModalityTypes.ucModalityAudioVideo | ModalityTypes.ucModalityAppSharing; // Perform a simple test for equivalency. if (_modalityType == userCapabilities) { return true; } else { return false; } } The GetContactInformation method retrieves information about the contact from the IContact object. The calling code needs to pass in a value from the UCCollaborationLib.ContactInformationType enumeration for the _contactInformationType parameter, which indicates the data to be retrieved. public object GetContactInformation( ContactInformationType _contactInformationType) { // Determine the information to return from the contact's data based // on the value passed in for the _contactInformationType parameter. switch (_contactInformationType) { case ContactInformationType.ucPresenceEmailAddresses: { // Return the URI associated with the contact. string returnValue = this.Uri.ToLower().Replace("sip:", String.Empty); return returnValue; } case ContactInformationType.ucPresenceDisplayName: { // Return the display name associated with the contact. string returnValue = this._DisplayName; return returnValue; } default: { throw new NotImplementedException; } // Additional implementation details omitted. } } Similar to the GetContactInformation, the BatchGetContactInformation method retrieves multiple presence items about the contact from the IContact object. The calling code needs to pass in an array of values from the ContactInformationType enumeration for the _contactInformationTypes parameter. The method returns an UCCollaborationLib.IContactInformationDictionary object that contains the requested data. public IMClientContactInformationDictionary BatchGetContactInformation( ContactInformationType[] _contactInformationTypes) { // The IMClientContactInformationDictionary class implements the // IContactInformationDictionary interface. IMClientContactInformationDictionary contactDictionary = new IMClientContactInformationDictionary(); foreach (ContactInformationType type in _contactInformationTypes) { // Call GetContactInformation for each type of contact // information to retrieve. This code adds a new entry to // a Dictionary object exposed by the // ContactInformationDictionary property. contactDictionary.ContactInformationDictionary.Add( type, this.GetContactInformation(type)); } return contactDictionary; } The IContact.Settings property returns an IContactSettingDictionary object that contains custom properties about the contact. The IContact.CustomGroups property returns an IGroupCollection object that includes all of the groups of which the contact is a member. ISelf interface During the initialization process, the Office application gets the data for the current user by accessing the ILyncClient.Self property, which must return an ISelf object. The ISelf interface represents the local, signed-in IM client application user. Table 6 shows the members that must be implemented in the class that inherits from ISelf. Presence, available modalities, group membership, and contact type properties for the local user are exposed through the ISelf.Contact property (which returns an IContact object). During the initialization process, the Office application accesses the ISelf.Contact property to get a reference to the contact information for the local user. Use the following code to define a class that inherits from the ISelf interface that implements the Contact property. [ComVisible(true)] public class IMClientSelf : ISelf { // Declare a private field to store contact data for local user. private IMClientContact _contactData; // In the constructor for the ISelf object, the calling code // must supply contact data. public IMClientSelf (IMClientContact _selfContactData) { this._contactData = _selfContactData; } // When accessed, the Contact property returns a reference // to the IContact object that represents the local user. public IMClientContact Contact { get { return this._contactData as IMClientContact; } } // Additional implementation details omitted. } IContactManager and _IContactManagerEvents interfaces The IContactManager object manages the contacts for the local user, including the local user's own contact information. The Office application uses an IContactManager object to access IContact objects that correspond to the local user's contacts. Table 7 shows the members that must be implemented in the class that inherits from IContactManager and _IContactManagerEvents. Office calls IContactManager.GetContactByUri to get a contact's presence information, by using the SIP address of the contact. When a contact is configured for an SIP address in the Active Directory, Office determines this address for a contact and calls GetContactByUri, passing the SIP address of the contact in for the _contactUri parameter. When Office cannot determine the SIP address for the contact, it calls the IContactManager.Lookup method to find the SIP by using the IM service. Here Office passes in the best data that it can find for the contact (for example, just the email address for the contact). The Lookup method asynchronously returns an AsynchronousOperation object. When it invokes the callback, the Lookup method should return the success or failure of the operation in addition to the URI of the contact. public IMClientContact GetContactByUri(string _contactUri) { // Declare a Contact variable to contain information about the contact. IMClientContact tempContact = null; // The _groupCollections field is an IGroupCollection object. Iterate // over each group in collection to see if the // contact is a part of the group. foreach (IMClientGroup group in this._groupCollections) { if (group.TryGetContact(_contactUri, out tempContact)) { break; } } // Check to see that the URI returned a valid contact. If it // did not, create a new contact. if (tempContact == null) { tempContact = IMClientContact.BuildContact(_contactUri); } // Return the contact to the calling code. return tempContact; } The Office application needs to subscribe to presence changes for an individual contact. Thus, when a contact's presence status changes, the IM server alerts the IM client application—thereby alerting the Office application. To do this, the Office application calls the IContactManager.CreateSubscription method to create a new IContactSubscription object for this request. // Declare a private field to contain an IContactSubscription object. private IMClientContactSubscription _contactSubscription; // Return the IContactSubscription object associated // with the IContactManager object. public IMClientContactSubscription CreateSubscription() { return this._contactSubscription; } IGroup and IGroupCollection interfaces The IGroup object represents a collection of contacts with additional properties for identifying the contact collection by a collective group name. An IGroupCollection object represents a collection of IGroup objects defined by a local user and the IM client application. The Office application uses the IGroupCollection and IGroup objects to access the local user's contacts. Table 9 shows the members that must be implemented in the classes that inherit from IGroup and IGroupCollection in the following table. When the Office application gets the information for the local user, it accesses the group memberships of the contact (local user) by calling the IContact.CustomGroups property, which returns an IGroupCollection object. The IGroupCollection must contain an array (or List) of IGroup objects. The class that derives from IGroupCollection must expose a Count property, which returns the number of items in the collection, and an indexer method, this(int), which returns an IGroup object from the collection. IContactSubscription interface The IContactSubscription interface allows you to specify the contacts to receive presence information updates for and the types of presence information that trigger a notification. Office applications use an IContactSubscription object to register changes to contact's presence status. Table 10 shows the members that must be implemented in the classes that inherit from IContactSubscription. The IContactSubscription interface must contain a reference to all the IContact objects that it monitors, using an array or a List. The IContactSubscription.AddContact method adds an IContact object for the to the underlying data structure of the IContactSubscription object, thereby adding a new contact to monitor for presence changes. The IContactSubscription.Subscribe method allows an IM client application to access presence observers for the contact. It can use a polling strategy to get the presence from the server for the contacts for that the IM client application has subscribed to. The Subscribe method is helpful in situations where presence is requested for someone outside of a user's contact list (for example, from a larger public network). IContactEndPoint interface The IContactEndPoint represents a telephone number from a contact's collection of telephone numbers. Table 11 shows the members that must be implemented in the classes that inherit from IContactEndPoint. ILocaleString interface The ILocaleString is a localized string structure that contains both a localized string and the locale ID of the localization. The ILocaleString interface is used to format the custom status string on the contact card. Table 12 shows the members that must be implemented in the classes that inherit from ILocaleString. UCCollaborationLib namespace
http://msdn.microsoft.com/en-us/library/jj900715(v=office.15).aspx
CC-MAIN-2014-35
en
refinedweb
Better Local Settings A common convention I've seen in the Django community is to place code like the following at the bottom of your settings file: # ``settings.py`` # Lots of typical settings here. # ... then ... try: from local_settings import * except ImportError: pass Then storing machine/user-specific settings within a local_settings.py> file. This file is set to be ignored by version control. I can't argue with needing overridden settings (do it all the time) but that import is backward in my opinion. My preferred technique is to still use the settings.py and local_settings.py but relocate the import. So local_settings.py starts out with: # ``local_settings.py`` from settings import * # Properly namespaced as needed. # Custom settings go here. DATABASE_HOST = 'localhost' The only other change is passing --settings=local_settings to any ./manage.py or django-admin.py calls. Trivial implementation, you still only need to override what you need, no changes needed to version control or development workflow. And since you've already imported all of the base settings, you can reuse that information to create new settings, like: # ``local_settings.py`` from settings import * DATABASE_NAME = "%s_dev" % DATABASE_NAME This is by no means a new technique, nor did I come up with it, but it's been very useful to me and I have yet to encounter the drawbacks (other than specifying the --settings flag, which also has an even easier fix via DJANGO_SETTINGS_MODULE) of this approach. Feedback, as always, is welcome.
http://toastdriven.com/blog/2010/jan/05/better-local-settings/
CC-MAIN-2014-35
en
refinedweb
25 July 2012 06:32 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> Sales for the first six months of the year surged 64.6% year on year to Swfr1.96bn, while earnings before interest, tax, depreciation and amortisation (EBITDA) increased by 23.4%, the company said in a statement. But EBITDA margins for the period at 16.6% was 5.6 percentage points lower compared with the same period in 2011, it said. Lonza’s operating profit for January-June 2012 increased 23.5% to Swfr168m, with margins at 8.6%, down from 11.4% in the previous corresponding period, the company said. Taking out one-off items in the financial results, Lonza said its core net profit increased 16.8% year on year to Swfr125m, with core operating profit up 39.2% at Swfr199m. The company hopes to deliver 10-15% growth in operating profit this year. “However, it goes without saying that the volatility of the current macroeconomic situation in some parts of the world can always have a negative effect on all strategic and operational efforts,” Lonza said. ($1 = Swfr0.9
http://www.icis.com/Articles/2012/07/25/9580724/swiss-lonza-h1-net-profit-slips-3.1-on-weak-margins.html
CC-MAIN-2014-35
en
refinedweb
The state of the Lambda in Ruby 1.9 forloop. The concept has many names in other languages and theory: - lambda function - anonymous function - closure (e.g. the term used for the lamdba functions in Java 7) This is a somewhat confusing term, because the term closure also refers to the capturing of the scope surrounding the code. A Block doesn't necessarily need to capture the scope - this code x = lambda {|x,y| x + y}doesn't use any free variables (i.e. variables that are unbound; x and y are declared in formal argument list), and hence doesn't require the creation of a closure (lambda (arg) "hello world"). Another language influential in Ruby's design, Smalltalk, uses a very concise syntax using brackets: [arg| ^"hello world"]. Ruby's most convenient and often used syntax for Blocks is as a parameter to a function, which allows to simply append a Block surrounded by either do/endor braces {/}. Eg. 5.times {|x| puts x}It's convenient, and also allows idioms such as Builder, which allows to create hierarchical data structures very easily by using a nested Blocks. (Tip: An upcoming article here on InfoQ will explain the details of creating a Builder in Ruby - watch out for it in the 2nd half of January). However, there was one problem: passing more than one Block to a function or method didn't work as easily. It was possible, but not with this shorthand. Instead, a Block had to be created using either the Proc.new {}or lambda {}notations. While not horrible, these options are much more verbose and introduce unwelcome tokens that clutter up the code. (Note: Proc.new {}and lambda {}notations have subtle differences as well, but this is not significant in this context). Workarounds are possible for this in certain situations. For instance, if an API call requires multiple Blocks, helper functions could be mixed into the class to a) help with Blocks and b) have the side effect of looking like named arguments: find (predicate {|x,y| x < y}, predicate{|x,y| x > 20})The predicatefunction is nothing more than: def predicate(&b)I.e. returns the Block. Whether this is appropiate or not depends on the specific use case. In this case, the shown code is - arguably - more expressive then the equivalent: b end find (lambda{|x,y| x < y}, lambda {|x,y| x > 20}) Why? Because lambdaleaks implementation details about how this is implemented - with one block argument, no extra keyword would be needed. The predicatesolution annotates the code and generates the lambda. To be clear: this is a workaround. Ruby 1.9 now introduces an new, more concise syntax for creating lambda functions: x = ->{puts "Hello Lambda"}The new syntax is shorter and removes the unfamiliar term lambda. To be clear: this is syntactic sugar. It does, however, help to write APIs that yield very readable code. Some of these APIs might be called "internal DSLs", although the definition for those are quite fuzzy. For these, the new lambda definition helps getting rid of the quite obscure term "lambda" in the middle of otherwise purely domain or problem specific code. Sidu Ponnappa reports about another syntax change in 1.9: Explicitly invoking one block from another in Ruby 1.9.0. This method was something I didn't even cover in my previous post, because the parser would simply blow up when parsing |*args, &block|. Here's what it looks like. [..]This code doesn't work in Ruby 1.8.x - it actually fails at the parser stage with:class SandBox def abc(*args) yield(*args) end define_method :xyz do |*args, &block| block.call(*args) end end SandBox.new.abc(1,2,3){|*args| p args} # => [1, 2, 3] In Ruby 1.9, this works fine.In Ruby 1.9, this works fine.benchmark3.rb:8: syntax error, unexpected ',', expecting '|' define_method :xyz do |*args, &block| ^ benchmark3.rb:11: syntax error, unexpected kEND, expecting $end Another change in 1.9 fixes a long standing issue: block arguments are now local. Take this code: foo = "Outer Scope"In 1.8, the code would print "I'm not local to this block", wheras in 1.9 it prints "Outer Scope". In short, blocks now behave as expected: the block argument shadows the variable of the same name in the outher scope inside the block. (Let's preempt the question "How can I access the variable in the outer scope?". You don't - just choose a different name for the block argument). [1,2,3].each{|foo| foo = "I'm not local to this block" } puts foo What do you think about the Ruby 1.9 lambda/block changes? Do they address all existing concerns or are there other problems left? Tip: see all Ruby 1.9 stories on InfoQ. I don't know by Michael Neale new syntax comment by Roger Pack z = proc {|x, y = 3| 33 } SandBox example segfaults Ruby 1.9 by Paul Harvey See redmine.ruby-lang.org/issues/show/871
http://www.infoq.com/news/2008/01/new-lambda-syntax/
CC-MAIN-2014-35
en
refinedweb
. Visual elements of an application are sized and positioned within a parent container based on rules provided to and by a managing layout. The Flex Framework provides two sets of containers for layout: MX and Spark. The MX containers reside in the mx.containers package of the Flex Framework, while the Spark containers reside in the spark.components package. Though both container sets inherit from UIComponent, they differ in how they lay out and manage the children in their display lists. Within a MX container (such as Box), the size and position of the children are managed by the container’s layout rules and constraints, which are internally defined and based on specified properties and styles. In contrast, the Spark set provides a level of abstraction between the container and the layout and allows you to define the layout separately from the skin and style. The separation of the layout from the container not only provides greater flexibility in terms of runtime modifications but also cuts down on the rendering cycle for a container, as the style properties of a container may not be directly related to the layout. A Spark layout manages the size and positioning of the target container’s child elements and is commonly referred to as the container’s layout delegate. Commonly used layout classes for Spark containers, such as VerticalLayout, can be found in the spark.layouts package of the Flex Framework and are extensions of the base LayoutBase class. When you provide a layout delegate to a Spark container, the target property of the layout is attributed as the targeted container and considered to be a GroupBase-based element. The containers available in the Spark set, such as Group and DataGroup, are extensions of GroupBase and provide a set of methods and properties for accessing their child elements. This set is commonly referred to as the content API. Containers that handle visual elements directly, such as Group and SkinnableContainer, expose methods and attributes of the content API by implementing the IVisualElementContainer interface. Containers that handle data items that are presented based on item renderers, such as DataGroup and SkinnableDataContainer, provide the same methods and attributes directly on their extensions of GroupBase. The layout delegate of a container uses the content API to access and manage that container’s child elements. Child elements accessed from the content API are attributed as implementations of the IVisualElement interface. This interface exposes implicit properties that allow you to access and modify common properties that relate to how the element is laid out and displayed in a container. IVisualElement is an extension of the ILayoutElement interface, which exposes constraint properties and accessor methods that layout delegates use to size and position children within a target container. With UIComponent implementing the IVisualElement interface, you can add elements from both the MX and Spark component sets to the display list of a Spark container and manage them using a layout delegate. You want to control the layout of children in a container, positioning them either horizontally or vertically. Assign either HorizontalLayout or VerticalLayout to the layout property of the container, and set the desired alignment properties to the children along the axis of the specified layout. The HorizontalLayout and VerticalLayout classes are extensions of the spark. layout. Layout Base class and lay out the child elements of a container in a horizontal or vertical sequence, respectively. Spark layouts handle only the size and position of child elements. Attributes related to dimension and positioning constraints are not available on Spark layouts; these are properties of the targeted Spark container. You can define distances between child elements using the gap property of the Horizontal Layout and VerticalLayout classes. For example: <s:Group> <s:layout> <s:VerticalLayout </s:layout> <s:TextInput <s:Button </s:Group> The <s:Group> tag defines the parent Spark container, whose layout manager is specified as a VerticalLayout instance. This example lays out the child elements of the Group container vertically and distanced from each other by 10 pixels. To position child elements relative to the container boundaries, assign values to the paddingLeft, paddingRight, paddingTop, and paddingBottom properties of the layout container, as shown here: <s:Group> <s:layout> <s:HorizontalLayout </s:layout> <s:TextInput <s:Button </s:Group> If you define a fixed or relative (using percent values) size for the container, the vertical Align and horizontalAlign properties of HorizontalLayout and VerticalLayout, respectively, are used by the layout to position each of the container’s child elements with respect to each other and the container boundaries: <s:Group <s:layout> <s:VerticalLayout </s:layout> <s:TextInput <s:Button </s:Group> Update the declared layout property of a Spark container at runtime in response to an event. The layout property of a Spark container defines the layout management delegate for child elements in the container’s display list. The default layout instance for a Spark container is spark.layouts.BasicLayout, which places children using absolute positioning. When the default layout is specified for a container, child elements are stacked upon each other based on their declared depths and the position of each element within the declared display list. You can instead supply sequenced-based layouts from the spark.layouts package or create custom layouts to manage the size and positioning of child elements. The layout implementation for a Spark container can also be switched at runtime, as in the following example: <s:Application xmlns: <fx:Declarations> <s:VerticalLayout <s:HorizontalLayout </fx:Declarations> <fx:Script> <![CDATA[ import spark.layouts.VerticalLayout; private function handleCreationComplete():void { layout = vLayout; } private function toggleLayout():void { layout = ( layout is VerticalLayout ) ? hLayout : vLayout; } ]]> </fx:Script> <s:TextInput <s:Button </s:Application> In this example, the target container for the layout is the Application container. Two separate layout managers are declared in the <fx:Declarations> tag, and the designated layout is updated based on a click of the Button control, changing from Vertical Layout to HorizontalLayout. The next example shows how to switch between these two layouts in a MX container: <s:Application xmlns: <fx:Script> <![CDATA[ import mx.containers.BoxDirection; private function toggleLayout():void { container.direction = ( container.direction == BoxDirection.VERTICAL ) ? BoxDirection.HORIZONTAL : BoxDirection.VERTICAL; } ]]> </fx:Script> <mx:Box <s:TextInput <s:Button </mx:Box> </s:Application> With respect to layout management, the main difference between Spark and MX controls has to do with the separation of responsibilities for the parent container. Within the Spark architecture, you specify a layout delegate for a target container. This allows you to easily create multiple layout classes that manage the container’s child elements differently. Within the context of the MX architecture, any modifications to the layout of children within a container are confined to properties available on the container. Instead of easily changing layout delegates at runtime as you can do in Spark, one or more properties need to be updated, which invokes a re-rendering of the display. This ability to switch layout implementations easily is a good example of the advantages of the separation of layout and containers within the Spark architecture of the Flex 4 SDK, and the runtime optimizations it enables. Using HorizontalLayout, VerticalLayout, and TileLayout, you can uniformly align the child elements of a container. To define the alignment along the x-axis, use the vertical Align property of the HorizontalLayout class; along the y-axis, use the horizontal Align property of the VerticalLayout class. TileLayout supports both the vertical Align and horizontalAlign properties; it lays out the child elements of a container in rows and columns. The available property values for horizontalAlign and verticalAlign are enumerated in the spark.layouts.HorizontalAlign and spark.layouts.VerticalAlign classes, respectively, and correspond to the axis on which child elements are added to the container. The following example demonstrates dynamically changing the alignment of child elements along the y-axis: .LEFT) ? HorizontalAlign.RIGHT : HorizontalAlign.LEFT; } ]]> </fx:Script> <s:Panel <s:layout> <s:VerticalLayout </s:layout> <s:DropDownList /> <s:HSlider /> <s:Button </s:Panel> </s:Application> When the s:Button control is clicked, the child elements are changed from being left-aligned to being right-aligned within a vertical layout of the s:Panel container. To align children along the x-axis, specify HorizontalLayout as the layout property of the container and set the verticalAlign property value to any of the enumerated properties of the VerticalAlign class. To align the child elements in the center of a container along a specified axis, use HorizontalAlign.CENTER or VerticalAlign.MIDDLE as the property value for verticalAlign or horizontalAlign, respectively, as in the following example: <s:Panel <s:layout> <s:HorizontalLayout </s:layout> <s:DropDownList /> <s:HSlider /> <s:Button </s:Panel> Alignment properties can also be used to uniformly size all child elements within a layout. The two property values that are available on both HorizontalAlign and Vertical Align are justify and contentJustify. Setting the justify property value for verticalAlign on a HorizontalLayout or TileLayout will size each child to the height of the target container. Setting the justify property value for horizontalAlign on a Vertical Layout or TileLayout will size each child to the width of the target container. Setting the contentJustify property value sizes children similarly, but sets the appropriate dimension of each child element based on the content height of the target container. The content height of a container is relative to the largest child, unless all children are smaller than the container (in which case it is relative to the height of the container). Both the justify and contentJustify property values uniformly set the corresponding size dimension on all child elements based on the specified layout axes and the target container; any width or height property values defined for the child elements are disregarded. The following example switches between the center and justify values for the horizontalAlign property of a VerticalLayout to demonstrate how dimension properties of child elements are ignored when laying out children uniformly based on size: .JUSTIFY) ? HorizontalAlign.CENTER : HorizontalAlign.JUSTIFY; } ]]> </fx:Script> <s:Panel <s:layout> <s:VerticalLayout </s:layout> <s:DropDownList <s:HSlider /> <s:Button </s:Panel> </s:Application> Assign TileLayout to the layout property of a Spark container to dynamically place its child elements in a grid. TileLayout adds children to the display list in both a horizontal and vertical fashion, positioning them in a series of rows and columns. It displays the child elements in a grid based on the dimensions of the target Spark container, as the following example demonstrates: :HSlider <s:DataGroup <s:layout> <s:TileLayout /> </s:layout> <s:dataProvider> <s:ArrayCollection </s:dataProvider> </s:DataGroup> </s:Application> In this example, the width of the parent container for the layout delegate is updated in response to a change to the value property of the HSlider control. As the width dimension changes, columns are added or removed and child elements of the target DataGroup container are repositioned accordingly. By default, the sequence in which child elements are added to the layout is based on rows. Children are added along the horizontal axis in columns until the boundary of the container is reached, at which point a new row is created to continue adding child elements. If desired, you can change this sequence rule using the orientation property of TileLayout, which takes a value of either rows or columns. The following example changes the default layout sequence from rows to columns, adding each child vertically in a row until the lower boundary of the container is reached, at which point a new column is created to take the next child element: <s:DataGroup <s:layout> <s:TileLayout </s:layout> <s:dataProvider> <s:ArrayCollection </s:dataProvider> </s:DataGroup> You can restrict the number of rows and columns to be used in the display by specifying values for the requestedRowCount and requestedColumnCount properties, respectively, of a TileLayout. The default value for these properties is −1, which specifies that there is no limit to the number of children that can be added to the display in a row or column. By modifying the default values, you control how many child elements can be added to a row/column (rather than allowing this to be determined by the specified dimensions of the target container). When you specify a nondefault value for the requestedRowCount or requestedColumnCount property of a TileLayout, the target container is measured as children are added to the display. If a width and height have not been assigned to the target container directly, the dimensions of the container are determined by the placement and size of the child elements laid out in rows and columns by the TileLayout, as in the following example: :Group> <s:Scroller> <s:DataGroup <s:layout> <s:TileLayout </s:layout> <s:dataProvider> <s:ArrayCollection </s:dataProvider> </s:DataGroup> </s:Scroller> <s:Rect <s:stroke> <s:SolidColorStroke /> </s:stroke> </s:Rect> </s:Group> </s:Application> In this example, the TileLayout target container and a Rect graphic element are wrapped in a Group to show how the Group is resized to reflect the child element positioning provided by the layout. Because the TileLayout disregards the target container’s dimensions when strictly positioning child elements in a grid based on the requestedRowCount and requestedColumnCount property values, unless scrolling is enabled children may be visible outside of the calculated row and column sizes. Consequently, in this example the target DataGroup container is wrapped in a Scroller component and the clipAndEnableScrolling property is set to true on the TileLayout. To restrict the size of the child elements within a target container, use the column Height and rowHeight properties of HorizontalLayout and VerticalLayout, respectively. To dynamically size and position all children of a target container based on the dimensions of a single child element, use the typicalLayoutElement property. By default, the variableRowHeight and variableColumnHeight properties of HorizontalLayout and VerticalLayout, respectively, are set to a value of true. This default setting ensures that all child elements are displayed based on their individually measured dimensions. This can be beneficial when presenting elements that vary in size, but the rendering costs may prove to be a performance burden at runtime. To speed up rendering time, Spark layouts have properties for setting static values to ensure that all child elements are sized uniformly. The following example sets the rowHeight and variableRowHeight property values to constrain the height of child elements in a target container using a vertical layout: <s:Group> <s:layout> <s:VerticalLayout </s:layout> <s:Button <s:Button <s:Button <s:Button </s:Group> In this example, the height property value assigned to any declared Button control is disregarded and all the children are set to the same height as they are positioned vertically without respect to the variable measure calculated by properties of each child. To apply size constraints in a horizontal layout, use the columnWidth and variable Column Width properties, as in the following example: <s:Group> <s:layout> <s:HorizontalLayout </s:layout> <s:Button <s:Button <s:Button <s:Button </s:Group> The previous two examples show how to specify static values for the dimensions of all child elements of a target container with regard to the specified layout control. Alternatively, child dimensions and subsequent positions can be determined by supplying an ILayoutElement instance as the value for a layout control’s typicalLayoutElement property. In this case, the target container’s children are sized and positioned based on the width or height, respectively, of the supplied target instance. The following example supplies a child target to be used in sizing and positioning all children of a target container with a vertical layout: <s:Application xmlns: <s:Group> <s:layout> <s:VerticalLayout </s:layout> <s:Button <s:Button <s:Button <s:Button </s:Group> </s:Application> In this example, each Button control is rendered at the height attributed to the assigned typicalLayoutElement; any previously assigned height property values are disregarded. You want to improve runtime performance by creating and rendering children of a container as needed. Use the useVirtualLayout property of a HorizontalLayout, VerticalLayout, or Tile Layout whose target container is a DataGroup. Virtualization improves runtime performance by creating and recycling item renderers and rendering children only as they come into the visible content area of a display container. Layout classes that extend spark.layouts.LayoutBase, such as VerticalLayout, expose the useVirtualLayout property. Attributed as a Boolean value, useVirtualLayout is used to determine whether to recycle item renderers in a data element container that supports virtualization, such as DataGroup. When employing virtualization, access to data elements using methods of the content API (such as getElementAt()) is limited to elements visible within the content area of the container. The following demonstrates enabling the use of virtualization on the layout delegate of a DataGroup container: <fx:Declarations> <fx:String Lorem ipsum dolor sit amet consectetur adipisicing elit. </fx:String> </fx:Declarations> <s:Scroller> <s:DataGroup <s:layout> <s:VerticalLayout </s:layout> <s:dataProvider> <mx:ArrayCollection </s:dataProvider> </s:DataGroup> </s:Scroller> In this example, the size of the DataGroup container is restricted to show approximately four child elements at a time based on the specified item renderer and supplied data. As new child elements are scrolled into view, previously created item renderer instances that have been scrolled out of view are reused to render the new data. When a layout delegate is provided to a container, the target property of the layout is attributed to the target container of type spark.components.subclasses.GroupBase. When creating and reusing elements, the layout delegate uses methods of the content API exposed by a GroupBase that supports virtualization. At the time of this writing, DataGroup is the only container in the Flex 4 SDK that supports virtualization. You can, however, create custom containers that create, recycle, and validate child elements accordingly by extending GroupBase and overriding the getVirtualElementAt() method. To demonstrate how virtual and nonvirtual children are treated within a GroupBase, the following example loops through the child elements of a container and traces out the virtual elements: <s:Application xmlns: <fx:Library> <fx:Definition <s:Rect <s:fill> <mx:SolidColor </s:fill> </s:Rect> </fx:Definition> </fx:Library> <fx:Script> <![CDATA[ import spark.components.supportClasses.GroupBase; private function handleCreationComplete():void { scroller.verticalScrollBar.addEventListener(Event.CHANGE, inspectVirtualChildren); } private function inspectVirtualChildren( evt:Event ):void { var target:GroupBase = vLayout.target; for( var i:int = 0; i < target.numElements; i++ ) { trace( target.getElementAt( i ) ); } } ]]> </fx:Script> <s:Scroller <s:DataGroup <s:layout> <s:VerticalLayout </s:layout> <s:dataProvider> <mx:ArrayCollection> <fx:CustomRect <s:Button <s:DropDownList <fx:CustomRect </mx:ArrayCollection> </s:dataProvider> </s:DataGroup> </s:Scroller> </s:Application> As the scroll position of the targeted DataGroup container changes, any child elements accessed using the getElementAt() method that are not in view are attributed as null. When you create a custom layout that respects virtualization of data elements, use the getVirtualElementAt() method of the GroupBase target with the getScrollRect() method to anticipate which elements will be visible in the content area of the container. The DefaultComplexItemRenderer is used as the item renderer for the DataGroup container to show that virtualization also works with child elements of differing sizes. When working with a relatively small data set, such as in this example, using virtualization to improve rendering performance may seem trivial. The true power of using lazy creation and recycling children through virtualization really becomes evident when using large data sets, such as a group of records returned from a service. Create a custom layout by extending the com.layouts.supportClasses.LayoutBase class and override the updateDisplayList() method to position and size the children accordingly. What if a project comes along in which the desired layout is not available from the layout classes provided by the Flex 4 SDK? You can easily create and apply a custom layout for a container by extending the LayoutBase class, thanks to the separation of responsibilities in the Spark component architecture. Child elements of a targeted container are positioned and sized within the updateDisplayList() method of a LayoutBase subclass, such as HorizontalLayout or VerticalLayout. By overriding the updateDisplayList() method in a custom layout, you can manipulate the display list of a targeted GroupBase-based container. Each child of a GroupBase container is attributed as an ILayoutElement instance. Methods are available on the ILayoutElement interface that relate to how the child element is drawn on screen with regard to size and position. To create a custom layout for a targeted container’s child elements, loop through those children and apply any desired transformations: package com.oreilly.f4cb { import mx.core.ILayoutElement; import spark.components.supportClasses.GroupBase; import spark.layouts.supportClasses.LayoutBase; public class CustomLayout extends LayoutBase { xpos:Number; var ypos:Number; var element:ILayoutElement; for( var i:int = 0; i < layoutTarget.numElements; i++ ) { element = layoutTarget.getElementAt( i ); radians = ( ( angle * -i ) + 180 ) * ( Math.PI / 180 ); xpos = w + ( Math.sin(radians) * radius ); ypos = h + ( Math.cos(radians) * radius ); element.setLayoutBoundsSize( NaN, NaN ); element.setLayoutBoundsPosition( xpos, ypos ); } } } } The CustomLayout class in this example displays the child elements uniformly radiating from the center of a target container in a clockwise manner. As the child elements of the container are accessed within the for loop, each ILayoutElement instance is accessed using the getElementAt() method of the content API and is given a new position based on the additive angle using the setLayoutBoundsPosition() method. The size of each ILayoutElement is determined by a call to the setLayoutBoundsSize() method. Supplying a value of NaN for the width or height argument ensures that the size of the element is determined by the rendered content of the element itself. A custom layout is set on a container by using the layout property of the container: <s:Application xmlns: ="200" maximum="400" value="300" /> <s:DataGroup <s:layout> <f4cb:CustomLayout /> </s:layout> <s:dataProvider> <mx:ArrayCollection </s:dataProvider> </s:DataGroup> </s:Application> When the updateDisplayList() method of the layout is invoked, as happens in response to a change in the container size or the addition of an element on the content layer, the children are laid out using the rules specified in the CustomLayout class. Generally speaking, it is good practice to also override the measure() method when creating a custom layout class extending LayoutBase. The measure() method handles resizing the target container accordingly based on its child elements and constraint properties. However, because the custom layout created in this example positions children based on the bounds set upon the target container, this is not necessary. You want to set the size of a target container based on the dimensions of the child elements in the layout. Create a LayoutBase-based custom layout and override the measure() method to access the desired bounds for the container based on the dimensions of its child elements. When explicit dimensions are not specified for the target container, control over its size is handed over to the layout. When width and height values are applied to a container, the explicitWidth and explicitHeight properties (respectively) are updated, and their values determine whether to invoke the layout’s measure() method. If values for these properties are not set (equated as a value of NaN), the container’s layout delegate determines the target dimensions and updates the container’s measuredWidth and measuredHeight property values accordingly. To alter how the dimensions of a target container are determined, create a custom layout and override the measure() method: package com.oreilly.f4cb { import mx.core.IVisualElement; import spark.components.supportClasses.GroupBase; import spark.layouts.VerticalLayout; public class CustomLayout extends VerticalLayout { override public function measure() : void { var layoutTarget:GroupBase = target; var count:int = layoutTarget.numElements; var w:Number = 0; var h:Number = 0; var element:IVisualElement; for( var i:int = 0; i < count; i++ ) { element = layoutTarget.getElementAt( i ); w = Math.max( w, element.getPreferredBoundsWidth() ); h += element.getPreferredBoundsHeight(); } var gap:Number = gap * (count - 1 ); layoutTarget.measuredWidth = w + paddingLeft + paddingRight; layoutTarget.measuredHeight = h + paddingTop + paddingBottom + gap; } } } The CustomLayout class in this example sets the measuredWidth and measuredHeight properties of the target container based on the largest width and the cumulative height of all child elements, taking into account the padding and gap values of the layout. When the measure() method is overridden in a custom layout, it is important to set any desired measure properties of the target container. These properties include measuredWidth, measuredHeight, measuredMinWidth, and measuredMinHeight. These properties correspond to the size of the target container and are used when updating the display during a pass in invalidation of the container. To apply a custom layout to a container, use the layout property: <s:Application xmlns: <fx:Script> <![CDATA[ import mx.graphics.SolidColor; import spark.primitives.Rect; private function addRect():void { var rect:Rect = new Rect(); rect.width = Math.random() * 300; rect.height = Math.random() * 30; rect.fill = new SolidColor( 0xCCFFCC ); group.addElement( rect ); } ]]> </fx:Script> <s:layout> <s:VerticalLayout </s:layout> <s:Group> <!-- Content group --> <s:Group <s:layout> <f4cb:CustomLayout </s:layout> </s:Group> <!-- Simple border --> <s:Rect <s:stroke> <mx:SolidColorStroke </s:stroke> </s:Rect> </s:Group> <s:Button </s:Application> When the s:Button control is clicked, a new, randomly sized s:Rect instance is added to the nested <s:Group> container that contains the custom layout. As each new child element is added to the container, the measure() method of the CustomLayout is invoked and the target container is resized. The updated size of the nested container is then reflected in the Rect border applied to the outer container. Although this example demonstrates resizing a Group container based on visual child elements, the same technique can be applied to a DataGroup container whose children are item renderers representing data items. Layouts that support virtualization from the Flex 4 SDK invoke the private methods measureVirtual() or measureReal(), depending on whether the useVirtualLayout property is set to true or false, respectively, on the layout. Because the custom layout from our example does not use virtualization, child elements of the target container are accessed using the getElementAt() method of GroupBase. If virtualization is used, child elements are accessed using the getVirtualElementAt() method of GroupBase. You can access child elements of a GroupBase-based container via methods of the content API such as getElementAt(). When you work with the content API, each element is attributed as an implementation of ILayoutElement, which exposes attributes pertaining to constraints and methods used in determining the size and position of the element within the layout of a target container. The IVisualElement interface is an extension of ILayoutElement and is implemented by UIComponent and GraphicElement. Implementations of IVisualElement expose explicit properties for size and position, as well as attributes related to the owner and parent of the element. Visual elements from the Spark and MX component sets are extensions of UIComponent. This means elements from both architectures can be added to a Spark container, which attributes children as implementations of ILayoutElement. To use the depth property of an element within a container, access the child using the getElementAt() method of the content API and cast the element as an IVisualElement: <s:Application xmlns: <fx:Script> <![CDATA[ import mx.core.IVisualElement; import spark.components.supportClasses.GroupBase; private var currentIndex:int = 0; private function swapDepths():void { var layoutTarget:GroupBase = bLayout.target; var element:IVisualElement; for( var i:int = 0; i < layoutTarget.numElements; i++ ) { element = layoutTarget.getElementAt( i ) as IVisualElement; if( i == currentIndex ) { element.depth = layoutTarget.numElements - 1; } else if( i > currentIndex ) { element.depth = i - 1; } } if( ++currentIndex > layoutTarget.numElements - 1 ) currentIndex = 0; } ]]> </fx:Script> <s:layout> <s:VerticalLayout /> </s:layout> <s:Group> <s:layout> <s:BasicLayout </s:layout> <s:Button <s:Button <s:Rect <s:fill> <mx:SolidColor </s:fill> </s:Rect> <s:Button </s:Group> <s:Button </s:Application> In this example, the child element at the lowest depth is brought to the top of the display stack within the container when a Button control is clicked. Initially, the children of the Group container are provided depth values relative to the positions at which they are declared, with the first declared s:Button control having a depth value of 0 and the last declared s:Button control having a depth value of 3. Upon each click of the Button control, the IVisualElement residing on the lowest layer is brought to the highest layer by giving that element a depth property value of the highest child index within the container. To keep the highest depth property value within the range of the number of children, the depth values of all the other child elements are decremented. To apply transformations that affect all children within a layout, set the individual transformation properties (such as rotationX, scaleX, and transformX) that are directly available on instances of UIComponent and GraphicElement, or supply a Matrix3D object to the layoutMatrix3D property of UIComponent-based children. The Flex 4 SDK offers several approaches for applying transformations to child elements of a layout. Before you apply transformations, however, you must consider how (or whether) those transformations should affect all the other child elements of the same layout. For example, setting the transformation properties for rotation, scale, and translation that are available on UIComponent-based and GraphicElement-based child elements will also affect the size and position of all other children within your layout: <s:Button On instances of UIComponent and GraphicElement, the rotation, scale, and transform attributes each expose properties that relate to axes in a 3D coordinate space. These properties are also bindable, so direct reapplication is not necessary when their values are modified at runtime. Alternatively, you can use the layoutMatrix3D property to apply transformations to UI Component-based elements; again, all children within the layout will be affected. UIComponent-based elements include visual elements from both the MX and Spark component sets. Using the layoutMatrix3D property, you can supply a flash. geom. Matrix3D object that applies all the 2D and 3D transformations to a visual element. Keep in mind, however, that the layoutMatrix3D property is write-only, so any changes you make to the Matrix3D object to which it is set will not be automatically applied to the target element. You will need to reassign the object in order to apply modifications. The following example shows how to apply transformations within a layout using the transformation properties and the layoutMatrix3D property, which subsequently affects other children in the layout of a target container: <s:Application xmlns: <fx:Script> <![CDATA[ import mx.core.UIComponent; [Bindable] private var rot:Number = 90; private function rotate():void { var matrix:Matrix3D = button.getLayoutMatrix3D(); matrix.appendRotation( 90, Vector3D.Z_AXIS ); button.layoutMatrix3D = matrix; rot += 90; } ]]> </fx:Script> <s:Group> <s:Group <s:layout> <s:VerticalLayout </s:layout> <s:Button <s:Rect <s:fill> <s:SolidColor </s:fill> </s:Rect> </s:Group> <s:Rect <s:stroke> <mx:SolidColorStroke </s:stroke> </s:Rect> </s:Group> </s:Application> When the click event is received from the s:Button control, the rotate() method is invoked. Within the rotate() method, the bindable rot property is updated and the rotation value along the z-axis is updated on the GraphicElement-based Rect element. Likewise, rotation along the z-axis is updated and reapplied to the layoutMatrix3D property of the Button control. As mentioned earlier, the layoutMatrix3D property is write-only and prevents any modifications to the Matrix3D object applied from being bound to an element. As such, the Matrix3D object can be retrieved using the getLayoutMatrix3D() method and transformations can be prepended or appended using methods available on the Matrix3D class and reapplied directly to an element. You want to apply 2D and 3D transformations to child elements of a layout without affecting other children within the layout. Supply a TransformOffsets object to the postLayoutTransformOffsets property of instances of UIComponent and GraphicElement, or use the transformAround() method of UIComponent-based children. Along with the Matrix3D object, which affects all children within a layout, Transform Offsets can be used to apply transformations to specific children. When you apply transformations to a UIComponent or GraphicElement instance by supplying a object as the value of the Transform Offsets postLayoutTransformOffsets property, the layout does not automatically update the positioning and size of the other child elements within the target container. Like the Matrix3D object, TransformOffsets is a matrix of 2D and 3D values, but it differs in that it exposes those values as read/write properties. The TransformOffsets object supports event dispatching, allowing updates to properties to be applied to a child element without reapplication through binding. To apply transformations using a TransformationOffsets object, set the postLayoutTransformOffsets property of the target element: <s:Button <s:offsets> <mx:TransformOffsets </s:offsets> </s:Button> <s:Button In contrast to how other child elements are affected when applying transformations to an element of a layout using the layoutMatrix3D property, when the s:Button control in this example is rotated 90 degrees along the z-axis, the position of the second declared s:Button control in the layout is not updated in response to the transformation. When applying transformations to child elements, it is important to keep in mind how you want the application of the transformation to impact the layout, as this will affect whether you choose to use the layoutMatrix3D or postLayoutTransformOffsets property. If your transformations need to be applied over time and you do not want them to affect the other child elements of the layout, the postLayoutTransformOffsets property can be used in conjunction with an AnimateTransform-based effect: <s:Application xmlns: <fx:Declarations> <s:Rotate3D <s:Rotate3D </fx:Declarations> <s:Group <s:Group <s:layout> <s:VerticalLayout </s:layout> <s:Button <s:Button </s:Group> <s:Rect <s:stroke> <mx:SolidColorStroke </s:stroke> </s:Rect> </s:Group> </s:Application> In this example, two s:Rotate3D effects are declared to apply transformations to targeted child elements over a period of time. By default, AnimateTransform-based effects apply transformations using the postLayoutTransformOffsets property of a target element, so updates to the transformation values do not affect the size and position of other child elements of the layout. This is a good strategy to use when some visual indication is needed to notify the user of an action, and you do not want to cause any unnecessary confusion by affecting the position and size of other children. If the desired effect is to apply transformations to other child elements of a layout while an animation is active, you can change the value of the applyChangesPostLayout property of the AnimateTransform class from the default of true to false. As an alternative to using the transformation properties for rotation, scale, and translation or the layoutMatrix3D property, transformations can be applied to UIComponent-based elements using the transformAround() method. The transformAround() method has arguments for applying transformations to an element that will affect the position and size of other children within the layout, and arguments for applying transformations post-layout without affecting the other child elements. The following example uses the transformAround() method to apply rotations around the z-axis to two elements, one that affects the layout of other children and one that does not: <s:Application xmlns: <fx:Script> <![CDATA[ import mx.core.UIComponent; private var newAngle:Number = 0; private function pushOver( evt:MouseEvent ):void { var angle:Number = btn1.rotationZ + 90; btn1.transformAround( new Vector3D( btn1.width, btn1.height / 2 ), null, new Vector3D( 0, 0, angle ) ); } private function pushAround( evt:MouseEvent ):void { newAngle += 90; btn2.transformAround( new Vector3D( btn2.width / 2, btn2.height / 2 ), null, null, null, null, new Vector3D( 0, 0, newAngle ) ); } ]]> </fx:Script> <s:Group> <s:Group <s:layout> <s:VerticalLayout </s:layout> <s:Button <s:Button </s:Group> <s:Rect <s:stroke> <mx:SolidColorStroke </s:stroke> </s:Rect> </s:Group> </s:Application> Each parameter of the transformAround() method takes a Vector3D object and all are optional aside from the first argument, which pertains to the center point for transformations. In this example, the first s:Button declared in the markup rotates in response to a click event and affects the position of the second s:Button declared. As the rotation for the first Button element is set using a Vector3D object on the rotation parameter of transformAround(), the rotationZ property of the element is updated. Within the pushAround() method, a post-layout transformation is applied to the second Button by setting the Vector3D object to the postLayoutRotation argument of transformAround(). When post-layout transformations are applied, the explicit transformation properties of the element (such as rotationZ) are not updated, and as a consequence the layout of the other children is not affected. Though transformations can play a powerful part in notifying users of actions to be taken or that have been taken, you must consider how other children of a layout will be affected in response to those transformations. You want to create a custom layout that applies 3D transformations to all children of a target container. Create a custom layout by extending com.layouts.supportClasses.LayoutBase and override the updateDisplayList() method to apply transformations to child elements accordingly. Along with the layout classes available in the Flex 4 SDK, you can assign custom layouts that extend LayoutBase to containers using the layout property. When the display of a target container is changed, the updateDisplayList() method of the container is invoked, which in turn invokes the updateDisplayList() method of the layout delegate. By overriding updateDisplayList() within a custom layout, you can apply transformations such as rotation, scaling, and translation to the child elements of a GroupBase-based container. Each child element of a GroupBase container is attributed as an ILayoutElement instance, which has methods to apply 3D transformations, such as setLayoutMatrix3D() and transformAround(). To apply transformations using the utility methods of an ILayout Element instance, access the element using the content API, as in the following example: package com.oreilly.f4cb { import flash.geom.Vector3D; import mx.core.IVisualElement; import mx.core.UIComponent; import spark.components.supportClasses.GroupBase; import spark.layouts.supportClasses.LayoutBase; public class Custom3DLayout extends LayoutBase { private var _focalLength:Number = 500; private var _scrollPosition:Number = 0; scale:Number var dist:Number; var xpos:Number = w; var ypos:Number; var element:IVisualElement; for( var i:int = 0; i < layoutTarget.numElements; i++ ) { element = layoutTarget.getElementAt( i ) as IVisualElement; radians = ( ( angle * i ) + _scrollPosition ) * ( Math.PI / 180 ); dist = w + ( Math.sin(radians) * radius ); scale = _focalLength / ( _focalLength + dist ); ypos = h + ( Math.cos(radians) * radius ) * scale; element.depth = scale; element.setLayoutBoundsSize( NaN, NaN ); element.transformAround( new Vector3D((element.width / 2), (element.height / 2) ), null, null, null, new Vector3D( scale, scale, 0 ), null, new Vector3D( xpos, ypos, 0 )); } } public function get scrollPosition():Number { return _scrollPosition; } public function set scrollPosition( value:Number ):void { _scrollPosition = value; target.invalidateDisplayList(); } } } The Custom3DLayout class in this example displays the child elements of a target container within a vertical carousel with 3D perspective. As the child elements of the container are accessed within the for loop, each instance is cast as an IVisualElement implementation (an extension of ILayoutElement) in order to set the appropriate depth value based on the derived distance from the viewer. As well, the explicit width and height properties are used to properly apply 3D translations using the transformAround() method. The first argument of the transformAround() method is a Vector3D object representing the center around which to apply transformations to the element. The following three optional arguments are Vector3D objects representing transformations that can be applied to an element that affect other children of the same layout, which in this example are attributed as a value of null. The last three arguments (also optional) are Vector3D objects representing transformations to be applied post-layout. Post-layout scale and translation transformations applied to an element do not affect the position and size of other children. A scrollPosition property has been added to Custom3DLayout to allow for scrolling through the carousel of elements. As the scrollPosition value changes, the invalidate DisplayList() method of the target container is invoked, which in turn invokes the updateDisplayList() method of the custom layout delegate. The following example applies the Custom3DLayout to a DataGroup container: <s:Application xmlns: <s:layout> <s:HorizontalLayout </s:layout> <s:Group <s:DataGroup <s:layout> <f4cb:Custom3DLayout </s:layout> <s:dataProvider> <s:ArrayCollection> <s:Button <s:Button <s:Button <s:Button <s:Button <s:Button </s:ArrayCollection> </s:dataProvider> </s:DataGroup> <s:Rect <s:stroke> <s:SolidColorStroke </s:stroke> </s:Rect> </s:Group> <s:VSlider </s:Application> As the value of the s:VSlider control changes, the scrollPosition of the custom layout delegate is modified and the transformations on child elements are updated to show a smooth scrolling carousel with perspective. Use the horizontalScrollPosition and verticalScrollPosition properties of a LayoutBase-based layout. When a fixed size is applied to a container and the clipAndEnableScrolling property is set to a value of true, the rendering of child elements is confined to the dimensions of the container. If the position of a child element is determined as being outside of the parent container’s bounds, the layout does not display that child within the container. Containers that implement the IViewport interface—as GroupBase-based containers do—can be wrapped in a Scroller component, and scroll bars will automatically be displayed based on the contentWidth and contentHeight of the viewport. Because, unlike MX containers, Spark containers do not inherently support adding scroll bars to their display, programmatically scrolling the content of a viewport is supported by updating the horizontalScrollPosition and verticalScrollPosition properties of a layout. In fact, that is how a container internally determines its scroll position: by requesting the scroll values of its layout. As shown in the following example, a container viewport can be scrolled programmatically by using another value-based component: <s:Application xmlns: ="0" maximum="{group.contentHeight - group.height}" liveDragging="true" /> <s:DataGroup <s:layout> <s:VerticalLayout </s:layout> <s:dataProvider> <mx:ArrayCollection </s:dataProvider> </s:DataGroup> </s:Application> The maximum scroll value for the s:HSlider control is determined by subtracting the value of the container’s height property from its contentHeight property value. The contentHeight property is an attribute of the IViewport interface, which all GroupBase-based containers implement. The verticalScrollPosition of the container’s layout delegate is bound to the value of the HSlider control, in turn updating the rendered view within the viewport of the container. As the value increases, child elements that previously resided below the viewport are rendered in the layout. As the value decreases, child elements that previously resided above the viewport are rendered. Because the scroll position in the previous example is updated prior to the rendering of child elements, the layout can employ virtualization easily. However, determining the scroll position of a virtualized layout based on the size of child elements involves accessing the virtual child elements of a container directly. The following example demonstrates how to programmatically use virtualized elemental scrolling: <s:Application xmlns: <fx:Library> <fx:Definition <s:Rect <s:fill> <mx:SolidColor </s:fill> </s:Rect> </fx:Definition> </fx:Library> <fx:Script> <![CDATA[ import mx.core.IVisualElement; import spark.core.NavigationUnit; private var elementHeight:Vector.<Number> = new Vector.<Number>(); private var currentIndex:int; private function handleScroll( unit:uint ):void { currentIndex = (unit == NavigationUnit.UP) ? currentIndex - 1 : currentIndex + 1; currentIndex = Math.max( 0, Math.min( currentIndex, group.numElements - 1 ) ); var element:IVisualElement; var ypos:Number = 0; for( var i:int = 0; i < currentIndex; i++ ) { element = group.getVirtualElementAt( i ); if( element != null ) { elementHeight[i] = element.getPreferredBoundsHeight(); } ypos += elementHeight[i]; } ypos += vLayout.paddingTop; ypos += vLayout.gap * currentIndex; vLayout.verticalScrollPosition = ypos; } ]]> </fx:Script> <s:layout> <s:VerticalLayout </s:layout> <s:DataGroup <s:layout> <s:VerticalLayout </s:layout> <s:dataProvider> <mx:ArrayCollection> <fx:MyRect <s:DropDownList <fx:MyRect <s:Button <fx:MyRect </mx:ArrayCollection> </s:dataProvider> </s:DataGroup> <s:Button <s:Button </s:Application> The elemental index on which to base the vertical scroll position of the container viewport is determined by a click event dispatched from either of the two declared s:Button controls. As the currentIndex value is updated, the position is determined by the stored height values of child elements retrieved from the get Virtual Element At() method of the GroupBase target container. You want to determine the visibility of an element within a container with a sequence-based layout delegate and possibly scroll the element into view. Use the fractionOfElementInView() method of a sequence-based layout such as VerticalLayout or HorizontalLayout to determine the visibility percentage value of an element within the container’s viewport and set the container’s scroll position based on the corresponding coordinate offset value returned from the getScrollPosition Delta ToElement() method of a LayoutBase-based layout. Sequence-based layouts available in the Flex 4 SDK, such as VerticalLayout and HorizontalLayout, have convenience properties and methods for determining the visibility of an element within the viewport of its parent container. The fractionOfElementInView() method returns a percentage value related to the visibility of an element within a range of 0 to 1. A value of 0 means the element is not present in the view, while a value of 1 means the element is completely in view. The argument value for the fraction OfElement InView() method is the elemental index of an element in the container’s display list. This index is used, along with the firstIndexInView and Index InView convenience properties, to determine the visibility of the element; you can also use it to determine whether to update a container’s scroll position and if so, by how much. If it is determined that the element needs to be scrolled into view, the scroll position of the container viewport can be updated programmatically using the vertical Scroll Position and horizontalScrollPosition properties, as in the following example: <s:Application xmlns: <fx:Script> <![CDATA[ import spark.layouts.VerticalLayout; import mx.core.IVisualElement; } private function getRandomInRange( min:int, max:int ):int { return ( Math.floor( Math.random() * (max - min + 1) ) + min ); } private function handleClick( evt:MouseEvent ):void { var item:IVisualElement = evt.target as IVisualElement; var index:Number = group.getElementIndex( item ); scrollTo( index ); } private function handleRandomScroll():void { var index:int = getRandomInRange( 0, 9 ); scrollTo( index ); scrollToField.text = (index+1).toString(); } ]]> </fx:Script> <s:layout> <s:VerticalLayout /> </s:layout> <s:Scroller <s:VGroup <s:Button <s:Button <s:Button <s:Button <s:Button <s:Button <s:Button <s:Button <s:Button <s:Button </s:VGroup> </s:Scroller> <s:HGroup <s:Button <s:Label </s:HGroup> </s:Application> Each element in the layout of the <s:VGroup> container is assigned a handler for a click event, which determines the elemental index of that item using the get Element Index() method of the content API. This example also provides the ability to randomly scroll to an element within the container using the handle Random Scroll() method, which (similar to the handleClick() event handler) hands an elemental index to the scrollTo() method to determine the visibility percentage of that element within the container viewport. If the element is not fully in view, it is scrolled into view using the getScrollPositionDeltaToElement() method of a LayoutBase-based layout. This method returns a Point object with position values that indicate the element’s offset from the container viewport (i.e., how far to scroll to make it completely visible). If the return value is null, either the elemental index lies outside of the display list or the element at that index is already fully in view. The display list indexes of the elements visible in the container viewport can also be determined using the firstIndexInView and lastIndexInView convenience properties of a sequence-based layout, as in the following snippet: trace( "firstIndex: " + ( group.layout as VerticalLayout ).firstIndexInView ); trace( "lastIndex: " + ( group.layout as VerticalLayout ).lastIndexInView ); } Upon a change to the scroll position of the layout delegate assigned to a container, the read-only firstIndexInView and lastIndexInView properties of a sequence-based layout are updated and a bindable indexInViewChanged event is dispatched. If you enjoyed this excerpt, buy a copy of Flex 4 Cookbook.
http://www.oreillynet.com/lpt/a/7637
CC-MAIN-2014-35
en
refinedweb
Sharing the goodness… Beth Massi is a Senior Program Manager on the Visual Studio team at Microsoft and a community champion for business application developers. Learn more about Beth. More videos » Very often in business applications we need to validate data through another service. I’m not talking about validating the format of data entered – this is very simple to do in LightSwitch -- I’m talking about validating the meaning of the data. For instance, you may need to validate not just the format of an email address (which LightSwitch handles automatically for you) but you also want to verify that the email address is real. Another common example is physical Address validation in order to make sure that a postal address is real before you send packages to it. In this post I’m going to show you how you can call web services when validating LightSwitch data. I’m going to use the Address Book sample and implement an Address validator that calls a service to verify the data. In Visual Studio LightSwitch there are a few places where you can place code to validate entities. There are Property_Validate methods and there are Entity_Validate methods. Property_Validate methods run first on the client and then on the server and are good for checking the format of data entered, doing any comparisons to other properties, or manipulating the data based on conditions stored in the entity itself or its related entities. Usually you want to put your validation code here so that users get immediate feedback of any errors before the data is submitted to the server. These methods are contained on the entity classes themselves. (For more detailed information on the LightSwitch Validation Framework see: Overview of Data Validation in LightSwitch Applications) The Entity_Validate methods only run on the server and are contained in the ApplicationDataService class. This is the perfect place to call an external validation service because it avoids having clients calling external services directly -- instead the LightSwitch middle-tier makes the call. This gives you finer control over your network traffic. Client applications may only be allowed to connect to your intranet internally but you can allow external traffic to the server managing the external connection in one place. There are a lot of services out there for validating all sorts of data and each service has a different set of requirements. Typically I prefer REST-ful services so that you can make a simple http request (GET) and get some data back. However, you can also add service references like ASMX and WCF services as well. It’s all going to depend on the service you use so you’ll need to refer to their specific documentation. To add a service reference to a LightSwitch application, first flip to File View in the Solution Explorer, right-click on the Server project and then select Add Service Reference… Enter the service URL and the service proxy classes will be generated for you. You can then call these from server code you write on the ApplicationDataService just like you would in any other application that has a service reference. In the case of calling REST-ful services that return XML feeds, you can simply construct the URL to call and examine the results. Let’s see how to do that. In this sample we have an Address table where we want to validate the physical address when the data is saved. There are a few address validator services out there to choose from that I could find, but for this example I chose to sign up for a free trial of an address validation service from ServiceObjects. They’ve got some nice, simple APIs and support REST web requests. Once you sign up they give you a License Key that you need to pass into the service. A sample request looks like this: Which gives you back the result: <?xml version="1.0" encoding="UTF-8"?> <Address xmlns="" xmlns:xsi="" xmlns: <Address>1 Microsoft Way</Address> <City>Redmond</City> <State>WA</State> <Zip>98052-8300</Zip> <Address2/> <BarcodeDigits>980528300997</BarcodeDigits> <CarrierRoute>C012</CarrierRoute> <CongressCode>08</CongressCode> <CountyCode>033</CountyCode> <CountyName>King</CountyName> <Fragment/> </Address> If you enter a bogus address or forget to specify the City+State or PostalCode then you will get an error result: <?xml version="1.0" encoding="UTF-8"?> <Address xmlns="" xmlns:xsi="" xmlns: <Error> <Desc>Please input either zip code or both city and state.</Desc> <Number>2</Number> <Location/> </Error> </Address> So in order to interact with this service we’ll first need to add some assembly references to the Server project. Right-click on the Server project (like shown above) and select “Add Reference” and import System.Web and System.Xml.Linq. Next, flip back to Logical View and open the Address entity in the Data Designer. Drop down the Write Code button to access the Addresses_Validate method. (You could also just open the Server\UserCode\ApplicationDataService code file if you are in File View). First we need to import some namespaces as well as the default XML namespace that is returned in the response. (For more information on XML in Visual Basic please see: Overview of LINQ to XML in Visual Basic and articles here on my blog.) Then we can construct the URL based on the entity’s Address properties and query the result XML for either errors or the corrected address. If we find an error, we tell LightSwitch to display the validation result to the user on the screen. Imports System.Xml.Linq Imports System.Web.HttpUtility Imports < Namespace LightSwitchApplication Public Class ApplicationDataService Private Sub Addresses_Validate(entity As Address, results As EntitySetValidationResultsBuilder) Dim isValid = False Dim errorDesc = "" 'Construct the URL to call the web service Dim url = String.Format("?" & "Address={0}&Address2={1}&City={2}&State={3}&PostalCode={4}&LicenseKey={5}", UrlEncode(entity.Address1), UrlEncode(entity.Address2), UrlEncode(entity.City), UrlEncode(entity.State), UrlEncode(entity.ZIP), "12345") Try 'Call the service and load the XML result Dim addressData = XElement.Load(url) 'Check for errors first Dim err = addressData...<Error> If err.Any Then errorDesc = err.<Desc>.Value Else 'Fill in corrected address values returned from service entity.Address1 = addressData.<Address>.Value entity.Address2 = addressData.<Address2>.Value entity.City = addressData.<City>.Value entity.State = addressData.<State>.Value entity.ZIP = addressData.<Zip>.Value isValid = True End If Catch ex As Exception Trace.TraceError(ex) End Try If Not (isValid) Then results.AddEntityError("This is not a valid US address. " & errorDesc) End If End Sub End Class End Namespace Now that I’ve got this code implemented let’s enter some addresses on our contact screen. Here I’ve entered three addresses, the first two are legal and the last one is not. Also notice that I’ve only specified partial addresses. If I try to save this screen, an error will be returned from the service on the last row. LightSwitch won’t let us save until the address is fixed. If I delete the bogus address and save again, you will see that the other addresses were verified and all the fields are updated with complete address information. I hope this gives you a good idea on how to implement web service calls into the LightSwitch validation pipeline. Even though each service you use will have different requirements on how to call them and what they return, the LightSwitch validation pipeline gives you the necessary hooks to implement complex entity validation easily. Enjoy! Thanks for the article. I would be extremely grateful for an article demonstrating the calling of a web service viaa button click here. The button click code is understood. Getting a WSDL service (which can only be implemented in the server) to be called by a button is a mystery though. I have been down the WCF RIA path...you can read about that plight here: social.msdn.microsoft.com/.../3a547ac4-5ed2-49f0-82bf-873925335a81 Perhaps a look at a walkthough I uploaded: code.msdn.microsoft.com/LightSwitch-Consuming-Web-c54979e0 may be of interest. Thank You The given information is very effective i will keep updated with the same <a href ="">web services</a> This is good information, thanks. I'm very new and I'm having trouble making an ebay API call in lightswitch. In visual studio I add a web reference then type "using application1.com.ebay.developer;" and it works. In lightswitch I cant add a web reference to the client. Instead, I added a service reference to the client and typed "using lightswitchapplication.servicereference1;" into the new c# client class. What happens is that all the API code becomes recognised except for one piece of code: "eBayAPIInterfaceService". Is there a way i can call a web reference in lightswitch? the button just isnt there in the advanced settings. Or is there another way I can get the API working with the service reference? Thanks :) Danny LightSwith 2012 can not be Imports System.Xml.Linq Imports System.Web.HttpUtility I have added a reference System.xml.linq System.Web Please help!
http://blogs.msdn.com/b/bethmassi/archive/2012/01/30/calling-web-services-to-validate-data-in-visual-studio-lightswitch.aspx
CC-MAIN-2014-35
en
refinedweb
Fabulous Adventures In Coding Eric Lippert is a principal developer on the C# compiler team. Learn more about Eric. Once more I have returned from my ancestral homeland, after some weeks of sun, rain, storms, wind, calm, friends and family. I could certainly use another few weeks, but it is good to be back too. Well, enough chit-chat; back to programming language design. Here's an interesting combination of subclassing with nesting. Before trying it, what do you think this program should output? public class A<T> { public class B : A<int> { public void M() { System.Console.WriteLine(typeof(T).ToString()); } public class C : B { } }}class MainClass { static void Main() { A<string>.B.C c = new A<string>.B.C(); c.M(); }} Should this say that T is int, string or something else? Or should this program not compile in the first place? It turned out that the actual result is not what I was expecting at least. I learn something new about this language every day. Can you predict the behaviour of the code? Can you justify it according to the specification? (The specification is really quite difficult to understand on this point, but in fact it does all make sense.) (The answer to the puzzle is.
http://blogs.msdn.com/b/ericlippert/archive/2007/07/27/an-inheritance-puzzle-part-one.aspx
CC-MAIN-2014-35
en
refinedweb
Folder redirection, DFS, and "semaphore timeout" error So we have been successfully using Folder Redirection here for a year or so without any issues. I am in the process of moving our users redirected folder to a DFS namespace as part of retiring an old file server. The process involves moving the user object to a new OU with a new GPO linked that handles folder redirection to the new DFS location (the old location was *not* DFS). I then run a GPUPDATE, restart the machine, and on the next login, the GPO handles moving the user's redirected folder from the old location to the new. I have successfully done this with a handlful of users, but one user in particular, is giving me problems. When I move the user object to the new GP and restart, after the login, i see that their folders are still pointing to the old location, despite GPRESULT saying the new GPO applied. I am finding the following error in the event log. The following error occurred: "Failed to copy files from "C:\Users\<UserName>\Documents" to "<a network share>". Error details: "The semaphore timeout period has expired". I found this KB article, but I am not able to run Procmon because the the copy/error is generated during login. I did try logging in as the user via RDP, and then sitting at the console logged in as admin and running procmon, but it didn't/couldn't see processes running in the other session. Any ideas what I can do to resolve this? EDIT: The issue ended up being the 249 charecter limit mentioned in the KB article. I ran a script to list the offending file paths, had the user rename them, and then the folder redirection GPO properly applied. Thanks to all who offered help! *a note for anyone with a similiar problem that stumbles across this post in the future. the issue was the number if charecters in the entire file path. Also it wasn't the current path that was the issue. The DFS path that I was moving to had more charecters in it than the existing UNC path directly to the old server. When my script reported the length, I had it report the current length and the length of the complete DFS path. It was the later that was problematic. Edited Mar 27, 2013 at 12:47 UTCEdited Mar 27, 2013 at 12:47 UTC 6 Replies Mar 25, 2013 at 7:23 UTC Worst error message ever!!!! LOLOLOL!! Mar 25, 2013 at 7:40 UTC Give that domain user local admin access to the system, reset their AD password (or have them give it to you) so you can use their account and login as them and then see if you can tell what processes are running. Mar 25, 2013 at 7:54 UTC Cant you just goto their share on the server, find the directory or file name that is long, shouldnt be too hard, and delete/rename it? You could do a search for *.* on the user's space, will return all files. Then sort on Folder location Mar 25, 2013 at 8:37 UTC They are a local admin and I am logging in as them after i move their user object to the new OU/GPO. The issue is that they process runs (and throws the error) during login. I have verbose login enabled and it pauses on "Applying Folder Redirection Policy" and stays there a good 30 minutes or so. I believe this is when that actual copy attempt is made. After the login complete, there is no process to view with Procmon. I have seen it hang at the "Applying Folder Redirection Policy" when I have migrated other users, but when the login is complete, all of their data is in the new location and the pointers for My Docs and Desktop are pointing to the new location. Mar 26, 2013 at 2:44 UTC Check the files in the the old and new location and make sure that the user didn't somehow lose permissions to one... If it can't access a file for some reason that might explain the hangup. Mar 27, 2013 at 9:27 UTC Unless you set it up that way (and I can't imagine why you would), Windows folder replication or redirection does not use a semaphore file. It only falls back to looking for one after a timeout in case it is on some kind of funny network with unusual protocols or mixed platforms where netbios and DNS can't work as efficiently as they should, assuming the administrator set up a semaphore handshake for just that situation. I'm assuming you didn't. Something is pooched about that user's data and stopping access to it, causing the timeout. I'm inclined to +1 Dashrender's permissions theory, then I would test accessing the data using various accounts from another server or workstation via hostname/ip/etc and see if you get mixed or consistent results. That should indicate if there is a DNS problem lurking. Something else you can try doing to get more info is to log in locally as the user, start procmon, then RDP in as the admin and minimize the RDP session. Then watch procmon and see what happens. You may have to patch the termsrv.dll to get the multiple concurrent logins (I didn't tell you that). This discussion has been inactive for over a year. You may get a better answer to your question by starting a new discussion. It's FREE. Always will be.
http://community.spiceworks.com/topic/317308-folder-redirection-dfs-and-semaphore-timeout-error
CC-MAIN-2014-35
en
refinedweb
Ethernet addresses. It is used by all Ethernet datalink providers (interface drivers) and can be used by other datalink providers that support broadcast, including FDDI and Token Ring. The only network layer supported in this implementation is the Internet Protocol, although ARP is not specific to that protocol. ARP caches IP-to-link-layer a maximum of four packets while awaiting a response to a mapping request. ARP keeps only the first four-link address tables. Ioctls that change the table contents require sys_net_config privilege. See privileges(5). ); SIOCSARP, SIOCGARP and SIOCDARP are BSD compatible ioctls. These ioctls do not communicate the mac address length between the user and the kernel (and thus only work for 6 byte wide Ethernet addresses). To manage the ARP cache for media that has different sized mac addresses, use SIOCSXARP, SIOCGXARP and SIOCDXARP ioctls. #include <sys/sockio.h> #include <sys/socket.h> #include <net/if.h> #include <net/if_dl.h> #include <net/if_arp.h> struct xarpreq xarpreq; ioctl(s, SIOCSXARP, (caddr_t)&xarpreq); ioctl(s, SIOCGXARP, (caddr_t)&xarpreq); ioctl(s, SIOCDXARP, (caddr_t)&xarpreq); Each ioctl() request takes the same structure as an argument. SIOCS[X]ARP sets an ARP entry, SIOCG[X]ARP gets an ARP entry, and SIOCD[X]ARP deletes an ARP entry. These ioctl() requests may be applied to any Internet family socket descriptors, or to a descriptor for the ARP device. Note that SIOCS[X]ARP and SIOCD[X]ARP require a privileged user, while SIOCG[X]ARP does not. The arpreq structure contains /* * ARP ioctl request */ struct arpreq { struct sockaddr arp_pa; /* protocol address */ struct sockaddr arp_ha; /* hardware address */ int arp_flags; /* flags */ }; The xarpreq structure contains: /* * Extended ARP ioctl request */ struct xarpreq { struct sockaddr_storage xarp_pa; /* protocol address */ struct sockaddr_dl xarp_ha; /* hardware address */ int xarp_flags; /* arp_flags field values */ }; #define ATF_COM 0x2 /* completed entry (arp_ha valid) */ #define ATF_PERM 0x4 /* permanent (non-aging) entry */ #define ATF_PUBL 0x8 /* publish (respond for other host) */ #define ATF_USETRAILERS 0x10 /* send trailer packets to host */ #define ATF_AUTHORITY 0x20 /* hardware address is authoritative */ The address family for the [x]arp_pa sockaddr must be AF_INET. The ATF_COM flag bits ([x]arp_flags) cannot be altered. ATF_USETRAILER is not implemented on Solaris and is retained for compatibility only. ATF_PERM makes the entry permanent (disables aging) if the ioctl() request succeeds. ATF_PUBL specifies that the system should respond to ARP requests for the indicated protocol address coming from other machines. This allows a host to act as an "ARP server," which may be useful in convincing an ARP-only machine to talk to a non-ARP machine. ATF_AUTHORITY indicates that this machine owns the address. ARP does not update the entry based on received packets. The address family for the arp_ha sockaddr must be AF_UNSPEC. Before invoking any of the SIOC*XARP ioctls, user code must fill in the xarp_pa field with the protocol (IP) address information, similar to the BSD variant. The SIOC*XARP ioctls come in two (legal) varieties, depending on xarp_ha.sdl_nlen: Other than the above, the xarp_ha structure should be 0-filled except for SIOCSXARP, where the sdl_alen field must be set to the size of hardware address length and the hardware address itself must be placed in the LLADDR/sdl_data[] area. (EINVAL will be returned if user specified sdl_alen does not match the address length of the identified interface). On return from the kernel on a SIOCGXARP ioctl, the kernel fills in the name of the interface (excluding terminating NULL) and its hardware address, one after another, in the sdl_data/LLADDR area; if the two are larger than can be held in the 244 byte sdl_data[] area, an ENOSPC error is returned. Assuming it fits, the kernel will also set sdl_alen with the length of hardware address, sdl_nlen with the length of name of the interface (excluding terminating NULL), sdl_type with an IFT_* value to indicate the type of the media, sdl_slen with 0, sdl_family with AF_LINK and sdl_index (which if not 0) with system given index for the interface. The information returned is very similar to that returned via routing sockets on an RTM_IFINFO message. ARP performs duplicate address detection for local addresses. When a logical interface is brought up (IFF_UP) or any time the hardware link goes up (IFF_RUNNING), ARP sends probes (ar$spa == 0) for the assigned address. If a conflict is found, the interface is torn down. See ifconfig(1M) for more details. ARP watches for hosts impersonating the local host, that is, any host that responds to an ARP request for the local host's address, and any address for which the local host is an authority. ARP defends local addresses and logs those with ATF_AUTHORITY set, and can tear down local addresses on an excess of conflicts. ARP also handles UNARP messages received from other nodes. It does not generate these messages. arp(1M), ifconfig(1M), privileges(5), if_tcp(7P), inet(7P) Plummer, Dave, An Ethernet Address Resolution Protocol or Converting Network Protocol Addresses to 48 bit Ethernet - Addresses for Transmission on Ethernet Hardware, RFC 826, STD 0037, November 1982. Malkin, Gary, ARP Extension - UNARP, RFC 1868, November 1995. Several messages can be written to the system logs (by the IP module) when errors occur. In the following examples, the hardware address strings include colon (:) separated ASCII representations of the link layer addresses, whose lengths depend on the underlying media (for example, 6 bytes for Ethernet). Duplicate IP address warning. ARP has discovered another host on a local network that responds to mapping requests for the Internet address of this system, and has defended the system against this node by re-announcing the ARP entry. Duplicate IP address detected while performing initial probing. The newly-configured interface has been shut down. Duplicate IP address detected on a running IP interface. The conflict cannot be resolved, and the interface has been disabled to protect the network. An interface with a previously-conflicting IP address has been recovered automatically and reenabled. The conflict has been resolved. This message appears if arp(1M) has been used to create a published permanent (ATF_AUTHORITY) entry, and some other host on the local network responds to mapping requests for the published ARP entry. Name | Synopsis | Description | APPLICATION PROGRAMMING INTERFACE | See Also | Diagnostics
http://docs.oracle.com/cd/E19253-01/816-5177/6mbbc4g2j/index.html
CC-MAIN-2014-35
en
refinedweb
08 September 2011 10:10 [Source: ICIS news] SINGAPORE (ICIS)--?xml:namespace> “The No 1 plant was shut down this morning, for 35-45 days [of] maintenance,” the source said. The company, which has four MEG plants with a combined capacity of 1.8m tonnes/year, will shut down another unit “very soon”, the source added. Nan Ya is expected to shut two MEG plants for turnaround this week. The company's No 2 and No 3 units can each produce 360,000 tonnes/year of MEG, while its No 4 unit is the biggest, with a 720,000 tonne/year capacity. For more on MEG,
http://www.icis.com/Articles/2011/09/08/9490925/taiwans-nan-ya-shuts-mailiao-meg-unit-no-1-for-maintenance.html
CC-MAIN-2014-35
en
refinedweb
02 December 2011 14:25 [Source: ICIS news] LONDON (ICIS)--The European December styrene customer reference price (CRP) has been settled at €1,100/tonne ($1,486/tonne), a rollover from last month, the seller involved said on Friday. A rollover from November was supported by a minor increase for benzene in euro terms this month and weakening demand due to seasonal factors. The contract was agreed on a free carrier (FCA) ?xml:namespace> ($1 = €0.74)
http://www.icis.com/Articles/2011/12/02/9513723/europe-december-styrene-crp-rolls-over-at-1100tonne.html
CC-MAIN-2014-35
en
refinedweb
Chapter I The QUARREL ON THE CAPM: A LITERATURE SURVEY Abstract The current chapter has attempted to do three things. First it presents an overview on the capital asset pricing model and the results from its application throughout a narrative literature review. Second the chapter has argued that to claim whether the CAPM is dead or alive, some improvements on the model must be considered. Rather than take the view that one theory is right and the other is wrong, it is probably more accurate to say that each applies in somewhat different circumstances (assumptions). Finally the chapter has argued that even the examination of the CAPM's variants is unable to solve the debate into the model. Rather than asserting the death or the survival of the CAPM, we conclude that there is no consensus in the literature as to what suitable measure of risk is, and consequently as to what extent the model is valid or not since the evidence is very mixed. So the debate on the validity of the CAPM remains a questionable issue. Keywords: CAPM, CAPM's variants, circumstances, literature survey. 1. INTRODUCTION The traditional capital assets pricing model (CAPM), always the most widespread model of the financial theory, was prone to harsh criticisms not only by the academicians but also by the experts in finance. Indeed, in the last few decades an enormous body of empirical researches has gathered evidences against the model. These evidences tackle directly the model's assumptions and suggest the dead of the beta (Fama and French, 1992); the systematic risk of the CAPM. If the world does not obey to the model's predictions, it is maybe because the model needs some improvements. It is maybe because also the world is wrong, or that some shares are not correctly priced. Perhaps and most notably the parameters that determine the prices are not observed such as information or even the returns' distribution. Of course the theory, the evidence and even the unexplained movements have all been subject to much debate. But the cumulative effect has been to put a new look on asset pricing. Financial Researchers have provided both theory and evidence which suggest from where the deviations of securities' prices from fundamentals are likely to come, and why could not be explained by the traditional CAPM. Understanding security valuation is a parsimonious as well as a lucrative end in its self. Nevertheless, research on valuation has many additional benefits. Among them the crucial and relatively neglected issues have to do with the real consequences of the model's failure. How are securities priced? What are the pricing factors and when? Once it is recognized that the model's failure has real consequences, important issues arise. For instance the conception of an adequate pricing model that accounts for all the missing aspects. The objective of this chapter is to look at different approaches to the CAPM, how these have arisen, and the importance of recognizing that there's no single ‘'right model'' which is adequate for all shares and for all circumstances, i.e. assumptions. We will, so move on to explore the research task, discuss the goodness and the weakness of the CAPM, and look at how different versions are introduced and developed in the literature. We will, finally, go on to explore whether these recent developments on the CAPM could solve the quarrel behind its failure. For this end, the recent chapter is organized as follows: the second section presents the theoretical bases of the model. The third one discusses the problematic issues on the model. The fourth section presents a literature survey on the classic version of the model. The five section sheds light on the recent developments of the CAPM together with a literature review on these versions. The next one raises the quarrel on the model and its modified versions. Section seven concludes the paper. 2. THEORETICAL BASES OF THE CAPITAL ASSET PRICING MODEL In the field of finance, the CAPM is used to determine, theoretically, the required return of an asset; if this asset is associated to a well diversified market portfolio while taking into account the non diversified risk of the asset its self. This model, introduced by Jack Treynor, William Sharpe and Jan Mossin (1964, 1965) took its roots of the Harry Markowitz's work (1952) which is interested in diversification and the modern theory of the portfolio. The modern theory of portfolio was introduced by Harry Markowitz in his article entitled “Portfolio Selection'', appeared in 1952 in the Journal of Finance. Well before the work of Markowitz, the investors, for the construction of their portfolios, are interested in the risk and the return. Thus, the standard advice of the investment decision was to choose the stocks that offer the best return with the minimum of risk, and by there, they build their portfolios. On the basis of this point, Markowitz formulated this intuition by resorting to the diversification's mathematics. Indeed, he claims that the investors must in general choose the portfolios while getting based on the risk criterion rather than to choose those made up only of stocks which offer each one the best risk-reward criterion. In other words, the investors must choose portfolios rather than individual stocks. Thus, the modern theory of portfolio explains how rational investors use diversification to optimize their portfolio and what should be the price of an asset while knowing its systematic risk. Such investors are so-called to meet only one source of risk inherent to the total performance of the market; more clearly, they support only the market risk. Thus, the return on a risky asset is determined by its systematic risk. Consequently, an investor who chooses a less diversified portfolio, generally, supports the market risk together with the uncertainty's risk which is not related to the market and which would remain even if the market return is known. Sharpe (1964) and Linter (1965), while basing on the work of Harry Markowitz (1952), suggest, in their model, that the value of an asset depends on the investors' anticipations. They claim, in their model that if the investors have homogeneous anticipations (their optimal behavior is summarized in the fact of having an efficient portfolio based on the mean-variance criterion), the market portfolio will have to be the efficient one while referring to the mean-variance criterion (Hawawini 1984, Campbell, Lo and MacKinlay 1997). The CAPM offer an estimate of a financial asset on the market. Indeed, it tries to explain this value while taking into account the risk aversion, more particularly; this model supposes that the investors seek, either to maximize their profit for a given level of risk, or to minimize the risk taking into account a given level of profit. The simplest mean-variance model (CAPM) concludes that in equilibrium, the investors choose a combination of the market portfolio and to lend or to borrow with proportions determined by their capacity to support the risk with an aim of obtaining a higher return. 2.1. Tested Hypothesis The CAPM is based on a certain number of simplifying assumptions making it applicable. These assumptions are presented as follows: - The markets are perfect and there are neither taxes nor expenses or commissions of any kind; - All the investors are risk averse and maximize the mean-variance criterion; - The investors have homogeneous anticipations concerning the distributions of the returns' probabilities (Gaussian distribution); and - The investors can lend and borrow unlimited sums with the same interest rate (the risk free rate). The aphorism behind this model is as follows: the return of an asset is equal to the risk free rate raised with a risk premium which is the risk premium average multiplied by the systematic risk coefficient of the considered asset. Thus the expression is a function of: - The systematic risk coefficient which is noted as; - The market return noted; - The risk free rate (Treasury bills), noted This model is the following: Where: ; represents the risk premium, in other words it represents the return required by the investors when they rather place their money on the market than in a risk free asset, and; ; corresponds to the systematic risk coefficient of the asset considered. From a mathematical point of view, this one corresponds to the ratio of the covariance of the asset's return and that of the market return and the variance of the market return. Where: ; represents the standard deviation of the market return (market risk), and ; is the standard deviation of the asset's return. Subsequently, if an asset has the same characteristics as those of the market (representative asset), then, its equivalent will be equal to 1. Conversely, for a risk free asset, this coefficient will be equal to 0. The beta coefficient is the back bone of the CAPM. Indeed, the beta is an indicator of profitability since it is the relationship between the asset's volatility and that of the market, and volatility is related to the return's variations which are an essential element of profitability. Moreover, it is an indicator of risk, since if this asset has a beta coefficient which is higher than 1, this means that if the market is in recession, the return on the asset drops more than that of the market and less than it if this coefficient is lower than 1. The portfolio risk includes the systematic risk or also the non diversified risk as well as the non systematic risk which is known also under the name of diversified risk. The systematic risk is a risk which is common for all stocks, in other words it is the market risk. However the non systematic risk is the risk related to each asset. This risk can be reduced by integrating a significant number of stocks in the market portfolio, i.e. by diversifying well in advantage (Markowitz, 1985). Thus, a rational investor should not take a diversified risk since it is only the non diversified risk (risk of the market) which is rewarded in this model. This is equivalent to say that the market beta is the factor which rewards the investor's exposure to the risk. In fact, the CAPM supposes that the market risk can be optimized i.e. can be minimized the maximum. Thus, an optimal portfolio implies the weakest risk for a given level of return. Moreover, since the inclusion of stocks diversifies in advantage the portfolio, the optimal one must contain the whole stocks on the market, with the equivalent proportions so as to achieve this goal of optimization. All these optimal portfolios, each one for a given level of return, build the efficient frontier. Here is the graph of the efficient frontier: The (Markowitz) efficient frontier The efficient frontier Lastly, since the non systematic risk is diversifiable, the total risk of the portfolio can be regarded as being the beta (the market risk). 3. Problematic issues on the CAPM Since its conception as a model to value assets by Sharpe (1964), the CAPM has been prone to several discussions by both academicians and experts. Among them the most known issues concerning the mean variance market portfolio, the efficient frontier, and the risk premium puzzle. 3.1 The mean-variance market portfolio The modern portfolio theory was introduced for the first time by Harry Markowitz (1952). The contribution of Markowitz constitutes an epistemological shatter with the traditional finance. Indeed, it constitutes a passageway from an intuitive finance which is limited to advices related to financial balance or to tax and legal nature advices, to a positive science which is based on coherent and fundamental theories. One allots to Markowitz the first rigorous treatment of the investor dilemma, namely how obtaining larger profits while minimizing the risks. 3.2 The efficient frontier 3.3 The equity premium puzzle 4. Background on the CAPM . The CAPM's empirical problems may reflect theoretical failings, the result of many simplifying assumptions.” Fama and French, 2003, “The Capital Asset Pricing Model: Theory and Evidence”, Tuck Business School, Working Paper No. 03-26 Being a theory, the CAPM found the welcome thanks to its circumspect elegance and its concept of good sense which supposes that a risk averse investor would require a higher return to compensate for supported the back-up risk. It seems that a more pragmatic approach carries out to conclude that there are enough limits resulting from the empirical tests of the CAPM. Tests of the CAPM were based, mainly, on three various implications of the relation between the expected return and the market beta. Firstly, the expected return on any asset is linearly associated to its beta, and no other variable will be able to contribute to the increase of the explanatory power. Secondly, the beta premium is positive which means that the market expected return exceeds that of individual stocks, whose return is not correlated with that of the market. Lastly, according to the Sharpe and Lintner model (1964, 1965), stocks whose return is not correlated with that of the market, have an expected return equal to the risk free rate and a risk premium equal to the difference between the market return and the risk free rate return. In what follows, we are going to examine whether the CAPM's assumptions are respected or not through the empirical literature. Starting with Jensen (1968), this author wants to test for the relationship between the securities' expected return and the market beta. For this reason, he uses the time series regression to estimate for the CAPM´ s coefficients. The results reject the CAPM as for the moment when the relationship between the expected return on assets is positive but that this relation is too flat. In fact, Jensen (1968) finds that the intercept in the time series regression is higher than the risk free rate. Furthermore, the results indicate that the beta coefficient is lower than the average excess return on the market portfolio. In order to test for the CAPM, Black et al. (1972) work on a sample made of all securities listed on the New York Stock Exchange for the period of 1926-1966. The authors classify the securities into ten portfolios on the basis of their betas.They claim that grouping the securities with reference to their betas may offer biased estimates of the portfolio beta which may lead to a selection bias into the tests. Hence, so as to get rid of this bias, they use an instrumental variable which consists of taking the previous period's estimated beta to select a security's portfolio grouping for the next year. For the estimate of the equation, the authors use the time series regression. The results indicate, firstly, that the securities associated to high beta had significantly negative intercepts, whereas those with low beta had significantly positive intercepts. It was proved, also, that this effect persists overtime. Hence, these evidences reject the traditional CAPM. Secondly, it is found that the relation between the mean excess return and beta is linear which is consistent with the CAPM. Nevertheless, the results point out that the slopes and intercepts in the regression are not reliable. In fact, during the prewar period, the slope was sharper than that predicted by the CAPM for the first sub period, and it was flatter during the second sub period. However, after that, the slope was flatter. Basing on these results, Black, Fischer, Michael C. Jensen and Myron Scholes (1972) conclude that the traditional CAPM is inconsistent with the data. Fama and MacBeth (1973) propose another regression method so as to overcome the problem related to the residues correlation in a simple linear regression. Indeed, instead of estimating only one regression for the monthly average returns on the betas, they propose to estimate regressions of these returns month by month on the betas. They include all common stocks traded in NYSE from 1926 to 1968 in their analysis. The monthly averages of the slopes and intercepts, with the standard errors of the averages, thus, are used to check, initially, if the beta premium is positive, then to test if the averages return of assets which are not correlated with the market return is from now on equal to the average of the risk free rate. In this way, the errors observed on the slopes and intercepts are directly given by the variation for each month of the regression coefficients, which detect the effects of the residues correlation over the variation of the regression. Their study led to three main results. At first, the relationship between assets return and their betas in an efficient portfolio is linear. At second, the beta coefficient is an appropriate measure of the security's risk and no other measure of risk can be a better estimator. Finally, the higher the risk is, the higher the return should be. Blume and Friend (1973) in their paper try to examine theoretically and empirically the reasons beyond the failure of the market line to explain excess return on financial assets. The authors estimate the beta coefficients for each common stock listed in the New York Stock Exchange over the period of January 1950 to December 1954. Then, they form 12 portfolios on the basis of their estimated beta. They afterwards, calculate the monthly return for each portfolio. Third, they calculate the monthly average return for portfolios from 1955 to 1959. These averaged returns were regressed to obtain the value of the beta portfolios. Finally, these arithmetic average returns were regressed on the beta coefficient and the square of beta as well. Through, this study, the authors point out that the failure of the capital assets pricing model in explaining returns maybe due to the simplifying assumption according to which the functioning of the short-selling mechanism is perfect. They defend their point of view while resorting to the fact that, generally, in short sales the seller cannot use the profits for purchasing other securities. Moreover, they state that the seller should make a margin of roughly 65% of the sales market value unless the securities he owns had a value three times higher than the cash margin. This makes a severe constraint on his short sales. In addition to that, the authors reveal that it is more appropriate and theoretically more possible to remove the restriction on the short sales than that of the risk free rate assumption (i.e., to borrow and to lend on a unique risk free rate). The results show that the relationship between the average realized returns of the NYSE listed common stocks and their correspondent betas is almost linear which is consistent with the CAPM assumptions. Nevertheless, they advance that the capital assets pricing model is more adequate for the estimates of the NYSE stocks rather than other financial assets. They mention that this latter conclusion is may be owed to the fact that the market of common stocks is well segmented from markets of other assets such as bonds. Finally, the authors come out with the two following conclusions: Firstly, the tests of the CAPM suggest the segmentation of the markets between stocks and bonds. Secondly, in absence of this segmentation, the best way to estimate the risk return tradeoff is to do it over the class of assets and the period of interest. The study of Stambaugh (1982) is interested in testing the CAPM while taking into account, in addition to the US common stocks, other assets such as, corporate and government bonds, preferred stocks, real estate, and other consumer durables. The results indicate that testing the CAPM is independent on whether we expand or not the market portfolio to these additional assets. Kothari Shanken and Sloan (1995), show that the annual betas are statistically significant for a variety of portfolios. These results were astonishing since not very early, Fama and French (1992), found that the monthly and the annual betas are nearly the same and are not statistically significant. The authors work on a sample which covers all AMEX firms for the period 1927-1990. Portfolios are formed in five different ways. Firstly, they from 20 portfolios while basing only on beta. Secondly, 20 portfolios by grouping on size alone. Thirdly, they take the intersection of 10 independent beta or size to obtain 100 portfolios. Then, they classify stocks into 10 portfolios on beta, and after that into 10 portfolios on size within each beta group. They, finally, classify stocks into 10 portfolios on size and then into 10 portfolios on beta within each size group. They use the GRSP equal weighted portfolio as a proxy for the whole market return market. The cross-sectional regression of monthly return on beta and size has led to the following conclusions: On the one hand, when taking into account only the beta, it is found that the parameter coefficient is positive and statistically significant for both the sub periods studied. On the other hand, it is demonstrated that the ability of beta and size to explain cross sectional variation of the returns on the 100 portfolios ranked on beta given the size, is statistically significant. However, the incremental economic benefit of size given beta is relatively small. Fama and French published in 1992 a famous study putting into question the CAPM, called since then the "Beta is dead" paper (the article announcing the death of Beta). The authors use a sample which covers all the stocks of the non-financial firms of the NYSE, AMEX and NASDAQ for the period of the end of December 1962 until June 1990. For the estimate of the betas; they use the same test as that of Fama and Macbeth (1973) and the cross-sectional regression. The results indicate that when paying attention only to the betas variations which are not related to the size, it is found that the relation between the betas and the expected return is too flat, and this even if the beta is the only explanatory variable. Moreover, they show that this relationship tend to disappear overtime. In order to verify the validity of the CAPM in the Hungarian stock market, Andor et al. (1999) work on daily and monthly data on 17 Hungarian stocks between the end of July 1991 and the beginning of June 1999. To proxy for the market portfolio the authors use three different indexes which are the BUX index, the NYSE index, and the MSCI world index. The regression of the stocks' return against the different indexes' return indicates that the CAPM holds. Indeed, in all cases it is found that the return is positively associated to the betas and that the R-squared value is not bad at all. They conclude, hence, that the CAPM is appropriate for the description of the Hungarian stock market. For the aim of testing the validity of the CAPM, Kothari and Jay Shanken (1999), study the one factor model with reference to the size anomaly and the book to market anomaly. The sample used in their study contains annual return on portfolios from the GRSP universe of stocks. The portfolios are formed every July from 1927 to 1992. The formation procedure is the following; every year stocks are sorted on the basis of their market capitalization and then on their betas while regressing the past returns on the GRSP equal weighted index return. They obtain, hence, ten portfolios on the basis of the size. Then, the stocks in each size portfolio are grouped into ten portfolios based on their betas. They repeat the same procedure to obtain the book-to-market portfolios. Using the Fama and MacBeth cross-sectional regression, the authors find those annual betas perform well since they are significantly associated to the average stock returns especially for the period 1941-1990 and 1927-1990. Moreover, the ability of the beta to predict return with reference to the size and the book to market is higher. In a conclusion, this study is a support for the traditional CAPM. Khoon et al. (1999), while comparing two assets pricing models in the Malaysian stock exchange, examine the validity of the CAPM. The data contains monthly returns of 231 stocks listed in the Kuala Lumpur stock exchange over the period of September 1988 to June 1997. Using the cross section regression (two pass regression) and the market index as the market portfolio, the authors find that the beta coefficient is sometimes positive and some others negative, but they do not provide any further tests. In order to extract the factors that may affect the returns of stocks listed in the Istanbul stock exchange, Akdeniz et al. (2000)make use of monthly return of all non financial firms listed in the up mentioned stock market for the period that spans from January 1992 to December 1998. They estimate the beta coefficient in two stages using the ISE composite index as the market portfolio. First, they employ the OLS regression and estimate for the betas each month for each stock. Then, once the betas are estimated for the previous 24 months (time series regression), they rank the stocks into five equal groups on the basis of the pre-ranking betas and the average portfolio beta is attributed to each stock in the portfolio. They, afterwards, divide the whole sample into two equal sub-periods and the estimation procedure is done for each sub-period and the whole period as well. The results from the cross sectional regression, indicate that the return has no significant relationship with the market beta. This variable does not appear to influence cross section variation in all the periods studied (1992-1998, 1992-1995, and 1995-1998). In a relatively larger study, Estrada (2002) investigates the CAPM with reference to the downside CAPM. The author works on a monthly sample covering the period that spans from 1988 to 2001 (varied periods are considered) on stocks of 27 emerging markets. Using simple regression, the authors find that the downside beta outperforms the traditional CAPM beta. Nevertheless, the results do not support the rejection of the CAPM from two aspects. Firstly, it was found that the intercept from the regression is not statistically different from zero. Secondly, the beta coefficient is positive and statistically significant and the explanatory power of the model is about 40%. This result stems for the conclusion according to which the CAPM is still alive within the set of countries studied. In order to check the validity of the CAPM, and the absence of anomalies that must be incorporated to the model, Andrew and Joseph (2003) try to investigate the ability of the model to predict book-to market portfolios. If it is the case, then the CAPM captures the Book-to-market anomaly and there's no need to further incorporate it in the model. For this intention, the authors work on a sample that covers the period of 1927-2001 and contains monthly data on stocks listed in the NYSE, AMEX, and NASDAQ. So as to form the book-to-market portfolios, they use, alike Fama and French (1992), the size and the book-to-market ratio criterion. To estimate for the market return, they use the return on the value weighted portfolios on stocks listed in the pre-cited stock exchanges and to proxy for the risk free rate; they employ the one-month Treasury bill rate from Ibbotson Associates. They, afterwards, divide the whole period into two laps of time; the first one goes from July 1927 to June 1963, and the other one span from July 1963 to the end of 2001. Using asymptotic distribution the results indicate that the CAPM do a great job over the whole period, since the intercept is found to be closed to zero, but there is no evidence for a value premium. Hence, they conclude that the CAPM cannot be rejected. However, for the pre-1963 period the book to market premium is not significant at all, whereas for the post-1963 period this premium is relatively high and statistically significant. Nevertheless, when accounting for the sample size effect, the authors find that there is an overall risk premium for the post-1963 period. The authors conclude then that, taken as a whole, the study fails to reject the null that the CAPM holds. This study points to the necessity to take into account the small sample bias. Fama and French (2004), estimate the betas of stocks provided by the CRSP (Center for Research in Security Prices of the University of Chicago) of the NYSE (1928-2003), the AMEX (1963-2003) and the NASDAQ (1972-2003). They form, thereafter, 10 portfolios on the basis of the estimated betas and calculate their return for the eleven months which follow. They repeat this process for each year of 1928 up to 2003. They claim that, the Sharpe and Lintner model, suppose that the portfolios move according to a linear line with an intercept equal to risk free rate and a slope which is equal to the difference between the expected return on the market portfolio and that of the risk free rate. However, their study, and in agreement with the previous ones, confirms that the relation between the expected return on assets and their betas is much flatter than the prediction of the CAPM. Indeed, the results indicate that the expected return of portfolios having relatively lower beta are too high whereas expected return of those with higher beta is too low. Moreover, these authors indicate that even if the risk premium is lower than what the CAPM predicts, the relation between the expected return and beta is almost linear. This latter result, confirms the CAPM of Black which assumes that only the beta premium is positive. This means, analogically, that only the market risk is rewarded by a higher return. In order to test for the consistency of the CAPM with the economic reality, Thierry and Pim (2004) use monthly return of stocks from the NYSE, NASDAQ, and AMEX for the period that spans from 1926-2002. The one -month US Treasury bill is used as a proxy for the risk free rate, The CRPS total return index which is a value-weighted average of all US stocks included in this study is used as a proxy for the market portfolio. They sort stocks into ten deciles portfolios on the basis of historical 60 months. They afterwards, calculate for the following 12 months their value weighted returns. They obtain, subsequently, 100 beta-size portfolios. The results from the time series regression indicate, firstly, that the intercepts are statistically indifferent from zero. Secondly, it is found that the betas' coefficients are all positive. Furthermore, in order to check the robustness of the model, the authors split the whole sample into sub-samples of equal length (432 months). The results indicate, also, that for all the periods studied the intercepts are statistically not different from zero except for the last period. In his empirical study, Blake T (2005) works on monthly stocks return on 20 stocks within the S&P 500 index during January 1995-December 2004. The S&P 500 index is used as the market portfolio and the 3-month Treasury bill in the Secondary Market as the risk free rate. His methodology can be summarized as follows; the excess return on each stock is regressed against the market excess return. The excess return is taken as the sample average of each stock and the market as well. After estimating of the betas, these values are used to verify the validity of the CAPM. The coefficient of beta is estimated by regressing estimated expected excess stock returns on the estimates of beta and the regression include intercept and the residual squared so as to measure the non systematic risk. The results confirm the validity of the CAPM through its three major assumptions. In fact, the null hypothesis for the constant term is not rejected. Moreover, the systematic risk coefficient is positive and statistically significant. Finally, the null hypothesis for the residual squared coefficient is not rejected. Hence, he concludes that none of the three necessary conditions for a valid model were rejected at the 95% level. So as to estimate for the validity of the CAPM, Don Galagedera (2005) works on a Sample period from January 1995 to December 2004. This sample contains monthly data from emerging markets represented by 27 countries which are 10 Asian, 7 Latin American and 10 African, Middle-Eastern and European (Argentina, Brazil, Chile, China, Columbia, Czech, Egypt, Hungary, India, Indonesia, Israel, Jordan, Korea, Malaysia, Mexico, Morocco, Pakistan, Peru, Philippines, Poland, Russia, South Africa, Sri Lanka, Taiwan, Thailand, Turkey, and Venezuela ). To proxy for the market index, the world index available in the MSCI database is used and the proxy for the risk-free rate is the 10-year US Treasury bond rate. The results indicate that the CAPM, compared to the downside beta CAPM, offers roughly the same estimates and the same performance. Fama and French (2005) investigate the ability of the CAPM to explain the value premium in the US context during the period of 1926 to 2004 and include in their sample the NYSE, the AMEX and the NASDAQ stocks. They construct portfolios on the basis of the size, and the book-to-market. The size premium is the simple average of the returns on the three small stock portfolios minus the average of the returns on the three big stock portfolios. The value premium is the simple average of the returns on the two value portfolios minus the average of the returns on the two growth portfolios. They afterwards divide the whole period into two sub-periods which are respectively; 1926-1963 and 1963-2004. Then, at the end of June of each year, the authors form 25 portfolios through the intersection of independent sorts of NYSE, AMEX, and NASDAQ stocks into five size groups and five book-to-market groups or Earning to price groups. Finally the size premium or the value premium of each of the six portfolios sorted by size and book-to-market is regressed against the market excess return. The results show that for the size portfolios, when considering the whole period, the F-statistics shows that the intercepts are jointly different from zero. Moreover, it's found that the market premium is for all portfolios positive and statistically significant except. However, the explanatory power is relatively too low. Then, for the first sub-period the intercepts are not statistically significant which means that the intercepts are close to zero. Moreover the beta coefficient is for all portfolios positive and statistically significant with a relatively higher explanatory power. Finally, for the last sub-period, the results are not supportive for the CAPM. In fact, for almost all portfolios, the beta is found to be negative and statistically significant. For further rejection, all the intercepts from the regression are statistically different from zero. Erie Febrian and Aldrin Herwany (2007) test the validity of the CAPM in Jakarta Stock Exhange. They work involve three different periods which are respectively; the pre-crisis period (1992-1997), the crisis period (1997-2001), and the post-crisis period (2001-2007), and contains monthly data of all listed stocks in the Indonesian stock exchange. To proxy for the risk free rate, the authors make use of the 1-month Indonesian central bank rate (BI rate). In order to estimate for the model, the authors use two approaches documented in the literature; it is bout particularly the times series regression and the cross sectional regression. The first one tests the relationship between variables in one time period, and the latter focuses on the observation of the relationship in various periods. To check the validity of the CAPM, they use manifold models used by; Sharpe and Linter (1965), Black Jensen and scholes (1972), Fama and Macbeth (1973), Fama and French (1992), and Cheng Roll and Ross (1986). Using the Fama and French approach (1992) together with the cross sectional regression, the authors find that for all the three periods studied tests of the CAPM lead to the same conclusions. Indeed, the results indicate that the intercept from the cross sectional regression is negative and statistically significant which is a violation to the CAPM's assumption which pretends that the risk premium is the only risk factor. Nevertheless, the beta coefficient is found to be positive and statistically significant and that the R-squared is relatively high, which is a great support for the CAPM. However, when using the Fama and Macbeth method (1973), the results are somehow different. In fact, the intercept is almost always not significant, unless for the latter period which stems for a negative and a significant intercept. The associated beta coefficient is only significant during the last period and the related sign is negative which contradicts the CAPM. Furthermore, the coefficient from the square of beta is neither positive nor significant at all but in the last period (negative coefficient) which means that the relation between the risk premium and the excess return is, in a way, linear. Finally, the results indicate that the coefficient from the residuals is negative and statistically significant for the first and for the last period. Hence, basing on the above results, it is difficult to infer whether the CAPM is valid or not since the results are very sensitive to the method used for the estimate of the model. Ivo Welch (2007) in his test of the CAPM works on daily data during the period of 1962-2007. The Federal Funds overnight rate is used as the risk free rate, the short term rate divided by 255 is used as the daily rate of return, the market portfolio is estimated from the daily S&P500 data, and The Long-Term Rate is the 10-year Treasury bond. Using the time series regression, the difference between the 10 years treasuries and the overnight return is regressed against the market excess return. The results indicate that the intercept is close to zero. In addition to that, the beta coefficient is positive and statistically different from zero. These results are consistent with the CAPM. Working on daily data of stocks listed on the thirty components of the Dow Jones Index and the SP 500 index for five years, one year, or a half year (180 days) of daily returns. While using the cross section regression, the authors find that for all periods used, the results are far from rejecting the CAPM. In fact, the R-squared is 0.28, which indicates, subsequently that the marker beta is a good estimator for the expected return. Michael. D (2008) studies the ability of the CAPM to explain the reward-risk relationship in the Australian context. The sample of the study covers monthly data on stocks listed in the Australian stock exchange for the period of January 1974- December 2004. The equally value weighted ASE indices is used to proxy for the market portfolio. He forms portfolios on the basis of the betas, i.e. stocks are ranged each month on the basis of their estimated betas. The main interesting result from this study is that which show that when the highest beta portfolio and the lowest beta portfolio are removed, it's found that portfolios' returns tend to increase with beta. Hence, the author concludes that the beta is an appropriate measure of risk in the Australian stock exchange. Simon.G.M and Ashley Olson (XXXX) try to investigate the reliability of the CAPM's beta as an indicator of risk. For this end, they work on a one year sample data from November 2005 to November 2006. Their sample includes 288 publicly traded companies and the S&P 500 index (used as the market portfolio). In order to look into the risk return relationship, the authors group stocks into three portfolios on the basis of their betas, i.e. portfolios respectively; with low beta (about 0.5), market beta (about 1), and high beta (around 2). The results point to the rejection of the CAPM. In fact, it was found that the assumption according to which the beta is an appropriate measure of risk s rejected. For example, according to the results obtained, if an investor takes the highest beta portfolio, the chance that this risk be rewarded is about only 11%. In order to test for the validity of the CAPM, Arduino Cagnetti (XXXX) works on monthly returns (end of the month returns) of 30 shares on the Italian stock market for during the period of January 1990 to June 2001. To proxy for the market portfolio, he uses the Mib30 market index and the Italian official discount rate as a proxy for the risk-free rate. Then, he divided the whole period into two sub-periods of 5 years each one. In order to test for the CAPM, the author uses two step procedures. In fact, the first step consists of employing the time series to estimate for the betas of the shares. While the second step involves the regression of the sample mean return on the betas. Then he runs a cross sectional regression while using the average returns for each period as dependant variable and the estimated betas as independent variables. The results indicate that for the second sub-period the beta displays a high significance in explaining returns with a relatively strong explanatory power. However, for the first sub-period and during the whole 10 years, this variable is not significant at all. In their study, Pablo and Roberto (2007)examine the validity of the CAPM with reference to the reward beta model and the three factor model. It is praiseworthy to note, in this level, that our investigation for the article is limited only to the study of the CAPM. To reach their objective, the authors work on a sample covering the period that goes from July 1967 and ends in December 2006 and consists of all data on North American stock markets. That is the NYSE, AMEX, and the NASDAQ stock exchanges with all available monthly data on stocks. The one month Treasury bond is used to proxy for the risk free rate and the CRSP index as a proxy for the market portfolio (portfolio built with all the NYSE, AMEX and NASDAQ stocks weighted with the market value). The methodology pursued along this study is the classic two step methodology, i.e. the times series regression to estimate for ex-ante CAPM's beta, and then the cross section regression using the betas and the sensibilities of the estimated factors as explanatory variables in explaining expected return. To examine the validity of the CAPM, tests were run on the Fama and French's portfolios (6 portfolios formed on the basis of the size and the book to market). The results were represented through two aspects whether or not the intercept is taken into account. On the one hand, when considering the CAPM with intercept, it is found that the latter is statistically different from zero which is a violation to the CAPM's assumptions. Moreover, results indicate that the risk premium is negative which is in contradiction with the Sharpe and Linter model. On the other hand, when removing the intercept, the results are rather supportive for the model. In fact, the risk premium appears to be positive and highly significant which goes in line with the CAPM. Nevertheless, the explanatory power in both cases is too low. In his comparative study, Bornholt (2007) investigates the validity of three assets pricing models, it is about particularly, the CAPM, the three factor model and the reward beta model. It is trustworthy to mention again that in this literature we will be interested in the CAPM. Working on monthly portfolios' return constructed according to the Fama and French methodology (1992) for the period that spans from July 1963 to December 2003. The one month Treasury bill is used as a proxy for the risk free rate and the GRSP value weighted index of all NYSE, AMEX, and NASDAQ stocks as a proxy for the market portfolio. The author uses the time series regression in order to estimate for the portfolios' betas (from 1963 to 1990) and then the cross section regression using the betas estimated as explanatory variables in expected return (1991 to 2003). The results are interpreted from two sides depending on whether the estimation is done with or without the intercept. From the one side, when the intercept is included, the latter is found to be closed to zero which is in accordance with the CAPM. However, the beta coefficient is negative and not significant at all, which is in contradiction with the CAPM. This latter conclusion is enhanced further with the weak and the negative value of the R squared. From the other side, the removal of the intercept does improve the beta coefficient which is in this case positive and statistically significant. Nevertheless, the explanatory power is getting worse. Hence, this study cannot claim to the acceptance or the rejection of the CAPM, since the results are foggy and far from allowing a rigorous conclusion. Working on the French context, Najet et al. (2007) try to investigate the validity of the capital asset pricing model at different time scales. Their study sample entails daily data on 26 stocks in the CAC 40 index and covers the period that spans from January 2002 to December 2005. The CAC40 index is used as the market portfolio and the EUROBOR as the risk free rate. They consider six scales for the estimation of the CAPM which are; 2-4 days, 4-8 days dynamics, 8-16 days, 16-32 days, 32-64 days, 64-128 days, and 128-256 days. The results from the OLS regression show that the relationship between the excess return of each stock is positive and statistically significant at all scales. The explanatory power moves up when the scales get widened. Which means that the higher the frequency is the stronger the relationship will be. To further investigate the changes of the relationship over different scales, the authors use also the following intervals; 2-6, 6-12, 12-24, 24-48, 48-96, and 96-192. The results indicate that the relationship between the two variables of the model is becoming stronger as the scale increases. Nevertheless, they claim that the results obtained cannot draw conclusion about the linearity which remains ambiguous. Within the Turkish context, Gürsoy and Rejepova (2007) check the validity of the CAPM. Their sample covers the period that spans from January 1995 to December 2004 and consists of weekly data on all stocks traded in the Istanbul Stock Exchange. The whole period was divided into five six-year sub-periods. The sub-periods are divided, in their turn, into 3 sub-periods of 2 years each, which correspond to respectively; portfolio formation, beta estimation and testing periods. All periods are considered with one overlapping year. They afterwards, form 20 portfolios of 10 stocks each one on the basis of their pre-ranking betas. Then for the regression equation they use two different approaches which are; the Fama and MacBeth's traditional approach, and Pettengil et al's conditional approach (1995). With reference to the first approach, the CAPM is rejected in all directions. In fact, the results indicate that the intercept is statistically different from zero for all sub-periods except for one. In addition to that, the beta coefficient is found to be significant different from zero only in 2 sub-periods and almost in all cases is negative. Finally the R-squared is only significant in two sub-periods and beyond that it's too weak. Hence, this approach claims to the rejection of the CAPM. As for the second approach, the beta is almost always found to be significant but either positive in up-market or negative in down- market. Furthermore, the results point to a very high explanatory power for both cases. However, the assumption according to which the intercept must be close to zero is not verified for both cases, which means that the risk premium is not the only risk factor in the CAPM. The authors conclude, on the basis of the results, that the validity of the CAPM remains a questionable issue in the Turkish stock market. Robert and Janmaat (2009) examine the ability of the cross sectional and the multivariate tests of the CAPM under ideal conditions. They work on a sample made of monthly return on portfolios provided by the Kenneth French's data library. While examining the intercepts, the slopes and the R-squared, the authors reveal that these parameters are unable to inform whether the CAPM holds or not. Moreover, they claim that the positive and the statistically significant value of the beta coefficient, doesn't indicate that the CAPM is valid at all. The results indicate, also, that the value of the tested parameters, i.e. the intercepts and the slopes, is roughly the same independently whether the CAPM is true or false. In addition to that, the results from the cross sectional regression have two different implications. From the one side, they indicate that tests of the hypothesis that the slope is equal to zero are rejected in only 10% of 2000 replications, which means, subsequently, that the CAPM is dead. From the other side, it's found that tests of the hypothesis that the intercept is equal to zero are rejected in 9%. As consequent, one may think that the CAPM is well alive. As for the R squared, its value is relatively lower particularly when the CAPM is true. The authors find, also, that the tests of the hypothesis that the intercept and the slope are equal to zero differ on whether they make use of the market or the equal weighted portfolio which confirms the Roll's critique. 5. IS THE CAPM DEAD OR ALIVE? SOME RESCUE ATTEMPTS After reviewing the literature on the CAPM, it is difficult if not impossible to reach a clear conclusion about whether the CAPM is still valid or not. The assaults that tackle the CAPM's assumptions are far from being standards and the researchers versed in this field are still between defenders and offenders. Actually, while Fama and French (1992) have announced daringly the dead of the CAPM and its bare foundation, some others (see for example, Black, 1993; Kothari, Shanken, and Sloan, 1995; MacKinlay, 1995 and Conrad, Cooper, and Kaul, 2003) have attributed the findings of these authors as a result of the data mining (or snooping), the survivorship bias, and the beta estimation. Hence, the response to the above question remains a debate and one may think that the reports of the CAPM's death are somehow exaggerated since the empirical literature is very mixed. Nevertheless, this challenge in asset pricing has opened a fertile era to derive other versions of the CAPM and to test the ability of these new models in explaining returns. Consequently, three main classes of the CAPM's extensions have appeared and turn around the following approaches; the Conditional CAPM, the Downside CAPM, and the Higher-Order Co-Moment Based CAPM. 5.1. The Conditional CAPM The academic literature has mentioned two main approaches around the modeling of the conditional beta. The first approach stems for a conditional beta by allowing this latter to depend linearly on a set of pre-specified conditioning variables documented in the economic theory (see for example Shanken, 1990). There have been several evidences that go on this road of research form whose we mention explicitly among others (Jagannathan and Wang, 1996; Lewellen, 1999; Ferson and Harvey, 1999; Lettau and Ludvigson, 2001 and Avramov and Chordia, 2006). In spite of its revolutionary idea, this approach suffers from noisy estimates when applied to a large number of stocks since many parameters need to be estimated (see Ghysels, 1998). Furthermore, this approach may lead to many pricing errors even bigger than those generated by the unconditional versions (Ghysels and Jacquier, 2006). These limits are further enhanced by the fact that the set of the conditioning information is unobservable. The second non parametric approach to model the dynamic of betas is that based on purely data-driven filters. The approaches in this category include the modeling of betas as a latent autoregressive process (see Jostova and Philipov, 2005; Ang and Chen, 2007), or estimating short-window regressions (Lewellen and Nagel, 2006), or also estimating rolling regressions (Fama and French, 1997). Even if these approaches sustain to the need to specify conditioning variables, it is not clear enough through the literature how many factors are they in the cross-sectional and time variation in the market beta. In order to model the beta variation, studies have tried different modeling strategies. For instance, Jagannathan and Wang (1996) and Lettau and Ludivigson (2001) treat beta as a function of several economic state variables in a conditional CAPM. Engle, Bollerslev, and Wooldridge (1988) model the beta variation in a GARCH model. Adrian and Franzoni (2004, 2005) suggest a time-varying parameter linear regression model and use the Kalman filter to estimate the model. The following section treats these findings one by one while showing their main results. 5.1.1. Conditional Beta on Economic States ‘'If one were to take seriously the criticism that the real world is inherently dynamic, then it may be necessary to model explicitly what is missing in a static model", Jagannathan and Wang (1996), p.36. The conditional CAPM is that in which betas are allowed to vary and to be non stationary through time. This version is often used to measure risk and to predict return when the risk can change. The conditional CAPM asserts that the expected return is associated to its sensitivity to a set of changes in the state of the economy. For each state there is a market premium or a premium per unit of beta. These price factors are often the business cycle variables. The authors, who are interested in the conditional version of the CAPM, demonstrate that stocks can show large pricing errors compared to unconditional asset pricing models even when a conditional version of the CAPM holds perfectly. In what follows, a summary of the main studies versed in this field of research is presented. Jagannathan and Wang (1996) were the first to introduce the conditional CAPM in its original version. They claim that the static CAPM has been founded on two unrealistic assumptions. The first one is that according to which the betas are constant over time. While the second one is that in which the portfolio, containing all stocks, is assumed to be a good proxy for the market portfolio. The authors assert that it would be completely reasonable to allow for the betas to vary over time since the firm's betas may change depending on the economic states. Moreover, they state that the market portfolio must involve the human capital. Consequently, their model includes three different betas; the ordinary beta, the premium beta based on the market risk premium which allows for conditionality, and the labor premium which is based on the growth in labor income. Their study includes stocks of non financial firms listed in the NYSE and AMEX during the period that spans from July 1963 to December 1990. They, after that, group stocks into size portfolios and use the GRSP value weighted index as proxy for the market portfolio. For each size decile, they estimate the beta and, afterwards, class stocks into beta deciles on the basis of their pre-ranking betas. Hence, 100 portfolios are formed and their equivalent equally weighted return is calculated. The regressions are made using the Fama and Macbeth procedure (1973). The authors find that the static version of the CAPM does not hold at all. In fact, the results show a small evidence against the beta which appears to be too weak and not statistically different from zero. Moreover, the R-squared is only about 1.35%. Then, the estimation of the CAPM when taking into account the beta variation shows that the beta premium is significantly different from zero. Furthermore, the R-squared moves up to nearly 30%. Nevertheless, the estimation of the conditional CAPM with human capital indicates that this variable improves the regression. Indeed, the results show a positive and a statistically significant beta labor. In addition to that, the beta premium remains significant and the adjusted R-squared goes up to reach 60%. This is a great support for the conditional version of the CAPM. Meanwhile, one of the most important theoretical results which contributes to the survival of the CAPM is that of Dybvig and Ross (1985) and Hansen and Richard (1987) who find that the conditional version of the CAPM is adequate even if the static one is blamed. Pettengill et al. (1995) find that the conditional CAPM can explain the weak relationship between the expected return and the beta in the US stock market. They argue that the relationship between the beta and the return is conditional and must be positive during up markets and negative during down markets. These findings are further supported by the study of Fletcher (2000) who investigates the conditional relationship between beta and return in international stock markets. He finds that the relationship between the two variables is significantly positive during up markets months and significantly negative during down markets months. Another study performed by Hodoshima, Gomez and Kunimura (2000) and supports the conditional relationship between the beta and the return in the Nikkei stock market. Jean-Jacques. L, Helene Rainelli. L M, and Yannick. G (XXXX) run the same study as Jagannathan and Wang (1996) but in an international context. They work on monthly data for the period which goes from January 1995 to December 2004 related to six countries which are; Germany, Italy, France, Great Britain, United States, and Japan. For each country the MSCI index is retained as a proxy for the market portfolio and the 3 month Treasury bond as a proxy for the risk free rate. Following the same methodology as for Fama and French (1992), the authors form portfolios on the basis of the size, the book to market. Hence, for each country 12 portfolios are formed and regressed on the explanatory variables using the cross-sectional regression. The results indicate that the market beta is statistically significant only in one country (the Great Britain). However, the beta premium is found to be significant in four countries, i.e. Italy, Japan, Great Britain and the US. As for the premium labor, the results exhibit significance only in three cases (Germany, Great Britain and France). Moreover, the results stem to a very high explanatory power for all countries which is on average beyond 40%. Meanwhile, Durack et al. (2004) run the same study as Jagannathan & Wang (1996) in the Australian context. Their sample contains monthly data of all listed Australian stocks over the period of January 1980- December 2001. In order to estimate for the premium labor model, they use the value weighted stock index as the market portfolio and extract the macroeconomic data from the bureau of statistics in order to measure the beta premium and the beta labor (two variables; the first one captures the premium resulting from the change in the market premium and the second one captures the premium of the human capital). They, afterwards, sort the stocks into seven size portfolios, then, into seven further beta portfolios. Finally, 49 portfolios are formed and used for the estimate of the conditional CAPM. Using the OLS technique, the authors find that the conditional CAPM does a great job in the Australian stock market. In fact, in both cases, i.e. conditional CAPM with and without human capital, the model accounts for nearly 70% of the explanatory power. Nevertheless, the results report a little evidence towards the beta premium which is found to be, in all cases, positive but not statistically significant at any significance level. Furthermore, the beta of the premium variation is negative and statistically significant. But, unlike Jagannathan and Wang (1996), the authors find that the human capital does not improve the beta estimate which remains insignificant. Finally, they find that the intercept is found to be significantly different from zero which is a violation to the CAMP's assumptions. Campbell R. Harvey (1989), tests the CAPM while assuming that both expected return and covariances are time varying. The author uses monthly data of the New York Stock Exchange from September 1941 to December 1987. Ten portfolios are sorted by market value and rebalanced each year on the basis of this criterion. The risk free rate is the return on Treasury bill that is closed to 30 days at the end of the year t-1 and the conditional information includes the first lag on the equally weighted NYSE portfolio, the junk bond premium, a dividend yield measure, a term premium, a constant and a dummy variable for January are included in the model as well. The results indicate that the conditional covariance change over time. Moreover, it is found that the higher the return is the more important the conditional covariance will be. Nevertheless, it is found that the model with a time-varying reward to risk appears to be worse than the model with a fixed parameter. In fact, the intercepts vary so high to be able to explain the variance of the beta. Consequently, this rejects the CAPM even in this general formula. Jonathan Lewelen and Stefan Nagel (2003) study the validity of the conditional CAPM in the US context. The authors work on a sample including the period that spans from 1964 to 2001, and contains different data frequencies, i.e. daily, weekly and monthly. The data includes all NYSE, and AMEX stocks on GRSP Compustat sorted on three portfolios' type on the basis of the size, the book-to-market, and the momentum effects. The regression test is run on the excess return of all portfolios on the one month Treasury bill rate. The results indicate that the beta varies over time but that this variation is not enough to explain the pricing errors. In fact, the results show that beta cannot covary with the risk premium sufficiently in a way that can explain the alphas of the portfolios. Indeed, the alphas are found to be are high and statistically significant which is a violation to the CAPM. Treerapot Kongtoranin (2007) applies the conditional CAPM to the stock exchange of Thailand. He works on monthly data of 170 individual stocks in SET during 2000 to 2006. The author uses the SET index and the three-month Treasury bill to proxy, respectively for the market portfolio and the risk free rate. Before testing the validity of the conditional CAPM, the author classify the stocks into 10 portfolios of 17 stocks each one on the basis of their average return. They afterwards estimate for the conditional CAPM using the cross sectional regression. The beta is calculated using the ratio of covariance between the individual portfolio and the market portfolio, and the variance of the market portfolio. The covariance is determined by using the ARMA model and the variance of the market portfolio is determined by the GARCH (1, 1) model. The results indicate that the relationship between the beta and the return is negative and not statistically significant. Meanwhile, when a year period is considered, it is found that in 2000, 2004, and 2005 the beta premium is negative and statistically significant. Consequently, these results reject the CAPM which assume that the risk premium is positive. Abd.Ghafar Ismaila and Mohd Saharudin Shakranib (2003) study the ability of the conditional CAPM to generate the returns of the Islamic unit trusts in the Malaysian stock exchange. They work on weekly price data of 12 Islamic unit roots and the Sharjah index for the period that spans from 1 May 1999 until 31 July 2001. They, firstly, estimate for each unit trust the equivalent beta is estimated for the whole sample and for the two following sub-samples; 1 May 1999 - 23 June 2000, and 24 June 2000 - 31 July 2001. Then, they estimate the average beta using the conditional CAPM. The results indicate that the betas coefficients are significant and have a positive value in up markets and negative value in down markets which supports the conditional relationship for all the period studied. Moreover, it is shown that the conditional relationship is better in the down market than in the up market. In order to look at the ability of the conditional CAPM, M. Lettau and S. Ludvigson (2001) study the consumption CAPM within a conditional framework. The study sample includes the returns of 25 portfolios formed according to Fama and French (1992, 1993). These portfolios are considered as the value weighted returns for the intersection of five size portfolios and five books to market portfolios on the NYSE, AMEX, and NASDAQ stocks in COMPUSTAT. The period of study goes from July 1963 to June 1998 and contains quarterly data of all portfolios. For the estimate of the conditional CAPM, the GRSP value weighted return is considered as proxy for the market portfolio, and the cross-sectional regression methodology proposed in Fama and MacBeth (1973) is used. The results indicate that the static CAPM fails in explaining the cross-sectional of stocks return motivating by a small R-squared that is only of 1%. However, for the conditional CAPM, it was found that the scaled variable is positive and statistically significant. Moreover, the adjusted R-squared moves up to reach the 31%. Subsequently, this means that the conditional version of the CAPM performs well the static version. Goezmann et.al (2007), expand the results of the Jagannathan and Wang's study (1996). In their study, the authors assume that the market beta and the premium beta as well as the premium its self can vary across time. Their idea is based on the assumption according to which the expected return is conditional on the economic states. Furthermore, these economic states are determined through the investor expectations about the future prospect of the economy. Hence, they suppose that the expected real Gross Domestic Product (GDP) growth rate is considered as a predictive instrument to know the state of the economy. To reach their objective, the authors form 25 portfolios from the intersection of 5 five book-to-market portfolios and five size portfolios. They, afterwards, measure the beta instability risk directly through the bad and good states of the GDP. Using a two beta model and the general method of moments, the authors find that the risk premium is increasing with the book-to-marker portfolios ranges. In fact, the results indicate that the conditional market risk premium is not priced at all. Besides, when considering the conditional beta (on bad and good states), it is found that the good time sensitivity are positively associated to book to market portfolios. In addition to that, it's found that stocks whose returns are positively related to the GDP earn negative premium. Finally, the results point to a positive beta premium for value stocks which turns into negative for growth stocks. In order to test for the risk-return relationship in the Karachi stock exchange Javid A and Ahmad E (2008) study the conditional CAPM. Their sample study includes daily and monthly return of 49 companies and the KSE 100 index during the period of July 1993 to December 2004 which is divided into five overlapping intervals (1993-1997, 1994-1998, 1995-1999, 1996-2000, and 1997-2001). Using the time series regression (whole period) together with the cross section regression (sub-periods), the results point to weak evidence towards the unconditional version of the CAPM. In fact, it was found that the positive relationship between the expected return and the risk premium doesn't hold in any sub-period. Moreover, the intercepts are found to be statistically indifferent from zero in almost all cases. This weakness is further enhanced by the fact that the residues play a significant role in explaining the return. They, hence, assert that the return distribution must vary over time. Having this in mind, the authors allow for the risk premium to vary along with macroeconomic variables which are supposed to contain the business cycle information. These conditioning variables are the following; market return, call money rate, term structure, inflation rate, foreign exchange rate, growth in industrial production, growth in real consumption, and growth in oil prices. Applying the conditional version of the CAPM, the authors find that the risk premium is positive for roughly all sub-periods (1993-1995, 1993-1998, 1999-2004, 2002-2004), and for the overall sample period 1993-2004. In addition to that, beta coefficient is statistically significant which indicate that investors get a compensation for bearing risk. Nevertheless, the results show that for all sub-periods and for the whole period the intercepts are statistically different from zero. Within an international context, Fujimoto A, and Watanabe M (2005) study the time variation in the risk premium of the value stocks and the growth ones as well. The authors assume that the CAPM holds each period allowing for the beta and for the market premium to vary through time. Their sample includes data set of 13 countries which represent, roughly, 85% of the world's market capitalization. These countries are the following; Australia, Belgium, Canada, France, Germany, Italy, Japan, Netherlands, Singapore, Sweden, Switzerland, UK and the US. For each country, monthly stock returns, market capitalizations, market-to-book values, as well as value-weighted market returns for the non-US are used for the period that ends in 2004 and begin in 1963 for the US, 1965 for the UK, 1982 for Sweden, and in 1973 for all the other countries. They, firstly, form portfolios as the intersection of the book-to-market portfolios and those of size. Then, the authors suppose that the market premium and the betas of the portfolios are presumed to be a function of the dividend yield, the short rate, the term spread, and the default spread. The cross section regression indicates, firstly, that the beta sensitivity is positive and statistically significant in 9 countries for value stocks. Furthermore, it is found also that growth stocks exhibit negative beta premium sensitivity. Moreover, for the long-short portfolio and the size/book-to market portfolios the beta sensitivity is for almost all countries positive and statistically significant. Finally, it is found that the beta premium sensitivity cannot explain the whole variation in value premium in international markets. From their part, Michael R. Gibbons and Wayne.Ferson (1985), also relax the assumption related to the stagnation of the risk premium. Hence, the expected return is conditional on a set of information variables. They use daily return of common stocks composing the Dow Jones 30 for the period that goes from1962 to 1980. So the daily stocks returns are regressed against the lagged stock index, the Monday dummy, and an intercept. The results show that the Monday dummy and the lagged GRSP value-weighted index, are highly significant. Nevertheless, the coefficient of determination is beyond 5%. Consequently, they conclude that their study is robust to missing information. Ferson and Harvey (1999), try to look into the conditioning variables and their impact on the cross section of stock returns. The sample of the study covers the period of 1963-1994 and contains monthly data of the US common stock portfolios. The lagged instrumental variables used in here are respectively; the difference between the one-month lagged returns of a three-month and a one-month treasury bills, the dividend yield of the standard and Poors 500 index, the spread between Moody's Baa and Aaa corporate bond yields, the spread between ten-year and one-year treasury bond yield. The regression produces significant values for roughly all variables. The conditional version implies, also, that the intercepts are time varying which means that they are not zero. They conclude, hence, that the conditional version is not valid. Wayne E. Ferson and Andrew F. Siegel (2007) in their trial to investigate the portfolio efficiency with conditioning information, test the conditional version of the CAPM. For this objective, they use a standard set of lagged variables to model the conditioning information and a sample that ranges from 1963 to 1994. The instrumental variables used are the following; the lagged value of a one-month Treasury bill yield, the dividend yield of the market index, the spread between Moody's Baa and Aaa corporate bond yields, the spread between ten-year and one-year constant maturity Treasury bond yields, and the difference between the one-month lagged returns of a three-month and a one-month Treasury bill. They, after that, group stocks into two classes. The former is a sorting according to the Twenty five value-weighted industry portfolios. The latter, is a classification with reference to their prior equity market capitalization, and separately into five groups on the basis of their ratios of book value to market value. The results indicate that the conditioning variables do not improve very well the estimation. Nevertheless, during all periods these variables exhibit high and statistically significant coefficients. 5.1.2. The Conditional CAPM: data-driven filters Unlike the first approach which is based on pre-specified conditioning information, the data-driven filters approach is based on purely empirical bases. In fact, the data used is the source of factors and it is the only responsible of the beta variation. This means that one do not require a well defined variables, rather one allows for the data to define these variables. Some papers interested in this approach are discussed next. Always in the same field of research, Nagel S and Singelton K (2009) try to investigate the conditional version of assets pricing models. Their methodology is somehow different from the others as for the moment when they use the Stochastic Discount Factor (SDF) as a conditionally affine function of a set of priced risk factors. Their sample study includes quarterly data on three instruments variables over the period of 1952-2006. These variables are; the consumption-wealth ratio of Lettau and Ludvigson (2001a), the corporate bond spread as in Jagannathan and Wang (1996) or the labor income-consumption ratio of Santos and Veronesi (2006). They, next, form portfolios sorted by size and book-to-market ratio as well. Applying the time varying SDF, the authors find that when the two conditioning information, i.e. the consumption-wealth ratio, and the corporate bond spread, is incorporated in the estimation; the model fails in explaining the cross-sectional of stocks returns. They conclude, hence, that the conditional asset pricing models do not play a good role in improving the pricing accuracy. In the US context, Huang P and Hueng J (XXXX) investigate the risk-return relationship in a time-varying beta model according to the view of Pettengill, Sundaram, and Mathur (1995). For this objective, they use a sample which includes daily returns of all stocks listed in the S&P 500 index over the period of November 1987-December 2003. For the estimation of the time-varying beta model, they make use of the Adaptive Least Squares with Kalman Foundations (ALSKF) proposed by McCulloch (2006). The results support the Pettengil et al. model (1995). In fact, it is found that the beta premium is positive and statistically significant in up markets, whereas the risk-return relationship is found to be negative and statistically significant in down markets. Moreover, the results exhibit that none of the intercept is statistically different from zero. Finally, while using the ALSKF, the authors find that the estimation is more precise than that obtained via the OLS regression. The study of Demos A and Pariss S (1998) aims at investigating the validity of the validity of the conditional CAPM within the Athens Stock Exchange. For this end, the authors work on a sample covering fortnightly returns of the nine Athens Stock Exchange (ASE) sectorial indices and the Value weighted index from January 1985 to June 1997. Then, to model the idiosyncratic conditional variances, the ARCH type process is used. The results from the OLS regression indicate that the static version of the CAPM is doing a good job. In fact, the beta coefficient is in all cases positive and statistically significant. This result does not depend on whether an intercept is included in the regression. This is because the intercepts only in three out of nine cases are found to be statistically different from zero. They after that, consider that Value Weighted Index follows a GQARCH(1,1)-M process, the authors find similar results to the static CAPM. In fact, all the estimated betas are found to be positive and statistically different from zero. Nevertheless, the authors find that within the CAPM in both static and dynamic versions, the idiosyncratic risk is priced. Moreover, it's found that the intercepts are jointly statistically different from zero. Basing on the pre-mentioned result the authors claim to the invalidity of the model. They consider that the potential cause of failure is the use of the value weighted index rather than the equally weighted one. Consequently, they repeat the same procedure while using the equally weighted index. Firstly, with respect to the static version, the results indicate that all intercepts are statistically indifferent from zero except for the bank sector and are jointly different from zero. Secondly, as for the dynamic version of the CAPM, the results point to a very supportive result. Indeed, the betas coefficients not only have the right sign but also are highly significant. Finally, the most supportive result is that which shows that the idiosyncratic risk is not priced. In their study, Basu D and Stremme A (2007), study the conditional CAPM while assuming time variation in risk premium. This time variation is captured by a non linear function of asset of variables related to the business cycle. In order to model the time-varying factor risk premium, the authors use the approach of Hansen and Richard (1987). This approach permits to construct a candidate stochastic discount factor given as an affine transformation of the market risk factor. Then, the coefficients from this transformation are non-linear of the conditioning variables sensed to capture the time-variation of the risk premium. They work on monthly data of different kinds of portfolios, i.e. the portfolios sorted on CAPM beta (stocks relative to the S&P 500) over the period of 1980-2004, the momentum portfolios over the period of 1961-1999, the book-to-market portfolios over the 1961-1999 period, and finally the 25 and 100 portfolios sorted by size and book-to-market over the 1963-2004 and the 1963-1990 periods. As for the conditioning variables, the authors make use of the following instruments; the one-month Treasury Bill rate, the term spread (the difference in yield between the 10-year and the one-year Treasury bond), the credit spread (the difference in 10-year yield between the AAA-rated corporate and the corresponding government bond), and the convexity the of the yield curve (the 5-year yield minus the sum of the 10-year and 1-year yields). The results point to a weak evidence towards the static CAPM. In fact, it is found that this latter account only for 1.6% of the returns cross sectional variation. Moreover, the expected return for all studied portfolios has the U-shape which, subsequently, contradicts the CAPM's assumptions. However, when the conditional version is set into play, the results are somehow surprising. Definitely, the scaled version of the CAPM captures 60% of the cross-sectional variation. Furthermore, this model predicts relatively better the expected return, since it accounts for lower errors for the extreme as well as middle portfolios. Finally, the scaled version explains better the risk premium of all portfolios with comparison to the static version. Tobias A and Franzoni F (2008) introduce the unobservable long-run changes to the conditional CAPM as a risk factor. They, hence, propose to model the conditional betas using the Kalman filter since investors are supposed to learn the long level of factor loading through the observation of the realized return. For this reason, they suppose that the betas change overtime following a mean-reverting process. For this aim, they work on quarterly data of all stocks listed in the NYSE, AMEX, and NASDAQ for the period that spans from 1963 to 2004. They, afterwards, group stocks into 25portfolios basing on the size and the book-to-market criterion. As for the conditioning variables, they make use of value-weighted market portfolio, the term spread, the value spread, and the CAY variable documented Lettau and Ludvigson (2001). Then, in order to extract for the filtered betas, they apply the Kalman Filter. The times series test on size and book-to-market portfolios indicates that the introduction of the learning process into the conditional CAPM contributes to the decrease of the pricing errors. Moreover, when the learning is not included in the conditional CAPM, the results point to a rejection. In the interim, Jon A. Christopherson, Wayne E. Ferson and Andrew L. Turner (1999), try to find out the effect of the conditional alphas and betas on the performance evaluation of portfolios. They assert that the betas and alphas move together with a set of conditioning information variables. They find that the excess returns are partially predictable through the information variables. The results point, also, to statistically insignificant alphas which is consistent with the CAPM's predictions. In the same path of research, Wayne E. Ferson, Shmuel Kandel, and Robert F. Stambaugh (1987), apply the conditional version of the CAPM. They assume that the market beta and the risk premium vary overtime. They work on weekly data from the NYSE over the period of 1963-1982 which includes the return of ten common portfolios sorted on equity capitalization. The results suggest that the single factor model is not rejected when the risk premium is allowed to vary overtime and when the risk related to that risk premium is not constrained to be equal to market betas. In the same way, Paskalis Glabadanidis (2008), studies the dynamic asset pricing models while assuming that both the factor loading and the idiosyncratic risk are time varying. In order to model the time variation in the idiosyncratic risk of the one factor model, he uses the multivariate GARCH model. Their main objective relies on the fact that the risk return relationship should contain the proper adjustment which may account for serial autocorrelations in volatility and time variation in the return distribution. His study is run on monthly return of 25 size and book-to-market portfolios and 30 industry ones as well for the period of 1963-1993. The results indicate that the dynamic CAPM could reduce the pricing errors. However, the results indicate that the null intercept hypothesis cannot be rejected at any significance level. 5.2. The Downside Approach “A man who seeks advice about his actions will not be grateful for the suggestion that he maximize his expected utility.” Roy (1952) 5.2.1. From the Mean-Variance to the Downside Approach The mean-variance approach lies on the fact that the variance is an appropriate measure of risk. This latter assumption is founded upon at least one of the following conditions. Either, the investor's utility function is quadratic, or the portfolios' returns are jointly normally distributed. Subsequently, the optimal portfolio chosen based on the criterion of the mean variance would be the same as that which maximizes the investor's utility function. Nevertheless, the adequacy of the quadratic utility function is tackled as for the moment when the investor's risk aversion would be an increasing function of his wealth, whereas the opposite is completely possible. Furthermore, the normal distribution of the return is criticized since the data may exhibit high frequency such as skewness (Leland, 1999; Harvey and Siddique, 2000 and Chen, Hong, and Stein, 2001), or kurtosis (see for example Bekaert, Erb, Harvey, and Viskanta, 1998 and Estrada, 2001c). Levy and Markowitz (1979) find that the mean-variance behavior is a good approximation to the expected utility. In fact, they show that the integration of the skewness or the kurtosis or even the both worsens the approximation to the expected utility. The credibility of the variance as a measure of risk is valid only in the case of symmetric distribution of the return. Then, it is reliable only in the case of normal distribution. Moreover, the beta which is the measure of risk according to the mean-variance approach suffers from diverse critics (discussed in the first and the second section of the chapter). Meanwhile, Brunel (2004), states that the mean-variance criterion is not able to generate a successful allocation of wealth given that investors, in this case, do not consider the higher statistical moment issues. For that reason, choices have to be made on other parameters of the return distribution such as skewness or kurtosis. Skewness preference indicates that investors allocate more importance to downside risk than to upside risk. From these critics, one may think that the failure of the traditional CAPM comes from its ignorance of the extra reward prime required by investors in the bear markets. In fact, by intuition investors would require higher return for holding assets positively correlated with the market in distress periods and a lower return for holding assets negatively correlated with the market in bear periods. Consequently, upside and downside periods are not treated symmetrically, from where the birth of the semi-deviation or also the semi-variance approach. The concept of the semi-variance was firstly introduced by Markowitz (1959) and was later refined by Hogan and Warren (1974) and Bawa and Lindenberg (1977). This approach preserves the same characteristics as the regular CAPM with the only difference in the risk measures. In fact, while the former uses the semi-variance and the downside beta, the latter uses the variance and the regular beta. In the particular case where the returns are symmetrically distributed, the downside beta is equal to the regular beta. However, for asymmetrical distribution the two models diverge largely. The standard deviation identifies the risk related to the volatility of the return, but it does not make a distinction between upside changes and downside changes. In practice, the separation between these two aspects is though important. In fact, if the investor is risk averse, then he will be averse to downside volatility and accept gladly the upside volatility. So the risk occurs when the wrong scenario is put into play. The semi-variance is more plausible than the variance as a measure of risk. Indeed, the semi-deviation accounts for the downside risk that investors want to prevent contrary to the upside risk which has the welcome. In a nutshell, the semi-variance is an adequate measure of risk for a risk averse investor. 5.2.2. A Brief History on the Downside Risk Measures In order to understand the contribution and the concept of the downside risk, it is important to study the history of its development. The purpose of this section is to review the measures of the downside risk and to clarify its major innovation Along with the academic literature, there have been two major measures that have been commonly used; it is about the semi-variance and the lower partial moment. All of these measures were tools to develop the portfolio theory and to determine the efficient choice among portfolios for a risk averse investor. The first paper appeared in the field of finance and which was interested in the downside risk theory was that of Markowitz (1952). This author develops a theory that uses the mean return and the variance and covariance to construct the efficient frontier in where one may find all portfolios that maximizes the return for a given risk level or that minimizes the risk for a given return level. Hence, the investor should make a risk-return tradeoff according to his utility function. Nevertheless, due to the human being nature which is not obvious, it's difficult to determine a common utility function. The second paper published in this field of research was that of Roy (1952).this latter states that the creation of mathematical utility function for an investor is very difficult. Consequently, an investor would not be satisfied for simply maximizing his expected utility. He suggests, for that reason another measure of risk which he calls the safety first technique. This measure suggests that an investor would prefer the safety of his wealth and chooses some minimum acceptable return that conserves the principal. The minimum acceptable return is called, according to Roy (1952), the disaster level or also the target return beyond which we would not accept the risk. That is why; the optimal choice of an investor will be that which has the smallest probability of going under the disaster level. He develops, thus, the reward to variability ratio which, for a given investor, minimizes the probability of the portfolio to go under the target return level. After that, Markowitz (1959) has admitted the Roy's approach and the importance of the downside risk. He states that the downside risk is crucial for portfolio choice for at least two reasons; first because the return distribution is far form being normal. Second, because only the downside risk is pertinent for the investor. He proposes, therefore, two measures of the downside risk; the below-mean semi variance and the below-target semi variance. Both of them use only the return below the mean or the target return. Since that, many researchers have explored the downside measures in their study and have demonstrated the superiority of these risk measures over the variance (see for example; Quirk and Saposnik, 1962; Mao, 1970; Balzer, 1994 and Sortino and Price, 1994 among others). Meanwhile, Klemkosky (1973) and Ang and Chua (1979) have demonstrated the plausibility of the below-target semi variance approach as a tool in evaluating the mutual fund performance. Hogan and Warren (1974) have developed a below- target capital asset pricing model which is useful in the case where the return distribution is non normal and asymmetric. The development of the downside risk is pulled along with the emergence of the low partial moment measure introduced by Bawa (1975) and Fishburn (1977).Then, Nantell and Price (1979), and Harlow and Rao (1989) have suggested another version of the downside CAPM known since as the lower partial moment CAPM. The lower partial moment as a risk measure developed for the first time by Bawa (1975) and Fishburn (1977) which moves the constraint from having only one utility function and provide a whole rainbow of utility functions. Moreover, it describes the risk in terms of risk tolerance. Another support for the downside risk provided by Roy (1952), Markowitz (1959), Swalm (1966), and Mao (1970) who demonstrated that the investors are not concerned with the above-target returns. From this, the semi variance is more practical in evaluating risk in investment and financial strategies. 5.2.3. Background on the Downside CAPM Over the last decade, extensive empirical literature had been carried out to investigate the downside approach as a risk measure. Indeed, taking as a starting point the failure of the CAPM's beta in representing risk, several researchers have tried to improve the relationship risk return and to fill the gaps of its limitations with reference to the market model. In order to obviate these limitations, Hogan and Warren (1974) and Bawa and Linderberg (1977) put forward the use of the downside risk rather than the variance as a risk measure and developed a MLPM-CAPM, which is a model that does not rely on the CAPM's assumptions. Both studies sustained that the MLPM-CAPM model outperforms the CAPM at least on theoretical grounds. Harlow and Rao (1989) improved the MLPM-CAPM model and introduced a more general model, which is known as the Generalised Mean-Lower Partial Moment CAPM. This is a MLPM-CAPM model for any arbitrary benchmark return. Particularly, their empirical results suggest the use of the generalized MLPM-CAPM model, since no evidence goes in support for traditional CAPM. The other caveat from the latter study is that target return should equal to the mean of the assets' returns rather than the risk-free rate. In this road of research, we note, particularly, Leland (1999) who investigates the risk and the performance measures for portfolios with asymmetrical return distribution. This author criticizes the plausibility of ‘'alpha'' and the ‘'Sharpe ratio'' to evaluate portfolios' performance, and suggest the use of the downside risk approach. He proposes, hence, another risk measure which differs from the CAPM beta, particularly, when the assets' return or that of the portfolio is assumed to be non linear in the market return. Estrada (2002) in her seminal paper evaluates the mean-semi variance behavior in the sense that it yields a utility level similar to the investor's expected utility. She divided the whole world on three markets; all markets, developed markets and the emerging markets. She finds that the standard deviation is an implausible risk measure and suggests in turn the semi-deviation as a better alternative. The results indicate, also, that the mean-semi variance behavior outperforms the mean-variance behavior in emerging markets and in the whole sample of all markets. Then, using the J-test of Davidson and MacKinnon (1981), the author finds that none of the two approximations does better than the other in explaining the variability of the expected utility. However, it is shown that the mean-semivariance approach outperforms the other in the case of the negative exponential utility function. The author reports, also, that MSB is not only consistent with the maximization of expected utility but also with the maximization of the utility of expected compound return. She ended, finally, her article with an inquiry of whether the downside beta can be used in a one factor model as an alternative to the traditional beta. She prepares, hence, the area to the next paper treating the CAPM within a downside framework. In the same year, Estrada investigates the downside CAPM within the emerging markets context. The downside CAPM replaces the original beta by the downside beta. This beta is defined as the ratio of the cosemivariance to the market's semivariance. She works on a sample that covers the entire Morgan Stanley Capital Indices database of emerging markets. The data contains monthly returns on 27 emerging markets for various sample periods, some of the data begins at January 1988 and some others start later. The author demonstrates, while making use of the average monthly return for the whole sample, that both the beta and the downside beta are significant in generating returns), and that the latter explains better the average return witnessed by a high explanatory power. However, when considering the two risk measures into one model, only the downside beta is found to be significant (cross section regression). The results indicate, in addition to that, that the returns are more sensitive to the changes in the downside risk. Then, the author compares the performance of the CAPM and the downside CAPM. The results support the downside CAPM since the downside beta explains roughly 55% of the returns variability in emerging markets. As a conclusion, the author mentions the plausibility of the mean-semivariance behavior in explaining the return on the sample of markets studied. For the meantime, Thierry Post and Pim Vliet (2004) investigate the downside risk and the CAPM. In order to assign weights to the market's portfolios return, the authors use the pricing Kernel. Their samples includes the ordinary common US stocks listed on the New York Stock Exchange (NYSE), American Stock Exchange (AMEX), and NASDAQ markets and uses a monthly frequency for the period of 1926-2002. Portfolios are sorted according to their betas and downside betas for the previous 60 months and the average return is then computed for the following next 12 months. For further discussion, the authors control also for the size and the momentum (portfolios formed on the basis of the size as in Fama and French (1992) and the momentum). The results indicate that the downside betas are higher than the regular betas. Furthermore, it is found that the downside betas decrease the pricing errors. They, afterwards, apply a double sorting routine, in order to distinguish the effect of the two betas. Hence, they first sort stocks on quintile portfolios based on regular beta and then divide each of those portfolios into five portfolios based on downside beta and the opposite procedure is done also. Consequently, 25 portfolios is constructed and then regressed separately against the regular beta and the downside beta. The results from the regression indicate that the average return is positively associated to the downside beta within each regular beta quintile. Nevertheless, the relation between the average return and the regular beta tends to fade up. Finally, Thierry Post and Pim Vliet (2004) assert that the mean-semivariance CAPM strongly outstrips the mean-variance CAPM and that the downside risk is a better risk measure both theoretically and empirically. In the same way, Thierry Post, Pim Vliet, and Simon Lansdorp (2009) look into the downside beta and its ability to derive return. They use monthly stock returns from the GRSP at the University of Chicago and select the ordinary common US stocks listed on the NYSE, AMEX, and NASDAQ for the period that spans from January 1926 to December 2007. They, moreover, investigate the role of the downside beta for various sub-samples which are; 1931-1949, 1950-1969, 1970-1988, and 1989-2007 and use the regular beta portfolio and the downside beta portfolio as a benchmark. After that, the authors carry out several portfolios' classification, first on the basis of the regular beta, then on downside beta, on regular beta first and then on three definitions of downside betas (semivariance beta, ARM beta, and downside covariance beta.), and even on one downside beta then on the other definitions of downside betas. They obtain in sum 12 sets of double sorted beta portfolios. They, also, control for the size, the book-to-market, the momentum, the co-skewness and the total volatility. Thierry P et al. (2009) find that the downside beta is more pertinent to investors than regular beta and that the downside beta measured by the semivariance is more plausible than that determined through the other definitions. Furthermore, it is shown that the downside covariance beta does not yield a better performance than the regular beta (mean spread of only 5 or 6 basis points). As for the other characteristics per se the size, the value and the momentum, the authors make use of only the semivariance measure since it has shown a great dominance over the other measures. In here, the results indicate that the difference between the regular beta and downside beta cannot account for the omitted stock characteristics. In a nutshell, the authors sustain, while including the other characteristics into the regression, that the significance of the downside beta remains higher and outperforms all the other betas in each of the sub-sample and notably in most recently years. This latter result corroborates the importance of the semi-variance as a risk measure and the superiority of the downside CAPM over the traditional one. Ang, Chen and Xing (2005) in their attempt to investigate the downside risk, use data from the Center for Research in Security Prices (CRSP) to construct portfolios of stocks sorted by various characteristics of returns. It is about, particularly, ordinary common stocks listed on NYSE, AMEX and NASDAQ during the period that spans from July 3rd, 1962 to December 31st, 2001. Using the Fama and Macbeth (1973) regression, the authors examine separately the downside and the upside component of beta and find that the downside risk is priced. In fact, the downside risk coefficient is found to be positive and highly significant. Nevertheless, the upside beta coefficient is negative which goes in the same current as the previous literature investigating the availability of the CAPM's beta. They show that the downside risk premium is always positive roughly about 6% per annum and statistically significant. They find also that this positive significant premium remains even when controlling for other firm characteristics and risk characteristics. In opposite, the upside premium changes its sign to turn into negative when considering the other characteristics. Likewise, Olmo (2007) finds, while performing his study on a number of UK sectoral indices, that both the CAPM beta and the downside beta are pricing factors for risky assets. He finds, also, that stocks which co-vary with the market in downturn periods generate higher return than that predicted by the CAPM. Contrary, stocks that are negatively correlated with the market in bad times are found to have lower return. The major result of this study is that the sectors which are indifferent to bad states changes and belong generally to the safe sectors seem to be not priced by downside beta. Similarly, Diana Abu-Ghunmi (2008) explores the downside risk in a conditional framework within the U.K context. Her sample includes all common stocks traded on the London Stock Exchange and the FTSE index from July 1981 to December 2005. They also use the coincident index to split their sample into expansion and recession periods. The portfolio's formation is done upon monthly return and based on three main risk measures per se; the beta, the upside beta, and the downside beta. They, afterwards, run the Fama-MacBeth (1973) cross sectional regression of excess returns on realized betas to examine the downside risk- return relation using individual stocks. The results point to a positive and a significant premium between the expected return and the unconditional downside risk. They note, as well, that conditioning the risk return relationship on the state of the world contributes to increase monotonically the relationship between the expected return and the downside beta during expansion periods. However, in recession periods they find no evidence between the return and the downside beta or the CAPM's beta. They conclude that downside beta plays a major role in pricing in pricing small and value stocks but not large and growth stocks. But, although the downside approach has been basis for many academic papers and has had significant impact on academic and non academic financial community, it is still subject to severe critics. 5.2.4. The High Order Moment CAPM 5.2.4.1. Evidence from the existence of the skewness and the kurtosis in the returns' distribution Literature on the CAPM has shown several evidences on favor of non normality and asymmetrical returns distribution. The attractive attribute of the CAPM is its pleasing and powerful explanation with a well built theoretical background about the risk-return relationship. This model is built on the basis of some assumptions but a critical one which imposes normality on the return distribution, so that the first two moments (mean and variance) are largely sufficient to describe the distribution. Nevertheless, this latter assumption is far from being satisfied as demonstrated by Fama (1965), Arditti (1971), Singleton and Wingender (1986), and more recently, Chung, Johnson, and Schill (2006). These studies point that the higher moments of return distribution are crucial for the investors and, from that, must not be neglected. They suggest, hence, that not only the mean and the variance but also higher moments such as skewness and kurtosis should be included in the pricing function. Consequently these attacks have led to the rejection of the CAPM within the Sharpe and Linter version and lead the way to the development of asset pricing models with higher moment than the variance like for instance; Fang and Lai (1997), Hwang and Satchell (1998) and Adcock and Shutes (1999) who introduced the kurtosis coefficient in the pricing function or also Kraus and Lizenberg (1974) who introduce the Skewness coefficient. The skewness coefficient is a measure of the asymmetry in the distribution. Particularly it is a tool to check that the distribution does not look to be the same to the left and the right with reference to a center point. The negative value of the skewness indicates that the distribution is concentrated on the right or skewed left. However, the positive value of the skewness indicates analogically that the data are skewed right. For more precision, we mean by skewed left that the left tail is longer than the right one and vice versa. Subsequently, for a normal distribution the skewness must be near to zero. The formula of the skewness is given by the ratio of the third moment around the mean divided by the third power of the standard deviation. Similarly the kurtosis coefficient is a measure to check whether the data are peaked or flat with reference to a normal distribution. This means, subsequently, that data with high kurtosis tend to have a different peak around the mean, decreases rather speedily and have heavy tails. However, data with low kurtosis tend to have rather a flat top near the mean. From this, the negative kurtosis indicates that the distribution is flat contrary the positive kurtosis indicates peaked distribution. For a normal distribution the kurtosis must be equal to zero. The formula of the kurtosis is given by the fourth moment around the mean divided by the square of the variance minus three which one calls also the excess kurtosis with reference to the normal distribution. The distribution that has zero excess Kurtosis is called “mesokurtotic” which is the case of all the normal distribution family. However, the distribution with positive excess Kurtosis is called “leptokurtotic” and means that this distribution has more than normal of values near the mean and a higher probability than normal of values in extreme (fatter tails). Finally, the distribution with negative excess Kurtosis is called “platykurtotic” and indicates that there is a lower probability than the normal to find values near to the mean and a lower probability than the normal to find extreme values (thinner tails). The French mathematician Benoit Mandelbrot (2004) in his book entitled ‘'The Misbehavior of Markets: A Fractal view of risk, ruin and reward'' concluded that the failure of any model (for example the option model of Black and Sholes or also the CAPM of Sharpe and Linter) or any investment theory in the modern finance can be due to the wide reliance on the normal distribution assumption. Generally speaking, investors who maximize their utility function have preferences which cannot be explained only as a straightforward comparison between the first two moments of the returns' distribution. In fact, the expected utility function of a given investor uses all the available information relating to the assets' returns and can be somehow linked to the other moments. It is not strange then to see authors like Arditti (1967), Levy (1969), Arditti and Levy (1975) and Kraus and Litzenberger (1976) extending the standard version of the CAPM to incorporate the skewness in the pricing function. Or even to see others incorporating the kurtosis coefficient, we name among others Dittmar (2002) who extends the three moments CAPM and examines the co-kurtosis coefficient. All these works stem to the necessity of introducing the high moment to the distribution in order to ameliorate the assets pricing if the restriction of normality is moved away. For example Dittmar(2002) finds that investors dislike co-kurtosis and prefer stocks with lower probability mass in the tails of the distribution rather than stocks with higher probability mass in tails of the distribution. He concludes, hence, that assets that increase the portfolios' kurtosis must earn higher return. Likewise, assets that decrease the portfolios' kurtosis should have lower expected return. As for Arditti (1967), Levy (1969), Arditti and Levy (1975) and Kraus and Litzenberger (1976), their results imply a preference for a positive skewness. They find that investors prefer stocks that are right skewed to those which are left skewed. Hence, assets that decrease the portfolios' skewness are more risky and must earn higher return comparing to those which increase the portfolios' skewness. These findings are further supported by studies like that of Fisher and Lorie (1970) or also that of Ibbotson and Sinquefield (1976) who find that the return distribution is skewed to the right. This has led Sears and Wei (1988) to derive the elasticity of substitution between the systematic risk and the systematic skewness. More recently, Harvey and Siddique (2000) show that this systematic skewness is highly significant with a positive premium of about 3.60 percent per year and therefore must be well admitted in pricing assets. The empirical evidence on the high moment CAPM is very mixed and very rich not only by its contribution but also by the methodologies used and the moments introduced. That's why the following section is devoted to further understand these approaches and to summarize the most important papers that explore this model in their studies in a purely narrative review. 5.2.4.2. Literature review on the high order moments CAPM Kraus & Litzenberger (1976) were the first to suggest that the higher co-moments should be priced. They claim that in the case where the return's distribution is not normal, investors are concerned about the skewness or the kurtosis. Just like Kraus and Litzenberger (1976), Harvey and Siddique(2000), have studied non-normal asset pricing models related to co-skewness. The study sample covers the period of 1963-1993 and contains the NYSE, AMEX, and NASDAQ equity data. They define the expected return as a function of covariance and co-skewness with the market portfolio in a three-moment CAPM and find that this model is better in explaining return and report that coskewness is significant and commands on average a risk premium of 3.6 percent per annum. From his part Dittmar (2002) uses the cubic function as a discount factor in a Stochastic Discount Factor framework. The model is a cubic function of return on the NYSE value-weighted stock index, and labor growth following Jagannathan and Wang (1996). He finds that the co-kurtosis must be included with labor growth so as to arrive to an admissible pricing kernel. Within the American context, Giovanni Barone Adesi, Patrick Gagliardini, and Giovanni Urga (2002) investigate the co-skewness in a quadratic market model. The study sample includes monthly return of 10 stock portfolios of the NYSE, AMEX, and NASDAQ formed by size from July 1963 to December 2000. The results from the OLS regression indicate that the extension of the return generating process to the skewness is praiseworthy. In fact, it is found that portfolios of small firms have negative co-skewness with the market. The results show also that there's an additional component in portfolios' return which not explained neither by the covariance nor by the co-skewness. Daniel Chi-Hsiou Hung, Mark Shackleton and Xinzhong Xu (2003) investigate the plausibility of the high co-moments CAPM (co-skewness and co-kurtosis) in explaining the cross section of stock returns in the UK context. They work on a 26 year period from January 1975 to December 2000 and incorporate monthly data of all listed stocks. For the portfolios formed on the basis of the beta, the authors find that the higher beta portfolio has the highest total skewness and kurtosis. Moreover, the higher co-moments show little significance in explaining cross section returns and do not increase the explanatory power of the model. However, the intercepts are found to be all insignificant. For the size sorted portfolios, the higher co-moments seem to be somehow significant and increase the explanatory power of the model. Moreover, for these portfolios the betas are all insignificant. Overall, Hung et al. (2003) find that the beta is very significant in every model and the addition of the higher order co-moment terms do not improve the explanatory power of the model which remains roughly unchanged. Furthermore, in all these models, the intercepts are insignificant. They conclude, finally, that in the UK stock exchange there exists a little evidence on favor of non-linear market models, since the higher order moments are found to be too weak. From their part, Rocky Roland and George Xiang (2004) suggest an asset pricing model with higher moments than the variance and extend the traditional version to a three-moment CAPM and a four-moment CAPM. They conclude, through their theoretical study, that further tests must be conducted to check the accuracy of the model even if some of research is already done. Angelo Ranaldo, and Laurent Favre (2005) put forward the extension of the two moments CAPM to a four moments one including the co-skewness and the co-kurtosis in pricing the hedge funds' returns. Their study is run on monthly returns 0f 60 hedge fund indices which are equally weighted. The market portfolio is constructed upon 70% of the Russell 3000 index and 30% of the Lehman US aggregate bond index and the risk free rate is the US 1 month Certificate of Deposit. Including the skewness and considering the quadratic model (time series regression), the authors find, on the one hand, that the adjusted R squared increases with more than half comparing to that of the two-moment CAPM. On the other hand, it is found that the coefficient of the co-skewness is positive and statistically significant which supports the existence of the co-skewness. However, the results indicate that the betas from the quadratic model are smaller than those implied by the two-moment CAPM. This latter result may be explained by the fact that some of the explanatory power of the beta is taken away by the co-skewness. Then, taking into account the co-kurtosis in a cubic model, Angelo Ranaldo, and Laurent Favre (2005) find that the additional coefficient has no major function in explaining the hedge fund return since it is only significant in four strategies. They point, hence, on the basis of their findings that the higher moments are more suited to represent the hedge fund industry return. In 2007, Chi-Hsiou Hung investigates the ability of the higher co-moments in predicting returns of portfolios formed from combining stocks in international markets and portfolios invested locally in the UK and the US. His sample includes monthly US dollar denominated returns, market value of common shares and interest rates for the period that spans from January 1975 to December 2004 and contains 18 countries. For this end, the author constructed portfolios on the basis of the momentum criterion. Hence, 10 equally weighted momentum sorted portfolios are obtained every six months based on past six months compounded returns. Then, portfolios are sorted on size every 12 months by ranking stocks on the basis of the market value at the time of the ranking. Finally, to dismantle the momentum effect and that of the size, the author builds 36 portfolios from the intersection of six size portfolios and six momentum portfolios. Overall, 100 equally weighted momentum, size and 36 double sorted momentum-size portfolios are obtained and regressed against the beta, the co-skewness and the co-kurtosis. He uses also a beta-gamma-delta sorts by firstly groups stocks into beta deciles, then into gamma and finally into delta deciles. Applying the cross sectional regression and looking at the beta-gamma-delta sequential sorts, the author finds that the risk premium associated to the market, the skewness and the kurtosis are highly significant. This is further supported by a higher adjusted R-squared for the four moment model compared to the two moment model. As for the momentum portfolios, the beta of portfolios is found to be statistically significant and the co-skewness and the co-kurtosis are generally almost significant. In addition to that, the adjusted R-squared increases when including the third and the fourth moment. For size portfolios, it is found that all portfolios' betas are negatively associated to the size and the co-skewness and co-kurtosis premiums are highly significant. However, unlike the other sorted portfolios, for the size portfolios all the intercepts are significant. Finally, for the two-way-sorts, it's found that the inclusion of the higher co-moments renders all intercepts insignificant and engenders a positive and a statistically significant premium for both the co-skewness and the co-kurtosis. For robustness check, Chi-Hsiou Hung (2007) investigates within a time series regression a cubic market model and finds that returns on the winner, loser and the smallest size deciles are cubic functions of the market model. One year later, particularly in 2008, the same author i.e., Chi-Hsiou Hung, investigates the ability of the non linear market-models in predicting assets' returns. The sample of the study contains a set of nineteen countries including the Canada, the United States, Belgium, Denmark, Finland, France, Germany, Italy, Netherlands, Norway, Spain, Sweden, Switzerland, United Kingdom, Australia, Hong Kong, Japan, Singapore and Taiwan and covers the 954-weeks' returns from 22 September 1987 to 27 December 2005 for both listed and delisted firms. The analysis of the higher order moment models includes the three moments and the four moment framework and run on the momentum and size portfolios. Using the times series regression, Chi-Hsiou Hung (2008) finds that the beta coefficient is highly significant in every model for both the winner and the loser portfolios. For the winner portfolios, adding the co-skewness to the CAPM increases the adjusted R-squared as the coefficient of the co-skewness is negative and statistically significant. However, for the loser portfolios, the addition of this coefficient does not contribute to the improvement of the explanatory power which is enhanced by an insignificant value of the coefficient. As for the fourth moment, the results do not provide any support on its favor. Though, in all models and for both the winner and the loser portfolios, the intercepts are found to be positive and highly significant which weakens the accuracy of these models in predicting returns. Concerning the size portfolios, the results indicate in sum there major conclusions. First, the standard CAPM explains well the return on the biggest size portfolios. Then, adding the squared of the markets contributes obviously to the improvement of the estimation for the smallest size portfolios. Nevertheless, the inclusion of the cubed market term does not have any utility since the coefficient is insignificant in all models and for both the studied portfolios. Gustavo M. de Athayde and Renato G. Flôres Jr (200?) extend the traditional version of the CAPM to include the higher moments and run tests on the Brazilian context. They use daily returns for the ten most liquid Brazilian stocks for the period that goes from January 2nd 1996 to October 23rd 1997. The results from the time series regression point toward the importance of the skewness coefficient. Indeed, adding the skewness to the classical CAPM generate a significant gain at the 1%level. Similarly, adding the skewness to the CAPM that contains already the kurtosis coefficient obviously contributes to the improvement of the latter since the coefficient is found to be significant. However, the addition of the kurtosis either to the classical CAPM or to that including skewness does not have any marginal gain. Gustavo M. and Renato G. (200?) conclude, therefore, that for the Brazilian context the addition of the skewness is appropriate while the gain in adding the kurtosis is irrelevant. In a similar way, Daniel R. Smith (2007) tests whether the conditional co-skewness explains the cross section of expected return in the UK stock exchange. Their study is performed on 17 value-weighted industry portfolios, 25 portfolios formed by sorting stocks on their market capitalization and book-equity to market-equity ratio for the period that spans from July 1963 to December 1997. For the conditional version, he uses six conditioning information variables that are documented in the literature. The results indicate that the co-skewness is important determinant in explaining the cross section of equities' return. Moreover, it is found that the pricing relationship varies through time depending on whether the market is negatively or positively skewed. The author claims, through the results, that the conditional two-moment CAPM and the conditional three factor model are rejected. However, the inclusion of the co-skewness in both models cannot be rejected by the data at all. Recently, Christophe Hurlin, Patrick Kouontchou, Bertrand Maillet(2009) try to include higher moments in the CAPM. As in Fama and French (1992), portfolios are formed with reference to their market capitalization and book-to-market ratio. The study sample consists of monthly data of listed stocks from the French stock market over the January 2002 through December 2006 sample period. Using three regression types, i.e. the cross section regression, the times series regression and the rolling regression, the authors find that, when co-skewness is taken into account, portfolios characteristics have no explanatory power in explaining returns, whereas, the ignorance of the co-skewness produces contradictory results. The results point also to a relatively higher value of adjusted R-squared for the model with the skewness factor compared to the classical model and that the co-skewness is positively related to the size. In a sum, the authors assert that high frequency intra-day transaction prices for the studied portfolios and the underlying factor return produce more plausible measures and models of the realized co-variations. Benoit Carmichael (2009) explores the effect of co-skewness and co-kurtosis on assets pricing. He finds that the skewness market premium is proportional to the standard market risk premium of the CAPM. This result supports that standard market risk is the most important determinant of the cross-sectional variations of asset returns. Likewise, Jennifer Conrad, Robert F. Dittmar, and Eric Ghysels (2008) try to explore the effect of the volatility and the higher moments on the security returns. Using a sample made of option prices for the period that goes from 1996 to 2005, the authors estimate the risk moments for individual securities. The results point to a strong relationship between the third and the fourth moments and the subsequent returns. Indeed, it's found that the skewness is negatively associated to the subsequent returns. This is equivalent to say that stocks with a relatively lower negative or positive skewness earn lower return. It is found also that the kurtosis is positively and significantly associated to the returns. They claim also, that these relationships are robust when controlling for some firms' characteristics. Then, using a stochastic discount factor and controlling for the higher co-moments, Conrad et al. (2008) find that idiosyncratic kurtosis is significant for short maturities whereas idiosyncratic skewness has significant residual predictive power for subsequent returns across maturities. In the Pakistani context, Attiya Y. Javid (2009) looks at the extension of the CAPM to a mean-variance-skewness and a mean-variance-skewness-kurtosis model. He works on daily as well as monthly return of individual stocks traded in the Karachi stock exchange for the period from 1993 to 2004. Allowing for the covariance co-skewness and co-kurtosis to vary overtime, the results indicate that both the unconditional and the conditional three-moment CAPM performs relatively well compared to the classical version and the four moment model. Nevertheless, the results show that the systematic covariance and the systematic co-skewness have only insignificant role in explaining return. 6. THE QUARREL ON THE CAPM AND ITS MODIFIED VERSIONS The CAPM developed by Sharpe (1964) and John Lintner (1965), and Mossin (1965) gave the birth to assets' valuation theories. For a long time, this model had always been the theoretical base of the financial assets valuation, the estimate of the cost of capital, and the evaluation of portfolios' performance. Being a theory, the CAPM found the welcome thanks to its circumspect elegance and its concept of good sense which supposes that the risk-averse investors would require a higher return to compensate for supporting higher risk. It seems that a more pragmatic approach carries out to conclude than there are enough limits resulting from the empirical tests of the CAPM. In fact, since the CAPM is based on simplifying assumptions, it will be completely normal that the deviation from these assumptions generates, ineluctably, imperfections. The most austere critic that was addressed to the CAPM, is that advanced by Roll (1977). In fact, in his paper, the author declares that the theory is not testable unless the market portfolio includes all assets in the market with the adequate proportions. Then, he blames the use of the market portfolio as a proxy, since the proxy should be mean/variance efficient even though the true market portfolio could not be. He, afterwards, passes judgment on the studies of both Fama and MacBeth (1973) and Blume and Friend (1973) in the sense that they present evidence of insignificant nonlinear beta terms. In fact, he sustains that without verifying to how extent the proxy of the market portfolio is closer to the reality, these evidences won't serve to any conclusion at all. He concludes, then, that the most practical hypothesis in this theory is that the market portfolio is ex-ante efficient. He asserts, subsequently, that verifying whether the market proxy is a good estimator may allow verifying the testable hypothesis of the model. With reference to Roll (1977), the results of empirical tests are dependent on the index chosen as a proxy of the market portfolio. If this portfolio is efficient, then we conclude that the CAPM is valid. If not, we will conclude that the model is not valid. But these tests do not allow us to ascertain whether the true market portfolio is really efficient. The tests of the CAPM are based, mainly, on three various implications of the relationship between the return and the market beta. Initially, the expected return on any asset is linearly connected to its beta, and no other variable will be able to contribute to the increase of the explanatory power of the model. Then, the premium related to beta is positive which means that the expected return of the market exceeds that of the individual stocks, whose return is not correlated with that of the market. Lastly, in the Sharpe and Lintner model (1964, 1965), the stocks whose returns are not correlated with that of the market, have an expected return equal to the risk free rate and a risk premium equal to the difference between the market return and that of the risk free rate. Furthermore, the CAPM is based on the simplifying assumption that all investors behave in the same way, but this is not easily feasible. Indeed, this model is based on anticipations and since the individuals do not announce their beliefs concerning the future, the tests of the CAPM can only lead to the assumption that the future can present either less or more the past. As a conclusion, tests of the CAPM can be only partially conclusive. Tests of the CAPM find evidences that are conflicting with the assumptions. For instance, many researchers (Jensen, 1968; Black, Jensen, and Scholes, 1972 among others) have found that the relationship between the beta and the expected return is weaker than the CAPM predicts. It is not bizarre, also, to find that low beta stocks earn higher returns than the CAPM suggests. Moreover, the CAPM is based on the risk reward principle which says that investors who bear higher risk are compensated for higher return. However, sometimes the investors support higher risk but require only lower returns. It is, particularly, the case of the horse ... and the casino gamblers. Several other assumptions are delicate tackling the validity of the model. For example, the CAPM assumes that the market beta is unchanged overtime. Yet, in a dynamic world this assumption remains a discussed issue. Since, the market is not static it would be preferable for a goodness of fit to model what is missing in a static model. In addition to that, the model supposes that the variance is an adequate measure of risk. Nevertheless, in the reality other risk measures such as the semi-variance may reflect more properly the investors' preferences. Furthermore, while the CAPM assumes that the return's distribution is normal, it is though often observed that the returns in equities, hedge funds, and other markets are not normally distributed. It is even demonstrated that higher moments such as the skewness and the kurtosis occur in the market more frequently than the normal distribution assumption would expect. Consequently, one can find oscillations (deviations compared to the average) more perpetually than the predictions of the CAPM. The reaction to these critics is converted into several attempts aiming at the conception of a well built pricing model. The Jagannathan and Wang (1996) conditional CAPM, for one, is an extension of the standard model. The conditional CAPM differ from the static CAPM in some assumptions about the market's state. In this model, the market is supposed to be conditioned on some state variables. Hence, the market beta is time varying reflecting the dynamic of the market. But, while the conditional CAPM is a good attempt to replace the static model, it had its limitations as well. Indeed, within the conditional version there are various unanswered questions. Questions are of the type; how many conditioning variables must be included? Can we consider all information with the same weight? Should high quality information be heavily weighted? How can investors choose between all information available on the market? For the first question, there is no consensus on the number of state variables included. Ghysels (1998) has criticized the conditional asset pricing models due to the fact that the incorporation of the conditioning information may lead to a great problem related to parameter instability. The problem is further enhanced when the model is used out-of-sample in corporate finance applications. Then, since investors do not have the same investment perspectives, the set of information available on the markets is not treated in the same way by all of them. In fact, information may be judged as relevant by an investor and redundant by another. So, the former will attribute a great importance to it and subsequently assign it a heavy weight. The latter, whereas, neglect this information since it doesn't affect the decision making process. To my own knowledge, the failure of the conditional CAPM may possibly come from the ignorance of the information weight. The beta of the model must be conditioning on the state of variables with the adequate weights. i.e., the contribution of each information variable in the market's risk must be proportional to its importance and relevancy in the decision making process for a given investor. Furthermore, even investors are not certain about which information must be included and which is not. Investors are usually doubtful about the quality of these information sources. Shall they refer to announcements and disclosures, analyst reports, observed returns and so on? This is remains questionable since even the set of information is not observed and that investors are uncertain about these parameters. A further limit associated to the conditional CAPM, is that they are prone to the underconditioning bias documented by Hansen and Richard (1987) and Jagannathan and Wang (1996). This means that there is a lack of the information included. Shanken (1990), and Lettau and Ludvigson (2001) suggest in order to overcome this problem to make the loadings depend on the observable state variables. Nevertheless, the knowledge of the ‘'real'' state variables clearly requires an expert. With the intention of avoiding the use of ‘'unreal'' state variables, Lewellen and Nagel (2006) divide the whole sample into non-overlapping small windows (months, quarters, half-years) and estimate directly from the short window regressions the time series of the conditional alphas or betas. They find weak evidence for the conditional CAPM over the unconditional one. Nevertheless, the method of Lewellen and Nagel (2006) can lead to biases in alphas and betas known as the ‘'overconditioning bias'' (Boguth, Carlson, Fisher, and Simutin (2008)). This bias may occur when using a conditional risk proxy not fully included in the information set such as the contemporaneous realized betas. The third contribution in the field of asset pricing models was the higher order moments CAPM. This model introduces the preferences about higher moments of asset return distributions such as skewness and kurtosis. The empirical literature highlighted a large discrepancy over the moments included in the model. For instance, Christie-David and Chaudry (2001) employ the four-moment CAPM on future markets. The result of their study indicates that the systematic co-skewness and co-kurtosis explain the return cross section variation. Jurczenko and Maillet (2002), Galagedera, Henry and Silvapulle (2002) make use of the Cubic Model to test for coskewness and cokurtosis. Hwang and Satchell (1999) study the co-skewness and the co-kurtosis in emerging markets. They show that co-kurtosis is more plausible than the co-skewness in explaining the emerging markets return. Y. Peter Chung Herb Johnson Michael J. Schill (2004) show that adding a set of systematic co-moments of order 3 through 10 reduces the explanatory power of the Fama-French factors to insignificance in roughly every case. Through these inconsistencies, one may think that modeling the non linear distribution suffers from the lack of a standard model to capture for the high moments. This problem is getting worse when cumulated together with the necessity to specify a utility function which is a faltering block in the application of higher moments CAPM-versions until now. Because only the investors utility function can determine their preferences. The last extension of the CAPM is that related to the downside risk. The downside CAPM defines the investors risk as the risk to go below a defined goal. When calculating the downside risk, only a part of the return distribution is used and only the observations which are below he mean are considered, i.e. only losses. Hence the downside beta can be largely biased. Moreover, the semi-variance is only useful when the return distribution is asymmetric. However, when the return distribution for a given portfolio is normal, then the semi-variance is only half the portfolio's variance. Consequently, the risk measure may be biased since the portfolio is mean-variance efficient. Furthermore, the downside risk is always defined with reference to a target return such as the mean or the median or in some cases the risk-free rate which is supposed to be constant for a given lap of time. Nevertheless, investors change their objectives and preferences from time to time which modify, consequently, the accepted level of risk over time. So, a well defined downside risk should perhaps include a developing learning process about investors' accepted rate of risk. 7. CONCLUSION The dispute over the CAPM has been for as much to answer the following question: ‘'is the CAPM dead or alive?''. Tests of the CAPM find evidences that are in some cases supportive and in some others aggressive. For instance, while Blume and Friend (1973), Fama and Macbeth (1973) accept the model, Jensen (1968), Black, Jensen, and Scholes (1972), and Fama and French (1992) reject it. The above question raises various issues in asset pricing models that academicians must struggle in order to claim to the validity of the model or before drawing any conclusion. Issues are like for example; to know whether the CAPM's relationship is still valid or not? Or does this relationship change when the context of the study changes? Do statistical methods affect the validity of the model? Does the study sample impinge on the model? Also since many improvements are joined to the model, the questions may be turned into types like for example; does conditional beta improve the CAPM? Does co-skewness or co-kurtosis or the both improve the risk-reward relationship? Does the Downside risk contribute to the survival of the CAPM? So, to conclude whether the CAPM is dead or not, one may find various difficulties since the evidence is very mixed. Unfortunately, through the narrative literature review we do not come out with a clear conclusion about whether our answer is yes or not. It seems that this tool is inadequate for our study since a strong debate needs to be solved. In fact, to answer the question of interest, several issues, remain doubtful in the literature review, must be taken for granted. First, it is impossible to compare studies that do not have the same quality. Quality is measured through for example the statistical methods, the sample size, the data frequency …etc Second, in order to reach a clear conclusion we must not rely only on studies that defend our point of view and neglect the opposite view. Hence, to defend the model, it is advisable to gather all positive studies, and to reject it is recommendable to accumulate only negative studies. Finally, if many versions need to be examined, how can we draw conclusion about the validity of the version since in the version itself the evidence is mitigated. For instance, how shall we know whether the conditional version improves the model while the conditional version its self is contested. In a nutshell, in order to move away the fogs on the CAPM and to brush away the hole inherent to the narrative literature review we recommend the use of the meta-analysis technique which is an instrument that affords accurate and pertinent conclusions. References Jensen, M., 1968. The Performance of Mutual Funds in the Period 1945-1964. Journal of Finance 23 (2), 389-416. Black, F., Jensen, M., Scholes, M., 1972. The Capital Asset Pricing Model: Some Empirical Tests in Studies in the Theory of Capital Markets. Michael C. Jensen, Ed. New York: Praeger, 79-121. Fama, Eugene F., and James D., MacBeth. 1973. Risk, Return, and Equilibrium: Empirical Tests. Journal of Political Economy 71, 607-636. Blume, M., and Irwin, F., 1973. A New Look at the Capital Asset Pricing Model. Journal of Finance 28 (1), 19-33. Stambaugh, R., 1982. On the Exclusion of Assets from Tests of the Two-Parameter Model: A Sensitivity Analysis. Journal of Financial Economics 10 (3), 237-268. Kothari. S, Shanken, J., and Richard G., 1995. Another look at The Cross Section of Expected Returns. Journal of Finance 50, 185-224. Fama, Eugene, F., and French, Kenneth, R., 1992. The Cross-Section of Expected Stock Returns. Journal of Finance 47 (2), 427-465. György, A., Mihály, O., and Balázs, S., 1999. Empirical Tests of Capital Asset Pricing Model (CAPM) In the Hungarian Capital Market. Periodica Polytechnica Ser. Soc. Man. Sci 7, 47-61. Kothari, S., and Shanken, J., 1999. Beta and Book-to-Market: Is the Glass Half Full or Half Empty?, Working paper available at Huck, K., Ahmadu, S., and Gupta, S., 1999. CAPM or APT? A Comparison of Two Asset Pricing Models for Malaysia. Malaysian Management Journal 3, 49-72. Levent, A., Altay-Salih, A., and Aydogan, K., 2000. Cross Section of Expected Stock Returns in ISE. Working paper available at Estrada, J., 2002. Systematic Risk in Emerging Markets: the D-CAPM. Emerging Markets Review 3, 365-379. Andrew, A., and Joseph, C., 2003. CAPM over the Long-Run: 1926-2001. NBER Working Paper No. W11903 Fama, Eugene, F. and French, Kenneth, R., 2004. The Capital Asset Pricing Model: Theory and Evidence. The Journal of Economic Perspectives 18 (3), 25-46. Thierry, P., and Pim Van V., 2004. Conditional Downside Risk and the CAPM. ERIM Report Series No. ERS-2004-048-F&A. Blake, T., 2005. An Empirical Evaluation of the Capital Asset Pricing Model. Working paper available at Galagedera, D., 2005. Relationship between downside beta and CAPM beta. Working paper available at. Fama, Eugene, F. and French, Kenneth, R., 2005. The Value Premium and the CAPM. The Journal of Finance 61 (5), 2163-2185. Erie, Febrian and Aldrin, Herwany., 2007. CAPM and APT Validation Test Before, During, and After Financial Crisis in Emerging Market: Evidence from Indonesia. The Second Singapore International Conference on Finance. Ivo Welch., 2007. A Different Way to Estimate the Equity Premium (for CAPM and One-Factor Model Usage Only). Working paper available at Peter, Christo ersen, Kris Jacobs and Gregory Vainberg, 2008. Forward Looking Betas. Working paper available at. Michael Dempsey, 2008., The significance of beta for stock returns in Australian markets. Investment Management and Financial Innovations 5 (3), 51-60. Simon, G., Koo and Ashley, Olson, XXXX., Capital Asset Pricing Model Revisited: Empirical Studies on Beta Risks and Return. Working paper available at. Arduino, Cagnetti, 2002. Capital Asset Pricing Model and Arbitrage Pricing Theory in the Italian Stock Market: an Empirical Study. Business and Management Research Publications. Available at. Pablo, Rogers and José Roberto Securato, 2007. Reward Beta Approach: A Review. Working paper available at. Bornholt, G. N., 2007. Extending the capital asset pricing model: the reward beta approach. Journal of Accounting and Finance 47, 69-83. Najet, Rhaiem, Saloua, A., and Anouar Ben.Mabrouk., 2007. Estimation of Capital Asset Pricing Model at Different Time Scales: Application to French Stock Market. The International Journal of Applied Economics and Finance 2, 79-87. Cudi Tuncer Gürsoy and Gulnara Rejepova., 2007. Test of Capital Asset Pricing Model in Turkey. Dogus Üniversitesi Dergisi 8, 47-58. Robert, R. Grauer, and Johannus A. Janmaat., 2009. On the power of cross-sectional and multivariate tests of the CAPM. Journal of Banking and Finance 33,775-787. Ravi, Jagannathan, and Zhenyu, Wang., 1996., The Conditional CAPM and the Cross-Section of Expected Returns. The Journal of Finance 51, 3-53. Jean-Jacques Lilti, Helene Rainelli Le Montagner, and Yannick Gouzerh (….), Capital humain et CAPM conditionnel: une comparaison internationale des rentabilités d'actions.' Martin Lettau & Sydney Ludvigson, 2001. Resurrecting the (C) CAPM: A Cross-Sectional Test When Risk Premia Are Time-Varying. Journal of Political Economy, University of Chicago Press 109, 1238-1287. William N. Goetzmann, Akiko Watanabe, and Masahiro Watanabe, 2007. Investor Expectations, Business Conditions, and the Pricing of Beta-Instability Risk'', Attiya Y. Javid and Eatzaz Ahmad, 2008. The Conditional Capital Asset Pricing Model: Evidence from Karachi Stock Exchange'', PIDE Working Papers, Vol.48 Akiko Fujimoto, and Masahiro Watanabe, 2005. Value Risk in International Equity Markets. Stefan Nagel, and Kenneth J. Singleton, 2009. Estimation and Evaluation of Conditional Asset Pricing Models. Peng Huang, and James Hueng, ‘' Conditional Risk-Return Relationship in a Time-Varying Beta Model. Antonis Demos and Sofia Pariss., 1998. Testing Asset Pricing Models: The Case of The Athens Stock Exchange. Multinational Finance Journal 2, 189-223. Devraj Basu and Alexander Stremme; 2007. CAPM and Time-Varying Beta: The Cross-Section of Expected Returns. Tobias Adrian and Francesco Franzoni, 2008. Learning about Beta: Time-Varying Factor Loadings, Expected Returns, and the Conditional CAPM. More from UK Essays - Dissertation Examples Index - Return to the Dissertations Index - Example Finance Dissertations - More Finance Dissertation Examples - Free Finance Essays - Finance Essays (submitted by students) - Dissertation Help - Free help guides for writing your dissertation
http://www.ukessays.com/dissertations/finance/capm.php
CC-MAIN-2014-35
en
refinedweb
What we'll learn We will learn how to use React's context API to manage state. Also, we'll see how to use useSWR hook from swr to manage async data from an API. Our Requirements - Data can come from synchronous or asynchronous calls. An API endpoint or a simple setState. - Allow to update state data from the components that use it. - No extra steps like actions, thunks. Small introduction to swr SWR (stale-while-revalidate) is a caching strategy where data is returned from a cache immediately and send fetch request to server. Finally, when the server response is available, get the new data with changes from the server as well as updating the cache. Here we are talking about the swr library from vercel. It provides a hook useSWR which we will use to fetch data from GitHub API. Head over to swr's docs to learn more. The API is small and easy. Store We need a top-level component where will maintain this global state. Let's call this component GlobalStateComponent. If you've used Redux, this can be your store. We'll test with 2 types of data for better a understanding. - Users data coming from an API like GitHub which might not change pretty quickly. - A simple counter which increments count by 1 every second. // global-store.jsx const GlobalStateContext = React.createContext({ users: [], count: 0, }); export function GlobalStateProvider(props) { // we'll update here return <GlobalStateContext.Provider value={value} {...props} />; } // a hook which we are going to use whenever we need data from `GlobalStateProvider` export function useGlobalState() { const context = React.useContext(GlobalStateContext); if (!context) { throw new Error("You need to wrap GlobalStateProvider."); } return context; } Now we need to use useSWR hook to fetch users data. Basic API for useSWR looks like this. const { data, error, mutate } = useSWR("url", fetcher, [options]); // url - an API endpoint url. // fetcher - a function which takes the first argument as parameters (url here) // and returns a promise. // options - Options for the hook. Configuration for this hook. // data - response from the API request // error - Error response from fetcher will be caught here. // mutate - Update the cache and get new data from server. We will use browser's built-in fetch API. You can use Axios or any other library you prefer. const fetcher = (url) => fetch(url).then((res) => res.json()); With this, our complete useSWR hook looks like this. const { data, error, mutate } = useSWR(``, fetcher); And, we need a setState with count and a setInterval which updates the count every second. ... const [count, setCount] = React.useState(0); const interval = React.useRef(); React.useEffect(() => { interval.current = setInterval(() => { setCount(count => count + 1); }, 1000); return () => { interval.current && clearInterval(interval.current); } }, []); ... A context provider takes a value prop for the data. Our value will be both user related data and count. If we put all these little things together in a global-store.jsx file, it looks like this. // global-store.jsx const GlobalStateContext = React.createContext({ users: [], mutateUsers: () => {}, error: null, count: 0, }); export function GlobalStateProvider(props) { const { data: users, error, mutate: mutateUsers } = useSWR( ``, fetcher ); const [count, setCount] = React.useState(0); const interval = React.useRef(); React.useEffect(() => { interval.current = setInterval(() => { setCount((count) => count + 1); }, 1000); return () => { interval.current && clearInterval(interval.current); }; }, []); const value = React.useMemo(() => ({ users, error, mutateUsers, count }), [ users, error, mutateUsers, count, ]); return <GlobalStateContext.Provider value={value} {...props} />; } // a hook to use whenever we need to consume data from `GlobalStateProvider`. // So, We don't need React.useContext everywhere we need data from GlobalStateContext. export function useGlobalState() { const context = React.useContext(GlobalStateContext); if (!context) { throw new Error("You need to wrap GlobalStateProvider."); } return context; } How to use it Wrap your top-level component with GlobalStateProvider. // app.jsx export default function App() { return <GlobalStateProvider>//...</GlobalStateProvider>; } Let's have two components, one consumes users data and another one needs counter. We can use useGlobalState hook we created in both of them to get users and count. // users.jsx export default function Users() { const { users, error } = useGlobalState(); if (!users && !error) { return <div>Loading...</div>; } return <ul>...use `users` here</ul>; } // counter.jsx export default function Counter() { const { count } = useGlobalState(); return <div>Count: {count}</div>; } // app.jsx export default function App() { return ( <GlobalStateProvider> <Counter /> <Users /> </GlobalStateProvider> ); } That's it. Now you'll see both Counter and Users. The codesandox link: codesandbox But, Wait If you put a console.log in both Users and Counter components, you'll see even if only count updated, Users component also renders. The fix is simple. Extract users in a component between App and Users, and pass users as a prop to Users component, and wrap Users with React.memo. // app.jsx export default function App() { return ( <GlobalStateProvider> <Counter /> - <Users /> + <UserWrapper /> </GlobalStateProvider> ) } // user-wrapper.jsx export default function UserWrapper() { const { users, error } = useGlobalState(); return <Users users={users} error={error} />; } // users.jsx - export default function Users() { + const Users = React.memo(function Users({users, error}) { - const {users, error} = useGlobalState(); if (!users && !error) { return <div>Loading...</div>; } return ( <ul> ...use users here </ul> ) }); export default Users; Now check the console.log again. You should see only Counter component rendered. The finished codesandbox link: codesandbox How to force-update users Our second requirement was to update the state from any component. In the same above code, if we pass setCounter and mutateUsers in the context provider's value prop, you can use those functions to update the state. setCounter will update the counter and mutateUsers will resend the API request and returns new data. You can use this method to maintain any synchronous, asynchronous data without third-party state management libraries. Closing Notes - Consider using useReducerinstead of useStateif you end up with too many setStates in global state. A good use case will be if you are storing a large object instead of a single value like countabove. Splitting up that object in multiple setStatemeans any change in each of them will re-render all the components using your context provider. It'll get annyoing to keep track and bring in React.memofor every little thing. - react-query is another solid library as an alternative to swr. - Redux is still doing great for state management. The new redux-toolkit amazingly simplifies Redux usage. Check it out. - Have an eye on recoil, A new state management library with easy sync, async state support. I didn't use it on a project yet. I'll definitely try it soon. Thank you and have a great day. 😀 👋 Posted on by: Sai Sandeep Vaddi Learn. Build. Ship. Repeat
https://dev.to/saisandeepvaddi/simple-way-to-manage-state-in-react-with-context-kig
CC-MAIN-2020-40
en
refinedweb
I am looking at the online Help as I write an export UDF, and I am not able to get the fileContents working. What is a stringified json and how do I use it. Can you please provide an example? It's just a json in string form. Check out this python library Great answer Ray! More color:To get the information, start with a function that looks like thisimport jsondef exportUDF(inStr): obj = json.loads(inStr) contents = obj["fileContents"] import jsondef exportUDF(inStr): obj = json.loads(inStr) contents = obj["fileContents"] Thanks for your help! I was trying to write a sample main to generate some data. I was missing double quotes My code below now works. msg = []ss = "AUDUSD,72850,2017-06-28 16:46:17.830,0.00344776010689,2878"for ii in xrange(100): msg.append(ss)jsonStr = '{{"fileContents" : "{}"}}'.format('\n'.join(msg))main(jsonStr) Nicely done Mark!
https://discourse.xcalar.com/t/white-check-mark-what-is-a-stringified-json/141
CC-MAIN-2020-40
en
refinedweb
). It’s currently also used in Nodeshot to update the network links that are shown on the map. If you are a developer of another community network node-db project and you want to use netdiff to update the topology stored in your database, please get in touch!('./stored-olsr.json') new = OlsrParser('telnet://127.0.0.1:9090') diff(old, new) In alternative, you may also use the subtraction operator: from netdiff import OlsrParser from netdiff import diff old = OlsrParser('./stored-olsr.json') new = OlsrParser('telnet://127.0.0.1:9090') - netdiff.BatmanParser: parser for the batman-advanced alfred tool - netdiff.Bmx6Parser: parser for the BMX6 b6m tool - netdiff.CnmlParser: parser for CNML 0.1 - netdiff.NetJsonParser: parser for the NetworkGraph NetJSON object. Initialization arguments data: the only required argument, different inputs are accepted: - JSON formatted string representing the topology - python dict (or subclass of dict) representing the topology - string representing a HTTP URL where the data resides - string representing a telnet URL where the data resides - string representing a file path where the data resides timeout: integer representing timeout in seconds for HTTP or telnet requests, defaults to None verify: boolean indicating to the request library whether to do SSL certificate verification or not Initialization examples Local file example: from netdiff import BatmanParser BatmanParser('./my-stored-topology.json') HTTP example: from netdiff import NetJsonParser url = '' NetJsonParser(url) Telnet example with timeout: from netdiff import OlsrParser OlsrParser('telnet://127.0.1:8080', timeout=5) HTTPS example with self-signed SSL certificate using verify=False: from netdiff import NetJsonParser OlsrParser('', verify=False).
https://pypi.org/project/netdiff/0.4.5/
CC-MAIN-2020-40
en
refinedweb
Everything about Slinky -- issues, ideas for features, where it's being used, and more! Hi All , I am writing a test cases of functional component using munit and below is the code : import components.ErrorComponent import munit.FunSuite import org.scalajs.dom.document import slinky.core.FunctionalComponent import slinky.web.ReactDOM class ErrorComponentTestSuite extends FunSuite { test("Can render a functional component") { val container = document.createElement("div") val component = FunctionalComponent[Int](_.toString) ReactDOM.render(component(1), container) assert(container.innerHTML == "1") } } whenever i execute this code using command testOnly etlflow.components.ErrorComponentTestSuite it fails with an error : etlflow.components.ErrorComponentTestSuite: ==> X etlflow.components.ErrorComponentTestSuite.Can render a functional component 0.01s scala.scalajs.js.JavaScriptException: ReferenceError: window is not defined Any idea what i am doing wrong here ? I have this def keyHandler(evt: SyntheticKeyboardEvent[dom.html.Input]) = if(evt.key == "Enter") props.updateTags(props.tags + (tagRef.current.value)) else () and I get the complaint [error] overloaded method := with alternatives: [error] (v: Option[() => Unit])slinky.core.OptionalAttrPair[slinky.web.html._onKeyDown_attr.type] <and> [error] (v: () => Unit)slinky.core.AttrPair[slinky.web.html._onKeyDown_attr.type] <and> [error] [T <: slinky.core.TagElement](v: slinky.web.SyntheticKeyboardEvent[T#RefType] => Unit): slinky.core.AttrPair[T] [error] cannot be applied to (slinky.web.SyntheticKeyboardEvent[org.scalajs.dom.html.Input] => Unit) [error] onKeyDown := (keyHandler _)) TagElementconstraint comes from. I can't see it in the code. And I don't understand what I'm supposed to write to create an event handler of the correct type. SyntheticKeyboardEventhas no subtype constraint, and neither does SyntheticEvent. := (evt) => keyHandler(evt)? Slinky uses a bit of implicit trickery in order to have the event's element type match the tag that the attribute is being applied to, I wonder if there's some issue with partially applied functions here. You are setting this attribute on an inputtag, right? Hi all. Thoroughly enjoying Slinky to build my app using functional components with hooks. I've hit a road block with useContext in that I can't figure out how to set the value. I naively thought it would be similar to useState like so: val (thing, setThing) = useContext(MyContexts.thing) which I believe is how it works in vanilla React. This doesn't appear to be the case with Slinky (version 0.6.5). Could someone please explain how to use context with functional components in Slinky? What have I misunderstood? Thanks! useContexthook. Note that React doesn't do this either: You set the context using a Provider. MyContexts.thing.Providerapproach within the functional component that sets the value and render all consumers of that value within the body of that provider. Hey there; loving this library so far! I am running into an issue with the react-router facade when trying to use activeClassName with NavLink. I'm not sure why, since I think I'm using it the same way NavLink is used for the sidebar on the Slinky docs site, and that works perfectly. Minimal example of what isn't working for me: @react object Homepage { case class Props() val component = FunctionalComponent[Props] { props => NavLink( to = "/about", activeClassName = Some("bar") )( className := "foo" )( "About" ) } } ReactDOM.render( Router( history = History.createBrowserHistory() )( Switch( Route("/", Homepage.component, exact = true), Route("/about", Homepage.component), Route("*", Homepage.component) ) ), container ) The link itself works, and goes to the /about page, but the bar class doesn't get added when /about is the current page. Any thoughts on what I might be doing wrong would be greatly appreciated! On a slightly related note, is there a recommended/efficient way of sharing state globally? @mn98 not sure if it can be of interest in your search, but I wrote this example about using Diode
https://gitter.im/shadaj/slinky
CC-MAIN-2020-40
en
refinedweb
Waldek Mastykarz wrote a good post about not passing the web part context all around your React components, which is good advice. And as Waldek pointed out after reviewing my code I pass the full context and not just the GraphClient. So ideally, passing just what you need is better, but I’m lazy at times :) And it gives you the reader an opportunity to improve my code :) I tend to create static helper classes, and here’s one approach to ease calling Graph API’s throughout your solution. The wrapper class is quite simple, and I’ve created helper methods for GET, POST, PATCH, DELETE. To use this you would first initialize the class in your main web part code. public async render(): Promise<void> { await MSGraph.Init(this.context); ... } and somewhere in your code if you wanted to get Group data for a group you could use something like this: import { MSGraph } from '../services/MSGraph'; ... let groupId = this.props.context.pageContext.legacyPageContext.groupId; let graphUrl = `/groups/${groupId}`; let group = await MSGraph.Get(graphUrl); Photo by Jeremy Thomas at Unsplash
https://www.techmikael.com/2018/09/example-of-wrapper-to-ease-usage-of.html
CC-MAIN-2020-40
en
refinedweb
? As you probably know, doubles are represented internally as binary fractions. The original double in binary[1. Obviously the mantissa is in binary and the exponent is in decimal.] is 1.1111111111111111111111111111111111111111111111111111 x 2-2 Or 0.011111111111111111111111111111111111111111111111111111 which in decimal is 0.499999999999999944488848768742172978818416595458984375 As you can see it is extremely close to the stated value. Again, just so we are clear: when in source code you say 0.49999999999999994, that decimal fraction is rounded to the nearest binary fraction that has no more than 52 bits after the decimal place. Already we have one rounding, and as you can see, we rounded up — but we’re still less than half. Now suppose we took this value and added exactly 0.5 to it. We’d get 0.999999999999999944488848768742172978818416595458984375 which is still smaller than one. Therefore we’d expect this to go to zero when passed to Math.Floor, right? Wrong; that fraction isn’t a legal double because it would require 53 bits of precision to represent exactly. A double only has 52 bits of precision, and therefore this must be rounded off when it becomes a double. But rounded to what? Clearly it is extremely close to 1.0. What is the largest double value that is smaller than 1.0? In binary that would be 1.1111111111111111111111111111111111111111111111111111 x 2-1 Or 0.11111111111111111111111111111111111111111111111111111 which is 0.99999999999999988897769753748434595763683319091796875 in decimal. So we now have three relevant values: 0.999999999999999888977697537484345957636833190917968750 (largest double smaller than 1.0) 0.999999999999999944488848768742172978818416595458984375 (exact) 1.000000000000000000000000000000000000000000000000000000 (1.0) Which one should we round the middle one to? Well, which is closer? It’s OK, I’ll wait while you do the math by hand. . . . Neither is closer. The value is exactly in the middle of the two possibilities. The difference is 0.000000000000000055511151231257827021181583404541015625 in both directions. Of course you could have worked that out a lot more easily in binary than in decimal! The three numbers are in binary: 0.111111111111111111111111111111111111111111111111111110 0.111111111111111111111111111111111111111111111111111111 1.000000000000000000000000000000000000000000000000000000 From which you can read off that difference between both pairs is 2-54. When faced with this situation the floating point chip has got to make a decision and it decides to once again round up; it has no reason to believe that you really want it to round down. So the sum rounds to the double 1.0, which is then “floored” to 1.0. Once again we see that floating point math does not behave at all like pen-and-paper math. Just because an algorithm would always work in exact arithmetic does not mean that it always works in inexact floating point. Multiple roundings of extremely small values can add up to a large difference. I’ve posted the code I used to make all these calculations below; long-time readers will recognize it from my previous article on how to decompose a double into its constituent parts. using System; using System.Collections.Generic; using System.Text; using Microsoft.SolverFoundation.Common; class Program { static void Main() { MyDouble closeToHalf_Double = 0.49999999999999994; Rational closeToHalf_Rational = DoubleToRational(closeToHalf_Double); Console.WriteLine("0.49999999999999994"); Console.WriteLine("Actual value of double in binary:"); Console.WriteLine("1.{0} x 2 to the {1}", closeToHalf_Double.MantissaBits.Join(), closeToHalf_Double.Exponent - 1023); Console.WriteLine("Actual value of double as fraction:"); Console.WriteLine(closeToHalf_Rational.ToString()); Console.WriteLine("Actual value of double in decimal:"); Console.WriteLine(closeToHalf_Rational.ToDecimalString()); Console.WriteLine(); MyDouble closeToOne_Double = 0.99999999999999994; Rational closeToOne_Rational = DoubleToRational(closeToOne_Double); Console.WriteLine("0.99999999999999994"); Console.WriteLine("Actual value of double in binary:"); Console.WriteLine("1.{0} x 2 to the {1}", closeToOne_Double.MantissaBits.Join(), closeToOne_Double.Exponent - 1023); Console.WriteLine("Actual value of double as fraction:"); Console.WriteLine(closeToOne_Rational.ToString()); Console.WriteLine("Actual value of double in decimal:"); Console.WriteLine(closeToOne_Rational.ToDecimalString()); Console.WriteLine(); // Now let's do the arithmetic in "infinite precision": Rational sum = closeToHalf_Rational + (Rational)0.5; Console.WriteLine("Sum in exact arithmetic as fraction:"); Console.WriteLine(closeToOne_Rational.ToString()); Console.WriteLine("Sum in exact arithmetic in decimal:"); Console.WriteLine(closeToOne_Rational.ToDecimalString()); // But that has more precision than will fit into a double, // so we have to round off. We have two possible values: Rational d1 = Rational.One - sum; Rational d2 = sum - closeToOne_Rational; Console.WriteLine("Difference from one:"); Console.WriteLine(d1.ToDecimalString()); Console.WriteLine("Difference from closeToOne:"); Console.WriteLine(d2.ToDecimalString()); } static Rational DoubleToRational(MyDouble d) { bool subnormal = d.Exponent == 0; var two = (Rational)2; var fraction = subnormal ? Rational.Zero : Rational.One; var adjust = subnormal ? 1 : 0; for (int bit = 51; bit >= 0; --bit) fraction += d.Mantissa.Bit(bit) * two.Exp(bit - 52 + adjust); fraction = fraction * two.Exp(d.Exponent - 1023); if (d.Sign == 1) fraction = -fraction; return fraction; } } struct MyDouble { private ulong bits; public MyDouble(double d) { this.bits = (ulong)BitConverter.DoubleToInt64Bits(d); } public int Sign { get { return this.bits.Bit(63); } } public int Exponent { get { return (int)this.bits.Bits(62, 52); } } public IEnumerable<int> ExponentBits { get { return this.bits.BitSeq(62, 52); } } public ulong Mantissa { get { return this.bits.Bits(51, 0); } } public IEnumerable MantissaBits { get { return this.bits.BitSeq(51, 0); } } public static implicit operator MyDouble(double d) { return new MyDouble(d); } } static class Extensions { public static int Bit(this ulong x, int bit) { return (int)((x >> bit) & 0x01); } public static ulong Bits(this ulong x, int high, int low) { x <<= (63 - high); x >>= (low + 63 - high); return x; } public static IEnumerable<int> BitSeq(this ulong x, int high, int low) { for (int bit = high; bit >= low; --bit) yield return x.Bit(bit); } public static Rational Exp(this Rational x, int y) { Rational result; Rational.Power(x, y, out result); return result; } public static string ToDecimalString(this Rational x) { var sb = new StringBuilder(); x.AppendDecimalString(sb, 50000); return sb.ToString(); } public static string Join<T>(this IEnumerable seq) { return string.Concat(seq); } } A tiny correction to your first bold statement: I think you should count the implicit leading 1 bit in the precision. Your binary representation of the number uses 54 1’s, so it makes more sense to say it needs 54 bits of precision. Your fixed point representation of the original number also uses 54 bits after the decimal point, not 52, but it does use 53 1’s. Again, saying it has 53 bits of precision makes more sense.. Also, it might be useful to explain that the decision to round up is not arbitrary but the result of long discussions in a committee. The poor chip has no say in it. The rounding is done so that the last bit of the rounded number is 0. So positive …11 is rounded up, while positive …01 is rounded down. This is also called “round to even.” Actually to be even more picky IEEE-754 defines 4 rounding modes. The one you mention is the default, though, and AFAIK is the only one used for .Net System.Double and System.Float. See here for how to change modes with the Microsoft C runtime library. Since this subthread is all about being picky… It is not System.Float but System.Single Sorry couldn’t stop myself from posting:) “To save one from a mistake is a gift of paradise”, after all. Well spotted. 🙂 So now you have me wondering: how should you make sure to round halves up, correctly, even for the unreasonable numbers, if I can call them that? There is no Math.Round(val, MidpointRounding.Up). Math.Floor(value + nextafter(0.5, 0)) is a valid trick for positive numbers, but gives the wrong result for -0.5. Conditionally adding either 0.5 or nextafter(0.5, 0), depending on the sign of value, works, but still, it does not necessarily preserve the sign of negative numbers (they may round to +0.0 instead of -0.0). Taking a different approach, Math.Round(value < 0 ? nextafter(value, 0) : value, MidpointRounding.AwayFromZero) comes to mind, but that would give the wrong result when nextafter(value, 0) differs from value by more than one. The only thing I can come up with that I haven’t yet been able to rule out is special casing based on value – Math.Truncate(value): if it’s exactly 0.5, return value + 0.5. If it’s exactly -0.5, return -(-value – 0.5). Otherwise, just return Math.Round(value);. But I’m not sure I am not overlooking anything, and I’m convinced there must be better ways. It turns out that, for exactly the same reasons adding 0.5 *doesn’t* work, adding the number he references that is the largest value smaller than 0.5 *does* work. Think about all of the boundary conditions and you’ll see that this is the case (0.5 works, see above, 0.4999…etc works because it rounds correctly. Interesting hack. That does not work, as I already noted in my comment: “Math.Floor(value + nextafter(0.5, 0)) is a valid trick for positive numbers, but gives the wrong result for -0.5.” Note that nextafter(0.5, 0) is just another way of writing 0.49999999999999994, except without relying on any specific precision. Continuing with this, it appears to be simpler to just do it yourself, and not even use the default Math.Round at all: FWIW the first ‘strange’ bug I remember encountering was a report that on an accounting system created for Apple II computers, dollar values of $8.12 were recorded as $8.11 The system programmers were stumped as to the cause and the program gave correct values for every other dollar & cents value but this one. The problem turned out to be they used a routine to store the dollar/cents amounts like this: int (100*$amount) In Applesoft, 100*8.12=811.9999998 or thereabouts. But to make matters worse, Applesoft rounded the answer for display purposes, so you get contradictory-seeming results like these: print (8.12*100) 812 print int(8.12*100) 811 print (812-(8.12*100)) 2.68220901E-07 To make matters worse, reversing the order of the multiplicands changes the answer: print (812-(100*8.12)) 0 Moral of the story: Expect the results of floating point operations to deviate from the expected in the last significant digit (or more, especially if you’re chaining operations) and to not necessarily display the discrepancy. FYI back in the day I tested this up to many millions or billions of dollars (I can’t remember exactly but I let a loop run for pretty much all day on it) and 8.12 was the only two-digit decimal to have this problem in Applesoft. You can still see the bug at work today on Apple II emulators like > In Applesoft, 100*8.12=811.9999998 or thereabouts. Oops, actually 100*8.12=812 whereas 8.12*100=811.9999998 (approx.). The order of the multiplicands is important. I think, when dealing with issues like this, we need to make a careful distinction between accuracy and precision. Using the IEEE 754 double-precision binary floating-point format, 2^53 = 1+2^53, and this also holds for 2^54, 2^55, … So if you need that 1+2^53 result, you should not be using numbers in that IEEE-754 format. The underlying reasons for this also hold for fixed width integers. Some results will not fit. About half the results from + and almost all the results from * have this characteristic. As a rough rule of thumb, if the accuracy you need uses the square root of the number of bits of precision you can mostly ignore fixed width format issues. That’s not completely true (floating point subtraction of positive decimal values can be a good counter example), but it’s a plausible starting point. Note also that we teach kids how to deal with fixed width numeric issues in grade school. We teach them how to carry when adding, borrow when subtracting, and so on. The digits we use in our decimal numbers are “fixed width” and it’s simply not the case that the sum of any two single digit numbers will fit in a single digit result. Computer hardware often includes support for “carry”, and programming languages routinely do not expose this to the programmer. But we can work around this by splitting our numeric formats in half and use the upper half to represent “carry” (and using a sequence of “numbers” to represent our numeric values). There’s a web site of quotations from the late, great Ken Iverson,, and this one seems particularly apposite: ‘.’ The problem with real values (oh, the pun) is that you, strictly speaking, never can be sure that A=B and B=C. What you know is that A≈B and B≈C, but that doesn’t imply that A≈C! “If you throw a frog into a boiling water, it will jump out, but if you throw it into a cold water and then slowly heat it, the frog won’t notice and will get boiled”. But of course, no one likes to carry all this A=B±ε, B=C±ε notation, because it’s tedious and not funny at all — though you clearly see that A=C±2ε. I agree, I was always taught that comparing floats for equality should be considered a bug. That said, I worked for a long time on a constraint programming language project and I couldn’t convince anyone else on the project that including zero tolerance comparison of reals in the language was a mistake. It is _often_ a bug. It is not _always_ a bug. An example of when it isn’t is a function that operates on an array to produce another array (such as a Fourier transform or matrix product), and takes an optional floating-point “scale” input that it will multiply each element of the output by. It is entirely reasonable — and common — to compare that “scale” input to 1.0 exactly. If it is exactly 1.0, which it often is because the user often supplies a literal 1.0 input, then we can skip the multiplication. But if it is anything other than _exactly_ 1.0, we need to do the multiplication. Another example is that we may have a function that uses different computation paths depending on whether the input array is aligned to an exact multiple of a SIMD vector width, and we want to test to make sure that these computation paths produce precisely the same answer. Having an equivalence test for floating-point values is useful, in that if X is equivalent to Y, then for any pure function F(), one may substitute F(X) for F(Y) when convenient (e.g. if one has computed one, one need not repeat the computation for the other). Note that equivalence does not imply that the real numbers the floats are “supposed” to represent are equal, nor does non-equivalence mean the real numbers are not equal. Equivalence, however, is a useful concept in its own right even though languages and frameworks may not expose it very well (e.g. floating-point types in .NET override `Equals(Object)` in such fashion as to report some non-equivalent values as equal). As for why languages have a “numerically-equal” operator, that may tie in with the fact that floating-point types are often used to store whole-number quantities which are large, but which within the precisely-representable range. Historically they were often by far the best means of storing such values; today there are still contexts where they remain the best (e.g. when mixing real and whole-number quantities, it’s often better to use floating-point types for everything than to perform lots of integer-to-floating-point conversions). Pingback: The Morning Brew - Chris Alcock » The Morning Brew #1358 Wow what an information.. This article has good information. Thanks for the post..! While it’s an interesting defect, it’s important to note that there are multiple precision issues with floating point types and associated math. If one needs precision without rounding error, use the Decimal type. So you’re telling me that in decimal, one divided by three and then multiplied by three is one, because there are no rounding errors with decimal? There are plenty of rounding errors with decimal; they’re just different rounding errors. Hear hear. No numerical representation can express all possible numerical quantities. Even using only the four basic arithmetic operators (or, for that matter, using any one of them), it will be possible to express a sequence of computations whose numerical result will not be representable, thus requiring the computation to return an imprecise result, return an error indication, throw an exception, run out of memory, or hang. For financial calculations it may be helpful to have a fixed-precision type which would always either yield exact results or report a failure, but Decimal is not such a type, since any operation on Decimal values may produce rounded results without any warning. Any finite precision representation is going to have these problems. There another class of numbers for which +0.5 will mess things up: 2^52+1, 2^52+3, …, 2^53-1 and the negatives of these. Nice catch! Wall cavities especially shouldn’t be sealed back up until it is completely dry inside. Mold problems, however, when left unattended for too long a time, can cause a severely weakened home architecture and even health problems such as allergies and poisoning. If you hire a company that handles both processes from the start, they are likely to be able to discover that mold is going to be a problem and take care of it before it becomes a health issue. Bullying is a problem in the online gaming world. The info above should make sure you have a great gaming experience. This means that you have to understand more about making the avocation as pleasurable as it can be. Feel free to visit my web site :: Lol Elo Boost
https://ericlippert.com/2013/05/16/spot-the-defect-rounding-part-two/
CC-MAIN-2020-40
en
refinedweb
Following are my observations and findings while using Unicode in a LabVIEW UI application. The purpose of using Unicode in this application is to localize the UI on the fly in any language without having to change the Computer locale. The application provides a settings page which allows the user to select the language of the UI. Based on this setting all UI components (controls and indicators) are localized into the selected language. The resource strings for all controls/indicators and languages are read from a text file containing Unicode strings. Disclaimer: LabVIEW for Windows has limited support for Unicode strings in the front panel controls and indicators. This is not an offically supported feature, meaning that it is not as fully tested as other released parts of the development and run-time environment. In addition this feature is not covered under standard product support and parts of this feature may change in future releases of LabVIEW, i.e. any code developed on this feature may require changes when upgrading to a newer version of LabVIEW. If you have any feedback or questions about using Unicode in LabVIEW post them as comments on this document or in the Developer Zone discussion forums. The code posted as part of this document has been developed and tested in LabVIEW 2009 running on Windows XP SP3 (English). The VIs are saved back to LabVIEW 8.6. Earlier versions of LabVIEW did not include all of this Unicode support and it is suggested that you upgrade to LabVIEW 8.6 or later if you want to use Unicode in your application development. This code has not been tested with other operating systems and I assume it will not work on non-Windows OSs, though I expect it should work in Windows Vista and Windows 7. What is Unicode? The answer to this question could cover many pages by itself so I will not attempt to provide a detailed or comprehensive explanation of the Unicode standard and its different character encodings. Please consult other online sources for this information. It will be helpful to be familiar with what Unicode is before proceeding with the rest of this document. What Every Programmer Absolutely, Positively Needs To Know About Encodings And Character Sets To Wor... Unicode and LabVIEW Unicode is not officially supported by the LabVIEW environment, but there is basic support of Unicode available as described in this document. Unicode can support a wide range of characters from many different languages in the same application. Windows XP (Vista, 7) is fundamentally built on Unicode and uses Unicode strings internally, but it also supports non-Unicode applications. By default LabVIEW on Windows (English) does not use Unicode strings, but rather uses Multibyte Character Strings (MBCS). The interpretation of MBCS is based on the current code page selected in the operating system. The current code page is set using the regional settings of the OS and determines how the bytes in the strings are rendered into characters on the screen. The most common code page is 1252 used by English Windows as well as several other Western languages and comprises the commonly known extended ASCII character set. When the regional settings in Windows are changed the OS may switch to a different code page for rendering strings. For example if you switch to Japanese, code page 932 will be used. Using different code pages allows LabVIEW to have localized versions of the development environment. All code pages include support for the basic ASCII characters used in the English language, as well as a local set of characters. Therefore if you have code page 932 selected, the operating system can still render ASCII characters as well as Japanese. Using Unicode instead of MBCS, an application can render characters from many different alphabets or scripts without switching code pages/regional settings. In fact all of the language scripts supported in the legacy code pages are included in Unicode and Unicode keeps being expanded with more characters every release. Because Unicode does support more than 65535 characters nowadays, a concept of planes was introduced in conjunction with surrogate pairs. Most of the characters covered by the code pages are included in Plane 0 of the Unicode standard and fit on a 2-byte representation, but more complex characters for mathematics or ancient scripts have been located on higher planes and thus use surrogate pairs as they code point value (and are thus coded on 4 bytes). A common encoding form of the Unicode character set is UTF-16. UTF-16, depending on byte order is called Big Endian or Little Endian. Unicode in LabVIEW is handled as little endian, also called UTF-16LE. This is important to know when looking at the hexadecimal representation of strings or working with Unicode text files. Table 1: Example of a few characters in ASCII and Unicode When writing Unicode to a plain text file you commonly prepend a Byte Order Mark (BOM) as the first two characters of the file. The BOM indicates to the file reader that the file contians Unicode text and if the byte order is big-endian or little-endian. The BOM for big-endian is 0xFE FF. The BOM for little-endian including LabVIEW is 0xFF FE. Windows Notepad and Wordpad can detect a Unicode file using the BOM and display their contents correctly. LabVIEW for Touchpanel on Windows CE LabVIEW for Touchpanel on Windows CE supports multi-byte character sets (MBCS) — specifically double-byte character sets (DBCS). Under this scheme, a character can be either one or two bytes wide. If it is two bytes wide, its first byte is a special "lead byte," chosen from a particular range depending on which code page is in use. Taken together, the lead and "trail bytes" specify a unique character encoding.” (" target="_blank"><SPAN>) A code page only contains the characters from one particular language such as Korean. Therefore MBCS can only support ASCII and one other set of language characters at a time and you need to select the specific code page for non-ASCII characters to be used in your application. To do that, look for the "language for non-Unicode programs" in the Windows Control Panel. Using Unicode in LabVIEW Common Use Cases A list of common uses of Unicode in an application developed using LabVIEW includes: Non Unicode All strings in the application used for display, user input, file I/O network communication (e.g. TCP/IP) are ASCII strings. This is the most common use of LabVIEW and does not require the use or consideration of Unicode. Non-Unicode = Extension of ASCII based on system code page ASCII technically only defines a 7-bit value and can accordingly represent 128 different characters including control characters such as newline (0x0A) and carriage return (0x0D). However ASCII characters in most applications including LabVIEW are stored as 8-bit values which can represent 256 different characters. The additional 128 characters in this extended ASCII range are defined by the operating system code page aka "Language for non-Unicode Programs". For example, on a Western system, Windows defaults to the character set defined by the Windows-1252 code page. Windows-1252 is an extension of another commonly used encoding called ISO-8859-1. Figure 1: LabVIEW ASCII string in Hex and normal display showing extended ASCII characters Unicode • The application reads Unicode data from a file or other source and displays it using a non-Unicode encoding (ASCII based) on the user interface. In this use case it is assumed the Unicode characters are limited to the subset supported by extended ASCII. • The application reads Unicode data from a file or other source and displays it as Unicode characters on the user interface. • The application internally uses characters encoded in a non-Unicode way, including input from the UI by the user, but needs to write data to a file or other destination in Unicode. • The application uses Unicode strings internally including input from the UI and writes Unicode data to a file or other destination. To use Unicode in LabVIEW you must enable Unicode support by adding the following setting in the LabVIEW.ini file. After making this change you must restart the development environment. [LabVIEW] UseUnicode=True The LabVIEW string controls and indicators have two private properties related to entering and displaying Non-Unicode (extended ASCII) or Unicode characters. These properties are not exposed through the regular property node; access to these properties is provided through subVIs as part of the examples included with this document. • Force Unicode Text • InterpretAsUnicode Force Unicode Text is a property which can be enabled and disabled on the string control using the context menu of the control or indicators. Figure 2: Setting the Force Unicode Text property on a string control The Force Unicode Text property affects how text entered from the keyboard is converted to a string (byte stream) in the diagram. If text is passed from an ASCII keyboard and this property is turned on, then the text is automatically converted to the Unicode equivalent of the ASCII characters. Typically this means that every single byte character is converted to the two byte Unicode equivalent. InterpretAsUnicode is a property which can be enabled on text elements of different UI controls and indicators, such as the text of a string control/indicator, the caption of a control/indicator, the Boolean text of a Boolean control/indicator, etc. This property controls whether a string value passed to the text element is interpreted as an ASCII or Unicode string. SubVIs provided with the example in this document allow you to pass strings to different UI elements and select whether you are passing an ASCII or Unicode string. Note: The state of the InterpretAsUnicode property of a string element may be changed dynamically if text is pasted or entered into the text element by the user. The display mode (InterpretAsUnicode) of the text element will automatically adapt to Unicode or ASCII depending on the type of text entered into the control. • If you paste a Unicode string into a text element the InterpretAsUnicode property is turned on. • If you paste a regular ASCII string into a text element the InterpretAsUnicode property is turned off. For example, if the display mode of a string control is Unicode (InterpretAsUnicode property on) and text is entered from an ASCII keyboard, the display mode will be switched to ASCII and the current value of the string control will be interpreted and displayed as ASCII characters. This can cause issues if the Force Unicode Text property is enabled for a string control. Entering regular ASCII text will cause the string control to interpret all data as ASCII, however the Force Unicode Text property will automatically convert the new characters entered in the control input to Unicode data. These two conditions combined will cause ASCII text to have a ‘space’ between each letter entered. These spaces are actually the extra Null byte, which are the second byte of each of the ASCII characters converted to Unicode. To resolve this issue you must detect the keyboard input and set the Text.InterpretAsUnicode property of the string control to True to properly display all text as Unicode. This is shown in the examples. When localizing the name of a control or indicator on the user interface you should always use the caption of the control instead of the label. The label is part of the code of the VI (similar to a variable name) and should not be changed. The caption should be displayed instead of the label and can be changed at run-time using the VIs provided. The Listbox, Multicolumn Listbox and Table controls have different behavior in terms of processing Unicode strings from the rest of the text elements described previously. These controls so not use the InterpretAsUnicode property. Instead they look for a BOM (Byte Order Mark) on any strings passed to them. If a string passed to these controls starts with a BOM (either 0xFFFE or 0xFEFF) then the string will be handled as Unicode. This allows you to mix both Unicode and ASCII strings in the same control. The examples include subVIs to pass strings to these controls and mark them as Unicode using the BOM. Figure 3: Adding the BOM to Unicode strings to update a listbox In order to display Unicode strings on your user interface the fonts you are using must have the necessary support for all the characters you are using. If you are using an extensive set of characters from languages using non-latin characters you should verify that your selected fonts have the necessary character support. Two specific fonts commonly available on Windows that include most Unicode characters are Arial Unicode MS and Lucida Sans Unicode. Included in the examples are two very simple VIs to convert an ASCII (MBCS) strings to Unicode and vice versa. These VIs use functions provided by Windows to detect the current code page used for the MBCS and handle the conversion. Figure 4: Converting between ASCII and Unicode strings in LabVIEW The conversion VIs are polymorphic and can handle scalar strings as well as 1D and 2D string arrays. The attached project includes a number of examples showing how to display Unicode strings on different UI controls and indicators. For each of these control types subVIs are included to pass strings to the control and their caption and specify whether the string should be treated as Unicode or not. The following UI controls and indicators are supported with specific VIs: Using control properties you can also access these controls inside of other data structures such as a cluster. Figure 5: Converting an ASCII string to Unicode and display it on a string indicator, 1D string array and 2D string array Figure 6: Converting an ASCII string array to Unicode and display it on a Listbox, Multicolumn Listbox and Table In order to read Unicode strings from a front panel string control there are a number of settings and that need to be made: 1. Enable the Force Unicode Text property of the string control from its context (right-click) menu. 2. Enable the Update Value while Typing property of the string control from its context menu. Figure 7: Enable the Force Unicode Text and Update Value while Typing properties 3. Add an event case to the Event Structure for the Value Change event of the string control. In the event case wire the control reference and new value from the event to the Tool_Unicode Update String VI as shown in the following diagram. This will update the string control as the user is typing to keep the InterpretAsUnicode property set to Unicode, while entering ASCII characters. Figure 8: Event Handler for the Value Changed event of the string control When reading and writing text files it is important to know if the contents of the file is ASCII or Unicode. The Read from Text File function in LabVIEW does not know whether the contents of the file is ASCII or Unicode. Therefore you need to check to see if the file contains a BOM (Byte Order Mark) at the beginning of the data read from the file and then process the data accordingly. Figure 9: Read a Unicode text file and process To write Unicode text to a file, convert all your strings to Unicode and then prepend the BOM before writing the final string to a file using the Write to Text File function. Figure 10: Write Unicode text to a file June 2014 - Added VI package of the Unicode tools. v 2.0.0.4 Includes wrappers for built-in functions for converting between LV Text and UTF-8. Function palette moved to Addons palette. February 2019 - Updated VI package to include minor bug fix that occurs in Windows 10 and LabVIEW 64-bit. Latest version is 2.0.1.5. The VIP will also be added to the LabVIEW Tools Network VIPM repository. A few comments on Unicode have begun here (of all places...). I am a big proponent of Unicode in LabVIEW, and using it the past year and a half has been both interesting and rewarding. Perhaps this document will be very useful for me to develop multilingual application. But I have some questions here. I have a project in which I have to use Korean language. You know this is a language which includes unicode characters. For example I am using lots of xy graph in my project. I have to change x axis and y axis name with korean words. But when I assign 한국어/조선말 word to x axis name, it doesn't appear like that. You have written a function which can update captions but not xy graph axis names. I can't update that names. I would like to apply same solution to this names but you have made "update caption vi" password protected. I understand but I really need what that vis include. This vi is very critical for me to understand how to update x and y axis name. You also made other critical vis password protected. For example updating string, boolean text vis are password protected too. Please let me know how to update any of my control's captions, boolean text, x and y axis names and etc. I really need to know that. Please help me. I didn't find your email address. I am very sorry for writing my problem here. mehmetned6@hotmail.com How can i change the TabCaption of the Tab Control? If you meant the caption of the whole tab control, you can use Tool_Unicode Update Caption.vi to make this change. If you are asking about the name of each individual tab in the tab control, then you can not change it programmatically. This is because the name of the tab is part of the programming logic and can not be changed at run-time. For example, if you wire the tab control terminal to a case structure in your diagram, the case names take on the names of the individual tabs, and these can't be changed at run-time. The tabs do not have a caption separate from their logical names, like the control does as a whole. To solve this problem you can overlay a classic string indicator with transparent background over each tab and update its value for your localized names. You'll need to make the color of the strings in the tab the same as the background so that the user doesn't see them. You also need to add code to detect mouse clicks on the overlay strings to change the active page of the tab control when the user clicks on the tab while the VI is running. Dear Christian, Here i send the example vi what i am facing problem in changing the TabCaption in different language's for your reference. Please check the Vi and replay me as soon as possible. siva, I don't see any example attached to your comment. I will add an example to the document above showing a solution to using and updating a tab control. The example uses a string indicator and transparent button overlaid on each tab. I have uploaded some basic VIs to do the conversion between various encodings, UTF-8, UTF-16, UTF16LE.!! I have added a Utility VI to read and write Unicode text files (similar functionality of the Notepad application in Windows).!! Does the "Export String" Feature of LabVIEW support unicode text? I want to use it to localize VIs and i have problems importing unicode txt files. Greets The Export/Import Strings feature in LabVIEW does not support Unicode. One option you can consider is the LTK LabVIEW Localization Toolkit from SEA. I already knew about the LTK, but thanks anyway. My reason for asking is, that i am currently writing a Thesis on possibilities to change languages in labview. I had much succes using the export/import-strings method, sadly the files Labview creates on my machine running german Windows XP and english Labview only support about 200 characters (my guess: Windows-1252 character set is beeing used). I wonder if there is any way around this limitation. greetings Using the VI server functions in LabVIEW (property nodes for VIs, controls, etc.) you can build your own Import/Export Strings functionality which could support Unicode for any parts of LabVIEW that does support Unicode. First of all: Thanks a lot for that unicode option! I'm now working a while with it and found some small problems. One of them is still Unicode and TabControl caption. Since LV6.1 we have the property Independent Label to have caption text independent from the logical names on TabControl pages. The hidden front panel elements are not really a solution for us. We don't have such fixed Tab size and structure. From my view it should be possible to have a tool for it like for the other front panel elements. The next problems are concerning the ring item list, hidden unicode on boolean, ..... Please have a look at the attachment (Unicode problems.zip). I hope you will support me like you did it before. LVTester: Unicode input need double space.vi >> See Read Unicode from String Control.vi example on how to create the behavior you need at run-time Tab Control: I added another function and example for the Tab Control Pages The other three problems you show are bugs in the Unicode implementation. I will forward them to the development team. For some reasons I still have to work with Labview 8.5. Can you save the examples for LV 8.5? I can not get the boolean text to work. the boolean caption works fine., Thanks. is the a big reason why the code is password protected? Christian, Is it possible to complete the library with a Tool_Unicode Update Boolean.vi (but for boolean with action "arming at release". the one in the library just work with the boolean with action "commute until release" I tryied to modifiy your vi but I just succeed for caption, not for text. (due to password on "tools_unicode_display_boolean.vi") regards jerome please refer the following section in this document and restart the LabVIEW. you should be able to View those properties. Caption--> Interpret As Unicode.!! This unicode is handy and all, but until it work or you have some way to make it work on the menu bar, it fall short of being a real solution for multiple langauge. in the western us we have a host of multi cutural people all looking at the same program. it would be nice if we could change it also on the fly to be a different langauge. or maybe a make a menu bar control that we use in place of it. that we could modifiy. Top secret, need to know basis. if you know it could change the balance of world power.. ok just kidding.. NI wishes for many parts of LabView to be off limits to the normal developer. I am sure they think there is a good reason for it. Pitol, As described in the document these properties are not officially supported by LabVIEW and are enabled through special settings in the development environment along with other similar interfaces that are not exposed publicly at this time. For Unicode support these are the only two relevant properties. Using the property nodes from the existing examples you can now use these properties on other controls as well if they are availbale for these controls. is there any hidden property to view enum Strings[] in Unicode? vix, Enum strings are part of the programming logic (e.g. they are used to name the cases of a case structure) and can therefore not be Unicode and can not be changed at runtime. The Strings[] property of an enum will return the current set of (ASCII) strings in the Enum definition. I am using the Report Generation toolkit to load excel worksheets. One of the sheets has mixed chinesse and english columns. When I read the column with the Chinese in it I get all ??? (question marks) back. How can I read excel cells into labview? I have my labview setup so I can copy chinese characters out of my worksheet and paste directly into a frontpanel text box and that works great. Just when I load the cell into memory then prepend the BOM then put into text box does not work. Thanks for any help Hi jboden, When you say "I have my labview setup so I can copy chinese characters out of my worksheet and paste directly into a frontpanel text box", I assume that means you followed the instructions on this document to enable Unicode in LabVIEW controls. While the UseUnicode token allows to support the display of Unicode text in LabVIEW, it has its limitation and doesn't make LabVIEW a fully Unicode compliant application. What it allows is to render a Unicode string properly on screen, but it doesn't guarantee that the LabVIEW string manipulation functions will work, nor any toolkit functions like the Report Generation Toolkit. That's why this feature is private and unsupported. For example here, I think it would be a safe bet to say that the ActiveX calls to Excel used in the Report Generation Toolkit are still trying to convert the Unicode Excel text to your system codepage (or system locale). Unless you change your system locale (aka "Language for non-Unicode programs") to Chinese (and be careful to know whether you are dealing with Simplified Chinese or Traditional Chinese), this conversion will fail and give you question marks. So in this case, I would advise you to change your system locale to the proper Chinese encoding and try this again. At that point, you won't need the UseUnicode token to display Chinese because your whole system will already be set to intepreting dual byte chars as Chinese. Let me know how this works for you. Good luck! I'm making multiple language i my application and I have used this article and find it very helpful. But now I have a problem to translate the pages on my tab control. You have an example of how it works which works fine. But if I add a new page to the tabcontrol the new page doesn't accept unicode. My question is what kind of property you have made on the first pages? Hope you understand what I mean. Best regards Simon Simon, When you add a new page to a tab control, the display on the new page is not Unicode by default. You need to change it to Unicode display, you can do that by copy/pasting a Unicode string from another application into the page tab. You need to do this just once manually and then the display will be Unicode. This needs to be done for each tab page individually. Unfortunately there is not a programmatic way to make this change. When I need a Unicode string for this or other similar purpose I normally use the Google Tranlsate page, translate English text to Chinese or another non-Latin character language, and copy the translated text. Google Translate is also how I created all of the strings in the examples (just in case any of the translations seem strange). Displaying Unicode on frontpanel elements works, as far as I noted, fine. But I could not manage displaying charakters on menue or frontpanel titels. Is this not possible or have I done something wrong? Regards Spirou Thanks. That was so much easier than I expected. I thought that I needed to do something programmatic. Thanks for the help. Best regards Simon” I have a fix for your problem, The vi's are done in LV 2012 How did you solve this? Thank you! I noticed that while working with this toolbox sometimes it happens a mess on the fonts and the numbers of numeric indicators are shown with meaningless characters. After my investigation I think it is because the numbers are somehow interpreted as unicode. Is there a function to force "Interpret as Unicode" to False for numbers of numerical controls and indicators? I was having the same issue, it was completely random and still I have no idea which is the true solution. But often, after recreating corrupt controls, the problem goes away. Unfortunately this does sound like a control that is getting corrupted. There are no properties or methods to control the numerical text in regards to Unicode, as Unicode should not be used with the value displayed in the control. Hello Christian, I'm not sure if this is a corrupted control. As a matter of fact, if you chnage UseUnicode to "False" in the LabVIEW.ini file and leopen LabVIEW, the control is shown properly. I think that for some strange reason, LV starts to interpret as Unicode the numbers of the control, and it shouldn't do this for any reason. I think this is clearly a LV bug, but in my application I can install a fix, but I can't upgrade to a newer major release of LV. This is why I asked for a property to force LV not to interpret the numbers as Unicode. You can fix the problem clicking on the control and enetering a new value, but you must do this for every numerical control that shows this problem!! Hi everybody, I find a way to reproduce a common bug displayed when using Unicode. The VI can be downloaded here (LV2012): 1: On the block diagram, double click the label with "n°" inside and copy&paste the text inside the second string of the array, 2: if needed refresh the screen (by scrolling down&up with the mouse) and now the string array will appear like this: 3: double click the label with "ABC" inside and copy&paste the text inside the first string of the array and now the string array will appear like this: Note: there is an extra space between "n" and "°" Can this be helpful to solve the bug? I confirm the bug in LV2015 too This is actually not a bug, but expected behavior. In your VI the ABC free string is in ASCII, while n° is in Unicode. When you paste either one into the string array you change the whole string array to either ASCII or UNICODE. ASCII ABC when converted to Unicode becomes 䉁. Actually it becomes 䉁C, as each Unicode character is 2 bytes, but LabVIEW does not display the extra byte which is the C. When UNICODE n° is converted to ASCII it becomes n °. The best solution in this cases is to use UNICODE for all strings, and use the UNICODE version of ABC. In LabVIEW it can be difficult to create a UNICODE version of a normal ASCII string. For this purpose, the version 2.0.0.4 VIP of the Unicode tools above, include VIs to convert between ASCII and Unicode (UTF-16LE). Another option is to:
https://forums.ni.com/t5/Reference-Design-Content/LabVIEW-Unicode-Programming-Tools/tac-p/3493045/highlight/true?profile.language=en
CC-MAIN-2020-40
en
refinedweb
In this article, you’ll learn the basics of parsing an HTML document using Python and the LXML library. Introduction Data is the most important ingredient in programming. It comes in all shapes and forms. Sometimes it is placed inside documents such as CSV or JSON, but sometimes it is stored on the internet or in databases. Some of it is stored/transferred or processed through the XML format, which is in many ways similar to HTML format, yet its purpose is to transfer and store data, unlike HTML, whose main purpose is to display the data. On top of that, the way of writing HTML and XML is similar. Despite the differences and similarities, they supplement each other very well. Both Xpath and XML are engineered by the same company W3C, which imposes that Xpath is the most compatible Python module to be used for parsing the XML documents. Since one of the programing principals which would push you towards the programming success is to “not reinvent the wheel”, we are going to refer to the W3C () consortium document and sources in regarding the syntax and operators on our examples to bring the concept of XPath closer to the people wishing to understand it better and use it on real-life problems. The IT industry has accepted the XML way of transferring data as one of its principles. Imagine if one of your tasks was to gather information from the internet? Copying and pasting are one of the simplest tools to use (as it is regularly used by programmers as well); it might only lead us to gather some simple data from the web, although the process might get painfully repetitive. Yet, in case if we have more robust data, or more web pages to gather the data from, we might be inclined to use more advanced Python packages to automate our data gathering. Before we start looking into scraping tools and strategies, it is good to know that scraping might not be legal in all cases, therefore it is highly suggested that we look at the terms of service of a particular web site, or copyright law regarding the region in which the web site operates. For purposes of harvesting the web data, we will be using several Python libraries that allow us to do just that. The first of them is the requests module. What it does is that it sends the HTTP requests, which returns us the response object. It only used if urge to scrape the content from the internet. If we try to parse the static XML file it would not be necessary. There are many parsing modules. LXML, Scrapy and BeautifulSoup are some of them. To tell which one is better is often neglected since their size and functionality differs from one another. For example, BeautifulSoup is more complex and serves you with more functionality, but LXML and Scrapy comes lightweight and can help you traversing through the documents using XPath and CSS selectors. There are certain pitfalls when trying to travel through the document using XPath. Common mistake when trying to parse the XML by using XPath notation is that many people try to use the BeautifulSoup library. In fact that is not possible since it does not contain the XPath traversing methods. For those purposes we shall use the LXML library. The requests library is used in case we want to download a HTML mark-up from the particular web site. The first step would be to install the necessary packages. Trough pip install notation all of the modules above could be installed rather easily. Necessary steps: pip install lxml(xpath module is a part of lxml library) pip install requests(in case the content is on a web page) The best way to explain the XML parsing is to picture it through the examples. The first step would be to install the necessary modules. Trough pip install notation all of the modules above could be installed rather easily. What is the XPath? The structure of XML and HTML documents is structurally composed of the nodes (or knots of some sort), which is a broader picture that represents the family tree-like structure. The roof instance, or the original ancestor in each tree, is called the root node, and it has no superior nodes to itself. Subordinate nodes are in that sense respectively called children or siblings, which are the elements at the same level as the children. The other terms used in navigating and traversing trough the tree are the ancestors and descendants, which in essence reflect the node relationship the same way we reflect it in real-world family tree examples. XPath is a query language that helps us navigate and select the node elements within a node tree. In essence, it is a step map that we need to make to reach certain elements in the tree. The single parts of this step map are called the location steps, and each of these steps would lead us to a certain part of the document. The terminology used for orientation along the axis (with regards to the current node) is very intuitive since it uses regular English expressions related to real-life family tree relationships. XPath Selector XPath selector is the condition using which we could navigate through an XML document. It describes relationships as a hierarchical order of the instances included in our path. By combining different segment of XML syntax it helps us traverse through to the desired parts of the document. The selector is a part of the XPath query language. By simply adding different criteria, the XPath selector would lead us to different elements in the document tree. The best way to learn the XPath selector syntax and operators is to implement it on an example. In order to know how to configure the XPath selector, it is essential to know the XPath syntax. XPath selector is compiled using an etree or HTML module which is included within the LXML package. The difference is only if we are parsing the XML document or HTML. The selector works similarly as a find method with where it allows you to select a relative path of the element rather than the absolute one, which makes the whole traversing less prone to errors in case the absolute path gets too complicated. XPath Syntax XPath syntax could be divided into several groups. To have an exact grasp of the material presented we are going to apply further listed expressions and functions on our sample document, which would be listed below. In this learning session, we are going to use a web site dedicated to scraping exercises. Node selection: Using “..” and “.” we can direct and switch levels as we desire. Two dot notations would lead us from wherever we are to our parent element, whereas the one dot notations would point us to the current node. The way that we travel from the “context node” (our reference node), which is the milestone of our search, is called “axes”, and it is noted with double slash //. What it does is that it starts traversing from the first instance of the given node. This way of path selection is called the “relative path selection”. To be certain that the // (empty tag) expression would work, it must precede an asterisk (*) or the name tag. Trough inspecting the element and copying its XPath value we are getting the absolute path. XPath Functions and Operators here are 6 common operators which are used inside the XPath query. Operators are noted the same way as in plain Python and serve the same purpose. The functions are meant to aid the search of desired elements or their content. To add more functionality to our XPath expression we can use some of LXML library functions. Everything that is written in-between the “[]” is called a predicate and it is used to closer describe the search path. Most frequently used functions are contains() and starts-with(). Those functions and their results would be displayed in the table below. Going Up and Down the Axis The conventional syntax used to traverse up and down the XPath axes is ElementName::axis. To reach the elements placed above or below our current axes, we might use some of the following axes. A Simple Example The goal of this scraping exercise is to scrape all the book genres placed at the left-hand side of the web site. It almost necessary to see the page source and to inspect some of the elements which are we aiming to scrape. from lxml import html import requests url = '' # downloading the web page by making a request objec res = requests.get(url) # making a tree object tree = html.fromstring(res.text) # navingating the tree object using the XPath book_genres = tree.xpath("//ul/li/a[contains(@href, 'categ')]/text()")[0:60] # since the result is the list object, we iterate the elements, # of the list by making a simple for loop for book_genre in book_genres: print (book_genre.strip()) Resources:
https://blog.finxter.com/html-parsing-using-python-and-lxml/
CC-MAIN-2020-40
en
refinedweb
import "github.com/justwatchcom/gopass/pkg/pwgen" Package pwgen implements multiple popular password generate algorithms. It supports creating classic cryptic passwords with different character classes as well as more recent memorable approaches. Some methods try to ensure certain requirements are met and can be very slow. cryptic.go external.go memorable.go pwgen.go rand.go validate.go wordlist.go const ( // CharAlpha is the class of letters CharAlpha = upper + lower // CharAlphaNum is the class of alpha-numeric characters CharAlphaNum = digits + upper + lower // CharAll is the class of all characters CharAll = digits + upper + lower + syms ) GenerateExternal will invoke an external password generator, if set, and return it's output. GenerateMemorablePassword will generate a memorable password with a minimum length GeneratePassword generates a random, hard to remember password GeneratePasswordCharset generates a random password from a given set of characters GeneratePasswordCharsetCheck generates a random password from a given set of characters and validates the generated password with crunchy GeneratePasswordWithAllClasses tries to enforce a password which contains all character classes instead of only enabling them. This is especially useful for broken (corporate) password policies that mandate the use of certain character classes for no good reason Cryptic is a generator for hard-to-remember passwords as required by (too) many sites. Prefer memorable or xkcd-style passwords, if possible. NewCryptic creates a new generator with sane defaults. NewCrypticForDomain tries to look up password rules for the given domain or uses the default generator. NewCrypticWithAllClasses returns a password generator that generates passwords containing all available character classes NewCrypticWithCrunchy returns a password generators that only returns a password if it's successfully validated with crunchy. Password returns a single password from the generator Package pwgen imports 15 packages (graph) and is imported by 3 packages. Updated 2020-08-17. Refresh now. Tools for package owners.
https://godoc.org/github.com/justwatchcom/gopass/pkg/pwgen
CC-MAIN-2020-40
en
refinedweb
import "k8s.io/kubernetes/pkg/scheduler/framework/plugins/tainttoleration" const ( // Name is the name of the plugin used in the plugin registry and configurations. Name = "TaintToleration" // ErrReasonNotMatch is the Filter reason status when not matching. ErrReasonNotMatch = "node(s) had taints that the pod didn't tolerate" ) New initializes a new plugin and returns it. TaintToleration is a plugin that checks if a pod tolerates a node's taints. func (pl *TaintToleration) Filter(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeInfo *framework.NodeInfo) *framework.Status Filter invoked at the filter extension point. func (pl *TaintToleration) Name() string Name returns name of the plugin. It is used in logs, etc. func (pl *TaintToleration) NormalizeScore(ctx context.Context, _ *framework.CycleState, pod *v1.Pod, scores framework.NodeScoreList) *framework.Status NormalizeScore invoked after scoring all nodes. func (pl *TaintToleration) PreScore(ctx context.Context, cycleState *framework.CycleState, pod *v1.Pod, nodes []*v1.Node) *framework.Status PreScore builds and writes cycle state used by Score and NormalizeScore. func (pl *TaintToleration) Score(ctx context.Context, state *framework.CycleState, pod *v1.Pod, nodeName string) (int64, *framework.Status) Score invoked at the Score extension point. func (pl *TaintToleration) ScoreExtensions() framework.ScoreExtensions ScoreExtensions of the Score plugin. Package tainttoleration imports 7 packages (graph) and is imported by 25 packages. Updated 2020-07-16. Refresh now. Tools for package owners.
https://godoc.org/k8s.io/kubernetes/pkg/scheduler/framework/plugins/tainttoleration
CC-MAIN-2020-40
en
refinedweb
February 13, 2018. When you join a new team, one of of the first things you'll figure out is their preferred coding style. They probably have a linter like rubocop or flake8 to delegate style arguments to computers who are the supreme pedantics. Sometimes though, you'll find reasons to change a repository’s coding style or to merge in another code base with different style choices. At a certain scale, you’ll probably just fixing things by by hand, but for projects that span thousands of files, no amount of caffeine can mask the pain. Linting is just one example of a broader category of problems: how do you refactor large codebases? Although a common answer is simply not to refactor at scale, that tends to cause codebases to degrade rapidly over time. It can be better! Most languages come with libraries to represent source code as s-expressions, which you can then modify in new ways to generate modified source code. For Ruby, two libraries that do just that are: ruby_parser which parses a Ruby file and outputs its s-expressions, ruby2ruby which translates s-expressions into Ruby code. Let’s try them out. (Full source code is available on Github.) Imagine we have a method incr which used to require two parameters, but most invocations incremented by 1, so we added 1 as the default value for the second parameter. Now we want to rewrite all calls to incr to only pass a second value if it is different from the default. So we could imagine some code looking like this: def incr(x, i = 1) x + i end incr(5, 100) incr(3, 1) incr(10, 1) incr(17, 17) That we want to rewrite to look like this: def incr(x, i = 1) x + i end incr(5, 100) incr(3) incr(10) incr(17, 17) The interesting parts of our code will be in a rewrite function, so first let’s write the scaffolding that function will live within, and then rush to the fun parts. The scaffolding is thankfully pretty short, requiring a few libraries, parsing stdin to s-expressions, rewriting those s-expressions into Ruby code, and then outputting to stdout. require 'ruby_parser' require 'ruby2ruby' def rewrite(expr) expr end parsed = RubyParser.new.parse(ARGF.read) puts Ruby2Ruby.new.process rewrite(parsed) Assuming you have the example input above in a file named refactor.input and you name this file refactor.rb, then you can run it using: ruby refactor.rb < refactor.input This is actually pretty cool, because we’re taking in some code, parsing it, and then recombining it, but the really fun part is what comes next: modifying it! Astute readers will notice the output version has some extraneous parentheses. I’m skimming over because it’s equivalent Ruby code, but it’s a bit annoying, and perhaps an astute reader will propose a non-regex based solution. The rewrite function gets called by ruby_parser on the top-level s-expression, from which you can recursively explore all the program's s-expressions. To explore the structure of individual s-expressions a bit, consider the input: incr(3, 1) Which is represented by a Ruby object whose structure is: s(:call, nil, :incr, s(:lit, 1), s(:lit, 2)) In order, these values are: :call is the kind of s-expression (some other common kinds are :block, :lasgn and :defn) for invoking a function, the second value, nil, doesn’t contain anything interesting for :call, although it does for other kinds, :incr is the name of the function invoked, remaining values are the parameters passed to invoked function. Reminding ourselves of our original problem statement: can we remove the second parameter of calls to the :incr function if they specify the same value as the default parameter? Yup, we now know enough to write that function: def rewrite(expr) if expr.is_a? Sexp if expr[0] == :call && expr[2] == :incr && expr.size == 5 && expr[4][0] == :lit && expr[4][1] == 1 # remove the second parameter expr.pop() end # descend into children expr.each { |x| rewrite(x) } end expr end There are three interesting parts here: incrand the second parameter is the new default parameter, a litof value 1, then we should remove it. :blocks-expression which is pretty boring. Stepping back, I think this is pretty awesome! We’re now programmatically rewriting code. We can use this to maintain even large codebases without doing huge amounts of manual toil. Let’s try it again, doing something a bit more ambitious. Imagine you’ve hired a bunch of Python programmers on our team who keep writing Python-style for loops instead of learning Ruby's each idiom, and that we want to rewrite them to use each. Your input might be something like: def count(lst) i = 0 for ele in lst i += 1 end end And you’d want this output: def count(lst) i = 0 lst.each { |ele| i = i + 1 } end Taking another stab at our rewrite function, this is a bit messier: def rewrite(expr) if expr.is_a? Sexp if expr[0] == :for lst = expr[1] param = expr[2] func = expr[3] expr.clear expr[0] = :iter expr[1] = Sexp.new(:call, lst, :each) expr[2] = Sexp.new(:args, param[1]) expr[3] = func end # descend into children expr.each { |x| rewrite(x) } end expr end A bit messier, but also a pretty neat demonstration of what you can do is once you start playing around with this technique. For example, you could imagine only doing this if the complexity of the refactored for loop is low enough. These are very contrived examples, but I think are enough to let you start dreaming about ways this technique could be applied usefully to your work, particularly if your work involves migrating large codebases to new implementations. Google's Large-Scale Automated Refactoring Using ClangMR is an interesting case study of doing that at immense scale, and Source Code Rejuvenation is Not Refactoring is another exploration of this topic. Most importantly, I think this is a good reminder to avoid falling into the "I'll just work through it" mindset for large migrations, which I believe can become the limit on your company's overall throughput. Thanks to Ingrid and KF for shaping this post.
https://lethain.com/refactoring-programmatically/
CC-MAIN-2020-40
en
refinedweb
Python library for parsing network topology data (eg: dynamic routing protocols, NetJSON, CNML) and detect changes. Project description Netdiff is a simple Python library that provides utilities for parsing network topology data of open source dynamic routing protocols and detecting changes in these topologies. Current features: - parse different formats - detect changes in two topologies - return consistent NetJSON output - uses the popular networkx library under the hood Goals: - provide an abstraction layer to facilitate parsing different network topology formats - add support for the most popular dynamic open source routing protocols - facilitate detecting changes in network topology for monitoring purposes - provide standard NetJSON output - keep the library small with as few dependencies as possible Currently used by(file='./stored-olsr.json') new = OlsrParser(url='') diff(old, new) In alternative, you may also use the subtraction operator: from netdiff import OlsrParser from netdiff import diff old = OlsrParser(file='./stored-olsr.json') new = OlsrParser(url='') or the older txtinfo plugin - netdiff.BatmanParser: parser for the batman-advanced alfred tool (supports also the legacy txtinfo format inherited from olsrd) - netdiff.Bmx6Parser: parser for the BMX6 b6m tool - netdiff.CnmlParser: parser for CNML 0.1 - netdiff.NetJsonParser: parser for the NetJSON NetworkGraph format - netdiff.OpenvpnParser: parser for the OpenVPN status file Initialization arguments Data can be supplied in 3 different ways, in the following order of precedence: - data: dict or str representing the topology/graph - url: URL to fetch data from - file: file path to retrieve data from Other available arguments: - timeout: integer representing timeout in seconds for HTTP or telnet requests, defaults to None - verify: boolean indicating to the request library whether to do SSL certificate verification or not Initialization examples Local file example: from netdiff import BatmanParser BatmanParser(file='./my-stored-topology.json') HTTP example: from netdiff import NetJsonParser url = '' NetJsonParser(url=url) Telnet example with timeout: from netdiff import OlsrParser OlsrParser(url='telnet://127.0.1', timeout=5) HTTPS example with self-signed SSL certificate using verify=False: from netdiff import NetJsonParser OlsrParser(url='', verify=False) NetJSON output Netdiff parsers can return a valid NetJSON NetworkGraph object: from netdiff import OlsrParser olsr = OlsrParser(url= format.
https://pypi.org/project/netdiff/0.6/
CC-MAIN-2020-40
en
refinedweb
Notice: On April 23, 2014, Statalist moved from an email list to a forum, based at statalist.org. [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] Re: st: mat list I would look at -help estimates- and use -estimates store- and -estimates restore-. If your problem is too complicated for that than it will certainly be so complicated that the probability of a hard to find bug is (near) 100% when referring to coefficients by column number. If you decide to use scalars, than don't forget to use -tempname-s for them (variables and scalars share the same namespace, and this can cause problems), see the manual entry for -scalar-. -- Maarten On Tue, Apr 10, 2012 at 2:50 PM, Chiara Mussida <cmussida@gmail.com> wrote: > for the calculations i have to use coef from different models, that's > why i need to store them in separate matrices and not only store and > work on the last model estimates. For each of my reg k will create a > different mat with obviously a diff name and thereafter i will play > with coef. Do you think it is more deficient to save them as scalars > with more simple name than b[row, col ]? > > On 10/04/2012, Maarten Buis <maartenlbuis@gmail.com> wrote: >> On Tue, Apr 10, 2012 at 12:33 PM, Chiara Mussida wrote: >>> i run a reg and i need to use the coefficients for calculations purposes. >>> After my reg i gen a matrix with my betas: >>> mat beta=e(b) >>> this is a 1*k vector of coef. >>> To use them for my manipulation is it enough to refer to them- such: >>> beta[1,col] or it is better to gen a scalar for each matrix element? >> >> I have programmed quite a bit of post-estimation commands and I have >> never been in a situation where I had to refer to coefficients by >> column number. That seems to me an extremely bug-prone method. >> >> I find it often convenient to refer to specific coefficients as >> -_b[varname]-. This is mainly because it is easier to see in your code >> what coefficient you refer to, so the code becomes easier to read and >> debug. This trick does not require that you store your coefficients in >> a separate matrix, but it does refer to the currently active model >> (which you can manipulate using -est store- and -est restore-). >> >> Hope this helps, >> Maarten >> >> -------------------------- >> Maarten L. Buis >> Institut fuer Soziologie >> Universitaet Tuebingen >> Wilhelmstrasse 36 >> 72074 Tuebingen >> Germany >> >> >> >> -------------------------- >> * >> * For searches and help try: >> * >> * >> * >> > > > -- > Chiara Mussida > PhD candidate > Doctoral school of Economic Policy > Catholic University, Piacenza (Italy) > * > * For searches and help try: > * > * > * -- -------------------------- Maarten L. Buis Institut fuer Soziologie Universitaet Tuebingen Wilhelmstrasse 36 72074 Tuebingen Germany -------------------------- * * For searches and help try: * * *
https://www.stata.com/statalist/archive/2012-04/msg00412.html
CC-MAIN-2020-40
en
refinedweb
Rene Engelhard wrote: >>OOo already works with GCC's STL (there are a few namespace-shuffling >>patches needed, but they're already in the upstream tree). The only reason > > > Sure it will work? See > (and > the referenced >) Yeah, it doesn't work yet. *sigh* The STLport dependency is still pretty annoying, but I guess we're stuck with it for a while longer.
https://lists.debian.org/debian-openoffice/2005/09/msg00071.html
CC-MAIN-2018-05
en
refinedweb
Sencha Touch is rich in APIs which makes it best HTML5 framework for hybrid mobile application development. Along with this important API for accessing device File System with the leverage of Apache Cordova. There are many other standard APIs available for the storage like localstorage, SQLite which supports many browsers but device File System API are yet to evolve in mobile platform. In this article we will see how we can use Apache Cordova File API abstraction in Sencha Touch to interact with your mobile device file system. Problem Statement In many mobile native application you might have to interact with native device file system for various purposes like file system, reading the file, downloading the file from remote url, open the file to read, directory listing etc. In this article we will see how you can use Apache Cordova File API abstraction in Sencha Touch to interact with your mobile device file system to perform standard following operations: - Reading File System - Downloading the file from remote url and - Open the file with inAppBrowser Pre-requisite - Sencha Touch development environment (Sencha cmd, Touch SDK ( V – 2.3.X ) etc) - NodeJS 10.x - Cordova 2.x - Android development environment setup - Working knowledge of Sencha Touch Details Step 1 : Create a Sencha Touch application using Sencha CMD Run the following command to generate your Sencha Touch app sencha –sdk <sdk path> generate app <app-namespace> <app-location> E.g. sencha –sdk /Users/wtc/sdk/touch-2.3.0 generate app WTC /Applications/XAMPP/htdocs/STFile Step 2 : Enable support for Cordova Now change your directory to your app directory and run the following command To change the directory to app directory cd /Applications/XAMPP/htdocs/STFile To enable support for Cordova sencha cordova init Step 3 : Using Cordova File plugin and InAppBrowser plugin In order to use the Apache Cordova File API in your app you need to add the Cordova plugin. To add these plugins change to cordova directory in your app directory. cd /Applications/XAMPP/htdocs/STFile/cordova Run the file command to add File plugins cordova plugin add cordova plugin add Run the file command to add inAppBrowser plugin cordova plugin add Now we are ready to use the native device file system using the Sencha Touch File API which is Cordova file abstractions Step 4: Accessing the device File System Ext.device.FileSystem is singleton class which provides an API to navigate file system hierarchies on your device. Past following piece of code in launch function in app.js file , which will tell you the file system root path for your device. Ext.device.FileSystem.requestFileSystem({ type: window.PERSISTENT, size: 0, success: function(fileSystem) { Ext.Msg.alert('File System',fileSystem.fs.root.fullPath); }, failure: function(error) { Ext.Msg.alert('File System path', error); } }); Now build the app for native device. Go to application directory and change the cordova.local.propeties file for Android native package by changing the platform value as below: cordova.platforms=android Now run the following command from application directory. sencha app build native Deploy the .apk file in your emulator or real device to test the app. You will see following output Step 4: Download file from remote url and open using inAppBrowser of device In order to download the remote file and to view you need to do following steps: - Get the filesystem instance - Get file entry instance for file to be downloaded and - download file - Open the file using inAppBrowser Remove the launch function in app.js and copy following code and paste in app.js file launch: function() { // Destroy the #appLoadingIndicator element Ext.fly('appLoadingIndicator').destroy(); var me = this; var downloadBtn = { xtype:'button', text: 'Download File', height:10,width:200, handler: function() { me.downloadFile(); } }; var openBtn = { xtype:'button', text: 'Open File', height:10,width:200, handler: function() { me.openFile('sample.pdf'); } }; var cont = Ext.create('Ext.Container', { centered:true, items: [ downloadBtn,{xtype:'spacer',padding:20},openBtn ] }); Ext.Viewport.add(cont); }, downloadFile: function() { var me = this; var fileName = 'sample.pdf'; // Edit this file name var remoteDocUrl = ''; // Edit this file url this.getLocalFileEntry(fileName, function(fileEntry) { if (!Ext.isEmpty(fileEntry.download) && Ext.isFunction(fileEntry.download)) { fileEntry.download({ source: remoteDocUrl, trustAllHosts: true, options: {}, success: function(fe) { Ext.Msg.alert(fileName ,'File downloaded successfully.'); }, failure: function(error) { Ext.Msg.alert('Download fail'); } }); } else { console.log('download API not available!!'); } }, function(error) {}); }, getLocalFileEntry: function(fileName, successCbk, failureCbk) { Ext.device.FileSystem.requestFileSystem({ type: window.PERSISTENT, size: 0, success: function(fileSystem) { var newFilePath = fileSystem.fs.root.fullPath + '/' + fileName; var fileEntry = new Ext.device.filesystem.FileEntry(newFilePath, fileSystem); successCbk(fileEntry);}, failure: function(error) { console.log('Failed to get the filesystem: ' + error); failureCbk(error); Ext.Msg.alert('getLocalFileEntry fail' + error); } }); }, openFile: function(fileName){ //get the local file path using the file name and open the document this.getLocalFileEntry(fileName, function(fileEntry){ //open the file using InAppBrowser Ext.device.Browser.open({ url: fileEntry.path, showToolbar: true, options: 'location=yes', listeners: { loadstart: function() { console.log('Started loading..'); Ext.Msg.alert('Started loading..'); }, loadstop: function() { console.log('Stopped loading..'); Ext.Msg.alert('Stopped loading..'); }, loaderror: function() { console.log('Error loading..'); Ext.Msg.alert('Error loading..'); },close: function() { console.log('Closing.'); Ext.Msg.alert('Closing..'); } }}); }, function(error){ });}, Create an native package and install the package on your device, you will able to see following view. Tap on Download File to download the file and once file downloaded i.e. you will see an alert ( Note : In application you can show loading indicator as this is just to show the functionality using alert feature ). Observe the downloaded file in your device file system. Now to open the downloaded file tap on Open File button to view the file in the device inAppBrowser Note : If fails to open using inAppBrowser in Android then you need to make some changes to Cordova InAppBrowser plugin as below cordova/platforms/android/src/org/apache/cordova/inappbrowser/InAppBrowser.java, execute method line no 147, comment out replace with result = openPdfFile(url); i.e. else part will look as below else { Log.d(LOG_TAG, "in blank"); //result = showWebPage(url, features); result = openPdfFile(url);} Also add following method to the file public String openPdfFile(String url) { try { Intent intent = new Intent(Intent.ACTION_VIEW); intent.setDataAndType(Uri.parse(url), "application/pdf"); intent.setFlags(Intent.FLAG_ACTIVITY_CLEAR_TOP); cordova.getActivity().startActivity(intent); return ""; } catch (android.content.ActivityNotFoundException e) { LOG.e(LOG_TAG, "Error opening " + url + ": " + e.toString()); return e.toString(); }} There are more APIs available in Sencha Touch to perform additional operations on file and interact with native device file system. Conclusion In this article you saw how we can leverage the Apache Cordova FIle API in Sencha Touch and perform different file operations and interact with native device file system, which are common in any enterprise mobile application. Excellent tutorial! I am developing an app in Sencha Touch and need exactly such tutorial but as applied to getting the Contacts from the smartphone device. My development environment was set up fine and I added one plugin – cordova contacts plugin, it looks like it was added correctly, but I don’t seem to get any results past step 3. Do you have any tutorials specifically for getting the Contacts? I purchased your Sencha Touch Cookbook Second Edition but the Contacts example there does not use Sencha Touch 2.3 & Cordova, it’s more of a strictly cordova build (this tutorial is perfect for my needs, albeit for file system). The reference to the Sencha API being shown in the “next chapter” is in Chapter 10, the last chapter of the book, so I did not find the Sencha APi example. Any help would be appreciated.
https://wtcindia.wordpress.com/2014/04/08/work-with-files-on-your-device-using-sencha-touch-2-3-cordova/
CC-MAIN-2018-05
en
refinedweb
The AT24Cxx series of chips provide a mechanism for storing data that will survive a power outage or battery failure. These EEPROMs are available in varying sizes and are accessible using the I2C interface. Hardware The chip used to develop this library is one that is available on a common DS3231 RTC module with EEPROM memory module: Software The following software displays the first 16 bytes of the EEPROM, writes values to those bytes and then displays the new contents: using System.Threading; using Microsoft.SPOT; using Netduino.Foundation.EEPROM; namespace AT24C32Test { public class Program { public static void Main() { var eeprom = new AT24Cxx(0x57); var memory = eeprom.Read(0, 16); for (ushort index = 0; index < 16; index++) { Debug.Print("Byte: " + index + ", Value: " + memory[index]); } eeprom.Write(3, new byte[] { 10 }); eeprom.Write(7, new byte[] { 1, 2, 3, 4 }); memory = eeprom.Read(0, 16); for (ushort index = 0; index < 16; index++) { Debug.Print("Byte: " + index + ", Value: " + memory[index]); } Thread.Sleep(Timeout.Infinite); } } } API Constructors AT24Cxx(byte address = 0x50, ushort speed = 10, ushort pageSize = 32, ushort memorySize = 8192) Creates a new AT24Cxx object with the specified address, communication speed, memory size and page size. The pageSize is the number of bytes that can be written in a single transaction before the address pointer wraps to the start of the page. This value is used to assist with higher performance data writes. The memorySize parameter is the number of bytes in the EEPROM. This is used to prevent wrapping from the end of the EEPROM back to the start. Methods byte[] Read(ushort startAddress, ushort amount) The Read method will read a number of bytes ( amount)from the specified address. void Write(ushort startAddress, byte[] data) The Write method writes a number of bytes to the EEPROM.
http://netduino.foundation/Library/ICs/EEPROM/AT24Cxx/
CC-MAIN-2018-05
en
refinedweb
Hi, How should you deal with namespaces in XML? As soon as there is a namespace (and there is in the string I currently get...), I can't manage to read any data out from the XML object I put the string inside. Here is a samle piece of code I've been trying around with: var myString = '<ArrayOfInt xmlns:<int>0</int> <int>0</int> <int>0</int></ArrayOfInt>' var myStringNoNS = '<ArrayOfInt> <int>0</int> <int>0</int> <int>0</int></ArrayOfInt>' var myXML = XML(myString); var myXMLNoNs = XML(myStringNoNS); $.writeln(myXML.children().length()); // 0 $.writeln(myXMLNoNs.children().length()); // 3 // Trying to remove the namespace part using the built-in method for it: var myXML2 = myXML.removeNamespace(myXML.namespace()); $.writeln(myXML.namespace() == myXML2.namespace() ); // True... So the Namespace was not removed. // Writing out namespace(): My question is: Primarily: how do I read the contents of the nodes as they exist in myXML Or if that is not possible: How do I get rid of the namespace using a built-in function such as removeNamespace. Of cource I could make some string replacements in the original xml string, or alter the incoming data... but that's not my question. That's more of a last way out. Thanks, Andreas See g_namespaces It looks like adding default xml namespace "" before your myXML declaration will allow you to access its non-namespaced (or default namespaced) children. I can't say I understand the syntax, but it works. Jeff Great! (There's just a missing "=" in your example) default xml namespace = "" Thank you! Sometimes it works... but very often the ExtendScript ToolKit "hangs" when I try to run, and sometimes I just get "Execution finished" as result, and nothing was written out in the console window... If I press F11 to enter debugging instead of running, the test script (with the default namespace line) becomes inaccessible almost every run, and I have to restart Indesign and / or ESTK. I have restarted my computer several times as well, the last couple of hours. I have also trashed the InDesign preferences, but the problem persists. Also, the default namespace seems to stick to the InDesign session or something... since removing it and running the same script (in the ESTK by pressing F5) returns the same (correct) result, without the default namespace. And the debugger/InDesign freezes/hangs even with the default xml statement removed, as long as I have tried to run with it before. Can these problems be repeated, are they known? These tests I'm doing in CS4 (since the code need to work for CS4). Whoops. Sorry about the missing "=". I can reproduce your problems running the script (on CS4 on the Mac) from the ESTK, but only if I'm running it in a persistent custom engine. I assume you are too, which would explain why the default xml namespace persists across invocations of your script. You may want to get that namespace declaration (and whatever you're doing that depends on it) out of the global namespace if setting it for a whole session is a problem. I've run across at least one other bug running scripts involving JavaScript XML in the ESTK, but that one was in CS5. It was similar behavior: a script that would work fine double-clicking it in the Scripts palette would hang indefinitely in the ESTK.
https://forums.adobe.com/thread/892660
CC-MAIN-2018-05
en
refinedweb
[PM] Remove the old 'PassManager.h' header file at the top level of LLVM's include tree and the use of using declarations to hide the 'legacy' namespace for the old pass manager. This undoes the primary modules-hostile change I made to keep out-of-tree targets building. I sent an email inquiring about whether this would be reasonable to do at this phase and people seemed fine with it, so making it a reality. This should allow us to start bootstrapping with modules to a certain extent along with making it easier to mix and match headers in general. The updates to any code for users of LLVM are very mechanical. Switch from including "llvm/PassManager.h" to "llvm/IR/LegacyPassManager.h". Qualify the types which now produce compile errors with "legacy::". The most common ones are "PassManager", "PassManagerBase", and "FunctionPassManager".
https://reviews.llvm.org/rL229094
CC-MAIN-2018-05
en
refinedweb
al_get_opengl_extension_list man page al_get_opengl_extension_list — Allegro 5 API Synopsis #include <allegro5/allegro_opengl.h> ALLEGRO_OGL_EXT_LIST *al_get_opengl_extension_list(void) Description Returns the list of OpenGL extensions supported by Allegro, for the given display. Allegro will keep information about all extensions it knows about in a structure returned by al_get_opengl_extension_list. For example: if (al_get_opengl_extension_list()->ALLEGRO_GL_ARB_multitexture) { //use it } The extension will be set to true if available for the given display and false otherwise. This means to use the definitions and functions from an OpenGL extension, all you need to do is to check for it as above at run time, after acquiring the OpenGL display from Allegro. Under Windows, this will also work with WGL extensions, and under Unix with GLX extensions. In case you want to manually check for extensions and load function pointers yourself (say, in case the Allegro developers did not include it yet), you can use the al_have_opengl_extension(3) and al_get_opengl_proc_address(3) functions instead.
https://www.mankier.com/3/al_get_opengl_extension_list
CC-MAIN-2018-05
en
refinedweb
Navigation A. Introduction This homework is intended to reinforce various OOP concepts. You won't have to write much code, and it's unlikely you'll have to do much debugging. However, the problems will be pretty tricky. Make efficient use of your time if you're stuck by working with other students or seeking help. As usual, you can obtain the skeleton with $ git fetch shared $ git merge shared/hw3 $ git push B. TrReader The standard abstract class java.io.Reader is described in the on-line documentation. It is a general interface to "types of object that have a .read operation defined on them." The idea is that each time you read from a Reader, it gives you the next character (or characters) from some source; just what source depends on what subtype of Reader you have. A program defined to take a Reader as a parameter doesn't have to know what subtype its getting; it just reads from it. Create a class that extends Reader, and provides a new kind of Reader, a TrReader, that translates the characters from another Reader. That is, a TrReader's source of characters is some other Reader, which was given to the TrReader's constructor. The TrReader's read routine simply passes on this other Reader's characters, after first translating them. public class TrReader extends Reader { /** A new TrReader that produces the stream of characters produced * by STR, converting all characters that occur in FROM to the * corresponding characters in TO. That is, change occurrences of * FROM.charAt(0) to TO.charAt(0), etc., leaving other characters * unchanged. FROM and TO must have the same length. */ public TrReader(Reader str, String from, String to) { // FILL IN } // FILL IN } For example, we can define Reader in = new InputStreamReader(System.in); which causes in to point to a Reader whose source of characters is the standard input (i.e., by default, what you type on your terminal, although you can make it come from a file if desired). This means that while (true) { int c = in.read(); if (c == -1) { break; } System.out.print ((char) c); } would simply copy the standard input to the standard output. However, if we write TrReader translation = new TrReader(in, "abcd", "ABCD"); while (true) { int c = translation.read(); if (c == -1) { break; } System.out.print((char) c); } then we will copy the standard input to the standard output after first capitalizing all occurrences of the letters a—d. If we have defined /** A TrReader that does no translation. */ TrReader noTrans = new TrReader(someReader, "", ""); then a call such as noTrans.read() simply has the same effect as someReader.read(), and while (true) { int c = notrans.read(); if (c == -1) { break; } System.out.print((char) c); } just copies the contents of someReader to the standard output, just as if we substituted someReader.read() for notrans.read(). By the way, the program above will work even if you only implement the int read(char[], int, int) method of your reader. This is because the default implementation of int read() uses your read(char[], int, int) method, as you can see in the source code for the Reader class. Notes The TrReaderTest contains a smidgen of testing for your convenience—nothing thorough, however. The read() method (which you are not required to implement) returns an int, but we're supposedly working with items of type char. The reason for this is that read() returns a -1 if there's nothing left to read, and -1 is not a valid char. You should not create a new char[] array for the read(char[], int, int) method. Use the one that is given to you. Also keep in mind that all Readers (including the built in ones) must implement this particular version of read. If you get an error that contains "unreported exception IOException;" when you're trying to make, what you're missing is throws IOException in one of your method declarations (just before the opening {). We haven't learned what this means yet, so don't worry (or read Chapter 11 of HFJ) and just do it. C. Applying TrReader Using the TrReader class from part B, fill in the following function. You may use any number of new operations, one other (non-recursive) method call, and that's all. In addition to String, you are free to use any library classes whose names contain the word Reader (check the on-line documentation), but no others. See the template file Translate.java. Feel free to include unit tests of your translate method. /** The String S, but with all characters that occur in FROM changed * to the corresponding characters in TO. FROM and TO must have the * same length. */ static String translate(String S, String from, String to) { // NOTE: This try {...} catch is a technicality to keep Java happy. try { // FILL IN } catch (IOException e) { return null; } } D. WeirdList Fill in the Java classes on the next page to agree with the comments. However, do not use any if, switch, while, for, do, or try statements, and do not use the ?: operator. The WeirdList class may contain only private fields. The methods in WeirdListClient should not use recursion. DO NOT FIGHT THE PROBLEM STATEMENT! I really meant to impose all the restrictions I did in an effort to direct you into a solution that illustrates object-oriented features. You are going to have to think, but the answers are quite short. See the skeleton templates in WeirdList.java, IntUnaryFunction.java, and WeirdListClient.java. The skeleton also provides some cursory tests. /** An IntUnaryFunction represents a function from * integers to integers. */ public interface IntUnaryFunction { /** The result of applying this function to X. */ int apply (int x); } /** A WeirdList holds a sequence of integers. */ public class WeirdList { /** The empty sequence of integers. */ public static WeirdList EMPTY = // FILL IN; /** A new WeirdList whose head is HEAD and tail is * TAIL. */ public WeirdList(int head, WeirdList tail) { /* FILL IN */ } /** The number of elements in the sequence that * starts with THIS. */ public int length() { /* FILL IN */ } /** Apply func.apply to every element of THIS WeirdList in * sequence, and return a WeirdList of the resulting values. */ public WeirdList map(IntUnaryFunction x) { /* FILL IN */ } /** Return a string containing my contents as a sequence of numerals * each preceded by a blank. Thus, if my list contains * 5, 4, and 2, this returns " 5 4 2". */ @Override public String toString() { /* FILL IN */ } /** Print the contents of THIS WeirdList on the standard output * (on one line, each followed by a blank). Does not print * an end-of-line. */ public void print() { /* FILL IN */ } // FILL IN WITH *PRIVATE* FIELDS ONLY. // You should NOT need any more methods here. } // FILL IN OTHER CLASSES HERE (HINT, HINT). class WeirdListClient { /** Return the result of adding N to each element of L. */ static WeirdList add(WeirdList L, int n) { /* FILL IN */ } /** Return the sum of the elements in L. */ static int sum(WeirdList L) { /* FILL IN */ } }
http://inst.eecs.berkeley.edu/~cs61b/fa16/materials/hw/hw3/index.html
CC-MAIN-2018-05
en
refinedweb
The DS323x ICs offer a low cost accurate real time clock with a temperature compensation crystal oscillator. This range of chips offers the following functionality: - Temperature compensation - Battery backup - I2C (DS3231) and SPI (DS3234) interfaces. - Two programmable alarms - 32.768 KHz square wave output Purchasing A variety of modules are available including low cost modules with integrated EEPROM: Hardware The DS3231 real time clock module (see image below) requires only four (for simple timekeeping) or five (for alarms) connections The 32K pin outputs the 32,768 Hz clock signal from the module. This signal is only available when power is supplied by Vcc, it is not available when the module is on battery power. The orange wire is only required if the alarms are being used to interrupt the Netduino. Software The following application sets an alarm to trigger at when the current second is equal to 15. The interrupt routine displays the time and then clears the interrupt flag: using System; using System.Threading; using Microsoft.SPOT; using Microsoft.SPOT.Hardware; using SecretLabs.NETMF.Hardware; using SecretLabs.NETMF.Hardware.NetduinoPlus; using Netduino.Foundation.RTC; namespace DS3231Test { public class Program { public static void Main() { DS3231 rtc = new DS3231(0x68, 100, Pins.GPIO_PIN_D8); rtc.ClearInterrupt(DS323x.Alarm.BothAlarmsRaised); rtc.SetAlarm(DS323x.Alarm.Alarm1Raised, new DateTime(2017, 10, 29, 9, 43, 15), DS323x.AlarmType.WhenSecondsMatch); rtc.OnAlarm1Raised += rtc_OnAlarm1Raised; Thread.Sleep(Timeout.Infinite); } static void rtc_OnAlarm1Raised(object sender) { DS3231 rtc = (DS3231) sender; Debug.Print("Alarm 1 has been activated: " + rtc.CurrentDateTime.ToString()); rtc.ClearInterrupt(DS323x.Alarm.Alarm1Raised); } } } Connect the interrupt pin of the DS3231 real time clock to digital input pin 8 on the Netduino. API Enums Alarm Type of alarm event that has been raised or the type of alarm that should be raised: Alarm1Raised Alarm2Raised BothAlarmsRaised AlarmType The two alarms have a number of possible options that may be configured with the SetAlarm method: Alarm 1 OncePerSecond- Event raised every second WhenSecondsMatch- Event raised with the seconds in the current time matches the seconds in alarm 1. WhenMinutesSecondsMatch- Event raised when both the seconds and the minutes in the current time match the time stored in alarm 1. WhenHoursMinutesSecondsMatch- Event raised when the hours, minutes and seconds in the current time match the time stored in alarm 1. WhenDateHoursMinutesSecondsMatch- Event raised when the current date and time match the time stored in alarm 1. WhenDayHoursMinutesSecondsMatch- Event raised when the day, hour, minute and second in the current time match the time in stored in alarm 1. Alarm 2 OncePerMinute- Event is raised every minute. WhenMinutesMatch- Event is raised when the minutes in the current time match the minutes in alarm 2. WhenHoursMinutesMatch- Event raised when the hours and minutes in the current time match those stored in alarm 2. WhenDateHoursMinutesMatch- Event raised when the date, hours and minutes of the current time match the those stored in alarm 2. WhenDayHoursMinutesMatch- Event raised when the day, hours, minutes match those stored in alarm 2. Constructors Properties DateTime CurrentDateTime Get or set the current time. Temperature Temperature of the sensor in degrees centigrade. Methods SetAlarm(Alarm alarm, DateTime time, AlarmType type) Set one of the two alarms where alarm indicates which alarm should be set and type determines which event will generate the alarm. The time parameter provides the date / time information for the specified alarm. EnableDisableAlarm(Alarm alarm, bool enable) Enable or disable the specified alarm. ClearInterrupt(Alarm alarm) Clear the interrupt for the specified alarm. This must be called in the alarm event handlers prior to exit. Events OnAlarm1Raised(object sender) and OnAlarm2Raised(object sender) Raised when the appropriate alarm is triggered. If both alarms are triggered at the same time then both events will be triggered. It is important that the event handler clears the event by calling the ClearInterrupts method before exiting. If the interrupts are not cleared then future events will not be triggered.
http://netduino.foundation/Library/RTCs/DS323x/
CC-MAIN-2018-05
en
refinedweb
What to put on MX record when using Ubuntu Posfix SMTP? I configured SMTP postfix from Ubuntu 14. I already reroute my domain from another site to use digitalocean namespace. I also configured my digital ocean control panel and added my domain name, A, Cname but I do not know what to put on MX record? Do I need to put something on MX record when I want to use SMTP postfix from Ubuntu 14?
https://www.digitalocean.com/community/questions/what-to-put-on-mx-record-when-using-ubuntu-posfix-smtp
CC-MAIN-2018-05
en
refinedweb
Follow-up Comment #2, bug #16302 (project gnustep): This combination used to work. It was the original one I implemented :-) I think it should be sufficient to change this method to just call callback: in the Cygwin case. That is, put the usage of ET_WINMSG into #ifndef __CYGWIN__ #endif _______________________________________________________ Reply to this item at: <> _______________________________________________ Nachricht geschickt von/durch Savannah
http://lists.gnu.org/archive/html/bug-gnustep/2007-10/msg00064.html
CC-MAIN-2018-05
en
refinedweb
#include <Wire.h>#include <LiquidCrystal_I2C.h>LiquidCrystal_I2C lcd(0x27 20,4); //set the LCD address to 0x27 void setup(){lcd.init();lcd.backlight();lcd.setCursor(0, 0);lcd.print("b2cqshop")(0x30+val/100);lcd.print(0x30+(val%100)/10);lcd.print('.');lcd.print(0x30+val%10);delay(100);} Now I have connected Pin analog 4 (SDA ) and analog 5 (SDL) to the resepective Two wire interface for the board on the back of the 20 x 4 LCD as well as VCC and GND. How do I know which address to use as well? I have seen a lot of 0x27 stuff .i read how someone wrote you have to also code wire.begin but that just shows an error. I was wondering myself what wire.begin ?How do I know which address to use as well? I have seen a lot of 0x27 stuffI was using the LiquidCrystal_I2C lcd( 0x27, 20, 4); to initiate the LCD... how do I "open the serial monitor" This is a dumb question, I know...but how do I "open the serial monitor" I uploaded the code for the I2C scanner and it compiled fine...but now what? I did try the new liquid crystal library but which version are you referring to? There are like 10 downloads on that bitbucket page.....so confusing....and is this for 1.0 or 1.0.1? I saw some posts of some sainsmarts getting to work, but dammit I want mine to To install the library:* Download the most recent version of the library.... == Version ==Current New LiquidCrystal is the latest zip file in the download section.The New LiquidCrystal library has been tested and is compatible with Arduino SDK 1.0.1, 1.0 and Arduino SDK 0022.
http://forum.arduino.cc/index.php?topic=120929.msg910098
CC-MAIN-2015-35
en
refinedweb
Buffered AudioStreamer. More... #include <NetStream_as.h> Buffered AudioStreamer. This class you create passing a sound handler, which will be used to implement attach/detach and eventually throw away buffers of sound when no sound handler is given. Then you push samples to a buffer of it and can request attach/detach operations. When attached, the sound handler will fetch samples from the buffer, in a thread-safe way. Attach the aux streamer. On success, _auxStreamerAttached will be set to true. Won't attach again if already attached. References _auxStreamer, _soundHandler, gnash::sound::sound_handler::attach_aux_streamer(), fetchWrapper(), and gnash::sound::sound_handler::unplugInputStream(). Referenced by gnash::NetStream_as::play(). References _audioQueue, _audioQueueMutex, and gnash::deleteChecked(). Referenced by gnash::NetStream_as::close(), and gnash::NetStream_as::seek(). Detach the aux streamer. _auxStreamerAttached will be set to true. Won't detach if not attached. References _auxStreamer, _soundHandler, and gnash::sound::sound_handler::unplugInputStream(). Referenced by gnash::NetStream_as::close(). Fetch samples from the audio queue. References _audioQueue, _audioQueueMutex, _audioQueueSize, gnash::BufferedAudioStreamer::CursoredBuffer::m_ptr, gnash::BufferedAudioStreamer::CursoredBuffer::m_size, and gnash::key::n. Referenced by fetchWrapper(). Fetch samples from the audio queue. Referenced by attachAuxStreamer(). Push a buffer to the audio queue. References _audioQueue, _audioQueueMutex, _audioQueueSize, _auxStreamer, and gnash::BufferedAudioStreamer::CursoredBuffer::m_size. This is where audio frames are pushed by advance and consumed by sound_handler callback (audio_streamer) Referenced by cleanAudioQueue(), fetch(), push(), and gnash::NetStream_as::update(). The queue needs to be protected as sound_handler callback is invoked by a separate thread (dunno if it makes sense actually) Referenced by cleanAudioQueue(), fetch(), push(), and gnash::NetStream_as::update(). Referenced by attachAuxStreamer(), detachAuxStreamer(), and push(). Referenced by attachAuxStreamer(), and detachAuxStreamer().
http://gnashdev.org/doc/html/classgnash_1_1BufferedAudioStreamer.html
CC-MAIN-2015-35
en
refinedweb