Hash::AutoHash::AVPairsMulti - Object-oriented access to hash with multi-valued simple (non-reference) elements. Version 1.17

use Hash::AutoHash::AVPairsMulti;

# create object and set initial values
my $avp=new Hash::AutoHash::AVPairsMulti pets=>'Spot',hobbies=>'chess',hobbies=>'cooking';

# access or change hash elements via methods
my $pets=$avp->pets;            # ['Spot']
my @pets=$avp->pets;            # ('Spot')
my $hobbies=$avp->hobbies;      # ['chess','cooking']
my @hobbies=$avp->hobbies;      # ('chess','cooking')
$avp->hobbies('go','rowing');   # new values added to existing ones
my $hobbies=$avp->hobbies;      # ['chess','cooking','go','rowing']
$avp->family({kids=>'Joey'});   # illegal - reference

# you can also use standard hash notation and functions
my($pets,$hobbies)= @$avp{qw(pets hobbies)}; # get 2 elements in one statement
$avp->{pets}='Felix';           # set pets to ['Spot','Felix']
my @keys=keys %$avp;            # ('pets','hobbies')
my @values=values %$avp;        # (['Spot','Felix'], ['chess','cooking','go','rowing'])
while(my($key,$value)=each %$avp) {
  print "$key => @$value\n";    # prints each element as usual
}
delete $avp->{hobbies};         # no more hobbies

# CAUTION: hash notation doesn't respect array context!
$avp->{hobbies}=('go','rowing');  # sets hobbies to last value only
my @hobbies=$avp->{hobbies};      # @hobbies is (['rowing'])

# alias $avp to regular hash for more concise hash notation
use Hash::AutoHash::AVPairsMulti qw(autohash_alias);
my %hash;
autohash_alias($avp,%hash);

# access or change hash elements without using ->
$hash{hobbies}=['chess','cooking'];  # append values to hobbies
my $pets=$hash{pets};                # ['Spot','Felix']
my $hobbies=$hash{hobbies};          # ['rowing','chess','cooking']
# another way to do the same thing
my($pets,$hobbies)=@hash{qw(pets hobbies)};

# set 'unique' in tied object to eliminate duplicates
use Hash::AutoHash::AVPairsMulti qw(autohash_tied);
autohash_tied($avp)->unique(1);
$avp->hobbies('cooking','baking');   # duplicate 'cooking' not added
my @hobbies=$avp->hobbies;           # ('rowing','chess','cooking','baking')

This is a subclass of Hash::AutoHash which wraps a tied hash whose elements contain multiple simple values like numbers and strings, not references. Hash::AutoHash::Record uses this class to represent attribute-value pairs parsed from text files. It is conceptually a subclass of Hash::AutoHash::MultiValued, whose elements may contain values of all kinds.

By default, hash elements may contain duplicate values.

my $avp=new Hash::AutoHash::AVPairsMulti hobbies=>'go',hobbies=>'go';
my @hobbies=$avp->hobbies;  # ('go','go')

You can change this behavior by setting 'unique' in the tied object implementing the hash to a true value.

autohash_tied($avp)->unique(1);
my @hobbies=$avp->hobbies;  # now ('go')

$avp=new Hash::AutoHash::AVPairsMulti hobbies=>['GO','go'];
autohash_tied($avp)->unique(sub {my($a,$b)=@_; lc($a) eq lc($b)});
my @hobbies=$avp->hobbies;  # @hobbies is ('GO')

When 'unique' is given a true value, duplicate removal occurs immediately by running all existing elements through the duplicate-removal process. Thereafter, duplicate checking occurs on every update.

$avp=new Hash::AutoHash::AVPairsMulti hobbies=>['GO','go','dance'];
autohash_tied($avp)->filter(\&uniq_nocase_sort);
my @hobbies=$avp->hobbies;  # @hobbies is ('dance','go')

You can do the same thing more concisely with this cryptic one-liner.

autohash_tied($avp)->filter(sub {my %u; @u{map {lc $_} @_}=@_; sort values %u});

Filtering occurs when you run the 'filter' method. It does not occur on every update.
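The filter example above passes a subroutine named uniq_nocase_sort that is never shown. A plausible definition, consistent in behavior with the one-liner version, would be the following (this helper is assumed here, it is not part of the module itself):

sub uniq_nocase_sort {
  my %u;
  $u{lc $_} = $_ for @_;   # key case-insensitively; later spellings win
  return sort values %u;   # e.g. ('GO','go','dance') becomes ('dance','go')
}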
Title : new
Usage : $avp=new Hash::AutoHash::AVPairsMulti pets=>'Spot',hobbies=>'chess',hobbies=>'cooking' -- OR -- $avp=new Hash::AutoHash::AVPairsMulti [pets=>'Spot',hobbies=>'chess',hobbies=>'cooking'] -- OR -- $avp=new Hash::AutoHash::AVPairsMulti {pets=>'Spot',hobbies=>['chess','cooking']}
Function: Create Hash::AutoHash::AVPairsMulti object and set elements.
Returns : Hash::AutoHash::AVPairsMulti object

Title : unique
Usage : $unique=tied(%$avp)->unique -- OR -- tied(%$avp)->unique($boolean) -- OR -- tied(%$avp)->unique(\&function) -- OR -- $unique=autohash_tied($avp)->unique -- OR -- autohash_tied($avp)->unique($boolean) -- OR -- autohash_tied($avp)->unique(\&function)
Function: Get or set option that controls duplicate elimination. This method must be invoked on the tied object implementing the hash.

Title : filter
Usage : $filter=tied(%$avp)->filter -- OR -- tied(%$avp)->filter($boolean) -- OR -- tied(%$avp)->filter(\&function) -- OR -- $filter=autohash_tied($avp)->filter -- OR -- autohash_tied($avp)->filter($boolean) -- OR -- autohash_tied($avp)->filter(\&function)
Function: Get or set option that controls filtering. This method must be invoked on the tied object implementing the hash.

The functions below are inherited from Hash::AutoHash and operate exactly as there. You must import them into your namespace before use: use Hash::AutoHash::AVPairsMulti qw(...);

Title : autohash_alias
Usage : autohash_alias($avp,%hash)
Function: Alias $avp to a regular hash for more concise hash notation
Args : Hash::AutoHash::AVPairsMulti object and hash
Returns : Hash::AutoHash::AVPairsMulti object

You can access the object implementing the tied hash using Perl's built-in tied function or the autohash_tied function inherited from Hash::AutoHash. Advantages of autohash_tied are (1) it operates directly on the Hash::AutoHash::AVPairsMulti object. In forms 2 and 4, the first argument is a hash to which a Hash::AutoHash::AVPairsMulti object has been aliased.
Returns : In forms 1 and 2, object implementing tied hash or undef. In forms 3 and 4, result of invoking method (which can be anything or nothing), or undef.
Args : Form 1. Hash::AutoHash::AVPairsMulti object Form 2. hash to which Hash::AutoHash::AVPairsMulti object is aliased Form 3. Hash::AutoHash::AVPairsMulti object, method name, optional list of parameters for method Form 4. hash to which Hash::AutoHash::AVPairsMulti object is aliased, method name, optional list of parameters for method

Title : autohash_get
Usage : ($pets,$hobbies)=autohash_get($avp,qw(pets hobbies))
Function: Get values for multiple keys.
Args : Hash::AutoHash::AVPairsMulti object and list of keys
Returns : list of argument values

Title : autohash_set
Usage : autohash_set($avp,pets=>'Felix',kids=>'Joe') -- OR -- autohash_set($avp,['pets','kids'],['Felix','Joe'])
Function: Set multiple arguments in existing object.
Args : Form 1. Hash::AutoHash::AVPairsMulti object and list of key=>value pairs Form 2. Hash::AutoHash::AVPairsMulti object, ARRAY of keys, ARRAY of values
Returns : Hash::AutoHash::AVPairsMulti object

The remaining functions provide hash-like operations on Hash::AutoHash::AVPairsMulti objects. These are useful if you want to avoid hash notation altogether.

Title : autohash_clear
Usage : autohash_clear($avp)
Function: Delete entire contents of $avp
Args : Hash::AutoHash::AVPairsMulti object
Returns : nothing

Title : autohash_delete
Usage : autohash_delete($avp,@keys)
Function: Delete keys and their values from $avp.
Args : Hash::AutoHash::AVPairsMulti object, list of keys
Returns : nothing

Title : autohash_exists
Usage : if (autohash_exists($avp,$key)) { ... }
Function: Test whether key is present in $avp.
Args : Hash::AutoHash::AVPairsMulti object, key
Returns : boolean

Title : autohash_keys
Usage : @keys=autohash_keys($avp)
Function: Get the keys that are present in $avp
Args : Hash::AutoHash::AVPairsMulti object
Returns : list of keys

Title : autohash_values
Usage : @values=autohash_values($avp)
Function: Get the values of all keys that are present in $avp
Args : Hash::AutoHash::AVPairsMulti object
Returns : list of values

Title : autohash_count
Usage : $count=autohash_count($avp)
Function: Get the number of keys that are present in $avp
Args : Hash::AutoHash::AVPairsMulti object
Returns : number

Title : autohash_empty
Usage : if (autohash_empty($avp)) { ... }
Function: Test whether $avp is empty
Args : Hash::AutoHash::AVPairsMulti object
Returns : boolean

Title : autohash_notempty
Usage : if (autohash_notempty($avp)) { ... }
Function: Test whether $avp is not empty. Complement of autohash_empty
Args : Hash::AutoHash::AVPairsMulti object
Returns : boolean

Hash::AutoHash::AVPairsSingle, Hash::AutoHash::MultiValued, and Hash::AutoHash::Record are other subclasses of Hash::AutoHash. Hash::AutoHash::AVPairsSingle is similar but requires each attribute to have a single value. Hash::AutoHash::MultiValued is similar, but permits values to be non-simple, ie, references. Most of the implementation comes from the tied hash class of Hash::AutoHash::MultiValued. Hash::AutoHash::Record uses this class to represent attribute-value pairs parsed from text files.

Nat Goodman, <natg at shore.net>

Please report any bugs or feature requests to bug-hash-autohash-avpairsmulti. You can also look for information at: This program is free software; you can redistribute it and/or modify it under the terms of either: the GNU General Public License as published by the Free Software Foundation; or the Artistic License. See for more information.
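As a quick illustration of the convenience functions documented above, a short sketch (assuming the module is installed and that these functions are importable from the subclass, as the documentation states):

use Hash::AutoHash::AVPairsMulti
  qw(autohash_set autohash_get autohash_count autohash_delete);

my $avp = new Hash::AutoHash::AVPairsMulti pets=>'Spot';
autohash_set($avp, hobbies=>'chess', kids=>'Joe');
my ($pets, $hobbies) = autohash_get($avp, qw(pets hobbies));
print "keys in use: ", autohash_count($avp), "\n";   # 3
autohash_delete($avp, 'hobbies');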
http://search.cpan.org/dist/Hash-AutoHash-AVPairsMulti/lib/Hash/AutoHash/AVPairsMulti.pm
CC-MAIN-2015-35
en
refinedweb
Private Data for Objects in JavaScript JavaScript does not come with dedicated means for managing private data for an object. This post describes five techniques for working around that limitation: - Instance of a constructor – private data in environment of constructor - Singleton object – private data in environment of object-wrapping IIFE - Any object – private data in properties with marked names - Any object – private data in properties with reified names - Single method – private data in environment of method-wrapping IIFE The following sections explain each technique in more detail. Required knowledge: While everything is explained relatively slowly, you should probably be familiar with environments and IIFEs [1] and with inheritance and constructors [2]. Instance of a constructor – private data in environment of constructorThis approach works as follows: When a constructor is invoked, two things are created: The constructor’s instance and an environment. The instance is to be initialized by the constructor. The environment holds the constructor’s parameters and local variables. Every function (which includes methods) created inside the constructor will retain a reference to the environment – the environment “in which” it was created. Thanks to that reference, it will always have access to the environment, even after the constructor is finished. The environment will stay alive as long as there is a reference to it. This combination of function and environment is called a closure, because the environment “closes over” the function’s free variables, variables that are not local to it. The constructor environment is thus data storage that is independent of the instance and only related to it because it is created at the same time. To connect the two, there must be functions that live in both worlds. Using Crockford’s terminology, an instance can have three kinds of data associated with it: - Public properties: Data stored in the instance is publicly accessible. - Private data: Data stored in the environment is only accessible to the constructor and functions created inside it. - Privileged methods: Private functions can access public properties, but public methods cannot normally access private data. We thus need special privileged methods – functions created in the constructor that are added to the instance. Privileged methods are public and can thus be seen by non-privileged methods, but they also have access to the private data. The following sections explain the three kinds in more detail. 1.1 Public properties Remember that given a constructor Constr, there are two kinds of properties that are public, accessible by everyone. First, prototype properties are stored in an object that is the prototype of all instances, its properties are shared by them. That object is also accessible via Constr.prototype. Prototype properties are usually methods. Constr.prototype.publicMethod = ...; Second, instance properties are unique to each instance. They are added in the constructor and usually “fields” (holding data, not methods). function Constr(...) { this.publicField = ...; } 1.2 Private data The constructor’s environment consists of the parameters and local variables. They are only accessible from inside the constructor and thus private to the instance. function Constr(...) { var that = this; // hand to private functions var privateValue = ...; function privateFunction(...) 
{ privateValue = ...; that.publicField = ...; that.publicMethod(...); } } If you don’t like the that = this work-around, above, you have the option to use bind (caveat: consumes more memory): function Constr(...) { var privateValue = ...; var privateFunction = function (...) { privateValue = ...; this.publicField = ...; this.publicMethod(...); }.bind(this); } 1.3 Privileged methods Private data is so safe from outside access that prototype methods can’t access it. But then how else would you use it, after leaving the constructor? The answer are privileged methods: Functions created in the constructor are added as instance-specific methods. They can thus access the private data and be seen by prototype methods. function Constr(...) { this.privilegedMethod = function (...) { ... }; } 1.4 Analysis - Not very elegant: Mediating access to private data via privileged methods introduces an unnecessary indirection. Privileged methods and private functions both destroy the separation of concerns between the constructor (setting up instance data) and its prototype property (methods). - Completely safe: There is no way to access the environment’s data from outside. Which makes this solution secure if you need to guarantee that (e.g. for security-critical code). On the other hand, private data not being accessible to the outside can also be an inconvenience: Sometimes you want to unit-test private functionality. And some temporary quick fixes depend on the ability to access private data. This kind of quick fix cannot be predicted, so no matter how good your design is, the need can arise. - Possibly slower: Accessing properties in the prototype chain is highly optimized in current JavaScript engines. Accessing values in the closure might be slower. But these things change constantly, so you’ll have to measure, should this really matter for your code. - Memory consumption: Keeping the environment around and putting privileged methods in instances costs memory. Again: Be sure it really matters for your code and measure. 2. Singleton object – private data in environment of object-wrapping IIFE If you work with singletons, the technique of putting private data in an environment can still be used. But, as there is no constructor, you’ll have to wrap an immediately-invoked function expression (IIFE, [1]) around the singleton to get such an environment. var obj = function () { // open IIFE // public var that = { publicMethod: function (...) { privateValue = ...; privateFunction(...); }, publicField: ... }; // private var privateValue = ...; function privateFunction(...) { privateValue = ...; that.publicField = ...; that.publicMethod(...); } return that; }(); // close IIFE Public methods can access private data, as long as they are invoked after it has been added to the environment. 3. Any object – private data in properties with marked names For most non-security-critical applications, privacy is more like a hint to clients of an API: “You don’t need to see this”. That’s the core benefit of encapsulation: Hiding complexity. Even though more is going on under the hood, you only need to understand the public part of an API. The idea of a naming convention is to let clients know about privacy by marking the name of a property. A prefixed underscore is often used for this purpose. The following example shows a type StringBuilder whose property _buffer is private, but by convention only. 
function StringBuilder() { this._buffer = []; } StringBuilder.prototype = { constructor: StringBuilder, add: function (str) { this._buffer.push(str); }, toString: function () { return this._buffer.join(""); } }; Interaction: > var sb = new StringBuilder(); > sb.add("Hello"); > sb.add(" world"); > sb.add("!"); > sb.toString() ’Hello world!’ 3.1 Analysis - Natural coding style: With the popularity of putting private data in environments, JavaScript is the only mainstream programming language that treats private and public data differently. A naming convention avoids this slightly awkward coding style. - Property namespace pollution: The more people use IDEs, the more it will be a nuisance to see private properties where you shouldn’t. Naturally, IDEs could adapt to that and recognize naming conventions and when private properties shouldn’t be shown. - Private properties can be accessed from outside: Applications include unit tests and quick fixes. But it also gives you more flexibility as to who data should be private too. You can, for example, include subtypes in your private circle, or “friend” functions. With the environment approach, you always limit access to functions created inside the scope of that environment. - Name clashes: private names can clash. This is already an issue for subtypes, but it becomes more problematic with some kind of multiple inheritance (e.g. via mixins or traits). 4. Any object – private data in properties with reified names One problem with a naming convention for private properties is that names might clash. You can make such clashes less likely by using longer names, that, for example, include the name of the type. Then, above, the private property _buffer would be called _StringBuilder_buffer. If such a name is too long for your taste, you have the option of reifying it, of turning it into a thing (which is the literal meaning of reification). var buffer = "_StringBuilder_buffer"; Whereas we previously used the name directly, it is now a value (the “thing” mentioned above) stored in the variable buffer. We now access the private data via this[buffer]. var StringBuilder = function () { var buffer = "_StringBuilder_buffer"; function StringBuilder() { this[buffer] = []; } StringBuilder.prototype = { constructor: StringBuilder, add: function (str) { this[buffer].push(str); }, toString: function () { return this[buffer].join(""); } }; return StringBuilder; }(); We have wrapped an IIFE around StringBuilder so that the variable buffer stays local and doesn’t pollute the global namespace. 4.1 ECMAScript.next and reified namesThe ECMAScript.next proposal “private name objects” takes the idea of reified names one step further. Names can now also be objects, so-called private name objects. There will be a module name with a function create() that lets you create such objects: var buffer = name.create(); Each invocation of name.create() produces a new name object that is unique – different from any other name object created in this manner. Until the end of this section, we use the term “private property” as an abbreviation for “a property whose name is a private name object”. Compared to string names, name objects have two advantages: - Hidden: Private properties don’t show up when examining an object with the usual tools (Object.keys, propName in obj, etc.), they don’t pollute an object’s property name space. - Inaccessible: Partially as a consequence of hiding, one can only access a private property if one “has” its name object. 
That makes it secure: One can control precisely who has access. That control is also an advantage compared to using environments for privacy: You can now grant someone access, e.g. a unit test to check that a private method works properly. That means: You get elegant code and security while being able to control precisely who sees what. You can, for example, let unit test code see the private name object, but no one else. 4.2 Analysis You get all the advantages of naming conventions, while avoiding name clashes – at the expense of having to manage the reified names. 5. Single method – private data in environment of method-wrapping IIFE Sometimes you only need private data for a single method. Then you can use the same technique as sharing the constructor’s environment: Attach an environment to the method, use it to hold data that has to persist across method invocations. To do so, you simply wrap an IIFE around the function defining the method. For example: var obj = { method: function () { // open IIFE // method-private data var invocCount = 0; return function () { invocCount++; console.log("Invocation #"+invocCount); return "result"; }; }() // close IIFE }; Interaction: > obj.method() Invocation #1 'result' > obj.method() Invocation #2 'result' 5.1 Analysis For the use cases where this approach is relevant, managing the private data close to the method that uses it is very convenient. The memory consumption caused by the additional environment may be an issue. 6. Conclusion We have seen that there are several patterns that you can use for keeping data private in JavaScript. Each one has pros and cons, so you need to choose carefully. ECMAScript.next will make things simpler via private name objects. It might even introduce syntactic sugar so that you don’t have to manage them manually. Upcoming: my book on JavaScript (free online). References
https://dzone.com/articles/private-data-objects?mz=46483-html5
CC-MAIN-2015-35
en
refinedweb
What does "using namespace std" do? I am new to C++ and have seen it in a lot of source code. Just wondering, Thanks.

The C++ Standard Library is defined within the "std" namespace. A namespace is a unique declarative region that attaches an additional identifier to any names declared inside it.

namespace foo { int i; int j; }
namespace foobar { int i; int j; }

These int names won't collide because they are foo::i and foobar::i, respectively. This allows reuse of names and prevents name collisions, which can be a problem with very large projects. In order to get access to names within the "std" namespace (actually any namespace) you either need to use the scope resolution operator for the namespace (std::cout) or access the entire namespace by writing "using namespace std;". The second method allows the use of names within the namespace without the scope resolution operator, such as cout rather than std::cout. Now that I've given a long answer to a short question: "using namespace std" allows access to the C++ Standard Library without using the scope resolution operator, std::.
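To make the difference concrete, here is a small example contrasting the two ways of referring to names in std (any standard C++ compiler will accept it):

#include <iostream>
#include <string>

// Fully qualified names: no "using" directive needed.
void greet_qualified() {
    std::string name = "world";
    std::cout << "Hello, " << name << "!" << std::endl;
}

// With the directive, the std:: qualifier can be dropped inside this scope.
void greet_with_using() {
    using namespace std;   // pulls the std names into this function's scope
    string name = "world";
    cout << "Hello, " << name << "!" << endl;
}

int main() {
    greet_qualified();
    greet_with_using();
    return 0;
}

Many style guides prefer the qualified form, or a narrower declaration such as using std::cout;, especially in headers, since the full directive brings every std name into scope.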
http://cboard.cprogramming.com/cplusplus-programming/3165-using-namespace-std-what-does-do-printable-thread.html
CC-MAIN-2015-35
en
refinedweb
processor_info(2) - determine type and status of a processor

#include <sys/types.h>
#include <sys/processor.h>

int processor_info(processorid_t processorid, processor_info_t *infop);

The processor_info() function returns the status of the processor specified by processorid in the processor_info_t structure pointed to by infop. The structure processor_info_t contains the following members:

int pi_state;
char pi_processor_type[PI_TYPELEN];
char pi_fputypes[PI_FPUTYPE];
int pi_clock;

The pi_state member is the current state of the processor, either P_ONLINE, P_OFFLINE, P_NOINTR, P_FAULTED, P_SPARE, or P_POWEROFF. The pi_processor_type member is a null-terminated ASCII string specifying the type of the processor. The pi_fputypes member is a null-terminated ASCII string containing the comma-separated types of floating-point units (FPUs) attached to the processor. This string will be empty if no FPU is attached. The pi_clock member is the processor clock frequency rounded to the nearest megahertz. It may be 0 if not known.

Upon successful completion, 0 is returned. Otherwise, -1 is returned and errno is set to indicate the error. The processor_info() function will fail if: a non-existent processor ID was specified; the caller is in a non-global zone, the pools facility is active, and the processor is not a member of the zone's pool's processor set; or the processor_info_t structure pointed to by infop was not writable by the user.

See also: pooladm(1M), psradm(1M), psrinfo(1M), zoneadm(1M), p_online(2), sysconf(3C)
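A minimal usage sketch follows; it is Solaris-specific, untested here, and queries processor 0 as an arbitrary choice (psrinfo(1M) lists valid IDs):

#include <stdio.h>
#include <sys/types.h>
#include <sys/processor.h>

int main(void)
{
    processor_info_t info;
    processorid_t cpu = 0;   /* first processor; adjust as needed */

    if (processor_info(cpu, &info) == -1) {
        perror("processor_info");
        return 1;
    }

    /* Print the members described above. */
    printf("state: %s\n", info.pi_state == P_ONLINE ? "online" : "not online");
    printf("type:  %s\n", info.pi_processor_type);
    printf("fpu:   %s\n", info.pi_fputypes);
    printf("clock: %d MHz\n", info.pi_clock);
    return 0;
}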
http://docs.oracle.com/cd/E23824_01/html/821-1463/processor-info-2.html
CC-MAIN-2015-35
en
refinedweb
Zend\XmlRpc Zend\XmlRpc\Client¶ Introduction¶ Zend Framework provides support for consuming remote XML-RPC services as a client in the Zend\XmlRpc\Client package. Its major features include automatic type conversion between PHP and XML-RPC, a server proxy object, and access to server introspection capabilities. Method Calls¶ The constructor of Zend\XmlRpc\Client receives the URL of the remote XML-RPC server endpoint as its first parameter. The new instance returned may be used to call any number of remote methods at that endpoint. To call a remote method with the XML-RPC client, instantiate it and use the call() instance method. The code sample below uses a demonstration XML-RPC server on the Zend Framework website. You can use it for testing or exploring the Zend\XmlRpc components. XML-RPC Method Call The XML-RPC value returned from the remote method call will be automatically unmarshaled and cast to the equivalent PHP native type. In the example above, a PHP String is returned and is immediately ready to be used. The first parameter of the call() method receives the name of the remote method to call. If the remote method requires any parameters, these can be sent by supplying a second, optional parameter to call() with an Array of values to pass to the remote method: XML-RPC Method Call with Parameters If the remote method doesn’t require parameters, this optional parameter may either be left out or an empty array() passed to it. The array of parameters for the remote method can contain native PHP types, Zend\XmlRpc\Value objects, or a mix of each. The call() method will automatically convert the XML-RPC response and return its equivalent PHP native type. A Zend\XmlRpc\Response object for the return value will also be available by calling the getLastResponse() method after the call. Types and Conversions¶ Some remote method calls require parameters. These are given to the call() method of Zend\XmlRpc\Client as an array in the second parameter. Each parameter may be given as either a native PHP type which will be automatically converted, or as an object representing a specific XML-RPC type (one of the Zend\XmlRpc\Value objects). PHP Native Types as Parameters¶ Parameters may be passed to call() as native PHP variables, meaning as a String, Integer, Float, Boolean, Array, or an Object. In this case, each PHP native type will be auto-detected and converted into one of the XML-RPC types according to this table: Note What type do empty arrays get cast to? Passing an empty array to an XML-RPC method is problematic, as it could represent either an array or a struct. Zend\XmlRpc\Client detects such conditions and makes a request to the server’s system.methodSignature method to determine the appropriate XML-RPC type to cast to. However, this in itself can lead to issues. First off, servers that do not support system.methodSignature will log failed requests, and Zend\XmlRpc\Client will resort to casting the value to an XML-RPC array type. Additionally, this means that any call with array arguments will result in an additional call to the remote server. To disable the lookup entirely, you can call the setSkipSystemLookup() method prior to making your XML-RPC call: Zend\XmlRpc\Value Objects as Parameters¶ Parameters may also be created as Zend\XmlRpc\Value instances to specify an exact XML-RPC type. The primary reasons for doing this are: - When you want to make sure the correct parameter type is passed to the procedure (i.e. 
the procedure requires an integer and you may get it from a database as a string) - When the procedure requires base64 or dateTime.iso8601 type (which doesn’t exists as a PHP native type) - When auto-conversion may fail (i.e. you want to pass an empty XML-RPC struct as a parameter. Empty structs are represented as empty arrays in PHP but, if you give an empty array as a parameter it will be auto-converted to an XML-RPC array since it’s not an associative array) There are two ways to create a Zend\XmlRpc\Value object: instantiate one of the Zend\XmlRpc\Value subclasses directly, or use the static factory method Zend\XmlRpc\AbstractValue::getXmlRpcValue(). Note Automatic Conversion When building a new Zend\XmlRpc\Value object, its value is set by a PHP type. The PHP type will be converted to the specified type using PHP casting. For example, if a string is given as a value to the Zend\XmlRpc\Value\Integer object, it will be converted using (int)$value. Server Proxy Object¶ Another way to call remote methods with the XML-RPC client is to use the server proxy. This is a PHP object that proxies a remote XML-RPC namespace, making it work as close to a native PHP object as possible. To instantiate a server proxy, call the getProxy() instance method of Zend\XmlRpc\Client. This will return an instance of Zend\XmlRpc\Client\ServerProxy. Any method call on the server proxy object will be forwarded to the remote, and parameters may be passed like any other PHP method. Proxy the Default Namespace The getProxy() method receives an optional argument specifying which namespace of the remote server to proxy. If it does not receive a namespace, the default namespace will be proxied. In the next example, the ‘test’ namespace will be proxied: Proxy Any Namespace If the remote server supports nested namespaces of any depth, these can also be used through the server proxy. For example, if the server in the example above had a method test.foo.bar(), it could be called as $test->foo->bar(). Error Handling¶ Two kinds of errors can occur during an XML-RPC method call: HTTP errors and XML-RPC faults. The Zend\XmlRpc\Client recognizes each and provides the ability to detect and trap them independently. HTTP Errors¶ If any HTTP error occurs, such as the remote HTTP server returns a 404 Not Found, a Zend\XmlRpc\Client\Exception\HttpException will be thrown. Handling HTTP Errors Regardless of how the XML-RPC client is used, the Zend\XmlRpc\Client\Exception\HttpException will be thrown whenever an HTTP error occurs. XML-RPC Faults¶ An XML-RPC fault is analogous to a PHP exception. It is a special type returned from an XML-RPC method call that has both an error code and an error message. XML-RPC faults are handled differently depending on the context of how the Zend\XmlRpc\Client is used. When the call() method or the server proxy object is used, an XML-RPC fault will result in a Zend\XmlRpc\Client\Exception\FaultException being thrown. The code and message of the exception will map directly to their respective values in the original XML-RPC fault response. Handling XML-RPC Faults When the call() method is used to make the request, the Zend\XmlRpc\Client\Exception\FaultException will be thrown on fault. A Zend\XmlRpc\Response object containing the fault will also be available by calling getLastResponse(). When the doRequest() method is used to make the request, it will not throw the exception. Instead, it will return a Zend\XmlRpc\Response object returned will containing the fault. 
This can be checked with isFault() instance method of Zend\XmlRpc\Response. Server Introspection¶ Some XML-RPC servers support the de facto introspection methods under the XML-RPC system. namespace. Zend\XmlRpc\Client provides special support for servers with these capabilities. A Zend\XmlRpc\Client\ServerIntrospection instance may be retrieved by calling the getIntrospector() method of Zend\XmlRpc\Client. It can then be used to perform introspection operations on the server. The following methods are available for introspection: - getSignatureForEachMethod: Returns the signature for each method on the server - getSignatureForEachMethodByMulticall($methods=null): Attempt to get the method signatures in one request via system.multicall(). Optionally pass an array of method names. - getSignatureForEachMethodByLooping($methods=null): Get the method signatures for every method by successively calling system.methodSignature. Optionally pass an array of method names - getMethodSignature($method): Get the method’s signature for $method - listMethods: List all methods on the server From Request to Response¶ Under the hood, the call() instance method of Zend\XmlRpc\Client builds a request object (Zend\XmlRpc\Request) and sends it to another method, doRequest(), that returns a response object (Zend\XmlRpc\Response). The doRequest() method is also available for use directly: Processing Request to Response Whenever an XML-RPC method call is made by the client through any means, either the call() method, doRequest() method, or server proxy, the last request object and its resultant response object will always be available through the methods getLastRequest() and getLastResponse() respectively. HTTP Client and Testing¶ In all of the prior examples, an HTTP client was never specified. When this is the case, a new instance of Zend\Http\Client will be created with its default options and used by Zend\XmlRpc\Client automatically. The HTTP client can be retrieved at any time with the getHttpClient() method. For most cases, the default HTTP client will be sufficient. However, the setHttpClient() method allows for a different HTTP client instance to be injected. The setHttpClient() is particularly useful for unit testing. When combined with the Zend\Http\Client\Adapter\Test, remote services can be mocked out for testing. See the unit tests for Zend\XmlRpc\Client for examples of how to do this.
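A rough sketch of that testing setup is shown below; it is hand-written here rather than taken from the ZF2 unit tests, so the canned response body and endpoint URL are illustrative only:

use Zend\Http\Client as HttpClient;
use Zend\Http\Client\Adapter\Test as TestAdapter;
use Zend\XmlRpc\Client as XmlRpcClient;

// Canned HTTP response the fake "server" will return for the next request.
$adapter = new TestAdapter();
$adapter->setResponse(
    "HTTP/1.1 200 OK\r\n" .
    "Content-Type: text/xml\r\n\r\n" .
    "<?xml version=\"1.0\"?><methodResponse><params><param>" .
    "<value><string>Hello</string></value>" .
    "</param></params></methodResponse>"
);

$http = new HttpClient();
$http->setAdapter($adapter);

$client = new XmlRpcClient('http://example.com/xmlrpc', $http);
$client->setSkipSystemLookup(true);   // avoid the extra system.methodSignature request
echo $client->call('test.sayHello');  // prints "Hello"

Because the HTTP client is injected rather than created internally, the XML-RPC client never touches the network, which makes assertions about request building and response unmarshaling easy to write.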
http://framework.zend.com/manual/2.0/en/modules/zend.xmlrpc.client.html
CC-MAIN-2015-35
en
refinedweb
faction..." Now reads: "...spends a large fraction..." gpFreeGadgetss;" Now reads: "PGADGET PFreeGadget ="gpFreeGadgets;" incorrect: "This technique [using CREATE_SUSPENDED with CreateProcess] is generally used by a parent process that wants to assign the STDIN or STDOUT handles for the child process to refer to a pipe or other kernel object. By creating the process so that the primary thread is initially suspended, the parent process can safely use the SetStdHandle function to redirect one or more of the standard input or output handles for the child process before the primary thread has a chance to execute." In fact, by the time the parent process returns from calling CreateProcess, the handle table for the child process is completely initialized, including those entries used for stdin, stdout, and stderr. To modify the STD handle(s) of a child process, the parent process must replace it's own STD handle(s) using SetStdHandle, launch the child process using CreateProcess and the bInheritHandles parameter set to TRUE, then restore the parent STD handles to their original state. Alternatively, the STARTUPINFO field members hStdInput, hStdOutput, and/or hStdError may be used to setup the STD handles of a child process that is being started. This sentence and every sentence that follows to the end of the paragraph has been deleted. So the last sentence in this paragraph is now "In this case, the primary thread for a process will not begin exeuting the main or WinMain function until the parent process has called the ResumeThread function." thread owning a mutex exits..." and continuing on to the end of the paragraph, to the following: Note that if a thread owning a mutex exits or is terminated before releasing the mutex, one of the threads waiting for ownership of that mutex will return from its wait function with a failure code of WAIT_ABANDONED_0, or a value between WAIT_ABANDONED_0 and (WAIT_ABANDONED_0 + nCount-1) for the multiple object wait functions. That thread is now the owner of the mutex, and must use ReleaseMutex just as if a return value of WAIT_OBJECT_0 or (WAIT_OBJECT_0 + nCount-1) had been returned. This means that both the WaitSucceeded and WaitAbandoned functions presented earlier indicate you own the mutex and need to release it. Most of the time, an abandoned mutex is a sign of a bug in the logic of a thread somewhere and should be asserted so that you can track down the problem as early as possible. where the WAIT_ABANDONED items are in CW, "nCount" is in italic CW both times it appears, and the function names are in Roman itals. The third sentence of the second full paragraph on the page now reads: "If a thread owning a mutex exits or is terminated before releasing the mutex, one thread waiting for ownership of that mutex will return from its respective wait function with a failure code of WAIT_ABANDONED_0, or a value between WAIT_ABANDONED_0 and (WAIT_ABANDONED_0 + nCount - 1) for the multiple object wait functions." the reader to the editorial note (the sentence that begins "In fact, since the MapViewOfFile function increases the usage counts..."). No text has been changed in the paragraph preceding the code listing - just the footnote has been removed. After the listing on page 110 (which is fine as-is), this new paragraph has been added: "While closing the file mapping handle immediately after calling MapViewOfFile is valid when performing memory-mapped file I/O, this technique should not be used when using a file mapping object for shared memory purposes. 
In the shared-memory scenario, two or more processes have agreed on the name for the mapping object. If one application closes the mapping object after calling MapViewOfFile before the cooperating application has a chance to call CreateFileMapping or OpenFileMapping, the named mapping object may no longer exist (or at least, may no longer be identifiable by the agreed upon name), and the two applications will not achieve shared memory as desired. For this reason, the mapping object handle should not be closed until you no longer have any need to use the shared memory." following code. This source, which demonstrates using a mapping object for shared memory, follows the recommendation made in the new paragraph on page 110. #include <windows.h> #include <tchar.h> #include <stdio.h> #define MAX_STRING_SIZE 256 #define FILE_MAPPING_NAME __TEXT("ShareStrSharedMemory") #define INITIAL_STRING __TEXT("(nothing yet)") int main( int argc, char *argv[]) { HANDLE hMapping; LPTSTR lpSharedString; TCHAR szLocalString[MAX_STRING_SIZE]; BOOL bCreated; // create a named file mapping as shared memory... hMapping = CreateFileMapping( (HANDLE) 0xFFFFFFFF, NULL, PAGE_READWRITE, 0, MAX_STRING_SIZE, FILE_MAPPING_NAME); if (hMapping != NULL) { if (GetLastError() == ERROR_ALREADY_EXISTS) { bCreated = FALSE; printf("Opened preexisting file mapping. "); } else { bCreated = TRUE; printf("Created file mapping. "); } } else { printf("Unable to create file mapping, exiting! "); exit(-1); } // map the memory into this process... lpSharedString = (LPTSTR) MapViewOfFile( hMapping, FILE_MAP_ALL_ACCESS, 0, 0, 0); if (lpSharedString == NULL) { printf("Unable to map into memory, exiting! "); exit(-1); } // initialize the string if necessary... if (bCreated) _tcscpy( lpSharedString, INITIAL_STRING); while (TRUE) { printf( "Type a string to share, [Enter] to display current string, or "quit": "); // input string... _getts( szLocalString); if (_tcscmp( szLocalString, __TEXT("quit")) == 0) { // quit... break; } else if (szLocalString[0] == __TEXT(' ')) { // show the string... printf( "Current string is '%s'. ", lpSharedString); } else { // set the string... _tcscpy( lpSharedString, szLocalString); } } // unmap the memory... UnmapViewOfFile(lpSharedString); // close our handle to the file mapping object... CloseHandle(hMapping); // exit... return 0; } Removed the editorial NOTE at the end of this page, that read: "Just as this book was going to press..." This note referred to a comment made in the last paragraph on page 110, just before the sample code. down on the page. It reads: if (this == &rhs) return(*this); in it. Basially, the type of the m_cNotEmpty variable was changed from CMclEvent to CMclSemaphore. If you use this downloadable version of CMclLinkedLists.h that has the fix in it, then the paragraph in the book that explains this variable has been changed. That paragraph was on page 200, and began: "The CMclLinkedList base class provides internal synchronization using the critical section object..." Basically, the change was to replace references to the m_cNotEmpty event with references to the m_cNotEmpty semaphore, although a related minor change to the last sentence was needed as well. Here's the full text of the revised paragraph: "The MclLinkedList base class provides internal synchronization using the critical section objet m_cCritSec and the semaphore object m_cNotEmpty. 
The critical section serializes access to the internal data structures of the linked list; without it, nodes could be dropped during simultaneous puts and gets, and chaos would result. The semaphore object is used for making get operations block until there is data in the linked list. When a put operation places data onto the linked list, the m_cNotEmpty semaphore is released. When a get operation removes the last entry from the list, the semaphore is naturally reset by the operating system. The actual list manipulation operations performed by put and get are coordinated by making them atomic using the critical section." In the second paragraph, the next to last sentence used to read: "The variable m_bRun is used to prevent the worker thread from exiting when there are always jobs in the queue." This sentence now reads: "The variable m_bRun is used to allow the worker thread to exit even if there are still jobs in the queue." Note that the comment in the accompanying source code is correct. The loop will exit if either the m_ceControl event is signalled or m_bRun is set by the Stop method. In Example 10-3, Line 3 used to read: CFixedThreadPool.h Now reads: CFixedThreadPool.cpp "CAnimatorDlg::OnInitDialog". {441} On the first line, "InitInstance" has been replaced by "OnInitDialog". The heading of the first code example used to read: Example 14-2. ExceptionFlow Program Output: Fault Scenario The word "No" was inserted, and the heading now reads: Example 14-2. ExceptionFlow Program Output: No-Fault Scenario [699] Change the index entry for "alterable waits" to "alertable waits." © 2015, O’Reilly Media, Inc. (707) 827-7019 (800) 889-8969 All trademarks and registered trademarks appearing on oreilly.com are the property of their respective owners.
http://www.oreilly.com/catalog/errata.csp?isbn=9781565922969
CC-MAIN-2015-35
en
refinedweb
Details - Type: Improvement - Status: Open - Priority: Trivial - Resolution: Unresolved - Affects Version/s: 2.1, 2.2, 2.3 - Fix Version/s: None - Component/s: Eclipse plugins, Tools - Labels: None

Description: Component Descriptor Editor (CDE) allows creation of CAS Types where the fully qualified type is the same as some package name in the "classpath", or where the fully qualified type includes a namespace which is already a class name in the classpath. This causes a "collision" when the JCas is generated. There is also a possibility for ambiguity for CAS type names that end in _Type. This should be disallowed. These checks may want to be put into the uima core, not just into the CDE.

Marshall Schor added a comment - Good point. I agree it should be a warning - probably when JCasGen is run.

Marshall Schor added a comment - Won't be fixed in time for 2.2

Marshall Schor added a comment - defer beyond 2.3.0

Hm, not everybody uses the JCas, and people do use the same package as, for example, their annotator on purpose. So maybe we could give a warning instead of completely disallowing this? Make it a best practice?
https://issues.apache.org/jira/browse/UIMA-382?focusedCommentId=12490502&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
CC-MAIN-2015-35
en
refinedweb
Philip or Troy - would you care to prepare and test the backport to ensure we can commit this for the next 0.9 release, coming within days? Bill Philip Martin wrote: > Troy Heber <troyh@debian.org> writes: > >> In any case, this looks like the culprit. > > Agreed, I've cc'd dev@a.a.o. The C implementation of apr_atomic_cas > on the 0.9.x branch is broken on ia64, and probably any other 64 bit > platform that uses the same code. For the full story see: > > >> apr_uint32_t apr_atomic_cas(volatile apr_uint32_t *mem, long with, >> long cmp) >> { >> long prev; >> #if APR_HAS_THREADS >> apr_thread_mutex_t *lock = hash_mutex[ATOMIC_HASH(mem)]; >> if (apr_thread_mutex_lock(lock) == APR_SUCCESS) { >> prev = *(long*)mem; >> if (prev == cmp) { >> *(long*)mem = with; >> } >> apr_thread_mutex_unlock(lock); >> return prev; >> >> On a 64-bit machine we end up with a size mismatch and a compare of >> junk. mem is defined as a pointer to a 32-bit, then a cast to long >> 64-bit in this case. prev ends up with junk it it and fails the >> compare prev == cmp that passes on a 32-bit box. In any case this is a >> bug, not positive if it's the only one. > > It looks like it has already been fixed in apr1.2 (as used by apache2.2): > > apr_uint32_t apr_atomic_cas32(volatile apr_uint32_t *mem, apr_uint32_t with, > apr_uint32_t cmp) > { > apr_uint32_t prev; > #if APR_HAS_THREADS > apr_thread_mutex_t *lock = hash_mutex[ATOMIC_HASH(mem)]; > > CHECK(apr_thread_mutex_lock(lock)); > prev = *mem; > if (prev == cmp) { > *mem = with; > } > CHECK(apr_thread_mutex_unlock(lock)); > #else > prev = *mem; > if (prev == cmp) { > *mem = with; > } > #endif /* APR_HAS_THREADS */ > return prev; > } >
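To see why the cast matters, here is a small stand-alone C illustration (not APR code, just a demonstration of the LP64 size mismatch described above):

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* two adjacent 32-bit values, as they might sit inside a structure */
    uint32_t mem[2] = { 42u, 0xDEADBEEFu };

    long as_long = *(long *)&mem[0];   /* the cast the old apr_atomic_cas performed */

    printf("sizeof(uint32_t)=%zu sizeof(long)=%zu\n",
           sizeof(uint32_t), sizeof(long));
    printf("expected 42, read %ld\n", as_long);
    /* On an LP64 platform (sizeof(long) == 8) the read also swallows the
       neighbouring word, so the comparison against 'cmp' sees junk and the
       compare-and-swap never succeeds, which is the failure seen on ia64. */
    return 0;
}

The apr 1.2 fix shown in the quoted mail simply reads and writes *mem through its declared 32-bit type, so no such widening ever happens.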
http://mail-archives.apache.org/mod_mbox/apr-dev/200609.mbox/%3C45146E24.8030109@rowe-clan.net%3E
CC-MAIN-2015-35
en
refinedweb
I must confess I am a bit jealous of many of the n00b developers who've just started out using Visual Studio 2005 since it appeared late last year. They got started with a whole new set of controls, many of which can be "hooked up" to do incredibly productive things -- with no code at all. Meanwhile, we .NET "Old Timers", having become used to doing things "our way", have passed up on all this goodness in favor of the familiar. How many noob developers know that the DataGrid in ASP.NET 2.0 is still there? Yep. It's just not on the Toolbox by default. But, hey -- they've got the much better GridView, so what do they care? At any rate, there is a set of controls that newer developers seem to have gravitated to, leaving the older developers like moi in the lurch - and that is the DataSource controls. One of those, the ObjectDataSource control, I find quite interesting. ObjectDataSource is a member of the family of ASP.NET data source controls that enable declarative databinding against underlying data stores. Most data source controls are designed for a two-tiered application architecture, where the page interacts directly with the data provider. Many ASP.NET developers want to encapsulate data retrieval, often combined with business logic, into a tier component object that introduces an additional layer between the presentation page and data provider. ObjectDataSource allows developers to structure applications using this three-tiered architecture and still take advantage of the benefits of declarative databinding in ASP.NET. It's not a full-fledged Object Relational Mapping infrastructure, but it is a step in the right direction, with familiar semantics that make it highly useful. Old-timers will probably remember "ObjectSpaces" and how it went through several iterations and finally was killed off (quite unceremoniously, I believe). The ObjectDataSource control object model is similar to the SqlDataSource control, but instead of a ConnectionString property, ObjectDataSource exposes a TypeName property that specifies a type to instantiate for performing the data operations. Like the command properties of SqlDataSource, the ObjectDataSource control supports properties such as SelectMethod, UpdateMethod, InsertMethod, and DeleteMethod for specifying methods of the associated type to call to perform these data operations. The difference is that instead of being SqlCommands (SelectCommand, for example) the methods are actual methods of the class that was specified for the ObjectDataSource control to use. This gives you, the developer, the opportunity to do a lot more business logic coding in your type, while still offering the familiar page-facing databinding interface that GridView, DataList and their brethren can consume happily. ObjectDataSource performs all this magic through reflection- hence it is wise to create a target class that has static methods in order to avoid constant and repetitive instantiation and collection every time a method is used. The advantages of this approach haven't been lost on .NET Developers -- many code generators are now producing code for Data layers that is ObjectDataSource - compliant. One example that I like is the "SubSonic Zero Code DAL" (formerly called "ActionPack"). I started out creating a SQLite Provider for it, but I had to stop because their code is still a moving target. Once it settles down, I'll resume my work. I first became interested in the ObjectDataSource control when I visited Peter Kellner's code for the MembershipProvider. 
Peter has done a bang-up job of integrating this into an ASP.NET "Membership Management" page - a version of which I featured in a previous article here. So let's take a look at some sample code that utilizes the ObjectDataSource. I'll keep this example simple in keeping with the "BASICS" theme of these articles. What I'm going to do is illustrate the flexibility of the ObjectDataSource by not having it get data from a database at all. Instead, our type will go out to the web and retrieve search result feeds from Live.com feed search in RSS format. These will then be databound to a GridView on the page. I'll also put in a "search" textbox and the required code and declarative markup to enable the control to use the search term that the user enters as a "select parameter" in the same way one would add a search term to the "WHERE CLAUSE" of a SQL statement. First, lets have a look at the two classes I use to handle the "data access"- the RSSFeed class, and the RSSDataSource class: RSS Feed Class: using System; using System.Collections.Generic; using System.Text; namespace RSSObjectDataSource { public class RSSFeed { private string _title; public string Title { get { return _title; } set { _title = value; } } private string _description; public string Description get { return _description; } set { _description = value; } private string _link; public string Link get { return _link; } set { _link = value; } private DateTime _pubDate; public DateTime PubDate get { return _pubDate; } set { _pubDate = value; } public RSSFeed(string title, string description, string link, DateTime pubDate) this._title = title; this._description = description; this._link = link; this._pubDate = pubDate; } } The RSSFeed class, as can be seen above, is just a simplified "container" for an RSS Feed Item, and it only contains the most important fields (title, description, link, and pubDate). The RSS DataSource class: using System.Net; using System.Data; public static class RSSDataSource public static List<RSSFeed> FeedList = new List<RSSFeed>(); public static string LastSearchTerm = ""; private static string url1 = ""; private static string url2 = "&mkt=en-US&format=rss&count=50"; // sorry they don't go over "50". 
public static int GetCount( string searchTerm) return FeedList.Count; public static ICollection<RSSFeed> GetFeeds( string searchTerm) LastSearchTerm = searchTerm; DataSet ds = new DataSet(); if (searchTerm == "") searchTerm = "a"; ds.ReadXml(string.Concat(url1, searchTerm, url2)); DataTable dt = ds.Tables[2]; List<RSSFeed> list= new List<RSSFeed>(); foreach (DataRow row in dt.Rows) { string title = (string)row["title"]; string description =(string)row["description"]; string link = (string)row["link"]; DateTime pubDate =DateTime.Now; // handle malformed RFC822 date formats try{ pubDate =Convert.ToDateTime(row["pubDate"]); } catch {} list.Add(new RSSFeed(title, description, link, pubDate)); } FeedList = list; return list; public static ICollection<RSSFeed> GetFeeds(string searchTerm, int maxRows, int startRowIndex) { List<RSSFeed> list = new List<RSSFeed>(); string description = (string)row["description"]; DateTime pubDate = DateTime.Now; try { pubDate = Convert.ToDateTime(row["pubDate"]); } catch { } list.Add(new RSSFeed(title, description, link, pubDate)); if (list.Count < maxRows) maxRows = list.Count; List<RSSFeed> retList = new List<RSSFeed>(); for (int i=startRowIndex;i<maxRows;i++) retList.Add(list[i]); return retList; Now let's take a look at some declarative markup on the page that pulls this all together: The ObjectDataSource specifies "GetFeeds" as it's select method, the TypeName of my RSSDataSource class, a Cache Duration of 21600 (6 hours - caching is built in), and a SelectParameter of the TextBox for the search term. The interesting thing about all this is that if you look at the codebehind class for this page, there is NO CODE! It all works automatically based on the settings of the control. Here is an example display (reduced size) where the user has typed "ASP.NET" into the search textbox. This is a nice way to see search results that point directly to RSS Feeds, and when you click on a link that you like, you can subscribe to the feed immediately in IE 7 or Firefox 2.0: ObjectDataSource is a unique control that has lots of potential for easing the acquisition of and display of data from a variety of sources. Download the Visual Studio 2005 Web Application Project
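For reference, declarative markup of the kind the article describes (GetFeeds as the SelectMethod, the RSSDataSource TypeName, a 21600-second cache, and the search TextBox supplied as a ControlParameter) would look roughly like the following; the control IDs and the TypeName namespace are assumptions, not taken from the original page:

<asp:TextBox ID="txtSearch" runat="server" />
<asp:Button ID="btnSearch" runat="server" Text="Search" />

<asp:ObjectDataSource ID="ObjectDataSource1" runat="server"
    TypeName="RSSObjectDataSource.RSSDataSource"
    SelectMethod="GetFeeds"
    EnableCaching="true" CacheDuration="21600">
    <SelectParameters>
        <asp:ControlParameter Name="searchTerm" ControlID="txtSearch"
            PropertyName="Text" Type="String" />
    </SelectParameters>
</asp:ObjectDataSource>

<asp:GridView ID="GridView1" runat="server"
    DataSourceID="ObjectDataSource1" AutoGenerateColumns="true" />

The GridView binds to the ObjectDataSource purely through DataSourceID, which is what lets the page work with an empty code-behind class.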
http://www.nullskull.com/a/732/basics-objectdatasource-control.aspx
CC-MAIN-2015-35
en
refinedweb
Difference between revisions of "EUG:How to Contribute" Revision as of 19:00, 30 October 2010 How to Contribute to the Eclipse Users Guide If you create a new wiki page in this manual. You must precede it with the UEG: manual namespace. Click. Yuo only have to type the horizontal line, the title will be automatically inserted without the namespace.
http://wiki.eclipse.org/index.php?title=EUG:How_to_Contribute&diff=225641&oldid=225640
CC-MAIN-2015-35
en
refinedweb
problem saving modified UV data

hi there! I am trying to assign new UV coordinates to a face via python and have a problem: after writing my UV data nothing happens, blender does not see or recognize the changes I make. when rerunning the script the old values are there again. here's a small piece of code:

import Blender
mesh = Blender.NMesh.GetRawFromObject('Plane')
face = mesh.faces[0]
# that prints me the original uv data as set manually in the UV editor
print face.uv
del face.uv[:]
# that seems to work - I get a [ ]
print face.uv
face.uv.append((0.0, 1.0))
face.uv.append((0.0, 0.0))
face.uv.append((1.0, 0.0))
face.uv.append((1.0, 1.0))
# that seems to work as well - face.uv has all the values above now
print face.uv
# do I need this?
mesh.update()

well - after running the script nothing changes and when I run it one more time I see that it starts with old values - blender did not remember the changes I made. I guess I am doing something wrong or missing something, but I was not able to figure out what; any help is appreciated :) thanks! Jin

never mind :) just figured it out! baaah, spent almost 4 hours yesterday :) it works when using the Mesh module, it does not work with NMesh, that's all :)
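For readers hitting the same wall: the fix mentioned above is to use the Mesh module instead of NMesh. A rough, untested sketch against the old Blender 2.4x Python API (attribute and function names recalled from memory, so treat them as assumptions) might look like this:

import Blender
from Blender import Mesh, Window
from Blender.Mathutils import Vector

me = Mesh.Get('Plane')    # the mesh datablock itself, not an NMesh copy
me.faceUV = True          # make sure the mesh has a face UV layer

f = me.faces[0]
# one UV pair per face vertex; Mesh writes straight into the datablock
f.uv = [Vector(0.0, 1.0), Vector(0.0, 0.0), Vector(1.0, 0.0), Vector(1.0, 1.0)]

me.update()
Window.RedrawAll()

The key difference is that NMesh hands the script a copy of the mesh data, while the Mesh module edits the underlying datablock directly, so the assigned UVs persist.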
http://www.blender.org/forum/viewtopic.php?t=968&view=next
CC-MAIN-2015-35
en
refinedweb
Performing Data-Related Tasks by Using Code You can accomplish many data-related design tasks by using the designers and tool windows in Visual Studio LightSwitch. However, you must add code to an application to accomplish certain tasks. For example, you must write code to validate a field by applying custom conditions. This document shows how to accomplish data-related tasks by using the data runtime object model. For more information about where you can write code in an application, see the following topics: The following list shows common data-related tasks that you can accomplish by using the data runtime object model. The tasks are described later in this document. You can read individual data items or collections of data items from any data source in your application. The following example retrieves the customer that is currently selected in a screen. The following example iterates over a collection of customers. You can read data from related entities. For example, a Customer entity might have a one-to-many relationship with an Orders entity. You could iterate over all orders that have been placed by a customer by using the Orders property of the Customer entity. The following example iterates over the collection of orders that are related to a customer. The following example gets the customer who placed a specific order. You can retrieve queries from the model and then execute them in your code. To view an example, see How to: Retrieve Data from a Query by Using Code. You can update data for any entity by using code. The following example shows code that runs when a user creates an order in the Order entity in a screen and then clicks the Save button. The code updates a field in the Products entity by using a field in the Order Details entity. The following example adds a new customer to the NorthwindData data source. This example populates the fields that describe the new customer by using information from a contact that was recently added to a SharePoint list. The example calls a query named NewCustomersInSharePoint to determine which contacts in the SharePoint list have not yet been imported to the NorthwindData data source. partial void ImportCustomers_Execute() { foreach (SharePointCustomer spCust in this.DataWorkspace.SharePointData.NewCustomersInSharePoint()) { Customer newCust = new Customer(); newCust.ContactName = spCust.FirstName + " " + spCust.LastName; newCust.Address = spCust.Address; newCust.City = spCust.City; newCust.PostalCode = spCust.PostalCode; newCust.Region = spCust.Region; //Set the CopiedToDatabase field of the item in SharePoint. spCust.CopiedToDatabase = "Yes"; } this.DataWorkspace.SharePointData.SaveChanges(); } Typically, pending changes are committed to a data source when the user clicks the Save button in a screen. However, you can also commit pending changes by adding code that calls the SaveChanges method of a data source. You must add this code if you want to accomplish either of these tasks: Commit changes that you make to data that is located in other data sources. Override the Save event of a screen. If your application includes screens that combine data from multiple data sources, you must also add code to specify the order in which you want to update those sources. The files in which you write custom code have a primary data source. If you add custom code that modifies data from another data source in your LightSwitch solution, you must commit those changes by calling the SaveChanges method of that data source. 
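A minimal sketch of that pattern follows; the screen collection, entity fields, and the second data source name (ProductsData) are assumptions rather than parts of any real project:

partial void UpdateStock_Execute()
{
    // Adjust an entity that lives in a second, non-primary data source.
    Order_Detail detail = this.OrderDetails.SelectedItem;
    detail.Product.UnitsInStock = detail.Product.UnitsInStock - detail.Quantity;

    // The screen's Save button only commits the primary data source, so the
    // Products entity's data source must be committed explicitly.
    this.DataWorkspace.ProductsData.SaveChanges();
}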
The following example shows code that runs when a user creates an order in an Order entity in a screen and then clicks the Save button. The code updates a field in the Products entity by using a field in the Order Details entity. Because the Products entity is located in another data source, this code calls the SaveChanges method of that data source to commit the changes.

You can change the behavior of the Save button on a screen by overriding the Save event. Because you are replacing the behavior of the Save button, your code must call the SaveChanges method when you want to commit pending changes. The following example overrides the Save event of a customer screen to catch and handle a specific exception that might be thrown if the save operation fails.

If your application includes a screen that combines data from multiple sources, you must add each of those data sources to the saveChangesTo list in the screen's InitializeDataWorkspace method and then explicitly call SaveChanges on each of them in the screen's Saving method, as the following example shows.

Public Class MyScreen

    Private Sub MyScreen_InitializeDataWorkspace(
        saveChangesTo As System.Collections.Generic.List(Of Microsoft.LightSwitch.IDataService))
        saveChangesTo.Add(Me.DataWorkspace.AppData1)
        saveChangesTo.Add(Me.DataWorkspace.AppData2)
    End Sub

    Private Sub MyScreen_Saving(ByRef handled As Boolean)
        Me.DataWorkspace.AppData1.SaveChanges()
        Me.DataWorkspace.AppData2.SaveChanges()
        handled = True
    End Sub

End Class

public class MyScreen
{
    private void MyScreen_InitializeDataWorkspace(System.Collections.Generic.List<Microsoft.LightSwitch.IDataService> saveChangesTo)
    {
        saveChangesTo.Add(this.DataWorkspace.AppData1);
        saveChangesTo.Add(this.DataWorkspace.AppData2);
    }

    private void MyScreen_Saving(ref bool handled)
    {
        this.DataWorkspace.AppData1.SaveChanges();
        this.DataWorkspace.AppData2.SaveChanges();
        handled = true;
    }
}

You can apply custom validation rules to the fields of an entity. You can add custom error messages that appear when users modify the value of properties in ways that do not conform to your validation rules. For more information, see How to: Validate Data in a LightSwitch Application.

By default, all users can view, insert, delete, or update data that appears in a screen. However, you can restrict these permissions by adding code to one of the following methods: CanRead, CanInsert, CanDelete, CanUpdate. If you restrict an operation by using these methods, LightSwitch makes the operation unavailable to users who do not have unrestricted permissions. For more information, see How to: Handle Data Events.

The following example enables a user to update customer information if the user has update permission. This code example requires a permissions group named RoleUpdate. For more information about how to add a permissions group to your application, see Enabling Authorization and Creating Permissions.

By default, LightSwitch calls these methods when a user attempts to view, insert, delete, or update information. You can also call these methods in your custom code before data is read or modified.

You can identify and discard pending changes before they are committed to a data source. The following example shows three user methods that identify and discard pending changes. The UndoAllCustomerUpdates method discards all changes made to all customers. The UndoAllUpdates method discards all changes made to the data source. The UndoCustomerEdit method discards changes made to the currently selected row of data in a customer screen.

partial void UndoAllCustomerUpdates_Execute()
{
    foreach (Customer cust in this.DataWorkspace.NorthwindData.Details.GetChanges().OfType<Customer>())
    {
        cust.Details.DiscardChanges();
    }
}

partial void UndoAllUpdates_Execute()
{
    this.DataWorkspace.NorthwindData.Details.DiscardChanges();
}

partial void UndoCustomerEdit_Execute()
{
    Customers.SelectedItem.Details.DiscardChanges();
}

If you want to modify a query beyond the capabilities of the Query Designer, you can extend the query by adding code to the PreProcessQuery method of the query. For more information, see How to: Extend a Query by Using Code.
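The access-control and PreProcessQuery examples referred to above are not reproduced in this extract. The sketch below is a hypothetical reconstruction, not the article's original code: it assumes an entity set named Customers, a modeled query named SortedCustomers, Customer fields named City and ContactName, and the RoleUpdate permission mentioned above, following LightSwitch's EntitySetName_CanUpdate and QueryName_PreProcessQuery naming conventions. These partial methods belong in the data service's code file.

// Hypothetical sketch; entity, query, field, and permission names are assumptions.
partial void Customers_CanUpdate(ref bool result)
{
    // Allow updates only for users who hold the RoleUpdate permission.
    result = this.Application.User.HasPermission(Permissions.RoleUpdate);
}

partial void SortedCustomers_PreProcessQuery(ref IQueryable<Customer> query)
{
    // Extend the modeled query beyond what the Query Designer supports,
    // for example by adding an extra ordering clause.
    query = query.OrderBy(c => c.City).ThenBy(c => c.ContactName);
}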
https://msdn.microsoft.com/en-us/library/ff851990(v=vs.110).aspx
CC-MAIN-2015-35
en
refinedweb
RAII (Resource Acquisition Is Initialization) is an incredibly important technique in C++ (and in D and Ada) which helps prevent memory leaks. Without it the developer would have to manually free memory once it is no longer needed, which quickly causes issues with exception safety. RAII is discussed in detail in [The C++ Programming Language, Bjarne Stroustrup] so I will not go into too much depth here. The idea is that the destructors of all objects residing on the stack are called as they go out of scope (perhaps due to propagation of an exception during stack unwinding). These destructors then contain any important cleanup code such as stopping a thread, closing a file etc...

In order to take advantage of RAII, any C libraries must be wrapped in C++ classes first. An example is GTK+, which provides an official C++ binding called gtkmm. I have also found that I needed to do the same with the Motif library in order to safely use it with C++ in the OpenCDE project (motifmm). This was a large amount of work and is nowhere near complete. There is also a downside: since other developers have started to join the project and some are more experienced with C than C++ or vice versa, it is quite awkward for code reuse to take place. For example a C++ .ini parser can only be used by the C++ developers, and a C .ini parser can only safely be used by the C developers. What I have been looking for is a way that both the C and C++ developers can correctly use the same code.

I have been looking at two ideas which are slightly similar to what I require: the C++ auto_ptr template class and the GNU C compiler's extension of the cleanup variable attribute. The auto_ptr is one of the earliest and simplest of C++ smart pointers that allow for RAII to take place on objects stored on the heap or that use file resources etc... However it cannot be used with C code at all, so the following would not make any sense.

std::auto_ptr<FILE> file;
file.reset(fopen("example.txt", "r"));
// Prepare for a crash!

This compiles and runs but results in undefined behavior. What would happen behind the scenes is that delete would be called on our FILE pointer, which is obviously incorrect and should have been fclose(). It probably wouldn't work even if free() correctly cleans up the memory since delete and free() are generally not interchangeable in the majority of C++ compilers (I have only seen it work on a few select versions of GCC). So it seems that the smart pointer needs a way of explicitly knowing how to clean up the data of the contained pointer rather than just making an (often incorrect) assumption. This brings us onto the GNU C Compiler extension.

#define RAII_VARIABLE(vartype,varname,initval,dtor) \
    void _dtor_ ## varname (vartype * v) { dtor(*v); } \
    vartype varname __attribute__((cleanup(_dtor_ ## varname))) = (initval)

void example_usage()
{
    RAII_VARIABLE(FILE*, logfile, fopen("logfile.txt", "w+"), fclose);
    fputs("hello logfile!", logfile);
}

Though implemented with a slightly complex macro and not portable between C compilers, this does work. The important thing to note is that an fclose() function pointer has been passed in, so the cleanup method is explicitly stated. If this can be implemented with a C++ template class, it will allow our new smart pointer class to work with most C++ compilers available rather than being a compiler specific extension.

I have implemented a "toy" version of a template class which should demonstrate the idea discussed above. It has been kept deliberately simple and as such does not throw any kind of exception when reset() fails (for example if fopen() returns NULL).

#ifndef C_PTR_H
#define C_PTR_H

#include <iostream>

template<class T, class R>
class c_ptr
{
private:
    T t;
    R (*func)(T);

    void clean()
    {
        if(func != 0 && t != 0)
        {
            (*func)(t);
            t = 0;
            func = 0;
        }
    }

public:
    c_ptr() { t = 0; func = 0; }
    ~c_ptr() { clean(); }

    void reset(T t, R (*func)(T))
    {
        clean();
        this->t = t;
        this->func = func;
    }

    T get() { return t; }
};

#endif

With the above template class, an example of working with files the "C way" rather than std::ofstream is demonstrated.

c_ptr<FILE*, int> file1;   // fclose returns int, hence the second template argument
c_ptr<FILE*, int> file2;

file1.reset(fopen("example.txt", "w+"), fclose);
if(file1.get() == NULL)
{
    return;
}

file2.reset(fopen("test.txt", "w+"), fclose);
if(file2.get() == NULL)
{
    // No need to clean up file1
    return;
}

fputs("hello world!", file1.get());

// No need to do any cleanup

With this, as the smart pointer container (c_ptr) goes out of scope, fclose is called on the contained pointer rather than the more generic free() or delete. Note that a FILE* pointer is explicitly specified as the template argument rather than FILE; this is because sometimes the data is not handled via a pointer, such as an OpenGL texture (GLuint) or a socket descriptor (int). The above c_ptr implementation will need improvements, such as support for cleanup functions that require more than one parameter (XtDestroyWidget(XtDisplay(widget), widget)). This is made even worse because the secondary parameter (XtDisplay*) can only be derived from the primary parameter (Widget, the one that needs cleanup) but that doesn't yet exist when passing in the cleanup function and parameters. (Luckily XtDestroyWidget() only requires one parameter; I had to lie a little bit to show the worst-case scenario.)
https://www.ibm.com/developerworks/mydeveloperworks/blogs/karsten/?lang=en
CC-MAIN-2015-35
en
refinedweb
With the release of Visual Studio 2013, the ASP.NET team introduced a new ASP.NET Identity system, and you can read more about that release here. Following up on the article to migrate web applications from SQL Membership to the new Identity system, this article illustrates the steps to migrate existing applications that follow the Providers model for user and role management to the new Identity model. The focus of this tutorial is primarily on migrating the user profile data to seamlessly hook it into the new system. Migrating user and role information is similar for SQL Membership, and the approach followed to migrate profile data can be used in an application with SQL Membership as well.

As an example, we will start with a web app created using Visual Studio 2012 which uses the Providers model. We'll then add code for profile management, register a user, add profile data for the users, migrate the database schema, and then change the application to use the Identity system for user and role management. As a test of migration, users created using Universal Providers should be able to log in and new users should be able to register. You can find the complete sample at.

Profile data migration summary

Before starting with the migrations, let us look at the experience of storing profile data in the Providers model. Profile data for application users can be stored in multiple ways, the most common among them being the inbuilt profile providers shipped along with the Universal Providers. The steps would include:

- Add a class that has properties used to store profile data.
- Add a class that extends 'ProfileBase' and implements methods to get the above profile data for the user.
- Enable the default profile providers in the web.config file and define the class declared in step #2 to be used in accessing profile information.

The profile information is stored as serialized xml and binary data in the 'Profiles' table in the database. After migrating the application to use the new ASP.NET Identity system, the profile information is deserialized and stored as properties on the user class. Each property can then be mapped onto columns in the user table. The advantage here is that the properties can be worked on directly using the user class, in addition to not having to serialize/deserialize the information each time it is accessed.

Getting Started

- Create a new ASP.NET 4.5 Web Forms application in Visual Studio 2012. The current sample uses the Web Forms template, but you could use MVC Application as well.
- Create a new folder 'Models' to store profile information.
- As an example, let us store the date of birth, city, height and weight of the user in profile. The height and weight are stored as a custom class called 'PersonalStats'. To store and retrieve the profile, we need a class that extends 'ProfileBase'. Let's create a new class 'AppProfile' to get and store profile information.

public class ProfileInfo
{
    public ProfileInfo()
    {
        UserStats = new PersonalStats();
    }

    public DateTime? DateOfBirth { get; set; }
    public PersonalStats UserStats { get; set; }
    public string City { get; set; }
}

public class PersonalStats
{
    public int? Weight { get; set; }
    public int? Height { get; set; }
}

public class AppProfile : ProfileBase
{
    public ProfileInfo ProfileInfo
    {
        get { return (ProfileInfo)GetPropertyValue("ProfileInfo"); }
    }

    public static AppProfile GetProfile()
    {
        return (AppProfile)HttpContext.Current.Profile;
    }

    public static AppProfile GetProfile(string userName)
    {
        return (AppProfile)Create(userName);
    }
}

- Enable profile in the web.config file. Enter the class name, created in step #3, to be used to store/retrieve user information.

<profile defaultProvider="DefaultProfileProvider" enabled="true" inherits="UniversalProviders_ProfileMigrations.Models.AppProfile">
  <providers>
    .....
  </providers>
</profile>

- Add a web forms page in the 'Account' folder to get the profile data from the user and store it. Right click on the project and select 'Add new Item'. Add a new webforms page with master page, 'AddProfileData.aspx'. Copy the following into the 'MainContent' section:

<h2>Add Profile Data for <%# User.Identity.Name %></h2>
<asp:Label runat="server" />
<div>
    Date of Birth: <asp:TextBox runat="server" ID="DateOfBirth" />
</div>
<div>
    Weight: <asp:TextBox runat="server" ID="Weight" />
</div>
<div>
    Height: <asp:TextBox runat="server" ID="Height" />
</div>
<div>
    City: <asp:TextBox runat="server" ID="City" />
</div>
<div>
    <asp:Button runat="server" ID="Add" Text="Add" OnClick="Add_Click" />
</div>

Add the following code in the code-behind:

protected void Add_Click(object sender, EventArgs e)
{
    AppProfile profile = AppProfile.GetProfile(User.Identity.Name);
    profile.ProfileInfo.DateOfBirth = DateTime.Parse(DateOfBirth.Text);
    profile.ProfileInfo.UserStats.Weight = Int32.Parse(Weight.Text);
    profile.ProfileInfo.UserStats.Height = Int32.Parse(Height.Text);
    profile.ProfileInfo.City = City.Text;
    profile.Save();
}

Add the namespace under which the AppProfile class is defined to remove the compilation errors.

- Run the app and create a new user with username 'olduser'. Navigate to the 'AddProfileData' page and add profile information for the user. You can verify that the data is stored as serialized xml in the 'Profiles' table using the Server Explorer window. In Visual Studio, from the 'View' menu, choose 'Server Explorer'. There should be a data connection for the database defined in the web.config file. Clicking on the data connection shows different sub categories. Expand 'Tables' to show the different tables in your database, then right click on 'Profiles' and choose 'Show Table Data' to view the profile data stored in the Profiles table.

Migrating database schema

To make the existing database work with the Identity system, we need to update the schema in the Identity database to support the fields we added to the original database. This can be done using SQL scripts to create new tables and copy the existing information.

- In the 'Server Explorer' window, expand the 'DefaultConnection' to display the tables. Right click Tables and select 'New Query'.
- Paste the SQL script from and run it. If the 'DefaultConnection' is refreshed, we can see that the new tables are added. You can check the data inside the tables to see that the information has been migrated.

Migrating the application to use ASP.NET Identity

- Install the Nuget packages needed for ASP.NET Identity:
  - Microsoft.AspNet.Identity.EntityFramework
  - Microsoft.AspNet.Identity.Owin
  - Microsoft.Owin.Host.SystemWeb
  - Microsoft.Owin.Security.Facebook
  - Microsoft.Owin.Security.Google
  - Microsoft.Owin.Security.MicrosoftAccount
  - Microsoft.Owin.Security.Twitter
- We will be using the existing classes for role, user logins and user claims. We need to use a custom user for our sample. Right click on the project and create a new folder 'IdentityModels'.
Add a new 'User' class as shown below:

using Microsoft.AspNet.Identity.EntityFramework;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;
using UniversalProviders_ProfileMigrations.Models;

namespace UniversalProviders_Identity_Migrations
{
    public class User : IdentityUser
    {
        public User()
        {
            CreateDate = DateTime.UtcNow;
            IsApproved = false;
            LastLoginDate = DateTime.UtcNow;
            LastActivityDate = DateTime.UtcNow;
            LastPasswordChangedDate = DateTime.UtcNow;
            Profile = new ProfileInfo();
        }

        public System.Guid ApplicationId { get; set; }
        public bool IsAnonymous { get; set; }
        public System.DateTime? LastActivityDate { get; set; }
        // Additional membership-related properties referenced in the constructor.
        public System.DateTime CreateDate { get; set; }
        public bool IsApproved { get; set; }
        public System.DateTime LastLoginDate { get; set; }
        public System.DateTime LastPasswordChangedDate { get; set; }
        public string Email { get; set; }
        public ProfileInfo Profile { get; set; }
    }
}

Notice that 'ProfileInfo' is now a property on the user class. Hence we can use the user class to work directly with profile data.

Copy the files in the IdentityModels and IdentityAccount folders from the download source ( ). These have the remaining model classes and the new pages needed for user and role management using the ASP.NET Identity APIs. The approach used is similar to the SQL Membership one, and the detailed explanation can be found here.

Copying Profile data to the new tables

As mentioned earlier, we need to deserialize the xml data in the Profiles table and store it in the columns of the AspNetUsers table. The new columns were created in the users table in the previous step, so all that is left is to populate those columns with the necessary data. To do this, we will use a console application which is run once to populate the newly created columns in the users table.

- Create a new console application in the existing solution.
- Install the latest version of the Entity Framework package.
- Add the web application created above as a reference to the console application. To do this right click on the project, then 'Add References', then Solution, then click on the project and click OK.
- Copy the code below into the Program.cs class. This logic reads the profile data for each user, deserializes it into a 'ProfileInfo' object and stores it back to the database.

public class Program
{
    public static void Main(string[] args)
    {
        var dbContext = new ApplicationDbContext();
        foreach (var profile in dbContext.Profiles)
        {
            var stringId = profile.UserId.ToString();
            var user = dbContext.Users.Where(x => x.Id == stringId).FirstOrDefault();
            Console.WriteLine("Adding Profile for user:" + user.UserName);

            var serializer = new XmlSerializer(typeof(ProfileInfo));
            var stringReader = new StringReader(profile.PropertyValueStrings);
            var profileData = serializer.Deserialize(stringReader) as ProfileInfo;
            if (profileData == null)
            {
                Console.WriteLine("Profile data deserialization error for user:" + user.UserName);
            }
            else
            {
                user.Profile = profileData;
            }
        }
        dbContext.SaveChanges();
    }
}

Some of the models used are defined in the 'IdentityModels' folder of the web application project, so you must include the corresponding namespaces.

- The above code works on the database file in the App_Data folder of the web application project created in the previous steps. To reference that, update the connection string in the app.config file of the console application with the connection string from the web.config of the web application. Also provide the complete physical path in the 'AttachDbFilename' property.
- Open a command prompt and navigate to the bin folder of the above console application. Run the executable and review the log output as shown in the following image.
- Open the 'AspNetUsers' table in the Server Explorer and verify the data in the new columns that hold the properties. They should be updated with the corresponding property values.

Verify functionality

Use the newly added membership pages that are implemented using ASP.NET Identity to log in a user from the old database. The user should be able to log in using the same credentials. Try the other functionality such as adding OAuth, creating a new user, changing a password, adding roles, adding users to roles, etc. The profile data for the old user and the new users should be retrieved and stored in the users table. The old table should no longer be referenced.

Conclusion

The article described the process of migrating web applications that used the provider model for membership to ASP.NET Identity. The article additionally outlined migrating profile data for users to be hooked into the Identity system. Please leave comments below for questions and issues encountered when you migrate your app. Thanks to Rick Anderson and Robert McMurray for reviewing the article. This article was originally created on December 13, 2013
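To complement the 'Verify functionality' step above with something scriptable, the migrated profile columns can also be inspected in code through the same ApplicationDbContext and User classes used by the migration console app. The snippet below is only a hedged sketch, not part of the original article: it assumes the console project references the web application (as described earlier) and reuses the 'olduser' account created at the start of the walkthrough.

// Hypothetical spot-check of the migrated profile data; the class names come
// from the article's IdentityModels folder, everything else is an assumption.
using System;
using System.Linq;

class ProfileSpotCheck
{
    static void Main()
    {
        using (var db = new ApplicationDbContext())
        {
            var migrated = db.Users.FirstOrDefault(u => u.UserName == "olduser");
            if (migrated != null)
            {
                // These values should match what was entered on AddProfileData.aspx.
                Console.WriteLine(migrated.Profile.City);
                Console.WriteLine(migrated.Profile.DateOfBirth);
                Console.WriteLine(migrated.Profile.UserStats.Height);
            }
        }
    }
}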
http://www.asp.net/identity/overview/migrations/migrating-universal-provider-data-for-membership-and-user-profiles-to-aspnet-identity
CC-MAIN-2015-35
en
refinedweb
Howdy, I am messing with while loops and cannot figure out how to get the following while statements to work.

______Code_____

//---------------------------------------------------------------------------
/* Divide 2 numbers
   1) Show the decimal value and
   2) Show fractional value with remainder */
#include <vcl.h>
#include <iostream.h>
#include <conio.h>
#include <stdio.h>

int a, b, c, d, e;                 //declare variables
float divide (int c, int d);       //function prototype

int main(int argc, char* argv[])
{
    divide (c, d);                 //call divide function
    getchar();
    return 0;
}

float divide (int c, int d)
{
    double f, g;
    float h;
    int remainder;

    cout << "\nEnter a value for C: ";
    cin >> c;
    while (c<1)
    {
        cout << "must be more than 0!! \n";
        cout << " Enter a value for C: ";
        cin >> c;
    }

    cout << "\nEnter a value for D: ";
    cin >> d;
    while (d<1)
    {
        cout << "must be more than 0!! \n";
        cout << " Enter a value for D: ";
        cin >> d;
    }

    e = c/d;
    f = c;      //change c to double???
    g = d;      //change d to double???
    h = f/g;    //get value as double to print as decimal
    cout << "\nDecimal value: " << h << "\n";

    if (e<1)
    {
        cout << c << "/" << d << " is less than 1: \n";
        return 0;
    }

    remainder = c%d;
    if (e<1 && remainder<1)
    {
        cout << "Rem is less than 1: \n";
        return 0;
    }
    else;

    cout << c << "/" << d << "=" << e << " with a remainder of " << remainder;
    return 0;
}

Clearly, when a decimal value is entered for variable c or d I get an infinite loop. My question is: how can I test for values less than 1, for example .01 or .23, and get the result I'm looking for, "Tell the user to enter a larger value"? Thanks for any help. M.R.
http://cboard.cprogramming.com/cplusplus-programming/5216-while-some-value-loops-printable-thread.html
CC-MAIN-2015-35
en
refinedweb
NAME
tzfile - time zone information

SYNOPSIS
#include <tzfile.h>

DESCRIPTION
This page describes the structure of timezone files as commonly found in /usr/lib/zoneinfo or /usr/share/zoneinfo.

NOTES
This manual page documents <tzfile.h> in the glibc source archive, see timezone/tzfile.h. It seems that timezone uses tzfile internally, but glibc refuses to expose it to userspace. This is most likely because the standardised functions are more useful and portable, and actually documented by glibc. It may only be in glibc just to support the non-glibc-maintained
http://manpages.ubuntu.com/manpages/lucid/man5/tzfile.5.html
CC-MAIN-2015-35
en
refinedweb
Hello everybody, I just started to learn C, so I hope you don't mind that I will post here the newbie problems I run into. I am learning C as a hobby, just as I learned Basic, Pascal and VB(A) in the past. So now I want to learn the "famous C" :P. Well, here we go....

You can use "strncpy" to copy a number of characters from one string to another. That is handy, but what I really would like is to copy a number of characters from a given position in the source string to the destination string. I tried some things.... Having in mind that a string in C is an array of char, I tried this:

strncpy(dest_str, source_str[3], 10)

I hoped that strncpy would start at position 4 of the source string and copy 10 characters. But when I compiled the program, the compiler told me that it hated me and stopped :P. Then I thought that maybe a pointer could help me. So I tried the next:

Code:
#include <stdio.h>
#include <string.h>

int main()
{
    char regel1[30];
    char regel2[30];
    char *ptr;

    strcpy(regel1, "Programing is fun and relaxing to do");

    ptr = &regel1[10];
    strncpy(regel2, *ptr, 3);
}

But after that I got a casting error.

Okay, so how to solve this problem... can I use "strncpy" at all to get done what I want? I mean, something like that would be nice for other functions such as "memset" too.

Thank you already for your answers and advice.

Joke.

(BTW the name Joke is no joke :). It is a very common Dutch girls' name and is pronounced as "jo-ke" and not as the English word joke).
http://cboard.cprogramming.com/c-programming/105151-strncpy-adavnced-p-printable-thread.html
CC-MAIN-2015-35
en
refinedweb
I need Oracle connector jar file. where will i download? I need Oracle connector jar file. where will i download? I need Oracle connector jar file. where will i download Connecting Oracle database with struts - Struts Connecting Oracle database with struts Can anyone please provide me some solutions on Connection between Oracle database and struts java connecting to oracle db - JDBC java connecting to oracle db how to connect oracle data base... the following packages in your java file:*********** import java.sql.*; import...) Connect to database:*********** a) If you are using oracle oci driver,you connecting to database - Struts connecting to database Hi I am having problems with connection to MS SQL Server 2005 database. My first is what do i write in struts-configuration.xml file that enable me to use methods in the model class to display connecting servlet to db2 - JSP-Servlet ie classes111.jar file in the classpath. 4)copy the abou said jar file in lib... file in oracle software after installation. I think it is in jdbc folder...connecting servlet to db2 Hello sir, Iam new to db2.so I would like Download Struts Learn how to Download Struts for application development or just for learning the new version of Struts. This video tutorial shows you how you can download struts and save on your computer. This easy to understand download struts shows index MySQL Database Tutorials Oracle Database Tutorials Structured java connecting to oracle db java connecting to oracle db PLZ SAY ME HOW TO INSERT THE VALUES INTO ORACLE THIS IS THE CODE: import java.io.*; import java.sql.*; import..."); Connection con=DriverManager.getConnection("jdbc:oracle:thin Connecting to MYSQL Database in Java -java-5.0.8-bin.jar file in jdk lib? If not then put it. For more information, visit...Connecting to MYSQL Database in Java I've tried executing the code...-java-5.1.17-bin.jar. Does the version really matter? I copied the connector to jdk how to download oracle 9i how to download oracle 9i how to download oracle 9i Oracle 9i free download Oracle 9i free download where to download oracle 9i for database connectivity in j2ee requesting for a jar file - Development process requesting for a jar file Sir Please send me a jar file of this sir , i need this package jar file org.apache.poi.hssf.usermodel for Excel reading..., You need to install Jakarta api. Download it from the following link jar file jar file jar file where it s used jar file jar file how to create a jar file in java jar file jar file steps to create jar file with example connecting jsp to mysql - JSP-Servlet ; One jar file is needed to connect java with mysql data base. That can...connecting jsp to mysql Hi, i am working on 'Web application development' project that uses JSP, MySQL and tomcat.i am not able to connect down load jar file - JavaMail down load jar file i want to down load james2.1.3 file.where is location it got it. Hi Friend, You can download James Server from the following link: Thanks JAR FILE JAR FILE WHAT IS JAR FILE,EXPLAIN IN DETAIL? A JAR file... and applications. The Java Archive (JAR) file format enables to bundle multiple... link: Jar File Explanation Download. Download. I need a swing program to download a file from the server oracle oracle sir now am doing one project , my frond end is vb and backend is oracle. 
so 1> how can i store the image in my field 2> how can i back up the table into .txt file download jar - Framework download jar hi somebody could ya tell me from where i can get portal-ejb.jar Download Hibernate 4 /files/ When you will download the hibernate 4 jar file you should be care about... downloaded the zip file. When your download will be completed you will be found a zip file. To use the jar files you will have to unzip the downloaded file. You connecting with database - Struts connecting with database I am creating an application where when jsp page is displayed, it contains the combo box where data is populated from the database.it has 3 buttons and the functionality for all buttons is different jar file jar file how to run a java file by making it a desktop icon i need complete procedur ..through cmd> < How to use JAR file in Java do not use JAR file, then they will have to download every single file one... in JAR files, then you just need to download one single file and Run it. When you... to digitally sign the JAR file. Users who want to use the secured file can check Creating a JAR file in Java Creating a JAR file in Java This section provides you to the creating a jar file.... This program has been used the jar command "jar cf jar-file-name directory Java Jar File Java Jar File In Java, JAR (Java ARchive) is a platform-independent file format that allows you... and sounds) into a single JAR file. It supports compression, which reduces the file Viewing contents of a JAR File Viewing contents of a JAR File This section shows you how you can read the content of jar file... the contents of the jar file without extracting it. You can easily understand Java JAR Files ; A JAR file is a collection of class files and auxiliary resources associated with applets and applications. The Java Archive (JAR) file format enables to bundle multiple files into a single archive file. Typically a JAR file format Struts file uploading - Struts Struts file uploading Hi all, My application I am uploading... can again download the same file in future. It is working fine when I... in the database without breaking and user should be able to download the file Downloading Struts & Hibernate ; In this we will download Struts & Hibernate.... Download Struts The latest release of Struts can be downloaded from http... called Struts-Hibernate-Integration. 2. Unzip Downloaded file Shopping Cart Index Page Modules in a application Working Of An Application Download Source Code Download Source Code (As WAR file Listing the Main Attributes in a JAR File Manifest Listing the Main Attributes in a JAR File Manifest Jar Manifest: Jar Manifest file is the main section of a jar file. This file contains the detailed information about Extract Jar File Extract Jar File How to extract jar file? Hi Please open the command Prompt and to the jar file and write the command jar -xvf JaraFileName.jar Download Quartz Job Scheduler , QuartzService. quartz-oracle-<ver>.jar optional Oracle... Download Quartz Job Scheduler In this section we will download Quartz Job Scheduler from java file upload in struts - Struts java file upload in struts i need code for upload and download file using struts in flex.plese help me Hi Friend, Please visit the following links: http Explain struts.jar file - Struts Explain struts.jar file Hi friends am new to java. I read jar file means collection of java files. For executing struts application what are the necessary jar files. " struts.jar " file contains what. 
can u explain jar file - Java Beginners jar file jar file When creating a jar file it requires a manifest... options in jar file. What is Jar File? JAR files are packaged in the zip format What is Jar File?JAR files are packaged in the zip format making Hibernate- Oracle connection - Hibernate on databaseconnection --> New Oracle Added ojdbc14 Jar file path UID...Hibernate- Oracle connection In Eclipse I tried Windows -->... to make a connection to oracle DB from eclipse. Thanks org.apache.commons.collections15.Transformer jar file org.apache.commons.collections15.Transformer jar file Dear, Please can you provide me the link for the following jar file: import org.apache.commons.collections15.Transformer; tahnks too much Connecting to a MySQL Database in Java Connecting to a MySQL Database in Java  ... for a manipulation. We have many database provided like Oracle, MySQL etc. We... will learn how to connect the MySQL database with the Java file. Firstly, we need Hibernate required jar files - Hibernate Hibernate required jar files Hi, What are the jar files... how can set in environment variables? Also give me the download location of the jar files. Thanks in advance Thanks Mohan Hi Friend jar file with html jar file with html I have a jar file. On double click it shows an applet page. Please tell me how to connect that applet into html or jsp page File insertion into oracle database File insertion into oracle database How to Read and Insert a file (any format) into a Oracle database Struts - Struts Struts hi, I am new in struts concept.so, please explain example login application in struts web based application with source code . what are needed the jar file and to run in struts application ? please kindly php download file script php download file script PHP script to download file in IE How to create a jar file ??? The given code creates a jar file using java. import java.io.*; import...(); System.out.println("Jar File is created successfully."); } catch (Exception ex...How to create a jar file Hello!!!! I have a project which has How to download, compile and test the tutorials using ant. Compiling and running Struts 2.1.8 examples with ant In this section we will learn how to download and run Struts 2.1.8 examples discussed here. You will have to install ant php download file code php download file code PHP code to download the files from the remote server how to download file how to download file How do I let people download a file from my page Apache HSSF Jar file location Apache HSSF Jar file location where can i get jar files for apache hssf What is a JAR file in Java What is a JAR file in Java JAR file is the compressed file format. You can store many files in a JAR file. JAR stands for the Java Archive. This file format is used - Connecting JTable to database - JDBC Connecting JTable to database Hi.. I am doing a project on Project...; int index=1; int count=table.getRowCount(); try{ Class.forName... 'index' variable. Thanks conversion of war file into executable jar file conversion of war file into executable jar file how to convert war file into executable jar file Reg: Tree view in Struts using ajax - Struts out the tree and treenode attribute belong to which .jar file or tld ? Example... in all struts tld file.. please help me if you know thanks in advance? ... Struts Tag : 1)Struts download site quotion on .jar quotion on .jar in realtime where we use .jar class files. A Jar file combines several classes into a single archive file. 
Basically,library classes are stored in the jar file. For more information,please go through Connecting to MySQL In the lib folder place all required file ie. commons-collections.jar, commons-dbcp.jar, commons-pool.jar, j2ee.jar and mysql-connector-java-5.1.7-bin.jar...-connector-java-5.1.7-bin.jar Output how to convert a jar file into .exe file how to convert a jar file into .exe file hi, I want convert my jar file into executable file,urgently please help me jar File creation - Swing AWT jar File creation I am creating my swing applications. How can i... in java but we can create executable file in java through .jar file.. how can i convert my java file to jar file? Help me Runnable JAR Runnable JAR How to create runnable JAR file in eclipse ? Please provide me step by step demo... I am windows 7 user. I have made one jar file but when I double click it,it doesn't run. Why csv file download csv file download Hello Every one, when user clicks download button I want to retrieve data from mysql database table in csv format using java.if you know please send code. Thanks in Advance Please visit
http://roseindia.net/tutorialhelp/comment/80299
CC-MAIN-2015-35
en
refinedweb
On 18 Jul, 10:31 am, proppy at aminche.com wrote: >I'm working on a twisted implementation of protocol buffers RPC here: > Johan, this is awesome! This is exactly the sort of thing that I had in mind when I was suggesting the 'tx' namespace in the first place; I'm so glad you've released it. Although of course I hope that we can subsume it into Twisted at some later point, since all the code looks nice and TDD... :-)
http://twistedmatrix.com/pipermail/twisted-python/2008-July/018112.html
CC-MAIN-2015-35
en
refinedweb
Details - Type: Bug - Status: Resolved - Priority: Critical - Resolution: Won't Fix - Affects Version/s: 4.1.2 - - Component/s: Core Components - Labels:None - Environment:Tap 4.1.2, Firefox 2.0.0.5 or IE 7 on Win XP SP2, Safari 2.0.4 or Firefox 2.0.0.5 on OS X 10.4.10, served by Tomcat in JBoss 4.2.1. Description Is there a reason why submitType kills its component's listener? For example, whereas this listener works... <input jwcid="@Submit" type="submit" value="Cancel" action="listener:doCancel"> ... the listener in this one is ignored... <input jwcid="@Submit" type="submit" value="Cancel" action="listener:doCancel" submitType="cancel"/> ...and the listener has to be specified on the Form instead... <form jwcid="@Form" cancel="listener:doCancel"> Why is this so? What if I want more than one button of type cancel, each one with its own listener, and possibly its own parameters? I think submitType="refresh" behaves this way, too. Here's a working example: package sandpit; import org.apache.tapestry.html.BasePage; public abstract class CancelPage extends BasePage { public abstract void setMessage1(String value); public abstract void setMessage2(String value); public void doComponentListener(){ setMessage1("doComponentListener() invoked."); } public void doFormListener(){ setMessage2("doFormListener() invoked."); } } <html jwcid="@Shell" title=""> <body jwcid="@Body"> <h2>Demonstration of how submitType="cancel" kills the component's listener</h2> <form jwcid="@Form"> <fieldset> <legend>This form does NOT specify a cancel listener<, which is bad...<br/> <input jwcid="@Submit" type="submit" value="Cancel" submitType="cancel" action="listener:doComponentListener"/> BAD<br/> <code><input jwcid="@Submit" type="submit" value="Cancel" submitType="cancel" action="listener:doComponentListener"/></code><br/><br/> </fieldset> </form> <form jwcid="@Form" cancel="listener:doFormListener"> <fieldset> <legend>This form specifies cancel="listener:doFormListener"<, and the form's listener takes over, which is questionable...<br/> <input jwcid="@Submit" type="submit" value="Cancel" submitType="cancel" action="listener:doComponentListener" /> QUESTIONABLE<br/> <code><input jwcid="@Submit" type="submit" value="Cancel" submitType="cancel" action="listener:doComponentListener"/></code><br/><br/> ...but in this case the form's listener works... <br/> <input jwcid="@Submit" type="submit" value="Cancel" submitType="cancel"/> GOOD<br/> <code><input jwcid="@Submit" type="submit" value="Cancel" submitType="cancel"/></code><br/><br/> </fieldset> </form> Message1: <span style="color: red;"><span jwcid="@Insert" value="ognl:message1">Message</span></span><br/> Message2: <span style="color: blue;"><span jwcid="@Insert" value="ognl:message2">Message</span></span> </body> </html> Activity - All - Work Log - History - Activity - Transitions I'm not entirely sure what can be done about this issue......If you look at the documentation on the Form success/cancel/refresh combined with the knowledge that it's impossible to know that any particular button has been clicked unless you do rewind the whole form - it makes things "tricky". The cancel logic is currently happening because FormSupportImpl reads the submit mode parameter and doesn't even rewind the form. I'm not sure how we can bypass this functionality without breaking existing applications...I'm not really sure. The refresh listeners should have their listeners called correctly - if this isn't happening for you please let me know. 
You can of course very easily just use a normal listener and have it perform whatever cancel logic you need still though right? At the very least the various submit buttons/links should probably throw a runtime exception indicating that you can't have a submit type of "cancel" and a listener/action bound as it will never be called. thoughts? you're right - the contract for the cancel listener has always been that it doesn't rewind the form - as i now recall i'll double check the refresh listener and close this. Looked at what happens with refresh listener of @Submit... It is called only when async=true - it is skipped on normal submits! The javascript for the first case (async) is: tapestry.form.refresh("msg", "cc", ); and for the normal case it is: tapestry.form.refresh('msg','cc') Need to dig some more to see why I don't know if this is the same issue or a new one, but in 4.1.5 a Submit component with submitType="cancel" and no action listener doesn't call the cancel listener of the enclosing Form component. e.g: <form jwcid="@Form" cancel="listener:doCancel"> <input jwcid="@Submit" value="Cancel" submitType="cancel"/> </form This was working on 4.1.3 Alejandro, no! What you're seeing is TAPESTRY-2225 as for "submitType=cancel killing its listener", it happens because there's no rewind taking place in that case. i'm not sure why the cancel submit doesn't call its own listener first - it indeed only calls the forms cancel listener And I need that behavior as well, so ...
https://issues.apache.org/jira/browse/TAPESTRY-1673
CC-MAIN-2015-35
en
refinedweb
SYNOPSIS
#include <signal.h>
int raise(int sig);

DESCRIPTION
The raise() function sends the signal sig to the executing thread. If a signal handler is called, the raise function does not return until after the signal handler returns. The effect of the raise function is equivalent to calling:

pthread_kill(pthread_self(), sig);

See the pthread_kill(3C) manual page for a detailed list of failure conditions and the signal.h(3HEAD) manual page for a list of signals.

RETURN VALUES
Upon successful completion, 0 is returned. Otherwise, -1 is returned and errno is set to indicate the error.

ATTRIBUTES
See attributes(5) for descriptions of the following attributes:

SEE ALSO
pthread_kill(3C), pthread_self(3C), signal.h(3HEAD), attributes(5), standards(5)
http://docs.oracle.com/cd/E19082-01/819-2243/6n4i099fj/index.html
CC-MAIN-2015-35
en
refinedweb
Elixir v0.13.0 released, hex.pm and ElixirConf announced Hello folks! Elixir v0.13.0 has been released. It contains changes that will effectively shape how developers will write Elixir code from now on, making it an important milestone towards v1.0! On this post we are going to cover some of those changes, the road to Elixir v1.0, as well as the announcement of hex.pm. Before we go into the changes, let's briefly talk about ElixirConf! ElixirConf We are excited to announce ElixirConf, the first ever Elixir conference, happening July 25-26, 2014 in Austin, TX. The Call For Proposals is open and we are waiting for your talks! The registration is also open and we hope you will join us on this exciting event. We welcome Elixir developers and enthusiasts that are looking forward to be part of our thrilling community! Summary In a nutshell, here is what new: Elixir now runs on and requires Erlang R17; With Erlang R17, Elixir also adds support for maps, which are key-value data structures that supports pattern matching. We'll explore maps, their features and limitations in this post; Elixir v0.13 also provides structs, an alternative to Elixir records. Structs are more flexible than records, provide faster polymorphic operations, and still provide the same compile-time guarantees many came to love in records; The Getting Started guide was rewritten from scratch. The previous guide was comprised of 7 chapters and was about to become 2 years old. The new guide features 20 chapters, it explores the new maps and structs (which are part of this release), and it goes deeper into topics like IO and File handling. It also includes an extra guide, still in development, about Meta-Programming in Elixir; Elixir v0.13 provides a new comprehension syntax that not only works with lists, but with any Enumerable. The output of a comprehension is also extensible via the Collectableprotocol; Mix, Elixir's build tool, has been improved in order to provide better workflows when compiling projects and working with dependencies; There are many other changes, like the addition of StringIO, support for tags and filters in ExUnit and more. Please check the CHANGELOG for the complete list. Even with all those improvements, Elixir v0.13.0 is backwards compatible with Elixir v0.12.5 and upgrading should be a clean process. Maps Maps are key-value data structures: iex> map = %{"hello" => :world} %{"hello" => :world} iex> map["hello"] :world iex> map[:other] nil Maps do not have a explicit ordering and keys and values can be any term. Maps can be pattern matched on: iex> %{"hello" => world} = map %{"hello" => :world} iex> world :world iex> %{} = map %{"hello" => :world} iex> %{"other" => value} = map ** (MatchError) no match of right hand side value A map pattern will match any map that has all the keys specified in the pattern. The values for the matching keys must also match. For example, %{"hello" => world} will match any map that has the key "hello" and assign the value to world, while %{"hello" => "world"} will match any map that has the key "hello" with value equals to "world". An empty map pattern ( %{}) will match all maps. Developers can use the functions in the Map module to work with maps. For more information on maps and how they compare to other associative data structures in the language, please check the Maps chapter in our new Getting Started guide. Elixir Sips has also released two episodes that cover maps (part 1 and part 2). 
Maps also provide special syntax for creating, accessing and updating maps with atom keys: iex> user = %{name: "john", age: 27} %{name: "john", age: 27} iex> user.name "john" iex> user = %{user | name: "meg"} %{name: "meg", age: 27} iex> user.name "meg" Both access and update syntax above expect the given keys to exist. Trying to access or update a key that does not exist raises an error: iex> %{ user | address: [] } ** (ArgumentError) argument error :maps.update(:address, [], %{}) As we will see, this functionality becomes very useful when working with structs. Structs Structs are meant to replace Elixir records. Records in Elixir are simply tuples supported by modules which store record metadata: defrecord User, name: nil, age: 0 Internally, this record is represented as the following tuple: # {tag, name, age} {User, nil, 0} Records can also be created and pattern matched on: iex> user = User[name: "john"] User[name: "john", age: 0] iex> user.name "john" iex> User[name: name] = user User[name: "john", age: 0] iex> name "john" Pattern matching works because the record meta-data is stored in the User module which can be accessed when building patterns. However, records came with their own issues. First of all, since records were made of data (the underlying tuple) and a module (functions/behaviour), they were frequently misused as an attempt to bundle data and behaviour together in Elixir, for example: defrecord User, name: nil, age: 0 do def first_name(self) do self.name |> String.split |> Enum.at(0) end end User[name: "john doe"].first_name #=> "john" Not only that, records were often slow in protocol dispatches because every tuple can potentially be a record, sometimes leading to expensive checks at runtime. Since maps are meant to replace many cases of records in Erlang, we saw with the introduction of maps the perfect opportunity to revisit Elixir records as well. In order to understand the reasoning behind structs, let's list the features we got from Elixir records: - A way to organize data by fields - Efficient in-memory representation and operations - Compile-time structures with compile-time errors - The basic foundation for polymorphism in Elixir Maps naturally solve issues 1. and 2. above. In particular, maps that have same keys share the same key-space in memory. That's why the update operation %{map | ...} we have seen above is relevant: if we know we are updating an existing key, the new map created as result of the update operation can share the same key space as the old map without extra checks. For more details on why Maps are efficient, I would recommend reading Joe's blog post on the matter. Structs were added to address features 3. and 4.. A struct needs to be explicitly defined via defstruct: defmodule User do defstruct name: nil, age: 0 end Now a User struct can be created without a need to explicitly list all necessary fields: iex> user = %User{name: "john"} %User{name: "john", age: 0} Trying to create a struct with an unknown key raises an error during compilation: iex> user = %User{address: []} ** (CompileError) unknown key :address for struct User Furthermore, every struct has a __struct__ field which contains the struct name: iex> user.__struct__ User The __struct__ field is also used for polymorphic dispatch in protocols, addressing issue 4.. It is interesting to note that structs solve both drawbacks we have earlier mentioned regarding records. 
Structs are purely data and polymorphic dispatch is now faster and more robust as it happens only for explicitly tagged structs. For more information on structs, check out the Structs chapter in the getting started guide (you may also want to read the new Protocols chapter after it). Maps, structs and the future With the introduction of maps and structs, some deprecations will arrive on upcoming releases. First of all, the ListDict data structure is being deprecated and phased out. Records are also being deprecated from the language, although it is going to be a longer process, as many projects and Elixir itself still use records in diverse occasions. Note though only Elixir records are being deprecated. Erlang records, which are basically syntax sugar around tuples, will remain in the language for the rare cases Elixir developers need to interact with Erlang libraries that provide records. In particular, the Record has been updated to provide the new Record API (while keeping the old one for backwards compatibility). Finally, structs are still in active development and new features, like @derive, should land in upcoming Elixir releases. For those interested, the original maps and structs proposal is still availble. Comprehensions Erlang R17 also introduced recursion to anonymous functions. This feature, while still not available from Elixir, allows Elixir to provide a more flexible and extensible comprehension syntax. The most common use case of a comprehension are list comprehensions. For example, we can get all the square values of elements in a list as follows: iex> for n <- [1, 2, 3, 4], do: n * n [1, 4, 9, 16] We say the n <- [1, 2, 3, 4] part is a comprehension generator. In previous Elixir versions, Elixir supported only lists in generators. In Elixir v0.13.0, any Enumerable is supported (ranges, maps, etc): iex> for n <- 1..4, do: n * n [1, 4, 9, 16] As in previous Elixir versions, there is also support for a bitstring generator. In the example below, we receive a stream of RGB pixels as a binary and break it down into triplets: iex> pixels = <<213, 45, 132, 64, 76, 32, 76, 0, 0, 234, 32, 15>> iex> for <<r::8, g::8, b::8 <- pixels>>, do: {r, g, b} [{213,45,132}, {64,76,32}, {76,0,0}, {234,32,15}] By default, a comprehension returns a list as a result. However the result of a comprehension can be inserted into different data structures by passing the :into option. For example, we can use bitstring generators with the :into option to easily remove all spaces in a string: iex> for <<c <- " hello world ">>, c != ?\s, into: "", do: <<c>> "helloworld" Sets, maps and other dictionaries can also be given with the :into option. In general, the :into accepts any structure as long as it implements the Collectable protocol. For example, the IO module provides streams, that are both Enumerable and Collectable. You can implement an echo terminal that returns whatever is typed into the shell, but in upcase, using comprehensions: iex> stream = IO.stream(:stdio, :line) iex> for line <- stream, into: stream do ...> String.upcase(line) <> "\n" ...> end This makes comprehensions useful not only for working with in-memory collections but also with files, io devices, and other sources. In future releases, we will continue exploring how to make comprehensions more expressive, following in the footsteps of other functional programming research on the topic (like Comprehensive Comprehensions and Parallel Comprehensions). 
Mix workflows The last big change we want to discuss in this release are the improvements done to Mix, Elixir's build tool. Mix is an essential tool to Elixir developers and helps developers to compile their projects, manage their dependencies, run tests and so on. In previous releases, Mix was used to download and compile dependencies per environment. That meant the usual workflow was less than ideal: every time a dependency was updated, developers had to explicitly fetch and compile the dependencies for each environment. The workflow would be something like: $ mix deps.get $ mix compile $ MIX_ENV=test mix deps.get $ mix test In Elixir v0.13, mix deps.get only fetches dependencies and it does so accross all environments (unless an --only flag is specified). To support this new behaviour, dependencies now support the :only option: def deps do [{:ecto, github: "elixir-lang/ecto"}, {:hackney, github: "benoitc/hackney", only: [:test]}] end Dependencies now are also automatically compiled before you run a command. For example, mix compile will automatically compile pending dependencies for the current environment. mix test will do the same for test dependencies and so on, interrupting less the developer workflow. hex.pm This release also marks the announcement of hex.pm, a package manager for the Erlang VM. Hex allows you to package and publish your projects while fetching them and performing dependency resolution in your applications. Currently Hex only integrates with Mix and contributions to extend it to other tools and other languages in the Erlang VM are welcome! The next steps As seen in this announcement, this release dictates many of the developments that will happen in Elixir and its community in the following weeks. All projects are recommended to start moving from records to structs, paving the way for the deprecation of records before 1.0. The next months will also focus on integrating Elixir more tightly to OTP. During the keynote at Erlang Factory, Catalyse Change, Dave Thomas and I argued that there are many useful patterns, re-implemented everyday by developers, that could make development more productive within the Erlang VM if exposed accordingly. That said, in the next months we plan to: - Integrate applications configuration (provided by OTP) right into Mix; - Provide an Elixir logger that knows how to print and format Elixir exceptions and stacktraces; - Properly expose the functionality provided by Applications, Supervisors, GenServers and GenEvents and study how they can integrate with Elixir. For example, how to consume events from GenEvent as a stream of data? - Study how patterns like tasks and agents can be integrated into the language, often picking up the lessons learned by libraries like e2 and functionality exposed by OTP itself; - Rewrite the Mix and ExUnit guides to focus on applications and OTP as a whole, rebranding it to "Building Apps with Mix and OTP"; You can learn more about Elixir in our Getting Started guide and download this release in the v0.13 announcement. We hope to see you at ElixirConf as well as pushing your packages to hex.pm.
http://elixir-lang.org/blog/2014/04/21/elixir-v0-13-0-released/
CC-MAIN-2015-35
en
refinedweb
char * fgets ( char * str, int num, FILE * stream );
<cstdio>

Get string from stream
Reads characters from stream and stores them as a C string into str until (num-1) characters have been read or either a newline or the end-of-file is reached, whichever happens first.

/* fgets example */
#include <stdio.h>

int main()
{
    FILE * pFile;
    char mystring [100];

    pFile = fopen ("myfile.txt" , "r");
    if (pFile == NULL) perror ("Error opening file");
    else {
        fgets (mystring , 100 , pFile);
        puts (mystring);
        fclose (pFile);
    }
    return 0;
}
http://www.cplusplus.com/reference/clibrary/cstdio/fgets/
crawl-002
en
refinedweb
I've been working crazy hours updating my Silverlight course for version 2 and expanding it with lots of new material. With the PDC coming up in less than a week, I've also been working on some cool tips and tricks demonstrating some of the lesser-known but potentially useful features of Silverlight 2. Each day leading up to the show I'm going to try to post one juicy tip or trick. Here's the first one. Silverlight's System.Windows.Browser namespace contains "HTML bridge" classes allowing managed code to integrate with the browser DOM. Among other things, you can use these classes to grab managed references to HTML DOM elements; call JavaScript functions from C#; call C# methods from JavaScript; process DOM events in C#; and handle events fired from C# with JavaScript. Whenever you call from one language to another, one issue that's sure to rear its head is how data—particularly data types that don't have an equivalent on the other side—is marshaled. The marshaling rules in Silverlight are complex but powerful. For example, when you call from C# into JavaScript, you can pass instances of managed types like this: // Define the type [ScriptableType] public class Person { public string Name { get; set; } public int Age { get; set; } } // Create a Person instance and pass it to JavaScript Person person = new Person { Name = "Adam", Age = 18 }; HtmlPage.Window.Invoke("js_func", person); Silverlight's marshaling layer uses reflection on the C# side to discover the particulars of the Person class, and then it creates a look-alike type on the JavaScript side. In JavaScript, you can consume a Person object like this: function js_func(arg) { var name = arg.Name; var age = arg.Age; } So far, so good. Passing instances of custom types from C# to JavaScript couldn't be easier. The marshaling layer even honors by-ref and by-val semantics, so class instances are passed by reference and value types are passed by value. Now suppose you want to pass a Person object from JavaScript to C#. You might do this on the JavaScript side, assuming the method is named CSharp_Method and it's exposed by a scriptable type instance named "page": var person = new Object(); person.Name = 'Adam'; person.Age = 18; var control = document.getElementById('SilverlightControl'); control.content.page.CSharp_Method(person); But in C#, the object comes through as a generic ScriptObject, which forces you to write code like this: public void CSharp_Method(ScriptObject arg) { string name = arg.GetProperty("Name").ToString(); int age = Int32.Parse(arg.GetProperty("Age").ToString()); } This is where HtmlPage's rather obscure RegisterCreateableType method comes in handy. In C#, use RegisterCreateableType to turn Person into a class that can be instantiated in client-side script: HtmlPage.RegisterCreateableType("Person", typeof(Person)); Then, in JavaScript, create a Person instance and pass it to C# like this: var control = document.getElementById('SilverlightControl'); var person = control.content.services.createObject('Person'); person.Name = 'Adam'; person.Age = 18; control.content.page.CSharp_Method(person); The C# method can now be written as follows: public void CSharp_Method(Person person) { string name = person.Name; int age = person.Age; } Now the Person object really is a Person object, and you can consume it with all the benefits of strong typing. Does that mean you won't be needing the ScriptObject class any more? Hardly. Stay tuned for cool Silverlight trick #2, which uses ScriptObject to allow two Silverlight control instances to communicate without involving JavaScript.
Hello Jeff, We've just updated our code base for Silverlight RTW too ;) Please take a look at the demo here: We're extremely in need of any suggestions or recommendations. Any suggestions from you can be really useful. I have an example as you presented in calling a javascript function from C# (in Silverlight). It works if I pass a string but I get an InvalidOperationException when I try to pass an object. I've defined the object as you describe also. Have there been changes to Silverlight or can you suggest what problem I might look at?
http://www.wintellect.com/CS/blogs/jprosise/archive/2008/10/20/cool-silverlight-trick-1.aspx
crawl-002
en
refinedweb
think I'm going back to the framework, and just advise people to restart it if they think the environment has been corrupted. On the other hand, I'm having a hell of a time trying to use sys.unixShellCommand to launch anything that calls back to Radio using XML-RPC. It worked fine with the weirdo Python I used to have installed. I got it from here. It's a 35M download, and was built to be used with PyGame, a game framework for Python. I may just recommend that particular Python for use on OS X. Why is this weirdo build of Python the only version of Python on OS X that seems to sensibly support application launching? 11:36:30 PM It's a little harder doing the same thing in Python. I could get the scoping improvement by translating bundles into "if 1:", but I am unsure about whether that would actually make things worse. 7:28:09 PM What I'd really like is a way to make the notification happen in the other direction -- I want to be told that I need to update. I suppose I could try the JavaScript version of XML-RPC. I could just call into some service that blocks if it doesn't have any information, or returns whatever has accumulated since the last call. 6:10:03 PM It's a little harder doing the same thing in Python. I could get the scoping improvement by translating bundles into "if 1:", but I am unsure about whether that would actually make things worse. 4:16:51 PM It turns out I didn't have a standard install of Python on my machine. I thought I had MacPython installed, but when I tested it against the standard MacPython installation I discovered that it doesn't work with the launch.appWithDocument verb I was using in Radio. So I'm going to make it work with the Fink version of python, and assume that it's installed in the standard Fink location of /sw/bin/python. I'll also include documentation that describes how to fix that path if it's wrong. So here we go again. Sorry about that. 3:05:15 PM It's the 'bundle' keyword. All it does is the equivalent of introducing a block of code. That didn't make sense to me at first. UserTalk is customarily programmed in an outliner, and a block is introduced merely by making it a child of the 'if', 'while', 'case' or other keyword that requires a block. So all it really does is add a level of indentation to the code. Which, as it turns out, was the key to why it's such a useful feature. When you start mucking about in Frontier or Radio, you really don't get very far until you get comfortable with the programming environment that's been set up, since it is educational to poke around in the current code base. One of the things you start seeing is a lot of code that looks like this: on whatever(s) bundle // what this phrase does bundle // what the next phrase does bundle // clean up the mess return (true) ...where the bundles have code in them, they're just collapsed in the outline. When you use an outline, you can collapse bits of code so they don't get in the way of your thought. You can tag the bundle with a comment that summarizes what is going on, which makes it much easier to deal with the code. It's easy to take on big projects if you split them up into tiny pieces, and getting them out of your way visually really helps in keeping focused on the bigger picture. If you didn't have a 'bundle', you wouldn't have a way to break code into phrases that can be hidden. You're left with just adding whitespace to make some breathing room, but that soaks up vertical space that always seems to be in short supply. 
In an outliner, to make something hideable, it must be made a child. It's a nuance you really don't get until you start using it. Now that I'm programming Python in a browser, I've been missing being able to use bundles. Then it occurs to me -- I'm already rendering Python code from a browser, there's nothing keeping me from allowing someone to use a bundle keyword in the browser, and then just do the right thing when it's written out. I could have a 'bundle' in Python. I think that little feature will slip into my Python Tool in the next minor release. 12:31:36 PM
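A minimal sketch of the "bundle as if 1:" translation mentioned above; the function body and its comments are made up for illustration, and the point is only that each bundle becomes a no-op conditional that adds a foldable, commentable level of indentation:
def whatever(s):
    if 1:  # bundle: what this phrase does
        cleaned = s.strip()
    if 1:  # bundle: what the next phrase does
        shouted = cleaned.upper()
    if 1:  # bundle: clean up the mess
        result = shouted + "\n"
    return result
One caveat with this translation: unlike a real block scope, if 1: leaks its names into the enclosing function, so it buys the outline-style folding but not isolation.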
http://radio.weblogs.com/0100039/categories/radioPython/2002/02/17.html
crawl-002
en
refinedweb
OSAF's PyLucene web The PyLucene web of TWiki. TWiki is a Web-Based Collaboration Platform for the Enterprise. en-us Copyright 2009 by contributing authors OSAF wikimaster [wikimaster@osafoundation.org] The contributing authors of OSAF OSAF Open Source Applications PyLucene? ProjectsUsingPyLucene Biomed Search is the largest biomedical image search engine with over 1 million image text captions indexed. Grassyknoll is a lightweight full text search web ... (last changed by AlexK) 2008-01-12T23:02:54Z AlexK ThreadingInPyLucene When using PyLucene in your application all threads must be created as instances of PyLucene.PythonThread . These behave exactly like Python's normal threading ... (last changed by PeterFein) 2007-05-10T06:43:00Z PeterFein ApacheAndPyLucene Do not attempt to execute any PyLucene code underneath Apache/mod python. See also ThreadingInPyLucene and the discussion of the underlying GCC bug (fixed in GCC ... (last changed by VaclavSlavik) 2007-04-10T09:26:12Z VaclavSlavik WebHome PyLucene is a GCJ compiled version of Java Lucene integrated with Python. Its goal is to allow you to use Lucene's text indexing and searching capabilities from Python ... (last changed by OferNave) 2007-04-01T20:54:29Z OferNave UsefulLuceneTools Plush is PyLUcene SHell to play with a Lucene indexes interactively. Luke is a Lucene index browser written in Java with a pretty UI. PeterFein 30 Mar 2007 ... (last changed by PeterFein) 2007-03-30T14:11:35Z PeterFein WebNotify TWikiGuest example@your.company .WebChangesAlert, ., .TWikiRegistration (last changed by OferNave) 2007-03-30T09:02:24Z OferNave APIDocsExtendingLuceneClasses Extending Lucene Classes Many areas of the Lucene API expect the programmer to provide their own implementation or specialization of a feature where the default ... (last changed by OferNave) 2007-03-30T05:52:50Z OferNave APIDocsPythonExtensions Pythonic Extensions to the Java API Java is a very verbose language. Python, on the other hand, offers many syntactically attractive constructs for iteration, property ... (last changed by OferNave) 2007-03-30T05:51:57Z OferNave APIDocsExposedJavaRuntimeClasses Exposed Java Runtime Classes To help with debugging and to support some Lucene APIs, PyLucene also exposes some Java runtime APIs. As with the Java Lucene APIs ... (last changed by OferNave) 2007-03-30T05:40:19Z OferNave APIDocs API Documentation !PyLucene is currently built against Java Lucene 2.0. It intends to supports the entire Lucene API, except for the !RemoteSearchable class. Contributed ... (last changed by OferNave) 2007-03-30T05:31:19Z OferNave APIDocsJavaAPIDifferences Java API Differences The PyLucene API exposes all Java Lucene classes in a flat namespace in the PyLucene module. For example, the Java import statement ... (last changed by OferNave) 2007-03-30T05:29:59Z OferNave APIDocsExceptionHandling Exception Handling Java exceptions are caught at the language barrier and reported to Python by raising a !PyLucene.JavaError instance whose args tuple contains the ... (last changed by OferNave) 2007-03-30T05:22:22Z OferNave APIDocsThreadingSupport Threading Support The garbage collector implemented by the Java runtime support in libgcj insists on having full control over the creation of threads used by it. At ... (last changed by OferNave) 2007-03-30T05:21:37Z OferNave APIDocsSamples Samples The best way to learn !PyLucene is to look at the many samples included with the !PyLucene source release or on the web at ... 
(last changed by OferNave) 2007-03-30T05:20:57Z OferNave Paper Note: This paper was given at !PyCON 2005 and !EuroPython 2005. It contains a lot of information that is not yet duplicated elsewhere on the wiki, but is not kept ... (last changed by OferNave) 2007-03-30T05:19:01Z OferNave WebPreferences PyLucene Web Preferences The following settings are web preferences of the PyLucene web. These preferences overwrite the site level preferences in and , ... (last changed by JaredRhine) 2007-03-29T21:34:42Z JaredRhine
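The ThreadingInPyLucene note above is easy to trip over, so here is a minimal sketch of what it implies, assuming the GCJ-era flat-namespace API described elsewhere in this feed (Lucene 2.0 names such as RAMDirectory and IndexWriter); the field name and sample strings are invented for illustration:
import PyLucene

def build_index(texts):
    # Build a small in-memory index; every Lucene class lives in the flat PyLucene namespace.
    store = PyLucene.RAMDirectory()
    writer = PyLucene.IndexWriter(store, PyLucene.StandardAnalyzer(), True)
    for text in texts:
        doc = PyLucene.Document()
        doc.add(PyLucene.Field("contents", text,
                               PyLucene.Field.Store.YES,
                               PyLucene.Field.Index.TOKENIZED))
        writer.addDocument(doc)
    writer.close()
    return store

# Any thread that touches PyLucene objects must be a PyLucene.PythonThread,
# not a plain threading.Thread, so the libgcj runtime knows about it.
worker = PyLucene.PythonThread(target=build_index, args=(["hello pylucene", "goodbye gcj"],))
worker.start()
worker.join()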
http://chandlerproject.org/PyLucene/WebRss
crawl-002
en
refinedweb
Greg Reinacker on Sam Ruby on Loosely Coupled Web Services. ACK. Mark Baker: Here's how I see it. HTTP is the protocol, and no other protocols are required in order to get stuff done. Perhaps Mark has a narrower definition of communications protocol than I do? [Sam Ruby] Here we have it, Sam and I agree on something. I hope Mark's statement is from a larger context that I haven't really followed. There are things like reliable delivery, broadcast, transactions, streaming, store/forward for which support demand can be expressed in WSDL (by declaring and hence demanding support for headers), but many of which cannot be carried solely through HTTP, because they require multi-pipe connectivity or just can't be done using HTTP at all. Show me an approach to negotiate a 2-phase or 3-phase commit protocol successfully through HTTP and I'll be quiet. In reference to my previous post: With the "same semantic metaschema" I actually meant that the metaschemas are at least compatible and this would be enabled by a network of basic metaschemas that define well understood terminology from all sorts of fields and are based on well-known standards (ISO, IEEE, W3C, UN, WHO, etc.) and also corporate and organization sources. It actually doesn't have to be coordinated and organized and can be as chaotic as the web is, because everyone can and should freely link and establish similarities between these things. So, if both parties talk very loosely (coupled) about a "beetle" and the metaschema reference for one comes from the animal space and the other comes from the Volkswagen space, they are probably not the same. But if both talk about some different sorts of funky insects and know about that by tracing their metaschema references back to a common junction point ... well, then there's hope that some smart logic may actually be able to negotiate the least-common-denominator data between them. The Semantic Web Activity at the W3C does the groundwork for these things now; however, none of the examples that I've seen (and that may be because I am ignorant or lazy not to look hard enough) actually annotate XML Schema (they annotate RDF instead) and I think that's something to at least think about. For interoperability reasons, I would like to have the message-based equivalent of a dynamic invocation interface as well, which is strictly late bound and based only on metadata discovery and works on a global scale and with potentially anonymous partners; resembling the DII, Reflection or IDispatch idea, but based on meta-metadata. Schematron + Semantic Annotations + XSLT could be candidates for components to build this - methinks. [And with this, I invite Sam to have the final word on this friendly banter. I will accept anything from "you are completely nuts" to "that's worse than I thought it could get".] I said: "Complete agreement may be a problem, but agreement on a composite, well-defined subset of things that both parties are interested in and understand is not." Sam says: "Pardon me, but did you say both? BOTH? BOTH!!!??? Works well 1:1, but how about nnn:mmmmmm?" I say: "When watching TV, my favorite TV station's program people and I seem to have a well-defined subset of common interests. Otherwise I wouldn't watch their stuff so frequently. They don't know me, but there are 1:couple-of-million 'both' relationships there." And then Sam says on my recent post: "I was more thinking of other ways. Ones that do not require existing instance documents to be invalidated when an XML schema is only extended." look! Look!
LOOK!!!!! You can keep all instance documents intact (provided you have an instrumented schema), have any schema namespace you like, have schema versioning, or have no namespace and/or no schema (provided you have instrumented elements) and all that, if you just tell me what the things mean that you give me by annotating them with semantics. I am only saying that schema has to do with semantics, because there's no other way to express a semantic association of data in the standardized technology set but "my thing" and "your thing" via namespaces. It's very, very poor, but that's all we have. When we start linking things like OWL into schemas and document instances, we'll have a much richer way to define and agree on things. I can write a schema that defines elements and attributes all in German and you can write one that has everything in English and even structure it somewhat differently or use other concrete datatypes (say, strings instead of numbers). If we both use the same semantic metaschema for our schema, we'll both understand.
http://radio.weblogs.com/0108971/2002/07/29.html
crawl-002
en
refinedweb
Specifies the security configuration for a class or an individual method within a web service or Java control class. Note that the @common:security annotation provides role-mapping with scoped, not global roles, and assumes that the subject has already been authenticated by WebLogic Server's security framework. The role referenced by the @common:security annotation applies to the EJB produced when Workshop compiles the web service or Java control. @common:security roles-allowed="space_separated_list_of_roles_permitted_to_access_the_object" roles-referenced="space_separated_list_of_roles" run-as="single_role_name" run-as-principal="single_principal_name" single-principal="true | false" callback-roles-allowed="space_separated_list_of_roles_permitted_to_callback_the_object" Optional. Specify a list of roles permitted to access the object annotated by @common:security. Individual roles listed must be separated by a single space. If @common:security is applied at the class level, then the roles referenced may access all individual methods within the class regardless of any further role restrictions placed on individual methods. The roles allowed are defined in the underlying ejb-jar.xml files and a role-principal mapping, with a principal that is given the same name as the role name, is defined in the underlying weblogic-ejb-jar.xml file. For more information, see Role-Based Security. Optional. Specifies a list of roles to which there are programmatic references ( .hasRole("Admin") ) in the class or method code. The annotation causes the generated runtime code to include a reference to the roles in the resulting deployment descriptor. Optional. A web resource (a class or method) that includes this attribute assumes the permission-level of the role specified and may access other resources accordingly. Note that the run-as attribute signifies an externally defined role. When run-as appears in a JWS file, the EJB deployment descriptor (weblogic-ejb-jar.xml) will mark the role with an externally-defined tag: <externally-defined/>. To successfully deploy, the role referred to must exist in the target server's security realm. If run-as is present without run-as-principal, then the run-as value is assumed to be a principal and role name. Note that run-as is only applied in a top-level context. It is ignored in a nested context, because run-as relies on the generated EJB deployment descriptor, a deployment descriptor possessed only by top-level elements. Optional. A web resource (a class or method) that includes this attribute assumes the permission-level of the principal specified and may access other resources accordingly. Note that if you specify the run-as-principal attribute, you must also specify the run-as attribute. Optional. Takes a boolean value. If true, only the principal who started the conversation can continue and finish the conversation. If false, a conversation can be continued and finished by another (appropriately authorized) user. Optional. Specify a list of roles permitted to callback the object annotated by @common:security. Individual roles listed must be separated by a single space. The callback-roles-allowed annotation may appear: (1) On a JWS file, provided that there is a control declared within the JWS file and this control implements com.bea.control.ExternalCallbackTarget. (2) On JCS file, provided that there is a web service control declared within the JCS file and this control implements com.bea.control.ExternalCallbackTarget. (3) Inline on the declaration of the control. 
Assume that BankControl is a JCS control file. /** * @common:control * @common:security callback-roles-allowed="AccountHolders" */ private controls.BankControl bankControl; You cannot place callback-roles-allowed on a JCX file. Example #1 /** * @common:operation * @common:security roles-allowed="friends" */ public String hello() { return "Hello Friends!"; } Example #2 /** * @common:security single-principal="false" */ public class PurchaseSupplies implements com.bea.jws.WebService { /** * @common:operation * @jws:conversation phase="start" */ public void requestPurchase() { } /** * @common:operation * @jws:conversation phase="continue" */ public void approvePurchase() { } /** * @common:operation * @jws:conversation phase="finish" */ public void executePurchase() { } } @jpf:controller Annotation
http://e-docs.bea.com/workshop/docs81/doc/en/workshop/javadoc-tag/common/security.html
crawl-002
en
refinedweb
The following sections describe how to use transactions with WebLogic JMS: A transaction enables an application to coordinate a group of messages for production and consumption, treating messages sent or received as an atomic unit. When an application commits a transaction, all of the messages it received within the transaction are removed from the messaging system and the messages it sent within the transaction are actually delivered. If the application rolls back the transaction, the messages it received within the transaction are returned to the messaging system and messages it sent are discarded. When a topic subscriber rolls back a received message, the message is redelivered to that subscriber. When a queue receiver rolls back a received message, the message is redelivered to the queue, not the consumer, so that another consumer on that queue may receive the message. For example, when shopping online, you select items and store them in an online shopping cart. Each ordered item is stored as part of the transaction, but your credit card is not charged until you confirm the order by checking out. At any time, you can cancel your order and empty your cart, rolling back all orders within the current transaction. There are three ways to use transactions with JMS: To enable multiple JMS servers in the same JTA user transaction, or to combine JMS operations with non-JMS operations (such as EJB), the two-phase commit license is required. For more information, see Using JTA User Transactions. The following sections explain how to use a JMS transacted session and JTA user transaction. A JMS transacted session supports transactions that are located within the session. A JMS transacted session’s transaction will not have any effects outside of the session. For example, rolling back a session will roll back all sends and receives on that session, but will not roll back any database updates. JTA user transactions are ignored by JMS transacted sessions. Transactions in JMS transacted sessions are started implicitly, after the first occurrence of a send or receive operation, and chained together—whenever you commit or roll back a transaction, another transaction automatically begins. Before using a JMS transacted session, the system administrator should adjust the connection factory (Transaction Timeout) and/or session pool (Transaction) attributes, as necessary for the application development environment. The following figure illustrates the steps required to set up and use a JMS transacted session. Set up the JMS application as described in Setting Up a JMS Application, however, when creating sessions, as described in Step 3: Create a Session Using the Connection, specify that the session is to be transacted by setting the transacted boolean value to true. For example, the following methods illustrate how to create a transacted session for the PTP and Pub/sub messaging models, respectively: qsession = qcon.createQueueSession( true, Session.AUTO_ACKNOWLEDGE ); tsession = tcon.createTopicSession( true, Session.AUTO_ACKNOWLEDGE ); Once defined, you can determine whether or not a session is transacted using the following session method: public boolean getTransacted( ) throws JMSException Perform the desired operations associated with the current transaction. Once you have performed the desired operations, execute one of the following methods to commit or roll back the transaction.
To commit the transaction, execute the following method: public void commit( ) throws JMSException The commit() method commits all messages sent or received during the current transaction. Sent messages are made visible, while received messages are removed from the messaging system. To roll back the transaction, execute the following method: public void rollback( ) throws JMSException The rollback() method cancels any messages sent during the current transaction and returns any messages received to the messaging system. If either the commit() or rollback() methods are issued outside of a JMS transacted session, an IllegalStateException is thrown. The Java Transaction API (JTA) supports transactions across multiple data resources. JTA is implemented as part of WebLogic Server and provides a standard Java interface for implementing transaction management. You program your JTA user transaction applications using the javax.transaction.UserTransaction object to begin, commit, and roll back the transactions. When mixing JMS and EJB within a JTA user transaction, you can also start the transaction from the EJB, as described in "Transactions in EJB Applications" in Programming WebLogic JTA. You can start a JTA user transaction after a transacted session has been started; however, the JTA transaction will be ignored by the session and vice versa. WebLogic Server supports the two-phase commit protocol (2PC), enabling an application to coordinate a single JTA transaction across two or more resource managers. It guarantees data integrity by ensuring that transactional updates are committed in all of the participating resource managers, or are fully rolled back out of all the resource managers, reverting to the state prior to the start of the transaction. Before using a JTA transacted session, the system administrator must configure the connection factories to support JTA user transactions by selecting the XA Connection Factory Enabled check box. The following figure illustrates the steps required to set up and use a JTA user transaction. Set up the JMS application as described in Setting Up a JMS Application, however, when creating sessions, as described in Step 3: Create a Session Using the Connection, specify that the session is to be non-transacted by setting the transacted boolean value to false. For example, the following methods illustrate how to create a non-transacted session for the PTP and Pub/sub messaging models, respectively. qsession = qcon.createQueueSession( false, Session.AUTO_ACKNOWLEDGE ); tsession = tcon.createTopicSession( false, Session.AUTO_ACKNOWLEDGE ); The application uses JNDI to return an object reference to the UserTransaction object for the WebLogic Server domain. You can look up the UserTransaction object by establishing a JNDI context (ctx) and executing the following code, for example: UserTransaction xact = (UserTransaction) ctx.lookup("javax.transaction.UserTransaction"); Start the JTA user transaction using the UserTransaction.begin() method. For example: xact.begin(); Perform the desired operations associated with the current transaction. Once you have performed the desired operations, execute one of the following commit() or rollback() methods on the UserTransaction object to commit or roll back the JTA user transaction. To commit the transaction, execute the following commit() method: xact.commit(); The commit() method causes WebLogic Server to call the Transaction Manager to complete the transaction, and commit all operations performed during the current transaction.
The Transaction Manager is responsible for coordinating with the resource managers to update any databases. To roll back the transaction, execute the following rollback() method: xact.rollback(); The rollback() method causes WebLogic Server to call the Transaction Manager to cancel the transaction, and roll back all operations performed during the current transactions. Once you call the commit() or rollback() method, you can optionally start another transaction by calling xact.begin(). Because JMS cannot determine which, if any, transaction to use for an asynchronously delivered message, JMS asynchronous message delivery is not supported within JTA user transactions. However, message driven beans provide an alternative approach. A message driven bean can automatically begin a user transaction just prior to message delivery. For information on using message driven beans to simulate asynchronous message delivery, see “ Designing Message-Driven EJBs” in Programming WebLogic EJB. The following example shows how to set up an application for mixed EJB and JMS operations in a JTA user transaction by looking up a javax.transaction.UserTransaction using JNDI, and beginning and then committing a JTA user transaction. In order for this example to run, the XA Connection Factory Enabled check box must be selected when the system administrator configures the connection factory. Import the appropriate packages, including the javax.transaction.UserTransaction package. import java.io.*; import java.util.*; import javax.transaction.UserTransaction; import javax.naming.*; import javax.jms.*; Define the required variables, including the JTA user transaction variable. public final static String JTA_USER_XACT= "javax.transaction.UserTransaction"; . . . Set up the JMS application, creating a non-transacted session. For more information on setting up the JMS application, refer to Setting Up a JMS Application. //JMS application setup steps including, for example: qsession = qcon.createQueueSession(false, Session.CLIENT_ACKNOWLEDGE); Look up the UserTransaction using JNDI. UserTransaction xact = (UserTransaction) ctx.lookup(JTA_USER_XACT); Start the JTA user transaction. xact.begin(); Perform the desired operations. // Perform some JMS and EJB operations here. Commit the JTA user transaction. xact.commit()
http://e-docs.bea.com/wls/docs92/jms/trans.html
crawl-002
en
refinedweb
CodeRush users interested in creating templates that generate custom code based on elements inside a container (e.g., fields in a class, methods in a type, types in a namespace, comments in a file, etc.), might want to check out this YouTube video. In it, the IDE Team discusses some work we're doing, where we need to add custom code to serialize and deserialize the fields of around 30 classes. The solution came in creating a template that iterates through the fields in each class and generates the appropriate serialization or deserialization code for each field. The main template looks like this: «:ccsr» public override void WriteData(BinaryWriter writer){ base.WriteData(writer); «ForEach(Field in this, WriteField)»} public override void ReadData(BinaryReader reader){ base.ReadData(reader); «ForEach(Field in this, ReadField)»} Both the ReadField and WriteField templates will be called once for each field in the active class. Both of these templates have a number of alternate expansions. The expansion ultimately selected for a particular field is determined by context. You can set context with the Context Picker on the lower right of the Template options page. To make this work, we created a new context, called TypeImplements, because many of the scenarios we needed to respond to were dependent upon the type of the field. For example, one of the alternate expansions for ReadField has this context: TypeImplements(«?Get(itemType)»,System.Boolean) You can pass parameters to contexts (like we've done here), by right-clicking the context in the Context Picker, and selecting "Parameters...". The expansion for the ReadField template associated with the context above looks like this: «?Get(itemName)» = reader.ReadBoolean(); «?Get(itemName)» returns the name of the field we're iterating over, while «?Get(itemType)» returns the full type name. Get is a custom StringProvider that you can use to retrieve the value of a template variable stored with the Set StringProvider (the ForEach TextCommand stores the itemName and itemType variables for you automatically before calling the ReadField and WriteField templates). The new TypeImplements context added to solve this code generation challenge will ship with the next version of CodeRush. Hi Mark, Really interesting stuff, but I gave up 60 seconds into the video - I defy anyone to read the code on the screen on the YouTube video. Is there a reason why you chose to put it on YouTube and not on at a better resolution? Cheers! G Hi Glen, This was not meant to be a DevExpress Channel video :) Jeff just walked in with one of his cameras and recorded some bits... :) mm.... A way to shows that new interesting features are coming... because sincerely i was thinking go with the optometrist. :) Surely!: Pingback from Dew Drop - September 19, 2008 | Alvin Ashcraft's Morning Dew Thanks for the feedback. I definitely agree with you that the resolution at YouTube sucks. We shot the screen part at 1024x768 with Camtasia, so it should have been better, but even clicking the View in High Quality button at YouTube produces results that requires some imagination to read. I'll talk to our video guy and see if we can get a better quality version up somewhere. Can i access an attribute associated with a class, property? I think about use that new feature with XPO? Wolfgang Hi Wolfgang, You might want to post more specifics of your question to the CodeRush for Visual Studio newsgroup/forum, or you can direct it to support. 
In general however, the answer is Yes, you can access attributes associated with classes and members, however you might have to write a small amount of code to make this work. CodeRush customers get full source to the ForEach TextCommand, so if you want to add the ability to iterate through all attributes associated with the active member/type, this is easy to do.
http://community.devexpress.com/blogs/markmiller/archive/2008/09/18/ide-team-discussion-using-coderush-templates-to-generate-code.aspx
crawl-002
en
refinedweb
Block From Nemerle Homepage Intro This page discusses the proposed block construct. Please feel free to add any comments. Description The block construct looks like this: foo: { ... ... } It starts with a label (an identifier) followed by a colon (:) and a sequence of expressions. The value of the block is the value of the last expression in its sequence, except when we jump out of the block using its label. Such a jump transfers control flow to the end of the block, returning the value passed to the label. For example: def x = foo: { when (some_cond) foo (3); qux (); 42 } works the same as: def x = if (some_cond) 3 else { qux (); 42 } (which is clearly a better code structure in this very place :-) Common usage Blocks are a good replacement for the return, break and continue statements known from C. For example, the following Nemerle code: foo () : int { return: { when (some_cond) return (42); do_some_stuff (); return (33); } } simulates the following C# code: int foo () { if (some_cond) return 42; do_some_stuff (); return 33; } While the following: break: { while (cond) { when (some_cond) break (); some_stuff (); } } simulates: while (cond) { if (some_cond) break; some_stuff (); } As you can see, we have used the void literal here. It is, however, also possible to return some more meaningful value: def has_negative = res: { foreach (elem in some_collection) when (elem < 0) res (true); res (false) } which gives us the possibility of having a localized return-like statement inside expressions. Nemerle.Imperative Special implicit blocks are created around functions and loops. After importing the Nemerle.Imperative namespace, you have access to the regular break, continue and return known from C. using Nemerle.Imperative; def foo1 (x) { when (x > 3) return x * 17; mutable y = x; while (y > 0) { when (y % 33 == 0) continue; when (some_cond) break; when (some_other_cond) return y; } } def foo2 () { when (some_cond) return; foo1 (3); } As in C, break exits the nearest enclosing loop and continue starts the next iteration. return exits the nearest enclosing function, but functions created by the compiler to implement standard loops don't count!
http://nemerle.org/Block
crawl-002
en
refinedweb
We. One. There are a lot of blog entries that I'd write if they weren't already written. Stupid statement. No, really. One of the great qualities of the documentation that we built for WCF and WF and CardSpace is that it's completely legible and understandable :) Since there's just a lot of stuff in the SDK docs and one easily gets lost in the forest, I'll point out a few of the conceptual docs and/or samples and may add the one or the other commentary here or there. For the first one that I selfishly point out the only actual commentary is that I wrote that piece ;) Go read about Message Inspectors and how to implement client- and/or server-side schema-based validation in WCF, complete with the ability to refer to the validation schemas by config. Adventure-seekers might be interested in poking around in that code and replace the schema validation and the schemas with XSLTs and transforms. That would create some interesting followup-challenges for synthesizing the ContractDescription that projects out the correct pre-transformation representation for WSDL, but I guess that'd be part of the fun. A bad sign for how much I’m coding these days is that I had a HDD crash three weeks ago and only restored Visual Studio into fully working condition with all my tools and stuff today. I’ve decided that that has to change otherwise I’ll get really rusty. Picking up the thread from “Professor Indigo” Nicholas Allen, I’ve built a little program that illustrates an alternate handling strategy for poisonous messages that WCF throws into the poison queue on Vista and Longhorn Server if you ask it to (ReceiveErrorHandling.Move). The one we’re showing in the docs is implementing a local resolution strategy that’s being fired within the service when the service ends up faulting; that’s the strategy for ReceiveErrorHandling.Fault and works for MSMQ 3.0. The strategy I’m showing here requires our latest OS wave. When a message arrives at a WCF endpoint through a queue, WCF will – if the queue is transactional – open a transaction and de-queue the message. It will then try to dispatch it to the target service and operation. Assuming the dispatch works, the operation gets invoked and – might – tank. If it does, an exception is raised, thrown back into the WCF stack and the transaction aborts. Happily, WCF grabs the next message from the queue – which happens to be the one that just caused the failure due to the rollback – and the operation – might – tank again. Now, the reasons why the operation might fail are as numerous as the combinations of program statement combinations that you could put there. Anything could happen. The program is completely broken, the input data causes the app to go to that branch that nobody ever cared to test – or apparently not enough, the backend database is permanently offline, the machine is having an extremely bad hardware day, power fails, you name it. So what if the application just keeps choking and throwing on that particular message? With either of the aforementioned error handling modes, WCF is going to take the message out of the loop when its patience with the patient is exhausted. With the ReceiveErrorHandling.Fault option, WCF will raise an error event that can be caught and processed with a handler. When you use ReceiveErrorHandling.Move things are a bit more flexible, because the message causing all that trouble now sits in a queue again. The headache-causing problem with poison messages is that you really, really need to do something about them. 
From the sender’s perspective, the message has been delivered and it puts its trust into the receiver to do the right thing. “Here’s that $1,000,000 purchase order! I’m done, go party!”. If the receiving service goes into the bug-induced loop of recurring death, you’ve got two problems: You have a nasty bug that’s probably difficult to repro since it happens under stress, and you’ve got a $1,000,000 purchase order unhappily sitting in a dark hole. Guess what your great-grand-boss’ boss cares more about. The second, technically slightly more headache-causing problem with poison messages (if that’s possible to imagine) is that they just sit there with all the gold and diamonds that they might represent, but they are effectively just a bunch of (if you’re lucky) XML goo. Telling a system operator to go and check the poison message queues or to surface their contents to him/her and look what’s going on there is probably not a winning strategy. So what to do? Your high-throughput automated-processing solution that does the regular business behind the queue has left the building for lunch. That much is clear. How do you hook in some alternate processing path that does at least surface the problem to an operator or “information worker”– or even a call center agent pool – in a legible and intelligible fashion so that a human can look at the problem and try finding a fix? In the end, we’ve got the best processing unit for non-deterministic and unexpected events sitting between our shoulders, one would hope. How about writing a slightly less automated service alternative that’s easy to adjust and try to get the issue surfaced to someone or just try multiple things [Did someone just say “Workflow”?] – and hook that straight up to where all the bad stuff lands: the poison queue. Here’s the code. I just coded that up for illustrative purposes and hence there’s absolutely room for improvement. I’m going to put the project files up on wcf.netfx3.com and will update this post with the link. We’ll start with the boilerplate stuff and the “regular” service: using System;using System.Collections.Generic;using System.Text;using System.ServiceModel.Channels;using System.ServiceModel;using System.Runtime.Serialization;using System.ServiceModel.Description;using System.Workflow.Runtime;using ServerErrorHandlingWorkflow;using ServerData;namespace Server{ [ServiceContract(Namespace=Program.ServiceNamespaceURI)] interface IApplicationContract { [OperationContract(IsOneWay=true)] void SubmitData(ApplicationData data); } [ServiceBehavior(TransactionAutoCompleteOnSessionClose=true, ReleaseServiceInstanceOnTransactionComplete=true)] class ApplicationService : IApplicationContract { [OperationBehavior(TransactionAutoComplete=true,TransactionScopeRequired=true), System.Diagnostics.DebuggerStepThrough] public void SubmitData(ApplicationData data) { throw new Exception("The method or operation is not implemented."); } } Not much excitement here except that the highlighted line will always cause the service to tank. In real life, the path to that particular place where the service consistently finds its way into a trouble-spot is more convoluted and may involve a few thousand lines, but this is a good approximation for what happens when you hit a poison message. Stuff keeps failing. The next snippet is our alternate service. Instead of boldly trying to do complex processing, it simply punts the message data to a Workflow. 
That’s assuming that the message isn’t completely messed up to begin with and can indeed be de-serialized. To mitigate that scenario we could also use a one-way universal contract and be even more careful. The key difference between this and the “regular” service is that the alternate service turns off the WCF address filter check. We’ll get back to that. [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)] class ApplicationErrorService : IApplicationContract { public void SubmitData(ApplicationData data) { Dictionary<string,object> workflowArgs = new Dictionary<string,object>(); workflowArgs.Add("ApplicationData",data); WorkflowInstance workflowInstance = Program.WorkflowRuntime.CreateWorkflow( typeof(ErrorHandlingWorkflow), workflowArgs); workflowInstance.Start(); } } So now we’ve got the fully automated middle-of-the-road default service and our “what do we do next” alternate service. Let’s hook them up. class Program { public const string ServiceNamespaceURI = ""; public static WorkflowRuntime WorkflowRuntime = new WorkflowRuntime(); static void Main(string[] args) { string msmqQueueName = Properties.Settings.Default.QueueName; string msmqPoisonQueueName = msmqQueueName+";poison"; string netMsmqQueueName = "net.msmq://" + msmqQueueName.Replace('\\', '/').Replace("$",""); string netMsmqPoisonQueueName = netMsmqQueueName+";poison"; if (!System.Messaging.MessageQueue.Exists(msmqQueueName)) { System.Messaging.MessageQueue.Create(msmqQueueName, true); } First – and for this little demo only – we’re setting up a local queue and do a little stringsmithing to get the app.config stored MSMQ format queue name into the net.msmq URI format. Next … ServiceHost applicationServiceHost = new ServiceHost(typeof(ApplicationService)); NetMsmqBinding queueBinding = new NetMsmqBinding(NetMsmqSecurityMode.None); queueBinding.ReceiveErrorHandling = ReceiveErrorHandling.Move; queueBinding.ReceiveRetryCount = 1; queueBinding.RetryCycleDelay = TimeSpan.FromSeconds(1); applicationServiceHost.AddServiceEndpoint(typeof(IApplicationContract), queueBinding, netMsmqQueueName); Now we’ve bound the “regular” application service to the queue. I’m setting the binding parameters (look them up at your leisure) in a way that we’re failing very fast here. By default, the RetryCycleDelay is set to 30 minutes, which means that WCF is giving you a reasonable chance to fix temporary issues while stuff hangs out in the retry queue. Now for the poison handler service: ServiceHost poisonHandlerServiceHost = new ServiceHost(typeof(ApplicationErrorService)); NetMsmqBinding poisonBinding = new NetMsmqBinding(NetMsmqSecurityMode.None); poisonBinding.ReceiveErrorHandling = ReceiveErrorHandling.Drop; poisonHandlerServiceHost.AddServiceEndpoint(typeof(IApplicationContract), poisonBinding, netMsmqPoisonQueueName); Looks almost the same, hmm? The trick here is that we’re pointing this one to the poison queue into which the regular service drops all the stuff that it can’t deal with. Otherwise it’s (almost) just a normal service. The key difference between the ApplicationErrorService service and its sibling is that the poison-message handler service implementation is decorated with [ServiceBehavior(AddressFilterMode = AddressFilterMode.Any)].Since the original message was sent to the a different (the original) queue and we’re now looking at a sub-queue that has a different name and therefore a different WS-Addressing:To identity, WCF would normally reject processing that message. 
With this behavior setting we can tell WCF to ignore that and have the service treat the message as if it landed at the right place – which is what we want. And now for the unspectacular run-it and drop-a-message-into-queue finale: applicationServiceHost.Open(); poisonHandlerServiceHost.Open(); Console.WriteLine("Application running"); ChannelFactory<IApplicationContract> client = new ChannelFactory<IApplicationContract>(queueBinding, netMsmqQueueName); IApplicationContract channel = client.CreateChannel(); ApplicationData data = new ApplicationData(); data.FirstName = "Clemens"; data.LastName = "Vasters"; channel.SubmitData(data); ((IClientChannel)channel).Close(); Console.WriteLine("Press ENTER to exit"); Console.ReadLine(); } }} The Workflow that’s hooked up to the poison handler in my particular sample project does nothing big. It’s got a property that is initialized with the data item and just has a code activity that spits out the message to the console. It could send an email, page an operator through messenger, etcetc. Whatever works.) I read that Google runs buses. Headline material in the New York Times. Impressive. Ehh. Here at Microsoft, buses are run by the King County Metro Transit system and we all get a free Flexpass. And our shuttle system is constrained to connections within and between the campus locations in and around Redmond. Of course that's just cost effective, logical and boring and therefore not newsworthy, I guess. COM
http://vasters.com/clemensv/default,date,2007-04-02.aspx
crawl-002
en
refinedweb
In and of itself, the article is quite useful and makes some extremely valuable, though often missed, points regarding how coding style affects the long term value of a body of software. First, I had no idea that Yahoo! uses Python heavily enough to have warranted the creation of a coding standards document! However, the part of the article I found to be most interesting-- i.e. most relevant to my daily programming habits-- was the section that discusses method naming conventions. Public method names should: 1) start with a lower-case letter 2) upper-case the first letter of each word 3) include a description of the required arguments it takes, each followed by an underscore. Here are some examples: def charset_(self, o) ## takes one argument, the charset, in MessagePage.py def getPodForGroup_(self,groupname) ## takes one argument, the "Group", in DataCluster.py def getSubsForUser_(self,username) ## takes one argument, the "User", in userserv.py This is inspired by the Objective-C "descriptive" method syntax: [object method: arg1 withName1: arg2 withName2: arg3]; [object method: arg1 withName1: arg2 withName2: arg3]; In the context of PyObjC, the document describes the exact method by which PyObjC advertises the underlying Objective-C APIs. Every year or so (PyObjC has been around since at least 1996), someone raises the idea of making the ObjC API as advertised on the Python side of the bridge "more Pythonic". The proposal is typically along the lines of removing all those ugly underscores, etc... Ugly they may be, but they convey valuable information... A PythonMac thread that discusses the same naming conventions from the perspective of bridging ObjC into Python starts here. We just used a lot more words to say what the above article nicely summarizes... 11:03:01 PM (If you look closely, my powerbook is sitting on top of a NeXT cube. A rather special cube, at that. It has three NeXTdimension boards in it. I have three 21" NeXT monitors to go with it, but don't have the desk space to set it up.) 8:50:03 AM
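To make the naming convention above concrete, here is a small sketch using PyObjC's Foundation bridge; the dictionary example is invented, and only the colon-to-underscore mapping is the point:
from Foundation import NSMutableDictionary

info = NSMutableDictionary.dictionary()
# Objective-C: [info setObject:@"Adam" forKey:@"name"]
# Each colon in the Objective-C selector becomes a trailing underscore in Python:
info.setObject_forKey_("Adam", "name")
print(info.objectForKey_("name"))   # -> Adam
The underscores look odd at first, but they preserve exactly the "descriptive" argument structure that the Yahoo! guideline borrows from Objective-C.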
http://radio.weblogs.com/0100490/2003/03/12.html
crawl-002
en
refinedweb
sam bot dot com: You don't care about me. You don't. So don't even pretend to. But, if you did, I would write something like this: Hi. I'm Sam and this is my blog. It's a place for me to unload some of the crud that floats around in the ol' gray matter. I might blog about design, technology, the Mac... or quite possibly, what I had for lunch. In any case, welcome.
the rare chance that anyone is looking here for WWDC info... umm... that's so unlikely that I'm not even going to finish the thought. I am at WWDC though. But I, along with the rest of the developers here, will be concentrating all of my efforts on breaking twitter. You can see all of the action here:
8 Us: Yeah... I'm on a bit of a hiatus right now (could you tell?). I'm just trying to get some job/life/post-graduation stuff going in the forward/upward/non-wayward direction (and it's going pretty well... thanks for asking!). But fear not, when I return (and this doesn't count), sam bot dot com will be bigger/better/stronger than ever before! 'Til then, brothers and sisters, blog on! P.S. I want an iPhone.
Fear Change: Whoa. Apple.com. It's all... different.
the Disappointment!: I've been really struggling with a good way to say this, and I've decided redundancy is probably the best approach. Here goes: Today's WWDC keynote was a... big, adjective: 1. large, sizable, substantial, great, huge, immense, enormous, extensive, colossal, massive, mammoth, vast, tremendous, gigantic, giant, monumental, mighty, gargantuan, elephantine, titanic, mountainous, Brobdingnagian; towering, tall, high, lofty; outsize, oversized; goodly; capacious, voluminous, spacious; king-size(d), man-size, family-size(d), economy-size(d); informal jumbo, whopping, mega, humongous, monster, astronomical, ginormous* ...disappointment. Honestly, the highlight of the keynote (for me) was the announcement of the future absence of the brushed metal look (which I've said good-bye to long ago, thanks to Uno). Yeah, aside from being a great sub-genre of Heavy Metal, brushed metal is getting a bit tired! (Ha!) *Special shout-outs to the OS X thesaurus for this one!
's Get It Together!: Less than two hours to showtime, people. Let's get it together! Set hammock to light sway. Coffee, hot! Point all browsers to Mac Rumors Live (or Engadget). Keep a "new blog entry" window at the ready. And please, let's make sure that "i" key is all polished and ready to go. I have a feeling that it's going to get a thorough workout today.
Well what do you know... this year's WWDC kicks off next Monday... er... tomorrow. I guess it sort of snuck up on me. And so now the question is, "what's it gonna be, folks?" More iPhone fun? Leopard stuff? Something brand new? Only the Steve knows for sure. (And in other news, I'm blogging from a hammock. Yes, I know, my socks are very stylish, thank you.)
... Kudos Onslaught: Well, after being disconnected just as I was giving my phone number to the customer service rep (you know... just in case we get disconnected), and after trudging through the automated help interface twice, being put on hold while forced to endure a never-ending Kenny G. medley, and then finally being connected to a real live person, and having them finally connect me to the correct real live person, I have successfully accessed my home wifi network. Yay! The solution was as simple as putting a "$" in front of the WEP password. Wow. Wouldn't it have been much simpler and easier (not to mention, more cost effective for SBC/AT&T/Yahoo!) if that minute, yet critical, nugget of information was included in the li'l install booklet? Or perhaps if it was available on their help site? ("How did you query their help site without internet access?" you may ask. Thankfully, the upstairs neighbors have been allowing me to steal... borrow internet from their unsecured wifi network for about a month now. Thanks guys! (That's for dominating the communal storage space with your heaping mounds of junk.)) Anyway, the true reason that I'm posting this is twofold. Of course my main motivation for publicizing this gripe is to... well, gripe (really though, isn't that the primary usage of most blogs?). But my secondary reason, and the one that makes me seem less whiny and quite possibly even selfless (and therefore, the one that we will be focusing on here), is simply to get this solution, the "$" before the WEP password, out there into the boundless ether of internetdom. Come on, I can't be the only one experiencing this problem. So hopefully, Google will index this solution, make it findable, I'll be a great boon to many, and there will be much rejoicing. See, I do my part to give back to society. Let the onslaught of rightfully-deserved kudos commence! And just for the record, I will gladly accept all forms of kudos... even... especially those in bar form.
, You've Done It Again!: Congrats Palm, you've done it again! You've added another unimpressive and uninspiring piece of hardware to your already lacking product line. Those creative and forward-thinking guys and gals over at Palm Inc. (or Palm Pilot, or PalmOne, or Access OS, or whatever their name is this month), have just released a spectacular new product. Something so spectacular and new that you're likely to curl up in the nearest corner and cry. Yes, cry. Cry till your tear ducts have nothing left to cry and they poof out vapor clouds of dried tear powder. And when someone asks why you're crying in the corner (because inevitably, someone will), you can tell them that you're crying because now, thanks to the innovative innovations emanating from Palm's headquarters, humanity's long stint of suffering and agony is finally over. Brothers and sisters, these are not tears of sorrow. Nay! They are tears of joy! For today, Palm released a laptop. But wait! Not just any ol' laptop. It's a little laptop. Hurray! See, not all that impressive, is it? What really gets me though is the mounds and mounds of media attention devoted to this product's release. Maybe I'm missing something... and perhaps you can help. Okay okay, fun time! Yay! Complete the following sentence: Palm's new laptop thingy is great because ____________________________________________. (I'll even start the first one.)
Hot! I just read that a portion of the filming of Indiana Jones 4 will be taking place in The Have (okay. Not have. Hayv... with a long A. Like HAVEn. Get it? That's what all the cool kids call New Haven. Are you a cool kid? I'm a cool kid.)! Apparently, there's going to be "some kind of car chase on Chapel between College and High streets." June 28-30. Yeah. That's hot.
Filth: Sometimes I find the most difficult time to write is when I have too much to write about... hence the severe lack of updateage. But yes, I'm still alive. And no, my thesis didn't kill me (though it tried its damnedest... and proved to be a more-than-worthy adversary. Good for you, thesis!). So yeah, I'm all graduated and stuff. The graduation ceremony was spectacular. It was a beautiful day, I got an award, there was free beer... needless to say, fun was had by all. And now, into the real world, to experience bigger and better things. Like debt and unemployment. It's funny though: grad school, Quinnipiac, the endless amounts of research, writing, and general academic toil... for the past two years, these have all been monstrously strenuous and mentally exhausting parts of my life. And despite my relentless complaining, I've loved every excruciating moment. In fact, I've reveled in those moments. I've rolled in them, gleefully, like a pig in its own filth. But now, without any real preparation, I've been yoinked, remorselessly, from my sheltering filth. Sadly, I've discovered that my filth was keeping me warm... sane... content. My filth was keeping me filled with purpose. Sigh... it's cold and strange out here without my filth to slog through. Perhaps I should get a doctorate.* Yes! Back to the cozy, comfortable, filthy world of academia! Sallie Mae, baby, I'm coming home! Now where's that loan deferment form? *Doctorate? Umm... no way. Not for all the coddling filth in the world.
Bar!: Yeah. I saw it in the store and I just had to. I couldn't resist. I needed to know what the seven foods of Deuteronomy 8:8 taste like... in bar form. And the result? Well goddamn... goshdarn! That's one divine Bible Bar. (Actually, it was kinda bland.)
Wildebeest: Staying in line with my most current theme (and my most passionate hobby), I'd like to bring to everyone's attention the fact that there is FREE COFFEE being distributed (for this, and most of next week) in the Quinnipiac University Law Library. All you have to do to participate in this spectacular event is simply say that you're a law student. So, if anyone asks, I'm a law student. Heck... I'd declare myself a tremendous wildebeest* if it results in free coffee. *Yeah, I'll admit, "tremendous wildebeest" is an odd thing to declare oneself. But I'm trying really hard to work it into a blog post... cuz it's sort of an inside joke... but not a very funny one... and that's all I got... and I'm gonna go now... ZINGGG!!!** **And just so we're all on the same page here, "ZINGGG!!!" is the sound it makes when I leave really quickly.
either: A) I've finished my thesis, B) I haven't left the library since Saturday, or C) I'm dead (and blogging from the nether-realm). The correct answer is A... and a little bit of C thrown in for flava. The truth is that I'm 95% done with my thesis. And 5% dead. So really, it all balances out. But I'll be 100% done really soon. And then 100% graduated. And then 100% a master. And then 100% unemployed. Yay!
My Heart Doesn't Explode First...: I am only allowed to leave the library tonight under one of the two following conditions: 1) I am finished with my graduate thesis, or 2) I am dead. (I'm hoping for the first scenario... but only marginally so). And thus, in a weak attempt to accelerate the completion of my thesis (and also, to advance the incursion of death), I have prepared the requisite dinner of power bars and energy drinks... a perfect compliment to a fine evening of academic toil. Ahh... good times.
Choice: Concerning web browsers, there was a time (a dark time) when I used to do the Safari thing. Then I migrated to Firefox (and then back to Safari... and then back to Firefox... which is not what we're here to talk about). But concerning mail apps, the Moz (Mozilla, the makers of Firefox) just released Thunderbird 2, a free and powerful email app. Currently, I use OS X's Mail and I'm pretty happy with it. Though, it does have its annoying quirks. Now the T'bird... well, she's already making some peeps pretty excited. Dare I? And so, as I do with all of the important and life-altering decisions that I'm faced with, I turn to the internet. My dear readers, should I make the switch from Mail to T'bird? A) Yes. B) No. C) I really don't care. And by the way, this will be the last time that I read this crappy-ass blog.
Being somewhat of a self-proclaimed sewing dork, and equally as much a cool machines dork, I was delighted (simply delighted!) to see this graphic explain, using four-color animated gif awesomeness, the stitching technique of the commonplace sewing machine. Up until this point, I was fully willing to accept satan, magic, or gnomes as logical answers to the mystery of the sewing machine question... that can be fully and succinctly articulated in the following syllable: wha?
Have Influence: It worked!
on Your Blogs and Ride!: Something weird is going on with Blogger and/or Blogspot and/or Bloglines: the blogs of some of my peeps that have not updated their blogs in a very long time, are spontaneously showing up as unread in my feed reader. Which is weird, but also kinda nice. It's like an unexpected dose of nostalgia... a reminder of a simpler time... a time when bloggers may still have referred to their craft as weblogging. Weblogging!? Ha! Those truly were foolish times. And so, the following is a plea...
no, let's make it a petition (to be signed in the <a href="">comments</a> section of this post). It's a call to all of my pals who once proudly donned the titled "blogger" but have since fallen, hard, from the top deck of the blogwagon. (And this points especially pointedly at The Dark Lord Derfla, whose blog, <a href="">From the Depths of the Tepid Inferno</a>, demonstrates a mastery of hand drawn, Sharpie and Post-It note illustrations which showcase the hilarious exploits (often featuring <a href="">yours</a> <a href="">truly</a>) of the trials and tribulations of The Dark Lord's daily routine.)<br /><br />And so, from one who knows the pain of time spent <a href="">away from the blog</a>, I reach an outstretched hand from the driver's seat of the blogwagon to those unfortunate bloggers who have fallen to the cold, wet, and just miserable on all accounts, ground. Come'on back. We'll party like it's 2004. <br /><br /><b>The Petition:</b><br /><br />The internet is a cold, wet, and just miserable on all accounts, place without the blogs we once loved. Therefore, we, the undersigned, hereby invite those who may have fallen, to get back on their blogs and ride (ummm... sing that last part in the tune of <i>Fat Bottom Girls</i> by Queen... you know, that part where Freddie Mercury shouts, "Get on your bikes and ride!" for no particular reason whatsoever. God... I love that song). <br /><br />Signed,<br /><br /><a href="">(<i>sign petition by leaving a comment</i>)</a>Sam<a href=""><img src="" align="left" class="pic"></a><i><a href="">Top 10 Coolest Doormats</a>!? What!?</i> No... I refuse.<br /><br />Okay. This is it. I've had enough. I'm drawing the line. The buck stops here. The camel's back just broke. Et cetera. Et cetera. Et cetera.<br /><br />Can someone please tell me what the blogoshere's fascination with <i>top ten lists</i> is?<br /><br /><b><i>Play along at home! What fun!</i></b><br />1. Go to <a href="">digg.com</a>. <br />2. Search for "<a href="§ion=news&search-buried=1&type=title&area=all&sort=score">top ten</a>."*<br />3. Be appalled by the ensuing result. <br /><br />Anyway, this has got to end. So, even though <a href="">I've written one</a>, I declare an all out boycott of the invasive top ten list. Starting... <i>now!</i><br /><br />*<i>The actual search term that I used was "ten top." I dunno. It just worked better that way. But I'm sure someone was going to call me out on it.</i>Sam Hate Old Navy<a href=""><img src="" align="left" class="pic"></a>I never was, nor do I aspire to be a bicycle messenger. However, I spent some time living in the East Bay, rooming with a San Franciscan messenger. And as a result, I was unwittingly plunged headfirst into their culture... which was accurately described to me as being "the rock star lifestyle of the cycling world," in that bicycle messengers are feared by grandmas, idolized by youth, and guilty of trashing hotel rooms... all of which I can't personally verify. But I've heard stories.<br /><br />I do enjoy <i>the bicycle</i> though, in all its forms (especially the <a href="">purest</a>), cultures, and subcultures... including, of course, bike messenger culture. Clearly, this stems (almost entirely) from my west coast inundation. And even though I have long since moved back to the east coast, I've maintained a sort of passive interest in the goings-on within the bicycle messenger genre of bikedom. 
<i>Why?</i> Well, I guess it makes me feel ever-so-slightly less removed from the west coast and my <a href="">bike messenger friend</a>. <br /><br /><i>And that is why I hate Old Navy. Good night.</i><br /><br />Wait. I think I left something out. Oh right... In my passive interest in the goings-on of the bike messenger community, I stumbled upon <a href="">this</a>: "Can't think of a sub-culture that hates to be co-opted more than bike messengers. Nothing worse than seeing your lifestyle turned into an Old Navy tshirt." Yep. Old Navy has taken the <i>bike messenger rock star lifestyle</i>, condensed it, mainstreamed it, and printed it on a faux-vintage t-shirt. <br /><br />And <i>that</i> is why I hate Old Navy. Good night. <br /><br />(Ahh... see? My hatred makes so much more sense now. And for the record, I don't really hate Old Navy. It's not their fault. I think <i>Murphy's Law of Cool Things</i> states that, "All cool things will, eventually and unfortunately, be exploited by large corporations (that just don't get it) for the specific intent of mainstream consumption.)Sam"Kurt is up in heaven now."<a href="">Kurt Vonnegut</a> died yesterday. He was 84. This makes me so sad. <br /><br />The following is a <a href="">quote</a> from Kurt's last book, <i><a href="">A Man Without a Country</a></i> (2005). It seems oddly appropriate:<br /><br /><img src="" align="left" class="pic">."<br /><br />Anyway, R.I.P. Kurt Vonnegut. And if you haven't already, you should read Vonnegut's <i><a href="">The Sirens of Titan</a></i>, followed by <i><a href="">Slapstick</a></i>... two of my favorites.<br /><br />More:<br /><a href="">Boing Boing</a><br /><a href="">New York Times</a><br /><a href="">Gaurdian Unlimited</a>Sam. Sausage or Chain... You DecideJust some linkage that I'd like to share:<br /><br /><a href=""><img src="" align="left" class="pic"></a>1) The <a href="">Otis</a>, by <a href="">Swobo Bikes</a>. I'm going to call it the world's most perfect city bike. What sets this bike above some of the <a href="">others</a> in the urban cycling genre is, in order of least to most important: front disc brake, a 3-speed internal hub, matt black styling, and finally and most importantly, there is a bottle opener embedded in the bottom of the saddle! Yes, a bottle opener. See... perfect. (What is questionable about the Otis, is the inclusion of a coaster brake. <i>Huh?</i> Yeah, I don't get that one either. It would seem, however, that one should be able to simply spin off the fixed cog and replace it with a freewheel. But don't quote me on that.)<br /><br />2) And the second link... <i>darn... what was the second one?</i> Well, I guess this will have to do. All you need to know is <a href="">blood is truth</a>. Many, <i>many</i> good things for your aural amusement. Enjoy.Sam'm Not Sorry<a href=""><img src="" align="left" class="pic"></a>I enjoyed FREE COFFEE (<i>yes, FREE COFFEE is deserving of all caps</i>) this morning, just for driving my car to school. Encouragement for being lazy. Ha! In your face, beautiful spring-time weather!<br /><br />Hmm... this blog is quickly transforming into a <i>where to get <a href="">FREE</a> <a href="">COFFEE</a></i> blog. I can't honestly say that it's entirely unexpected though... considering my fondness for all that is <i>free</i> and all that is <i>coffee</i>. <br /><br />I do, however, feel compelled to apologize for the lack of warning. Though, in all truthfulness, I'm not repentant... in any way whatsoever. 
But having said that, here's a hollow apology: Sorry, jerks.Sam the Founder of Quinnipiac University is...<a href=""><img src="" align="left" class="pic"></a>Right... so I don't want to jinx it, but as of right now (<i>actually, as of <a href="">February 26, 2007</a></i>), I am, according to the all-knowing brain-in-a-jar that is Wikipedia, the founder of <a href="">Quinnipiac University</a>.<br /><br />Yep. It's right there in the <a href="">History</a> section, second sentence in. It reads, "Quinnipiac University is a private, coeducational, nonsectarian institution of higher education. Originally known as the Connecticut College of Commerce, it was founded in 1929 by <b>Samuel H. Cohen</b> as a small business college..." <br /><br />1929. Damn. I'm looking pretty good despite being almost 80!<br /><br /><i>and</i><br /><br />I guess I should update my resume to reflect my former position as University Founder. Unfortunately, I will be unable to provide references... because they're all probably dead.<br /><br />I don't know who made the actual edit (I do have my <a href="">suspicions</a>, though). But anyway, I have to go now. I'm off to the bursar's office. Being the founder, I assume that I'm entitled to a tuition reimbursement... or at the very least, a free <a href="">bobble-head</a>. <br /><br />(<i>What makes this inaccuracy all the more ironic, is that last semester I wrote a paper commending the Wikipedia community for its accuracy... promoting the notion that, by the collaborative authoring of worldly knowledge, accuracy will prevail! Clearly, I was wrong. But I'm not giving back my A. Oh... and just in case Wikipedia ever catches up with itself, and reverts to a previous iteration, <a href="">here</a> is a .pdf of the Quinnipiac University Wikipedia entry as it exists today.</i>)Sam... More Free (as in Beer) CoffeeTomorrow. All day long. FREE ICED COFFEE. <a href="">This time</a>, from Dunkin' Donuts.Sam
http://feeds.feedburner.com/sambot
crawl-002
en
refinedweb
So I am just getting started with Wix Code, but one of the things I am running into involves the owner value. I have laid out the scenario below and what I believe to be my options for solutions (I honestly don't care which one works, as long as one does), but I don't know if they are possible or how to accomplish them. I want to add data to a collection on behalf of my members, i.e. it is a list of items they have sent to us. I don't want them to have to add this data themselves, as the number of items could be in the hundreds (very time-consuming for them, and a lot of the data that needs to be added has to follow a certain nomenclature). However, if I add the data to the database myself using the database app, all the data is tied to me as the owner, which means I can't easily filter the dataset on a member-only page to show data only for the logged-in user. 1) Is there a way to change the owner value, or manually set the value when, say, submitting a form, even if I am logged in as myself? 2) Or is there a way to filter the data in a query using something like the logged-in user's email (which, as far as I can tell, wouldn't be hard to make a unique value in one of my collections)? I came up with a potential 3rd solution: 3) Have access to the members collection (assuming one exists when using the built-in members section), which would allow me to filter my list on a members dataset where I could put my unique identifier all in the members collection, or create a members section from scratch (in which case I would be curious about how to approach this, since Wix recently added the members section app).

I have this same issue. I really want to just update the database monthly from a CSV file, but as soon as I upload it, I become the owner and now no one can see their own personal data. Did you figure out a solution by chance?

Hi, the 2nd option would be the best one for you. If I understand correctly, you'll have the email of the users next to every item in the collection. So now you need to understand, when a user gets to the site, who the user is. You should allow them to register on the site (so they will be members). Then, use the 'wix-users' API to get the currently logged-in user's email and filter the data by code. Liran.

Hi @Liran Kurtz (WIX), I'm facing the same issue here, and I'm using the 2nd solution as well. However, this is still not ideal, as I had to loosen the read permission on my database collection from 'Site member author' to 'Site member' (and since new members are auto-approved, this might as well be 'Anyone'). This removes the guarantee that no visitor of my site can read other visitors' data, which makes me a bit uncomfortable. Also, if a user were to change their e-mail address, the 2nd solution would fail. So I would rather, as an admin, be able to manually change the 'Owner' field, but this field is read-only. Can you tell me the reason why it is read-only, and whether there is really no way of changing it? Kind regards, Henk

I am facing the same issue. I was about to buy Wix premium for my business until I found this problem. My business relies on limited contracts with my customers, so I cannot let them store as many entries as they want, because those are paid for separately. I need to let them see only the items that they paid for, but I can't let them see the items of other customers, to avoid leaking sensitive data.
Everything goes right until I set the dynamic dataset to filter by "logged in user", because it filters the repeater using the owner ID of the database, and I am the only person who created the items, so they will all see everything because the ID is always the same no matter what. I really need to filter the repeaters to show ONLY the items of the currently logged-in member. I should be able to assign the owner ID at will. Please advise. Kind regards.

Summerset, it's clear what you are trying to achieve. But unless I am very much mistaken, Wix is not going to offer this possibility. The main reason is that what you want can already be done, just like Liran said. But... you have to code a bit. It's called "Wix Code" after all, not Wix Click-and-Run. With this, I mean that no environment can foresee every possible use case that is out there. I have been in this business for decades and I have seen many "environments" pop up that promised easy, full-fledged development possibilities. But if you really want/wanted something for your business... you have/had to code. If you want just a little bit more than run-of-the-mill stuff, you will have to develop some code. Wix offers this possibility, so they have little reason to implement your requested change. Like Liran said, you need to record a client reference per item (like their email address, or their _id) in a 1-n relationship with a user collection. And then, in a repeater, you filter on that particular user. Protecting a row from unauthorized access can be done as well: you just check whether the current user is the user in the requested row, and done. But, again, it takes a bit of coding. If you are not willing to spend the time on learning how to do this, you could hire somebody to do it for you (no, I am not available; I am trying to help you, that's all). So in short, Wix offers pretty advanced possibilities for developing working apps, but the "click-and-run" paradigm simply has its limitations. Hope I made some sense.

@Giri Zano Hi Giri, our point is that it is safer and more convenient to use the existing Wix access control functionality, instead of every user that encounters this use case having to implement their own on top of it. That is why we are curious about the design choice of making the Owner ID a locked field. Maybe there is a security reason for this design choice that we are unaware of? If there is no such reason, however, it makes sense to remove this restriction. In the meantime, one could definitely use your approach to get around this restriction. Regards, Henk

@Giri Zano Thanks for taking some time to reply; I appreciate it. Well, I totally understand. I coded it and it is working fine. I set the email address to filter the dataset and it works. The only concern is that even though I used the ".onReady" statement, the published site shows all of the information for a moment and then the filtered version. Thanks again.

There is a better solution that allows you to set the real _owner field of the record. Indeed, you can't set the owner field when you call insert or update. However, you can do that when you use data hooks. Data hooks are functions that are called before or after some action on your collection. To add one, click the "Hooks" button in the toolbar in your content manager view (the db itself). A file named data.js will be created. Add a function like the one sketched below; in the code that inserts the data, just make sure to include a field named owner_override with the id that you want to have as owner.
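A minimal sketch of such a hook (the collection is assumed to be called "Items" here; substitute the name of your own collection):

    // backend/data.js
    // beforeInsert hook for the "Items" collection (name is illustrative).
    // If the inserted item carries an owner_override field, copy it into the
    // real _owner field and drop the helper field before the record is saved.
    export function Items_beforeInsert(item, context) {
        if (item.owner_override) {
            item._owner = item.owner_override;
            delete item.owner_override;
        }
        return item;
    }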
You don't have to have this field in your collection's schema. Very nice and elegant solution! This should already take care of most use cases. Is it also possible to change the `_owner` field in an update hook? Because if that's possible, then we can just add `owner_override` to the schema and then anyone can update the `_owner` field via the db data editor page. Dan, I have the hook installed just as you have described. What code do I need to place on the page itself so that the data will stop displaying the "owner" who uploaded the data to the database but by the "user" who is logged in? Your note says I need to add or "include a field named owner_override with the id" on the page -- so that the data displayed corresponds to the item (or row of data) of the current logged in user (and not the owner that imported the data) -- What code do I place on the page? $w does not allow me to choose or place "_owner" field on the front end... Here are some ideas of the page code: import wixUsers from 'wix-users'; import wixData from 'wix-data'; $w.onReady(function () { let user = wixUsers.currentUser; let userId = user.id; let isLoggedIn = user.loggedIn; let owner_override?? = $w('#email') let query = wixData.query($w('#dataset1')); wixData.query($w('#dataset1')) .find() .then( (results) => { let item = results; } ); }); @KD Online Design I don't recommend sending the override owner from the client because it means that anyone could set whatever owner they'd like. Can you describe the product flow that you try to solve? Just step by step who uploads/insert, etc. I will try to help according to your use case. I didn't check, but it should be possible. :) This would be highly unrecommended unless you fully understand how to secure this flow. For example, someone might create records with someone else's email and effectively "injecting" more data to that other user. There are other more complicated-to-explain flows that could break this. Using the built-in security mechanism ensures that your data stays secure and no user can get to other user's data @dan Thanks for your comments, Dan. I see your point! Here is the problem I am trying to solve. We have a backend app in FileMaker with all of our docent data including scheduling docents, reporting their activity, and keeping track of their preferences. We do not want users to update their own information directly for a variety of reasons. The cost to add them, would be prohibitive to our non-profit which operates on membership dues. Today, we collect their preferences and any demographic changes (address, phone, email) through a printed form they scan and email to us. We print the form with their current information and all they need to do is mark what has changed. This is what we are trying to use the member website for: Let them see their current information, let them submit changes for any of the fields, and then on the backend, we will collect the data and import the changes to FileMaker. I have tried a number of methods to do this and have been unsuccessful linking a member to a data collection. Since Wix doesn't allow us to add custom fields to the member data that can be exposed on the website, I resorted to creating a data collection with all of the docent data, including their email address. We preload their information using the Wix import facility which marks the record's owner ID with my webmaster owner ID. Which will not allow me to use the Owner filter on the data collection. 
There are too many fields for us to force our members (many in their late 60's and 70's) to enter the initial data themselves. If there is a better way to allow members to update (or add for new, younger members) their information securely (even if it is with Wix code that we'll have a developer create), I'd love to hear what we should do instead. @barb First of all if all of the data collection access is managed using backend code you are in a pretty secure place. If you are taking responsibility for keeping the docent lists up to date in your data collections based on email addresses this should be fine. When a user has logged into the Wix Site the backend code, as well as front end code, can determine who is logged in using the wix-users-backend currentUser property and from here you can get their email and manage the data accordingly. Now each Site Member has a unique ID which is in currentUser.id. When you are working in back end you could do as dan suggests and update the _owner information using this value but you would have to suppress authentication in the wix-data option argument. This again is safe when done in the back-end so long as front end code can't execute the back-end code without being signed in and having an Admin role for example. Now one other thing you can also do, since these are Site Members, is use the email address to get the User Id from the Members - PrivateMemberData collection that is created for you when you add the Members App... This will give you the User Id to email mapping from the CRM and you can use the ID for the _owner property. Hope this helps. :) Hi guys, I have a maybe similar issue, maybe someone can help me. Members register on my site to create an account. There are two members pages 1: ADD A PET 2: MY PETS A member can add a pet on the ADD A PET page by filling out some input boxes linked to a database, once the form is submitted. This information is visible in the identical form under the MY PETS page, the information that that user submitted is only visible to him ie: FILTER - OWNER IS LOGGED IN The user can add multiple multiple pets under the ADD A PET section and scroll through these pets in the MY PETS sections. He can also UPDATE the pets profile as well as delete it under the MY PETS page. I need the owner of a pet to be able to transfer a "pet profile" to another registered site member by means of adding a button that can do this or some other simple means, he then specifies the email address of the other registered member and clicks TRANSFER. This action would then remove that specific pet from his MY PETS section and add it to the MY PETS section of the other users MY PETS page. Any ideas? It used to work very long time ago. Probably you can't do it for at least a year now Now it is possible to update `_owner` by passing `suppressAuth: true` option. This works only from backend code. Examples: wixData.insert('collectionName', {_owner: "11111111-1111-1111-1111-111111111111", title: "Custom Owner"},{suppressAuth: true}) wixData.update('collectionName', {_id:"existing document id", _owner: "11111111-1111-1111-1111-111111111111", title: "Custom Owner"},{suppressAuth: true}) so it is no longer necessary to use data hooks? When did this change? @Povilas Skruibis thank you so much for updating - outdated guidance on the Wix forums is a major problem. It's good to see updates happening when something changes. ABSTRACT: I have no coding experience and don't use Corvid (only Wix Editor). 
I was running into the same problem as everyone else in this thread. Neither the Help Center nor Support staff could give a workable solution. I did a Google search and found your discussion here for coders. Although I am not a coder, I offer my experience in the event that other non-coders end up here as well.

ISSUE: Members log in and see a table displaying information. This table connects to a dataset which contains information for ALL members, but I only want a member to see HIS/HER data. As it was, any member could log in and see ALL members' information.

SOLUTION: Click the dataset (which displays member-viewed content). Add a Field for the member's "Login Email" (this Field doesn't connect to page elements, thus isn't displayed to members). Go to its settings and apply a filter. Set the Field to "Login Email", the Condition to "Is", and the Value to "Another Dataset". Set the other dataset to "Members Private Dataset" (or similar) and then set its Field to "Login Email." Now only information relevant to the logged-in member is displayed for them on the table.

CONCLUSION: I understand that most everyone in this forum is more educated than I am regarding website development. I just found this to be a solution to my problem, given the limited experience I possess. If the member changes the login email used, they cannot access the data until I have updated my database. I simply provide a notice to them explaining as much. I have found a very limited percentage of members change this information, so it's pretty easy to update. -Thanks for all the insight-

I was so excited to see this post but I can't seem to make it work. In order to reference the Members Private Dataset in the filter, I had to first add that dataset to the page. Then, I added the login email filter to my custom dataset (I already had the login email field in my dataset) as you described, but when I publish it and log in as a member, I don't see any information. I have set it up as a Read-Write dataset so users will be able to edit their own information that I have added via the dashboard when they become a member. (Although I also tested to see if Read-Only would work and it doesn't.) Could you show a screenshot of your page and of the settings for your datasets? I seem to be missing something.

@barb I also noticed the filter does not work for the admin. I am not sure why, but I think it had to do with the admin owning the collection. I assure you that it does work for all other members besides the admin. Try setting up several "test" members and then giving them a try.

@barb Additionally, I have set the collection's permission to "read-only." However, I believe this filter process works for all permission settings. The filter simply doesn't apply to an admin because they see everything.
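For anyone who does want the code-based route discussed earlier in the thread (looking up the member's id from the private members collection, then inserting with suppressAuth), a rough backend sketch might look like this. The function, collection, and field names are illustrative and should be checked against your own site:

    // backend/members.jsw : a web module, so this code only runs on the server.
    import wixData from 'wix-data';

    // Insert a record on behalf of a member identified by email,
    // making that member the _owner of the record.
    // In a real site, also verify the caller's permissions before trusting this.
    export async function addItemForMember(memberEmail, itemFields) {
        // The built-in members collection appears in the editor as
        // Members/PrivateMembersData; verify the exact id in your site.
        const members = await wixData.query('Members/PrivateMembersData')
            .eq('loginEmail', memberEmail)
            .find({ suppressAuth: true });
        if (members.items.length === 0) {
            throw new Error('No member found for ' + memberEmail);
        }
        const toInsert = Object.assign({}, itemFields, { _owner: members.items[0]._id });
        return wixData.insert('Items', toInsert, { suppressAuth: true });
    }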
https://www.wix.com/corvid/forum/community-discussion/changing-the-owner
CC-MAIN-2019-47
en
refinedweb
XML is a descendant of SGML, the Standard Generalized Markup Language; it solves many of the problems with SGML described below. HTML is an SGML application. However, HTML is just one SGML application. It does not have or offer anywhere near the full power of SGML itself. Since it restricts authors to a finite set of tags designed to describe web pages (and describes them in a fairly presentation-oriented way at that), it's really little more than a traditional markup language that has been adopted by web browsers. It doesn't lend itself to use beyond the single application of web page design. You would not use HTML to exchange data between incompatible databases or to send updated product catalogs to retailer sites, for example. HTML does web pages, and it does them very well, but it only does web pages.

SGML was the obvious choice for other applications that took advantage of the Internet but were not simple web pages for humans to read. The problem was that SGML is complicated: very, very complicated. The official SGML specification is over 150 very technical pages. It covers many special cases and unlikely scenarios. It is so complex that almost no software has ever implemented it fully. Programs that implemented or relied on different subsets of SGML were often incompatible with each other. The special feature one program considered essential would be considered extraneous fluff and omitted by the next program.

However, XML 1.0 was just the beginning. The next standard out of the gate was Namespaces in XML, an effort to allow markup from different XML applications to be used in the same document without conflicting. Thus a web page about books could have a title element that referred to the title of the page and title elements that referred to the title of a book, and the two would not conflict.

Next up was the Extensible Stylesheet Language (XSL), an XML application for transforming XML documents into a form that could be viewed in web browsers. This soon split into XSL Transformations (XSLT) and XSL Formatting Objects (XSL-FO). XSLT has become a general-purpose language for transforming one XML document into another, whether for web page display or some other purpose. XSL-FO is an XML application for describing the layout of both printed pages and web pages that approaches PostScript for its power and expressiveness. However, XSL is not the only option for styling XML documents. Cascading Style Sheets (CSS) were already in use for HTML documents when XML was invented, and they proved to be a reasonable fit to XML as well. With the advent of CSS Level 2, the W3C made styling XML documents an explicit goal for CSS. The pre-existing Document Style Semantics and Specification Language (DSSSL) was also adopted from its roots in the SGML world to style XML documents for print and the Web.

The Extensible Linking Language, XLink, began by defining more powerful linking constructs that could connect XML documents in a hypertext network that made HTML's A tag look like it is an abbreviation for "anemic." It also split into two separate standards: XLink for describing the connections between documents and XPointer for addressing the individual parts of an XML document. At this point, it was noticed that both XPointer and XSLT were developing fairly sophisticated yet incompatible syntaxes to do exactly the same thing: identify particular elements in an XML document. Consequently, the addressing parts of both specifications were split off and combined into a third specification, XPath.
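To make the overlap concrete, here is a small invented illustration: the same XPath expression identifies the same element whether it appears inside an XSLT stylesheet or inside an XPointer fragment identifier.

    In XSLT:     <xsl:value-of select="/catalog/book[1]/title"/>
    As XPointer: books.xml#xpointer(/catalog/book[1]/title)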
A little later yet another part of XLink budded off to become XInclude, a syntax for building complex documents by combining individual documents and document fragments . Another piece of the puzzle was a uniform interface for accessing the contents of the XML document from inside a Java, JavaScript, or C++ program. The simplest API was merely to treat the document as an object that contained other objects. Indeed, work was already underway inside and outside the W3C to define such a Document Object Model (DOM) for HTML. Expanding this effort to cover XML was not hard. Outside the W3C, David Megginson, Peter Murray-Rust, and other members of the xml-dev mailing list recognized that third-party XML parsers, while all compatible in the documents they could parse, were incompatible in their APIs. This led to the development of the Simple API for XML, or SAX. In 2000, SAX2 was released to add greater configurability and namespace support, and a cleaner API. One of the surprises during the evolution of XML was that developers adopted it more for record-like structures, such as serialized objects and database tables, than for the narrative structures for which SGML had traditionally been used. DTDs worked very well for narrative structures, but they had some limits when faced with the record-like structures developers were actually creating. In particular, the lack of data typing and the fact that DTDs were not themselves XML documents were perceived as major problems. A number of companies and individuals began working on schema languages that addressed these deficiencies. Many of these proposals were submitted to the W3C, which formed a working group to try to merge the best parts of all of these and come up with something greater than the sum of its parts. In 2001, this group released Version 1.0 of the W3C XML Schema Language. Unfortunately, this language proved overly complex and burdensome. Consequently, several developers went back to the drawing board to invent cleaner, simpler, more elegant schema languages, including RELAX NG and Schematron. Eventually, it became apparent that XML 1.0, XPath, the W3C XML Schema Language, SAX, and DOM all had similar but subtly different conceptual models of the structure of an XML document. For instance, XPath and SAX don't consider CDATA sections to be anything more than syntax sugar, but DOM does treat them differently than plain-text nodes. Thus, the W3C XML Core Working Group began work on an XML Information Set that all these standards could rely on and refer to. As more and more XML documents of higher and higher value began to be transmitted across the Internet, a need was recognized to secure and authenticate these transactions. Besides using existing mechanisms such as SSL and HTTP digest authentication built into the underlying protocols, formats were developed to secure the XML documents themselves that operate over a document's entire life span rather than just while it's in transit. XML encryption, a standard XML syntax for encrypting digital content, including portions of XML documents, addresses the need for confidentiality. XML Signature, a joint IETF and W3C standard for digitally signing content and embedding those signatures in XML documents, addresses the problem of authentication. 
Because digital signature and encryption algorithms are defined in terms of byte sequences rather than XML data models, both XML Signature and XML Encryption are based on Canonical XML, a standard serialization format that removes all insignificant differences between documents, such as whitespace inside tags and whether single or double quotes delimit attribute values. Through all this, the core XML 1.0 specification remained unchanged. All of this new functionality was layered on top of XML 1.0 rather than modifying it at the foundation. This is a testament to the solid design and strength of XML. However, XML 1.0 itself was based on Unicode 2.0, and as Unicode continued to evolve and add new scripts such as Mongolian, Cambodian, and Burmese, XML was falling behind. Primarily for this reason, XML 1.1 was released in early 2004. It should be noted, however, that XML 1.1 offers little to interest developers working in English, Spanish, Japanese, Chinese, Arabic, Russian, French, German, Dutch, or the many other languages already supported in Unicode 2.0. Doubtless, many new extensions of XML remain to be invented. And even this rich collection of specifications only addresses technologies that are core to XML. Much more development has been done and continues at an accelerating pace on XML applications, including SOAP, SVG, XHTML, MathML, Atom, XForms, WordprocessingML, and thousands more. XML has proven itself a solid foundation for many diverse technologies.
https://flylib.com/books/en/1.132.1.18/1/
CC-MAIN-2019-47
en
refinedweb
We are happy to announce that PhpStorm 2019.1 Beta is now available for download! Below is a roundup of the notable highlights: You can now debug Twig and Laravel Blade templates, use a special code cleanup tool for PHP, and have PhpStorm detect dead code. We’ve also reworked imports, improved autocompletion for function arguments and return values, and added a Recent Locations popup! Debug Twig and Laravel Blade templates With PhpStorm 2019.1, you’ll be able to debug original uncompiled template files of two popular engines, Twig and Blade. Simply specify the path to the template cache folder, turn on debug mode (for Twig), and set a breakpoint in the template file. The debugger will stop the execution at that line and you’ll be able to see what’s going on right in the IDE: the context, local and global variables, and so on. Docker support improvements For interpreters based on Docker Compose, you can now choose between docker-compose run or docker-compose exec for executing containers. If you have a heavy container that you don’t want to restart on each test run, you may reuse it by choosing the docker-compose exec option. Or you can use docker-compose run for lightweight containers or those not working in daemon mode (that is, stopping right after start). Locating Dead Code PhpStorm 2019.1 can help you find redundant code by highlighting classes, class members, or functions that are never used. Find candidates for removal instantly by turning on the ‘Unused declaration’ inspection under Preferences | Editor | Inspections in PHP | Unused. The inspection takes into account dynamic usages of the code, for example via magic methods. You can check the report for the whole project by selecting Code -> Inspect Code… Code Cleanup for PHP In PhpStorm, Code Cleanup is a batch action that lets you run a number of safe transformations on the whole project or a part of it. In PhpStorm 2019.1, Code Cleanup comes with PHP-specific intentions: it can optimize full class name occurrences by either adding the ‘use’ statement or removing the unnecessary part from it. It can also automatically fix code style issues with PHP CS Fixer or PHP_CodeSniffer’s phpcbf. Code Cleanup can be executed automatically before changes are committed, or you can trigger it manually at any time via Code -> Code Cleanup… Reworked Imports We’ve reworked the inspections and intention actions related to namespaces importing and using FQN. The main idea behind them is to avoid qualifiers as much as possible: now, PhpStorm will let you simply remove a redundant qualifier if it is possible, or replace it with the corresponding ‘use’ import statement. Also, now when you paste some code into a file, PhpStorm will ask to reuse an existing alias. Improved Autocompletion With the help of a special file, .phpstorm.meta.php, PhpStorm can now suggest arguments and return values better. This is to cover situations when, instead of some simple type like integer or string, you would like to see a certain set of constants suggested. Or if you expect some function to return a certain constant, but with type hints, you could only know that it’s an integer: You can also improve suggestions in PhpStorm for your library or project by providing your own .phpstorm.meta.php file. Check out, for example, this symfony-meta package for the Symfony framework. VCS improvements This release comes with neat improvements for VCS. 
For example, there’s a new “Uncheck all” checkbox for partial Git commits which lets you clear the current selection, and then select a specific set of changes to commit. In addition, fixup, squash, and cherry-pick actions are now available right in the Git log. Recent Locations popup In the new navigation popup, you’ll find recently visited code points presented in context – a couple of lines before and a couple of lines after. All the locations are chronologically ordered in this popup, with the last visited location at the top. To call up the new Recent Locations popup, press Cmd-Shift-E / Ctrl+Shift+E. Custom Themes If Default white and Darcula themes are not enough, create a custom one! Every UI aspect of your IDE, from icons to radio buttons and arrows, are now configurable. You’ll be able to customize and save as a new theme plugin. Stay tuned for more information! Other improvements for PHP - New inspection: method may be ‘static’ - New refactoring: Move to class for functions and constants - New quick-fix: Remove unused variable - Multiple new intentions for manipulating strings - New coloring options for primitive parameter types and class members by visibility - PHPDoc styling configuration improved: sort use statements, define the order of tags, customize the number of spaces Web technologies - The documentation (F1) for CSS properties and HTML tags and attributes now shows the up-to-date descriptions and information about the browsers support from MDN - New browser compatibility inspection for CSS properties - Run and debug Node.js app when using Docker Compose - Extract CSS variable refactoring - New inspections for Angular applications - Convert a function with Promise to an async function with an intention - New intentions that introduce object or array destructuring Database tools - Support for new databases: Greenplum, Vertica, and Apache Hive - Code completion supports combined statements for CREATE and DROP - Support for DEFINER attributes in MySQL and MariaDB - Support for the Oracle mode in MariaDB - You can now set the default folder for a project Download the beta from our website or via Toolbox App. Please feel free to share with us any feedback that you have: add comments on this blog post or speak up in our public issue tracker. Stay tuned as the PhpStorm 2019.1 release is coming soon! The JetBrains PhpStorm Team The Drive to Develop
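As an illustration of the .phpstorm.meta.php mechanism mentioned above, a minimal file might look something like the following; the class and constant names are invented for the example.

    <?php
    // .phpstorm.meta.php : hints for the IDE only, never executed by your application.
    namespace PHPSTORM_META {
        // Suggest specific constants for the first argument of Logger::log()...
        expectedArguments(\App\Logger::log(), 0, \App\Logger::INFO, \App\Logger::WARNING, \App\Logger::ERROR);
        // ...and advertise the constants that Connection::getStatus() can return.
        expectedReturnValues(\App\Connection::getStatus(), \App\Connection::OPEN, \App\Connection::CLOSED);
    }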
https://blog.jetbrains.com/phpstorm/2019/03/phpstorm-2019-1-beta/
CC-MAIN-2019-47
en
refinedweb
A node that's a subscriber, a publisher and uses dynamic parameters [Python]

Suppose I have defined two custom dynamic parameters, gain1 and gain2, in a cfg file. I would like to use these in a node that subscribes and publishes to two topics. Here's what I tried:

    import rospy
    from std_msgs.msg import Float64
    from dynamic_reconfigure.server import Server
    from myPack.cfg import paramConfig

    def param_callback(config, level):
        gain1 = config.gain1
        gain2 = config.gain2
        return config

    def callback1(msg):
        in1 = msg.data

    def callback2(msg):
        in2 = msg.data

    rospy.init_node('mixer')
    srv = Server(paramConfig, param_callback)
    sub1 = rospy.Subscriber('in1', Float64, callback1)
    sub2 = rospy.Subscriber('in2', Float64, callback2)
    pub1 = rospy.Publisher('out1', Float64)
    pub2 = rospy.Publisher('out2', Float64)
    out1 = Float64()
    out2 = Float64()

    while not rospy.is_shutdown():
        out1.data = in1*gain1
        out2.data = in2*gain2
        pub1.publish(out1)
        pub2.publish(out2)

But it seems the subscriber callbacks don't work. This error is given for the first line of the while loop:

    NameError: name 'in1' is not defined

Note that the subscription and parameter server code work fine individually. Am I using the right method?

Update: As suggested by gvdhoorn, I had to define gain1 and gain2 as global variables and initialize in1 and in2 before the while loop. The latter is needed because the program could reach the while loop before a callback; in that case, in1 is still undefined and I would receive the same error. I feel this method is a bit inelegant and, as gvdhoorn pointed out, using a class instead of global variables is better programming.

Another problem you should change: your publishers are being declared as Subscribers.

    pub1 = rospy.Subscriber('out1', Float64)

Change to:

    pub1 = rospy.Publisher('out1', Float64, queue_size=1)

Whoops. Just saw that. The node is a simplified version of what I use and written specifically for this question, so I never actually ran it.

Ah ok then! It's just that I reproduced your problem, fixed the variable scope and got into another problem related to that (a Subscriber object doesn't have the .publish method). =]
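Putting the suggestions from the update and the comments together (a class instead of globals, publishers declared as Publishers, and a loop rate added so the while loop does not spin flat out), a rough, untested sketch of the node might look like this, assuming the same topic names and myPack.cfg as above:

    #!/usr/bin/env python
    import rospy
    from std_msgs.msg import Float64
    from dynamic_reconfigure.server import Server
    from myPack.cfg import paramConfig

    class Mixer(object):
        def __init__(self):
            # Defaults so the loop can run before any message or
            # reconfigure callback has fired.
            self.gain1 = 1.0
            self.gain2 = 1.0
            self.in1 = 0.0
            self.in2 = 0.0

            self.srv = Server(paramConfig, self.param_callback)
            rospy.Subscriber('in1', Float64, self.callback1)
            rospy.Subscriber('in2', Float64, self.callback2)
            self.pub1 = rospy.Publisher('out1', Float64, queue_size=1)
            self.pub2 = rospy.Publisher('out2', Float64, queue_size=1)

        def param_callback(self, config, level):
            self.gain1 = config.gain1
            self.gain2 = config.gain2
            return config

        def callback1(self, msg):
            self.in1 = msg.data

        def callback2(self, msg):
            self.in2 = msg.data

        def spin(self):
            rate = rospy.Rate(10)  # publish at 10 Hz instead of busy-looping
            while not rospy.is_shutdown():
                self.pub1.publish(Float64(self.in1 * self.gain1))
                self.pub2.publish(Float64(self.in2 * self.gain2))
                rate.sleep()

    if __name__ == '__main__':
        rospy.init_node('mixer')
        Mixer().spin()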
https://answers.ros.org/question/278409/a-node-thats-a-subscriber-a-publisher-and-uses-dynamic-parameters-python/?answer=278410
CC-MAIN-2019-47
en
refinedweb
In today's Programming Praxis exercise, our goal is to implement a unix checksum utility. Let's get started, shall we?

Some imports:

    import Data.Char
    import System.Environment

I made two changes in the checksum algorithm compared to the Scheme version. I included the conversion to a string to remove some duplication, and I used a simpler method of dividing and rounding up.

    checksum :: String -> String
    checksum = (\(s,b) -> show s ++ " " ++ show (div (b + 511) 512)) .
               foldl (\(s,b) c -> (mod (s + ord c) 65535, b + 1)) (0,0)

Depending on whether or not the program was called with any arguments, the checksum is calculated for either the stdin input or the files provided.

    main :: IO ()
    main = getArgs >>= \args -> case args of
        [] -> interact checksum
        fs -> mapM_ (\f -> putStrLn . (++ ' ':f) . checksum =<< readFile f) fs

Tags: bonsai, checksum, code, Haskell, kata, praxis, programming, sum, unix
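(As a quick sanity check of the fold, loading the file into GHCi and evaluating checksum on a small string should give, if I've read the code right:

    *Main> checksum "hello"
    "532 1"

since the character codes of "hello" sum to 532 and five bytes round up to one 512-byte block.)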
https://bonsaicode.wordpress.com/2011/03/25/programming-praxis-sum/
CC-MAIN-2017-30
en
refinedweb
Frees an existing string allocated by the slapi_ch_malloc(), slapi_ch_realloc(), and slapi_ch_calloc() functions. Call this function instead of the standard free() C function.

Syntax:

    #include "slapi-plugin.h"
    void slapi_ch_free_string( char **s );

This function takes the following parameter:

    s - Address of the string that you wish to free.

This function frees an existing string, and should be used in favor of slapi_ch_free() when freeing strings. It performs compile-time error checking for incorrect arguments, as opposed to slapi_ch_free(), which defeats the compile-time checking because you must cast the argument to (void**).
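A short usage sketch (it assumes the companion slapi_ch_strdup() allocator from the same plug-in API):

    #include "slapi-plugin.h"

    static void example(void)
    {
        /* The string must come from the slapi_ch_* allocators, not plain malloc(). */
        char *s = slapi_ch_strdup("cn=example");

        /* ... use s ... */

        /* Pass the address of the pointer; no (void **) cast is needed. */
        slapi_ch_free_string(&s);
    }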
http://docs.oracle.com/cd/E19693-01/819-0996/aaier/index.html
CC-MAIN-2015-06
en
refinedweb
We have a near ubiqitious data storage and data transmission format for intranets and the internet yet the many want to poison the interoperability well by increasing the number of incompatible formats that are called 'XML'. --Dare Obasanjo on the xml-dev mailing list, Monday, 22 Nov 2004 Namespaces are an intrinsic part of an element. A furniture store <table> doesn't become an XHTML <table> just because you moved it to an XHTML document. --Jason Hunter on the jdom-interest mailing list, Sunday, 28 Nov 2004. --Chris Spencer Read the rest in Linux Opinion: An Open Letter to a Digital World (LinuxWorld) the trick with xslt is that it's a specification of what to do, not an instruction to do something. ie it is truly non-procedural. the paradigm shift is from programming (giving a clear step by step set of instructions) to specifying (if you have an x then do y). this is subtle but critical. my experience training practicing programmers to make this paradigm shift is that they struggle. they're programmers because they can build a detailed set of instructions. if you talk to managers you'll find they can actually do this sort of work better because their day-to-day work is based on broad directions not detailed instructions. --Rick Marshall on the xml-dev mailing list, Thu, 11 Nov 2004 The real problem with DOM is that it is good enough for many purposes; it has far too many methods, many overlapping in purpose and not named consistently. Committees and legacies do that. Despite that, everyone already has a DOM library handy -- not just Java programmers, but also programmers of many other languages. It is too easy to just choose DOM because it is widespread and available. Although I would not generally choose to write in the Java language if I had the option to write Python (or maybe Ruby, or even Perl), XOMreally does everything better than DOM. XOMis more correct, easier to learn, and more consistent. Most of its capabilities have not been covered in this introduction, but rest assured it incorporates the usual collection of XML technologies: XPath, XSLT, XInclude, the ability to interface with SAX and DOM, and so on. If you are doing XML development in the Java language, and you are able to include a custom LGPL library in your application, I strongly recommend that you give XOMa serious look. --David Mertz Read the rest in XML Matters: The XOM Java XML API. --James Gosling Read the rest in ACM Queue - A Conversation with James Gosling - James Gosling talks about virtual machines, security, and of course, Java. Market share does not predict security. Apache has more market share than has Microsoft IIS, which has more holes than Apache. --Ben Goodger Read the rest in Unearthing the origins of Firefox | Newsmakers | CNET News.com XSLT's XML format was one of its huge advantages over DSSSL, which it effectively replaced the way XML replaced SGML (and DSSSL had far less success than SGML). It's much easier to read "</xsl:if></xsl:for-each></xsl:if></xsl:variable>" and know exactly what kinds of structures are being ended, in what order, than to look at "))))" and know the same thing. --Bob DuCharme on the xml-dev mailing list, Tuesday, 9 Nov 2004 For big systems with open-ended scaling requirements, architectures that are asynchronous and queued rather than call and response generally seem to win, big time. 
It’s not an accident that IBM and Tibco were making millions selling big robust asynchronous queuing infrastructure long before anyone started talking about “Web Services.” --Tim Bray Read the rest in ongoing · Web Services Theory and Practice. --Michael Kay on the xml-dev mailing list, Friday, 22 Oct 2004. --Sean McGrath Read the rest in Sean McGrath, CTO, Propylon you just can’t get both the necessary flexibility and performance that you need for XML unless you are prepared to move away from a purely relational approach. --Philip Howard Read the rest in IBM moves the database goalposts | The Register. --Tim O'Reilly Read the rest in Read/Write Web: Tim O'Reilly Interview Mozilla came back to life and is now improving, which is no longer the case for IE. If mozilla brings a good run time environment for intranets apps, then things may change and we may have an alternative option to XAML/IE/longhorn. Mozilla teams should listen more to developers needs and less to W3C in order to succeed. --Didier PH Martin on the xml-dev mailing list, Wednesday, 07 Jul 2004 Whenever I hear about a new text editor that’s “better than BBEdit,” the first thing I do is open its Find and Replace window. Then I run back to BBEdit. --Michael Tsai Read the rest in BBEdit 8 Python is essentially Scheme with indentation instead of parentheses. --John Cowan on the xml-dev mailing list, Wednesday, 23 Oct 2002. --Kurt Cagle Read the rest in Metaphorical Web: Conferences and Google the canonical documentation of the Scheme and Lisp standards is maintained not in S-expression syntax but in LaTeX syntax. If S-expressions were easier to edit, it would be most logical to edit the document in S-expressions and then write a small Scheme program to convert S-expressions into a formatting language like LaTeX. This is, what XML and SGML people have done for decades, because they really do believe that their technologies are better for document editing and maintenance than LaTeX. The Lisp world seems to have come to a different conclusion about S-expressions versus LaTeX. --Paul Prescod Read the rest in XML is not S-Expressions IE doesn't really do namespaces, it vaguely emulates them with a hack. The whole Mozilla family on the other hand implements them as per spec, as do (or soon will) browsers of the KHTML family. --Robin Berjon on the xml-dev mailing list, Thu, 28 Oct 2004.) If you are a Java programmer, do not trust your instincts regarding whether you should use XML as part of your core application in Python. If you're not implementing an existing XML standard for interoperability reasons, creating some kind of import/export format, or creating some kind of XML editor or processing tool, then Just Don't Do It. At all. Ever. Not even just this once. Don't even think about it. Drop that schema and put your hands in the air, now! If your application or platform will be used by Python developers, they will only thank you for not adding the burden of using XML to their workload. --Phillip J. Eby Read the rest in dirtSimple.org: Python Is Not Java For the user who spends 50 percent of the time in the Web browser and another 40 percent in the mail client, the Linux desktop is already there. --Andy Hertzfeld Read the rest in Technology Review: An Alternative to Windows The First Amendment can't give special rights to the established news media and not to upstart outlets like ours. Freedom of the press should apply to people equally, regardless of who they are, why they write or how popular they are. 
--Eugene Volokh Read the rest in The New York Times > Opinion > Op-Ed Contributor: You Can Blog, but You Can't Hide, --Reporters without Borders Read the rest in Internet News Article | Reuters.com Just because the spammers are sociopaths is no reason for webmasters to behave in an equally offensive manner. --Alan Eldridge Read the rest in Using Apache to stop bad robots : evolt.org, Backend RSS is a syndication format. It's not well-suited to carrying ads. It's designed for syndicating content, and content only. No navigation, no design, no advertisements. --Andy Baio Read the rest in Wired News: Advertisers Muscle Into RSS The bottleneck for XML processing in an application is dependent on the application. This is all old ground. To some people the wire size is important so the added cost of compressing/decompressing is fine. For others, processing time is the bottleneck so compounding XML parsing with the cost of compressing/decompressing XML makes things worse not better. There is no one-size-fits-all solution to optimization problems. --Dare Obasanjo on the xml-dev mailing list, Wednesday, 18 Aug 2004 apparently all of the talk of alternative XML encodings being much more efficient than text XML are based on consistent use of document classes for each test, so this consistency means the kind of redundancy that makes compression much easier. When someone prototypes an encoding that is orders of magnitude more efficient for arbitrary XML, which Mike said that no one had done yet, I'll more seriously consider the possibility that a binary XML standard might be worth the trouble. --Bob DuCharme on the xml-dev mailing list, Monday, 22 Nov 2004 Why is it that even simple questions asked about straightforward aspects of Unicode somehow mutate into hairsplitting arguments about who exactly meant what and which version does which...? --Mark E. Shoulson on the Unicode mailing list, Tuesday, 23 Nov 2004? --Roger L. Costello on the xml-dev mailing list, Tuesday, 24 Aug 2004 post XML Schemas, the W3C brand is fairly diminished as far as new specs are concerned. XBC could easily go the way of XPointer, XML 1.1 and XML Fragment Interchange: like a quarrelsome but beautiful neighbour, decorative but to be avoided. --Rick Jelliffe Read the rest in Binary XML? What about Unicode-aware CPUs instead? I'll admit that there may be people who actually want the features SOAP provides and REST doesn't, maybe even for reasons other than relentless marketing. Counter to the past few years of Web Services propaganda, however, I'd also argue that they're a minority of cases, in projects if not in wallets. Most of the time using SOAP is just adding wasted overhead in the service of an architecture that isn't generally necessary. Most developers don't need CORBA, nor do most most developers need Web Services. The real problem here from my perspective isn't whether or not SOAP sucks, but that gleeful vendors tried to pretend the market for it was much larger than it actually was, and weren't keen on hearing from people who pointed out that most of the time SOAP isn't a particularly clean solution. In fact, much of time, it's poison. XML-based technologies seem particularly susceptible to the "if we standardize it, everyone will use it" fallacy. Somehow people seem to have absorbed the standardization aspect of XML while missing its flexibility and the fact that it was explicitly designed as "SGML for the Web". They've just kept going with an insane urge to create more sort-of standards... --Simon St. 
Laurent Read the rest in Eric Newcomer's Weblog: More on WS-* Complexity. --Bill de hÓra Read the rest in Bill de hÓra Why do you need a nul? They're not exactly legal characters in plain text; I know of no program that would do anything constructive with them in plain text. A file with arbitrary control characters in it is generally not a plain text file; an escape code certainly has no fixed meaning and where it does have meaning it does things, like underlining and highlighting and other things, that aren't exactly plain text. --D. Starner on the unicode mailing list, Sunday, 14 Nov 2004.. --Tim O'Reilly Read the rest in Read/Write Web: Tim O'Reilly Interview, Part 1: Web 2.0 The W3C also doesn't help with its wonderful rule that "Cool URIs should be as hard as possible to remember". Throwing a random year in your namespace URIs is considered Good Practice. I guess we should be thankful they're not URNs. --Robin Berjon on the xml-dev mailing list, Wed, 27 Oct 2004 SVG is something of a platypus, ornithorincus anatinus (the name of which I remember, curiously enough, from a Mr. Roger's Neighborhood song). It is a graphics format. It is an animation format. It is an interactive GUI format. It is a DOM for performing integrated web services. It's becoming a publishing format. Like the duckbill platypus, it seems like it was stuck in some kind of bizarre transmogrifier ray, a la Vincent Price's The Fly, neither bird nor mammal but somewhere in between. There's never really been anything like it, to be perfectly honest. Flash often comes to mind as the point of comparison, but in reality, Flash lacks the capabilities for abstraction that are intrinsic to SVG. Don't get me wrong on this - Flash is a very powerful tool for creating impressive looking graphic animations. The difference between Flash and SVG, however, is that Flash is a self-contained world; SVG on the other hand is beginning to shape up into an application that entwines itself within other specifications. This will become more obvious when SVG moves more into the native space of browsers and operating systems, rather than being a plug-in. The significance of the Mozilla SVG effort, even at its current nascent stage, is that you can create interactive and animated graphics inline to other markup such as XHTML or XUL. This means, among other things, that the graphics on a page are immediately accessible as part of the DOM, are integrated into the whole fabric of a web page both programmatically and visually. --Kurt Cagle Read the rest in Metaphorical Web: SVG and the Search for <elegance>. --Roger Johansson Read the rest in Web development mistakes | Lab | 456 Berea Street I use XML day in and day out and have learned everything I know by trial an error. I've made many mistakes along the way. I've tried my best to learn from them, but Effective XML was the book that made everything click for me. The best part is that the book went well beyond just helping me see my errors. I've already applied some of the ideas to new work I've done recently and have been able to head off some of the problems I would have encountered. Effective XML is by far the best XML book I've ever read, and quite possibly the best tech book I've read all year. --Norman Richards Read the rest in Review: Effective XML. 
--Charles Cooper Read the rest in Why I dumped Internet Explorer | Perspectives | CNET News.com I think it's an adventage when all the involved languages are based on the same (XML) convention: The data description (XML), the transformation (XSLT) and the output GUI (XHTML). It's easier to play with XHTML and apply the changes to the XSLT - comparing to integration to any scripting language. When using XSLT, the original XHTML tags stay "as is". Therefor, it's easier to understand the XHTML within the XSLT doc, than from a script. --Amir Yiron on the xsl-list mailing list, Wed, 19 May 2004 Firefox is suffering from a success crisis. The bad news is so many people can't get to the site. The good news is its popularity. --Stephen Pierzchala Read the rest in Firefox 1.0 fans clog Mozilla site | CNET News.com re: Capitalization: Should you use ID or Id Speaking for myself, when I see mixed capitalization, I switch from thinking in acronym/abbreviation mode to thinking in word mode. The goal with ID should be to immediately identify that you're referring to the word Identification, as opposed to Id. Of course, a millisecond's thought figures this out, but I see this as a kind of cognitive speed bump. --Chris B. Behrens Read the rest in Capitalization: Should you use ID or Id. --Liam Quin on the xml-dev mailing list, Thursday, 21 Oct 2004 correctness always comes first, however rare the scenario; and I also try to live by the principle that a clean API is more important than a 2% performance improvement. If you want a 2% performance improvement, just wait for next week's hardware. --Michael Kay on the xml-dev mailing list, Thursday, 28 Oct 2004. --Tim Bray Read the rest in ongoing - On Resources Server-side processing language gives you much flexibility in terms of applying logic upfront. XSLT takes this to a whole nother level. Before XSLT (for me) HTML was a dead horse, and if I wanted to get crafty with it, I had to intertwine the HTML code with my server-side code. This gets messy messy messy, and further complicates the server-side code. XSLT allows for full separation of presentation and server-side processing. Not just that, but XSLT allows you to *program* (I call it that) at the presentation level. --Karl J. Stubsjoen on the xsl-list mailing list, Wed, 19 May 2004). --Michael Rys on the xml-dev mailing list, Tuesday, 18 Nov 2003 I like building web apps that (from the users point of view anyway) have only one url. Remembering that xslt is xml and itself easy to manipulate with xslt or dom code, it's perfectly possible, indeed practical, to have one asp/aspx page which has a resource of many interfaces (real(on disk) or virtual(created on the fly) (probably cached)) and many data sources which map together to form a whole application. Sort of an application that generates itself in response to the users input (according to rules of course). --Rod Humphris on the xsl-list mailing list, Wed, 19 May 2004 Stupid is not illegal. --Norm Walsh on the www-tag mailing list, 28 Jul 2003 W3C seems like a parliament too far away from practical needs and caught into political vested interests or simply jammed into ethereal dialogs. --Didier PH Martin on the xml-dev mailing list, Wed, 07 Jul 2004. --Jason Hunter Read the rest in A Conversation with Jason Hunter on XML and Java Technologies The last two weeks have really shown off the difference between open source projects and closed source to me. 
The short version is “Closed source software can go to hell.” libxml2 is supported on the GNOME XML mailing by Daniel Veillard at Redhat. The responsiveness on the mailing list is utterly amazing. I got one response in 8 minutes, the next one in 2 hours. Compare and contrast with a closed source vendor that shall remain nameless, but the first one to pop into your head is probably the right one. 9 days and counting so far for a useful meaningful response. Repeat after me: “I will not be a share cropper.” -- Victor Ng Read the rest in crankycoder - vlibxml2 - first usable release When you have two different organizations trying to push two different vocabularies for solving the same problem, it doesn't help the supply chain. If you're a small guy, supporting a bunch of different schemas gets difficult. --Ron Schmelzer, ZapThink Read the rest in XML: Too much of a good thing? | CNET News.com The DOM is the infant of HTML DOM 0 implementations, XML when it didn't have namespaces, and then some heavy-duty namespace grafting on top. It's a tribute to its inceptors that it's not far more monstrous. --Robin Berjon on the xml-dev mailing list, Wed, 27 Oct 2004. --Patrick Stickler on the www-tag mailing list, Monday, 25 Oct 2004 The 'single schema' approach only works in the limited case where the information is a small set, usually in a highly regulated and constrained domain. Also - it also often works when people just do a 'one-off' interchange with limited participants. So - they have some initial success - then try and scale across a whole community - and then discover it is not going to be a simple linear growth path. Not to mention the need to express more than just the simple constriant rules and share those across the community. --David RR Webber on the xml-dev mailing list, Tuesday, 07 Sep 2004 There are a lot reasons why the Web has been huge hit with the developer crowd. But, IMHO, the main "success" factor in developing for the web is that of visibility. The core specs of the Web (HTTP, HTML, CSS) are all there ready to be uncovered. Countless of times I leveraged the visibility of the Web to troubleshoot problems, learned how to create new problems, and gain invaluable insight, all because the "guts" of the Web were there ready to be digested. It is a very important aspect of the web that must utilized in the WS-* world. Therefore, I think it is imperative that WS-* vendors engender visibility of the specs at the protocol level (a binary infoset.. yuck) and simplified exposure of the specs at the programming model level. Trust me, you will be very relieved when you have to troubleshoot a interop problem with a Google service in five years. --Dave Bettin Read the rest in Show me the angle brackets van der Vlist on the xml-dev mailing list, Saturday, 23 Oct 2004 To me, the ultimate boon coming from the XML world will probably end up being a much easier way to create a standardized document format that would be open source and universal. Think about a format that would embed annotations and commentary, revision marks, stylesheets, and the like in a format that every other word processing vendor besides Microsoft would directly support. It's a crime that more than twenty-five years from the time the PC friendly word processor was written there is still no definitive open standard. 
--Kurt Cagle on the "Computer Book Publishing" mailing list, Sunday, 25 Mar 2001 So a benefit of the RDF solution is that instead of leveraging existing investments in relational data stores that are common place in the enterprise one can use a different, potentially incompatible data store? Have you missed the occurences within the database world in the past few years with regards to Object Oriented Databases and Native XML databases? This should be taken to heart whenever one touts some new data storage technology as a replacement for relational stores. --Dare Obasanjo on the xml-dev mailing list, Thursday, 24 Apr 2003. --Salam Pax Read the rest in Guardian Unlimited | Special reports | The Baghdad Blogger goes to Washington: day four Open source is the ticket out of the banality Microsoft has imposed. --Louis Suárez-Potts Read the rest in Technology Review: An Alternative to Windows. --Charles Cooper Read the rest in Why I dumped Internet Explorer | Perspectives | CNET News.com If there's one thing that the RSS Draconian Wars taught us, it's that you don't want to be involved in any discussion of XML and error handling. --Phil Ringnalda Read the rest in phil ringnalda dot com: PHP turns evil When you use defined standards and valid code you future-proof your documents by reducing the risk of future web browsers not being able to understand the code you have used. --Roger Johansson Read the rest in Developing With Web Standards | 456 Berea Street Bloggers and radio hosts pound newspapers for bias that pales in comparison to their own. The same people who pilloried former New York Times Executive Editor Howell Raines for mounting a "crusade" against the Augusta National Golf Club’s men-only policy devoted their energies to the swift boat story with an obsessiveness impossible to contemplate in a general news publication. The same critics who stomped up and down when the Los Angeles Times made the mistake of saying none of the Swift Boat Veterans served on a boat with Kerry (actually, one did) seemed altogether blasé when the coverage for which they’d been begging exposed the accusatory veterans as being very far from scrupulously truthful. .) --Matt Welch Read the rest in Reason: A Swift Boat Kick in the Teeth: How the mainstream media grapple with partisans From my markup-centric perspective, RDF is ugly, high-level, and excessively charged with meaning encoded so abstractly as to be nearly cryptographic. Oh, and it's painfully constraining since it can't figure out how to deal with mixed content, a common human construct. --Simon St.Laurent on the xml-dev mailing list, Tuesday, 20 Aug 2002 Market Dominance Netscape had it by being first. Microsoft has it by being everywhere. Firefox will have it by being best. --Ben Goodger Read the rest in Inside Firefox: Market Dominance Read the rest in Bill de hÓra: Monster Oriented XML, used in conjunction with, for example, Java technologies and SQL, does provide digital archives and libraries developers with a significant means for tagging data that more effectively enables interoperability between and across systems, particularly in distributed network environments. This is mostly backend stuff, i.e., it is invisible to the end-user- the client- but it enables robust search and retrieval of data in ways not possible without it. --James Landrum on the xml-dev mailing list think for a lot of Eudora users, myself included, the lack of support for HTML email is a feature, not a bug. --Robert Gruber on the WWWAC mailing list. 
--David Walker Read the rest in Shorewalker.com - Simplicity and ubiquity matter (or, How reality mugged Joel Spolsky) there's no harm in using XML Schema to check data against the business rules, so long as you realize this is *an* XML Schema, not *the* XML Schema. We need to stop thinking that there can only be one schema. --Michael Kay on the xml-dev mailing list, Thursday, 19 Aug 2004 I was a fool for believing that Office 2003 would open up the data generated by MS's cash cow products to 3rd party XML applications. Giving the peasants, oops sorry, "customers" some options ain't no way to run an evil empire :-) One Word to write them all, one Access to find them, one Excel to count them all, and thus to Windows bind them. --Mike Champion on the xml-dev mailing list, Saturday, 12 Apr 2003 The more I look at what's happening with WS*, the more I think it looks exactly like what the OMG did with CORBA - a blizzard of specs no one cares about, which tends to make vendor interop harder and harder. --James Robertson Read the rest in Smalltalk Tidbits, Industry Rants: What CORBA got wrong? XML is really really good for interchange and really really irritating for in-memory manipulation. I think we all ought to be more up-front about this --Tim Bray on the xml-dev mailing list, Wed, 21 Aug 2002 On the last system I worked on, we were struggling with SOAP and switched to a simpler REST approach. It had a number of benefits. Firstly, it simplified things greatly. With REST there was no need for complicated SOAP libraries on either the client or server, just use a plain HTTP call. This reduced coupling and brittleness. We had previously lost hours (possibly days) tracing problems through libraries that were outside of our control. Secondly, it improved scalability. Though this was not the reason we moved, it was a nice side-effect. The web-server, client HTTP library and any HTTP proxy in-between understood things like the difference between GET and POST and when a resource has not been modified so they can offer effective caching - greatly reducing the amount of traffic. This is why REST is a more scalable solution than XML-RPC or SOAP over HTTP. Thirdly, it reduced the payload over the wire. No need for SOAP envelope wrappers and it gave us the flexibility to use formats other than XML for the actual resource data. For instance a resource containing the body of an unformatted news headline is simpler to express as plain text and a table of numbers is more concise (and readable) as CSV. --Joe Walnes Read the rest in Joe Walnes, REST and FishEye. --Joshua Marinacci Read the rest in java.net: My 1 year anniversary at Java.net: the social side of software. [August 21, 2004] XML in general doesn't consider the difference between CDATA and other text semantically meaningful; XSLT in particular discards that distinction on input. Trying to treat CDATA boundaries as meaningful is a Very Bad Idea. --Joseph Kesselman on the xalan-j-users mailing. --Paul Graham Read the rest in What the Bubble Got Right It's a problem that people should have to pay for a whole OS upgrade to get a safe browser. It does look like a certain amount of this is to encourage upgrade to XP. --Michael Cherry, Directions on Microsoft Read the rest in Microsoft to secure IE for XP only | CNET News just about anything else available. --Paul Brislen Read the rest in New Zealand News - Technology - Kiwi leads effort to build a better browser”. --Douglas Bowman Read the rest in Stopdesign | The IE Factor. 
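Joe Walnes's switch from SOAP to plain HTTP, quoted above, is easy to picture in code. The sketch below is an editorial illustration rather than anything from the quoted project: the URL and resource are hypothetical, and it simply issues a conditional GET so that the web server, the client library, and any proxy in between can apply the caching that HTTP already defines.

    import java.io.IOException;
    import java.io.InputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class RestFetch {
        public static void main(String[] args) throws IOException {
            // Hypothetical resource URL; no SOAP envelope, no generated stubs,
            // just the verbs and caching semantics HTTP already provides.
            URL url = new URL("http://example.com/news/headlines/latest");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET");
            // A conditional request: proxies and servers may answer
            // 304 Not Modified without resending the body.
            conn.setRequestProperty("If-Modified-Since",
                    "Wed, 01 Dec 2004 00:00:00 GMT");

            int status = conn.getResponseCode();
            if (status == HttpURLConnection.HTTP_NOT_MODIFIED) {
                System.out.println("Cached copy is still good.");
            } else if (status == HttpURLConnection.HTTP_OK) {
                try (InputStream in = conn.getInputStream()) {
                    String body = new String(in.readAllBytes(), StandardCharsets.UTF_8);
                    System.out.println(body);
                }
            } else {
                System.out.println("Unexpected status: " + status);
            }
            conn.disconnect();
        }
    }

There is no envelope to build or parse; the status code and the body are the whole exchange, which is the reduced coupling and smaller payload the quote describes.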
Accepting this will make your life a lot less frustrating. --Roger Johansson Read the rest in Developing With Web Standards | 456 Berea Street The? Web Services emerged at a time when some of us actually believed that XML was a uniform solution to disparate problems, and there was a long time when XML and Web Services were treated as synonyms. Maybe what's happening now is the result of recognizing that a large number of programmers and users aren't actually enterprise developers - we have no more need of WS-Transfer than we have of an S/390 running a dedicated message queue system. For the most part, Web Services and the WS-* set of specifications address problems many people just plain don't have. --Simon St. Laurent Read the rest in Are Web Services receding? Whether a blog leans left, right or sideways, as a collective force they are working to keep reporters honest. Journalists may not like their methods -- having your work sliced and diced in public is no fun -- but the end result may be better-quality news. --Adam L. Penenberg Read the rest in Wired News: Blogging the Story Alive Metaphorically, a statement is like a molecule in which the predicate is the chemical bond between two atoms. The only structures in RDF are statements, and each statement associates exactly one subject with exactly one object. More complex structures, like topic map associations, must be built up one statement at a time. --Thomas B. Passin on the XML Developers mailing list, Friday, 04 Jun 2004 The debate over who and isn’t a journalist is worth having, although we don’t have time for it now. You can read a good account of the latest round in that debate in the September 26th.” --Bill Moyers Read the rest in Society of Professional Journalists - SPJ National Convention 93.7% still seems like a really daunting market share for IE. But turn it around: that's more than one out of every 20 web users (also known as "potential customers" to commercial websites). Just three months ago it was slightly less than one in 20; today it's trending toward 1 in 10. That's significant. Many companies write web applications that support only IE. Although I've never agreed with that strategy, I can see how some are convinced that it's a reasonable one. But I suspect the problems with an IE-only approach will quickly become clearer. --Glenn Vanderburg Read the rest in You Can't Ignore Firefox unique visitors are an irrelevant statistic. Most such visitors are sampling a single page to get an answer, rather than engaging with your site. Instead of tracking them, count loyal users as a key metric for site success. --Jakob Nielsen Read the rest in When Search Engines Become Answer Engines (Jakob Nielsen's Alertbox) The problem of linking things together on the Web takes an almost vertical ascent into complexity which layer of abstraction piling on layer of abstraction very quickly. All you need to do is move slightly beyond the "link this to this" model of HTML and you are in deeply complex philosophical territory. If you doubt that this is the case, I would suggest you take a look at the HyTime standard. The sheer size and complexity of it amply demonstrates the enormity of problem hidden behind the simple term "linking". --Sean Mcgrath Read the rest in ITworld.com - XML IN PRACTICE - XLink: A Hyperspace Oddity If you're an Internet Explorer user, you owe it to yourself to download FireFox and see how a real browser works. 
--Preston Gralla Read the rest in The World's Best Browser Just Got Better Because people's appetites for esoteric sports statistics are so insatiable, the data reports that get exchanged and formatted for display are often incredibly intricate. For our industry, the benefits of XML are clear: consistent input no matter what the provider, what the sport, what the native language." --Alan Karben, chairman of the SportsML Working Group Read the rest in XML: Too much of a good thing? | CNET News.com. --Tim Bray Read the rest in ongoing · Web Services Theory and Practice XML specification was first published, it's spawned hundreds of dialects, or schemas, benefiting everyone from butchers to bulldozer operators wishing to easily exchange information electronically. --David Becker Read the rest in XML: Too much of a good thing? | CNET News.com The DOM very rarely makes sense, especially when it comes to namespaces. If you want to retain your sanity, avoid it. --Michael Kay on the xml-dev mailing list, Friday, 3 Sep 2004 HTML will never die. Gencoding never does. There should always be that easy to learn, easy to apply vocabulary that gets jobs done fast. --Bullard, Claude L (Len) on the xml-dev mailing list, Friday, 30 Apr 2004 As many of you may know, the sites hosted by ibiblio are not accessable from the People's Republic of China. This is due to ibiblio hosting Tibet related sites, censored by the chinese government. What you might not know is that that kind of censorship wouldn't be possible without the cooperation (complicity?) of some large U.S. corporations: Cisco Systems, Google Inc., Yahoo Inc., Microsoft Corp., Sun Microsystems, inter alia. --Paola E. Raffetta on the webgroup mailing list, Tuesday, 7 Sep 2004. --Joel Spolsky Read the rest in Joel on Software - It's Not Just Usability I should note, to be fair, that XML-on-the-Web idea fizzled just as fast as 3D-on-the-Web did. All of the supposed client-side HTML killers[*] in the late 1990's either died quickly (VRML, XML, ActiveX controls), are on life support (Java applets), or have found niches and learned to cohabitate peacefully (Flash, JavaScript, PDF). --David Megginson on the XML Developers mailing list, Friday, 30 Apr 2004. --Prince Read the rest in Wired 12.09: PLAY In the long run, I think many people will be using XQuery and XML. Sometimes the storage will be basically relational with XML extensions, sometimes it will be basically XML with extensions to optimize typical XQuery operations. But the queries will look rather similar in either case, and vendors of relational databases and native XML databases will be working hard on solving many of the same problems. For now, XML databases do seem to be a niche market. People are very conservative when it comes to changing their database technology... --Jonathan Robie on the xml-dev mailing list, Wed, 25 Aug 2004 Avery Read the rest in The 7 Fallacies of XML Validation XHTML is a stopgap, but the gap needs stopping -- alternatives such as XML+XSLT aren't universally supported (e.g. Mark Pilgrim's Atom feed looks terrible in Opera and Safari). Likewise, XML+XSLT has the problem that if you have M XML source formats and N output formats (e.g. for different browsers, different devices, different classes of users ...) then you have M x N stylesheets to maintain. With XHTML as the single source format, you have N stylesheets to maintain. 
Sure, that has the cost of the kludges you must do to force-fit information into XHTML, but it's not at all clear to me that the cost of this outweighs the practical benefit. And of course you can use some better XML format as the single source format, but then you have to design it, deploy it, change it, manage the versions ....and deal with the snotty users who don't believe you've added enough value over (X)HTML to justify all these costs. I'm no fan of XHTML or the W3C process that has produced it, but I'm beginning to think that it's like democracy -- the worst of all possible approaches, except for the known alternatives. --Michael Champion on the XML Developers mailing list, Wed, 14 Jul 2004. --Marc Hedlund Read the rest in O'Reilly Network: Microsoft Word and "Smarter Than" It took far too long for the JCP to acknowledge that those in trenches knew EJB and other J2EE doodads had issues in a way that no amount of visual tooling and enterprise patterns could paper over. Huge pressure had to bubble up from the developer community in the form of OpenSymphony, Spring, Pico, Hibernate and bestselling books like Bitter Java and Bitter EJB. This lack of feedback seems to result in disdain for J2EE as a development platform and produces one reactionary OS project after another addressing the same issues (the web frameworks situation is so bad it's getting its own section). Some of these projects are valuable, but many result in buyer's regret and legacy issues if the project dies as the leads go off to do something else cool without leaving a community behind them (Hani Suleiman deserves immense credit for highlighting this problem). There are no winners here. Today, a significant issue is how the J2EE fits with integration styles where the Web, documents, messaging, interop-uber-alles and most importantly, tight budgets, dominate the landscape. --Bill de hÓra Read the rest in Bill de hÓra: Java unhinged. --Joshua Marinacci Read the rest in java.net: My 1 year anniversary at Java.net: the social side of software. [August 21, 2004] Anyone who has doubts about the intrinsic crunchy goodness of URIs is liable to have an aneurysm during any serious encounter with RDF. --Simon St.Laurent on the xml-dev mailing list, Sunday, 17 Nov 2002 I predict that within ten". --Larry Wall Read the rest in Perl.com: The State of the Onion real business-level validation checks a lot more than syntax. It's not enough that a name and address field are filled in, that usually has to unambiguously match a known customer. It's not enough that a date is in the format specified by the schema, it has to be in an appropriate timeframe (usually the recent past or the recent future). Given that you have to validate all that stuff anyway, and that you have tools such as XPath to extract the needed information from a more general context rather than a rigid syntax, lots of people find that the exercise of defining, agreeing to, and validating against a syntax-level schema doesn't add enough benefit to justify the cost. --Michael Champion on the XML Developer mailing list, Tuesday, 8 Jun 2004 REST is an architectural style -- a way of organizing a system into components and governing the interaction between those components such that the resulting system remains stable while accomplishing the desired tasks. 
The reason HTTP is involved in REST is because I had to shrink and redesign HTTP/1.0 to match those features that were actually interoperable in 1994, which turned out to be the core of the REST model (it was called the HTTP object model at the time) and that was carried forward into designing the extensions for HTTP/1.1. Thus, the two are only intertwined to the extent that REST is based on the parts of HTTP that worked best. There is absolutely no reason that a new protocol could not be a better match for REST than HTTP. Older protocols, however, typically do not supply enough metadata or require too many network interactions. --Roy T. Fielding Read the rest in Adam Bosworth's Weblog: Learning to REST. --Martin Hardee Read the rest in Sun.com Usability & Useful Stuff W3C still maintains a distinction between HTML and XHTML, and still offers both specifications.on its site. HTML is not deprecated. --Doug Ewell on the Unicode mailing list, Sunday, 15 Aug 2004. --Sean Mcgrath Read the rest in Sean McGrath, CTO, Propylon. --Justin Gehtland Read the rest in ONJava.com: Better, Faster, Lighter Programming in .NET and Java It is little wonder that MS has paid so much attention to ensuring that Direct X is at the cutting edge of gaming graphics technology so that game developers use it in the creation of their latest masterpiece. This has had the very neat effect of making those games run well on Windows and ensuring they don't run at all on their competitor's OS's. It is much harder for a game developer to shift a game created using Direct X over to the Apple or GNU/Linux OS's than it is if the game is OpenGL based. This is one reason why id Software have always produced Linux versions of their games alongside the Windows version as they use OpenGL. Unfortunately, they are very much the exception and are likely to remain so unless those associated with competing OS's take action to redress the situation. Until they do so, Microsoft will continue to have a major competitive advantage over Apple and GNU/Linux. --Ian Mckenzie Read the rest in Why Games Matter - OSNews.com HTML is not XML (unless you are very lucky!) --Daniel Joshua on the xsl-list mailing list, Thursday, 20 May 2004. --Tim Bray Read the rest in ongoing Characters vs. Bytes. --Uche Ogbuji Read the rest in XML.com: Decomposition, Process, Recomposition Joel calls for a richer set of controls and events. Those who know a bit about Mozilla will immediately start thinking about XUL and XBL, and Microsoft's equivalent (XAML) is also relevant here. Much of this stuff is already doable in Javascript, but XML languages are better for a reason fundamental to the web: They lower the barrier to processing. It is an order of magnitude easier to decipher what a document is specifying than a program: the only way for a machine to really understand a program is to execute it. Unlike Javascript or any other Turing-complete language, XML doesn't suffer from the halting problem. Lowering the barrier is vital so that a wider range of lesser-powered web clients can understand your content, whether those web clients are mini-browsers running on embedded devices or ten-line scraping scripts. Furthermore, explicit unambiguous markup means that the client then has more freedom in rendering the document in the way it sees fit, and this freedom is vital to true web accessibility. 
If the speech browser for blind users knows that what it's trying to render is not just a collection of layers with links in them but a standard menu then it can render it in a much more usable way. --Yoz Grahame Read the rest in Yoz Grahame's Cheerleader: What I Want For WHAT. --Larry Wall Read the rest in Apache News Blog Online—first of all, they all use something different, and, second, all of them say of whatever. --Jakob Nielsen Read the rest in Time for a Redesign: Dr. Jakob Nielsen It's important to distinguish the proposition that the information is not in the form I'm used to seeing it in and the information is not there. --C. M. Sperberg-McQueen at Extreme Markup Languages, 2004, Wednesday, August The real reason I like Extreme is that it's a gathering of people who share a common interest. I think of it as manipulation of tagged content, but it's not that simple. It's a gathering of people who are secure enough in their knowledge and their interests they can talk about unsolved and perhaps unsolvable problems without causing a panic. Extreme is an XML conference at which we can talk about what we cannot do. We can do that without frightening away either potential users of XML or, more likely to be frightened, the marketers who are trying to sell software to those potential new customers. Just try talking about what's broken in XML at one of the big XML conferences. You won't be very popular. Extreme is a gathering of people who are eager or, at least willing, to listen to XML heresy, to people telling us that what we've been doing all along is silly or, more likely, that what we've believed and are comfortable with is wrong. We talk about our projects, specifications, and standards we love, hate, use, ignore admire, disdain; and the logic or philosophy behind our approaches to the definition, creation and manipulation of marked up documents. --B. Tommie Usdin Extreme Markup Languages Keynote, Tuesday, August 3, 2004. As soon as your company starts using Outlook, you can see emergent, horrible, almost biological things start to happen. So by using Outlook, you're not practicing safe e-mail. We need a "condomized" version of it. --Bill Joy Read the rest in Fortune.com - Technology - Joy After Sun Nothing in the Namespaces Rec defines, describes, provides, specifies, suggests, entails, depends on, or constitutes a mechanism for defining globally unique names. The Namespaces Rec makes it possible to avoid one way in which names assigned in isolation might fail to be globally unique, but it neither requires that namespace owners ensure that local names are unique within a namespace, nor mentions that as a necessary or convenient step towards having globally unique names. You may have been misled by the rhetoric in the introduction to the first edition of the Namespaces Rec, but that introduction did not provide an accurate characterization of the technical content of the document. --C. M. Sperberg-McQueen on the www-tag mailing list, 11 May 2004. --Antoine Quint Read the rest in O'Reilly Network: A matter of trust (or lack thereof) Google is much more dangerous to Microsoft than Netscape was. Probably more dangerous than any other company has ever been. Not least because they're determined to fight. On their job listing page, they say that one of their "core values'' is "Don't be evil.'' In a company selling soybean oil or mining equipment, such a statement would merely be eccentric. But I think all of us in the computer world recognize who that is a declaration of war on. 
--Paul Graham Read the rest in Great Hackers The dependency injection rule (paraphrased "Don't let high-level classes depend on low-level details") is almost never followed by great programmers (like, e.g., James Clark, Kohsuke Kawaguchi, Michael Kay (don't feel left out if you aren't on this highly personal list of programmers whose work I have examined in detail)) presumably because they, like the rest of us, require a) actual proof that their solutions work, b) don't tolerate well overheads introduced by indirection and c) (maybe, just a thought) work in a culture where dependency injection is not the norm or highly valued. --Bob Foster on the xml-dev mailing list, Thursday, 08 Apr 2004 It is likely that mathematical proofs are the mote in the eye of the semantic web community. There is a tendency to run to math and logic when faced with uncertainty as in a story where one holds up a cross or runs to holy ground when faced with a vampire (the unknown). Logic and math, though useful, have their limits and absolutes are rare. Over time, some AI researchers such as Richard Ballard and for comparison, John Sowa point out that knowledge is not merely good logic and math. It is a theory making behavior, a sense-making behavior, more like traditional scientific method than pure mathematical modeling. --Claude L (Len) Bullard on the xml-dev mailing list, Tuesday, 15 Jun 2004. --Paul Boutin Read the rest in So Tired - Where Web surfers go when they haven't slept a wink. By Paul Boutin I have long been an advocate of technologies -- from XML through the Semantic Web -- that would make it easier to search and process information by more clearly expressing its structure and context. The problem is that creating a critical mass of such material would require a tremendous evolution in tools and discipline -- certainly an ambitious vision. Google realizes a respectable cross-section of the promise of the XML Web generation by merely finding creative ways of harnessing the mountain of legacy from the original Web. --Uche Ogbuji Read the rest in Perspective on XML: Steady steps spell success with Google- ADTmag.com? --Dan Bricklin Read the rest in Software That Lasts 200 Years XML is one of the few formats out there that can handle multiple encodings and unicode decently, and much of this is due to the xml declaration. --Thomas B. Passin on the xml-dev mailing list, Wed, 21 Jul 2004 I like WikiML and the whole notion of reduced, learnable, plain-text markup conventions, and I'll take it as a sign of real progress when one emerges with a design compelling enough, and a processing model robust enough (it'll have to go beyond "check correctness by eyeballing output"), to unseat the currently-dominant paradigm. Anything not as dead-simple as <tag>this</tag> is going to be a pain to learn, teach, maintain. --Wendell Piez on the xsl-list mailing list, Thursday, 08 Jul 2004 XML comments are IMHO just to comment on the XML they are in, not for any outside use, say the processing of the xml or whatever use the XML has. --Christof Hoeke on the xsl-list mailing list, Thursday, 8 Jul 2004 Periodically, I am getting emails from people or institutions (including the USA government national endowment for the arts and humanities agency!) asking me to sign and return incomprehensible legalese documents so that they can re-use the WebMuseum documents. Sorry, but... I have invested thousands of hours into building this collection, and I make my work available for free over the Internet already. What else do you need ? 
Why would I sign incomprehensible legalese documents, that not only do not provide me any benefit, but instead could actually backfire when I least expect it ? Also, these documents are usually subject to some foreign law and jurisdiction, such as USA laws. Being neither a USA citizen nor resident, I have absolutely no reason to submit to such a foreign jurisdiction. In other words, the only thing you will ever get is the WebMuseum online Copyright and License Agreement. Any email request asking me to sign any legal document will be silently ignored. --Nicolas Pioch Read the rest in WebMuseum: How to contribute. --Dare Obasanjo on the xml-dev mailing list, Wed, 28 Apr 2004 As has been said many times; one persons metadata is another persons data. Treating types as anything other than data is wrong, wrong, wrong! Types are just an attribute that someone can attach to something and treating anything as though it has a single type only restricts future extensibility. Any XML schema mechanism that is going to be truly useful has to allow for elements to behave polymorphically with respect to type depending on the context in which the element is evaluated. --Peter Hunsberger on the XML Developers mailing list, Thursday, 8 Jul 2004. --Robert X. Cringley Read the rest in PBS | I, Cringely . Archived Column Browser support for XHTML is pretty bad; it's more faked than real. HTML works great; no reason to throw it out. --Joshua Allen, Microsoft, on the xml-dev mailing list, Monday, 12 Jul 2004. --Tim Bray Read the rest in Taking XML's measure |CNET.com If you are starting from a DOM, performance will almost certainly be better if you use DOMSource. If you are starting from XML text, or from a SAX stream, performance will almost certainly be better if you use SAXSource. (When doing the comparison, remember to allow for the time spent building the DOM, which has overhead similar to that of building our internal model directly from SAX.) --Joseph Kesselman on the xalan-j-users mailing list, Thursday, 8 Apr 2004. To me the "small, incremental improvements to the ugly web" approach will make FireFox/Safari/etc be to IE6 what XHTML 1.x is to HTML. Neat increment, mostly a flop. --Robin Berjon Read the rest in Brendan's Roadmap Updates: The non-world non-wide non-web Applications last for ten years. People who have been battling with the limitations of version 1.0 of one language for five years don't want to be told to switch to version 1.0 of another language with similar limitations. XQuery is full of deliberate restrictions in functionality, which make it highly suited as a database query language, and very difficult to use as a general-purpose XML transformation language. Handling narrative XML (anything without a rigid schema) in XQuery is really hard work. --Michael Kay Read the rest in XSLT 2.0 Sir?. --David Carlisle on the xsl-list mailing list, Thursday, 8 Jul 2004 *The* big initial driver of OWL was DARPA, as it is well known that OWL was incarnated as the "DARPA Agent Markup Language" and it is well known that there is alot of interest in certain well-funded government circles regarding threat classification, pattern recognition. For example suppose we have gobs of random information, and from these gobs we have a way to correlate the information into individual collections, for example, suppose we can correlate a whole host of phone conversations as involving individuals. Now suppose we have lots of other types of information that involve other (as yet unnamed) individuals. 
What we want are a set of inferencing operations that will allow us to *equate* individuals identified by sets of phone conversations with individuals identified by financial transactions. Get the picture. --Jonathan Borden on the xml-dev mailing list, Saturday, 12 Jun 2004 My second epiphany about this stuff came more recently -- it became brutally clear that internet, XML and web services technologies had done a lot to remove the mechanical barriers to data interchange, so exchanging well-understood document, data records, and service invocations across platforms is no longer the painfully labor intensive proposition it was even a decade ago. Now that the plumbing is in place, however, it is clear that the barriers to effective communication lie more in what the data *means* than in what format it is in or what protocol will be used to exchange it. One might hope that industry-wide working groups will sort out the differences for each vertical.Wwheeooooffff [sound of dope smoke being inhaled ;-) ] One might hope that people will value interoperability more than inertia and adopt something like UBL [Kumbaya .... Kumbaya]. One might anticipate that some Omnipotent Entity such as the US government, WalMart, or Microsoft will just enforce uniformity [could happen, but the proles tend to resist such attempts by Big Brother]. One might much more plausibly believe, IMHO, that a) individual organizations can formalize what *they* mean by various terms, namespaces, etc. by reference to concrete documentation that describes them or software components/database fields that implement them; and b) that these private ontologies could be shared and mapped-between by those needing to exchange data across organizational boundaries. Maybe someday those will evolve into shared ontologies such as SNOMED, we shall see, but we don't need to believe in such things to use OWL, etc. to formalize and manipulate the private taxonomies/ontologies that are in actual use. --Michael Champion on the XML Developers mailing list, Saturday, 12 Jun 2004. --Jakob Nielsen Read the rest in Time for a Redesign: Dr. Jakob Nielsen QNames are kind of the result of a collision between a URI truck and the Name compact car where we end up driving the truck from the driver's seat of the car. --Simon St.Laurent on the xml-dev mailing list, Sunday, 19 Jan 2003 The difference from procedural formats, is that with XML there is no need to change the format to enable new features in processing software, as long as those features do not require new information. In many cases, new information isn't needed, provided that the original markup is reasonably good. (At Ericsson, switching to SGML meant changing formats about once a decade, as opposed to switching every year, as they did before. This meant considerable savings, because they did not have to update their software as often. In effect, their software was much more robust.) As a result, XML formats can be more stable than procedural formats, and therein lies a major advantage of XML. --Henrik Martensson on the xml-dev mailing list, Tuesday, 08 Jun 2004 I think it's actually easier to get xhtml right first time, precisely *because* it is necessarily syntactically clean and strict. You are never left wondering if it matters if you leave a bit of syntax out, or if you should include one, because it always matters: the rules are clear. Closing your tags, quoting your attributes etc. are very good habits to get into from the start. 
--Anton Prowse on the XHTML-L mailing list, Sunday, 20 Jun 2004. CERT's recommendation is just a reflection of the trend we have seen for quite some time. --Chris Hofmann, Mozilla Foundation Read the rest in Wired News: Mozilla Feeds on Rival's Woes. --Jim Waldo Read the rest in Jini Network Technology Fulfilling its Promise Google is to "the semantic web" as Cleveland is to Xanadu. --Bob Foster on the xml-dev mailing list, Thursday, 03 Jun 2004 Google is to "the semantic web" as CompuServe was to "the web". --Joshua Allen on the xml-dev mailing list, Wed, 2 Jun 2004 The "xml" format means nothing: xml is a way of defining a language, not a language in itself. It is an abuse of language - though so many people seem so happy to believe it - to say that, as long as it is xml, it is portable and whatever else you like that ends in --able. --Herve AGNOUX on the xml-tech mailing list, Monday, 2 Feb 2004 We're missing an important business model on the network that's inbetween being free and paying $30 per month for a subscription, a valuable market indeed. It's like executing pocket change transactions on the network. There have been obstacles to doing it, but on a basic business model level, the major obstacle has been that it costs much more than a quarter to settle a quarter. But now, due to innovations in microcommerce it's possible to charge 25 cents for a transaction, without prohibitive settling costs. For example, it might now cost a nickel to collect 25 cents. So, a whole lot of possibilities present themselves, like premium searches or better driving directions. --Greg Papadopoulos, Chief Technology Officer, Sun Microsystems Read the rest in Grid Computing: A Conversation with Sun Microsystems' Chief Technology Officer, Greg Papadopoulos IMHO learning html first and then xhtml afterwards is like being taught to drive by one's dad and, once you think you're ready for your test, only then getting professional lessons to correct any bad habits. Yes, it can work, and no, it does not necessarily mean that you will have even developed any bad habits (html *can* be written just as strictly and cleanly as xhtml), but what it does do is provide needless potential for developing such: you are going to be reading code on the web written by all kinds of people, from experts to amateurs, and it's good to already understand the stricter xhtml rules so that when learning from these people you are in a position to identify when and where their code fails to be xhtml. --Anton Prowse on the XHTML-L mailing list, Sunday, 20 Jun 2004 There has been over enthusiasm surrounding XML. It has been hyped and everyone thinks it will infiltrate everything, but if it infiltrates everything, then we've got problems everywhere. --Craig S. Mullins, director of technology planning at BMC Software Read the rest in Date defends relational model Web services exist because many people were under the impression that the Web couldn't be used to solve machine-to-machine integration problems. They were mistaken. Where that leaves us now is essentially with two competing architectural styles. --Mark Baker on the xml-dev mailing list, Tuesday, 15 Jun 2004 I don't do much schema-work in general; here at Antarctica we use DTDs for all our interchange & config files (and somebody else does that). But someone said they needed a schema for the son-of-RSS work, and I volunteered. I used the compact syntax, which is just remarkably easy to read and write.
So, having now done one serious (albeit small) project with RelaxNG, I really REALLY wonder why anyone would use anything else? In particular because, if you use the Trang tool, you get an XML Schema for free. Of course the XML Schema can't cover a whole bunch of cases that are easy with RNG, and is much harder to read and understand, but hey, if that's what you want, you got it. --Tim Bray on the XML Dev mailing list, Wed, 09 Jul 2003 We now have 80 % of data living in the messy horror world of proprietary file formats and ad hoc structures inside Excel sheets and the like. If those 80 % are taken over by XML, that's a big step forward. --Alexander Jerusalem on the XML Developers mailing list I'm not sure why one would bother with XML at all in a situation where horrible things happen when uncontrolled evolution occurs -- XML can be made to work in tightly coupled systems, but I don't see what advantage it has over proprietary object or database interchange formats if you want things to die quickly and cleanly when closely shared assumptions are violated. I can think of some, such as the classic SGML use case of maintenance manuals that must work across a wide variety of systems but must also conform to precise structural specifications. Nevertheless, the "I've got 50 customers who want to send me orders in conceptually similar but syntactically diverse formats" use case is a lot more typical IMHO. The typical options are between using a technology that can gracefully accommodate diversity and change (and paying the price of occasional breakage), and having humans transcribe information from diverse input formats into an internal standard (and paying a much higher price for every transaction ... and you still have to pay the price for human error!). Anyone who can avoid the dilemma by requiring the customers to send orders in a rigidly defined format probably doesn't need XML in the first place. --Michael Champion on the XML Developer List mailing list, Tuesday, 8 Jun 2004 Microsoft, like almost all monopolies, has become fat and lazy. Monopolies do not engage in innovation with the same urgency because they don’t have to innovate to stay in business. --Robert Lande, University of Baltimore Read the rest in Seattle Weekly: News: Microsoft's Sacred Cash Cow by Jeff Reifman. --Joel Spolsky Read the rest in Joel on Software - How Microsoft Lost the API War the decades of work on SGML had received little attention until XML came out. Was XML at all new? No. One of the huge improvements, at least to me, with XML was the specification by EBNF productions. Perhaps that is a technical change, but perhaps that is part of why it has such a huge adoption -- writers of parsers can be very precise about what is legal and what isn't. --Jonathan Borden on the xml-dev mailing list, Saturday, 12 Jun 2004
- What does my software look like to a non-technical user who has never seen it before?
- Is there any screen in my GUI that is a dead end, without giving guidance further into the system?
- The requirement that end-users read documentation is a sign of UI design failure. Is my UI design a failure?
- For technical tasks that do require documentation, do they fail to mention critical defaults?
- Does my project welcome and respond to usability feedback from non-expert users?
- And, most importantly of all...do I allow my users the precious luxury of ignorance?
--Eric S. Raymond Read the rest in The Luxury of Ignorance: An Open-Source Horror Story The <object> element is under-specified, and in general is only very poorly implemented. As a result, all sorts of things work less well when you use it, unlike <embed>, which is an old leftover from the days when Netscape was clumsily hacking browsers about, but which tends to work correctly, if only because of its simplicity. --Robin Berjon on the xml-tech mailing list, Friday, 06 Feb 2004 the main threat to XSLT2 is not XQuery, it is XSLT1. Currently saxon7 is the only visible implementation, although when I last speculated how XSLT2 would get out of CR status you implied that you had hopes of further implementations appearing, which would be good. But it remains to be seen how many other scenarios (aside from client side browser transforms) stay at XSLT1. For as long as that remains a significant minority, it will always be easier to achieve cross platform portability by writing in XSLT1 than 2. --David Carlisle on the xsl-list mailing list, Thursday, 13 May 2004 CSV has been around for ages, and the way I've always written CSV parsers is to take the first line as a line of column headings, and use those to select what is done with each column's fields. And my software ignores fields it doesn't know a use for, and assumes that missing fields it expects have a NULL value, which may or may not cause higher-level code to reject the row. Also, ASN.1 has an extension mechanism, where people using different variants of a 'schema' can still communicate; the decoder may inform the application that it had to discard some data it didn't understand, but still provides the fields that the decoder knows. ;-) --Alaric B Snell on the XML Developer mailing list, Tuesday, 08 Jun 2004. --Steven Garrity Read the rest in The Rise of Interface Elegance in Open Source Software. --Norm Walsh Read the rest in On Atom and Postel’s Law Metadata, like semantics, is in the eye of the beholder. One person's data is another's metadata. --Dare Obasanjo on the xml-dev mailing list, Tuesday, 8 Jun 2004 the people writing the validation rules should always write them to allow maximum flexibility, in the recognition that the system designers aren't omniscient. Validation rules, for example, should never force users to tell lies in order to get past validation (like the web sites, fortunately now rare, that require me to enter a fax number - someone somewhere is getting some strange faxes by now). --Michael Kay on the xml-dev mailing list, Sunday, 6 Jun 2004 The long-term strategic threat to the entertainment industry is that people will get in the habit of creating and making as much as watching and listening, and all of a sudden the label applied to people at leisure, 50 years in the making — consumer — could wither away. But it would be a shame if Hollywood just said no. It could very possibly be in the interest of publishers to see a market in providing raw material along with finished product. --Jonathan Zittrain, Berkman Center for Internet and Society Read the rest in The New York Times > Movies > Hijacking Harry Potter, Quidditch Broom and All IE sucks so very much. But, at least IE is a stable platform to react against. You don't have to worry about dead end products pulling the rug out from under you. --Ian Bicking Read the rest in Ian Bicking 28.5.2004 Always do a tag-share analysis before writing an XML up/down/cross-translate in XSLT or DOM/SAX or whatever.
A remarkably small number of element types make up the bulk of the markup - regardless of the size of the schema. --Sean Mcgrath Read the rest in XML tag share analysis and power law distributions. --Bob DuCharme Read the rest in OpenP2P.com: Wanted: Cheap Metadata [May. 24, 2004] for things like SAX pipelines, deep data structures allow less concurrency; in a pipeline, you can only start on a element when you know the previous step has finished with it, deep hierarchies obviously slow that event down. When we need a hierarchy we have a metadata model that maps the hierarchy, the data that gets mapped to this structure is flat. --Peter Hunsberger on the xml-dev mailing list, Monday, 17 May 2004 URLs are an essential part of the Web as we know it. URIs are a parasitic outgrowth on that technology which claims to be an improvement but mostly just adds infinite layers of ambiguity. --Simon St.Laurent on the xml-dev mailing list, Monday, 12 Aug 2002 Using a hierarchy as a general mechanism for representing relationships adds to complexity, but using a relational model to model certain forms of hierarchical structure also brings its problems. In your example the objects that ended up as elements under the root element were all independent objects, their identities were separate. Nesting elements comes into its own where there is a strong aggregation relationship between an element and its constituent elements - the identity of the constituent elements being dependent on the identity of their parent element in an invariant fashion. --Chris Angus on the xml-dev mailing list, Monday, 17 May 2004.” Since all the elements of a given type need not have the same structure, information that is unknown or inapplicable can simply not appear. --Don Chamberlin Read the rest in XQuery from the Experts: Influences on the design of XQuery - WebReference.com- good data modeling is as important as it ever was, xml or no. --Thomas B. Passin on the xml-dev mailing list, Monday, 17 May 2004 XML was invented to solve the problem of data interchange, but having solved that, they now want to take over the world. With XML, it's like we forget what we are supposed to be doing, and focus instead on how to do it. --Chris Date Read the rest in Date defends relational model Jim Hendler told me that 4.9 years ago I asked him never to call the semantic Web AI because of this problem.. AI in fact is not artificial intelligence. The AI folks have got lots of code and techniques and useful languages that they've used in their search for artifical intelligence but we're not going to turn, nobody doing semantic web is holding their breath for something in strong artificial intelligence. --Tim Berners-Lee at WWW2004, Friday, 21 May 2004 It's our practical observation that a loosely-coupled document based web-service ought only apply constraints to enforce tight/tighter coupling as necessary (typically production) but when possible the constraints are best kept loose - this is particularly the case during development, when data model and process may be in flux. In other words the development philosophy is - create the service/process and constrain after the fact as necessary. It seems like XSD and WSDL approach the world with the opposite philosophy mandating that a schema be developed before the service. --Peter Rodgers on the xml-dev mailing list, Wed, 19 May 2004 I always present drunk. 
--Dean Jackson Mixed Markup, WWW 2004 XSLT was originally just a small part of a tool to transform XML into print, as is the 'transform' part of DSSSL. People looked at it, found other uses for it, and the pace has never slowed since. I think it took the original developers by surprise, and I doubt if we realise the extent to which its being used. --David Pawson on the xsl-list mailing list, Friday, 14 May 2004 Actually now that I've written a few XSLT2 stylesheets I am coming to quite like it. The last one I wrote could not sensibly have been written in XSLT1, perhaps with some effort in xslt1+xx:node-set() but definitely far more natural in xslt2. (string handling, regexp and xslt-defined xpath functions). The schema support is an irritation (as I posted on the offical comment list I had to a) declare the schema namespace and b) use explict casting to xs:integer all over the place) but hopefully some of that irritation can be avoided by minor tweaks to the casting rules before XSLT2 is finalised. I suspect that the schema typing would be far more annoying and intrusive in a schema-aware XSLT processor, but it's hard to make any real comments on that given the lack of implementations to try. XPath2 is of course a complete mess compared to Xpath 1 but the mess seems worse (much worse) when reading the spec, and that will annoy rather few people, in practice as used in a non-schema aware XSLT2 engine it's not as bad as it seems (or at least, the bad bits don't intrude as often as you might expect), which is why I'm far more relaxed about XSLT2/XPath2 than I was when I first saw the specs. --David Carlisle on the xsl-list mailing list, Thursday, 13 May 2004 The trend for such systems is to build in generic, default behavior (for collation or for other aspects of localizable information), to support a number of high visibility and high demand particular behaviors "out of the box" and then to open the systems to end-user customization of particular combinations of behavior. The IT industry is, of course, a long way away from perfection here, in part because the entire field of internationalization of software is considered bizarre geekiness even among your run of the mill programming geeks. But the globalization of information technology is inevitable, in my opinion, and as that globalization proceeds, the inevitable tension between central control and end user demand will play itself out in ways that make the technology eventually more flexible and adaptive. --Kenneth Whistler on the unicode mailing list, Friday, 14 May 2004 There are now, and always have been, excellent reasons why people create specific languages or data syntaxes to address specific purposes. (The "little languages" paradigm comes to mind.) However, while purpose built languages syntaxes can provide tremendous benefits in the domain for which they are intended, the darn things have a nasty habit of "leaking" out of their domains with unfortunate frequency. They also have a nasty tendency to slowly accumulate more and more features as their scope and domain of usage grows. Languages, syntaxes and even "applications" share a common tendency to grow towards some ideal "super-state" which encompasses all uses and all domains. This isn't bad in itself. The problem comes when these beasts meet in the night and compete with each other. We end up with a Babelization when, in theory, language should be a matter of choice -- not a bar to interoperability. 
--Bob Wyman on the xml-dev mailing list, Wed, 21 Apr 2004 XML is good at providing a facade of openness to things which really aren't open. --Simon St.Laurent on the xml-dev mailing list, Wed, 29 Oct 2003 in the development labs where I work, we have found IE6 to have significant bugs and inadequacies in its implementations of CSS 1 and CSS 2 which make the browser nearly useless for some of our new intranet applications. As a result, we use Mozilla which seems to have the best support for the W3C CSS 1, CSS 2, and DOM standards. We have found that Opera and Konqueror/Safari (Safari uses Konqueror's KHTML renderer with additional tweaks by Apple) come in 2nd and 3rd place, respectively, with respect to supporting the features of these standards that we are most interested in. IE always seems to come in last place on our tests. --Edward H. Trager on the unicode mailing list, Friday, 7 May 2004. --Dr. Stuart Feldman Read the rest in Wired News: The Unfolding Saga of the Web When I was a student, the men's lavatories in the computer lab had a row of five urinals. Four were identical; the fifth was different and carried the manufacturer's mark "Ideal Standard". I don't know if they taught the same lesson to the female students. --Michael Kay on the xml-dev mailing list, Wed, 28 Apr 2004 RFC 2396 should earn its place in history as an example of how not to write a specification mere mortals can understand. --Bob Foster on the xml-dev mailing list, Tuesday, 06 Apr 2004 It is now crystal-clear that allowing qnames to escape from element & attribute names into content was a terrible mistake that we're now stuck with forever. --Tim Bray on the xml-dev mailing list, Friday, 19 Dec 2003. --Claude L (Len) Bullard, on the xml-dev mailing list, Monday, 19 Apr 2004 I set up a test for a customer a while back to see how fast Expat could parse documents. On my 900 MHz Dell notebook, with 256MB RAM and Gnome, Mozilla, and XEmacs competing for memory and CPU, Expat could parse about 3,000 1K XML documents per second (if memory does not fail me). If I had tried to, say, build DOM trees from that, I expect that the number would have fallen into the double digits (in C++) or worse. In this case, obviously, there would be far more to be gained from optimizing the code on the other side of the parser (say, by implementing a reusable object pool or lazy tree building) than there would be from replacing XML with something that parsed faster. I have never benchmarked SOAP implementations, so I have no idea how well they perform, but my Expat datapoint suggests that XML parsing is unlikely to be the bottleneck. In fact, you might be able to gain more by writing an optimized HTTP library that fed content as a stream rather than doing an extra buffer copy. --David Megginson on the XML Developers mailing list, Monday, 19 Apr 2004. --Dennis Sosnoski on the xml-dev mailing list, Sunday, 04 Apr 2004 Why is there no WSDL for REST? Because HTML is the WSDL for REST, often supplemented by less understandable choreography mechanisms (e.g., javascript). That usually doesn't sit well with most "real" application designers, but the fact of the matter is that this combination is just as powerful (albeit more ugly) as any other language for informing clients how to interact with services. We could obviously come up with better representation languages (e.g., XML) and better client-side behavior definition languages, but most such efforts were killed by the Java PR machine. 
Besides, the best services are those for which interaction is an obvious process of getting from the application state you are in to the state where you want to be, and that can be accomplished simply by defining decent data types for the representations. --Roy T. Fielding Read the rest in Adam Bosworth's Weblog: Learning to REST As a writer I have a hard time understanding why most of the current crop of XML editors have user interfaces that are a lot worse than my old SGML editor. (WordPerfect with an SGML plugin.) Nor can they measure up to other old time SGML/XML tools, such as ADEPT Editor, FrameMaker+SGML, Documentor, and others. (Yes, I know that from a developers point of view, some of these were just horrible. Some were not exactly ideal writng tools either, it's just that they seem better than many of the things that are around today.) Most of the current crop of XML editors, XMetaL and Arbortext Publisher are exceptions, seem to be little more than text editors with syntax highlighting. This is not what I want in an authoring tool that I am going to use several hours a day, every day. Text editors with syntax highlighting may suit programmers, but that is very different from being suitable for authors. XML editors must make it easy to write and structure documents. Context sensitive element dialogs and validation are necessary, of course, but they are not enough, not by a long shot. --Henrik Martensson on the xml-dev mailing list, Friday, 09 Apr 2004. --John Reynolds Read the rest in java.net: Coding for your own legacy [May 01, 2004] Research indicates that 82 percent of Internet users decline to provide any personal information because too many details were asked for that didn't seem necessary. And 64 percent decide not to buy online because they aren't certain how their personal data might be used. High-tech firms need to wake up to the fact that sharing information without permission is bad for business.. --Roger Fairchild, president of The Customer Respect Group Read the rest in BUSINESS WIRE: The Global Leader in News Distribution! --Mark E. Shoulson on the Unicode mailing list, Wed, 28 Apr 2004 Separating processors from syntax is why SAX is a Good Thing. --Norman Gray on the xml-dev mailing list, Wed, 21 Apr 2004 Even in simple contructs like address, customer and so on, you need the freedoms the XML provides to intermix structure with text, to nest structures and to make structures recursive. XML frees you from the modelling strictures created by the false dualism between objects and containers and frees you from the horrors of flattening perfectly good business constructs to fit within the strictures of normalised database tables. A good test for whether or not an application is taking a document-centric or a data-centric approach to data modelling is *mixed content*. Mixed content cuts to the heart of the document-centric worldview and is famously ugly when represented in an OO-like modelling approach. --Sean McGrath Read the rest in Sean McGrath, CTO, Propylon One powerful idiom that has become accepted and expected with XML is that, whenever at all possible, you produce precisely but accept loosely.. --Stephen D. Williams on the xml-dev mailing list, Monday, 19 Apr 2004 Steve Jobs used to talk about wouldn there's no deployment. That's a huge advantage. Because there's no deployment you don't have to bring all your people's PCs in or upload them all heavy. Now software's gotten better at being adaptive and self-modifying, but that cuts both ways. 
I'm sick of applying my upgrades on Windows every night. And it makes me nervous that the software on my PC is constantly changing. So I think what we want isn't a thick client, and I wasn't leading that way. But I think there will be some cases where there's a thick client. I think in general we still want to say an app is just something you point to with a URL. And you don't have to deploy it. And you can throw it out of memory at any time, and there it's still a browser. You have to know one thing once and that's your browser. Then you just point to the URL and you run them in the way that you do in the browser mall as opposed to .EXEs. --Adam Bosworth, BEA Read the rest in BEA's Bosworth: The World Needs Simpler Java SGML came out of work done in the 60s, primarily for document processing and typesetting, and has a lot of practical, hands-on features (e.g. datatag, shortref, omittag, shorttag) that may have helped adoption early on, but in the 1990s were a handicap. --Simon St.Laurent on the xml-dev mailing list, Friday, 4 Oct 2002 A lot of people for and against XML don't take the time to consider the proper usage of XML in large documents. Those I've spoken with who hate XML, or at least feel it's used in the wrong places, are usually because they are dealing with large documents and whatever application is processing them crumbles (such as Microsoft's Biztalk, can't handle a document larger than 20mb). And on the flipside, those for XML usually state things like XML is great for small documents, instant internet communications, etc. Both sides fail to realize that if coded properly, size matters not. --Bryce K. Nielsen on the xml-dev mailing list, Thursday, 8 Apr 2004 XML has a few minor warts; it's still a Good Thing overall. --Joseph Kesselman on the xerces-j-user mailing list, Wed, 7 Apr 2004 I can walk into any meeting anywhere in the world with a piece of paper in hand, and I can be sure that people will be able to read it, mark it up, pass it around, and file it away. I can't say the same for electronic documents. I can't annotate a Web page or use the same filing system for both my email and my Word documents, at least not in a way that is guaranteed to be interoperable with applications on my own machine and on others. Why not? --Eugene Eric Kim Read the rest in A Manifesto for Collaborative Tools XML DSIG and XML-Encryption are based on the XPath model. Since I think signatures and encryption are crucial to the deployment of web services, and since I think it'll be a cold day in h... before the security folks get together to revise those specs to use the Infoset model, I tend to view SOAP 1.2 and its ilk as more DOA than SOA. --Rich Salz on the xml-dev mailing list, Tuesday, 13 Apr 2004 I remain unconvinced that XML tools are usually the right prism through which to view such a serialized RDF document. Consider a document containing the serialization of an RDF triple that expresses [Noah(known by his URI), isAuthorOf, Document(known by its URI)]. While you can use XPath, XSL and or XQuery on the XML serialization, it's not clear to me that this is the architecturally robust way to extract the author for the document. It seems that if the data is fundamentally RDF, you usually want a query mechanism that's aware of RDF and triples. Similarly, I don't think that XML schema (or WSDL) would be a first class way of enforcing the rule that every such document must have a statement of authorship. 
It seems to me that using XML tools on such RDF serializations is about like using Grep on XML documents; both are useful tricks at times, but Grep is not usually the right way to extract information from an XML document and XML tools are not RDF tools. --Noah Mendelsohn on the www-tag mailing list, Wed, 17 Mar 2004 Java objects have an awful lot of built-in memory overhead just for the java.lang.Object base class, and if you naively create a separate object for every element, attribute, attribute value, text chunk, and so on, you end up with a very large in-memory data structure. Memory aside, Java object creation and deletion is also very slow (that's why it takes so long to load an XML document into a DOM). --David Megginson on the XML Developers mailing list, Tuesday, 06 Apr 2004 XLink is too flexible (and as a consequence too verbose) to be successful in a culture that emerged from a/@href. Wouldn't something along the lines of xlink:href + xlink:src + xlink:ref (and perhaps a xlink:multi to flag multi links) be sufficient for the needs of most, far simpler to use than the current XLink (for simple links), and more likely of success? It does nothing to address all the complex linking needs, but those would be easier to build later once at least the simple parts of the spec are successful --Robin Berjon on the xml-dev mailing list, Friday, 26 Mar 2004 the XML infoset is too granular to represent the level of abstraction most real world applications deal with XML. Most real world applications use an abstraction of XML that is more akin to a subset of the XPath data model (elements, attributes, and text nodes). --Dare Obasanjo on the xml-dev mailing list, Monday, 12 Apr 2004. --James Clark Read the rest in RELAX NG, O'Reilly & Associates, 2003, pp. ix. --Andrew Orlowski Read the rest in The Register. --Michael Kay on the xml-dev mailing list, Friday, 26 Mar 2004 :-) --Paul Kocher Read the rest in Slashdot | Security Expert Paul Kocher Answers, In Detail. --Dennis Sosnoski on the xml-dev mailing list, Sunday, 04 Apr 2004- Anyone remember the (dare I say "good") old days when people wrote useful software that did useful things, rather than writing incomprehensible specifications? It's not a "standard" till it's implemented and in wide usage. So it does make me wince when these specifications are blessed as "standards" even before they are fully-baked (or half-baked if you prefer) and before there are usable implementations and/or wide-spread adoption. --Andrzej Jan Taramina on the xml-dev mailing list, Sunday, 04 Apr 2004 I am very much in favour of XML recommendations being accompanied by reference implementations whenever possible. If the people who put the recommendation together can't implement it, then who can? If it is too expensive and time consuming for them to do it, then maybe the recommendation is too complex, or simply not useful enough. --Henrik Martensson on the xml-dev mailing list, Sunday, 04 Apr 2004 I'm not at all sure that starting by building an ontology is a wise investment. As many of you are aware, I've spent the last 1 1/2 years working on the Web Services Architecture group at W3C, which might be characterized as an attempt to define a [relatively informal, although we are toying with an OWL representation] ontology for web services concepts, terminology, and concrete instantiations. 
It is, to put it bluntly, a bitch: what appears to be common sense to one person or organization is heresy to another; the meaning of simple words such as "service" tend to lead to infinitely recursive definitions; and just when you think you start to understand things with some degree of rigor, a new analyst/pundit/consultant fad comes out of left field to confuse things all over again. Imposing ontologies up front is as politically/economically impossible as imposing the One True Schema, and defining them post hoc is difficult and inevitably incomplete/partially inaccurate. --Mike Champion on the xml-dev mailing list, Friday, 19 Sep 2003 It’s no secret that the original XML cabal was a bunch. --Tim Bray Read the rest in ongoing -- OpenOffice because all XML files are text files (despite the best efforts of some people to change that) they're particularly portable between operating systems. --Bob DuCharme on the xml-dev mailing list, Friday, 12 Mar 2004 names of the other attributes in the collection, and a value, which is a Unicode string. That is the complete abstraction. The core ideas of XML are this abstraction, the syntax of XML, and how the abstraction and syntax correspond. If you understand this, then you understand XML. --James Clark Read the rest in RELAX NG, O'Reilly & Associates, 2003, pp. ix-x The perceived slowness is because Xerces is a conformant XML parser. "Faster" XML parsers usually gain from not implementing validation or only supporting a limited number of character encodings. So keep this in mind when evaluating parsers and pick the parser for your application appropriately. --Andy Clark on the xerces-j-user mailing list, Sunday, 07 Mar 2004 PowerPoint can make almost anything appear good and look professional. Quite frankly, I find that a little bit frightening. --David Byrne Read the rest in Wired 12.04: The 2004 Wired Rave Awards The data itself shouldn't be tied to /any/ sort of mechanism for displaying itself, nor should it be self-aware of how it might be used. Because by doing so you pigeon hole the data into use in a single context or application. --Eric Hanson on the xml-dev mailing list, Saturday, 27 Mar 2004. --Ben Trafford on the xml-dev mailing list, Friday, 26 Mar 2004 Many news aggregator applications have "support" for RSS 1.0, using naïve XML parsers. However, if the RDF of the feed is serialized using a triple-oriented format analagous to TriX, most news aggregators would break. The whole ecosystem works, for now, because producers of the RSS 1.0 feeds are careful to emit files that conform to the XML format that the aggregators expect. In other words, RSS 1.0 claims to be an RDF vocabulary, but in practice it ends up being an XML schema. --Joshua Allen on the www-tag mailing list, Thursday, 18 Mar 2004. Libxml2 is fast. I mean insanely fast. Nothing else even comes close. It is insanely fast and insanely compliant with all the specifications that it claims to support, and it is getting faster while gaining more features. So you just know that somewhere, someone is selling their soul to somebody, and you just hope it isn’t you. --Mark Pilgrim Read the rest in Beware of strangers [dive into mark] when a 'system' has that many users, cheap convenient pet tricks recoup very large costs. Those costs come in many forms including the potential wrangling over the holy brackets (Shall We Make Curly or Pointy Holy?). But even then, the impedance mismatch that a syntax specification with a namespace and a structure can cause create very real headaches. 
XML is the winner of the 'pick one' contest. Syntax can be very important. Is it important for everyone to pick one? No, but it is cheaper and convenient. --Claude L (Len) Bullard on the ' www-tag mailing list, Monday, 27 Oct 2003 An even better analogy is putting XML into RDBMS by shredding the documents into tables and columns. You can make it work, and with a little care, you can make it extremely fast (i.e. you can avoid joins for tree-based operations), but the fundamental models *are* different, and there *is* impedance. Without careful design, those issues often come back and bite you in unexpected ways. --Gavin Thomas Nicol on the www-tag mailing list, Friday, 19 Mar 2004 "if I serialize my XML carefully (no comments or no CDATA sections perhaps), it will be a bit easier to use Grep to reliably extract information from my files". True, and that might be a handy thing to do, but Grep really doesn't properly navigate the structure or model of an XML document. --Noah Mendelsohn on the www-tag mailing list, Thursday, 18 Mar 2004. --John Simpson Read the rest in XML.com: From Word to XML [Dec. 30, 2003] On the subject of error handling, one of my biggest hassles in making Saxon portable across different XML parsers has been differences in the way they handle an exception thrown by a callback such as startElement. They vary in whether or not such an exception is notified to the ErrorHandler, and they vary in whether it re-emerges intact as an exception thrown by the parse() method or whether it gets wrapped in some other exception. The specs, of course, are very unspecific on such points. --Michael Kay on the xml-dev mailing list, Tuesday, 24 Feb 2004 What really intrigues me is that for all the theoretical interest in semantic approaches to search/discovery/analysis over the past few years, the actual advances in practical applications seem to come from metadata generation and pattern matching (Google), dirt-simple fuzzy or Bayesian classifers (e.g. Spam Bayes), and brute force "kitchen sink" combinations of it all (e.g. IBM "WebFountain, AFAIK). I'm willing to bet that there is some good synergy between ontologies and the brute-force stuff -- for example I would like to be able to give Spam Bayes some knowledge of my world, e.g. I never spam myself, or a message with no recognizeable words in it is almost certainly spam. Still, I see the "dumb" approaches working every minute of every day (about how often I get spam!) and I'm not seeing the real world success stories for the "smart" approach. --Mike Champion on the xml-dev mailing list, Friday, 19 Sep 2003 The strengths of XML, etc. are not in computer science. Rather, XML's strengths are in the *human sciences* of sociology, psychology, and political science. XML offers us no concepts or methods that weren't completely understood "computer science" long before ASN.1 was first implemented in the early 80's. From a "computer science" point of view, XML is less efficient, less expressive, etc. than ASN.1 binary encodings or the encodings of many other systems. However, because XML uses human readable tag names, because it is text based, easy to write, has an army of evangelists dedicated to it and many freely available tools for processing it, etc. XML wins in any system that values the needs of humans more than those of the machines. XML's ability to "win" in the human arena has enabled a great outburst of computer science as a result of the greater interchange of information and the increased ease of interchange. 
However, this great outpouring of utility and enablement of new computer science work has been at the cost of accepting an interchange format which is "inferior" from the point of computer science. Of course, I think most of us accept that this cost is an acceptable one and a small price to pay in most cases. --Bob Wyman on the xml-dev mailing list, Saturday, 14 Feb 2004 One of my favorite things about my Linux-based Sharp Zaurus PDA is that XML is its native format for its address book, calendar app, etc. I can put it in its cradle, ftp the files to a Windows machine, and do anything with them that I'd do with any other XML file. --Bob DuCharme on the xml-dev mailing list, Friday, 12 Mar 2004 I don't buy the argument that programmers benefit from a Web Services toolkit. Such things do not build applications -- at most they automate the production of security holes. Getting two components to communicate is a trivial process that can be accomplished using any number of toolkits (including the libwww ones). The difficult part is deciding what to communicate, when to communicate it, in what granularity, and how to deal with partial failure conditions when they occur. These are fundamental problems of data transfer and application state. --Roy T. Fielding Read the rest in Adam Bosworth's Weblog: Learning to REST IE is such a poor excuse for a browser that you won't be able to do much with CSS. IE only does tagsoup rendering, you're feeding caviar to the pigs. --Robin Berjon on the xml-dev mailing list, Thursday, 04 Mar 2004. --Dare Obasanjo Read the rest in XML-Journal - Can One Size Fit All? If you happen to use anything but the latest Microsoft browser on the latest Microsoft operating system, there’s a fair chance you’ll be unable to complete a purchase at many sites. For example, there’s some shopping system supplier that has invisible “Buy” buttons when Macintosh users try to hand over their dough, even using the latest Internet Explorer. True, most people have Microsoft systems, but not everybody does, and the color of several million Mac-user and Linux-user credit cards are the same as those of Windows users. The purveyor of this popular shopping system is clearly at fault for not testing its system properly, but what of their many victims, the retail websites that are losing millions because of it? No matter what kind of outsourcing you may do, you need to have an active program to verify the outside party is doing their job. --Bruce Tognazzini Read the rest in AskTog: Top 10 Reasons to Not Shop On Line. --Don Chamberlin Read the rest in XQuery from the Experts: Influences on the design of XQuery - WebReference.com- the Web is messy behind the scenes, but I can still use it to read film reviews, keep up to date with news stories, buy books, check weather, reserve hotel rooms, file flight plans (in the U.S., anyway), download software, and so on. I haven't heard of anyone doing things like that with Xanadu or HyTime-based systems, much less XML+XLink: that's the ultimate testament for 99:1 or 100:0 designs. --David Megginson on the XML Developers List, Thursday, 04 Mar 2004 The ultimate testament for 80/20 designs is that 80% of the time the 80/20 is 80% subjective, caused by a lack of vision and/or a bad design and just a poor excuse to refuse legitimate features! 
--Eric van der Vlist on the xml-dev mailing list, Thursday, 04 Mar 2004 Well, as all good XML developers know, there are 2 main (could also be called “standard”) methods to parse your XML: using the DOM, which loads the whole document tree into memory and is (relatively) easy to use, or via SAX, which is extremely fast, doesn’t have a large memory footprint, but requires a bit of (repetitive) coding techniques. Since the majority of .Net (and previously VB/MSXML) developers didn’t need the performance of SAX, we typically used the DOM method to parse XML in our applications. When Microsoft rolled out .Net, they didn’t include a SAX parser for .Net, but something just a fast (and just as complicated to code against) called the XmlReader (which is basically the pull version equivalent of the push style SAX parser), and .Net developers still had basically two ways to parse XML, and most developers still used the DOM. If you want to parse some XML, you used the DOM. If you had to worry about memory or performance, you used one of the XmlReaders. Life was good, and as developers we fell into a DOM induced coma. --DonXML Demsak Read the rest in Waking Up From a DOM Induced Coma not using XML when you claim to be creating XML has real costs that demand examination of whether using XML in the first place is reasonable. InkML appears to my eye to flunk this test with flying colors. --Simon St.Laurent on the xml-dev mailing list, Thursday, 21 Aug 2003 pretty well all the characters from ASCII and EBCDIC and JIS and KOI8 and ISCII and Taiwan and ISO 8859 made it into Unicode. So at one level, it's reasonable to think of all these things as encodings of Unicode, if only of parts of Unicode. XML blesses this approach, and allows you to encode XML text in any old encoding at all, but doesn't provide a guarantee that software will be able to read anything but the standard Unicode UTF encodings --Tim Bray Read the rest in ongoing Characters vs. Bytes. --Neil Young Read the rest in Wired 12.03: The Reinvention of Neil Young, Part 6 Nattering nabobs of negativism will doubtless be glad to note that XML 1.1 parsers MUST support XML 1.0 as well, and that human and mechanical XML generators SHOULD generate XML 1.0 unless there is a specific reason to generate XML 1.1. --John Cowan on the xml-dev mailing list, Wed, 4 Feb 2004. --David Megginson on the XML Developers mailing list, Friday, 13 Feb 2004. --Eric S. Raymond Read the rest in The Luxury of Ignorance: An Open-Source Horror Story. --Dan Milstein Read the rest in Edge East 2004 - A Skeptic's Tour. --Ganesh Prasad Read the rest in Linux Today - Community: Beyond an Open Source Java." The biggest reason we still have a spam problem in the United States is that spam is 100 percent legal, with some very minor exceptions. If we had effective spam laws, we would be able to get the spam situation under control. It's just like the fax situation. In the 1990s, persons' faxes were full of advertisements. Congress passed a very simple law stating that. --John Levine Read the rest in Finding a way to fry spam |CNET.com. --Jason Hunter Read the rest in Servlets.com Weblog: JDOM Hits Beta 10 I built IE 4 and built the DHTML and built the team that built it. And when we were doing this we didn't fully understand these points. And one of the points was people use the browser as much because it was easy to use as almost anything else. In other words I'd talk to customers and say we can add to the browser all these rich gestures. 
We can add expanding outlines and collapsing and right click and drag over and all that—all the stuff you're used to in a GUI. And without exception the customer would tell me please don't do that, because right now anyone can use the sites we deploy and so our training and support costs are minimal because it's so self-evident what to do. And if you turn it into GUI we know what happens, the training and support costs become huge. So one of the big values of the browser is its limits. --Adam Bosworth, BEA Read the rest in BEA's Bosworth: The World Needs Simpler Java I have written two books on, or partially on, XQuery, in the last year. My take on it is that in order to do anything of interest, you need to know XPath to a fairly solid degree. By the time you get there, XSLT is more expressive and capable than XQuery. --Kurt Cagle on the xsl-list mailing list, Saturday, 21 Feb 2004 It is harmful to allow producing incorrect results in the name of "better performance". In fact, the best speed for producing wrong results should be as close to zero as possible. We should always do whatever is possible to decrease the speed of producing wrong results. --Dimitre Novatchev Read the rest in RE: [DM] IBM-DM-105: Order of comments, PI's and text given [schema normalized value] property from Dimitre Novatchev on 2004-02-18 (public-qt-comments@w3.org from February 2004) XSLT is *not* an angle-bracket processor, it is a node processor where the nodes usually (but not always) happen to come from and go to XML angle brackets. An out-of-line system is going to require users to consider syntactic issues rather than let the processors consider syntactic issues. XSLT relieves the users of this and lets people focus on their information, not on their syntax. The designers of XSLT make this claim up front and don't try to hide it: XSLT was not designed to preserve or manipulate the syntax of a document, it was designed to be totally general purpose with the information structure of a document: when the document is used by a processor downstream, the choice of syntax is irrelevant as long as it is correct. --G. Ken Holman on the xml-dev mailing list, Friday, 20 Feb 2004 Infoset is a concept unknown to XML 1.0, and parsing is the necessary first step of *every* instance of XML 1.0 processing. Many things might be built upon the output of a particular XML 1.0 parse, including a tree, or a graph, or the abstraction into Infoset form of the data items identified in that parse. Yet it seems to me that an 'infoset' would not usually be the desired final product of processing an XML instance, in large part because such an 'infoset' is a terminal output product: it cannot be passed or pipelined into any other context because it is utterly specific to the circumstances in which it is produced. What is required if the output of particular XML processing is to be passed to other XML processing is a document, which of course will first be parsed before any other processing is performed on it in that new environment. It is this conveyance of XML instances from context to context which is so well suited to the internetwork topology, and particularly to the Web-as-we-know-it. Instances are available as entity bodies which we may GET at a particular URL, process, and then republish as new instances at other URLs. --W. E. Perry on the XML DEV mailing list, Wed, 14 Jan 2004 <test_scores/degree/certification/hairstyle> to prove it. But oh what I wouldn't give to wrap my hands around the neck of the one who designed this API..." 
--Kathy Sierra Read the rest in To API designers/spec developers: pity those of us who have to LEARN this... The initial buzz around SOAP was all based on the rpc/encoded use (or Microsoft's "wrapped" variation) where method calls were "transparently" exposed as web services. Now that that's effectively deprecated in WS-I BP 1.0 (for good reason) SOAP currently offers very little (if any) functionality beyond what can be done using direct XML interchange over HTTP. What *is* interesting in the web services area is WSDL, but WSDL doesn't require SOAP. And UDDI has always struck me a solution in search of a problem - I've yet to see any practical applications that couldn't be handled just as easily with a simple web page directory of services. --Dennis Sosnoski on the xml-dev mailing list, Saturday, 14 Feb 2004 most useful innovations tend to come from visionaries who don't fully understand the complexities of what they're unleashing on the world, and not the experts who are focusing on the details. That's more or less Clayton Christensen in a nutshell -- the experts were doing the "sustaining innovations" to make faster and more complicated mainframes while the Jobs/Wozniaks of the world were screwing around in their parents' garages creating the "disruptive innovations." I'm reminded of the (possibly apocryphal) story that Tim Berners-Lee was ignored or scorned by the hypertext community of the late 80's / early 90's because his stuff was so trivial and didn't address the interesting problems. Of course, by ignoring the interesting problems he could deliver something that actually added value vastly disproportionate to the cost of seeing 404 messages now and then. --Michael Champion on the xml-dev mailing list, Friday, 9 Jan 2004 As it applies to XML, ironically, Postel's Law would be difficult, even impossible, to observe if it were not exactly for the relatively stark clarity afforded by the definition of XML well-formedness. (That allegation that XML's creators "broke Postel's Law" misses the point that Postel was writing a *specification*, after all.) Imagine if XML were woolier and wafflier than it is, that its corner cases had not been fully explored and discoverable in the archives of this list. What would being liberal or conservative mean then? Among the endless debates over what was in and what was out, the horn would be sounded for being liberal in what you accept from others, and for all to play, we would have to accept more and more. Soon enough we would have bloatware and vendor lock-in (hey, where have we seen that?). Far from a wise counsel that we should work together, Postel's Law would be a recipe for disaster, if the hawks could not keep insisting that "if it's not well-formed, it's not XML". Well-formedness, however bizarre or arbitrary it may seem in some respects (no unescaped less-than signs in attribute values! no slashes in tag names!) is not just a religion, it is the placing of a boundary. If it parses as XML, well good, go ahead and move on. If it doesn't, all bets are off. Not a threat, but simply a statement of fact, that stuff that doesn't lie about what character encoding it uses (to use an example actually cited), is going to be more predictable and less troublesome in general than stuff that does. --Wendell Piez on the xml-dev mailing list, Friday, 16 Jan 2004). --Dare Obasanjo, Microsoft, on the xml-dev mailing list, Tuesday, 25 Nov 2003 We found that a lot of people try to cut corners on the Internet, because customer service is expensive. 
They think, oh, it's online, it's self-service. But that's a mistake. Online, people need help more, not less, and the human element is something that can be really a great differentiator for a brand. --Lauren Freedman, E-tailing Group Read the rest in Online Shopper: Toll-Free Apology Soothes Savage Beast

Do you believe it's impossible to build a generic data model & format that could be used in place of the myriad of formats you talk about there? A format which could be used, for example, to describe the state of a lightbulb as well as the state of a business process? IMO, not only do I believe this is possible, I believe we have it in our grasp today; it's called RDF (Topic Maps would also work, but they're not as Web-friendly). --Mark Baker Read the rest in Adam Bosworth's Weblog: Learning to REST

In my explorations, ASN.1 toolkits felt more to me like data-binding kits than XML parsers. There doesn't seem to be much notion of anything like an "ASN.1 infoset", a set of containers and properties you can explore without necessarily knowing the bindings. ASN.1 feels effectively schema-driven, designed from the outset to be optimized for a world where processes are tightly bound. There aren't general ASN.1 "parsers" in the same sense that there are XML parsers, or at least there weren't last time I looked. Folks who actually care about XML per se are often looking for looser bindings. ASN.1 chafes against the kinds of assumptions that are common in XML, like that I might conceivably work on found documents with no accompanying metadata. --Simon St.Laurent on the xml-dev mailing list, Friday, 3 Oct 2003

I would have been very happy to pay out a few euros to get, for example, an editor with the functionality of oXygen. Unfortunately, on a powerful machine it is quite simply unusable. One minute to validate a thirty-line document with a simple schema in a separate thread is a little more than 59 seconds too many. --Robin Berjon on the xml-tech mailing list, Tuesday, 10 Feb 2004

--Roy T. Fielding Read the rest in Adam Bosworth's Weblog: Learning to REST

My belief is that the failings of RSS are so great and that the quality of service we'll be able to provide with Atom feeds is so much greater than what we can currently provide, that RSS use will fall off rapidly once Atom becomes established. Users will demand it. --Bob Wyman on the xml-dev mailing list, Friday, 6 Feb 2004

if your site has won a web graphics design award, you are likely in serious need of a redesign. You are likely featuring something useless but pretty or you wouldn’t have won it. Your job is to move product, not to win awards. Useless but pretty not only slows up transfer, it dazzles the customer and draws him or her away from an appreciation of the product you're in business to sell. Sales 101. --Bruce Tognazzini Read the rest in AskTog: Top 10 Reasons to Not Shop On Line

--Don Chamberlin Read the rest in XQuery from the Experts: Influences on the design of XQuery - WebReference.com

Mosaic and the web browser in general made the web go, and the pundits said it would ever be so. But the heyday of the web browser is ending or so it is said. Long live the PC Client for where there might be one icon before, now there will be many each with its own non-interoperating XML format. Standards schmandards, I want candy. What isn't waning? HTTP and URLs. So you have a case for the architecture over the implementation.
--Claude L (Len) Bullard on the xml-dev mailing list, Monday, 12 Jan 2004 At this point, when you put something up on the Web, you don't have to say who put it up there, you don't have to say where it really lives, the author could be anyone. Which is supposedly its freedom. But as a user, I'm essentially in a position where everyone can represent themselves to me however they wish. I don't know who I'm talking to. I don't know if this is something that will lead to interesting conversation and worthwhile information -- or if it's a loony toon and a waste of my time. I'm not a big control freak, I don't really know who would administer this or how it would be. But I would just like to see that a Web page had certain parameters that are required: where it is and whose it is. I would like to have some way in which I could have some notion of who I'm talking to. A digital signature on the other end. --Ellen Ullman Read the rest in Salon | 21st internal subsets shouldn't ever be supported, at least not in the form of passing in a string; guaranteeing that this string is well-formed is way too expensive. Generating an internal subset would require a full DTD-generating API, which is hardly necessary these days, IMAO. --John Cowan on the xml-dev mailing list, Wed, 21 Jan 2004 Xerces is bigger, slower, has more features and fewer bugs. --Michael Kay on the xml-dev mailing list, Sunday, 23 Nov 2003. --Tim Bray Read the rest in ongoing - TPSM 2: Technology Losers ASN.1 would have been a useful interop format for data exchange if it would have been simpler and only text-based. By providing the different binary encodings, it hampered its adoption for interoperability because of the added complexity and costs for tools. Up to the point XML came along and was doing this right and provided the additional benefit of being usable for markup and data. I don't think making XML more complex will be a good idea given my experience with ASN.1 over the last 12 years. --Michael Rys on the xml-dev mailing list, Tuesday, 18 Nov 2003 the notion that there has to be one format, even if that format is only for interchange (which it rarely is), speaks volumes about our fears and very badly about our initiative. --Simon St.Laurent on the xml-dev mailing list, Friday, 23 Jan 2004 It seems to me that it is a little pompous to elevate a pragmatic and practical design choice by Postel in the specification of TCP to a "law" of supposed universal truth - Postel's Law (sic). Postel put forward an approach which was intended to produce particular practical results in the context of TCP. Postel's so-called "Law" is, in fact, a rule of thumb. It may be applicable outside its originally intended scope. But intelligent assessment of its relevance or otherwise in specific settings with potentially different functional requirements is required. Always remembering that the fifth digit is not the most cogitatively-endowed part of the human anatomy. --Andrew Watt on the xml-dev mailing list, Wed, 14 Jan 2004 there is nothing in the DOM specification that states an implementation of that API must be thread-safe. So application writers should never assume that a particular DOM implementation is thread- safe. --Andy Clark on the xerces-j-user mailing list, Monday, 12 Jan 2004. --Ed Davies Read the rest in On Atom and Postel’s Law I realize that no committee can come up with something that works for absolutely everyone. 
But, when you have something evolving on its own, completely organically, it evolves into tag soup, and commercial enterprises don't want to use tag soup. They're nervous enough about the differences between RSS .9, 1.0, 2.0, and Atom. --Bob DuCharme xml-dev mailing list, Friday, 23 Jan 2004 Hypertext is a particular application of linking, a more basic notion. It's a mistake to think of links solely in the hypertext context. --John Cowan on the xml-dev mailing list, Thursday, 22 Jan 2004 Postel's law isn't a law of nature, or of logic. It's a law like almost all the laws we have: a ceteris paribus law. But as is so often the case, other thing _aren't_ always equal. As an engineering principle, tho', it seems to hold enough of the time that it's reasonable to take it as a default position. But that doesn't exclude counter-examples; nor is the presence of counter-examples sufficient to render the rule worthless. --Miles Sabin on the xml-dev mailing list, Tuesday, 13 Jan 2004 Postel's Law is being invoked on the one hand, out of context -- as if by saying "be liberal" Postel were arguing that users of the protocol he is defining can go ahead and break the rules, what the heck, because being liberal and forgiving in what you accept is the rule. But Postel (as has been pointed out) doesn't fairly warrant this: his assumption is that users will be following the explicit rules. Only where the rules fail to be perfectly clear, he adds (in a metastatement that is not, after all, a specification of a protocol but rather -- in the "Philosophy" section -- a hopeful instruction to his readers about how to behave), they should be "conservative ... and liberal". Note *both* conservative and liberal. To suggest that he's therefore licensing rule-breaking (whatever the rule may be) is to miss how he's simultaneously insisting on conformity ("be conservative in what you do"). The Law is a paradox. --Wendell Piez on the xml-dev mailing list, Friday, 16 Jan 2004. --Jakob Nielsen Read the rest in Competitive Testing of Website Usability (Jakob Nielsen's Alertbox) Being "new" no longer has the traditional meaning when patents are concerned. Innovation is no longer a practical requirement for receiving a patent. Consider, for instance, the Microsoft XUL patent. The basic concept or "innovation" that they claim has been "obvious" since at least ALL-IN-1 used a forms based interface for defining office applications back in the early 80's. Even the MID stuff is way too recent if you're looking for prior art on the concept. However, the only thing that distinguishes the Microsoft patent is that *every* claim requires "HTML". Most of the prior art didn't use HTML to accomplish what XUL does and so is not relevant. The mere fact that HTML, or any encoding format like HTML, has the same properties as all the other encoding formats used similarly in the past, is not considered relevant by the patent office. --Bob Wyman on the xml-dev mailing list, Thursday, 18 Dec 2003. --Patrick Stickler on the www-tag mailing list, Thursday, 23 Jan 2003 Experience, with RSS feeds, has shown an overwhelming willingness on the part of the content producers to FIX THEIR ERRORS. Ask them, do it using a third party if you don't want to do it yourself. But above all, DO SOMETHING to help eliminate the problem. --Bill Kearney on the xml-dev mailing list, Friday, 16 Jan 2004 The majority of my users don't know they are using XML, but I definitely like the fact that when my app creates something broken I clearly know it. 
Otherwise, my bugs would have children, grandchildren, ad infinitem; tracing their ancestry would be such a pain. It is such a simple thing to provide a well-formed instance that I don't understand the issue. It is not like every instance *has* to be validated against some XML Schema (or whatever). It makes things much simpler for me so I can provide UIs to users who could care less about XML. --Robert Koberg on the xml-dev mailing list, Friday, 16 Jan 2004 Give a man* a valid XML document and he has a valid XML document. Give a man a tool that produces valid XML documents and he has valid XML documents for life. Expect a man to deal with tag soup, and you're telling him to fish. --Danny Ayers Read the rest in Raw: Thought experiment? Whatever 'liberal' and 'conservative' might have meant in Postel's original usage, in the context of XML instances which we can GET at URLs 'liberal' in what we accept means that we acknowledge the instance is not likely to be in a form which our process can use directly. The only form which our process could use directly would be a very particular data structure. In their own terms, processes operate only upon specific data structures, and neither a concrete instance document nor an abstract infoset is what a process can use directly. The difference is that the process can be designed to be liberal in accepting numerous schematically differing concrete instances to parse and then to instantiate the output of the parse as the particular data structure which the process requires. That liberality cannot be extended to some sort of 'infoset' as input, because input which is not a parseable document must either correspond perfectly to a very specific and typed schematic or be useless to a particular process, and that is the most illiberal of demands. --W. E. Perry on the XML DEV mailing list, Wed, 14 Jan 2004 For those not familiar with XQuery -- it is like XPath + XSLT + Methamphetamines. --Brian McCallister Read the rest in Things I Wish I Had Time to Code in 2004 When I first started talking to Roy about the Web, most of what he said was over my head too. Learning about Web architecture, for me at least, has been a journey of self-exploration more than a lesson in distributed computing (though it's been that too); revisiting past assumptions, reinterpreting past experience, rebuilding mental models, etc... I suppose that I only understand what I do now because I was making an honest attempt to learn why the Web was so successful (as you appear to be doing now, kudos). I wasn't expecting to discover the bigger truth of the Web - I didn't even know there was one when I started. But I think that approaching things from that point of view - humility, I suppose - is a productive thing to do. It encouraged me to spend far too much time dissecting the words of Roy, TimBL, and DanC, because I respected the massive success that their work had seen. --Mark Baker Read the rest in Adam Bosworth's Weblog: Learning to REST to understand the value of REST, you have to understand the value of the Web. People don't need to be a programmer to interact with the Web, or even to build Web pages. People don't need to understand a new language in order to access interesting resources; they only need a URI. The Web increases in value whenever a resource is made available via a URI. 
A company's resources, whether they be bank acccounts, seats on a plane, or simple marketing materials, can either be made available as a set of state operations or as an opaque application (whether that be a poorly written CGI, ISAPI, or JSP application is largely irrelevant -- the user has to learn each site behavior independently). There is no role for control-based integration in such systems -- all of the value is in the data. That is why applications that behave like the Web work better than those that merely gateway to the Web. For the same reason, it would be foolish to use REST as the design for implementing a system consisting primarily of control-based messages. Those systems deserve an architectural style that optimizes for small messages with many interactions. Architectures that try to be all things to all applications don't do anything well. --Roy T. Fielding Read the rest in Adam Bosworth's Weblog: Learning to REST The last time all the peoples of the earth spoke the same language, were Smote Down. Perhaps there's something to be said for Tag Soup. --Rich Salz on the xml-dev mailing list, Friday, 9 Jan 2004 World domination isn't my thing, but if it was, I'd be using XML. --Norman Walsh on the xml-dev mailing list, Friday, 09 Jan 2004 xml potentially helps by providing a universal, verifiable data stream to work with (for starters). things like xsl provide definable ways to manipulate data (transfer functions), and careful construction of specs (dtd/schema) means we can handle pre-conditions - a lot of discussion on this list is about this exact point. this could be a very significant development for computing. --Rick Marshall on the xml-dev mailing list, Tuesday, 06 Jan 2004 Most browser makers couldn't implement HTTP/1.1 correctly if their lives depended on it. --Wesley Felter Read the rest in Re: HTTP Digest Authentication the problem I have with so many tools today is that the engineers have succeeded spectacularly at enabling us to get information into the systems, and failed almost as spectacularly at enabling us to get information out. --Claude L (Len) Bullard on the XML Developers List, Monday, 5 Jan 2004 I've seen a lot of sites that do silly heavy-Javascript navigation stuff, the kind that (when you visit their home page) show you a page saying that they've noticed your browser isn't the most recent version of IE, so please visit and download the latest version. I usually email them a big long rant about how their HTML developers are ripping them off, investing all that extra effort to make the site usable by less browsers, all for a few flashy drop down menus. Judging from responses I receive, the support staff seem to think that adding support for extra browsers is something they need to pay the Javascript developers more to do, not as something that would be there if they're paid the Javascript developers *less* in the first place. I wonder who could have put THAT idea in their heads, eh? --Alaric B Snell on the XML-DEV mailing list, Monday, 05 Jan 2004 Most of the time you really don't need to know which XML parser you are using. I found recently a particular job has been running on Xerces for months when I thought it used Crimson. --Michael Kay on the xml-dev mailing list, Sunday, 23 Nov 2003 RELAX-NG is getting good buzz these days not because it's based on the formalism(s) of hedge automata and tree regular expressions, but because it's elegant -- simple yet powerful. 
RELAX/TREX are elegant because Makoto Murata and James Clark very deeply understand both the underlying formalism and XML itself. No amount of post-hoc formalism can create elegance when it does not exist in the core of a design. --Mike Champion on the xml-dev mailing list
--Bruce Eckel Read the rest in Bruce Eckel's MindView, Inc: 1-1-04 Why we use Ant (or: NIH)
--Brad Porter, TellMe Networks Read the rest in XML makes its mark - Tech News - CNET.com
http://www.ibiblio.org/xml/quotes2004.html
Nazar Buko wrote:
. . .
public class Lock {
    . . .
    public void pull() {
        if (dial1 == letter && dial2 == letter && dial3 == letter) {
            lockOpen = true;
        } else {
            lockOpen = false;
        }
    . . .

Nazar Buko wrote:
. . . How would I set up my set method so that, say the first time the method is called, it stores whatever char was provided in the parameter is the first letter of the combination, and the second time the . . .

fred rosenberger wrote:
. . . On a real lock, you don't enter one number, see if it opens, enter the next number, see if it opens, enter the third, and see if it opens. . . .

Campbell Ritchie wrote: I think you will have to query that. You would have to change it to myLock.set('C', 'A', 'K'); or similar.
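Putting the suggestions in the thread together, a minimal sketch of a Lock that takes all three letters in one set() call (as in Campbell Ritchie's myLock.set('C', 'A', 'K') example) might look like the following. The combo fields, the constructor, and isOpen() are hypothetical additions for illustration; the original poster's full class is not shown in the excerpt above.

public class Lock {
    private final char combo1, combo2, combo3; // the correct combination
    private char dial1, dial2, dial3;          // what has been dialed so far
    private boolean lockOpen;

    public Lock(char c1, char c2, char c3) {
        combo1 = c1;
        combo2 = c2;
        combo3 = c3;
    }

    // Set all three dials in a single call, e.g. myLock.set('C', 'A', 'K');
    public void set(char d1, char d2, char d3) {
        dial1 = d1;
        dial2 = d2;
        dial3 = d3;
    }

    // Open only if every dial matches its own letter of the combination
    public void pull() {
        lockOpen = dial1 == combo1 && dial2 == combo2 && dial3 == combo3;
    }

    public boolean isOpen() {
        return lockOpen;
    }

    public static void main(String[] args) {
        Lock myLock = new Lock('C', 'A', 'K');
        myLock.set('C', 'A', 'K');
        myLock.pull();
        System.out.println(myLock.isOpen()); // prints true
    }
}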
http://www.coderanch.com/t/621081/java/java/Method-creation-trouble
On Sat, 12 Jul 2008, Linus Torvalds wrote:> > >.> > This sounds like it could trigger various other problems too, but happily > hit the BUG_ON() first. - both cpu_down() and cpu_up() can just end with a simple if (cpu_online(cpu)) cpu_set(cpu, cpu_active_map); before they release the hotplug lock, and it will always do the right thing regardless of whether the up/down succeeded or not.The _only_ thing that "active" is used for is literally to verify that a migration is valid before doing it.Now, a few points: (a) it's *TOTALLY* untested. It may or may not compile. (b) I think many architectures do magic things at the initial boot, and change the online maps in odd ways. I tried to avoid this by simply doing the initial "cpu_set()" of cpu_active_map pretty late, just before we bring up other CPUs (c) I think this is a pretty simple approach - and like how all the code is architecture-neutral. The "active" map may not be used a lot, but doesn't this simplify the whole problem a lot? It just makes the whole scheduling issue go away for CPU's that are going down.What do you guys think? Ingo?Vegard, and just out of interest, in case this would happen to work, does this actually end up also fixing the bug (with the other fixes unapplied?) Linus--- include/linux/cpumask.h | 15 ++++++++++++++- init/main.c | 7 +++++++ kernel/cpu.c | 8 +++++++- kernel/sched.c | 2 +- 4 files changed, 29 insertions(+), 3 deletions(-)diff --git a/include/linux/cpumask.h b/include/linux/cpumask.hindex c24875b..88f2dd2 100644--- a/include/linux/cpumask.h+++ b/include/linux/cpumask.h@@ -359,13 +359,14 @@ static inline void __cpus_fold(cpumask_t *dstp, const cpumask_t *origp, /* * The following particular system cpumasks and operations manage- * possible, present and online cpus. Each of them is a fixed size+ * possible, present, active and online cpus. Each of them is a fixed size * bitmap of size NR_CPUS. 
* * * #else * cpu_possible_map - has bit 'cpu' set iff cpu is populated * cpu_present_map - copy of cpu_possible_map@@ -417,6 +418,16 @@ extern cpumask_t cpu_possible_map; extern cpumask_t cpu_online_map; extern cpumask_t cpu_present_map; +/*+ * With CONFIG_HOTPLUG_CPU, cpu_active_map is a real instance.+ * Without hotplugging, "online" and "active" are the same.+ */+#ifdef CONFIG_HOTPLUG_CPU+extern cpumask_t cpu_active_map;+#else+#define cpu_active_map cpu_online_map+#endif+ #if NR_CPUS > 1 #define num_online_cpus() cpus_weight(cpu_online_map) #define num_possible_cpus() cpus_weight(cpu_possible_map)@@ -424,6 +435,7 @@ extern cpumask_t cpu_present_map; #define cpu_online(cpu) cpu_isset((cpu), cpu_online_map) #define cpu_possible(cpu) cpu_isset((cpu), cpu_possible_map) #define cpu_present(cpu) cpu_isset((cpu), cpu_present_map)+#define cpu_active(cpu) cpu_isset((cpu), cpu_active_map) #else #define num_online_cpus() 1 #define num_possible_cpus() 1@@ -431,6 +443,7 @@ extern cpumask_t cpu_present_map; #define cpu_online(cpu) ((cpu) == 0) #define cpu_possible(cpu) ((cpu) == 0) #define cpu_present(cpu) ((cpu) == 0)+#define cpu_active(cpu) ((cpu) == 0) #endif #define cpu_is_offline(cpu) unlikely(!cpu_online(cpu))diff --git a/init/main.c b/init/main.cindex f7fb200..bfccff6 100644--- a/init/main.c+++ b/init/main.c@@ -414,6 +414,13 @@ static void __init smp_init(void) { unsigned int cpu; + /*+ * Set up the current CPU as possible to migrate to.+ * The other ones will be done by cpu_up/cpu_down()+ */+ cpu = smp_processor_id();+ cpu_set(cpu, cpu_active_map);+ /* FIXME: This should be done in userspace --RR */ for_each_present_cpu(cpu) { if (num_online_cpus() >= setup_max_cpus)diff --git a/kernel/cpu.c b/kernel/cpu.cindex c77bc3a..2a30026 100644--- a/kernel/cpu.c+++ b/kernel/cpu.c@@ -44,6 +44,8 @@ void __init cpu_hotplug_init(void) #ifdef CONFIG_HOTPLUG_CPU +cpumask_t cpu_active_map;+ void get_online_cpus(void) { might_sleep();@@ -269,11 +271,13 @@ int __ref cpu_down(unsigned int cpu) int err = 0; cpu_maps_update_begin();+ cpu_clear(cpu, cpu_active_map); if (cpu_hotplug_disabled) err = -EBUSY; else err = _cpu_down(cpu, 0);-+ if (cpu_online(cpu))+ cpu_set(cpu, cpu_active_map); cpu_maps_update_done(); return err; }@@ -337,6 +341,8 @@ int __cpuinit cpu_up(unsigned int cpu) else err = _cpu_up(cpu, 0); + if (cpu_online(cpu))+ cpu_set(cpu, cpu_active_map); cpu_maps_update_done(); return err; }diff --git a/kernel/sched.c b/kernel/sched.cindex 4e2f603..21ee025 100644--- a/kernel/sched.c+++ b/kernel/sched.c@@ -2680,7 +2680,7 @@ static void sched_migrate_task(struct task_struct *p, int dest_cpu) rq = task_rq_lock(p, &flags); if (!cpu_isset(dest_cpu, p->cpus_allowed)- || unlikely(cpu_is_offline(dest_cpu)))+ || unlikely(!cpu_active(dest_cpu))) goto out; /* force the process onto the specified CPU */
http://lkml.org/lkml/2008/7/12/137
getpeereid()
Get the effective credentials of a UNIX-domain peer

Synopsis:
#include <sys/types.h>
#include <unistd.h>

int getpeereid( int s, uid_t *euid, gid_t *egid );

Since: BlackBerry 10.0.0

Arguments:
- s - A UNIX-domain socket (see the UNIX protocol) of type SOCK_STREAM on which either you've called connect(), or one returned from accept() after you've called bind() and listen().
- euid - NULL, or a pointer to a location where the function can store the effective user ID.
- egid - NULL, or a pointer to a location where the function can store the effective group ID.

Library: libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description:.

Errors:
- EBADF - The argument s isn't a valid descriptor.
- ENOTSOCK - The argument s is a file, not a socket.
- ENOTCONN - The argument s doesn't refer to a socket on which you've called connect(), or isn't one returned from accept().
- EINVAL - The argument s doesn't refer to a socket of type SOCK_STREAM, or the system returned invalid data.

Last modified: 2014-06-24
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/g/getpeereid.html
In cases where Client Proxy methods do not return Response or ClientResponse, it may not be desirable for the Client Proxy Framework to throw generic ClientResponseFailure exceptions. In these scenarios, where more fine-grained control of thrown Exceptions is required, the ClientErrorInterceptor API may be used.

public static <T> T getClientService(final Class<T> clazz, final String serverUri)
{
   ResteasyProviderFactory pf = ResteasyProviderFactory.getInstance();
   pf.addClientErrorInterceptor(new DataExceptionInterceptor());
   System.out.println("Generating REST service for: " + clazz.getName());
   return ProxyFactory.create(clazz, serverUri);
}

Note, however, that the response input stream may need to be reset before additional reads will succeed.

public class ExampleInterceptor implements ClientErrorInterceptor
{
   public void handle(ClientResponse response) throws RuntimeException
   {
      try
      {
         BaseClientResponse r = (BaseClientResponse) response;
         InputStream stream = r.getStreamFactory().getInputStream();
         stream.reset();
         String data = response.getEntity(String.class);
         if (FORBIDDEN.equals(response.getResponseStatus()))
         {
            throw new MyCustomException("This exception will be thrown "
                  + "instead of the ClientResponseFailure");
         }
      }
      catch (IOException e)
      {
         //...
      }
      // If we got here, and this method returns successfully,
      // RESTEasy will throw the original ClientResponseFailure
   }
}

Resteasy has a manual API for invoking requests: org.jboss.resteasy.client.ClientRequest. See the Javadoc for the full capabilities of this class. Here is a simple example:

ClientRequest request = new ClientRequest("");
request.header("custom-header", "value");
// We're posting XML and a JAXB object
request.body("application/xml", someJaxb);
// we're expecting a String back
ClientResponse<String> response = request.post(String.class);
if (response.getStatus() == 200) // OK!
{
   String str = response.getEntity();
}

When using Spring you can generate a REST client proxy from an interface with the help of org.jboss.resteasy.client.spring.RestClientProxyFactoryBean. <bean id="echoClient" class="org.jboss.resteasy.client.spring.RestClientProxyFactoryBean" p:
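For completeness, the ProxyFactory.create() calls shown above expect a JAX-RS annotated interface, which this excerpt does not include. As a hedged illustration only, the EchoService interface, its /echo path, and its query parameter below are invented for this sketch; any JAX-RS resource-style interface would be handled the same way by the proxy framework.

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;

// Hypothetical client-side interface: RESTEasy generates the HTTP plumbing
// for each annotated method when ProxyFactory.create() is called.
@Path("/echo")
public interface EchoService
{
   @GET
   @Produces("text/plain")
   String echo(@QueryParam("message") String message);
}

// Typical usage (the serverUri below is assumed, not taken from the docs):
// EchoService client = ProxyFactory.create(EchoService.class, "http://localhost:8080/services");
// String reply = client.echo("hello");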
http://docs.jboss.org/resteasy/2.0.0.GA/userguide/html/RESTEasy_Client_Framework.html
Introduction This article provides an overview of common deployment topologies and configurations for IBM® WebSphere® Service Registry and Repository (hereafter referred to as Service Registry). Deployment topologies represent how Service Registry is configured to support your governance process, with runtime registries corresponding to specific life cycle stages or activities, such as evaluation, development, test, and production. Deployment configuration patterns represent the physical Service Registry installation required to support such topologies. Because Service Registry is a J2EE application based on WebSphere Application Server, its available configuration patterns are driven by those available to WebSphere Application Server. (If you are unfamiliar with WebSphere Application Server, see WebSphere Application Server concepts at the bottom of the article.) This article is for newcomers to Service Registry who have the responsibility to deploy the product to meet an enterprise's needs. It is structured around the stages of the Service Registry deployment life cycle, and describes the topologies you might adopt at each stage and the pros and cons of common deployment configurations. This article uses the term life cycle in two contexts. The services that you are governing have a life cycle, as they are modelled, assembled, deployed, and managed. And your implementation of Service Registry also has a life cycle, as you move from evaluation to implementation. Typical Service Registry life cycles: - Evaluation and training -- used for proof-of-concept, prototyping, general product evaluation, and educational workshops. - Development and testing -- used for development and testing of governance and promotion processes. - Full production -- used for staged testing and final deployment to production systems. Overview of Service Registry The main function of Service Registry is to act as a repository for service-related data, and for descriptions of this data. Service Registry is used to store the physical documents related to the description of a service. For example, you might want to store the WSDL file that defines a service, XSD files describing the message format of expected incoming requests and outgoing responses from that service, and any Web service-related policy documents describing policies that are applied to that service. When you import these files into Service Registry, it parses the file contents and breaks it into constituent parts, or entities, in a process known as shredding. Shredding creates physical entities, representing real objects such as the actual files, and logical entities, representing information contained in the files. As part of this import process, additional metadata is created and tied to these newly created logical and physical entities within Service Registry. This metadata helps describe, classify, and relate the entities to their origin. Thus the entities all have standard properties associated with them. The entities derived when documents are imported make up the logical model of Service Registry. The logical model supports entities such as port type, port, and message (related to WSDL files), and complex type or simple type (related to XSD documents). These entities each have properties and relationships that represent their characteristics, as defined in the source document.
For example, a physical WSDL document entity has values assigned to standard properties, such as its assigned name, a specific namespace group it belongs to, and an initial version number. A logical WSDL service entity also has a namespace property and a relationship to the ports it contains. All individual results of document shredding are aggregated into one logical model that represents not only the content of individual documents, but also relationships between content in different documents. Logical entities cannot be individually versioned, because they are derived from a physical document (which can be versioned) and cannot therefore be independently manipulated. However, as described above, both logical and physical entities can be assigned predefined metadata describing their relationship to other stored entities and classified as belonging to one or more grouping systems. Service Registry has configuration profiles that provide predefined property, relationship, and classification definitions. You can use the configuration profiles provided with Service Registry, or produce customized configuration profiles to reflect your organization's needs. For an overview of the Service Registry content model, see Content model overview in the Service Registry information centre. Service Registry design considerations Two types of operations are performed in Service Registry: - Governance - SOA development involves publishing new services, governing services throughout their life cycle, and querying Service Registry to find services that can be used to develop a new business process. - Runtime - An application or enterprise service bus (ESB) interrogates Service Registry to discover an appropriate service to use at run time, or to make routing or data transformation decisions. You may have multiple runtime environments in your Service Registry deployment. For example, there may be two or more ESB domains, perhaps as a result of an acquisition or merger. Many Service Registry implementations incorporate both a governance function and a runtime function, but there are also governance-only and runtime-only implementations. This article assumes a dual-purpose Service Registry system. Many implementations of Service Registry involve multiple service registries: a governance registry in which governance operations take place, and one or more runtime registries, which are populated from the governance registry by promotion. Promotion occurs when governed objects reach a certain defined state in their life cycle. Actions involved in governing a service Here is the sequence of actions involved in governing a service in Service Registry: - Publish the service to the governance service registry, or discover the service from another application environment (such as WebSphere Application Server) using the Service Registry service discovery mechanism. - Govern the service through the development life cycle. - Deploy the service to production. The service is promoted automatically to the runtime service registry. - Retire the service. The service is re-promoted to the runtime service registry to update its life cycle state. - Delete the service from the runtime registry. Synchronization of life cycle states When governing a service, you move it through various life cycle stages. For example, you might be governing a version of a service as a service version object in Service Registry. 
During its life cycle, the service version progresses through the model, assemble, deploy, and manage phases, and each of these phases has a number of life cycle states that the service version transitions through. In the case of the deploy phase, the object transitions through staging review, staged, certification review, certified, operational review, and operational states. Changes to life cycle states must be performed only in the governance service registry. After changing the life cycle state of a service in the governance registry, it is promoted (or re-promoted) to the runtime service registry to update its life cycle state there. This promotion ensures that the life cycle states remain synchronized for the separate copies of the same service that exist in the governance service registry and any runtime registries, preventing any negative impact on business runtime operations. Part of your design is deciding at what point in the life cycle the object is promoted. For example, you may decide to promote a service version object when it reaches the operational state, and not before. After a service has reached the end of its useful life, and you have updated the life cycle state in the governance registry and re-promoted the service to synchronize the change with runtime service registry, you will eventually delete the service from the runtime registry as part of general cleanup operations. Configuration profiles Service Registry supports configuration profiles, which can be loaded and activated in your Service Registry installation and control the capabilities of Service Registry. For example, if you want to implement service governance, use the Governance Enablement Profile (GEP). Otherwise, for a runtime-only service registry with no governance, you might use the Basic Profile. You can design and load your own configuration profiles. For example, if you want to implement a life cycle model different from those implemented in the GEP, Service Registry Studio provides a graphical interface for doing so. Do not confuse configuration profiles with WebSphere Application Server profiles, which are explained under WebSphere Application Server concepts at the bottom of the article. You must consider which configuration profiles to activate in each of the service registries in your deployment. Typically, the service registries support different sets of operations. For example, the runtime registry might not support life cycle transitions, depending on promotion from a governance registry to implement state changes. You might configure your runtime registry to be read-only. Alternatively, you might have a legitimate reason to update your runtime service registry, for example, to support IBM Tivoli Composite Application Manager (ITCAM) for SOA, which can monitor a running service, and update the metadata for the service entry in Service Registry. Policy management You can use Service Registry to manage policies, which specify the requirements that must be met so that a service can be consumed by a client. For example, a Web service may require that all messages are digitally signed and encrypted; in this example, the requirement for signature and encryption is the policy and the Web service itself is referred to as the subject of the policy. Service Registry provides a central point of management for such policies. A policy has its own life cycle and must be promoted from the governance service registry to the runtime service registry at the required point in that life cycle. 
You can discover policy sets from WebSphere Application Server or WebSphere Message Broker and govern them in the governance service registry. Then, at the required point in the governance life cycle, you can publish them automatically to other WebSphere Application Server or WebSphere Message Broker instances. You can also modify the policies themselves and then replicate back the changes. (This example involves WebSphere Application Server in a wider context, not just in the role of application server for Service Registry). WebSphere DataPower Appliances can, at run time, pull policies from the runtime service registry that are attached to Web Service Definition Language (WSDL) elements, or obtain a complete WSDL file, containing policy attachments, for processing. Evaluation and training If you are a potential user of Service Registry, and want to quickly install and evaluate it to confirm that it meets your requirements, start with a stand-alone deployment topology. If you are an experienced Service Registry user who is tasked with training other users, then the stand-alone deployment topology is also recommended. Stand-alone deployment topology In a stand-alone Service Registry deployment, all service metadata is stored in a single service registry. It must be configured to achieve the right degree of separation between development and runtime usage, so that changes made to services in the service registry during development cannot harm business runtime operations, and so that publish and governance operations do not degrade runtime performance. Therefore, a stand-alone deployment is not suitable for pilot or production use for any but the smallest implementations. Instead, use this deployment configuration for: - Proof of concept - Proof of technology - Prototyping - General product evaluation - Educational workshops The stand-alone deployment will help you understand the issues of your production environment relating to segregation of service registry content, and to plan the most appropriate deployment configuration to use for production. To achieve segregation in a single service registry, you must use additional metadata to identify which environment applies to each service. If you have more than one runtime domain, the configuration of your registry service metadata must discriminate between these domains. For example, the same service may be defined in two or more different domains, with different endpoints. Figure 1 summarizes how the stand-alone registry provides support for the development and runtime environments: Figure 1. Separate development and runtime environments in a stand-alone deployment Segregation of data in a single service registry The recommended method for segregating data within a single service repository is by using classifications. The classification system is a form of pre-defined metadata that can be tagged to entities within your service repository to offer a method of data segregation. For example, the GEP defines a governance profile taxonomy -- a system of hierarchical types that can be used to describe entities. The types are expressed in a class and subclass system, which is a base classification used in part to describe different environments that services might be deployed to. The top-level class of Environment enables you to segregate different services or versions of a service across environment boundaries. 
You can assign your service to one of the following pre-defined subclasses in this type hierarchy: Development, Test, Staging, and Production. You can use the same approach if you have more than one runtime domain. Again the configuration of your registry service metadata must discriminate between these domains. For example, the same service might be defined in two or more different domains, with different endpoints. As before, the classification system described in the pre-supplied governance profile taxonomy can be applied to your services. In addition to defining an Environment type system, a base business domain class system is also described, providing the following predefined sub-classes: - Finance - Insurance - Insurance Account Management - Insurance Claims Processing - Sales and Marketing The GEP provides other classification systems, and you can also design your own if you define a custom configuration profile. Stand-alone deployment configuration A stand-alone deployment configuration has WebSphere Application Server, Service Registry, and the Service Registry database on the same node. A common variant is to have WebSphere Application Server and Service Registry on the same node, and a remote Service Registry database on a separate system. If required, you can install several stand-alone Service Registry systems on the same node, but each stand-alone Service Registry is administered from its own dedicated WebSphere Application Server admin console (in WebSphere Application Server terminology, each Service Registry is in a separate cell). Stand-alone system with local database In this configuration, WebSphere Application Server, Service Registry, and the Service Registry database are installed on a single node (a single computer): Figure 2. Stand-alone system with local database Stand-alone system with remote database In this configuration, WebSphere Application Server and Service Registry are installed on one computer, and the Service Registry database on a different one: Figure 3. Stand-alone system with remote database Development and testing If you are a new user of Service Registry looking to develop and test a governance view of service metadata before going live with a production system, then the pilot deployment is recommended. You can use a pilot deployment system to develop and test the administration of service metadata, to develop life-cycles, and to perfect a promotion strategy. The emphasis in a pilot deployment system is developing and testing your implementation of Service Registry. It provides a sandbox for you to prototype and test your service management processes as well as any components that you develop, such as custom configuration profiles, or Business Space environments tailored to different end users. The pilot environment does not contain real service metadata, and service metadata does not move between the pilot environment and the full production environment. You are effectively testing the configuration profile in the pilot environment, and you can move the configuration profile into production. Again, this recommended deployment is applicable even if only a governance registry and repository is required. Runtime registries illustrated in this deployment can be removed. Pilot deployment The pilot topology described here contains two separate governance service registries and one runtime service registry. The pilot is used for customization of registry governance life-cycles and profiles. 
There are two governance service registries, because one is used for developing the governance processes and the other is used to test the governance processes before deploying content to the production instance. Figure 4 shows this topology: Figure 4. Overview of pilot deployment system In the pilot deployment system, users work in the development and test governance service registries only, and content is promoted automatically to the runtime service registry at the appropriate point in the governance life-cycle. Although not primarily intended for production use, this topology can be used in production for a small-scale SOA environment. This section has described the second stage in a typical registry life-cycle, the piloting of the purchased technology. It has described the recommended multi-registry deployment topology, providing physical segregation of data into separate registry repositories. Each registry manages content at different levels of completeness associated with different phases of the SOA life-cycle. The next section describes the recommended deployment configuration to support this topology. Again, you will see that the suggested configuration patterns closely mirror what is permissible under WebSphere Application Server. Pilot deployment configuration A pilot deployment configuration has three independent registries and associated databases installed under a single WebSphere Application Server installation. Four variations of this configuration are shown below: - A typical configuration would have all these on the same node: that is, multiple application server profiles within one WebSphere Application Server installation: Figure 5. Pilot deployment configuration on a single node -- single WebSphere Application Server - As above but using remote registry databases held on a separate system or systems: Figure 6. Pilot deployment configuration on a single node -- single WebSphere Application Server, remote database - Multiple stand-alone application servers existing on a single machine, but through independent installations of WebSphere Application Server. Again, with or without remote database hosting: Figure 7. Pilot deployment configuration on a single node -- multiple WebSphere Application Server - Multiple stand-alone application servers all existing as independent installations of WebSphere Application Server on separate nodes. Again, with or without remote database hosting: Figure 8. Pilot deployment configuration on multiple nodes Full production When you are ready to go live with your system, you need to manage a series of staging environments, using one registry for each. You can focus and support the various stages of the service development and testing life cycle in an isolated manner prior to final deployment to the production runtime registry. This section describes topologies for a full production deployment. Full production deployment topology The full production deployment is used to support staged testing and final deployment to production systems. In this topology, information from the single governance service registry is promoted to a series of staging environments for different levels of testing before final deployment to the production runtime service registry. Figure 9 shows a full production deployment, with the numbers representing the promotion sequence to various environments. Figure 9. 
Full production environment A service can pass through a series of environments between its initial development and its release into production -- for example, development, test, staging, and production environments. In a full production deployment, there is one registry for each environment that you want to test in an isolated manner. The number and types of environments depend on the nature of your development process. Keeping the service metadata for your development, testing, and staging environments in separate registries provides fine-grained control, letting you more effectively mirror what will happen when a service goes into production. Typical registries found in this deployment topology include: - Governance master - Development teams share access to the content in this registry, typically segregated according to department. All discovery, development, and governance operations are performed here. - Runtime staging - One registry for each staging environment -- for example, integration, acceptance testing, and pre-production. - Runtime production - A limited set of content from the governance registry is promoted to this registry when services are ready to go into production, using the registry promotion feature. A typical set of runtime testing environments, each having its own service registry, includes: - Integration testing - Testing in the integration environment ensures that the service functions correctly when integrated with the component or components that depend on it, or on which it depends. For example, a service may be consumed by an application, and the service and application may be owned by different departments and therefore have been developed and tested independently. - Acceptance testing - Acceptance testing ensures that the service, together with all integrated components, meets the stated business need. - Pre-production testing - The pre-production environment is an exact replica of the production system, and therefore testing in it ensures that the service will function as required in production. Service promotion and synchronization You move services through the different environments via promotion. Conceptually, a service moves from one environment to the next when it reaches the appropriate life cycle state as a result of governance operations. In practice, all governance operations are performed on the service in the governance service registry, and promotion is used to promote a copy of the service to the next staging service registry in the sequence and, ultimately, to the production service registry. This promotion ensures that the life cycle states and metadata remain synchronized for the separate copies of the same service in the service registries. Configuration profiles In a production environment, you can load and activate different configuration profiles on the service registries. You can use the configuration profile to lock down functionality in your staging and runtime service registries to ensure that life cycle state changes and associated metadata changes are confined to the governance service registry. Full production deployment configuration A full production deployment configuration will have multiple independent service registries and their associated databases. For high availability (redundancy) in the production environment, each service registry is replicated across multiple nodes in a cluster system (a horizontal cluster with cluster members on multiple nodes across many machines in a cell). 
Each service registry uses remote registry databases held on separate systems. This configuration is shown in Figure 10: Figure 10. Full production configuration Other variants for each registry are: - Multiple stand-alone application servers on a single machine, but through independent installations of WebSphere Application Server, again with remote database hosting. - Multiple node, multiple application server profiles with remote database - Multiple node, multiple stand-alone application servers with remote database Migrating between topologies Special considerations for runtime service registries A runtime registry is any non-governance service registry. Runtime registries are satellite service registries (such as Test, Pre-production, and Production) that get populated by promotion of objects from a governance service registry. You can add further runtime registries or remove existing ones at any time. When creating a new runtime registry, it is populated from the governance registry. Communication between a governance registry and runtime registries can vary from direct network connectivity, to no direct connection, to the use of a DMZ. Runtime registries do not necessarily follow the same security model as the governance registry -- in fact, a runtime registry is likely to have more restrictive security policies. When following recommended practice and using the GEP in the governance registry, everything from capability version downwards is typically promoted and stored in a runtime registry. You do not promote the actual physical documents (such as WSDLs, XSDs, and policies), but rather the objects that are derived from these documents, and which define the service. Although you can customize and configure the GEP, keep changes to a minimum as to avoid complications with promotion. When possible, lock down runtime registries with as few unneeded processes running as possible. Guidelines include: - Tighten access permissions -- there is no need for wide access to create/update/delete. Establish set security permissions, such as for Administrator, ITCAM, and UDDI. - Turn off some plug-ins, but consider those that should run when updates occur. For example, ITCAM can require update access and promotion will cause creates/updates. Consider using the governance policy validator. - Disable e-mail notification and scheduled tasks. - Ensure that the correlator plug-in is not enabled. - Disable promotion to other registries. - Consider using a different user interface for your runtime registries. For example, use Business Space instead of the Web UI. - Disable JMS. (However, if you want to synchronize your runtime registry with UDDI, you need JMS enabled.) - Possibly disable activity logging. (Not strongly recommended as such logging is more relevant for a runtime registry.) Evolving your deployment topologies Having created the stand-alone evaluation and training topology described in this article, you might want to use it as the basis of a pilot development and testing topology. Or you might want to use your pilot topology as the basis of a full production system. Evolving between topologies in this way involves the creation of additional service repositories: - When moving from a stand-alone deployment configuration to a pilot deployment configuration, you move from having a single service registry to having separate governance and runtime service registries. 
- When moving from a pilot deployment configuration to a full production deployment configuration, you require additional runtime registries so that you have separate service registries for staging and production (and you might require multiple staging registries for different testing phases). To establish additional runtime registries in support of such a migration, take the following steps: - Identify services and objects that should be present in the target runtime service registry. Since promotion occurs on a transition, it is not sufficient to assume anything in a specific state has been promoted. You must also be sure that the only way to have arrived at the state concerned, within the life cycle, was along the transition that would have caused promotion. If this assumption can be made, then you can identify which services or objects should be in the repository based on the state they are in. - Establish how to re-trigger the promotion to the newly created runtime registry you wish to populate. Temporarily modify the life cycle for the states you identified in Step 1. Add a re-promote transition and then add these re-promote transitions to the promotion configuration with the new runtime registry as the target. - Consider the order objects get transitioned to trigger re-promotion. Ideally you should follow the governance enablement profile process and re-promote entities in the same order as they were initially promoted, in order to avoid potential promotion issues on the target. The actions (updates) that promotion takes completely depends on what already exists within the target registry. In theory, re-promoting the highest level object (for example, service version) will result in everything else getting created if it does not already exist. Promotion is only a snapshot in time. The runtime registry only holds objects as they were at the point in time the transition caused promotion. Any attempt to re-promote later to recreate a runtime registry will not necessarily re-promote the exact same metadata. So a runtime registry created from day one and a runtime registry later re-created via this mechanism are not guaranteed to be identical. The differences would correspond to changes permitted within the GEP while the service is in a state identified in Step 1. Additionally, policies do not get automatically promoted across with service documents to which they apply as part of the standard process modelled by the GEP, because policy attachments relate to all objects to which they apply (whether ready for promotion or not). So for example, promotion of a WSDL will not result in its policy attachments being included. WebSphere Application Server concepts You need to be familiar with some key WebSphere Application Server terms and concepts: - Node - A node is a logical grouping of managed application servers. A node usually corresponds to a logical or physical computer system with a distinct IP host address. Nodes cannot span multiple computers. The node name is based on the host name of the computer, such as mandsrv01. Nodes can be managed or unmanaged -- an unmanaged node does not have a node agent or administrative agent to manage its servers, whereas a managed node does. An application server can be on unmanaged or managed nodes. - Cells - WebSphere Application Server can be configured so that several nodes can be associated together in a cell, which is implemented by having a Deployment Manager that administers the nodes in the cell. 
All of the components in the cell are administered from a single WebSphere Application Server administrative console. There can be only one registry installed in a cell (although it can be replicated across the nodes in a cluster system – see definition of cluster). - Distributed (or federated) node - WebSphere Application Server supports a distributed configuration, where a deployment manager controls several application servers distributed across several nodes. Each node can have different resources and capabilities, but all are in the same cell and are controlled from a single administrative console. Because of the limitation of having only one registry in a cell, this configuration is not applicable when you require multiple, independent registry installations. - Clusters - Clusters are like distributed systems where application servers are managed together, but the aim is to provide high availability or workload balancing. Servers that belong to a cluster are members of that cluster set and must all have identical application components deployed on them. (The physical machines do not have to be identical). A vertical cluster has cluster members on the same node or physical machine. A horizontal cluster has cluster members on multiple nodes across many machines in a cell. You can configure either type of cluster, or have a combination of vertical and horizontal clusters. For high availability, you need a horizontal cluster. - Profile - A WebSphere Application Server profile defines the runtime environment. The profile includes all the files that the server processes in the runtime environment, and that you can change. An administrator can define multiple such runtime environments under one installed copy of WebSphere Application Server. Administration is enhanced when using profiles rather than multiple product installations. Not only is disk space saved, but updating the product is simplified when you maintain a single set of product core files. Also, creating new profiles is more efficient and less error-prone than full product installations, enabling you to create separate profiles of the product for development and testing. WebSphere Application Server provides different profile types. (Do not confuse a WebSphere Application Server profile with a Service Registry configuration profile. A Service Registry configuration profile is loaded and activated in Service Registry after installation, and configures its capabilities). Conclusion This article described some recommended topologies when deploying Service Registry. It provided a view of Service Registry deployment over time, moving from a simple stand-alone evaluation system, through a development and test sandbox, to a full production system. Resources - WebSphere Service Registry and Repository. - describes the architecture and functions of WebSphere Service Registry and Repository, along with sample integration scenarios that you can use to implement the product in an SOA. - Service Life Cycle Governance with IBM WebSphere Service Registry and Repository This IBM Redbook uses business scenarios to illustrate SOA governance using WebSphere Service Registry and Repository as the authoritative registry and repository. - Service Life Cycle Governance with IBM WebSphere Service Registry and Repository Advanced Life Cycle Edition This IBM Redbook identifies the key functions and capabilities that are required for service governance based on field best practices and client scenarios. 
- WebSphere Service Registry and Repository information portal This developerWorks wiki is an alternative portal for all resources related to WebSphere Service Registry and Repository.
http://www.ibm.com/developerworks/websphere/library/techarticles/1105_debelin/1105_debelin.html
17 March 2010 11:33 [Source: ICIS news] LONDON (ICIS news)--LyondellBasell has declared force majeure on polypropylene (PP) supply out of its 210,000 tonne/year Carrington plant in the UK due to problems with propylene supply and not for technical reasons, said a company source on Wednesday. “We finally managed to solve the technical issues but have had to declare force majeure due to problems receiving on-spec propylene,” said the source. Supplies at the site were depleted after a 12-day delay to the plant's restart following a month-long planned maintenance shutdown. “There is an acute shortage of some PP grades in the market at present,” he said. Homopolymer injection PP prices were reported within a wide range in Europe, with some regional differences, but net prices were now above €1,100/tonne ($1,507/tonne) FD (free delivered) NWE (northwest Demand had dipped during the second half of March after a very strong start, and players were said to be waiting for the settlement of the April propylene contract before committing to more volumes. Talk for April was for an increase in the new propylene contract, which PP producers said would lead to another push for higher PP prices next month. PP producers in ($1 = €0.73) For more on polypropylene
http://www.icis.com/Articles/2010/03/17/9343359/lyondellbasell-declares-force-majeure-on-pp-from-carrington.html
03 January 2012 17:30 [Source: ICIS news] HOUSTON (ICIS)--Chinese energy and petrochemicals major Sinopec has agreed to pay $2.2bn (€1.7bn) to acquire one-third of the interest of Devon Energy in five The assets are Niobrara, Mississippian, Devon CEO John Richels said the deal with Sinopec would improve “We can accelerate the de-risking and commercialisation of these five plays without diverting capital from our core development projects,” Richels added. The companies expect to close the deal in the first quarter of 2012, subject to regulatory approvals. For Sinopec, the deal marks its entry into the upstream The Chinese firm is already active in Canada's upstream oil and gas sector. Last year, Sinopec acquired a Canadian natural gas producer, and in 2010 it took a stake in a Canadian oil sands firm. In related news on Tuesday, French energy and petrochemicals major Total said it had acquired an interest in shale gas assets in Ohio from US firms Chesapeake and EnerV
http://www.icis.com/Articles/2012/01/03/9519954/sinopec-to-pay-2.2bn-for-part-of-devons-us-shale-gas-interests.html
SNMP - Purpose. SNMP is a protocol for getting the status (e.g., CPU load, free memory, network load) of computing devices such as routers, switches and even servers. - Object descriptor, managed object. The client can provide a globally unique names such as cpmCPUTotal5secRev (the average CPU load of a Cisco device for the past 5 seconds) to indicate the information that it wants, then the server should return such information. Such a textual name is called the “object descriptor”. The word “object” or “managed object” refers to the concept of CPU load. The actual CPU load in the device is called the “object instance”. - Object identifier (OID). To make sure that each object descriptor is unique, actually it is defined using a list of integers such as 1.3.6.1.4.1.9.9.109.1.1.1.1.6. Each integer is like a package in Java. For example, the integers in 1.3.6.1.4.1.9 represents iso (1), org (3), dod, i.e., department of defense (6), internet (1), private (4), enterprises (1), cisco (9) respectively. This allows the Internet authority to delegate the management of the namespace hierarchically: to private enterprises and then to Cisco, which can further delegate to its various divisions or product categories. Such a list of integers is called an “object identifier”. This is the ultimate identification for the managed object. - Even though the object descriptor should be unique, it is useful to see the hierarchy. Therefore, usually the full list of object descriptors is displayed such as iso.org.dod.internet.private.enterprises.cisco…cpmCPUTotal5secRev. - Why use integers instead of symbolic names? Probably to allow the network devices (with little RAM or CPU power) implementing SNMP to save space in processing. Symbolic names such as object descriptor can be used by human in commands, but in the protocol’s operation it is done using object identifier. - In principle, the object a.b.c.d and the object a.b.c.d.e on a device have NO containment relationship. That is, they are NOT like a Java object containing a child object. In fact, the value of each object in SNMP is basically a simple value (scalar) such as an integer or a string. The only relationship between them is their names. - Identifying an instance. Now comes the most complicated concept in SNMP. Consider the concept of the number of bytes that have been received by a network interface on a router. This concept is an object. As a router should have multiple interfaces, there must be multiple instances of that object. Then, how can an SNMP client indicate to the SNMP server which instance it is interested in? The solution is more or less a kludge: to allow the instance of, say, a.b.c.d, to represent a table (a compound, structural value), which contains rows (also compound, structural value) represented by a.b.c.d.e. Each row contains child object instances (with scalar values only). Each child object is called a “columnar object”. For example, each row may contain three object instances: a.b.c.d.e.f, a.b.c.d.e.g, and a.b.c.d.e.idx. If you’d like to refer to the a.b.c.d.e.f instance in a particular row, you will write a.b.c.d.e.f.<index>. The meaning of the index is defined by a.b.c.d.e (the row). For example, it may be defined as finding the row in the table which contains a columnar object a.b.c.d.e.idx whose value equals to <index>, then return the columnar object a.b.c.d.e.f as the result. - Note that this is the only situation where the value of an object can be a structure and that there is object containment relationship in SNMP. 
- What is confusing is that a.b.c.d.e.f is used both as an object identifier and the lookup key to find the child instance in the row. Unlike other object identifiers, the identifier now represents an object containment relationship so it must have a.b.c.d.e as the prefix, otherwise the server won’t know which table to look into and what is the definition for the index. - The complete identifier a.b.c.d.e.f.<index> is called an instance identifier. - Here is a concrete example: Consider iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifInOctets.1. The definition of iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry says that to find the row in the table, it should search for a row which contains a child object iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifIndex with the value of 1 (the index specified), then it will return the value of child object iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifInOctets in the row. Of course, for this to work, the iso.org.dod.internet.mgmt.mib-2.interfaces.ifTable.ifEntry.ifIndex child object in each row must have been assigned with sequential values 1, 2, …, etc. (which is indeed the case). - Finally, a simple case is that there is no table at all. For example, to find the up time of the device, use the object identifier iso.org.dod.internet.mgmt.mib-2.host.hrSystem.hrSystemUptime and append a .0 as the index, so the instance identifier is iso.org.dod.internet.mgmt.mib-2.host.hrSystem.hrSystemUptime.0. BTW, “hr” stands for “host resources”. - MIB (management information base). A MIB is just a a collection of managed objects (not instances). There are standard MIBs so that device manufacturers can implement and users can find the right object identifiers to use. There are also proprietary MIBs such as those designed by Cisco to provide information only available on its devices. - Finding the supported OIDs. How can you find out the MIBs or the object identifiers supported by a device? It is easier to just “walk the MIB tree”: to show all the instances in the tree or in a subtree. On Linux, this is done as below. You specify the IP or hostname of the server and optionally specify a node so that only that subtree is displayed (the object identifier starts with a dot, otherwise it will be assumed it is relative to iso.org.dod.internet.mgmt.mib-2): # snmpwalk <some args> localhost # snmpwalk <some args> localhost .iso.org.dod.internet.mgmt.mib-2.system # snmpwalk <some args> localhost system - Getting an instance. Just specify the instance identifier: # snmpget <some args> localhost .iso.org.dod.internet.mgmt.mib-2.host.hrSystem.hrSystemUptime.0 # snmpget <some args> localhost host.hrSystem.hrSystemUptime.0 - SNMP enttiy, engine and applications. SNMP is a peer to peer protocol. There is no concept of a server and a client (these terms are used here for simplicity). Instead, both peers have the same core capabilities such as sending or receiving SNMP messages, performing security processing (see below), integrating v1, v2c and v3 processing (dispatching) and etc. This part is called the SNMP engine. On top of the engine, there are different “applications”: one application may only respond to requests for object instances (called SNMP agent in v1 and v2), another application may probe others (called SNMP manager in v1 and v2), yet another may forward SNMP messages (SNMP proxy). The whole server or client is called the “SNMP entity”. - Context. On some devices there are multiple copies of a complete MIB subtree. 
For example, a physical router may support the concept of virtual routers. Each such virtual router will have a complete MIB subtree of instances. In that case, each virtual router may be indicated as a “context” in the SNMP server. A context is identified by name. For example, a virtual router could be identified as the “vr1″ context. There is a default context with empty string (“”) as its name. When a client sends a query, it can specify a context name. If not, it will query the default context. - Notification (trap). An SNMP server may actively send a notification to the client when some condition occurs. This is very much like a response without a request. Otherwise, everything is similar. The condition (e..g., the changes of the value of an instance or its going out of a range), the destination, the credentials used (see the security section below) and etc. are configured on the server. - Transport binding. Typically SNMP runs on UDP port 161. SNMP security - SNMP v1 and v2c security. In SNMP v1 and v2c (v2 was not widely adopted), there is little security. The only security is the “community string”. That is, the server is configured to be in a community identify by a string such as “foo”, “public” (commonly used and the default for many devices to mean no protection) or “private”. If the client can quote the community strnig, then it is allowed access. As the community string is included as plain text in SNMP packets, it practically provides no security. Therefore, in v1 and v2c, to access an SNMP server, you will do something like: # snmpwalk -v 2c -c public localhost # snmpget -v 2c -c public localhost <INSTANCE ID> - SNMP v3 security. In SNMP v3, there is user-based security. That is, the client may be required to authenticate the messages to the server as originating from a user using a password (authentication password). In addition, the client may be furthered required to encrypt the messages using another password (privacy password). This security requirement is called the “security level” (no authentication needed, authentication but no privacy, authentication with privacy). Therefore, in v3, you will access the server like: # snmpwalk -v 3 -l noAuthNoPriv localhost # snmpwalk -v 3 -l authNoPriv -u kent -A "my auth passwd" localhost # snmpwalk -v 3 -l authPriv -u kent -A "my auth passwd" -X "my priv passwd" localhost - Client configuration file. To save typing all those every time, you can store these parameters into the snmp.conf file as defaults. - Security limitation. It is a bad idea to specify the password on the command line as it can be revealed by local users using “ps”. Storing it into the configuration file is better. However, the file only allows a single authentication password and a single privacy password, not enough to handle the case of using different passwords for different servers. - Security name. A security name is just a user name. No more, no less. That’s the term used in the RFC (maybe in the future it could be something else?) - Algorithm. Further, there are different algorithms for authentication (HMAC using MD5 or SHA) and for privacy (encryption using DES or AES). So, you need to specify the algorithms to use: # snmpwalk -v 3 -l noAuthNoPriv localhost # snmpwalk -v 3 -l authNoPriv -u kent -a MD5 -A "my auth passwd" localhost # snmpwalk -v 3 -l authPriv -u kent -a MD5 -A "my auth passwd" -x DES -X "my priv passwd" localhost - Ensure the algorithms match. 
As SNMP uses UDP and each query and response may use just a single UDP packet, there is no negotiation of algorithms in a "connection phase" at all. In fact, presumably for simplicity of implementation, the algorithms used are not even indicated in the message, so the client must use the agreed-on algorithms as configured in the user account on the server; otherwise the server will simply fail to authenticate or decrypt the message. - Localized keys. The authentication password and privacy password of a user account are not used directly. The idea is that most likely you will use the same password for all the user accounts on all devices on site. If it were used directly, then a hacker controlling one device would be able to find the password and use it to access all the other devices. Therefore, when creating a user account, you specify the password, but the Linux SNMP server will combine it with a unique ID (called the "engine ID") generated for the device (such as the MAC or IP and/or a random number generated and stored on installation), hash it and use the result as the password (the "localized key"). This way, even if the hacker can find this localized key, he will still be unable to find the original password. - But how can a client generate the same key? It has to retrieve the engine ID first and then perform the same hashing. This is supported by the SNMP protocol. - User account creation. Due to the need to generate localized keys, the way to create user accounts on Linux is quite weird. You stop the server, specify the user account's name and password in a file, then start the server. It will read the password, convert it to a localized key and overwrite the file. This file is /var/lib/snmp/snmpd.conf on Linux: createUser kent MD5 "my auth password" DES "my privacy password" createUser paul SHA "my auth password, no encryption needed" - Access control. An access control rule can specify the user account, the lowest security level required, which part of the MIB tree is accessed (it may use an OID to identify a subtree), and the type of access (read or write) in order to grant the access. Here are some example settings on Linux (although human user names are used here, in practice they should represent devices): rouser john noauth .iso.org.dod.internet.mgmt.mib-2.system rouser kent priv .iso.org.dod.internet.mgmt.mib-2.system rouser kent auth .iso.org.dod.internet.mgmt.mib-2 rwuser paul priv - View. How do you specify several subtrees in an access control rule? You can define a view. A view has a name and is defined as including some subtrees and excluding some subtrees. Then you can refer to it by name in access control: view myview included .iso.org.dod.internet.mgmt.mib-2.system view myview included .iso.org.dod.internet.mgmt.mib-2.host view myview excluded .iso.org.dod.internet.mgmt.mib-2.host.hrStorage rwuser paul priv -V myview - Access control for v1 and v2c. For v1 and v2c, access control can specify the community string, the IP range of the client (the "source"), the subtree (OID) or the view: rocommunity public 192.168.1.0/24 .iso.org.dod.internet.mgmt.mib-2.system rwcommunity private localhost -V myview - Most flexible access control model. The above access control model is called the "traditional model". The new, most flexible access control model is called the "view-based access control model (VACM)", even though the former can also use views.
It may be more suitable to call it group-based access control, as it uses user groups in the rules (this is NOT the precise syntax yet!): group g1 kent group g1 paul #access <group> <context> <min sec level> <exact context?> <view for read> <view for write> <view for notify> access g1 "" auth exact myview1 myview2 myview3 - Mapping community string to user name. When using the VACM, instead of granting access to community strings, you need to merge v1 and v2c into the user-based access control processing. To do that, a community string along with the source can be mapped to a user name (the user name mapped to does NOT have to exist): com2sec user1 192.168.1.0/24 public # "default" source means any com2sec user2 default private - Security model. Even though the different types of identity in the different SNMP versions are represented uniformly as a user name, their trustworthiness is still significantly different. So, in specifying group memberships and access control rules, you are required to specify the "security model" (v1, v2c or the user security model as in v3), and this is the correct syntax: group g1 usm kent group g1 usm paul group g2 v2c user1 group g2 v1 user1 group g2 v1 user2 access g1 "" usm auth exact myview1 myview2 myview3 access g2 "" any noauth exact myview4 myview5 myview6 Reference: Concepts of SNMP (including v3) from our JCG partner Kent Tong at the Kent Tong's personal thoughts on information technology blog.
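Returning to the "Client configuration file" bullet above: the following is a sketch (not from the original article) of what a per-user ~/.snmp/snmp.conf might contain, assuming the standard Net-SNMP client default directives; the user name and passwords are placeholders. With these defaults in place, a bare "snmpwalk localhost" behaves like the fully spelled-out v3 command shown earlier.

defVersion 3
defSecurityName kent
defSecurityLevel authPriv
defAuthType MD5
defAuthPassphrase "my auth passwd"
defPrivType DES
defPrivPassphrase "my priv passwd"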
http://www.javacodegeeks.com/2013/04/concepts-of-snmp-including-v3.html
CC-MAIN-2015-06
en
refinedweb
Hi, I have a question regarding a warning that I get when I compile a program using the boost::ublas library. I have to say that I'm not a really experienced programmer, so I might be doing something really stupid... Anyway, I'm using WinXP with MinGW (gcc 3.4.5) and boost 1.34-1. I want to create a simple function that returns a vector. I have it written like this: ===> Definition (file.h) Code: #include <boost/numeric/ublas/vector.hpp> typedef boost::numeric::ublas::vector<double> vec; vec testfunc (vec &v); ===> Implementation (file.cpp) Code: #include "file.h" vec testfunc (vec &v) { vec v1 = v/2; return v1; } When I try to compile, I get the following warning: base class `class boost::numeric::ublas::storage_array<boost::numeric::ublas::unbounded_array<double, std::allocator<double> > >' should be explicitly initialized in the copy constructor What am I doing wrong and how can I fix this? The program works, but I'd like to get rid of the warning. Thank you very much for your answers.
http://cboard.cprogramming.com/cplusplus-programming/96013-base-class-should-explicitly-initialized-copy-constructor-printable-thread.html
CC-MAIN-2015-06
en
refinedweb
import java.io.InputStream;
import java.io.OutputStream;

/**
 * A {@code CipherService} uses a cryptographic algorithm called a
 * <a href="">Cipher</a> to convert an original input source using a {@code key} to an uninterpretable format.
 * The resulting encrypted output is only able to be converted back to original form with a {@code key} as well.
 * {@code CipherService}s can perform both encryption and decryption.
 * <h2>Cipher Basics</h2>
 * For what is known as <em>Symmetric</em> {@code Cipher}s, the {@code Key} used to encrypt the source is the same
 * as (or trivially similar to) the {@code Key} used to decrypt it.
 * <p/>
 * For <em>Asymmetric</em> {@code Cipher}s, the encryption {@code Key} is not the same as the decryption {@code Key}.
 * The most common type of Asymmetric Ciphers are based on what is called public/private key pairs:
 * <p/>
 * A <em>private</em> key is known only to a single party, and as its name implies, is supposed to be kept very private
 * and secure.  A <em>public</em> key that is associated with the private key can be disseminated freely to anyone.
 * Then data encrypted by the public key can only be decrypted by the private key and vice versa, but neither party
 * need share their private key with anyone else.  By not sharing a private key, you can guarantee no 3rd party can
 * intercept the key and therefore use it to decrypt a message.
 * <p/>
 * This asymmetric key technology was created as a more secure alternative to symmetric ciphers that sometimes
 * suffer from man-in-the-middle attacks since, for data shared between two parties, the same Key must also be
 * shared and may be compromised.
 * <p/>
 * Note that a symmetric cipher is perfectly fine to use if you just want to encode data in a format no one else
 * can understand and you never give away the key.  Shiro uses a symmetric cipher when creating certain
 * HTTP Cookies for example - because it is often undesirable to have user's identity stored in a plain-text cookie,
 * that identity can be converted via a symmetric cipher.  Since the same exact Shiro application will receive
 * the cookie, it can decrypt it via the same {@code Key} and there is no potential for discovery since that Key
 * is never shared with anyone.
 * <h2>{@code CipherService}s vs JDK {@link javax.crypto.Cipher Cipher}s</h2>
 * Shiro {@code CipherService}s essentially do the same things as JDK {@link javax.crypto.Cipher Cipher}s, but in
 * simpler and easier-to-use ways for most application developers.  When thinking about encrypting and decrypting data
 * in an application, most app developers want what a {@code CipherService} provides, rather than having to manage the
 * lower-level intricacies of the JDK's {@code Cipher} API.  Here are a few reasons why most people prefer
 * {@code CipherService}s:
 * <ul>
 * <li><b>Stateless Methods</b> - {@code CipherService} method calls do not retain state between method invocations.
 * JDK {@code Cipher} instances do retain state across invocations, requiring its end-users to manage the instance
 * and its state themselves.</li>
 * <li><b>Thread Safety</b> - {@code CipherService} instances are thread-safe inherently because no state is
 * retained across method invocations.  JDK {@code Cipher} instances retain state and cannot be used by multiple
 * threads concurrently.</li>
 * <li><b>Single Operation</b> - {@code CipherService} method calls are single operation methods: encryption or
 * decryption in their entirety are done as a single method call.  This is ideal for the large majority of developer
 * needs where you have something unencrypted and just want it decrypted (or vice versa) in a single method call.  In
 * contrast, JDK {@code Cipher} instances can support encrypting/decrypting data in chunks over time (because it
 * retains state), but this often introduces API clutter and confusion for most application developers.</li>
 * <li><b>Type Safe</b> - There are {@code CipherService} implementations for different Cipher algorithms
 * ({@code AesCipherService}, {@code BlowfishCipherService}, etc).  There is only one JDK {@code Cipher} class to
 * represent all cipher algorithms/instances.
 * <li><b>Simple Construction</b> - Because {@code CipherService} instances are type-safe, instantiating and using
 * one is often as simple as calling the default constructor, for example, <code>new AesCipherService();</code>.  The
 * JDK {@code Cipher} class however requires using a procedural factory method with String arguments to indicate how
 * the instance should be created.  The String arguments themselves are somewhat cryptic and hard to
 * understand unless you're a security expert.  Shiro hides these details from you, but allows you to configure them
 * if you want.</li>
 * </ul>
 *
 * @see BlowfishCipherService
 * @see AesCipherService
 * @since 1.0
 */
public interface CipherService {

    /**
     * Decrypts encrypted data via the specified cipher key and returns the original (pre-encrypted) data.
     * Note that the key must be in a format understood by the CipherService implementation.
     *
     * @param encrypted     the previously encrypted data to decrypt
     * @param decryptionKey the cipher key used during decryption.
     * @return a byte source representing the original form of the specified encrypted data.
     * @throws CryptoException if there is an error during decryption
     */
    ByteSource decrypt(byte[] encrypted, byte[] decryptionKey) throws CryptoException;

    /**
     * Receives encrypted data from the given {@code InputStream}, decrypts it, and sends the resulting decrypted data
     * to the given {@code OutputStream}.
     * <p/>
     * <b>NOTE:</b> This method <em>does NOT</em> flush or close either stream prior to returning - the caller must
     * do so when they are finished with the streams.  For example:
     * <pre>
     * try {
     *     InputStream in = ...
     *     OutputStream out = ...
     *     cipherService.decrypt(in, out, decryptionKey);
     * } finally {
     *     if (in != null) {
     *         try {
     *             in.close();
     *         } catch (IOException ioe1) { ... log, trigger event, etc }
     *     }
     *     if (out != null) {
     *         try {
     *             out.close();
     *         } catch (IOException ioe2) { ... log, trigger event, etc }
     *     }
     * }
     * </pre>
     *
     * @param in            the stream supplying the data to decrypt
     * @param out           the stream to send the decrypted data
     * @param decryptionKey the cipher key to use for decryption
     * @throws CryptoException if there is any problem during decryption.
     */
    void decrypt(InputStream in, OutputStream out, byte[] decryptionKey) throws CryptoException;

    /**
     * Encrypts data via the specified cipher key.  Note that the key must be in a format understood by
     * the {@code CipherService} implementation.
     *
     * @param raw           the data to encrypt
     * @param encryptionKey the cipher key used during encryption.
     * @return a byte source with the encrypted representation of the specified raw data.
     * @throws CryptoException if there is an error during encryption
     */
    ByteSource encrypt(byte[] raw, byte[] encryptionKey) throws CryptoException;

    /**
     * Receives the data from the given {@code InputStream}, encrypts it, and sends the resulting encrypted data to the
     * given {@code OutputStream}.
     * <p/>
     * <b>NOTE:</b> This method <em>does NOT</em> flush or close either stream prior to returning - the caller must
     * do so when they are finished with the streams.  For example:
     * <pre>
     * try {
     *     InputStream in = ...
     *     OutputStream out = ...
     *     cipherService.encrypt(in, out, encryptionKey);
     * } finally {
     *     if (in != null) {
     *         try {
     *             in.close();
     *         } catch (IOException ioe1) { ... log, trigger event, etc }
     *     }
     *     if (out != null) {
     *         try {
     *             out.close();
     *         } catch (IOException ioe2) { ... log, trigger event, etc }
     *     }
     * }
     * </pre>
     *
     * @param in            the stream supplying the data to encrypt
     * @param out           the stream to send the encrypted data
     * @param encryptionKey the cipher key to use for encryption
     * @throws CryptoException if there is any problem during encryption.
     */
    void encrypt(InputStream in, OutputStream out, byte[] encryptionKey) throws CryptoException;

}
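The Javadoc above points to AesCipherService as the typical concrete implementation. As a rough usage sketch (not part of the Shiro source listing; it assumes Shiro 1.2.x with org.apache.shiro.crypto.AesCipherService and org.apache.shiro.util.ByteSource on the classpath):

import java.security.Key;
import org.apache.shiro.crypto.AesCipherService;
import org.apache.shiro.util.ByteSource;

public class CipherServiceExample {
    public static void main(String[] args) {
        // Type-safe construction, as described in the Javadoc above.
        AesCipherService cipherService = new AesCipherService();

        // Generate a new symmetric key; a real application would store and
        // protect this key rather than regenerate it on every run.
        Key key = cipherService.generateNewKey();

        byte[] secret = "my plaintext".getBytes();

        // Single-operation, stateless encrypt/decrypt calls returning ByteSource.
        ByteSource encrypted = cipherService.encrypt(secret, key.getEncoded());
        ByteSource decrypted = cipherService.decrypt(encrypted.getBytes(), key.getEncoded());

        System.out.println(new String(decrypted.getBytes())); // prints "my plaintext"
    }
}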
http://shiro.apache.org/static/1.2.2/xref/org/apache/shiro/crypto/CipherService.html
CC-MAIN-2015-06
en
refinedweb
30 April 2008 05:29 [Source: ICIS news] By Prema Viswanathan SINGAPORE (ICIS news)--China's polyethylene (PE) market is expected to be stable in coming weeks as pressure to raise prices from surging feedstock costs and low inventories is offset by resistance from end users facing a margin squeeze, sellers and buyers said on Wednesday. Ahead of the Labour Day holidays starting on 1 May, trade slowed in China. Import prices managed to hold firm this week, having risen by $10-30/tonne (€6-19/tonne) last Friday from a week earlier, to $1,600-1,760/tonne CFR (cost and freight) China, on the back of high crude, naphtha and ethylene costs. Buying sentiment for linear low density PE (LLDPE) was quite strong, due to unusually low inventories among Chinese buyers and strong demand from the agricultural film application segment. The substitution of low density PE (LDPE) with LLDPE, due to persistent shortages in the LDPE market, also boosted demand. These positive factors, for the time being, outweighed the negative sentiment triggered by the ban on thin PE bags which would come into effect on 1 June, and the credit control measures implemented by local banks, suppliers and traders said. "In any case, the effect of the ban on PE bags less than 25 microns in thickness will be short-lived, going by the trend seen in other countries where such bans have been implemented," a Hong Kong-based trader who sells into China said. "Eventually, PE consumption will increase as thicker bags come into vogue," he added. Credit control measures initiated by the Chinese government to stem overheating of the economy, an increase in minimum wages for industrial workers, and strengthening of the yuan had increased costs for converters, an end user said. The economic downturn in the Speculation about the possible withdrawal of the export tax rebate on polymers and finished goods also dampened sentiment somewhat, a trader said. "However, this will only affect exports. Domestic demand continues to be strong ahead of the Beijing Olympics, and this will compensate for any downturn in export demand," he said. In the high density PE (HDPE) market, blow moulding and injection grades witnessed robust demand, and prices of these grades caught up with those of HDPE film grade. Demand for HDPE film, however, languished due to relatively high inventories. "We are facing considerable customer resistance in Sentiment in the HDPE film market was also adversely affected by low priced "Although the volumes being offered are not significant enough to be reflective of the general market trend, the offers do tend to dampen buying sentiment," a second trader said. ($1 = €0.64) For more on PE visit ICIS chemical intelligence. Jeremiah Ch
http://www.icis.com/Articles/2008/04/30/9120154/china-pe-import-market-stable-near-term.html
CC-MAIN-2015-06
en
refinedweb
An Introduction to Data Analysis using Spark SQL This article was published as a part of the Data Science Blogathon Introduction Spark is an analytics engine that is used by data scientists all over the world for Big Data Processing. It is built on top of Hadoop and can process batch as well as streaming data. Hadoop is a framework for distributed computing that splits the data across multiple nodes in a cluster and then uses of-the-self computing resources for computing the data in parallel. As this is open-source software and it works lightning fast, it is broadly used for big data processing. Before start, I assume that you have a certain amount of familiarity with spark, and you have worked on small applications to handle big data. Also, familiarity with Spark RDDs, Spark DataFrame, and a basic understanding of relational databases and SQL will help to proceed further in this article. Spark Catalyst optimizer We shall start this article by understanding the catalyst optimizer in spark 2 and see how it creates logical and physical plans to process the data in parallel. Spark 2 includes the catalyst optimizer to provide lightning-fast execution. The catalyst optimizer is an optimization engine that powers the spark SQL and the DataFrame API. The input to the catalyst optimizer can either be a SQL query or the DataFrame API methods that need to be processed. These are known as input relations. Since the result of a SQL query is a spark DataFrame we can consider both as similar. Using these inputs, the catalyst optimizer comes up with a logical optimization plan. But, at this stage, the logical plan is said to be unresolved because it doesn’t take into account the types of columns. In fact, at this stage, the optimizer is not aware of the existence of the columns. This is where Catalog comes into the picture. The Catalog contains the details about every table from all the data sources in the form of a catalog. The Catalog is used to perform the analysis of the inputs and results of the logical plan. After this point, actual optimization takes place. This is where the input will be passed to look for possible optimizations. These steps may include pruning of projections and simplifying expressions to simplify the query so that it executes more efficiently. Then the optimizer will come up with different optimizations in different combinations and will generate a collection of logical plans. Following that, the cost of each of the plans will be calculated. The logical plan with the lowest cost in terms of resources and execution time will be picked. After the logical plan has been picked, it needs to be translated into the physical plan by taking into account the available resources. So, the lowest cost logical plan as input, number of physical plans will be generated and the cost for each of these will be calculated using the Tungsten engine. The cost calculation involves several parameters including the resource available and the overall performance and the efficiency of resource use for each of the physical plans. The output of this stage will be a Java bytecode that will run on the Sparks execution engine. This is the final output of the catalyst optimizer. Following diagram of a high-level logical overview of catalyst optimizer: Introduction to Spark SQL There are several operations that can be performed on the Spark DataFrame using DataFrame APIs. It allows us to perform various transformations using various rows and columns from the Spark DataFrame. 
We can also perform aggregation and windowing operations. Those who have a background working with relational databases and SQL will find the familiarity of DataFrame with relational tables. You can perform several analytical tasks by writing queries in spark SQL. I will show several examples by which you can understand how we can treat a spark DataFrameas as a relational database table. Creating Spark Session For this, we need to set up the spark in our system and after we log into the Spark console, the following packages need to be imported to perform the examples. from pyspark.sql import SparkSession from pyspark.sql.types import * from pyspark.sql.functions import * from pyspark.sql.types import Row from datetime import datetime After the necessary imports, we have to initialize the spark session by the following command: spark = SparkSession.builder.appName("Python Spark SQL basic example").config("spark.some.config.option", "some-value").getOrCreate() Then we will create a Spark RDD using the parallelize function. This RDD contains two rows for two students and the values are self-explanatory. student_records = sc.parallelize([Row(roll_no=1,name='John Doe',passed=True,marks={'Math':89,'Physics':87,'Chemistry':81},sports =['chess','football'], DoB=datetime(2012,5,1,13,1,5)), Row(roll_no=2,name='John Smith',passed=False,marks={'Math':29,'Physics':31,'Chemistry':36}, sports =['volleyball','tabletennis'], DoB=datetime(2012,5,12,14,2,5))]) Creating DataFrame Let’s create a DataFrame from this RDD and show the resulting DataFrame by following the command. student_records_df = student_records.toDF() student_records_df.show() Now, as we can see the content of column ‘marks’ has been truncated. To view the full content we can run the following command: student_records_df.show(truncate=False) Creating Temporary View The above DataFrame can be treated as a relational table. For that, by using the following command we can create a relational view named ‘records’ which is valid for the created spark session. student_records_df.createOrReplaceTempView('records') It is time for us to now run a SQL query against this view and show the results. spark.sql("SELECT * FROM records").show() Here we can verify that the spark.sql returns Spark DataFrame. Accessing Elements of List or Dictionary within DataFrame While creating the RDD, we have populated the ‘marks’ filed with a dictionary data structure and the ‘sports’ filed with a list data structure. We can write SQL queries that will pick specific elements from that dictionary and list. spark.sql('SELECT roll_no, marks["Physics"], sports[1] FROM records').show() We can specify the position of the element in the list or the case of the dictionary, we access the element using its key. Where Clause Let’s see the use of the where clause in the following example: spark.sql("SELECT * FROM records where passed = True").show() In the above example, we have selected the row for which the ‘passed’ column has the boolean value True. We can write where clause using the values from the data structure field also. In the following example, we are using the key ‘Chemistry’ from the marks dictionary. spark.sql('SELECT * FROM records WHERE marks["Chemistry"] < 40').show() Creating Global View The view ‘records’ we have created above has the scope only for the current session. Once the session disappears, the view will be terminated, and it will not be accessible. 
However, if we want other sessions which were initiated in the same application to be able to access the view even if the session that created the view ends, then we make a global view by using the following command: student_records_df.createGlobalTempView('global_record') The scope of this view will be at the application level rather than the session-level. Now, let’s run a select query on this global view: spark.sql("SELECT * FROM global_temp.global_records").show() All the global views are preserved in the database called: global_temp. Dropping Columns from DataFrame If we want to see only the columns of our DataFrame, we can use the following command: student_records_df.columns If we want to drop any column, then we can use the drop command. In our dataset, let’s try to drop the ‘passed’ column. student_records_df = student_records_df.drop('passed') Now, we can see that we don’t have the column ‘passed’ anymore in our DataFrame. Few More Queries Let’s create a column that shows the average marks for each student: spark.sql("SELECT round( (marks.Physics+marks.Chemistry+marks.Math)/3) avg_marks FROM records").show() Now, we will add this column to our existing DataFrame. student_records_df=spark.sql("SELECT *, round( (marks.Physics+marks.Chemistry+marks.Math)/3) avg_marks FROM records") student_records_df.show() We had dropped the column ‘passed’ earlier. We can derive a new column named ‘status’, where we will put the status ‘passed’ or ‘failed’ after calculating the average marks and we will check if the average marks are greater than 40 percent. To perform that, first, we must update the view again. student_records_df.createOrReplaceTempView('records') We can achieve this by the following query: student_records_df = student_records_df.withColumn('status',(when(col('avg_marks')>=40, 'passed')).otherwise('failed')) student_records_df.show() the above command adds a new column in the existing DataFrame by executing actions defined within it. Group by and Aggregation Let’s look into some more functionalities of Spark SQL. For that, we have to take a new DataFrame. Let’s create a new DataFrame with employee records. employeeData =(('John','HR','NY',90000,34,10000), ('Neha','HR','NY',86000,28,20000), ('Robert','Sales','CA',81000,56,22000), ('Maria','Sales','CA',99000,45,15000), ('Paul','IT','NY',98000,38,14000), ('Jen','IT','CA',90000,34,20000), ('Raj','IT','CA',93000,28,28000), ('Pooja','IT','CA',95000,31,19000)) columns = ('employee_name','department','state','salary','age','bonus') employeeDf = spark.createDataFrame(employeeData, columns) If we wish to query the department wise total salary, we can achieve that in the following way: employeeDf.groupby(col('department')).agg(sum(col('salary'))).show() The result shows the department-wise total salary. If we want to see the total salary in an ordered way we can achieve by following way. employeeDf.groupby(col('department')).agg(sum(col('salary')).alias('total_sal')).orderBy('total_sal').show() Here, the total salary is appearing in ascending order. 
If we want to view this in descending order, we have to run the following command: employeeDf.groupby(col('department')).agg(sum(col('salary')).alias('total_sal')).orderBy(col('total_sal').desc()).show() We can perform group by and aggregate on multiple DataFrame columns at once: employeeDf.groupby(col('department'),col('state')).agg(sum(col('bonus'))).show() We can run more aggregates at one time in the following way: employeeDf.groupby(col('department')).agg(avg(col('salary')).alias('average_salary'),max(col('bonus')).alias('maximum_bonus')).show() Windowing in Spark Window functions allow us to calculate results such as the rank of a given row over a range of input rows. Suppose we want to calculate the second highest salary of each department. In such scenarios, we can use spark window functions. To use windowing in spark, we have to import the Window package from pyspark.sql.window and then we can write the following: from pyspark.sql.window import Window windowSpec = Window.partitionBy("department").orderBy(col("salary").desc()) employeeDf = employeeDf.withColumn("rank",dense_rank().over(windowSpec)) employeeDf.filter(col('rank') == 2).show() In the above sequence of commands, first, we have imported the Window package from pyspark.sql.window. Then we have defined the specification for windowing. Next, we have applied the window function to the DataFrame and added a new column rank that ranks salaries within each department (rank 1 being the highest). Finally, we ran a command to show the second highest salary of each department by filtering the DataFrame where the rank is 2. Joins in Spark To perform a join, let's create another dataset containing the managers of each department. managers = (('Sales','Maria'),('HR','John'),('IT','Pooja')) mg_columns = ('department', 'manager') managerDf = spark.createDataFrame(managers, mg_columns) managerDf.show() Now, if we want to view the name of the manager of each employee, we can run the following command: employeeDf.join(managerDf, employeeDf['department'] == managerDf['department'], how='inner').select(col('employee_name'),col('manager')).show() We can perform the join of two DataFrames with the join method. We have to specify the columns on which we will be performing the join and the type of join we want to perform (inner, left, right, etc.) within the join method. Conclusion In this article, we have learned the basics of Spark SQL, why it works lightning fast and how to manipulate spark DataFrames using Spark SQL. Also, we have learned to partition the data and order them logically and finally, how we can work with multiple DataFrames using join. Thank you for reading. Hope these skills will help you to perform complex analysis on your data at speed. Happy Learning!!
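To tie the join example back to the catalyst optimizer discussed at the start of this article, Spark can print the logical and physical plans it produced for any DataFrame. This short snippet is an addition to the original article; explain() is a standard DataFrame method, and the exact plan output will vary by Spark version.

# Build the same join as above, but keep a handle to it so we can inspect it
joined_df = employeeDf.join(managerDf,
                            employeeDf['department'] == managerDf['department'],
                            how='inner')

# Extended mode prints the parsed, analyzed and optimized logical plans
# chosen by the catalyst optimizer, followed by the physical plan
joined_df.explain(True)

# The short form prints only the selected physical plan
joined_df.explain()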
https://www.analyticsvidhya.com/blog/2021/08/an-introduction-to-data-analysis-using-spark-sql/
CC-MAIN-2022-33
en
refinedweb
@Generated(value="OracleSDKGenerator", comments="API Version: 20210630") public class CancelBuildRunRequest extends BmcRequest<CancelBuildRunDetails> getInvocationCallback, getRetryConfiguration, setInvocationCallback, setRetryConfiguration, supportsExpect100Continue clone, finalize, getClass, notify, notifyAll, wait, wait, wait public CancelBuildRunRequest() public CancelBuildRunDetails getCancelBuildRunDetails() Parameter details required to cancel a build run. public String getBuildRunId() Unique build run earlier due to conflicting operations. For example, if a resource has been deleted and purged from the system, then a retry of the original creation request might be rejected. public CancelBuildRunDetails getBody$() Alternative accessor for the body parameter. getBody$in class BmcRequest<CancelBuildRunDetails> public CancelBuildRunRequest.Builder toBuilder() Return an instance of CancelBuildRunRequest.Builder that allows you to modify request properties. CancelBuildRunRequest.Builderthat allows you to modify request properties. public static CancelBuild<CancelBuildRunDetails> public int hashCode() BmcRequest Uses invocationCallback and retryConfiguration to generate a hash. hashCodein class BmcRequest<CancelBuildRunDetails>
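The reference above is easier to follow with a usage sketch. The following is not from the Oracle documentation; it assumes the usual OCI Java SDK builder pattern, an already-authenticated DevopsClient, and that CancelBuildRunDetails exposes a reason field (an assumption); the OCID is a placeholder.

import com.oracle.bmc.devops.DevopsClient;
import com.oracle.bmc.devops.model.CancelBuildRunDetails;
import com.oracle.bmc.devops.requests.CancelBuildRunRequest;
import com.oracle.bmc.devops.responses.CancelBuildRunResponse;

// Details required to cancel the build run ('reason' is assumed here).
CancelBuildRunDetails details = CancelBuildRunDetails.builder()
        .reason("Canceled by user")
        .build();

// Build the request with the unique build run identifier.
CancelBuildRunRequest request = CancelBuildRunRequest.builder()
        .buildRunId("ocid1.devopsbuildrun.oc1..exampleuniqueID")
        .cancelBuildRunDetails(details)
        .build();

// Submit the cancellation through an authenticated client.
CancelBuildRunResponse response = devopsClient.cancelBuildRun(request);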
https://docs.oracle.com/en-us/iaas/tools/java/2.38.0/com/oracle/bmc/devops/requests/CancelBuildRunRequest.html
CC-MAIN-2022-33
en
refinedweb
07-03-2018 08:01 AM How to assert for a "not null" response in the json response from soap api? How to assert for a "not null" response in the json response from a soap api using groovy scripting. 07-03-2018 11:10 AM You can try something like this snippet of code, which asserts that the response is not null, is not an empty collection, and is not an empty string: def response = context.expand( '${TestStepName#Response}' ) assert (response != null) && (response != "") && (response != []); 07-05-2018 10:14 AM You can do this in a script assertion by writing the code below: def response = messageExchange.getResponseContent() assert (response != null) && (response != "") && (response != []):"Assertion failed, Got Null Response" Thanks and Regards, Himanshu Tayal
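Both replies check only that the whole response is non-empty. If the goal is to assert that a particular field inside the JSON body is not null, one option (not from the original thread; it assumes the response body is valid JSON, and 'customerId' is a placeholder field name) is to parse it with Groovy's JsonSlurper inside the script assertion:

import groovy.json.JsonSlurper

def response = messageExchange.getResponseContent()
assert response : "Assertion failed, got an empty response"

// Parse the JSON body and check the specific field you care about
def json = new JsonSlurper().parseText(response)
assert json.customerId != null : "Assertion failed, 'customerId' is null"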
https://community.smartbear.com/t5/ReadyAPI-Questions/How-to-assert-for-a-quot-not-null-quot-response-in-the-json/m-p/167267/highlight/true
CC-MAIN-2022-33
en
refinedweb
psa_drv_se_context_t Struct Reference Driver context structure. #include <crypto_se_driver.h> Driver context structure. Driver functions receive a pointer to this structure. Each registered driver has one instance of this structure. Implementations must include the fields specified here and may include other fields. Member Function Documentation ◆ MBEDTLS_PRIVATE() [1/3] A read-only pointer to the driver's persistent data. Drivers typically use this persistent data to keep track of which slot numbers are available. This is only a guideline: drivers may use the persistent data for any purpose, keeping in mind the restrictions on when the persistent data is saved to storage: the persistent data is only saved after calling certain functions that receive a writable pointer to the persistent data. The core allocates a memory buffer for the persistent data. The pointer is guaranteed to be suitably aligned for any data type, like a pointer returned by malloc (but the core can use any method to allocate the buffer, not necessarily malloc). The size of this buffer is in the persistent_data_size field of this structure. Before the driver is initialized for the first time, the content of the persistent data is all-bits-zero. After a driver upgrade, if the size of the persistent data has increased, the original data is padded on the right with zeros; if the size has decreased, the original data is truncated to the new size. This pointer is to read-only data. Only a few driver functions are allowed to modify the persistent data. These functions receive a writable pointer. These functions are: - psa_drv_se_t::p_init - psa_drv_se_key_management_t::p_allocate - psa_drv_se_key_management_t::p_destroy The PSA Cryptography core saves the persistent data from one session to the next. It does this before returning from API functions that call a driver method that is allowed to modify the persistent data, specifically: - psa_crypto_init() causes a call to psa_drv_se_t::p_init, and may call psa_drv_se_key_management_t::p_destroy to complete an action that was interrupted by a power failure. - Key creation functions cause a call to psa_drv_se_key_management_t::p_allocate, and may cause a call to psa_drv_se_key_management_t::p_destroy in case an error occurs. - psa_destroy_key() causes a call to psa_drv_se_key_management_t::p_destroy. ◆ MBEDTLS_PRIVATE() [2/3] The size of persistent_data in bytes. This is always equal to the value of the persistent_data_size field of the psa_drv_se_t structure when the driver is registered. ◆ MBEDTLS_PRIVATE() [3/3] Driver transient data. The core initializes this value to 0 and does not read or modify it afterwards. The driver may store whatever it wants in this field.
https://docs.silabs.com/gecko-platform/4.1/service/api/structpsa-drv-se-context-t
CC-MAIN-2022-33
en
refinedweb
I am just playing around a bit with pytorch and have a model which has the following structure: Layer A - 100 trainable parameters Layer B - 0 trainable parameters Layer C - 5 trainable parameters In my forward function, I have something like: def forward(x, y): a = layer_a(x) b = layer_b(a) loss = layer_c(b, y) return {"loss": loss} layer_b is simply defined as: class LayerB(nn.Module): def __init__(self, params): super().__init__() self.params = params def forward(self, x): return x.clone() Now when I run this model, it makes one step through the training process and then crashes with: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.FloatTensor [32, 1]], which is output 0 of AsStridedBackward0, is at version 2; expected version 1 instead I am trying to understand why this is happening. Do you think it is necessary to call x.clone().detach() since there are no trainable parameters in that layer? I ask because the model training at least does not crash when I do that.
https://discuss.pytorch.org/t/pytorch-layer-with-no-trainable-parameters/143700
CC-MAIN-2022-33
en
refinedweb
4. Analysis modules The MDAnalysis.analysis module contains code to carry out specific analysis functionality for MD trajectories. An analysis using the available modules usually follows the same structure: Import the desired module, since analysis modules are not imported by default. Initialize the analysis class instance from the previously imported module. Run the analysis, optionally for specific trajectory slices. Access the analysis results from the results attribute. from MDAnalysis.analysis import ExampleAnalysisModule # (e.g. RMSD) analysis_obj = ExampleAnalysisModule.AnalysisClass(universe, ...) analysis_obj.run(start=start_frame, stop=stop_frame, step=step) print(analysis_obj.results) Please see the individual module documentation for any specific caveats and also read and cite the reference papers associated with these algorithms. Additional dependencies Some of the modules in MDAnalysis.analysis require additional Python packages to enable full functionality. For example, MDAnalysis.analysis.encore provides more options if scikit-learn is installed. If you installed MDAnalysis with pip (see Installing MDAnalysis) these packages are not automatically installed. However, one can add the [analysis] tag to the pip command to force their installation. If you installed MDAnalysis with conda The building block for the analysis modules is MDAnalysis.analysis.base.AnalysisBase. To build your own analysis class start by reading the documentation. 4.2. Distances and contacts - Deprecated modules: 4.4. Membranes and membrane proteins 4.5. Nucleic acids 4.6. Polymers 4.7. Structure 4.7.1. Macromolecules 4.7.2. Liquids 4.8. Volumetric analysis 4.9. Dimensionality Reduction 4.10. Legacy analysis modules The MDAnalysis.analysis.legacy module contains code that for a range of reasons is not as well maintained and tested as the other analysis modules. Use with care.
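As a concrete instance of the four-step pattern sketched above, the following is a minimal RMSD calculation. It is not part of the original page; it assumes MDAnalysis is installed together with its test datafiles, and that the rms module exposes its output under results.rmsd (columns: frame, time, RMSD).

import MDAnalysis as mda
from MDAnalysis.analysis import rms               # 1. import the desired module
from MDAnalysis.tests.datafiles import PSF, DCD   # small bundled example trajectory

u = mda.Universe(PSF, DCD)

R = rms.RMSD(u, select="backbone")   # 2. initialize the analysis class instance
R.run()                              # 3. run (start/stop/step slice the trajectory)

print(R.results.rmsd[:5])            # 4. access results: frame, time (ps), RMSD (Angstrom)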
https://docs.mdanalysis.org/dev/documentation_pages/analysis_modules.html
CC-MAIN-2022-33
en
refinedweb
9-Axis LSM9DS1 Sensor 9-Axis 9-Axis Once you have the correct Tools selections, upload the program to your processor! Code /************************************************************************ * LSM9SD1 9-Axis Wireling Example: * This code has the ability to print out all available values from the * 9-Axis sensor, but some are commented out in this Sketch so as not to * over-crowd the Serial Monitor. This program shows the basic methods * on interfacing with this sensor to retrieve value readings. * * Hardware by: TinyCircuits * Written by: Ben Rose, Lavérena Wienclaw, & Brandon Farmer for TinyCircuits * * Initiated: 11/20/2017 * Updated: 12/06/2019 ************************************************************************/ #include <Wire.h> // For I2C connection #include <Wireling.h> // For Wireling Interfacing // For the communication with the LSM9DS1 int DISPLAY_INTERVAL = 300; // interval between pose displays // Global variables to retrieve, store, and schedule readings from the sensor unsigned long lastDisplay; unsigned long lastRate; int sampleCount; RTVector3 accelData; RTVector3 gyroData; RTVector3 compassData; RTVector3 fusionData; // Make Serial Monitor compatible for all TinyCircuits processors #if defined(ARDUINO_ARCH_AVR) #define SerialMonitorInterface Serial #elif defined(ARDUINO_ARCH_SAMD) #define SerialMonitorInterface SerialUSB #endif void setup() { int errcode; SerialMonitorInterface.begin(115200); while (!SerialMonitorInterface); Wire.begin(); // Begin I2C communication // Initialize Wireling Wireling.begin(); Wireling.selectPort(0); //9-Axis Sensor Port, may differ for you delay(100); imu = RTIMU::createIMU(&settings); // create the imu object SerialMonitorInterface.print("ArduinoIMU begin using device "); SerialMonitorInterface.println(imu->IMUName()); if ((errcode = imu->IMUInit()) < 0) { SerialMonitorInterface.print("Failed to init IMU: "); SerialMonitorInterface.println(errcode); } // See line 69 of RTIMU.h for more info on compass calibaration if (imu->getCalibrationValid()) SerialMonitorInterface.println("Using compass calibration"); else SerialMonitorInterface fusion.newIMUData(imu->getGyro(), imu->getAccel(), imu->getCompass(), imu->getTimestamp()); sampleCount++; if ((delta = now - lastRate) >= 1000) { SerialMonitorInterface.print("Sample rate: "); SerialMonitorInterface.print(sampleCount); if (imu->IMUGyroBiasValid()) { SerialMonitorInterface.println(", gyro bias valid"); } else { SerialMonitorInterface.println(", calculating gyro bias - don't move IMU!!"); } sampleCount = 0; lastRate = now; } if ((now - lastDisplay) >= DISPLAY_INTERVAL) { lastDisplay = now; // Get updated readings from sensor and update those values in the // respective RTVector3 object accelData = imu->getAccel(); gyroData = imu->getGyro(); compassData = imu->getCompass(); fusionData = fusion.getFusionPose(); displayAxis("Accel:", accelData.x(), accelData.y(), accelData.z()); // The following data is commented out for easy reading and you can uncomment it all by // highlighting it and using "'Ctrl' + '/'" for windows or "'COMMAND' + '/'" for Mac // // Gyro data // displayAxis("Gyro:", gyroData.x(), gyroData.y(), gyroData.z()); // // // Compass data // displayAxis("Mag:", compassData.x(), compassData.y(), compassData.z()); // // // Fused output // displayDegrees("Pose:", fusionData.x(), fusionData.y(), fusionData.z()); SerialMonitorInterface.println(); } } } // Prints out pieces of different radian axis data to Serial Monitor void displayAxis(const char *label, float x, float y, float z) { 
SerialMonitorInterface.print(label); SerialMonitorInterface.print(" x:"); SerialMonitorInterface.print(x); SerialMonitorInterface.print(" y:"); SerialMonitorInterface.print(y); SerialMonitorInterface.print(" z:"); SerialMonitorInterface.print(z); } // Converts axis data from radians to degrees and prints values to Serial Monitor void displayDegrees(const char *label, float x, float y, float z) { SerialMonitorInterface.print(label); SerialMonitorInterface.print(" x:"); SerialMonitorInterface.print(x * RTMATH_RAD_TO_DEGREE); SerialMonitorInterface.print(" y:"); SerialMonitorInterface.print(y * RTMATH_RAD_TO_DEGREE); SerialMonitorInterface.print(" z:"); SerialMonitorInterface.print(z * RTMATH_RAD_TO_DEGREE); } There are several different metrics that can be read from the 9-Axis sensor, so for clarity the majority of the data is commented out in the loop(). When you run the program as is, you should see the Accelerometer values from all three axes. This is what the output should look like from the Serial Monitor if you wave the 9-Axis around a few times. You can decide what values are important or unimportant to you, and uncomment other outputs around line 118 in the program. NOTE: The sensor will need a few seconds after the upload of the program to calibrate a still position in order to print out better readings. The Serial Monitor will print when the gyro bias is valid, so it is best to calibrate the sensor with the help of the Serial Monitor before moving it around. Beyond the program The library used for the 9-Axis example are from an RTIMU library made by mrbichel and RPi-Distro, you can find the GitHub library page here if you would like to learn more or contribute. Now you'll just have to figure out what values you want to read from the 9-Axis and how to make a fun, moving project out of it! Downloads If you have any questions or feedback, feel free to email us or make a post on our forum. Show us what you make by tagging @TinyCircuits on Instagram, Twitter, or Facebook so we can feature it. Thanks for making with us!
http://learn.tinycircuits.com/Wirelings/9-Axis_Wireling_Tutorial/
CC-MAIN-2022-33
en
refinedweb
Software Development Kit (SDK) and API Discussions I am seeing this error, followed by this stack trace: *** Aborted at 1560796356 (unix time) try "date -d @1560796356" if you are using GNU date *** PC: @ 0x7fc8379a48ed shttpc_get_connect_error *** SIGSEGV (@0x10084) received by PID 15323 (TID 0x202ba700) from PID 65668; stack trace: *** @ 0x7fc823c99bdd google::(anonymous namespace)::FailureSignalHandler() @ 0x7fc822ffead0 (unknown) @ 0x7fc8379a48ed shttpc_get_connect_error @ 0x124824924800 (unknown) Segmentation fault (core dumped) The arguments all look fine. Not sure what is happening here. Any pointers will be helpful. Let me know if any info is needed.
https://community.netapp.com/t5/Software-Development-Kit-SDK-and-API-Discussions/shttpc-get-connect-error-with-netapp-sdk/td-p/148955
CC-MAIN-2022-33
en
refinedweb
public class StringBinding extends TupleBinding<String> TupleBindingfor a simple String StringBinding() public String entryToObject(TupleInput input) TupleBinding TupleInputentry. entryToObjectin class TupleBinding<String> input- is the tuple key or data entry. public void objectToEntry(String object, TupleOutput output) TupleBinding objectToEntryin class TupleBinding<String> object- is the key or data object. output- is the tuple entry to which the key or data should be written. protected TupleOutput getTupleOutput(String<String> object- is the object to be written to the tuple output, and may be used by subclasses to determine the size of the output buffer. TupleBase.setTupleBufferSize(int) public static String entryToString(DatabaseEntry entry) Stringvalue. entry- is the source entry buffer. public static void stringToEntry(String val, DatabaseEntry entry) Stringvalue into an entry buffer. val- is the source value. entry- is the destination entry buffer. Copyright (c) 2002, 2017 Oracle and/or its affiliates. All rights reserved.
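A brief usage sketch may help; it is not part of the Javadoc above. It assumes the Berkeley DB Java Edition API (com.sleepycat.je.DatabaseEntry together with com.sleepycat.bind.tuple.StringBinding) and an already-opened Database handle named db.

import com.sleepycat.bind.tuple.StringBinding;
import com.sleepycat.je.DatabaseEntry;
import com.sleepycat.je.LockMode;
import com.sleepycat.je.OperationStatus;

// Convert String values into entry buffers using the static helpers.
DatabaseEntry key = new DatabaseEntry();
DatabaseEntry data = new DatabaseEntry();
StringBinding.stringToEntry("greeting", key);
StringBinding.stringToEntry("hello world", data);
db.put(null, key, data);

// Read the record back and convert the entry buffer to a String value.
DatabaseEntry found = new DatabaseEntry();
if (db.get(null, key, found, LockMode.DEFAULT) == OperationStatus.SUCCESS) {
    String value = StringBinding.entryToString(found);
    System.out.println(value); // hello world
}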
http://docs.oracle.com/cd/E17277_02/html/java/com/sleepycat/bind/tuple/StringBinding.html
CC-MAIN-2017-30
en
refinedweb
#include <stdarg.h> #include <stdio.h> #include <wchar.h> int vfwprintf(FILE *restrict stream, const wchar_t *restrict format, va_list arg); int vswprintf(wchar_t *restrict s, size_t n, const wchar_t *restrict format, va_list arg); int vwprintf(const wchar_t *restrict format, va_list arg); The vwprintf(), vfwprintf(), and vswprintf() functions are the same as wprintf(), fwprintf(), and swprintf() respectively, except that instead of being called with a variable number of arguments, they are called with an argument list as defined by <stdarg.h>. These functions do not invoke the va_end() macro. However, as these functions do invoke the va_arg() macro, the value of ap after the return is indeterminate. Refer to fwprintf(3C). Refer to fwprintf(3C). Applications using these functions should call va_end(ap) afterwards to clean up. See attributes(5) for descriptions of the following attributes: fwprintf(3C), setlocale(3C), attributes(5), standards(5) The vwprintf(), vfwprintf(), and vswprintf() functions can be used safely in multithreaded applications, as long as setlocale(3C) is not being called to change the locale.
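These functions exist mainly so that you can write your own variadic wide-character printing wrappers. The sketch below is not from the manual page; it is a minimal, portable illustration (a vswprintf-based variant would be analogous, writing into a wide-character buffer of a given size).

#include <stdarg.h>
#include <stdio.h>
#include <wchar.h>

/* Print a wide-character error message to stderr, prefixed with a tag. */
static int werror(const wchar_t *format, ...)
{
    va_list ap;
    int n;

    fputws(L"error: ", stderr);
    va_start(ap, format);
    n = vfwprintf(stderr, format, ap);   /* forward the caller's argument list */
    va_end(ap);                          /* clean up afterwards, as required */
    return n;
}

/* Example use: werror(L"cannot open %ls (code %d)\n", L"data.txt", 2); */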
http://docs.oracle.com/cd/E36784_01/html/E36874/vswprintf-3c.html
CC-MAIN-2017-30
en
refinedweb
rpc_mgmt_inq_if_ids- returns a vector of interface identifiers of interfaces a server offers #include <dce/rpc.h> void rpc_mgmt_inq_if_ids( rpc_binding_handle_t binding, rpc_if_id_vector_t **if_id_vector, unsigned32 *status); Input - binding - Specifies a binding handle. To receive interface identifiers from a remote application, the calling application specifies a server binding handle for that application. To receive interface information about itself, the application specifies NULL. If the binding handle supplied refers to partially bound binding information and the binding information contains a nil object UUID, then this routine returns the rpc_s_binding_incomplete status code. To avoid this situation, the application can obtain a fully bound server binding handle by calling the rpc_ep_resolve_binding() routine. Output - if_id_vector - Returns the address of an interface identifier vector. -_no_interfaces No interfaces registered. -. An application calls the rpc_mgmt_inq_if_ids() routine to obtain a vector of interface identifiers listing the interfaces registered by a server with the RPC run-time system. If a server has not registered any interfaces with the run-time system, this routine returns a rpc_s_no_interfaces status code and an if_id_vector argument value of NULL. interface identifier vector. The application calls the rpc_if_id_vector_free() routine to release the memory used by this vector. By default, the RPC run-time system allows all clients to remotely call this routine. To restrict remote calls of this routine, a server application supplies an authorisation function using the rpc_mgmt_set_authorization_fn() routine. None. rpc_ep_resolve_binding() rpc_if_id_vector_free() rpc_mgmt_set_authorization_fn() rpc_server_register_if(). Please note that the html version of this specification may contain formatting aberrations. The definitive version is available as an electronic publication on CD-ROM from The Open Group.
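A short calling sketch (not part of the reference page) may help. It assumes the common DCE convention that rpc_if_id_vector_t carries a count field and an array of interface-identifier pointers named if_id, each with vers_major and vers_minor members; check dce/rpc.h for the exact member names on your system.

#include <dce/rpc.h>
#include <stdio.h>

void list_server_interfaces(rpc_binding_handle_t binding)
{
    rpc_if_id_vector_t *vector;
    unsigned32 status;
    unsigned32 i;

    rpc_mgmt_inq_if_ids(binding, &vector, &status);
    if (status != rpc_s_ok) {
        fprintf(stderr, "rpc_mgmt_inq_if_ids failed: 0x%lx\n", (unsigned long) status);
        return;
    }

    for (i = 0; i < vector->count; i++) {
        /* Each element identifies one registered interface (UUID + version). */
        printf("interface %u: version %u.%u\n", (unsigned) i,
               (unsigned) vector->if_id[i]->vers_major,
               (unsigned) vector->if_id[i]->vers_minor);
    }

    /* Release the memory used by the vector, as the description requires. */
    rpc_if_id_vector_free(&vector, &status);
}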
http://pubs.opengroup.org/onlinepubs/9629399/rpc_mgmt_inq_if_ids.htm
CC-MAIN-2017-30
en
refinedweb
: import org.perl.*; Collection foo = Perl5.unpack(template, string); [download] -- perl: code of the samurai I don't know of any projects like this, but ++ to you samurai; I think we ought to do it, and to start us off here's a first run implementation of map, applicable to grep. Its weakness is that the loop iterations and return values must be defined by the user, however I'm not sure yet how to get similar behavior to setting $_ as in Perl. In any case, here goes: public class Mapper { // Define an interface that each block will implement. Note that // we use java.lang.Object here so this will work with any class. interface MapBlock { public Object[] run(Object[] list); } // Define our map method, which calls the methods declared in the // MapBlock interface and returns the outcome. public static Object[] map(MapBlock block, Object[] list) { Object[] out = block.run(list); return out; } // Define a main to test the idea. public static void main(String[] args) { // Test arguments, lowercase 'foo' and 'bar' Object[] objects = { new String("foo"), new String("bar") }; // Call the map function, passing an anonymous class implementin +g // interface MapBlock containing the code we want to run. Object[] new_list = map(new MapBlock() { public Object[] run(Object[] list) { Object[] out = new Object[list.length]; for (int i = 0; i < list.length; i++) { String str = (String)list[i]; // Create uppercase versions out[i] = str.toUpperCase(); } return out; } }, objects); // end of MapBlock, second param objects // Test to see what our new Object array contains. System.out.println((String)new_list[0] + (String)new_list creates a new object that implements the interface MapBlock, which amounts to an is-a relationship. The class itself anonymous, but it's-a MapBlock. That means we can create a one-off class implementing particular behavior, and because the compiler is aware of the MapBlock interface, it'll let you create a new class that implements it. This is closest we can get AFAIK to passing a block of code to a function. The problem in this version though is that all the functionality must be packed into the anonymous implementation, including the code for iterating through the list, as an interface cannot include implementation. But that means this isn't yet like map. So, below I try the other option, subclassing another class. In this case new MapBlock creates an anonymous subclass of MapBlock, which means we can inherit a constructor and some implementation behavior. The idea now is to pack the implicit map functions, e.g., iterating over a list and placing the results into an array, into the superclass. The anonymous subclass now only has to implement the abstract method! ; ) Case in point: C programmers that program Perl as though it were C with scalars. Of course they'd be twice as efficient if they realized they could replace all their for(;;) loops with foreach, map and grep, but they concentrate so hard on making Perl feel like C that they miss them completely. -sam.
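The post above says "below I try the other option, subclassing another class", but that second code listing did not survive in this copy. A reconstruction of what that abstract-class version might look like (my sketch, not the original author's code): the superclass owns the iteration and result collection, and the anonymous subclass only supplies the per-element transformation.

// The map logic (iteration, output array) lives in the superclass.
abstract class MapBlock {
    // The one abstract method an anonymous subclass must implement.
    public abstract Object each(Object element);

    public Object[] map(Object[] list) {
        Object[] out = new Object[list.length];
        for (int i = 0; i < list.length; i++) {
            out[i] = each(list[i]);
        }
        return out;
    }
}

class MapperDemo {
    public static void main(String[] args) {
        Object[] words = { "foo", "bar" };
        // Anonymous subclass: only the per-element behavior is defined here.
        Object[] upper = new MapBlock() {
            public Object each(Object element) {
                return ((String) element).toUpperCase();
            }
        }.map(words);
        System.out.println(upper[0] + " " + upper[1]); // FOO BAR
    }
}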
http://www.perlmonks.org/index.pl/jacques?node_id=211105
CC-MAIN-2017-30
en
refinedweb
The only thing I'm aware of is that I need to look at Martin's issue of how do we gradually migrate to PEP 420 namespace packages from the existing pkgutil and pkg_resources versions of namespace packages. I'll do that this weekend. Does anyone know of any other PEP issues? I know there are some outstanding implementation and testing issues, but I'm not so concerned about those before getting the PEP ruled on. Eric.
https://mail.python.org/pipermail/import-sig/2012-May/000624.html
CC-MAIN-2017-30
en
refinedweb
The implementation of the Naive Bayes classifier used in the book is the one provided in the NTLK library. Here we will see how to use use the Support Vector Machine (SVM) classifier implemented in Scikit-Learn without touching the features representation of the original example. Here is the snippet to extract the features (equivalent to the one in the book): import nltk def dialogue_act_features(sentence): """ Extracts a set of features from a message. """ features = {} tokens = nltk.word_tokenize(sentence) for t in tokens: features['contains(%s)' % t.lower()] = True return features # data structure representing the XML annotation for each post posts = nltk.corpus.nps_chat.xml_posts() # label set cls_set = ['Emotion', 'ynQuestion', 'yAnswer', 'Continuer', 'whQuestion', 'System', 'Accept', 'Clarify', 'Emphasis', 'nAnswer', 'Greet', 'Statement', 'Reject', 'Bye', 'Other'] featuresets = [] # list of tuples of the form (post, features) for post in posts: # applying the feature extractor to each post # post.get('class') is the label of the current post featuresets.append((dialogue_act_features(post.text),cls_set.index(post.get('class'))))After the feature extraction we can split the data we obtained in training and testing set: from random import shuffle shuffle(featuresets) size = int(len(featuresets) * .1) # 10% is used for the test set train = featuresets[size:] test = featuresets[:size]Now we can instantiate the model that implements classifier using the scikitlearn interface provided by NLTK and train it: from sklearn.svm import LinearSVC from nltk.classify.scikitlearn import SklearnClassifier # SVM with a Linear Kernel and default parameters classif = SklearnClassifier(LinearSVC()) classif.train(train)In order to use the batch_classify method provided by scikitlearn we have to organize the test set in two lists, the first one with the train data and the second one with the target labels: test_skl = [] t_test_skl = [] for d in test: test_skl.append(d[0]) t_test_skl.append(d[1])Then we can run the classifier on the test set and print a full report of its performances: # run the classifier on the train test p = classif.batch_classify(test_skl) from sklearn.metrics import classification_report # getting a full report print classification_report(t_test_skl, p, labels=list(set(t_test_skl)),target_names=cls_set)The report will look like this: precision recall f1-score support Emotion 0.83 0.85 0.84 101 ynQuestion 0.78 0.78 0.78 58 yAnswer 0.40 0.40 0.40 5 Continuer 0.33 0.15 0.21 13 whQuestion 0.78 0.72 0.75 50 System 0.99 0.98 0.98 259 Accept 0.80 0.59 0.68 27 Clarify 0.00 0.00 0.00 6 Emphasis 0.59 0.59 0.59 17 nAnswer 0.73 0.80 0.76 10 Greet 0.94 0.91 0.93 160 Statement 0.76 0.86 0.81 311 Reject 0.57 0.31 0.40 13 Bye 0.94 0.68 0.79 25 Other 0.00 0.00 0.00 1 avg / total 0.84 0.85 0.84 1056 The link to the NLTK book is broken. Also, you can use train_test_split function to do the random splitting into train/test data in one line. scikit-learn thanks rolisz! Thank you very much for you example. It was very helpful for getting me started with my experiments. You left a minor error, however: you should witch the order of 'p' and 't_test_skl' when asking for the classification report. The API lists the true labels first and then the predicted labels second: Oh dear, I have been typing for too long today... Should have been: "for *your example" "you should *switch" I hope I caught all of my errors.. Thank you Ruben, I fixed the code and the report. 
How would you do this with a Random Forest classifier? Initializing the classifier this way should work: classif = SklearnClassifier(RandomForestClassifier()) How do I output the probability of each prediction instead of just the class labels? Hi Hock, you can't get probabilities with LinearSVC. But there are other classifiers, such as the ones in sklearn.naive_bayes or sklearn.svm.SVC, that expose the predict_proba method, which gives you what you need.
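Building on that last reply, here is a sketch of getting per-class probabilities by wrapping an estimator that implements predict_proba. The prob_classify_many call is an assumption about NLTK's SklearnClassifier wrapper (the batch counterpart of classify_many); LinearSVC itself would not work here because it has no predict_proba.

    # Sketch only: swap in an estimator that supports probabilities,
    # e.g. SVC(probability=True) or one of the naive Bayes classifiers.
    from sklearn.svm import SVC
    from nltk.classify.scikitlearn import SklearnClassifier

    prob_classif = SklearnClassifier(SVC(kernel='linear', probability=True))
    prob_classif.train(train)

    # One probability distribution over the label set per test featureset
    for dist in prob_classif.prob_classify_many(test_skl):
        best = dist.max()
        print '%s (p=%.3f)' % (cls_set[best], dist.prob(best))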
http://glowingpython.blogspot.it/2013/07/combining-scikit-learn-and-ntlk.html
CC-MAIN-2017-30
en
refinedweb
Like methods, constructors can also be overloaded. We will see constructor overloading with the help of an example using this() and parameterized constructor. Before we got through the source code and examples lets discuss why we need to overload a constructor: Constructor overloading is way of having more than one constructor which does different-2 tasks. For e.g. Vector class has 4 types of constructors. If you do not want to specify the initial capacity and capacity increment then you can simply use default constructor of Vector class like this Vector v = new Vector(); however if you need to specify the capacity and increment then you call the parameterized constructor with two int args like this: Vector v= new Vector(10, 5); You must have understood the need to overloading. Lets see how to overload a constructor with the help of below example program: package beginnersbook.com; public; } //Getter and setter methods public int getStuID() { return stuID; } public void setStuID(int stuID) { this.stuID = stuID; } public String getStuName() { return stuName; } public void setStuName(String stuName) { this.stuName = stuName; } public int getStuAge() { return stuAge; } public void setStuAge(int stuAge) { this.stuAge = stuAge; } } class TestOverloading { public static void main(String args[]) { //This object creation would call the default constructor StudentData myobj = new StudentData(); System.out.println("Student Name is: "+myobj.getStuName()); System.out.println("Student Age is: "+myobj.getStuAge()); System.out.println("Student ID is: "+myobj.getStuID()); /*This object creation would call the parameterized * constructor StudentData(int, String, int)*/ StudentData myobj2 = new StudentData(555, "Chaitanya", 25); System.out.println("Student Name is: "+myobj2.getStuName()); System.out.println("Student Age is: "+myobj2.getStuAge()); System.out.println("Student ID is: "+myobj2.getStuID()); } } Output: Student Name is: New Student Student Age is: 18 Student ID is: 100 Student Name is: Chaitanya Student Age is: 25 Student ID is: 555 As you can see in the above example that while creating the instance myobj, default constructor ( StudentData()) gets called however during the creating of myobj2, the arg-constructor ( StudentDate(int, String, int)) being called.Since both the constructors are having different initialization code the variables value are different in each case as shown in the output of the program. Let’s see role of this () in constructor overloading package beginnersbook.com; public class ConstOverloading { private int rollNum; ConstOverloading() { rollNum =100; } ConstOverloading(int rnum) { this(); /*this() is used for calling the default * constructor from parameterized constructor. * It should always be the first statement * in constructor body. */ rollNum = rollNum+ rnum; } public int getRollNum() { return rollNum; } public void setRollNum(int rollNum) { this.rollNum = rollNum; } } class TestDemo{ public static void main(String args[]) { ConstOverloading obj = new ConstOverloading(12); System.out.println(obj.getRollNum()); } } Output: 112 As you can see in the above program that we called arg-constructor during object creation ( ConstOverloading obj = new ConstOverloading(12);). However since we have placed the this() statement inside it, the default constructor implicitly being called from it. 
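For completeness, a StudentData class consistent with the getters/setters and the sample output shown above would look roughly like this; the values in the no-arg constructor are inferred from that output, so treat them as assumptions rather than the book's exact listing:

    package beginnersbook.com;

    public class StudentData {
        private int stuID;
        private String stuName;
        private int stuAge;

        // Default (no-arg) constructor: values inferred from the sample output
        StudentData() {
            stuID = 100;
            stuName = "New Student";
            stuAge = 18;
        }

        // Parameterized constructor: StudentData(int, String, int)
        StudentData(int stuID, String stuName, int stuAge) {
            this.stuID = stuID;
            this.stuName = stuName;
            this.stuAge = stuAge;
        }

        // Getter and setter methods as shown in the original listing
        // ...
    }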
Test your skills – Guess the output of below program package beginnersbook.com; public class ConstOverloading { private int rollNum; ConstOverloading() { rollNum =100; } ConstOverloading(int rnum) { rollNum = rollNum+ rnum; this(); } public int getRollNum() { return rollNum; } public void setRollNum(int rollNum) { this.rollNum = rollNum; } } class TestDemo{ public static void main(String args[]) { ConstOverloading obj = new ConstOverloading(12); System.out.println(obj.getRollNum()); } } Output: Exception in thread "main" java.lang.Error: Unresolved compilation problem:Constructor call must be the first statement in a constructor Program caused a compilation error. Reason: this() should be the first statement inside a constructor. Another important point to note while overloading a constructor is: When we don’t define any constructor, the compiler creates the default constructor(also known as no-arg constructor) by default during compilation however if we have defined a parameterized constructor and didn’t define a no-arg constructor then while calling no-arg constructor the program would fail as in this case compiler doesn’t create a no-arg constructor. Lets see the above point with the example program: package beginnersbook.com; public class Demo { private int rollNum; //We are not defining a no-arg constructor here Demo(int rnum) { rollNum = rollNum+ rnum; } //Getter and Setter methods } class TestDemo{ public static void main(String args[]) { //This statement would call no-arg constructor Demo obj = new Demo(); } } Output: Exception in thread "main" java.lang.Error: Unresolved compilation problem:The constructor Demo() is undefined very useful… Really good example for constructor overloading. Thanks Jayaram Good explanation with examples. Thanks This is so clear Wow.. this is really helpful for me on my java class. Thanks so much :) Its on of the best website for java reference. All the concepts are so clearly explained. Uts truely value its name as Beginner’s Book. I am really thanks to the author and all the people working behind the team.No need to open 4-3 sites simultaneously. The contains are enough to build your fundamentals and to clear your doubt. The page is so simple and effective. Great UI and great design of the page too. I really appriciate your hard work. Keep it up. thanks a lotttttt. It is very helpful your explanation for each n every concept is so good. Hi…the explanation is very simple and useful. I have few questions. 1.What if child class’s constructor is being called and it’s parent class does not a default constructor(which is expected to be called implicitly)? Will complier throw error? 2.if child class’s default constructor is being called,where first statement is this(parameter); Will this call parent’s default constructor first and then will invoke parametrized constructor of current object? 3.if in child’s default constructor which is being called first statement is super(); and second is this(); Will this work in anyway?if yes then how? Or then it will throw compilation error as both the statement’s requirement is to be first statement. What’s the use of getter and setter method here while we could pass the values in object This is a quite useful example, and very well explained. This site is very helpful for those who wants to learn JAVA. Best regards and good wishes !!! Hi, In the 2nd example. 
In the comment section, you have written ” this() is used for calling the default constructor from parameterized constructor.” I believe, this() is used to call no-argument constructor from parameterized constructor. Correct me, if i am wrong!!!!. This is one of the best website i found to learn java effectively. Keep up the good work!!.
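On the last example in the tutorial above (the Demo class that only defines Demo(int)), the usual fix is simply to declare the no-arg constructor yourself once any parameterized constructor exists; a minimal sketch:

    package beginnersbook.com;

    public class Demo {
        private int rollNum;

        // Explicit no-arg constructor: required once a parameterized
        // constructor is defined, because the compiler no longer generates one.
        Demo() {
            rollNum = 100;
        }

        Demo(int rnum) {
            this();                    // optionally chain to the no-arg constructor first
            rollNum = rollNum + rnum;
        }
    }

    class TestDemo {
        public static void main(String args[]) {
            Demo obj = new Demo();     // now compiles
            Demo obj2 = new Demo(12);  // rollNum becomes 112
        }
    }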
http://beginnersbook.com/2013/05/constructor-overloading/
CC-MAIN-2017-30
en
refinedweb
Stardust/Knowledge Base/Infrastructure System Administration Maintenance/Model Deployment Process Model Deployment For general information about process model deployment please refer to the online documentation. Refreshing the Model Cache In an environment where several server instances are not running as a real (EJB) cluster but synchronize only via the database (e.g. Spring) the model caches of the instances can get out of sync. After a model has been deployed via one particular server instance the other instances have to be notified of the change so they can flush and reload their local model caches. The console command below can be used to do this (username and password have to be replaced with the credentials of a user having the Administrator role). Obviously restarting a server instance or reloading the web application will also refresh the local model cache. console -u user -p password engine -init Model Synchronization via a clustered Model Cache Process models deployed into a runtime are stored into the Audit Trail database. The object model, however, is loaded into memory to guarantee fast access to process model information during process execution. The model cache is updated whenever a new process model is deployed. That means as soon as the model deployment is done and the model cache updated, new processes will behave on basis of the new process model version. The model cache is a cache per JVM. Therefore in clustered environments the model cache is updated on the node on which the deployment has been applied only. There's no inbuilt automatism yet, that would take care of model cache synchronization across multiple nodes within a cluster. Caches need to be flushed per node explicitly by using console command line tool, which causes an operational overhead. There’s, however, a simple way to enable an automatic model cache synchronization across multiple nodes within a cluster by using Hazelcast distributed caching technology. - Save the hazelcast-1.9.jar into the WEB-INF/lib of your IPP web application. - Make the ModelWatcher.java (find source code below) part of one of your custom JAR deployments (or create a new JAR if necessary) - Set the property Model.Watcher = com.infinity.bpm.model.ModelWatcher in your carnot.properties. Please note: The automatic model cache synchronization does only work for deployments of process model version. It does not work for overwrite and model delete operation. Both, however, are not meant to be applied in productive environment outside a maintenance window. 
package com.infinity.bpm.model; import ag.carnot.base.log.LogManager; import ag.carnot.base.log.Logger; import ag.carnot.workflow.runtime.QueryService; import ag.carnot.workflow.runtime.beans.QueryServiceImpl; import ag.carnot.workflow.spi.cluster.Watcher; import com.hazelcast.core.Hazelcast; import com.hazelcast.core.ITopic; import com.hazelcast.core.MessageListener; @SuppressWarnings("unchecked") public class ModelWatcher implements Watcher, MessageListener { private static final Logger logger = LogManager .getLogger(ModelWatcher.class); private boolean dirty; private Object globalState; private ITopic modelTopic; private QueryService qService; public ModelWatcher() { this.modelTopic = Hazelcast.getTopic("model"); this.modelTopic.addMessageListener(this); logger.info("Member added to model change listener."); this.qService = new QueryServiceImpl(); } public Object getGlobalState() { return globalState; } public boolean isDirty() { return this.dirty; } public void setDirty() { logger.info("Model changed. Notify listeners."); this.globalState = this.qService.getActiveModel().getModelOID(); this.modelTopic.publish(this.globalState); } public void updateState(Object globalState) { logger.info("Model changed. Reload model."); this.qService = new QueryServiceImpl(); this.globalState = qService.getActiveModel().getModelOID(); this.dirty = false; } public void onMessage(Object msg) { logger.info("Received model change notification."); if (msg instanceof Integer) { Integer modelOid = (Integer) msg; if (this.globalState != modelOid) { this.dirty = true; } } } }
http://wiki.eclipse.org/STP/Stardust/KnowledgeBase/SystemAdministration/ModelDeployment
CC-MAIN-2017-30
en
refinedweb
This article explains why C# interactive window is the best Code Snippet Compiler & Execution environment as compared to any other options like online C# pad, C# Code Editor, Online C# Code Compiler, Third Party tools to compile C# code snippet etc, C# Interactive window is a very useful window which provide us the feature to test our code snippet without compiling the application. Using C# interactive window we can do a lot of things like Execute & see your code output by just typing it in C# interactive window, Select your Code inside the Editor and see the output of that code snippet, supports C# 6 & C# 7 language features, write Using directive inside it, add a dll reference, call method of newly added dll, Open outside Visual Studio , execute an *.csx file and many more things. Let’s see all those features one by one. Selecting your code snippet and execute it without compiling the application I have written the following code in Visual Studio void Multiply(int x, int y) { WriteLine($ "Multiply of {x} and {y} is : {x * y}"); } Multiply(10, 25); Now select the above code snippet and press the shortcut “Ctrl+E, Ctrl+E” It will automatically open C# interactive window and compile code snippet as in the following screenshot. Keep history of last executed code snippet It keeps the history of last executed code. To verify it close the C# interactive window and again press “Ctrl+E, Ctrl+E” to open it. You will find that it contains all those data which we have executed before closing this window. Reset C# Interactive Window There are two option to reset C# interactive window a.Click on Reset icon ( located at top left corner of C# Interactive window ) b.Use the command “#reset”. Refer the below image. Navigate History We can navigate to history in C# interactive window for next and previous item. Navigate to Previous : to Navigate to previous we use Up Arrow button (History Previous) located at top left of C# interactive window or shortcut: “Alt + Up Arrow”. : We can navigate to next by using Down Arrow button (History Next) located at top left of C# interactive window or shortcut “Alt +Down Arrow”. One of the best thing about C# Interactive window is that it maintains a lot of things in context of history e.g. when we close C# interactive window and reopen it data is not lost. So got the same window with same data which we have executed last time before closing it. Clear C# Interactive Screen To clear C# Interactive window screen we can use the clear screen button available at the top left side of the Window. Refer the below image for Clear screen button location inside C# interactive window. Clearing the screen do not clear the data from history. It just clears the data from UI screen and if you use the history button or just press the shortcuts “Alt + Up Arrow” or “Alt + Down Arrow” you will get all those data from history. Different ways to Open C# Interactive Window We can open C# Interactive windows in 3 ways inside Visual Studio. Using the shortcut Key. We can use shortcut key : “Ctrl+E, Ctrl+E” to open Interactive window, a.If we select some code snippet and then use the shortcut key “Ctrl + E, Ctrl + E” then it will open C# interactive Window and it also executes the selected code snippet. b.If we do not select any code snippet and just press the shortcut key “Ctrl + E, Ctrl + E” in that case it will not execute any code snippet and just open the C# interactive window. 2. 
Open From Context Menu We can open it by right clicking anywhere on Code window and select “Execute in Interactive” or select some code snippet and then right click >> Execute in Interactive , a.If we select some code snippet and open it from context menu then it will open C# interactive Window and it also executes the selected code snippet. b.If we do not select any code snippet and just open it from context menu in that case it will not execute any code snippet and just open the C# interactive window. From View Menu We can also open it from view menu. Go to View Menu, then click C# Interactive , In the above screenshots you have seen multiple ways to open C# Interactive window inside Visual Studio. Currently it is available with Visual Studio 2015 Update 2 and Visual Studio Preview ‘15’. But it is not necessary that you use C# interactive with Visual Studio we can use it without Visual Studio too. This is a tool integrated with Visual Studio. Let’s see how we can use C# interactive outside the visual studio. Opening C# Interactive Outside Visual Studio Open Visual Studio 2015 Developer Command Prompt >> type “csi” inside the command prompt to open it as C# interactive. After that you can execute any scripts or do you calculations or whatever other tasks we are performing inside it when open from visual studio same tasks can be performed here also. Supports C# 6 & C# 7 C# interactive window provides the support for C# 6 and C# 7 as well. We do not need to change anything for executing C# code features from C# 1.0 to C# 6.0 but for C# 7.0 feature we need to make a small change to compile it properly. Change for C# 7 Solution Explorer, Select your Project, Right Click, Select Properties, Go to Build Tab, General, then Conditional compilation symbols: Enter “ __DEMO__ & __DEMO_EXPERIMENTAL__ ” in the textbox as in the following screenshot, Full support of intellisence It provides the full support of intelliSense as you can see in the following screenshot. Statement can be written in multiple lines C# Interactive window provides the support to write your code snippet statements in multiple lines. If you have copied data from somewhere and just pasting those data inside Interactive window in that case it automatically expand to multiple line if your copied code have multiple lines. But if you are just typing the code in C# interactive window and if you press enter it might evaluate that code but if you want to type it in multiple lines just use the shortcut “Shift + Enter” to start a new line without executing the expression. When you press “Shift + Enter” it automatically adds a dot(.) at start of new line which indicates that statement is continued. You can refer the below screenshot for the same. To test this code snippet, using static System.Threading.Thread; List < string > fruits = new List < string > (); fruits.Add("Apple"); fruits.Add("Banana"); fruits.Add("Grape"); fruits.Add("Guava"); fruits.Add("Mango"); Parallel.ForEach(fruits, fruit => { Console.WriteLine ($ "Fruit Name: {fruit}, Thread Id= {CurrentThread.ManagedThreadId}"); } ); C# Interactive Commands Using DLL reference Inside C# interactive window We Use “#r” to include a dll reference. Refer the below screenshot for more details,We Use “#r” to include a dll reference. Refer the below screenshot for more details, If you want to practice it for C# 7 feature I recommend you to test it with Visual Studio ‘15’ preview. To know more about Visual Studio ‘15’ you can go through the below C# Interactive With Visual Studio 2015 评论 抢沙发
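As a quick illustration of the commands described above, here is a small hypothetical C# Interactive session. The assembly name, namespace, and method are made up for the example, and the exact echo formatting of the REPL may differ slightly from what is shown:

    > #r "MathHelpers.dll"        // reference a dll (a full path also works)
    > using MathHelpers;          // hypothetical namespace from that assembly
    > int Square(int x) => x * x;
    > Square(12)
    144
    > Console.WriteLine($"5 squared is {Square(5)}");
    5 squared is 25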
http://www.shellsec.com/news/10537.html
CC-MAIN-2017-30
en
refinedweb
xbMembers Content count2 Joined Last visited Community Reputation104 Neutral About zxb - RankNewbie zxb replied to zxb's topic in Graphics and GPU ProgrammingI found another solution that dont need render on memory bitmap. Thanks. zxb posted a topic in Graphics and GPU ProgrammingIn Windows system, Texture show ok when render on window dc, but Texture not showed when render on memory bitmap? I need generate a memory bitmap for ::UpdateLayerWindow, with WS_EX_LAYERED syte window. Thank you! HDC hDC = ::GetDC(m_hWnd); #if 0 // Texture show ok. if(SetWindowPixelFormat(hDC)==FALSE) { return 0; } if(CreateViewGLContext(hDC)==FALSE) { return 0; } //::ReleaseDC(m_hWnd, hDC); #else // Texture not showed. m_hDCMem = ::CreateCompatibleDC(hDC); CRect rcWnd; GetWindowRect(&rcWnd); LPBYTE pbits = NULL; m_hbmpMem = CreateDIBSection(rcWnd.Width(), rcWnd.Height(), 32, (LPVOID *)&pbits); HGDIOBJ hbmpOld = ::SelectObject(m_hDCMem, m_hbmpMem); if(SetWindowPixelFormat(m_hDCMem)==FALSE) { return 0; } if(CreateViewGLContext(m_hDCMem)==FALSE) { return 0; } ::ReleaseDC(m_hWnd, hDC); #endif ... // some paint code glBindTexture(GL_TEXTURE_2D, texName); glBegin(GL_POLYGON); //glColor4f(1.0f,0.0f,0.0f,1.0f); glTexCoord2f(0.0, 0.0); glVertex3f(-8.0f,-8.0f, 0.0); //glColor4f(0.0f,0.0f,1.0f,1.0f); glTexCoord2f(1.0, 0.0); glVertex3f(8.0f,-8.0f,0.0); //glColor4f(0.0f,1.0f,0.0f,1.0f); glTexCoord2f(1.0, 1.0); glVertex3f(8.0f,8.0f, 0.0); //glColor4f(0.0f,0.0f,1.0f,1.0f); glTexCoord2f(0.0, 1.0); glVertex3f(-8.0f,8.0f,0.0); glEnd();
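For anyone hitting the same problem: rendering into a memory DC / DIB section normally falls back to the generic software OpenGL implementation, so the pixel format has to be chosen for bitmap rendering rather than window rendering. The sketch below shows the kind of PIXELFORMATDESCRIPTOR usually needed; it is not the poster's final solution, just the standard setup for this path, and it assumes the 32-bpp DIB section has already been selected into m_hDCMem as in the code above.

    // Sketch: pixel format for OpenGL rendering into a memory DC / DIB section.
    // PFD_DRAW_TO_BITMAP (instead of PFD_DRAW_TO_WINDOW) selects bitmap rendering;
    // double buffering is not available in that mode.
    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize      = sizeof(PIXELFORMATDESCRIPTOR);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_BITMAP | PFD_SUPPORT_OPENGL | PFD_SUPPORT_GDI;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;
    pfd.iLayerType = PFD_MAIN_PLANE;

    int format = ChoosePixelFormat(m_hDCMem, &pfd);
    if (format == 0 || !SetPixelFormat(m_hDCMem, format, &pfd))
        return FALSE;  // the memory DC cannot support this pixel format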
https://www.gamedev.net/profile/208621-zxb/?tab=topics
CC-MAIN-2017-30
en
refinedweb
java.lang.Object
  org.apache.log4j.LogManager

public class LogManager

Use the LogManager class to retrieve Logger instances or to operate on the current LoggerRepository. When the LogManager class is loaded into memory, the default initialization procedure is initiated. The default initialization procedure is described in the short log4j manual.

public static final String DEFAULT_CONFIGURATION_FILE
public static final String DEFAULT_CONFIGURATION_KEY
public static final String CONFIGURATOR_CLASS_KEY
public static final String DEFAULT_INIT_OVERRIDE_KEY

public LogManager()

public static void setRepositorySelector(RepositorySelector selector, Object guard) throws IllegalArgumentException
Sets the LoggerFactory, but only if the correct guard is passed as parameter. Initially the guard is null. If the guard is null, then invoking this method sets the logger factory and the guard. Following invocations will throw an IllegalArgumentException, unless the previously set guard is passed as the second parameter. This allows a high-level component to set the RepositorySelector used by the LogManager. For example, when Tomcat starts it will be able to install its own repository selector. However, if and when Tomcat is embedded within JBoss, then JBoss will install its own repository selector and Tomcat will use the repository selector set by its container, JBoss.
Throws: IllegalArgumentException

public static LoggerRepository getLoggerRepository()
public static Logger getRootLogger()
public static Logger getLogger(String name) — retrieve the appropriate Logger instance.
public static Logger getLogger(Class clazz) — retrieve the appropriate Logger instance.
public static Logger getLogger(String name, LoggerFactory factory) — retrieve the appropriate Logger instance.
public static Logger exists(String name)
public static Enumeration getCurrentLoggers()
public static void shutdown()
public static void resetConfiguration()
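A short usage sketch of the API summarized above, assuming log4j 1.2 is on the classpath and already configured (for example via log4j.properties or BasicConfigurator); the logger name and messages are arbitrary:

    import org.apache.log4j.LogManager;
    import org.apache.log4j.Logger;

    public class LogManagerExample {
        public static void main(String[] args) {
            // Retrieve (or create) a named Logger via the LogManager
            Logger logger = LogManager.getLogger("com.example.Demo");
            logger.info("application started");

            // The root logger sits at the top of the logger hierarchy
            Logger root = LogManager.getRootLogger();
            root.warn("something noteworthy");

            // Flush and close all appenders before the JVM exits
            LogManager.shutdown();
        }
    }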
http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/LogManager.html
CC-MAIN-2017-30
en
refinedweb
After upgrade to 0.26.0-rc version, this line: DeviceEventEmitter.addListener('keyboardWillShow', (e)=>this.updateKeyboardSpace(e)); updateKeyboardSpace import React from 'react'; import {DeviceEventEmitter} from 'react-native'; It seems like you can not use this kind of event listener any more. This seems to be handled by the Keyboard component now, which uses native libraries. For iOS it is defined here, the event names seem to be the same; I couldn't find an Android implementation, though. You would need to test if this works, but for iOS this should do the trick: import {Keyboard} from 'react-native'; Keyboard.addListener('keyboardWillShow', (e)=>this.updateKeyboardSpace(e)); EDIT: The API explained was internal only. For normal usage, one could use the callbacks on the ScrollResponder. You could use either onKeyboardWillShow and onKeyboardWillHide. The ScrollResponder Mixin is used in the ScrollView and ListView, so you may use this props there. I did a small example on github.
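A sketch of how the accepted approach is typically wired into a component, with the listener removed on unmount. It assumes the React Native version in use returns a subscription object from Keyboard.addListener, and resetKeyboardSpace is a hypothetical counterpart to updateKeyboardSpace; note that Android only emits keyboardDidShow/keyboardDidHide, not the "will" variants.

    import React from 'react';
    import { Keyboard } from 'react-native';

    class KeyboardSpacer extends React.Component {
      componentDidMount() {
        // iOS fires keyboardWillShow/Hide; Android only fires keyboardDidShow/Hide
        this.showSub = Keyboard.addListener('keyboardWillShow', (e) => this.updateKeyboardSpace(e));
        this.hideSub = Keyboard.addListener('keyboardWillHide', (e) => this.resetKeyboardSpace(e));
      }

      componentWillUnmount() {
        // addListener is assumed to return a subscription with a remove() method
        this.showSub.remove();
        this.hideSub.remove();
      }

      updateKeyboardSpace(e) { /* ... */ }
      resetKeyboardSpace(e) { /* ... */ }

      render() { return null; }
    }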
https://codedump.io/share/8vIzuM45JSU/1/react-native-deviceeventemitter-keyboardwillshow-stopped-working
CC-MAIN-2017-30
en
refinedweb
Here's an example using xlsxwriter: import os import glob import csv from xlsxwriter.workbook import Workbook for csvfile in glob.glob(os.path.join('.', '*.csv')): workbook = Workbook(csvfile + '.xlsx') worksheet = workbook.add_worksheet() with open(csvfile, 'rb') as f: reader = csv.reader(f) for r, row in enumerate(reader): for c, col in enumerate(row): worksheet.write(r, c, col) workbook.close() FYI, there is also a package called openpyxl, that can read/write Excel 2007 xlsx/xlsm files. Hope that helps.
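And a roughly equivalent sketch with openpyxl, the other package mentioned above; it mirrors the Python 2 style of the original (on Python 3 the file would be opened with newline='' instead of 'rb'):

    import os
    import glob
    import csv
    from openpyxl import Workbook

    for csvfile in glob.glob(os.path.join('.', '*.csv')):
        wb = Workbook()
        ws = wb.active
        with open(csvfile, 'rb') as f:
            for row in csv.reader(f):
                ws.append(row)          # one spreadsheet row per CSV row
        wb.save(csvfile + '.xlsx')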
https://codedump.io/share/xUmJZwVLI4Um/1/python-convert-csv-to-xlsx
CC-MAIN-2017-30
en
refinedweb
demondoc Wrote:...a lot of the summaries seem to be incomplete. I checked Stargate Universe season #1 and most of the summaries were incomplete; I checked on Tvrage.com and they were complete there. They would show the first few words in the summary and nothing more. Weird.....
04:57:34 T:2112 M:1828843520 ERROR: Error Type: <type 'exceptions.ImportError'>
04:57:34 T:2112 M:1828945920 ERROR: Error Contents: No module named elementtree.ElementTree
04:57:34 T:2112 M:1829027840 ERROR: Traceback (most recent call last): File "C:\Users\Darts\AppData\Roaming\XBMC\addons\script.tvrage.com\default.py", line 5, in <module> import elementtree.ElementTree as etree ImportError: No module named elementtree.ElementTree
gtwibell Wrote:Using 1.0.7 with the new Eden beta and while it installs, when I try to set up my shows there is no way with the remote to move to or select any of the options. The left/right and up/down buttons do nothing. But holding down Menu does bring up the context menu for the (non existent) list of shows.
ruuk Wrote:I'd like some people to test this before I submit it to the Repository. Please let me know how it works for you, good or bad. Get the zip here:- 1.0.8.zip
Henkske Wrote:Thnx. I will test it this week
beefystripper Wrote:Same problem with me with version 1.0.7. Can't navigate to Add show nor Add all from library or settings. //Beefy
beefystripper Wrote:Hi, I'm on an ATV2 (OS X). //Beefy
https://forum.kodi.tv/printthread.php?tid=82174&page=10
CC-MAIN-2017-30
en
refinedweb
How Do We Handle Abstract Methods? Use Cases Extension methods in C#. We should be able to define a mixin for iterable collections. So, if a class implements iterate(), you can mixin another class that will give it first, each(), etc. That mixin class needs to implement those in terms of iterate(), which it does not define. In the context of the mixin, iterate()is an abstract member. "Base class"-style delegate fields. For example, Amaranth has a class like this: abstract class ContentBase { public abstract string Name { get; } private Content content ContentBase(Content content) { this.content = content; } } In Magpie, this would be a delegate field since it has state, but we need to be able to support that abstract "Name" property. Requirements A user should be able to declare (as opposed to define) a member in a class. A declared member has a type but no implementation. When type-checking a class, it will be as if that member is there. This way, we can type-check a mixin or delegate class in the context of having the functionality it needs its host to provide. We should be able to statically ensure that all declared members are given an implementation before they can be accessed. A user shouldn't have to worry about getting an error at runtime that they tried to call a method that wasn't given a concrete implementation. If there's a member they need to implement, it should tell them this at check time. You should be able to "forward" abstract members. For example, class A may define an abstract member foo. Class B mixes in A but is also intended to be used as a mixin. It should be able to declare an abstract member foothat passes the buck onto the class that mixins it in. As always, we should accomplish this with a minimum of ceremony and complexity. Abstract Mixins Mixins are the easy one. Since they are already stateless (non-constructible), all we really need to do is: - Allow the user to declare members on a class. - When we're checking a mixin, make sure that the parent class defines members that are compatible with all of the declared members on the mixin. - If a class has abstract members, don't let it be constructed. And that should be good. Rule 2 lets us implement abstract members. Rule 3 and the fact that mixins side-step construction completely make sure that you will only be able to refer to an abstract member from a context where a concrete implementation has been provided. Abstract Delegates Now we come to the challenge. Lets say we wanted to implement the ContentBase use case in Magpie. The abstract delegate class would be: class ContentBase def shared new(content Content -> ContentBase) construct(content: content) end get name String var content Content end A class using it would be something like: class WidgetContent def shared new(content ContentBase) construct(content: content) end delegate var content ContentBase end So we'd construct one like: var content = ContentBase new(Content new(...)) var widget = WidgetContent new(content) Magpie's construction style is from the leaves in: we create all of the fields for a class and then instantiate the class using them. The problem here is the first line. At that point, we're creating an instance of an abstract class. That violates our second requirement. There's nothing here preventing you from doing: var content = ContentBase new(Content new(...)) content name // bad! calling abstract member! 
Even if we don't forget to give it a parent object, there's an equivalent problem: var content = ContentBase new(Content new(...)) var widget = WidgetContent new(content) // fine so far... widget name // delegates to content through widget, still ok... widget content name // bad! not going through widget, so we won't be able // to look up the name member on it Ideas to resolve this: Don't allow abstract delegates The simplest and harshest solution. Just don't allow abstract methods in delegates. In practice, I don't think this will work well. I've got lots of examples in Amaranth and other code of classes with both state and abstract members. Define two types for an abstract class Given an abstract class like ContentBase, there will be two types: ContentBase and AbstractContentBase. The first is the "normal" type and can be used like you'd expect. The only objects that will have this type are places where the abstract members have been correctly implemented by a delegating parent object. So, in the above example, WidgetContent is a subtype of ContentBase because it has a delegate field of that class. The AbstractContentBase type is then for variables of the abstract class that are not correctly accessed through a delegating parent that implements its abstract members. When you construct an instance of ContentBase the variable you get back is of type AbstractContentBase. If you access a delegate field on some object (like doing widget content) that's the type of variable you'll get back since you're stripping off the delegating parent. The Abstract__ type has no members on it. This ensures that you'll get a check error if you try to use an instance of an abstract type outside of its delegating content. It's basically a black hole. You can pass it around, but you can't do anything with it. When a class has a delegate field (with abstract members), its construct method will take the abstract type for that field, not the normal one. So, in the above example, the type signature for WidgetContent construct is content: AbstractContentBase. In other words, an instance of an abstract class has a special not-very-useful AbstractFoo type. But a class that has a delegate field of that class is correctly a subtype of the full-featured Foo type. Make the type useless This is a refinement of the previous idea. Instead of defining two types, just have one. But that type will be the equivalent of the Abstract__ up there: it will have no members. It's assignable to itself, but aside from that, there isn't anything useful you can do with it. That just gives you enough to get it into a delegate field for class that's using it, which is all you need. This implies that abstract classes do not define any usable type. They provide stateful behavior that can be mixed into another class, but don't define a type that describes all classes that do that. For example: class Named def sayName(->) print("Hi, I'm " + name + (if excited? then "!" else ".")) get excited? Bool // abstract var name String end class Dave get excited? Bool = true delegate var name Named end Given these, you can create a new Dave like this: var dave = Dave new(name: Named new(name: "Dave")) What you can't do is define functions that act on the Named interface alone: def sayTwice(named Named ->) named sayName() named sayName() end The problem is that the Named type has no members, not even sayName. That ensures that you don't try to use a standalone instance of an abstract class, but also prevents the above. 
This seems like enough of a limitation that it probably isn't worth pursuing. Implicitly construct delegate fields Consider the above example: var dave = Dave new(name: Named new(name: "Dave")) The key bit here, and the cause of our problems, is that we're passing in an instance of Named to that constructor. But we can't safely create one of those outside of the context of a parent class. So maybe the solution is to not do that. Instead, the containing class's construct function will have the magic required to promote a record for the delegate field to the real deal. The above would become: var dave = Dave new(name: (name: "Dave")) And then, internally, it will take that name: "Dave" record and promote it to the delegate field's type. This neatly solves the problem of dangling delegates. Abstract classes simply won't have constructors and cannot be created on their own. They can still be used as types. We'll just have to modify the class subtyping rules to allow a class A to be assignable to class B if A has a delegate field of type B. Don't worry about it Remember, Magpie is optionally typed. It isn't perfectly sound. Maybe the simplest solution is to not worry about it. You can instantiate an abstract class just fine. If you try to use it, it'll do weird (but defined!) things when it tries to call abstract members. Don't sweat it.
http://magpie.stuffwithstuff.com/design-questions/how-do-abstract-members-work.html
CC-MAIN-2017-30
en
refinedweb
Which of the following statements is true about the given code? public class A { public int Name { get; set; } } (a) Class A should be abstract (b) Class A should have a private field associated with the property (c) The property should have a definition (d) The code will compile successfully Answer: the code will compile successfully, because properties of this kind are called auto-implemented properties (introduced in C# 3.0). The compiler adds a private backing field to the class at compile time.
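A minimal sketch of what the compiler effectively generates for an auto-implemented property; the backing field name shown here is illustrative, since the real compiler-generated field has an unpronounceable internal name:

    public class A
    {
        // What you write:
        public int Name { get; set; }
    }

    // Roughly what the compiler produces behind the scenes:
    public class A_Expanded
    {
        private int _name;              // hidden, compiler-generated backing field

        public int Name
        {
            get { return _name; }
            set { _name = value; }
        }
    }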
http://skillgun.com/question/3192/csharp/properties/which-of-the-following-statement-is-true-about-given-code-public-class-a-public-int-namegetset
CC-MAIN-2017-30
en
refinedweb
My problem here is trying to test a substr of a string. when compiling I get the following errors c:\work\programming\nettest\nt.h(40) : error C2146: syntax error : missing ';' before identifier 'path' c:\work\programming\nettest\nt.h(40) : error C2501: 'string' : missing storage-class or type specifiers c:\work\programming\nettest\nt.h(40) : error C2501: 'path' : missing storage-class or type specifiers c:\work\programming\nettest\nt.cpp(47) : error C2679: binary '=' : no operator defined which takes a right-hand operand of type 'class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >' (or there is no acceptable conv ersion) c:\work\programming\nettest\nt.cpp(61) : error C2039: 'path' : is not a member of 'netData' c:\work\programming\nettest\nt.h(34) : see declaration of 'netData' c:\work\programming\nettest\nt.cpp(84) : error C2440: 'return' : cannot convert from 'struct netData *' to 'struct netData' No constructor could take the source type, or constructor overload resolution was ambiguous c:\work\programming\nettest\nt.cpp(91) : error C2679: binary '=' : no operator defined which takes a right-hand operand of type 'struct netData' (or there is no acceptable conversion) Error executing cl.exe. Can anyone tell me whats wrong with the following line if ( line.substr(index+6,1) == '1') I've tried assigning the substr to a tmp string, then a char, casting it to a char. all have failed. +++++++++++++++++++++++++ Additionally the line while(!netcsv.eof()) statemant causes an error. This has worked in a previous program with one major difference. In the working program the code <ifstream netcsv(filename)> where filename was defined as an array of char. In this program it is defined as a string. +++++++++++++++++++++++++ Thanks in advance for any help with these issues, most of the code is shown below. 
I should point out that netData and the string header are defined in nt.h Code:#include <c:\work\programming\nettest\nt.h> #include <fstream> #include <iostream> //#include <string> #include <sstream> using namespace std; netData readNetcsv() { int index, startTime, timeStep, endTime, tmp; char answer, temp; string line, tstore, filename, path; bool marker = false, again = true; netData *lMember, *start_ptr, *end_ptr; //create object start_ptr = end_ptr = NULL; //init pointer to list do { cout << "Enter file path : " << endl; //get path details cout << "(eg c:\\dir1\\dir2\\ " << endl; cin >> path; filename = path + "net.csv"; //build full filename ifstream netcsv(filename.c_str()); //& open if (netcsv) //process file { cout << filename << " opened successfully " << endl; istringstream strin (tstore); //object constructor for int string ops do { getline (netcsv, line); if ((line[0] == 'T') && (line[1] == 'I')) //wait until colomn headers found { marker = true; getline (netcsv, line); } if (marker == true) //start saving data { lMember = new netData; //create new record index = line.find(",",1); //find first comma in line tstore = line.substr(0,index); //get time field strin >> lMember->time; //save as integer variable tstore = line.substr(index+1,4); //probability now strin >> tmp; //or lMember->probability = strtod(tstore.c_str(), NULL); lMember->prob = (int)(tmp*100 + 0.5f); //convert to threshold value if ( line.substr(index+6,1) == '1') //alarm field { lMember->alarm = true; } else { lMember->alarm = false; } lMember->nxt = NULL; if (start_ptr == NULL) //manage pointers { start_ptr = lMember; //first member in list start_ptr->path = path; //file path details end_ptr = start_ptr; startTime = lMember->time; //hold first time point } else { end_ptr->nxt = lMember; //set pointer for previous record end_ptr = lMember; //point to new end of list if (timeStep == 0) timeStep = lMember->time - startTime; } } }while(!netcsv.eof()); endTime = lMember->time; //hold last time point again = false; //another loop not required } else { //file doesn't exist cout << "File not found. " << endl; cout << "Enter new path y/n (y) : " << endl; cin >> answer; if (answer == 'n') again = false; //allow prog to fail } }while(again); return (start_ptr); }
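For the two problems described above, the usual fixes are (a) compare the substring against a string literal rather than a char literal, or index the single character directly, and (b) drive the read loop off getline instead of testing eof(); the errors about 'string' in nt.h typically mean the header doesn't include <string> or qualify std::. A sketch, reusing the variables from the posted code:

    // nt.h: make sure the header knows about std::string
    #include <string>
    // then either qualify the type (std::string path;) or, less ideally in a
    // header, add: using std::string;

    // (a) substr returns a std::string, so compare against a string literal...
    if (line.substr(index + 6, 1) == "1")
    {
        lMember->alarm = true;
    }
    // ...or test the single character directly:
    // if (line[index + 6] == '1') { lMember->alarm = true; }

    // (b) loop on the success of getline rather than checking eof() afterwards:
    while (std::getline(netcsv, line))
    {
        // parse the line as before...
    }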
http://cboard.cprogramming.com/cplusplus-programming/91659-string-eof-handling-problems.html
CC-MAIN-2014-49
en
refinedweb
It's a common story: you're working on a project and have need for a very simple class, one that you assume is part of the library you're using. You search the library and discover that the class is not there. You're then faced with a choice, you can "roll your own" class or use a third party implementation (if you can find one). It's not surprising that in most cases we choose the former. Creating your own bread and butter class that you know will come in handy in the future can be satisfying. The class in question is a Deque (pronounced "deck") class. The deque is a data structure that allows you to add and remove elements at both ends of a queue. I was working on my state machine toolkit and needed a Deque collection class. I checked the System.Collections namespace to see if it had one. Unfortunately, it didn't. I did a cursory search for a Deque class here at CodeProject and found this article for a "Dequeue" class. I have not looked at the code, but decided anyway that it wasn't exactly what I was looking for, and admittedly, I had already made up my mind to write my own Deque class. Deque System.Collections The queue data structure represents functionality for adding elements to the end (or back) of the queue and removing elements from the beginning (or front) of the queue. Think of a line of people waiting to buy a ticket at a movie theater. The first person in line is the first person to buy a ticket. This approach is called "first in, first out" (FIFO). Sometimes you need the ability to add and remove elements at both the beginning and end of the queue. The deque data structure fits the bill. It is a "double-ended-queue" in which you can add and remove elements at the front and back of the queue. First and foremost, I wanted my Deque class to look like a class in the System.Collections namespace. Had I found the Deque class when I was searching the System.Collections namespace, this is what it would look like. I took a close look at the Queue class and modeled my class after it. To this end, the Deque class implements the same interfaces, ICollection, IEnumerable, and ICloneable. Queue ICollection IEnumerable ICloneable In addition to the methods and properties in those interfaces, the Deque class also has several methods found in the Queue class. Clear Contains ToArray Synchronized The Queue class also provide a Peek method. This method lets you peek at the element at the front of the Queue without removing it. Following its lead, the Deque class provides PeekFront and PeekBack methods for peeking at the elements at the front and back of the Deque respectively. Peek PeekFront PeekBack The PushFront and PushBack methods allow you to add elements to the front and back of the Deque respectively, and their counterparts, the PopFront and PopBack methods, allow you to remove elements from the front and back of the Deque respectively. PushFront PushBack PopFront PopBack The Deque class uses a doubly-linked list to implement its collection of elements. The links in the list are represented by a private Node class. Since elements can be added to the front and back of the Deque, it was necessary to have front and back nodes to keep track of the front and back of the Deque. Node There is a custom DequeEnumerator private class for implementing the IEnumerator interface returned by the GetEnumerator method for enumerating over the Deque from front to back. 
I've written custom enumerators before, and it really isn't hard, it's just that you have to make sure that your implementation conforms to the IEnumerator specification. DequeEnumerator IEnumerator GetEnumerator Because I needed the Deque to be thread safe, I implemented a static Synchronized method (following the lead of the collections in the System.Collections namespace). It returns a thread safe wrapper around a Deque object. To implement this, I first made each of the methods and properties in the Deque class virtual. I then derived a private class from the Deque class called SynchronizedDeque in which each of the Deque's methods and properties are overridden. virtual SynchronizedDeque When a SynchronizedDeque is created, it is given a Deque object. It achieves thread safety by locking access to its Deque object in each of its overridden methods and properties and delegating calls to the Deque object. The Synchronized method creates a SynchronizedDeque object and returns a Deque reference to it, so it looks and acts just like a regular Deque object. Deque<T> I decided it was time to create a new version of the Deque class that supports generics. Converting the original Deque class to a generic version was fairly straightforward. I copied the code from the original and changed all references to the items in the Deque from type object to type T. I made Deque<T> class implement the IEnumerator<T> interface. What is interesting here is that IEnumerator<T> implements IDisposable, so I had to modify my private enumerator class to provide IDisposable functionality. All of this was rather easy to do. object T IEnumerator<T> IDisposable The original Deque is still there, unaltered, if you need it. The Deque class resides in the Sanford.Collections namespace (this used to be the LSCollections namespace; I felt it needed renaming). And the Deque<T> class resides in the Sanford.Collections.Generics namespace. The zipped source code contains several files: the original Deque.cs file, the files implementing the new Deque<T> class, and the Tester.cs and GenericTester.cs files. The Tester.cs file represents a console application class that tests the functionality of both the Deque and Deque<T> classes. Sanford.Collections LSCollections Sanford.Collections.Generics Well, this article has been a change of pace. My last contribution here at CodeProject was a series of articles presenting a toolkit I had spent months working on and researching. In contrast, the Deque class along with this article was for the most part simple to write. But sometimes the simplest things can be the most useful. At least that's what I'm hoping I've accomplished here. Thanks for your time, and as always comments and suggestions.
http://www.codeproject.com/script/Articles/View.aspx?aid=11754
CC-MAIN-2014-49
en
refinedweb
Metadata Extensibility Overview This topic introduces the requirements for creating custom metadata handlers for the Windows Imaging Component (WIC), including both metadata readers and writers. It also discusses the requirements for extending WIC run-time component discovery to include your custom metadata handlers. This topic contains the following sections. - Prerequisites - Introduction - Creating a Metadata Reader - Creating a Metadata Writer - Installing and Registering a Metadata Handler - Special Considerations - Related topics Prerequisites To understand this topic you should have an in-depth understanding of WIC, its components, and metadata for images. For more information on WIC metadata, see the WIC Metadata Overview. For more information on WIC components, see the Windows Imaging Component Overview. Introduction As discussed in the WIC Metadata Overview, there are often multiple blocks of metadata within an image, each exposing different types of information in different metadata formats. To interact with a metadata format embedded within an image, an application must use an appropriate metadata handler. WIC provides several metadata handlers (both metadata readers and writers) that enable you to read and write specific types of metadata such as Exif or XMP. In addition to the native handlers provided, WIC provides APIs that enable you to create new metadata handlers that participate in WIC's run-time component discovery. This enables applications that use WIC to read and write your custom metadata formats. The following steps enable your metadata handlers to participate in WIC's run-time metadata discovery. - Implement a metadata-reader handler class (IWICMetadataReader) that exposes the required WIC interfaces for reading your custom metadata format. This enables WIC-based applications to read your metadata format the same way they read native metadata formats. - Implement a metadata-writer handler class (IWICMetadataWriter) that exposes the required WIC interfaces for encoding your custom metadata format. This enables WIC-based applications to serialize your metadata format into supported image formats. - Digitally sign and register your metadata handlers. This enables your metadata handlers to be discovered at run time by matching the identifying pattern in the registry with the pattern embedded in the image file. Creating a Metadata Reader The main access to metadata blocks within a codec is through the IWICMetadataBlockReader interface that each WIC codec implements. This interface enumerates each of the metadata blocks embedded in an image format so that the appropriate metadata handler can be discovered and instantiated for each block. The metadata blocks that are not recognized by WIC are considered unknown and are defined as the GUID CLSID_WICUnknownMetadataReader. To have your metadata format recognized by WIC, you must create a class that implements three interfaces:aReader Interface The IWICMetadataReader interface must be implemented when creating a metadata reader. This interface provides access to the underling metadata items within the data stream of a metadata format. The following code shows the definition of the metadata reader interface as defined in the wincodecsdk.idl file. 
interface IWICMetadataReader : IUnknown { HRESULT GetMetadataFormat( [out] GUID *pguidMetadataFormat ); HRESULT GetMetadataHandlerInfo( [out] IWICMetadataHandlerInfo **ppIHandler ); HRESULT GetCount( [out] UINT *pcCount ); HRESULT GetValueByIndex( [in] UINT nIndex, [in, out, unique] PROPVARIANT *pvarSchema, [in, out, unique] PROPVARIANT *pvarId, [in, out, unique] PROPVARIANT *pvarValue ); HRESULT GetValue( [in, unique] const PROPVARIANT *pvarSchema, [in] const PROPVARIANT *pvarId, [in, out, unique] PROPVARIANT *pvarValue ); HRESULT GetEnumerator( [out] IWICEnumMetadataItem **ppIEnumMetadata ); }; The GetMetadataFormat method returns the GUID of your metadata format. The GetMetadataHandlerInfo method returns an IWICMetadataHandlerInfo interface that provides information about your metadata handler. This includes information such as what image formats support the metadata format and whether your metadata reader requires access to the full metadata stream. The GetCount method returns the number of individual metadata items (including embedded metadata blocks) found within the metadata stream. The GetValueByIndex method returns a metadata item by an index value. This method enables applications to loop through each metadata item in a metadata block. The following code demonstrates how an application can use this method to retrieve each metadata item in a metadata block. PROPVARIANT readerValue; IWICMetadataBlockReader *blockReader = NULL; IWICMetadataReader *reader = NULL; PropVariantInit(&readerValue); hr = pFrameDecode->QueryInterface(IID_IWICMetadataBlockReader, (void**)&blockReader); if (SUCCEEDED(hr)) { // Retrieve the third block in the image. This is image specific and // ideally you should call this by retrieving the reader count // first. hr = blockReader->GetReaderByIndex(2, &reader); } if (SUCCEEDED(hr)) { UINT numValues = 0; hr = reader->GetCount(&numValues); // Loop through each item and retrieve by index for (UINT i = 0; SUCCEEDED(hr) && i < numValues; i++) { PROPVARIANT id, value; PropVariantInit(&id); PropVariantInit(&value); hr = reader->GetValueByIndex(i, NULL, &id, &value); if (SUCCEEDED(hr)) { // Do something with the metadata item. //... } PropVariantClear(&id); PropVariantClear(&value); } } The GetValue method retrieves a specific metadata item by schema and/or ID. This method is similar to the GetValueByIndex method except that it retrieves a metadata item that has a specific schema or ID. The GetEnumerator method returns an enumerator of each metadata item in the metadata block. This enables applications to use an enumerator to navigate your metadata format. If your metadata format does not have a notion of schemas for metadata items, the GetValue... methods should ignore this property. If, however, your format supports schema naming, you should anticipate a NULL value. If a metadata item is an embedded metadata block, create a metadata handler from the substream of the embedded content and return the new metadata handler. If there is no metadata reader available for the nested block, instantiate and return an unknown metadata reader. To create a new metadata reader for the embedded block, call the component factory's CreateMetadataReaderFromContainer or CreateMetadataReader methods, or call the WICMatchMetadataContent function. If the metadata stream contains big-endian content, the metadata reader is responsible for swapping any data values it processes. It is also responsible for informing any nested metadata readers that they are working with big-endian data stream. 
However, all values should be returned in little-endian format. Implement support for namespace navigation by supporting queries where the metadata item ID is a VT_CLSID (a GUID) corresponding to a metadata format. If a nested metadata reader for that format is identified during parsing, it must be returned. This enables applications to use a metadata query reader to search your metadata format. When getting a metadata item by ID, you should use PropVariantChangeType Function to coerce the ID into the expected type. For example, the IFD reader will coerce an ID to type VT_UI2 to coincide with the data type of an IFD tag ID USHORT. The input type and expected type must both be PROPVARIANT to do this. This is not required, but doing this coercion simplifies code that calls the reader to query for metadata items. reader with a data stream containing your metadata block. Your reader parses this stream to access the underlying metadata items. Your metadata reader is initialized with a substream that is positioned at the beginning of the raw metadata content. If your reader does not require the full stream, the substream is limited in range to only the content of the metadata block; otherwise, the full metadata stream is provided with the position set at the beginning of your metadata block. The SaveEx method is used by metadata writers to serialize your metadata block. When SaveEx is used in a metadata reader, it should return WINCODEC_ERR_UNSUPPORTEDOPERATION. IWICStreamProvider Interface The IWICStreamProvider interface enables your metadata reader to provide references to its content stream, provide information about the stream, and refresh cached versions of the stream. The following code shows the definition of the IWICStreamProvider interface as defined in the wincodecsdk.idl file. The GetStream method retrieves a reference to your metadata stream. The stream you return should have the stream pointer reset to the start position. If your metadata format requires full stream access, the start position should be the start of your metadata block. The GetPersistOptions method returns the stream's current options from the WICPersistOptions enumeration. The GetPreferredVendorGUID method returns the GUID of the vendor of the metadata reader. The RefreshStream method refreshes the metadata stream. This method must call LoadEx with a NULL stream for any nested metadata blocks. This is necessary because nested metadata blocks and their items may no longer exist, due to in-place editing. Creating a Metadata Writer A metadata writer is a type of metadata handler that provides a way to serialize a metadata block to an image frame, or outside an individual frame if the image format supports it. The main access to the metadata writers within a codec is through the IWICMetadataBlockWriter interface that each WIC encoder implements. This interface enables applications to enumerate each of the metadata blocks embedded in an image so that the appropriate metadata writer can be discovered and instantiated for each metadata block. Metadata blocks that do not have a corresponding metadata writer are considered unknown, and are defined as the GUID CLSID_WICUnknownMetadataReader. To enable WIC enabled applications to serialize and write your metadata format, you must create a class that implements the following interfaces: IWICMetadataWriter,aWriter Interface The IWICMetadataWriter interface must be implemented by your metadata writer. 
Additionally, because IWICMetadataWriter inherits from IWICMetadataReader, you must also implement all the methods of IWICMetadataReader. Because both handler types require the same interface inheritance, you might want to create a single class that provides both reading and writing functionality. The following code shows the definition of the metadata writer interface as defined in the wincodecsdk.idl file. interface IWICMetadataWriter : IWICMetadataReader { HRESULT SetValue( [in, unique] const PROPVARIANT *pvarSchema, [in] const PROPVARIANT *pvarId, [in] const PROPVARIANT *pvarValue ); HRESULT SetValueByIndex( [in] UINT nIndex, [in, unique] const PROPVARIANT *pvarSchema, [in] const PROPVARIANT *pvarId, [in] const PROPVARIANT *pvarValue ); HRESULT RemoveValue( [in, unique] const PROPVARIANT *pvarSchema, [in] const PROPVARIANT *pvarId ); HRESULT RemoveValueByIndex( [in] UINT nIndex ); }; The SetValue method writes the specified metadata item to the metadata stream. The SetValueByIndex method writes the specified metadata item to the specified index in the metadata stream. The index does not refer to the ID but to the position of the item within the metadata block. The RemoveValue method removes the specified metadata item from the metadata stream. The RemoveValueByIndex method removes the metadata item at the specified index from the metadata stream. After removing an item, it is expected that the remaining metadata items will occupy the vacated index if the index is not the last index. It is also expected that the count will change after the item is removed. It is the metadata writer's responsibility to convert the PROPVARIANT items to the underlying structure required by your format. However, unlike the metadata reader, VARIANT types should not normally be coerced to different types as the caller is specifically indicating what data type to use. Your metadata writer must commit all metadata items to the image stream, including hidden or unrecognized values. This includes unknown nested metadata blocks. However, it is the encoder's responsibility to set any critical metadata items prior to initiating the save operation. If the metadata stream contains big-endian content, the metadata writer is responsible for swapping any data values it processes. It is also responsible for informing any nested metadata writers that they are working with a big-endian data stream when they save. Implement support for namespace creation and removal by supporting set and remove operations on metadata items with a type of VT_CLSID (a GUID) corresponding to a metadata format. The metadata writer calls the WICSerializeMetadataContent function to properly embed the nested metadata writer content into the parent metadata writer. If your metadata format supports in-place encoding, you are responsible for managing the required padding. For more information on in-place encoding, see WIC Metadata Overview and Overview of Reading and Writing Image Metadata. handler with a data stream containing your metadata block. The SaveEx method serializes the metadata into a stream. If the provided stream is the same as initialization stream, you should perform in-place encoding. If in-place encoding is supported, this method should return WINCODEC_ERR_TOOMUCHMETADATA when there is insufficient padding to perform in-place encoding. If in-place encoding is not supported, this method should return WINCODEC_ERR_UNSUPPORTEDOPERATION. 
The IPersistStream::GetSizeMax method must be implemented and must return the exact size of the metadata content that would be written in subsequent save. The IPersistStream::IsDirty method should be implemented if the metadata writer is initialized through a stream, so that an image can reliably determine whether its content has changed. If your metadata format supports nested metadata blocks, your metadata writer should delegate to the nested metadata writers the serializing of its content when saving to a stream. IWICStreamProvider Interface The implementation of the IWICStreamProvider interface for a metadata writer is the same as that of a metadata reader. For more information, see Creating a Metadata Reader section in this document. Installing and Registering a Metadata Handler To install a metadata handler, you must provide the handler assembly and register it in the system registry. You can decide how and when the registry keys are populated. Note For readability, the actual hexadecimal GUIDs are not shown in the registry keys shown in the following sections of this document. To find the hexadecimal value for a specified friendly name, see the wincodec.idl and wincodecsdk.idl files. Metadata Handler Registry Keys. Note In the following registry key listings, {Reader CLSID} refers to the unique CLSID that you provide for your metadata reader. {Writer CLSID} refers to the unique CLSID that you provide for your metadata writer. {Handler CLSID} refers to the reader's CLSID, the writer's CLSID, or both, depending on which handlers you are providing. {Container GUID} refers to the container object (image format or metadata format) that can contain the metadata block. The following registry keys register your metadata handler with the other metadata handlers available: In addition to registering your handlers in their respective categories, you must also register additional keys that provide information specific to the handler. Readers and writers share similar registry key requirements. The following syntax shows how to register a handler. Both the reader handler and writer handler must be registered in this way, using their respective CLSIDs: [HKEY_CLASSES_ROOT\CLSID\{CLSID}] "Vendor"={VendorGUID} "Date"="yyyy-mm-dd" "Version"="Major.Minor.Build.Number" "SpecVersion"="Major.Minor.Build.Number" "MetadataFormat"={MetadataFormatGUID} "RequiresFullStream"=dword:1|0 "SupportsPadding"= dword:1|0 "FixedSize"=0 [HKEY_CLASSES_ROOT\CLSID\{CLSID}\InProcServer32] @="drive:\path\yourdll.dll" "ThreadingModel"="Apartment" [HKEY_CLASSES_ROOT\CLSID\{CLSID}\{LCID}] Author="Author's Name" Description = " Metadata Description" DeviceManufacturer ="Manufacturer Name" DeviceModels="Device,Device" FriendlyName="Friendly Name" Metadata Readers The metadata reader registration also includes keys that describe how the reader can be embedded in a container format. A container format can be an image format such as TIFF or JPEG; it can also be another metadata format such as an IFD metadata block. Natively supported image container formats are listed in wincodec.idl; each image container format is defined as a GUID with a name that begins with GUID_ContainerFormat. Natively supported metadata container formats are listed in wincodecsdk.idl; each metadata container format is defined as a GUID with a name that begins with GUID_MetadataFormat. The following keys register a container that the metadata reader supports, and the data needed to read from that container. 
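The container registration listing itself is not reproduced above. Purely as an illustrative sketch (the subkey layout, GUID placeholders and hex values are assumptions rather than a copy of the SDK listing), a reader's registration for one container might look something like the following, where the Pattern and DataOffset values are the ones described next:

[HKEY_CLASSES_ROOT\CLSID\{Reader CLSID}\Containers\{Container GUID}]
"Position"=dword:00000000
"Pattern"=hex:38,42,49,4d,04,04
"Mask"=hex:ff,ff,ff,ff,ff,ff
"DataOffset"=dword:00000006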
Each container supported by the reader must be registered in this way. The Pattern key describes the binary pattern that is used to match the metadata block to the reader. When defining a pattern for your metadata reader, it should be reliable enough that a positive match means the metadata reader can understand the metadata in the metadata block being processed. The DataOffset key describes the fixed offset of the metadata from the block header. This key is optional and, if not specified, means that the actual metadata cannot be located using a fixed offset from the block header. Metadata Writers The metadata writer registration also includes keys that describe how to write out the header preceding the metadata content embedded in a container format. As with the reader, a container format can be an image format or another metadata block. The following keys register a container that the metadata writer supports, and the data needed to write the header and metadata. Each container supported by the writer must be registered in this way. The WriteHeader key describes the binary pattern of the metadata block header to be written. This binary pattern coincides with the metadata format's reader Pattern key. The WriteOffset key describes the fixed offset from the block header at which the metadata should be written. This key is optional and, if not specified, means that the actual metadata should not be written out with the header. All metadata handlers must be digitally signed to participate in the WIC discovery process. WIC will not load any handler that is not signed by a trusted certificate authority. For more information on digital signing, see Introduction to Code Signing. Special Considerations The following sections include additional information you must consider when creating your own metadata handlers. PROPVARIANTS WIC uses a PROPVARIANT to represent a metadata item for both reading and writing. A PROPVARIANT provides a data type and data value for a metadata item used within a metadata format. As the writer of a metadata handler, you have a lot of flexibility on how data is stored in the metadata format and how data is represented within a metadata block. The following table provides guidelines to help you decide on the appropriate PROPVARIANT type to use in different situations. To avoid redundancy in representing array items, do not use safe arrays; use only simple arrays. This reduces the work an application needs to perform when interpreting PROPVARIANT types. Avoid using VT_BYREF and store values inline whenever possible. VT_BYREF is inefficient for small types (common for metadata items) and does not provide size information. Before using a PROPVARIANT, always call PropVariantInit to initialize the value. When you are finished with the PROPVARIANT, always call PropVariantClear to release any memory allocated for the variable. 8BIM Handlers When writing a metadata handler for an 8BIM metadata block, you must use a signature that encapsulates both the 8BIM signature and the ID. For example, the native 8BIMIPTC metadata reader provides the following registry information for reader discovery: The 8BIMIPTC reader has a registered pattern of 0x38, 0x42, 0x49, 0x4D, 0x04, 0x04. The first four bytes (0x38, 0x42, 0x49, 0x4D) are the 8BIM signature, and the last two bytes (0x04, 0x04) are the ID for the IPTC record. So, to write an 8BIM metadata reader for resolution information, you would need a registered pattern of 0x38, 0x42, 0x49, 0x4D, 0x03, 0xED. 
Again, the first four bytes (0x38, 0x42, 0x49, 0x4D) are the 8BIM signature. The last two bytes (0x03, 0xED), however, are the resolution information ID as defined by the PSD format. Related topics - Conceptual - Windows Imaging Component Overview - WIC Metadata Overview - Metadata Query Language Overview - Overview of Reading and Writing Image Metadata - How-to: Re-encode a JPEG Image with Metadata - Other Resources - How to Write a WIC-Enabled CODEC
http://msdn.microsoft.com/en-us/library/windows/apps/ee719795(v=vs.85).aspx
CC-MAIN-2014-49
en
refinedweb
User talk:Robstew From Uncyclopedia, the content-free encyclopedia edit Welcome! Hello, Robst!) 04:02, January 25, 2013 (UTC) edit Userspace Hello and welcome to Uncyclopedia! The thing about the sandbox is that it is not guaranteed to persist. If you want your stuff to last long enough to whip it into shape as a real Uncyclopedia article, put it in your userspace. For example, copy it out of the sandbox and create User:Robstew/Subaru and paste the text there. Cheers! Spıke ¬ 01:29 27-Jan-13 I see you call for suggestions. Here are some. - Section 1: "Is considered to be" are words that don't do anything. I wrote about stuff like this once at User:SPIKE/Cliches-1 which might help you write. - Section 2: I don't know who these guys are. Just in general, though, they should be famous people, and just putting them in a list isn't funny unless what you write is, and unless it relates to real life. - Section 3: Shit jokes and fart jokes aren't ever funny, especially if the joke is missing. - Section 4: How to avoid the STI: We have a namespace called HowTo:for this kind of writing (instruction guides). The normal article should look like an encyclopedia article and this type of writing doesn't fit. A lot to think about; I hope that helps. Spıke ¬ 01:36 27-Jan-13 Thanks^ I'll keep that in mind. Will edit. Hope you can see this, Jesus Christ this place is a mess of communication.--Robstew (talk) 03:07, January 27, 2013 (UTC) - I came over here to say almost identical things. And a “thank you” for the poem. Was that page original or has it been derived from elsewhere? (Derivations are fine, by the way. Just can't do a complete copypasta without attribution, depending in the source. And rewriting something enough to make it derivative work is a doddle.) • Puppy's talk page • 01:40 27 Jan Not a copy paste, however the pictures are basically random files I found in the dark. Unfortunately, since I am typing on mobile I can't upload my own photos or web files. Is that a no-no? The text itself is based on fact, but I have to link them to a source. What I have now is just out of what I already know. I probably will link some sources though. --Robstew (talk) 02:20, January 27, 2013 (UTC) - Uploading images from an iDevice (or similar) is not a no-no, just technically a pain in the arse. You have to first upload them to flickr, and then import them from there to here. I'd started putting together a "HowTo" on that ages ago, including a couple of rather nifty tricks, but it's complex. If you have an image found on another site somewhere then hit me on my talk page with the URL and I can get them on here. - Having said that there are a few bits of copyright law that we have to be mindful of when uploading images. For the most part, if you're using the image in an article, you can upload comfortably as being protected for the purposes of parody. (So upload away, but try not to use the site as an image host.) • Puppy's talk page • 02:54 27 Jan - Hello as well. No, everything is fine and all is allowed, except some stuff that Puppy knows about. No need to link anything to a source here, we are a satirical rendition of wikipedia and sources are for those with brains (although a few of us also edit and write at wikipedia). Good to meet you, and don't let serious notes on your talkpage make you into a serious writer, even here. Check out our Rules, and pay particular attention to number three (well, all of them, but number three). Aleister 2:58 27-1-'13 Thanks to all. I'll Definately look into all of that. 
Being a new member makes everything a bit confusing. --Robstew (talk) 03:07, January 27, 2013 (UTC) - We're here to confusehelp! • Puppy's talk page • 03:13 27 Jan edit Deleted article I'm not too sure about the history of the main space page, but there is a mirror site that holds a lot of our older articles. I located this: which is pretty dreadful and not much use, because it's about a different topic. You can also ask an admin to restore a deleted page so you can see what was there, but I'd advise against it. You can always get more detailed feedback via Pee review, but it takes ages for us to go through the queue there. Quicker to just ask someone directly. If you have a look at HoS you can see some of the more popular stuff that the three guys above have written. We all have different styles, so a little reading of our stuff will give you an idea of what we write like. A lot of the users listed there have moved on to other things though. Special:Recentchanges will give you a good idea of who is still around, and what's still happening. • Puppy's talk page • 04:04 27 Jan edit Subaru Just took a quick look at the article, and it appears you have something against a specific person (Ken Block, I believe). This is generally frowned upon, unless it is a famous person, in which case making fun of something that the celebrity has done in public, or is otherwise known for, is fine. I am not familiar with this person, so something describing who this person is and why they are notable or famous would be helpful (for instance, if they are a sports figure, mention that at least once). Their name should be in Wikipedia at least, or come up as notable in a Google search (that is, more than a Facebook page and a few people finder pages on them). ---- Simsilikesims(♀UN) Talk here. 04:27, January 27, 2013 (UTC) Ken Block is a rather famous rally racing driver, I'll elaborate on him in an edit. Look him up on youtube, some of his "Gymkhana" videos are pretty cool. He is not, however a good rally driver, and is known to be cocky off the track. --Robstew (talk) 04:35, January 27, 2013 (UTC) Thanks for the heads up. - I just made an edit to that page. I apologise if I edit conflicted. Often when we're editing a page we'll add comments in the edit summary, which you can see in the history tab here and will also be at the top of the page. You'll also be able to see recent edits to pages you watch on Special:Watchlist, which will also show you the summary notes. (I talk too much. Going away again now.) • Puppy's talk page • 05:44 27 Jan That's fine, like I said, any edits are welcome. I should have looked at my messages. As you can see, while I have been relatively successful in uploading pictures, I am experiencing some difficulty organizing them (Captions, etc.).--Robstew (talk) 05:50, January 27, 2013 (UTC) - WP:Wikipedia:Picture tutorial will give you the info you need, and then a whole bunch more. You're doing well with them so far though, so don't stress. • Puppy's talk page • 06:19 27 Jan Thanks, I think that's all the pictures the article can hold without looking too choppy. Anyway, i'm gonna be offline for awhile, so if anyone sees any cleanup to be done feel free to take action if you so desire. --Robstew (talk) 06:26, January 27, 2013 (UTC) edit Infobox I have looked in on the day's changes. - One thing you seem to be trying to do is cram an "Infobox" into a thumbnail. 
For God's sake, find a template and all you will have to do is fill out the fields, and it will work correctly on everyone's browser too. Perhaps Puppy can find you the right template for this type of article. - Another thing you seem to be trying to do is give us dollops of your own personal opinion. Subaru deserves to be bowed down to, Ken Block is an asshole, Pastrana has a name that sounds like something else, etc. This never works unless you can really make it clever. I know you're not done yet, but you need some notion about where you're taking all this--especially when we get to Section 6 and you are bandying around liberals and lesbians as if merely writing down the names of the groups were a joke. What do they have to do with Subarus? What is the point you intend to make? More stuff for you to think about; keep at it! Spıke ¬ 23:28 27-Jan-13 - Thank you for the advice, Spike. looking at wikia's page on formatting didn't mention anything on an infobox, I'll look a little harder. On the personal opinion issue, I'll try to type some more stuff about each celebrity (maybe a section for each?) and modify the liberal/lesbian/mitsubishi relationship. Over the next few days, I'm going to change my focus to the text itself as today I have been distracted by getting pictures aligned.Thanks! --Robstew (talk) 23:39, January 27, 2013 (UTC) I've recoded your initial chart using {{Infobox}} itself. Click on that link to see the instructions: There are options that let you do a whole lot more with this. Spıke ¬ 23:58 27-Jan-13 It is shaping up! I like the paragraphs of prose much more than just making a bullet point and leaving it at that. But regarding your most recent edit, please don't talk directly to the reader (unless doing so is very funny). Ordinarily, it breaks the con that this is an encyclopedia. It's like an actor "breaking the fourth wall" of the stage, that is, talking to the audience or revealing that he is in a play. It is an available technique but you have to be sure you know why you're using it. Cheers. Spıke ¬ 02:10 28-Jan-13 And here is your morning nagging! On Subaru, quotations are better if you invent a real person, and bonus points if he never would have said what you have him saying, or if he said something very similar but certainly didn't mean what you said he meant. On C-130, please learn the difference between it's and its immediately! If you cannot substitute "it is" then you cannot write it's. Happy editing! Spıke ¬ 15:48 28-Jan-13 Alright. I'll look into the grammar issue. That article is just something I whipped together, so I have a lot of refining to do. On the quotations, I'll see who I can find. Thanks again!--Robstew (talk) 15:52, January 28, 2013 (UTC) edit Subaru, again It's not quite ready for mainspace. It still reads a little too much like you are trying to sell the Subaru to other dyed-in-the-wool Subaru fans, to praise its greatness, and doesn't have enough comedy for the guy whose Subaru has just seized up. You have a lot of photos, and they are backing up and crowding because you haven't written enough text. - Given that you have an Infobox with a photo, you don't need the usual initial photo, especially when its caption refers to text somewhere else. Move this photo to where it's referenced. - Remove the heading ==Subaru== so the following paragraph just becomes the introduction. (The Table of Contents will follow it). - In Ken Block, you have a bulleted list in the middle of nowhere, a list without an introduction to tell us what it's a list of. 
In Pastrana, you have a list inside a list. Why? - Regarding "Rally History": I'm 11 and what's a rally? No, I'm not 11, but this jumps in with no introduction. - Beware? Go to the bookcase, pull out any encyclopedia, and show me any article that has a heading Beware. Happy more editing! Spıke ¬ 23:23 28-Jan-13 Only you moved it first! Regarding your Change Summary, you have indeed done all you could--except check this talk page!!! Now, if you had waited for me to comment, I could have moved this without creating redirects. You have created several--they make the previous names continue to succeed at retrieving the article, but you must now ask for them to be deleted. So go to UN:QVFD and add the following entries: {{Redirect|User:Robstew/Subaru}} - moved to mainspace {{Redirect|Uncyclopedia:Subaru}} - created in error I've added Subaru to {{Cars}} to fix the template at the bottom of the article. Spıke ¬ 23:32, 23:36 28-Jan-13 - Thanks for the help Spike. Still working on getting rid of those redirects. --Robstew (talk) 23:38, January 28, 2013 (UTC) - I took care of those redirects (I think). I copy/pasted the entries into today's date section.--Robstew (talk) 23:43, January 28, 2013 (UTC) Looks correct at QVFD. Welcome to mainspace! Did I mention? please start paragraphs on talk pages with one or more : characters if necessary to indent your posts from those of other people. Be seeing you! Spıke ¬ 00:00 29-Jan-13 edit Subaru2 Oh God, you are creating redirects all over the place. Having moved your article to mainspace, you could edit it right there in mainspace. Whatever; just clean up after yourself (at QVFD). Spıke ¬ 00:05 29-Jan-13 - I think I managed to clean up my tracks. Good lord this place is confusing. By redirects, I thought they meant Double links or infinite loop links. Oh well. Thanks again! --Robstew (talk) 00:17, January 29, 2013 (UTC) I think that, based on your last move, the mainspace page Subaru now contains nothing but the text #REDIRECT [[User:Robstew/Subaru2]] which we don't want to be permanent. A double redirect is one of these that points to a page that points to another page. We don't want this to ever happen. Wikis are indeed designed for the coder as well as the writer; but the result is pretty pages with universal publication, which is nice. Spıke ¬ 00:23 29-Jan-13 OK, here's more. - Please end your love affair with <br clear=all>. You are trying too hard to manage small details of the layout of the page; you should concentrate on content and let Uncyclopedia lay it out the same way as it does other pages, unless gaping problems emerge. - Please don't get chatty with the reader; it breaks the resemblance to an encyclopedia article. (See above under "fourth wall.") - As you are seeing above, please see all of the above, and do the stuff I told you to do that you haven't done yet. A global search for "it's" would be in order. Spıke ¬ 02:10 29-Jan-13 Right. I'll get on that. I did have gaping problems with photos and text, I should probably just type more. I looked up Its vs. It's on google, I get the difference now. My C-130 article is clear of those problems I think. Thanks again, will edit.--Robstew (talk) 02:24, January 29, 2013 (UTC) edit Home-made Please don't pepper your article with external links. This site is for original comedy creations, not a jumping-off point to other stuff that exists on the Web. Thanks. Spıke ¬ 01:52 31-Jan-13 - Sorry, that's just what the ICU template said. Will edit. Should I link it to other articles? Is it long enough? 
How do I categorize it? I so confuse... :( --Robstew (talk) 01:58, January 31, 2013 (UTC) Yeah, that message means links to other Uncyclopedia articles with the double-bracket coding. Ignore what it says about red-links (links to other articles that don't exist), as there were none when RAHB put it into your article. To make it look more like a regular Uncyclopedia article, you should use {{Q}} rather than BLOCKQUOTE for quotations; see most other articles for the correct form. No, the article doesn't seem long enough to me. And "Home-made crap" ought to go away too, as it is just a list of short things. For categories, you can certainly use Category:Phrases. Spıke ¬ 02:27 31-Jan-13 Now, over in C-130, I see you have discovered how to refer to the user's name. In my opinion, this pranks the reader (convincing him that somehow you wrote the article with him personally in mind) to amuse yourself, when you should be trying to amuse the reader. We are not all agreed here on this point, but think about it. Spıke ¬ 11:51 31-Jan-13 edit The use of templates Given SPIKE's suggestions above, and you're discovering the use of templates, I figured I'd give you a little help. There are two kinds of template. Ones that use parameters, and ones that don't. For instance: “This is a quote” Some templates have more functionality that goes beyond the standard. {{RL}} for instance is designed to clean up redlinks on a page without removing actual links. (Note when we talk about links we're talking about links on site. External links are a different matter.) - {{RL}} {{RL|This page does not exist}} - This page does not existThis page does not exist {{RL|This page does exist}} When I started editing here I barely used templates. I had a bit of HTML knowledge and did much the same as yourself. Templates do makes a bundle of things easier though. The best way to see how a page you like the look of has been formatted. Read the code and steal it for yourself. The more you learn about coding the funkier your articles can become. But these cosmetic things are tricks that can either work or fail badly. Unless your article is designed around a funky look, you need to focus on content. These few articles illustrate what I mean: - Love this uses barely any templates, but instead focuses on good writing to make the funny - Stereotype uses a couple of templates but most of the funky coding is just the table at the base (which I just stole the look from a catalogue I was reading) - Microsoft knowledge base uses very few templates, but a lot of coding, and one funky image - Twitter uses some very complex coding but mostly hidden inside a bundle of templates. - Game:Alone in the dark is lots of coding and some very complex templates, and some funky images. Each of these articles has been featured, but each of them use different techniques to make with the funny. Oddly, the most complex one is definitely the latter (and you really don't want to know how long it took to put that together), but what makes that funny is not the funky coding, but the actual text itself. All the coding does is mimic a “frame” for the content. Of them my personal favourite is Love, because it is textually funny. TL;DR version: Templates and code can help, and in some places need to be there, but the focus really needs to be on making the funny first. (And if ever you need a funky code, there are people here who can do it.) • Puppy's talk page • 12:51 31 Jan edit Hello And good to meet you. Keep up the good work as long as you're having fun. 
Funnybony, one of our writers, calls Uncy a playground for adults, and when I become an adult I will play here too. Aleister 12:00 31-1-'13
http://uncyclopedia.wikia.com/wiki/User_talk:Robstew
CC-MAIN-2014-49
en
refinedweb
iVisibilityCuller Struct Reference [Visibility] This interface represents a visibility culling system. More... #include <iengine/viscull.h> Detailed Description This interface represents a visibility culling system. To use it you first register visibility objects (which are all the objects for which you want to test visibility) to this culler. A visibility culler can usually also support shadow calculation. Main creators of instances implementing this interface: - Dynavis culler plugin (crystalspace.culling.dynavis) - Frustvis culler plugin (crystalspace.culling.frustvis) Main ways to get pointers to this interface: Main users of this interface: Definition at line 101 of file viscull.h. Member Function Documentation Start casting shadows from a given point in space. What this will do is traverse all objects registered to the visibility culler. If some object implements iShadowCaster then this function will use the shadows casted by that object and put them in the frustum view. This function will then also call the object function which is assigned to iFrustumView. That object function will (for example) call iShadowReceiver->CastShadows() to cast the collected shadows on the shadow receiver. Intersect a segment with all objects in the visibility culler and return them all in an iterator. If accurate is true then a more accurate (and slower) method is used. Intersect a beam using this culler and return the intersection point, the mesh and optional polygon index. If the returned mesh is 0 then this means that the object belonging to the culler itself was hit. Some meshes don't support returning polygon indices in which case that field will always be -1. If accurate is false then a less accurate (and faster) method is used. In that case the polygon index will never be filled. Intersect a segment with all objects in the visibility culler and return them all in an iterator. This function is less accurate then IntersectSegment() because it might also return objects that are not even hit by the beam but just close to it. Parse a document node with additional parameters for this culler. Returns error message on error or 0 otherwise. Precache visibility culling. This can be useful in case you want to ensure that render speed doesn't get any hickups as soon as a portal to this sector becomes visible. iEngine->PrecacheDraw() will call this function. Register a visibility object with this culler. If this visibility object also supports iShadowCaster and this visibility culler supports shadow casting then it will automatically get registered as a shadow caster as well. Same for iShadowReceiver. Setup all data for this visibility culler. This needs to be called before the culler is used for the first time. The given name will be used to cache the data. Unregister a visibility object with this culler. Mark all objects as visible that intersect with the given bounding sphere. Notify the visibility callback of all objects that are in the volume formed by the set of planes. Can be used for frustum intersection, box intersection, .... - Remarks: - Warning! This function can only use up to 32 planes. Do the visibility test from a given viewpoint. This will first clear the visible flag on all registered objects and then it will mark all visible objects. If this function returns false then all objects are visible. Notify the visibility callback of all objects that intersect with the given bounding sphere. Mark all objects as visible that are in the volume formed by the set of planes. 
Can be used for frustum intersection, box intersection, .... Warning! This function can only use up to 32 planes. Mark all objects as visible that intersect with the given bounding box. The documentation for this struct was generated from the following file: Generated for Crystal Space 1.4.1 by doxygen 1.7.1
http://www.crystalspace3d.org/docs/online/api-1.4/structiVisibilityCuller.html
CC-MAIN-2014-49
en
refinedweb
31 October 2012 08:33 [Source: ICIS news] SINGAPORE (ICIS)--China's Haohua Yuhang Chemical has shut its polyvinyl chloride (PVC) unit at Qinyang on 30 October for a regular maintenance turnaround, a company source said. “This is a regular turnaround,” the source said, adding that it will last for about five days. Haohua Yuhang has a total PVC capacity of 500,000 tonnes/year, with 400,000 tonnes/year at Qinyang. The producer sold carbide-based PVC during the turnaround at yuan (CNY) 6,500/tonne ($1,042/tonne) EXW (ex-works) on 31 October, according to Chemease. The shutdown of the unit may not have any significant impact on the local PVC market because of end-users’ weak demand, a local PVC market player said.
http://www.icis.com/Articles/2012/10/31/9609080/chinas-haohua-yuhang-chemical-shuts-pvc-unit-on-30.html
CC-MAIN-2014-49
en
refinedweb
hello, i am working on a tempconverter program, i have tried a number of things to get this program to work and everytime i try something it seems to either mess something else up, or only do half of what i need it to do. could anyone give me some hints or insight as to what i am doing wrong? thanks Code: double fTOc(double); //Fahrenheit to Celsius double cTOf(double); //Celsius to Fahrenheit int main() { double temp;//temperature entered char scale[1];//identify scale used (C or F) double absZeroC = -273.15;//variable to easier identify absolute zero in Celsius double absZeroF = -459.67;//variable to easier identify absolute zero in Fahrenheit printf("Enter the temperature followed by F or C( Ex: \"75 F\"):"); scanf("%lf%s", &temp, scale); if (scale == "C") if (temp > absZeroC) printf("\nTemperature in Fahrenheit is %f.\n", cTOf(temp)); else if ( temp < absZeroC) printf("\nTemperature %f is less than absolute zero %f", temp, absZeroC); if (scale == "F") if (temp > absZeroF) printf("\nTemperature in Celsius is %f.\n", fTOc(temp)); else if (temp < absZeroF) printf("\nTemperature %f is less than absolute zero %f", temp, absZeroF); else printf("\nInvalid scale entry! Use C for Celsius or F for Fahrenheit!\n\n"); system("pause"); return 0; } double fTOc(double f)//fah to celsius { return (5.0 / 9.0) * (f - 32); } double cTOf(double c)//celsius to fah { return ((9.0 / 5.0) * c) + 32; }
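For reference, here is one possible corrected version (a sketch, not the only way to fix it, with the missing #include lines added). The main problems in the original are that scale == "C" compares a pointer with a string literal instead of comparing characters (or using strcmp), that %s into a one-element char array overflows it, and that the unbraced if/else chain makes the "invalid scale" message fire even for valid input.

#include <stdio.h>
#include <ctype.h>

double fTOc(double f) { return (5.0 / 9.0) * (f - 32); }   /* Fahrenheit to Celsius */
double cTOf(double c) { return ((9.0 / 5.0) * c) + 32; }   /* Celsius to Fahrenheit */

int main(void)
{
    double temp;
    char scale;                        /* a single character, not a string */
    const double absZeroC = -273.15;
    const double absZeroF = -459.67;

    printf("Enter the temperature followed by F or C (Ex: \"75 F\"): ");
    if (scanf("%lf %c", &temp, &scale) != 2)   /* the space before %c skips whitespace */
        return 1;
    scale = (char) toupper((unsigned char) scale);

    if (scale == 'C') {                /* compare characters, not pointers */
        if (temp >= absZeroC)
            printf("Temperature in Fahrenheit is %f.\n", cTOf(temp));
        else
            printf("Temperature %f is less than absolute zero %f.\n", temp, absZeroC);
    } else if (scale == 'F') {
        if (temp >= absZeroF)
            printf("Temperature in Celsius is %f.\n", fTOc(temp));
        else
            printf("Temperature %f is less than absolute zero %f.\n", temp, absZeroF);
    } else {
        printf("Invalid scale entry! Use C for Celsius or F for Fahrenheit!\n");
    }
    return 0;
}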
http://cboard.cprogramming.com/c-programming/112968-help-temp-converter-printable-thread.html
CC-MAIN-2014-49
en
refinedweb
java.lang.Object javax.faces.component.UIComponentjavax.faces.component.UIComponent javax.faces.component.UIComponentBasejavax.faces.component.UIComponentBase org.apache.myfaces.custom.globalId.GlobalIdorg.apache.myfaces.custom.globalId.GlobalId public class GlobalId A simple container-component that causes its child components to render a clientId value without any prefix. Important: this component works only when run in a JSF-1.2 (or later) environment. When run in a JSF-1.1 environment it will not cause an error, but will instead act like a NamingContainer itself, ie will add its own id to the child component's clientId. Every JSF component has a "clientId" property; when the component is rendered, many components output this as part of the rendered representation. In particular, when rendering HTML, many components write an "id" attribute on their html element which contains the clientId. The clientId is defined as being the clientId value of the nearest NamingContainer ancestor plus ":" plus the component's id. The prefixing of the parent container's clientId is important for safely building views from multiple files (eg using Facelets templating or JSP includes). However in some cases it is necessary or useful to render a clientId which is just the raw id of the component without any naming-container prefix; this component can be used to do that simply by adding an instance of this type as an ancestor of the problem components. This works for all JSF components, not just Tomahawk ones. Use of this component should be a "last resort"; having clientIds which contain the id of the ancestor NamingContainer is important and useful behaviour. It allows a view to be built from multiple different files (using facelets templating or jsp includes); without this feature, component ids would need to be very carefully managed to ensure the same id was not used in two places. In addition, it would not be possible to include the same page fragment twice. Ids are sometimes used by Cascading Style Sheets to address individual components, and JSF compound ids are not usable by CSS. However wherever possible use a style class to select the component rather than using this component to assign a "global" id. Ids are sometimes used by javascript "onclick" handlers to locate HTML elements associated with the clicked item (document.getById). Here, the onclick handler method can be passed the id of the clicked object, and some simple string manipulation can then compute the correct clientId for the target component, rather than using this component to assign a "global" id to the component to be accessed. This component is similar to the "forceId" attribute available on many Tomahawk components. Unlike the forceId attribute this (a) can be used with all components, not just Tomahawk ones, and (b) applies to all its child components. Note that since JSF1.2 forms have the property prefixId which can be set to false to make a UIForm act as if it is not a NamingContainer. This is a good idea; the form component should probably never have been a NamingContainer, and disabling this has no significant negative effects. public static final java.lang.String COMPONENT_FAMILY public static final java.lang.String COMPONENT_TYPE public GlobalId() public java.lang.String getFamily() getFamilyin class javax.faces.component.UIComponent public java.lang.String getContainerClientId(javax.faces.context.FacesContext facesContext) getContainerClientIdin class javax.faces.component.UIComponent
http://myfaces.apache.org/sandbox-project/tomahawk-sandbox12/testapidocs/org/apache/myfaces/custom/globalId/GlobalId.html
CC-MAIN-2014-49
en
refinedweb
Code Focused Ondrej Balas continues his discussion on refactoring your code for dependency injection, this time focusing on the composition root pattern. Building upon my article from last month, "How To Refactor for Dependency Injection," I'll continue the discussion by focusing on a common design pattern called composition root. The composition root pattern is just the concept of having a single spot in your application in which the application is composed and the object graph created. The actual mechanism for this is irrelevant, though commonly a dependency injection container is used. The Composition Root Using a composition root will differ based on the type of application, as it should be as close to the application's entry point as possible. The idea is that the composition root will contain all of the configuration necessary to compose all the application pieces. Take, for example, an application that models a fictional notification service. Figure 1 shows all the objects and interfaces I'll be using. The red lines show dependencies, and the blue lines show implementations of interfaces. In many applications, an object graph like that in Figure 1 would be created by having each object instantiate its own dependencies, making maintenance more difficult than it needs to be. Using the composition root pattern, both the binding of implementations to abstractions and the dependency resolution should take place in one spot. A sample composition root is shown in Listing 1. public class CompositionRoot { public NotificationEngine CreateNotificationEngine() { IDataStream dataStream = new SomeDataStream(); IEmailCredentialsProvider emailCredentialsProvider = new EmailCredentialsProvider(); IEmailSettingsProvider emailSettingsProvider = new EmailSettingsProvider(); IEmailSender emailSender = new EmailSender(emailCredentialsProvider, emailSettingsProvider); IConfigurationReader configurationReader = new ConfigurationReader(); ILogger logger = new FileSystemLogger("somepath.txt"); return new NotificationEngine(dataStream, emailSender, configurationReader, logger); } } And then it could be used like this: static void Main(string[] args) { CompositionRoot root = new CompositionRoot(); NotificationEngine engine = root.CreateNotificationEngine(); engine.SendNotification(); } Refactoring to or writing a new application with a composition root like this is a great first step toward more maintainable code. It also allows the code to satisfy the dependency inversion principle, which specifies that higher-level objects (such as NotificationEngine in the example) should depend on abstractions instead of concrete implementations. This allows the composition root to perform dependency injection, giving the concrete implementations to the NotificationEngine. While this is a perfectly valid way to implement a composition root, when most people think dependency injection, what they really want is a dependency injection container, also commonly referred to as an inversion of control (IOC) container. Dependency Injection Containers There are hundreds of dependency injection containers available, from large, general-purpose frameworks to tiny containers that meet a specific need related to dependency injection. With such a wide variety to choose from, it can be overwhelming to choose the right one. The good news is that when using the composition root pattern, most containers behave the same, with only minor differences in syntax, and can be easily replaced by a different one. 
There are some exceptions to this with dependency injection frameworks like the Managed Extensibility Framework (MEF), which will be discussed in more detail later in this series. Generally when I start a new project, my dependency injection container of choice is an open source framework called Ninject. Ninject is a general-purpose framework with a few drawbacks, but I find it has an easy syntax and a lot of features that make it more pleasant to use than some similar frameworks. Dependency injection containers make it even easier to create a composition root, because they allow you to bind interfaces to implementations, rather than actually create any objects. Instead, Ninject will remember the bindings and then intelligently create objects as they're requested, based on the bindings. The same composition root in Listing 1 changes to look like the composition root in Listing 2 instead. public class CompositionRoot { private IKernel kernel = new StandardKernel(); public CompositionRoot() { kernel.Bind<IDataStream>().To<SomeDataStream>(); kernel.Bind<IEmailCredentialsProvider>().To<EmailCredentialsProvider>(); kernel.Bind<IEmailSettingsProvider>().To<EmailSettingsProvider>(); kernel.Bind<IEmailSender>().To<EmailSender>(); kernel.Bind<IConfigurationReader>().To<ConfigurationReader>(); kernel.Bind<ILogger>().To<FileSystemLogger>().WithConstructorArgument( "filePath", "somepath.txt"); } public NotificationEngine CreateNotificationEngine() { return kernel.Get<NotificationEngine>(); } } In this example, nothing is actually being created inside the constructor. You can think of the Bind statements as being instructions that say, "When this is requested, return one of these." It's only when the Get method is called from within CreateNotificationEngine that objects are actually created. When Get is called, the Bind instructions given to Ninject cause it to perform the following steps: Notice that in the first step, Ninject didn't have a Bind instruction for NotificationEngine. When this happens, Ninject automatically self-binds and treats the class as if it were bound to itself. Not all dependency injection frameworks do this by default, so if you're using a different framework you may need to explicitly bind NotificationEngine to itself. Other Ways to Configure the Dependency Injection Container In the example, I used code inside the constructor of the CompositionRoot to configure the Ninject bindings. While this method is a great way to get started with containers, many containers also offer alternative ways for setup. One common alternative is to use XML configuration, which has the added benefit of being modifiable without needing to recompile the application. Another method is to use automatic registration, in which the container uses naming conventions to perform bindings automatically, rather than needing them to be specified. I consider these to be advanced scenarios and outside of the scope of this introduction to containers, but it's important to know they exist. Setting these up can vary wildly between containers, so for further information, I recommend looking at the documentation for your container of choice. Lifecycle Management One often-overlooked benefit of dependency injection containers is that they allow control of the lifecycle of the objects they manage. In my examples thus far, every time a NotificationEngine is requested, it and all of its dependencies are recreated. 
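You can see that recreation directly by resolving the same service twice and comparing references. The small sketch below reuses the ILogger and FileSystemLogger types and the binding from Listing 2, so it is illustrative rather than standalone.

// Sketch: demonstrating the default (transient) lifecycle with the binding from Listing 2.
using System;
using Ninject;

class LifecycleDemo
{
    static void Main()
    {
        IKernel kernel = new StandardKernel();
        kernel.Bind<ILogger>().To<FileSystemLogger>()
              .WithConstructorArgument("filePath", "somepath.txt");

        ILogger first = kernel.Get<ILogger>();
        ILogger second = kernel.Get<ILogger>();

        // Prints False: each Get<ILogger>() built a brand new FileSystemLogger.
        Console.WriteLine(ReferenceEquals(first, second));
    }
}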
This happens because the default lifecycle of objects in Ninject is transient, which may not be the desired behavior. For example, if more than one NotificationEngine is used, it might make sense for them to all share a single logger. To do that, you simply need to tell Ninject which lifecycle to use at bind time, like this: kernel.Bind<ILogger>().To<FileSystemLogger>().InSingletonScope().WithConstructorArgument( "filePath", "somepath.txt"); InSingletonScope added on to the binding will tell Ninject to only ever create a single instance of that object, no matter how many times it's requested from the container. Commonly available lifecycles include transient scope (the default, where a new instance is built for every request), singleton scope (a single instance is created and reused), thread scope (one instance per thread) and request scope (one instance per web request). For the full list, as well as usage examples, please refer to the Ninject documentation on Object Scopes. Alternative Dependency Injection Containers As I mentioned already, though Ninject is personally my container of choice, there are many other suitable alternatives. Some of the more popular ones include Autofac, Castle Windsor, StructureMap and Unity. As this series continues, I'll explore some of the pros and cons of the various containers and how they compare to Ninject, as well as get into more dependency injection-related patterns and advanced usage scenarios. About the Author Ondrej Balas is the owner of UseTech Design, a small development company based in Troy, Mich., that focuses primarily on the .NET Framework and other Microsoft technologies. Like many other developers, he began writing code at a young age and hasn't stopped. Most of the work he does today is in big data, algorithm design and software architecture.
http://visualstudiomagazine.com/articles/2014/06/01/how-to-refactor-for-dependency-injection.aspx
CC-MAIN-2014-49
en
refinedweb
In this section, you will learn how to read a file using Scanner class. Description of code: J2SE5.0 provides some additional classes and methods that has made the programming easier. In comparison to any input - output stream, the class java.util.Scanner perform read and write operations easily. It also parses the primitive data. Scanner class gives a great deal of power and flexibility. You can see in the given example, we have created a Scanner object and parses the file through the File object. It then calls the hasNextLine() method. This method returns true if another line exists in the Scanner's input until it reaches the end of the file. The nextLine() method returns a string on a separate line until it reaches the end of the file. Here is the code: import java.io.*; import java.util.Scanner; public class FileScanner { public static void main(String[] args) throws Exception { File file = new File("C:/file.txt"); Scanner scanner = new Scanner(file); while (scanner.hasNextLine()) { String line = scanner.nextLine(); System.out.println(line); } } } In the above code, instead of any input-output stream, we have used Scanner class to read the file. Output:
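Running the program simply prints each line of C:/file.txt to the console. A slightly more defensive variant of the same idea (not part of the original example, and requiring Java 7 or later rather than J2SE 5.0) uses try-with-resources so that the Scanner, and with it the underlying file handle, is always closed:

import java.io.File;
import java.io.FileNotFoundException;
import java.util.Scanner;

public class FileScannerSafe {
    public static void main(String[] args) throws FileNotFoundException {
        File file = new File("C:/file.txt");
        // try-with-resources closes the Scanner even if an exception is thrown while reading
        try (Scanner scanner = new Scanner(file)) {
            while (scanner.hasNextLine()) {
                System.out.println(scanner.nextLine());
            }
        }
    }
}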
http://www.roseindia.net/tutorial/java/core/files/filescanner.html
CC-MAIN-2014-49
en
refinedweb
iCelGame Struct Reference A networked game. More... #include <physicallayer/network.h> Inheritance diagram for iCelGame: Detailed Description A networked game. It maintains the main data of a game: game type, game info and pointers to the local client and server. Definition at line 50 of file network.h. Member Function Documentation Return the local client if available, 0 otherwise. Return the general info of the game. Return the local server if available, 0 otherwise. Return the type of the game. Return true if a local client is available in this process, false otherwise. Return true if a local server is available in this process, false otherwise. The documentation for this struct was generated from the following file: Generated for CEL: Crystal Entity Layer 2.0 by doxygen 1.6.1
http://crystalspace3d.org/cel/docs/online/api-2.0/structiCelGame.html
CC-MAIN-2014-49
en
refinedweb
EL expressions are one of the main driving forces for JavaServer Faces. Most dynamic characteristics of pages and widgets are governed by EL expressions. In JSF 1.x, there are some limitations for EL expressions that can at times be a little frustrating. One of the limitations is the fact that no custom functions or operators can be used in EL expressions. Quite some time ago, I wrote this article – – to demonstrate a trick for using a Map interface implementation to access custom functionality from EL expression after all. However, things can even be better. Rather than jumping through the somewhat elaborate hoops of implementing the Map and consructing complex EL expressions, there are two other approaches. One is to create a custom EL Resolver can configure it in the faces-config.xml. Another is discussed in this article. It involves registering custom Java methods as eligible for use in EL expressions. And that really makes life a lot easier. It allows us to create EL expressions such as: #{cel:concat (cel:upper( bean.property), cel:max(bean2.property, bean3.property), cel:avg(bean4.list))} or #{cel:substr(bean.property, 1, 5)} Leveraging new custom operators in EL expressions is done in a few simple steps: - Create custom class with static method(s) - Create a tag library (.tld file) - Register each method that should be supported in EL expressions - Add a reference to the tag library’s URI in the jsp:root element for the page - Use the registered functions in EL expressions in the page As a very simple example, let’s take a look at two EL extensions: a concat operator and an upper. 1. Create custom class with static method(s) The class could hardly be simpler: package nl.amis.jsf; public final class ELFun { /** * Method that concattenates two strings. More strings can be concattenated * through nested calls such as * #{cel:concat('a', cel:concat('b','c'))} * * @param first string to concattenate * @param second string to concattenate * @return first and second string concattenated together */ public static String concat(final String first, final String second) { return (first == null ? "" : first) + (second== null ? "" : second); } /** * Function that returns the uppercased rendition of the input. * * to be used in EL expressions like this: * #{cel:upper('a')} * can be combined with other functions like this: * #{cel:concat( cel:upper('a'), cel:upper('B'))} * * @param input string to uppercase * @return input turned to uppercase */ public static String upper(final String input) { return (input== null ? "" : input.toUpperCase()); } } 2. Create a tag library descriptor (.tld file) A TLD file is a straightforward XML document, used to descripe custom JSF UI components, Validators and other extension. And also custom functions. A TLD file is typically located in the WEB-INF directory of the application. An important element of the TLD is the uri. This element is used to identify the Tag Library when referenced from pages. <> </taglib> 3. Register each method that should be supported in EL expressions The TLD file contains a function entry for each operator to be enabled for use in EL expressions. 
For a function we need to indicate the name to be used in EL expressions, a reference to the class that contains the implementation for the function and the exact signature – name, result type and input parameters – for the method that is backing the function: <> <function> <name>concat</name> <function-class>nl.amis.jsf.ELFun</function-class> <function-signature>java.lang.String concat(java.lang.String, java.lang.String)</function-signature> </function> <function> <name>upper</name> <function-class>nl.amis.jsf.ELFun</function-class> <function-signature>java.lang.String upper(java.lang.String)</function-signature> </function> </taglib> 4. Add a reference to the tag library’s URI in the jsp:root element for the page <jsp:root xmlns: 5. Use the registered functions in EL expressions in the page <h:form <h:outputText </h:form> And that really is all there is to it. You could choose to create the Tag Library and custom class(es) in a separate project, deploy it to JAR and associate the JAR file with other JSF projects that then can leverage these custom functions in their EL expressions. (Note: my thanks goes to Robert Willem of Brilman who first introduced me to this functionality) Thanks for sharing. How is it different in JSF 2.0? The only step that is missing is referring .tld in web.xml as facelets tag library.
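As a further illustration of the same pattern (this addition is not in the original code), the #{cel:substr(bean.property, 1, 5)} expression mentioned earlier could be backed by one more static method in the ELFun class, registered in the .tld exactly like the others:

/**
 * Function that returns a substring of the input, for use in EL expressions like
 * #{cel:substr(bean.property, 1, 5)}.
 * Register it in the .tld with:
 *   <function>
 *     <name>substr</name>
 *     <function-class>nl.amis.jsf.ELFun</function-class>
 *     <function-signature>java.lang.String substr(java.lang.String, int, int)</function-signature>
 *   </function>
 */
public static String substr(final String input, final int begin, final int end) {
    if (input == null) {
        return "";
    }
    final int safeEnd = Math.min(end, input.length());
    final int safeBegin = Math.min(Math.max(begin, 0), safeEnd);
    return input.substring(safeBegin, safeEnd);
}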
http://technology.amis.nl/2012/01/17/using-custom-functions-in-el-expressions-in-jsf-1-x/
CC-MAIN-2014-49
en
refinedweb
In this article, I'll show you how to create a custom messagebox user control in Silverlight 3, so that you can use it in any Silverlight application instead of the default messagebox. In order to be able to complete this tutorial, you should know a little about Microsoft Expression Blend, which we are going to use to create the MessageBox user control. I assume that you are familiar with Expression Blend and with its UI. There's no doubt that the messagebox control is used heavily in almost any application, whether desktop or web: you may need to ask the user for something, warn him about a dangerous operation, or just display information to the user, and so on. The control must be very flexible to handle all of these scenarios. Moreover, it must be a modal dialog which prevents users from accessing the background of the page until the dialog is closed. Start Expression Blend and choose the Control Library project template (in order to generate a DLL that can be added to any Silverlight application), then remove the MainControl user control from your solution and add a new Child Window. This child window will be opened as a modal dialog, so we are now ready to change the style of the child window and make it look like the screenshot. In the Objects and Timeline panel, right-click the ChildWindow node, then choose Edit Template, then Edit a Copy. Now we will be able to customize the layout of the child window to meet our requirements. You will realize that the 5th Border is the most important one, because it includes all the UI elements. Expand the 5th Border control and choose the Chrome node; the Chrome node is the header of the messagebox, so we are going to change its background color using a gradient brush. After finishing the header style, we are going to add a background image to give it a better appearance, so add an image in the grid that hosts the Chrome element, or you can just adjust the background color of that grid; it's up to you. At this point, we have finished styling our custom messagebox layout, but we want to display the text message and add some buttons to the body of the messagebox, so back in the child window we are going to add the following items: a TextBlock (for the message text), an Image (for the icon), three Buttons, and a StackPanel. So the final UI tree should look like this in the Objects & Timeline panel. So far we've finished all the design work for the user control, so let's add some options for the user. We are going to add two enums to our namespace (MessageBoxButtons & MessageBoxIcon), in order to enable the user to select the correct message type; for example, he/she may want to display an error message, so we should display the error icon in the message. public enum MessageBoxButtons { Ok, YesNo, YesNoCancel, OkCancel } public enum MessageBoxIcon { Question, Information, Error, None, Warning } One last thing: we want to know which button was clicked by the user, so what about making a public Show(string message, string title) method that returns the DialogResult, exactly like the MessageBox class in Windows Forms? Unfortunately this is not applicable in Silverlight, because in Windows Forms the messagebox waits for a response from the user so that the program can complete processing, but this is not the same case in Silverlight.
The modal dialog will be opened and the rest of the code will be executed as well, so the application is not waiting for the user to close the dialog in order to complete its work. In order to overcome this issue, we are going to handle the close event of the messagebox control so that we can know the clicked button. I declared a delegate with one parameter of type MessageBoxResult in order to know which button was clicked.

//delegate to get the selected MessageBoxResult
public delegate void MessageBoxClosedDelegate(MessageBoxResult result);
//event will be fired in the Close event of the usercontrol
public event MessageBoxClosedDelegate OnMessageBoxClosed;
//property to keep the result of the messagebox
public MessageBoxResult Result { get; set; }

private void MessageBoxChildWindow_Closed(object sender, EventArgs e)
{
    if (OnMessageBoxClosed != null)
        OnMessageBoxClosed(this.Result);
}

First you will need to add the generated DLL from this class library to any Silverlight application. The child window class has a public method called "Show()", so we are going to use it for opening the messagebox; but we want to set its title, text message, icon and number of buttons according to the user's choices. We can pass these parameters either through a public method or through a constructor overload; I chose the second option and created another overload of the default constructor that takes them. So use the following code to open the messagebox:

//displayed message
string msg = "An error has occurred and the operation was cancelled, Are you sure you want to continue?";
//creating new instance from the MessageBoxControl
MessageBoxControl.MessageBoxChildWindow w = new MessageBoxControl.MessageBoxChildWindow("Error", msg, MessageBoxControl.MessageBoxButtons.YesNo, MessageBoxControl.MessageBoxIcon.Error);
//define the close event handler for the control
w.OnMessageBoxClosed += new MessageBoxControl.MessageBoxChildWindow.MessageBoxClosedDelegate(w_OnMessageBoxClosed);
//open the message
w.Show();

I've attached a test application that displays the messagebox dynamically, so that you can test all of its features.
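The snippet above wires up w_OnMessageBoxClosed without showing it; one possible implementation (purely illustrative) just branches on the MessageBoxResult that the control passes back when it closes:

//handle the result chosen in the custom messagebox
void w_OnMessageBoxClosed(MessageBoxResult result)
{
    if (result == MessageBoxResult.Yes)
    {
        //the user confirmed - continue with the cancelled operation
    }
    else
    {
        //the user clicked No or simply closed the dialog - abort
    }
}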
http://www.codeproject.com/Articles/42477/Custom-MessageBox-Control-for-Silverlight-3?fid=1549528&df=90&mpp=10&sort=Position&spc=None&tid=3493615&PageFlow=FixedWidth
CC-MAIN-2014-49
en
refinedweb
To make the handling of Episerver Campaign web services even easier, Episerver provides native APIs that encapsulate all of the web service functionality. Contact customer support for further details. Java The native Java API is based on Axis (tested with version 1.2) and is accessed via a factory class. The following libraries must be embedded: - optivo-broadmail-api*.jar (this library can be found in this ZIP file) - axis* - axis-jaxrpc* - org.apache.commons - commons-discovery* - commons-logging* - javax.mail* - wsdl4j* Example import broadmail.api.soapll.*; import broadmail.api.soapll.factory.*; try { // Obtain a factory WebserviceFactory factory = WebserviceFactory.newInstance(); // From that factory all webservice interfaces are available // As an example we will perform a login and a blacklist check SessionWebservice sessionWebservice = factory.newSessionWebservice(); String session = sessionWebservice.login(1234, "user", "pass"); BlacklistWebservice blacklistWebservice = factory.newBlacklistWebservice(); boolean isBlacklisted = blacklistWebservice.isBlacklisted(session, "test@example.com"); sessionWebservice.logout(session); } catch (WebserviceException exception) { //An error occured exception.print.StackTrace(); } PHP The webservice API can be queried directly and easily using the native PHP SOAP interface (from PHP 5.0.1 and newer). Whenever the API expects binary data (java.langByte[]), these must be submitted as a string. The string must represent the binary data. To read the binary data of a file, you may use the operation file_get_contents(). The following example shows the login and adding of an email address to a recipient list: You can find the below mentioned mandatorId (i.e., the client ID) by performing the following the steps: - Open the Episerver Campaign start menu and, under Administration, click API overview. The API overview window opens. - Switch to the SOAP API tab. Beneath the Client ID heading, you can find the client ID of the client you are currently working in. Sample script for the native SOAP interface (from PHP 5.0.1): $client = new SoapClient(''); $webservice = new SoapClient(''); $session = $client->login($mandatorId, $username, $password); $operation = $webservice->add2($session, $recipientListId, $optinProcessId, $recipientId, $emailAddress, $attributeNames, $attributeValues); $session = $client->logout($session); echo '<pre>'; var_dump($operation); echo '</pre>'; ?> Libraries for older PHP versions If you are using an older PHP version (prior to version 5.0.1), we provide a library and samples in the archive file of this documentation to embed in your PHP. For PHP version 5.0.1 and newer, this library is deprecated, since it comes with a native SOAP client. The following example shows the login and query of the blacklist status of an email address: Sample script for NuSOAP Interface for older PHP Versions: // Include the library require_once('broadmail_rpc.php'); // Create a new factory and login. // 1234 is the mandatorId, "user" and "pass" are credentials $factory = new BroadmailRpcFactory(1234, 'user', 'pass'); // This is how error handling works. You should check the result of the getError() method // after each(!) call to a webservice method if ($factory->getError()) { die('Error during login. Details: '.$factory->getError()); } // Now create a BlacklistWebservice instance ans a call method. 
$blacklistWs = $factory->newBlacklistWebservice();
$isBlacklisted = $blacklistWs->isBlacklisted('you@example.com');
if ($blacklistWs->getError()) {
  die('Error while checking blacklist status. Details: '.$factory->getError());
}

// Don't forget to log out.
$factory->logout();

// Print out the result.
if ($isBlacklisted) {
  echo 'The email address "you@example.com" is blacklisted!';
}

.NET

If you want to use the SOAP API from the .NET framework, all methods that require submitting a multidimensional array, or that return such arrays, must be replaced by their replacement methods. The reason for this is that .NET does not support processing of these multidimensional arrays.

Example: To query several recipients in the RecipientWebservice, the default method getAll would return a multidimensional array with the following pattern:

[
  [email1, firstName1, lastName1]
  [email2, firstName2, lastName2]
  [email3, firstName3, lastName3]
]

This array cannot be processed by the .NET framework. Use the replacement method getAllFlat instead. The array returned by this method has been flattened into a single sequence of values, one field after another:

[ email1, firstName1, lastName1, email2, firstName2, lastName2, email3, firstName3, lastName3 ]

To process this array, the fields must be indexed to allocate the respective fields to one recipient (a short regrouping sketch follows below).

Last updated: Apr 25, 2018
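The documentation stops short of showing the indexing step itself, so here is a minimal sketch of regrouping a getAllFlat-style result. It is written in Python purely for brevity (the same slicing logic applies in C#), and the field names and sample values are assumptions made up for the example.

```python
# Regroup a flattened web service result into per-recipient records.
# `flat` mimics a getAllFlat return value; `fields` are the fields assumed
# to have been requested, in order, for every recipient.
fields = ["email", "firstName", "lastName"]

flat = [
    "a@example.com", "Ann", "Archer",
    "b@example.com", "Ben", "Baker",
]

def regroup(flat_values, field_names):
    """Yield one dict per recipient by slicing the flat list in field-sized steps."""
    step = len(field_names)
    for start in range(0, len(flat_values), step):
        yield dict(zip(field_names, flat_values[start:start + step]))

for recipient in regroup(flat, fields):
    print(recipient["email"], recipient["firstName"], recipient["lastName"])
```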
https://world.episerver.com/documentation/developer-guides/campaign/SOAP-API/introduction-to-the-soap-api/native-apis/
nbd_aio_pread_structured - Man Page

read from the NBD server

Synopsis

#include <libnbd.h>

typedef struct {
  int (*callback) (void *user_data, const void *subbuf, size_t count,
                   uint64_t offset, unsigned status, int *error);
  void *user_data;
  void (*free) (void *user_data);
} nbd_chunk_callback;

typedef struct {
  int (*callback) (void *user_data, int *error);
  void *user_data;
  void (*free) (void *user_data);
} nbd_completion_callback;

int64_t nbd_aio_pread_structured (struct nbd_handle *h, void *buf,
                                  size_t count, uint64_t offset,
                                  nbd_chunk_callback chunk_callback,
                                  nbd_completion_callback completion_callback,
                                  uint32_t flags);

Description

Issue a read command to the NBD server. To check if the command completed, call nbd_aio_command_completed(3). Or supply the optional completion_callback, which will be invoked as described in “Completion callbacks” in libnbd(3). Other parameters behave as documented in nbd_pread_structured(3).

This call returns the 64 bit cookie of the command. The cookie is ≥ 1. Cookies are unique (per libnbd handle, not globally).

To test at compile time whether this function is available, check if the following macro is defined:

#define LIBNBD_HAVE_NBD_AIO_PREAD_STRUCTURED 1

See Also

nbd_aio_command_completed(3), nbd_aio_pread(3), nbd_create(3), nbd_pread_structured(3), nbd_set_strict_mode(3), “Issuing asynchronous commands” in libnbd(3), nbd_can_df(3).
https://www.mankier.com/3/nbd_aio_pread_structured
Our system, composed of machines and human experts, needs to recommend the maternity line when a client says she's in her 'third trimester', identify a medical professional when she writes that she 'used to wear scrubs to work', and distill 'taking a trip' into a Fix for vacation clothing. While we're not totally "there" yet with the holy grail of NLP, word vectors (also referred to as distributed representations) are an amazing tool that sweeps away some of the issues of dealing with human language. The machines work in tandem with the stylists as a support mechanism to help identify and summarize textual information from the customers. The human experts will make the final call on what actions will be taken.

The goal of this post is to be a motivating introduction to word vectors and to demonstrate their real-world utility. The following example set the natural language community afire1 back in 2013:

\[king - man + woman = queen\]

In this example, a human posed a question to a computer: what is king - man + woman? This is similar to an SAT-style analogy (man is to woman as king is to what?). And a computer solved this equation and answered: queen. Under the hood, the machine gets that the biggest difference between the words for man and woman is gender. Add that gender difference to king, and you get queen.

This is astonishing because we've never explicitly taught the machine anything about gender! In fact, we've never handed the computer anything like a dictionary, a thesaurus, or a network of word relationships. We haven't even tried to break apart a sentence into its constituent parts of speech2. We've simply fed a mountain of text into an algorithm called word2vec and expected it to learn from context. Word by word, it tries to predict the other surrounding words in a sentence. Or rather, it internally represents words as vectors, and given a word vector, it tries to predict the other word vectors in the nearby text3.

The algorithm eventually sees so many examples that it can infer the gender of a single word, that both The Times and The Sun are newspapers, that The Matrix is a sci-fi movie, and that the style of an article of clothing might be boho or edgy. That word vectors represent much of the information available in a dictionary definition is a convenient and almost miraculous side effect of trying to predict the context of a word.

Internally, high dimensional vectors stand in for the words, and some of those dimensions encode gender properties. Each axis of a vector encodes a property, and the magnitude along that axis represents the relevance of that property to the word4. If the gender axis is more positive, then it's more feminine; more negative, more masculine. Applied appropriately, word vectors are dramatically more meaningful and more flexible than current techniques5 and let computers peer into text in a fundamentally new way. It's surprisingly easy to get started using libraries like gensim (in Python) or Spark (in Scala & Python) – all you need to know is how to add, subtract, and multiply vectors! Let's review the new abilities that word vectors grant us.

Similar words are nearby vectors

model.most_similar('vacation')
# returns a list of (word, similarity) pairs, most similar first

In this case, we've looked for vectors that are nearby to the word vacation by measuring the similarity (usually cosine similarity) to the root word and sorting by that.

[Interactive visualization: clusters of words nearest to "vacation" – destinations, season, holidays, wedding, month.]

Above is an interactive visualization of the words nearest to vacation. The more similar a word is to its genre, the larger the radius of the marker. Hover over the bubbles to reveal the words they represent7.
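To make that "nearby vectors" idea concrete, here is a minimal, self-contained sketch of the lookup. The tiny 3-dimensional vectors below are made up for illustration; a real model like the one above has hundreds of dimensions and tens of thousands of words.

```python
import numpy as np

# Toy word vectors (made up for illustration; real models are ~100-300 dimensional).
vectors = {
    "vacation": np.array([0.9, 0.1, 0.3]),
    "holidays": np.array([0.8, 0.2, 0.4]),
    "wedding":  np.array([0.5, 0.7, 0.1]),
    "invoice":  np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    """Cosine similarity: dot product of the two vectors after normalizing them."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def most_similar(word, topn=3):
    """Rank every other word by cosine similarity to `word`, most similar first."""
    query = vectors[word]
    scores = [(other, cosine(query, vec)) for other, vec in vectors.items() if other != word]
    return sorted(scores, key=lambda pair: pair[1], reverse=True)[:topn]

print(most_similar("vacation"))
# e.g. [('holidays', 0.99...), ('wedding', ...), ('invoice', ...)]
```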
And these words aren't just nearby; they're also in several clusters. So we can determine that the words most similar to vacation come in a variety of flavors: one cluster might be wedding-related, but another might relate to destinations like Belize. Of course our human stylists understand when a client says "I'm going to Belize in March" that she has an upcoming vacation. But the computer can potentially tag this as a 'vacation' fix because the word vector for Belize is similar to that for vacation. We can then make sure that the Fixes our customers get are vacation-appropriate!

Ideas are words that can be added & subtracted

We have the ability to search semantically by adding and subtracting word vectors8. This empowers us to creatively add and subtract concepts and ideas. Let's start with a style we know a customer liked, item_3469. Our customer recently became pregnant, so let's try to find something like item_3469 but along the pregnant dimension:

matches = model.most_similar(positive=['ITEM_3469', 'pregnant'])
matches = list(filter(lambda x: 'ITEM_' in x[0], matches))
# ['ITEM_13792',
#  'ITEM_11275',
#  'ITEM_11868']

Of course the item IDs aren't immediately informative, but the pictures let us know that we've done well: the first two items have prominent black & white stripes like item_3469 but have the added property that they're great maternity-wear. The last item changes the pattern away from stripes but is still a loose blouse that's great for an expectant mother. Here we've simply added the word vector for pregnant to the word vector for item_3469, and looked up the word vectors most similar to that result9. Our stylists tailor each Fix to their clients, and this prototype system may free them to mix and match artistic concepts about style, size and fit to creatively search for new items.

Summarizing sentences & documents

At Stitch Fix, we work hard to craft a uniquely-styled Fix for each of our customers. At every stage of a Fix we collect feedback: what would you like in your next Fix? What did you think of the items we sent you? What worked? What didn't? The spectrum of responses is myriad, but vectorizing those sentences10 allows us to begin systematically categorizing those documents:

from gensim.models import Doc2Vec
fn = "word_vectors_blog_post_v01_notes"
model = Doc2Vec.load(fn)
matches = model.most_similar('pregnant')
matches = list(filter(lambda x: 'SENT_' in x[0], matches))
# ['...I am currently 23 weeks pregnant...',
#  '...I'm now 10 weeks pregnant...',
#  '...not showing too much yet...',
#  '...15 weeks now. Baby bump...',
#  '...6 weeks post partum!...',
#  '...12 weeks postpartum and am nursing...',
#  '...I have my baby shower that...',
#  '...am still breastfeeding...',
#  '...I would love an outfit for a baby shower...']

In this example we calculate which sentences are closest to the word pregnant. This list also skips over many literal matches of pregnant in order to demonstrate the more advanced capabilities. We've also censored sentences to keep out personally identifying text. Also note that the last sentence is a false positive: while similar to the word pregnant, she's unlikely to be interested in maternity clothing. This allows us to understand not just what words mean, but to condense our client comments, notes, and requests in a quantifiable way.
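The model above is simply loaded from disk. For completeness, here is a minimal, hypothetical sketch of how such a sentence (paragraph) model can be trained with gensim's Doc2Vec. The corpus, tags and parameters are made up for illustration, and parameter names follow recent gensim releases; a production model would be trained on far more text.

```python
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

# Toy corpus standing in for customer comments; tags follow the SENT_<id> convention above.
corpus = [
    TaggedDocument(words="i need some weekend wear comfy but stylish".split(), tags=["SENT_47973"]),
    TaggedDocument(words="i am currently 23 weeks pregnant".split(), tags=["SENT_10001"]),
]

# Tiny parameters so the sketch runs instantly; real models use many epochs and much more data.
model = Doc2Vec(corpus, vector_size=50, min_count=1, epochs=40)

# Vectors for unseen text can also be inferred after training.
vec = model.infer_vector("looking for a casual weekend outfit".split())
```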
We can, for example, categorize our sentences by first calculating the similarity between a sentence and a word:

import numpy as np

def get_vector(word):
    return model.syn0norm[model.vocab[word].index]

def calculate_similarity(sentence, word):
    vec_a = get_vector(sentence)
    vec_b = get_vector(word)
    sim = np.dot(vec_a, vec_b)
    return sim

calculate_similarity('SENT_47973', 'casual')
# 0.308

We calculated the overlap between a sentence with label SENT_47973 and the word casual. The sentence was previously trained from this customer text: 'I need some weekend wear. Comfy but stylish.' The similarity to casual is about 0.308, which is pretty high. Having built a function that computes the similarity between a sentence and a word, we can build a table of customer comments and their similarities to a given topic. A table like this helps us quickly answer how many people are looking for comfortable clothes or finding defects in the clothing we send them.

What we didn't mention

While word vectorization is an elegant way to solve many practical text processing problems, it does have a few shortcomings and considerations:

- Word vectorization requires a lot of text. You can download pretrained word vectors yourself, but if you have a highly specialized vocabulary then you'll need to train your own word vectors and have a lot of example text. Typically this means hundreds of millions of words, which is the equivalent of 1,000 books, 500,000 comments, or 4,000,000 tweets.
- Cleaning the text. You'll need to clean the words of punctuation and normalize Unicode11 characters, which can take significant manual effort. In this case, there are a few tools that can help, like FTFY, SpaCy, NLTK, and the Stanford Core NLP. SpaCy even comes with word vector support built-in.
- Memory & performance. The training of vectors requires a high-memory and high-performance multicore machine. Training can take several hours to several days but shouldn't need frequent retraining. If you use pretrained vectors, then this isn't an issue.
- Databases. Modern SQL systems aren't well-suited to performing the vector addition, subtraction and multiplication that searching in vector space requires. There are a few libraries that will help you quickly find the most similar items12: annoy, ball trees, locality-sensitive hashing (LSH) or FLANN.
- False-positives & exactness. Despite the impressive results that come with word vectorization, no NLP technique is perfect. Take care that your system is robust to results that a computer deems relevant but an expert human wouldn't.

Conclusion

The goal of this post was to convince you that word vectors give us a simple and flexible platform for understanding text. We've covered a few diverse examples that should help build your confidence in developing and deploying NLP systems and show what problems they can solve. While most coverage of word vectors has been from a scientific angle, or demonstrating toy examples, we at Stitch Fix think this technology is ripe for industrial application. In fact, Stitch Fix is the perfect testbed for these kinds of new technologies: with expert stylists in the loop, we can move rapidly on new and prototypical algorithms without worrying too much about edge and corner cases. The creative world of fashion is one of the few domains left that computers don't understand. If you're interested in helping us break down that wall, apply!
Further reading There are a few miscellaneous topics that we didn’t have room to cover or were too peripheral: There’s an excellent nuts and bolts explanation and derivation of the word2vec algorithm. There’s a similarly useful iPython Notebook version too. Translating word-by-word English into Spanish is equivalent to matrix rotations. This means that all of the basic linear algebra operators (addition, subtraction, dot products, and matrix rotations) have meaningful functions on human language. Word vectors can also be used to find the odd word out. Interestingly, the same skip-gram algorithm can be applied to a social graph instead of sentence structure. The authors equate a sequence of social network graph visits (a random walk) to a sequence of words (a sentence in word2vec) to generate a dense summary vector. A brief but very visual overview of distributed representations is available here. Intriguingly, the word2vec algorithm can be reinterpreted as a matrix factorization method using point-wise mutual information. This theoretical breakthrough cleanly connects older and faster but more memory-intensive techniques with word2vec’s streaming algorithm approach. 1 See also the original papers, and the subsequently bombastic media frenzy, the race to understand why word2vec works so well, some academic drama on GloVe vs word2vec, and a nice introduction to the algorithms behind word2vec from my friend Radim Řehůřek. ← 2 Although see Omer Levy and Yoav Goldberg’s post for an interesting approach that has the word2vec context defined by parsing the sentence structure. Doing this introduces a more functional similarity between words (see this demo). For example, Hogwarts in word2vec is similar to dementors and dumbledore, as they’re all from Harry Potter, while parsing context gives sunnydale and colinwood as they’re similarly prestigious schools. ← 3 This is describing the ‘skip-gram’ mode of word2vec where the target word is asked to predict the surrounding context. Interestingly, we can also get similar results by doing the reverse: using the surrounding text to predict a word in the middle! This model, called continuous bag-of-words (CBOW), loses word order and so we lose a bit of grammatical information since that’s very sensitive to the position of a word in a sentence. This means CBOW-trained word vectors tend to do worse in a syntactic sense: the resulting vectors more poorly encode whether a word is an adjective or a verb, or a noun. ← 4 More generally, a linear combination of axes encodes the properties. We can attempt to rotate into the correct basis by using PCA (as long as we only include a few nearby words) or visualize that space using t-SNE (although we lose the concept of a single axis encoding structure). ← 5 Compare word vectors to sentiment analysis, which effectively distills everything into one dimension of ‘happy or sad’, or document labeling efforts like Latent Dirichlet Allocations that sort words into a few types. In either case, we can only ask these simpler models to categorize new documents into a few predetermined groups. With word vectors we can encapsulate far more diversity without having to build a labeled training text (and thus with less effort.) ← 6 You can download this file freely from here. ← 7 This is using an advanced visualization technique called t-SNE. This allows us to project down to 2D while still trying to maintain the local structure. This helps pop up the several word clusters that are near to the word vacation. 
← 8 Check out this live demo with just wikipedia words here. ← 9 We’ve used cosine similarity to find the nearest items, but, we could’ve chosen the 3COSMUL method. This combines vectors multiplicatively instead of additively and seems to get better results (pdf warning!). This stays truer to cosine distance and in general prevents one word from dominating in any one dimension. ← 10 You can easily make a vector for a whole sentence by following the Doc2Vec tutorial (also called paragraph vector) in gensim, or by clustering words using the Chinese Restaurant Process. ← 11 If you’re using Python 2, this is a great reason to reduce Unicode headaches and switch to Python 3. ← 12 See a comparison of these techniques here. My recommendation is using LSH if you need a pure Python solution, and annoy if you need a solution that is memory light. ←
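Following up on the approximate-nearest-neighbour note above (footnote 12 and the "Databases" point), here is a minimal sketch of building and querying an annoy index. The dimensionality and the random vectors are placeholders; in practice you would insert the word or item vectors from your trained model.

```python
import random
from annoy import AnnoyIndex

dims = 100                             # dimensionality of the word/item vectors (assumption)
index = AnnoyIndex(dims, 'angular')    # 'angular' distance approximates cosine similarity

# In practice these would be vectors from your trained model; random vectors
# are used here only to keep the sketch self-contained and runnable.
for item_id in range(1000):
    vector = [random.gauss(0, 1) for _ in range(dims)]
    index.add_item(item_id, vector)

index.build(10)          # 10 trees; more trees -> better recall, bigger index
index.save('items.ann')

# Query: the 5 items whose vectors are closest to item 42.
print(index.get_nns_by_item(42, 5))
```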
https://multithreaded.stitchfix.com/blog/2015/03/11/word-is-worth-a-thousand-vectors/
Let me warn you up front, this game engine is nowhere near production ready. It's very much a work in progress, with missing documentation, missing features, and far too many crashes. This is certainly not a game engine to choose today for game development, which is why this is just a preview instead of a Closer Look. It is however a shockingly capable game engine that you should keep your eye on! There is also a video available here.

What is the Banshee Engine?

So, what is the Banshee Engine? Currently at release 0.3, Banshee Engine is an open source, C++ powered 2D/3D game engine with a complete game editor. On top of that there is a managed scripting layer, enabling you to develop game logic using C#. It is available under a dual license, LGPL and a Commercial "pay what you want" license… and yes, what you want to pay could be $0 if you so choose. Banshee is available on Github and there are binaries available for download, although for now those are limited to Windows only. The engine itself also only targets Windows at the moment, but is being written with portability in mind.

The Editor

Here is the Banshee Editor in action. The layout is pretty traditional. On the top left you have the various resources that make up your game. Below that you have the Hierarchy view, which is essentially your current scene's contents. At the bottom we have the logs. On the right hand side is the Inspector, which is a context aware editing form. Of course centered in the view is the Scene view, which also has a Unity-like Game preview window.

The interface is extremely customizable, with all tabs being closable, undockable or even free floating. It works well on high DPI monitors and on multiple displays. It does occasionally have issues with mouse hover or cursor position, and sadly tab doesn't work between text input fields, but for the most part the UI works as expected.

In the 3D view you can orbit the camera using RMB, pan with MMB and zoom with the scroll wheel. Of course LMB is used for selection. There are the traditional per-axis editing widgets for Transforms, Rotations and Scales. You have a widget in the top right corner for moving between various views as well as shifting between Perspective and Orthographic projection. Oddly there doesn't appear to be an option for multiple concurrent views, nor, puzzlingly enough, are there axis markers (color coded lines to show the location of the X, Y and Z axes). The editor idles nicely, using only 4% or so CPU at idle, meaning the engine is fairly friendly to laptop battery life.

There are several built in Scene objects, including geometric primitives. The engine also takes an Entity/Component approach, with several components built in that can be attached to a Scene Object. Importing assets into the engine is as simple as dragging and dropping into the Library window. With a resource selected, you can control how it is imported in the Inspector. The importer can handle FBX, DAE and OBJ format 3D files as well as PNG, PSD, BMP and JPG images. You can also import fonts as well as shaders, in both GLSL and HLSL formats.

Coding

Coding in Banshee is done in one of two ways. You can extend the editor and engine using C++ code. The code itself is written in modern C++14, although documentation on native coding is essentially non-existent at this point in time. For games, the primary coding interface is C#. It currently supports C# 6 language features.
To script a component, create a new Script in the Resources panel. Next, select a scene object, then drag and drop the script onto the bottom of the form in the Inspector. Double clicking the script will bring it up in Visual Studio, if installed. The script will have full IntelliSense in Visual Studio.

Scripting a component is a matter of handling various callbacks, such as OnUpdate(), which is called each frame. You can access the attached entity (er… Scene Object) via the .SceneObject member. Here is a very simple script that moves the object by 0.1 units along the X axis each update:

namespace BansheeEngine
{
    public class NewComponent : Component
    {
        private void OnInitialize()
        {
        }

        private void OnUpdate()
        {
            this.SceneObject.MoveLocal(new Vector3(0.1f, 0.0f, 0.0f));
        }

        private void OnDestroy()
        {
        }
    }
}

Documentation

This is very much a work in progress. Right now there is a solid reference for the Managed API and the Native API (C++), but the tools user manual is essentially a stub. There is an architecture cheat sheet which gives a pretty broad overview of the engine and how the pieces fit together. There is also a guide to compiling the engine from source. For those that are interested in giving things a go from C++ only, there is a C++ game example available here. Unfortunately there are no downloadable projects or managed examples, a glaring flaw at this point that makes it a lot harder to learn. As of right now, the lack of editor documentation or samples to get started with really does make it hard to learn, especially if you are trying to figure out whether something isn't working because you are doing it wrong, the feature isn't implemented, or there is simply a bug. That said, these are all things that should improve in time.

Conclusion

This is a game engine for early adopters only. It's not even close to ready for primetime. On the other hand, the kernel or core is there and remarkably robust. While not the most stable by any stretch, and with lacking documentation, I think you will be surprised with just how capable this engine actually is. The potential for a great game engine is here under the surface, just waiting for a community to make it happen.

The Video
https://gamefromscratch.com/banshee-game-engine-preview/
JEP 3 — Adding support for multi-fields search

- Author: Nan Wang (nan.wang@jina.ai)
- Created: May 28, 2020
- Status: Proposal
- Related JEPs: -
- Created on Jina VCS version: TBA
- Merged to Jina VCS version: TBA
- Released in Jina version: TBA
- Discussions:

Motivation

Multi-field search is commonly used in practice. Concretely, as a user, I want to limit the query to some selected fields. In the following use case, there are two documents and two fields in each of them, i.e. title and summary. The user wants to query painter but only in the title field. The expected result will be {'doc_id': 11, 'title': 'hackers and painters'}.

{
  "doc_id": 10,
  "title": "the story of the art",
  "summary": "This is a book about the history of the art, and the stories of the great painters"
},
{
  "doc_id": 11,
  "title": "hackers and painters",
  "summary": "This book discusses hacking, start-up companies, and many other technological issues"
}

Rationale

The core issue of this use case is the need to mark the Chunks from different fields. At query time, the user should be able to change the selected fields in different queries without rebuilding the query Flow.

Modify jina.proto

Let's take the following Flow as an example. The FieldsMapper is a Crafter that splits each Document into fields and adds the field_name information to the Chunks. Afterwards, the Chunks containing the title and the summary information are processed differently in two pathways and stored separately.

To add the field information to the Chunks, we first need to add new fields to the protobuf definition. At the Chunk level, one new field, namely field_name, is required to denote the field information of the Chunk. Each Document has one or more fields, and each field can be further split into one or more Chunks. In other words, each Chunk can only be assigned to one field, but each field contains one or more Chunks. The concept of a field can be considered as a group of Chunks.

Secondly, at the Request level, we will add another new field, namely filter_by, for the SearchRequest. This is used to store the information about which fields the user wants to query. By adding this information, users can specify different fields to query in each search request.

Adapt Index-Flow Pods

During index time, most parts of the Flow stay the same as before. To make the Encoder only encode the Chunks whose field_name matches one of the selected fields, a new argument, filter_by, is introduced to specify which fields will be encoded. To do so, we need to adapt the EncodeDriver and extract_docs().

def extract_docs(
        docs: Iterable['jina_pb2.Document'],
        filter_by: Union[str, Tuple[str], List[str]],
        embedding: bool) -> Tuple:
    """
    :param filter_by: a list of field names used to filter the Chunks
    """

class EncodeDriver(BaseEncodeDriver):
    def __init__(self, filter_by: Union[str, List[str], Tuple[str]] = None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.filter_by = filter_by

    def __call__(self, *args, **kwargs):
        filter_by = self.filter_by
        if self._request.__class__.__name__ == 'SearchRequest':
            filter_by = self.req.filter_by
        contents, chunk_pts, no_chunk_docs, bad_chunk_ids = \
            extract_docs(self.req.docs, filter_by, embedding=False)

In order to make the Indexer only index the Chunks whose field_name matches one of the selected fields, we need to adapt the VectorIndexDriver as well.
class VectorIndexDriver(BaseIndexDriver):
    def __init__(self, filter_by: Union[str, List[str], Tuple[str]] = None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.filter_by = filter_by

    def __call__(self, *args, **kwargs):
        embed_vecs, chunk_pts, no_chunk_docs, bad_chunk_ids = \
            extract_docs(self.req.docs, self.filter_by, embedding=True)

The same change goes for the ChunkKVIndexDriver.

class ChunkKVIndexDriver(KVIndexDriver):
    def __init__(self, level: str = 'chunk', filter_by: Union[str, List[str], Tuple[str]] = None, *args, **kwargs):
        super().__init__(level, *args, **kwargs)
        self.filter_by = filter_by if filter_by else []

    def __call__(self, *args, **kwargs):
        from google.protobuf.json_format import MessageToJson
        content = {
            f'c{c.chunk_id}': MessageToJson(c)
            for d in self.req.docs for c in d.chunks
            if len(self.filter_by) > 0 and c.field_name in self.filter_by}
        if content:
            self.exec_fn(content)

Adapt Query-Flow Pods

During query time, we moreover need to refactor the BasePea so that the Pea gets the information of how many incoming messages are expected. The expected number of incoming messages will change from query to query because the user will select different fields with the filter_by argument. In the current version (v0.1.15), this information is fixed and stored in self.args.num_parts when the graph is built, and the Pea will NOT start processing the data until the expected number of incoming messages arrive. In order to make the Pea handle the varying number of incoming messages, we need to make the expected number adjustable on the fly for each query. Note that self.args.num_parts is the upper bound of the expected number of incoming messages. Therefore, it is reasonable to set the expected number of incoming messages as follows:

num_part = self.args.num_part
if self.request_type == 'SearchRequest':
    # modify the num_part on the fly for SearchRequest
    num_part = min(self.args.num_part, max(len(self.request.filter_by), 1))

Furthermore, the VectorSearchDriver and the KVSearchDriver also need to be adapted accordingly in order to only process the Chunks that meet the filter_by requirement.

class VectorSearchDriver(BaseSearchDriver):
    def __call__(self, *args, **kwargs):
        embed_vecs, chunk_pts, no_chunk_docs, bad_chunk_ids = \
            extract_docs(self.req.docs, self.req.filter_by, embedding=True)
        ...

class KVSearchDriver(BaseSearchDriver):
    def __call__(self, *args, **kwargs):
        ...
        elif self.level == 'chunk':
            for d in self.req.docs:
                for c in d.chunks:
                    if c.field_name not in self.req.filter_by:
                        continue
                    ...
        elif self.level == 'all':
            for d in self.req.docs:
                self._update_topk_docs(d)
                for c in d.chunks:
                    if c.field_name not in self.req.filter_by:
                        continue
                    ...
        ...
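Before moving on to the concrete configuration, here is a minimal, framework-free sketch of the FieldsMapper idea that is only described in prose above. It is plain Python rather than the actual Jina Crafter API: split a document into per-field chunks, tag each chunk with its field_name, and let downstream components filter on it. The field mapping mirrors the mapper.yml used in the next section.

```python
# Illustration only: a plain-Python stand-in for the FieldsMapper described above.
def split_into_field_chunks(doc, mapping):
    """Return one chunk per mapped field, each tagged with its field_name."""
    chunks = []
    for source_field, field_name in mapping.items():
        if source_field in doc:
            chunks.append({
                'doc_id': doc['doc_id'],
                'field_name': field_name,
                'text': doc[source_field],
            })
    return chunks

doc = {
    'doc_id': 11,
    'title': 'hackers and painters',
    'summary': 'This book discusses hacking, start-up companies, and many other technological issues',
}

chunks = split_into_field_chunks(doc, mapping={'title': 'title', 'summary': 'summ'})

# Downstream, an encoder or indexer configured with filter_by=['summ'] would only
# touch the chunks whose field_name is 'summ'.
selected = [c for c in chunks if c['field_name'] in ('summ',)]
print(selected)
```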
Specification

For the use case above, the index.yml will be defined as follows:

!Flow
pods:
  fields_mapper:
    uses: mapper.yml
  title_encoder:
    uses: title_encoder.yml
    needs: fields_mapper
  sum_encoder:
    uses: sum_encoder.yml
    needs: fields_mapper
  title_indexer:
    uses: title_indexer.yml
    needs: title_encoder
  sum_indexer:
    uses: sum_indexer.yml
    needs: sum_encoder
  join:
    needs:
      - title_indexer
      - sum_indexer

And the mapper.yml will be defined as below:

!FilterMapper
requests:
  on:
    [SearchRequest, IndexRequest]:
      - !MapperDriver
        with:
          method: craft
          mapping: {'title': 'title', 'summary': 'summ'}

The sum_encoder.yml is as below:

!AnotherTextEncoder
requests:
  on:
    [SearchRequest, IndexRequest]:
      - !EncodeDriver
        with:
          method: encode
          filter_by: summ

The sum_indexer.yml is as below:

!ChunkIndexer
components:
  - !NumpyIndexer
    with:
      index_filename: vec.gz
  - !BasePbIndexer
    with:
      index_filename: chunk.gz
requests:
  on:
    IndexRequest:
      - !VectorIndexDriver
        with:
          executor: NumpyIndexer
          filter_by: summ
      - !PruneDriver {}
      - !KVIndexDriver
        with:
          executor: BasePbIndexer
          filter_by: summ
    SearchRequest:
      - !VectorSearchDriver
        with:
          executor: NumpyIndexer
          filter_by: summ
      - !PruneDriver {}
      - !KVSearchDriver
        with:
          executor: BasePbIndexer
          filter_by: summ

To send the request, one can specify the filter_by argument as below:

with flow.build() as fl:
    fl.search(read_data_fn, callback=call_back_fn, filter_by=['title', ])

Open Issues

This use case can be further extended to multi-modality search by extending filter_by to accept the mimetype.
https://docs.jina.ai/chapters/jep/jep-3/index.html
stash ipad error: cannot close history panel

hi, I just tried stash on my old iPad Pro. I saw some toolbar icons on top of the keyboard. After I clicked that 'H' icon, it showed a history panel in the top right corner of the screen. But then I found no way to close this history panel, and anywhere else except this panel is unclickable; the only way to exit is to double-press Home and kill the app. Then I checked this on iPhone, and found there's a close X icon in the top left corner of the history panel, which I couldn't find on iPad. Known issue?

@shtek same on iPad mini 4. The little triangle at the top of the window shows that the view is presented as a popover, but this presentation does not exist on iPhone.

@shtek If you really need it, change history_present in site-packages/stash/system/shui.py to present a sheet instead of a popover:

def history_present(self, listsource):
    table = ui.TableView()
    listsource.font = self.BUTTON_FONT
    table.data_source = listsource
    table.delegate = listsource
    table.width = 300
    table.height = 300
    table.row_height = self.BUTTON_FONT[1] + 4
    table.present('sheet')
    table.wait_modal()

perfect. thank you
https://forum.omz-software.com/topic/5551/stash-ipad-error-cannot-close-history-panel
LHC is an extended/advanced HTTP client, implementing basic http-communication enhancements like interceptors, exception handling, format handling, accessing response data, configuring endpoints and placeholders, and fully compatible, RFC-compliant URL-template support. LHC uses typhoeus for low level http communication. See LHS if you are searching for something more high level that can query webservices easily and provides an ActiveRecord like interface.

Quick start guide

gem install lhc

or add it to your Gemfile:

gem 'lhc'

use it like:

response = LHC.get('')
response.data.items[0]
response.data.items[0].recommended
response.body
response.headers

Table of contents
- Quick start guide
- Basic methods
- Request
- Response
- Exceptions
- Configuration
- Interceptors
- Quick start: Configure/Enable Interceptors
- Interceptors on local request level
- Core Interceptors
- Authentication Interceptor
- Caching Interceptor
- Default Timeout Interceptor
- Logging Interceptor
- Monitoring Interceptor
- Prometheus Interceptor
- Retry Interceptor
- Rollbar Interceptor
- Throttle
- Zipkin
- Create an interceptor from scratch
- Testing
- License

Basic methods

Available are get, put & delete. Other methods are available using LHC.request(options).

Request

The request class handles the http request, implements the interceptor pattern, loads configured endpoints, generates urls from url-templates and raises exceptions for any response code that does not indicate success (2xx).

response = LHC.request(url: '', method: :options)
response.request.response #<LHC::Response> the associated response.
response.request.options #<Hash> the options used for creating the request.
response.request.params # access request params
response.request.headers # access request headers
response.request.url #<String> URL that is used for doing the request
response.request.method #<Symbol> provides the used http-method

Formats

You can use any of the basic methods in combination with a format like json:

LHC.json.get()

Currently supported formats: json, multipart, plain (for no formatting). If formats are used, headers for Content-Type and Accept are enforced by LHC, but also http bodies are translated by LHC, so you can pass bodies as ruby objects:

LHC.json.post('', body: { text: 'Hi there' })
# Content-Type: application/json
# Accept: application/json
# Translates body to "{\"text\":\"Hi there\"}" before sending

Default format

If you use LHC's basic methods LHC.get, LHC.post etc. without any explicit format, JSON will be chosen as the default format.

Unformatted requests

In case you need to send requests without LHC formatting the headers or the body, use plain:

LHC.plain.post('', body: { weird: 'format%s2xX' })

Upload with LHC

If you want to upload data with LHC, it's recommended to use the multipart format:

response = LHC.multipart.post('', body: { file })
response.headers['Location']
# Content-Type: multipart/form-data
# Leaves body unformatted

Parallel requests

If you pass an array of requests to LHC.request, it will perform those requests in parallel. You will get back an array of LHC::Response objects in the same order as the passed requests.

requests = []
requests << { url: '' }
requests << { url: '' }
responses = LHC.request(requests)

LHC.request([request1, request2, request3]) # returns [response1, response2, response3]

Follow redirects

LHC.get('', followlocation: true)

Transfer data through the request body

Data that is transferred using the HTTP request body is transferred using the selected format, or the default json, so you need to provide it as a ruby object.
Also consider setting the http header for content-type or use one of the provided formats, like LHC.json. LHC.post('', body: feedback, headers: { 'Content-Type' => 'application/json' } ) Request parameters When using LHC, try to pass params via params option. It's not recommended to build a url and attach the parameters yourself: DO LHC.get('', params: { q: 'Restaurant' }) DON'T LHC.get('') Array Parameter Encoding LHC can encode array parameters in URLs in two ways. The default is :rack which generates URL parameters compatible with Rack and Rails. LHC.get('', params: { q: [1, 2] }) #[]=1&q[]=2 Some Java-based apps expect their arrays in the :multi format: LHC.get('', params: { q: [1, 2] }, params_encoding: :multi) # Request URL encoding LHC, by default, encodes urls: LHC.get(' space') # LHC.get('', params: { q: 'some space' }) # which can be disabled: LHC.get(' space', url_encoding: false) # space Request URL-Templates Instead of using concrete urls you can also use url-templates that contain placeholders. This is especially handy for configuring an endpoint once and generate the url from the params when doing the request. Since version 7.0 url templates follow the RFC 6750. LHC.get('{id}', params:{ id: 123 }) # GET You can also use URL templates, when configuring endpoints: LHC.configure do |c| c.endpoint(:find_feedback, '{id}') end LHC.get(:find_feedback, params:{ id: 123 }) # GET If you miss to provide a parameter that is part of the url-template, it will raise an exception. Request timeout Working and configuring timeouts is important, to ensure your app stays alive when services you depend on start to get really slow... LHC forwards two timeout options directly to typhoeus: timeout (in seconds) - The maximum time in seconds that you allow the libcurl transfer operation to take. Normally, name lookups can take a considerable time and limiting operations to less than a few seconds risk aborting perfectly normal operations. This option may cause libcurl to use the SIGALRM signal to timeout system calls. connecttimeout (in seconds) - It should contain the maximum time in seconds that you allow the connection phase to the server to take. This only limits the connection phase, it has no impact once it has connected. Set to zero to switch to the default built-in connection timeout - 300 seconds. LHC.get('', timeout: 5, connecttimeout: 1) LHC provides a timeout interceptor that lets you apply default timeout values to all the requests that you are performig in your application. Request Agent LHC identifies itself towards outher services, using the User-Agent header. User-Agent LHC (9.4.2) [] If LHC is used in an Rails Application context, also the application name is added to the User-Agent header. User-Agent LHC (9.4.2; MyRailsApplicationName) [] Response response.request #<LHC::Request> the associated request. response.data #<OpenStruct> in case response body contains parsable JSON. response.data.something.nested response.body #<String> response.code #<Fixnum> response.headers #<Hash> response.time #<Fixnum> Provides response time in ms. response.timeout? #true|false Accessing response data The response data can be access with dot-notation and square-bracket notation. You can convert response data to open structs or json (if the response format is json). 
response = LHC.request(url: '') response.data.as_open_struct #<OpenStruct name='local.ch'> response.data.as_json # { name: 'local.ch' } response.data.name # 'local.ch' response.data[:name] # 'local.ch' You can also access response data directly through the response object (with square bracket notation only): LHC.json.get(url: '')[:name] Exceptions Anything but a response code indicating success (2xx) raises an exception. LHC.get('localhost') # UnknownError: 0 LHC.get('') # LHC::Timeout: 0 You can access the response object that was causing the error. LHC.get('local.ch') rescue => e e.response #<LHC:Response> e.response.code # 403 e.response.timeout? # false Rails.logger.error e # LHC::UnknownError: get # Params: {:url=>"", :method=>:get} # Response Code: 0 # <Response Body> All errors that are raise by LHC inherit from LHC::Error. They are divided into LHC::ClientError, LHC::ServerError, LHC::Timeout and LHC::UnkownError and mapped according to the following status code. 400 => LHC::BadRequest 401 => LHC::Unauthorized 402 => LHC::PaymentRequired 403 => LHC::Forbidden 403 => LHC::Forbidden 404 => LHC::NotFound 405 => LHC::MethodNotAllowed 406 => LHC::NotAcceptable 407 => LHC::ProxyAuthenticationRequired 408 => LHC::RequestTimeout 409 => LHC::Conflict 410 => LHC::Gone 411 => LHC::LengthRequired 412 => LHC::PreconditionFailed 413 => LHC::RequestEntityTooLarge 414 => LHC::RequestUriToLong 415 => LHC::UnsupportedMediaType 416 => LHC::RequestedRangeNotSatisfiable 417 => LHC::ExpectationFailed 422 => LHC::UnprocessableEntity 423 => LHC::Locked 424 => LHC::FailedDependency 426 => LHC::UpgradeRequired 500 => LHC::InternalServerError 501 => LHC::NotImplemented 502 => LHC::BadGateway 503 => LHC::ServiceUnavailable 504 => LHC::GatewayTimeout 505 => LHC::HttpVersionNotSupported 507 => LHC::InsufficientStorage 510 => LHC::NotExtended timeout? => LHC::Timeout anything_else => LHC::UnknownError Custom error handling (rescue) You can provide custom error handlers to handle errors happening during the request. If a error handler is provided nothing is raised. If your error handler returns anything else but nil it replaces the response body. handler = ->(response){ do_something_with_response; return {name: 'unknown'} } response = LHC.get('', rescue: handler) response.data.name # 'unknown' Ignore certain errors As it's discouraged to rescue errors and then don't handle them (ruby styleguide)[], but you often want to continue working with nil, LHC provides the ignore option. Errors listed in this option will not be raised and will leave the response.body and response.data to stay nil. You can either pass the LHC error class you want to be ignored or an array of LHC error classes. response = LHC.get('', ignore: LHC::NotFound) response.body # nil response.data # nil response.error_ignored? # true response.request.error_ignored? # true Configuration If you want to configure LHC, do it on initialization (like in a Rails initializer, environment.rb or application.rb), otherwise you could run into the problem that certain configurations can only be set once. You can use LHC.configure to prevent the initialization problem. Take care that you only use LHC.configure once, because it is actually reseting previously made configurations and applies the new once. 
LHC.configure do |c| c.placeholder :datastore, '' c.endpoint :feedbacks, '{+datastore}/feedbacks', params: { has_reviews: true } c.interceptors = [CachingInterceptor, MonitorInterceptor, TrackingIdInterceptor] end Configuring endpoints You can configure endpoints, for later use, by giving them a name, a url and some parameters (optional). LHC.configure do |c| c.endpoint(:feedbacks, '', params: { has_reviews: true }) c.endpoint(:find_feedback, '{id}') end LHC.get(:feedbacks) # GET LHC.get(:find_feedback, params:{ id: 123 }) # GET Explicit request options override configured options. LHC.get(:feedbacks, params: { has_reviews: false }) # Overrides configured params Configuring placeholders You can configure global placeholders, that are used when generating urls from url-templates. LHC.configure do |c| c.placeholder(:datastore, '') c.endpoint(:feedbacks, '{+datastore}/feedbacks', { params: { has_reviews: true } }) end LHC.get(:feedbacks) # Interceptors To monitor and manipulate the HTTP communication done with LHC, you can define interceptors that follow the (Inteceptor Pattern)[]. There are some interceptors that are part of LHC already, so called Core Interceptors, that cover some basic usecases. Quick start: Configure/Enable Interceptors LHC.configure do |c| c.interceptors = [LHC::Auth, LHC::Caching, LHC::DefaultTimeout, LHC::Logging, LHC::Monitoring, LHC::Prometheus, LHC::Retry, LHC::Rollbar, LHC::Zipkin] end You can only set the list of global interceptors once and you can not alter it after you set it. Interceptors on local request level You can override the global list of interceptors on local request level: interceptors = LHC.config.interceptors interceptors -= [LHC::Caching] # remove caching interceptors += [LHC::Retry] # add retry LHC.request({url: '', retry: 2, interceptors: interceptors}) LHC.request({url: '', interceptors: []}) # no interceptor for this request at all Core Interceptors Authentication Interceptor Add the auth interceptor to your basic set of LHC interceptors. LHC.configure do |c| c.interceptors = [LHC::Auth] end Bearer Authentication LHC.get('', auth: { bearer: -> { access_token } }) Adds the following header to the request: 'Authorization': 'Bearer 123456' Assuming the method access_token responds on runtime of the request with 123456. Basic Authentication LHC.get('', auth: { basic: { username: 'steve', password: 'can' } }) Adds the following header to the request: 'Authorization': 'Basic c3RldmU6Y2Fu' Which is the base64 encoded credentials "username:password". Body Authentication LHC.post('', auth: { body: { userToken: 'dheur5hrk3' } }) Adds the following to body of all requests: { "userToken": "dheur5hrk3" } Reauthenticate The current implementation can only offer reauthenticate for client access tokens. For this to work the following has to be given: - You have configured and implemented LHC::Auth.refresh_client_token = -> { TokenRefreshUtil.client_access_token(true) }which when called will force a refresh of the token and return the new value. It is also expected that this implementation will handle invalidating caches if necessary. - Your interceptors contain LHC::Authand LHC::Retry, whereas LHC::Retrycomes after LHC::Authin the chain. 
Bearer Authentication with client access token Reauthentication will be initiated if: - setup is correct response.success?is false and an LHC::Unauthorizedwas observed - reauthentication wasn't already attempted once If this is the case, this happens: - refresh the client token, by calling refresh_client_token - the authentication header will be updated with the new token LHC::Retrywill be triggered by adding retry: { max: 1 }to the request options Caching Interceptor Add the cache interceptor to your basic set of LHC interceptors. LHC.configure do |c| c.interceptors = [LHC::Caching] end You can configure your own cache (default Rails.cache) and logger (default Rails.logger): LHC::Caching.cache = ActiveSupport::Cache::MemoryStore.new Caching is not enabled by default, although you added it to your basic set of interceptors. If you want to have requests served/stored and stored in/from cache, you have to enable it by request. LHC.get('', cache: true) You can also enable caching when configuring an endpoint in LHS. class Feedbacks < LHS::Service endpoint '{+datastore}/v2/feedbacks', cache: true end Only GET requests are cached by default. If you want to cache any other request method, just configure it: LHC.get('', cache: { methods: [:get] }) Responses served from cache are marked as served from cache: response = LHC.get('', cache: true) response.from_cache? # true You can also use a central http cache to be used by the LHC::Caching interceptor. If you configure a local and a central cache, LHC will perform multi-level-caching. LHC will try to retrieve cached information first from the central, in case of a miss from the local cache, while writing back into both. LHC::Caching.central = { read: 'redis://[email protected]:6379/0', write: 'redis://[email protected]:6379/0' } Options LHC.get('', cache: { key: 'key' expires_in: 1.day, race_condition_ttl: 15.seconds, use: ActiveSupport::Cache::MemoryStore.new }) expires_in - lets the cache expires every X seconds. key - Set the key that is used for caching by using the option. Every key is prefixed with LHC_CACHE(v1):. race_condition_ttl -. use - Set an explicit cache to be used for this request. If this option is missing LHC::Caching.cache is used. Default Timeout Interceptor Applies default timeout values to all requests made in an application, that uses LHC. LHC.configure do |c| c.interceptors = [LHC::DefaultTimeout] end timeout default: 15 seconds connecttimeout default: 2 seconds Overwrite defaults LHC::DefaultTimeout.timeout = 5 # seconds LHC::DefaultTimeout.connecttimeout = 3 # seconds Logging Interceptor The logging interceptor logs all requests done with LHC to the Rails logs. Installation LHC.configure do |c| c.interceptors = [LHC::Logging] end LHC::Logging.logger = Rails.logger What and how it logs The logging Interceptor logs basic information about the request and the response: LHC.get('') # Before LHC request<70128730317500> GET at 2018-05-23T07:53:19+02:00 Params={} Headers={\"User-Agent\"=>\"Typhoeus -\", \"Expect\"=>\"\"} # After LHC response for request<70128730317500>: GET at 2018-05-23T07:53:28+02:00 Time=0ms URL= Configure You can configure the logger beeing used by the logging interceptor: LHC::Logging.logger = Another::Logger Monitoring Interceptor The monitoring interceptor reports all requests done with LHC to a given StatsD instance. Installation LHC.configure do |c| c.interceptors = [LHC::Monitoring] end You also have to configure statsd in order to have the monitoring interceptor report. 
LHC::Monitoring.statsd = <your-instance-of-statsd> Environment By default, the monitoring interceptor uses Rails.env to determine the environment. In case you want to configure that, use: LHC::Monitoring.env = ENV['DEPLOYMENT_TYPE'] || Rails.env What it tracks It tracks request attempts with before_request and after_request (counts). In case your workers/processes are getting killed due limited time constraints, you are able to detect deltas with relying on "before_request", and "after_request" counts: Before and after request tracking "lhc.<app_name>.<env>.<host>.<http_method>.before_request", 1 "lhc.<app_name>.<env>.<host>.<http_method>.after_request", 1 Response tracking In case of a successful response it reports the response code with a count and the response time with a gauge value. LHC.get('') "lhc.<app_name>.<env>.<host>.<http_method>.count", 1 "lhc.<app_name>.<env>.<host>.<http_method>.200", 1 "lhc.<app_name>.<env>.<host>.<http_method>.time", 43 In case of a unsuccessful response it reports the response code with a count but no time: LHC.get('') "lhc.<app_name>.<env>.<host>.<http_method>.count", 1 "lhc.<app_name>.<env>.<host>.<http_method>.500", 1 Timeout tracking Timeouts are also reported: "lhc.<app_name>.<env>.<host>.<http_method>.timeout", 1 All the dots in the host are getting replaced with underscore, because dot is the default separator in graphite. Caching tracking When you want to track caching stats please make sure you have enabled the LHC::Caching and the LHC::Monitoring interceptor. Make sure that the LHC::Caching is listed before LHC::Monitoring interceptor when configuring interceptors: LHC.configure do |c| c.interceptors = [LHC::Caching, LHC::Monitoring] end If a response was served from cache it tracks: "lhc.<app_name>.<env>.<host>.<http_method>.cache.hit", 1 If a response was not served from cache it tracks: "lhc.<app_name>.<env>.<host>.<http_method>.cache.miss", 1 Configure It is possible to set the key for Monitoring Interceptor on per request basis: LHC.get('', monitoring_key: 'local_website') "local_website.count", 1 "local_website.200", 1 "local_website.time", 43 If you use this approach you need to add all namespaces (app, environment etc.) to the key on your own. Prometheus Interceptor Logs basic request/response information to prometheus. require 'prometheus/client' LHC.configure do |c| c.interceptors = [LHC::Prometheus] end LHC::Prometheus.client = Prometheus::Client LHC::Prometheus.namespace = 'web_location_app' LHC.get('') Creates a prometheus counter that receives additional meta information for: :code, :successand :timeout. Creates a prometheus histogram for response times in milliseconds. Retry Interceptor If you enable the retry interceptor, you can have LHC retry requests for you: LHC.configure do |c| c.interceptors = [LHC::Retry] end response = LHC.get('', retry: true) It will try to retry the request up to 3 times (default) internally, before it passes the last response back, or raises an error for the last response. Consider, that all other interceptors will run for every single retry. 
Limit the amount of retries while making the request LHC.get('', retry: { max: 1 }) Change the default maximum of retries of the retry interceptor LHC::Retry.max = 3 Retry all requests If you want to retry all requests made from your application, you just need to configure it globally: LHC::Retry.all = true configuration.interceptors = [LHC::Retry] Do not retry certain response codes If you do not want to retry based on certain response codes, use retry in combination with explicit ignore: LHC.get('', ignore: LHC::NotFound, retry: { max: 1 }) Or if you use LHC::Retry.all: LHC.get('', ignore: LHC::NotFound) Rollbar Interceptor Forward errors to rollbar when exceptions occur during http requests. LHC.configure do |c| c.interceptors = [LHC::Rollbar] end LHC.get('') If it raises, it forwards the request and response object to rollbar, which contain all necessary data. Forward additional parameters LHC.get('', rollbar: { tracking_key: 'this particular request' }) Throttle The throttle interceptor allows you to raise an exception if a predefined quota of a provider request limit is reached in advance. LHC.configure do |c| c.interceptors = [LHC::Throttle] end = { throttle: { track: true, break: '80%', provider: 'local.ch', limit: { header: 'Rate-Limit-Limit' }, remaining: { header: 'Rate-Limit-Remaining' }, expires: { header: 'Rate-Limit-Reset' } } } LHC.get('', ) # { headers: { 'Rate-Limit-Limit' => 100, 'Rate-Limit-Remaining' => 19 } } LHC.get('', ) # raises LHC::Throttle::OutOfQuota: Reached predefined quota for local.ch Options Description track: enables tracking of current limit/remaining requests of rate-limiting break: quota in percent after which errors are raised. Percentage symbol is optional, values will be converted to integer (e.g. '23.5' will become 23) provider: name of the provider under which throttling tracking is aggregated, limit: - a hard-coded integer - a hash pointing at the response header containing the limit value - a proc that receives the response as argument and returns the limit value remaining: - a hash pointing at the response header containing the current amount of remaining requests - a proc that receives the response as argument and returns the current amount of remaining requests expires: - a hash pointing at the response header containing the timestamp when the quota will reset - a proc that receives the response as argument and returns the timestamp when the quota will reset Zipkin ** Zipkin 0.33 breaks our current implementation of the Zipkin interceptor ** Zipkin is a distributed tracing system. It helps gather timing data needed to troubleshoot latency problems in microservice architectures Zipkin Distributed Tracing. Add the zipkin interceptor to your basic set of LHC interceptors. LHC.configure do |c| c.interceptors = [LHC::Zipkin] end The following configuration needs to happen in the application that wants to run this interceptor: - Add gem 'zipkin-tracer', '< 0.33.0'to your Gemfile. 
- Add the necessary Rack middleware and configuration config.middleware.use ZipkinTracer::RackHandler, { service_name: 'service-name', # name your service will be known as in zipkin service_port: 80, # the port information that is sent along the trace json_api_host: '', # the zipkin endpoint sample_rate: 1 # sample rate, where 1 = 100% of all requests, and 0.1 is 10% of all requests } Create an interceptor from scratch class TrackingIdInterceptor < LHC::Interceptor def before_request request.params[:tid] = 123 end end LHC.configure do |c| c.interceptors = [TrackingIdInterceptor] end Interceptor callbacks before_raw_request is called before the raw typhoeus request is prepared/created. before_request is called when the request is prepared and about to be executed. after_request is called after request was started. before_response is called when response started to arrive. after_response is called after the response arrived completely. Interceptor request/response Every interceptor can directly access their instance request or response. Provide a response replacement through an interceptor Inside an interceptor, you are able to provide a response, rather then doing a real request. This is useful for implementing e.g. caching. class LHC::Cache < LHC::Interceptor def before_request(request) cached_response = Rails.cache.fetch(request.url) return LHC::Response.new(cached_response) if cached_response end end Take care that having more than one interceptor trying to return a response will cause an exception. You can access the request.response to identify if a response was already provided by another interceptor. class RemoteCacheInterceptor < LHC::Interceptor def before_request(request) return unless request.response.nil? return LHC::Response.new(remote_cache) end end Testing When writing tests for your application when using LHC, please make sure you require the lhc rspec test helper: # spec/spec_helper.rb require 'lhc/rspec' License GNU General Public License Version 3.
https://www.rubydoc.info/gems/lhc/13.1.0
CC-MAIN-2021-04
en
refinedweb
ReSharper C++ 2020.1 Early Access Program Is Now Open

Today we are launching the Early Access Program for the next major release of ReSharper C++ – 2020.1. Try the EAP builds for free and get early access to the latest improvements and upcoming features! You can download the new EAP build from our website, or via the Toolbox App.

DOWNLOAD RESHARPER C++ 2020.1 EAP

The first EAP build improves on the features introduced in the recent 2019.3 release. It also includes some new features and enhancements. The main highlights of this build are:

- C++ support: using enum, attributes, and more support for C++20's concepts
- Code completion: attributes, goto, std::forward, and calling a base function
- Code analysis: new inspections and quick fixes
- Unreal Engine 4: better Rename refactoring
- Sorting of #include directives: more options
- Other changes

C++ support

In this first EAP, we have introduced more features of the C++20 standard and we've extended support for C++17 attributes.

Using enum

C++20 improves using declarations to support bringing specific enumerators into the local scope. The new using enum syntax lets you add all the enumerators of the target enumeration at once. As a result, you can omit repetitions of the enumeration name when using its member enumerators, making your code more concise.

ReSharper C++ 2020.1 supports the new syntax and also adds a new refactoring that helps with adding using enum statements. To invoke it, place the caret at an enumerator and press Ctrl+Shift+R, or choose ReSharper | Refactor | Refactor This from the main menu, and then select Introduce Using Enum from the Refactor This menu.

C++20's Concepts

ReSharper C++ 2020.1 supports two new concept-related features:

- Abbreviated function templates – we have extended C++20's Concepts support with this new syntax for function and lambda declarations. You can now declare a template function with the auto or "concept auto" placeholder in the list of parameters.
- Constrained type placeholders – you can now constrain an auto type with a concept.

Attributes

We've extended support for the C++17 [[maybe_unused]] and [[nodiscard]] attributes. Here is a short overview of when you can use them and how ReSharper C++ can help.

[[maybe_unused]]

The [[maybe_unused]] attribute can be added to avoid compiler warnings about an unused name or entity. It is applicable to class declarations, functions, variables, and more. When the caret is on an unused entity, a new context action, Add [[maybe_unused]], is available. Another context action helps you replace usages of the UNREFERENCED_PARAMETER macro with a [[maybe_unused]] attribute.

[[nodiscard]]

The [[nodiscard]] attribute can be added to raise a warning when the return value of a function is not used. ReSharper C++ 2020.1 now offers the option to declare generated getters and constructors [[nodiscard]] in the generation wizard. You can also use a quick fix for the modernize-use-nodiscard inspection to add the attribute to member functions.

Code completion

Code completion is a very important feature for improving your productivity.
Just start typing or press Ctrl+Space to see the suggestion lists for the following new code completions:

- attribute names
- label names for the goto statement
- std::forward in the postfix template suggestion list
- parameters for a call to a base function from an overriding function

Code analysis

New inspections detect more cases where you should prefer static_cast, and the corresponding quick-fix is there to help you update your code:

- Functional-style cast used instead of a C++ cast
- reinterpret_cast used instead of a static_cast when casting to void*

And there's one more new inspection – if a local variable is captured by a lambda but not used inside the lambda body, ReSharper C++ notifies you and suggests removing the unused capture.

Unreal Engine 4

Rename is one of the most useful refactorings, and now it has more UE4-specific support:

- When renaming a UE4 header, the corresponding #include *.generated.h directive will also be updated.
- When renaming a UE4 type, the corresponding header and source files (with the A, F, E prefixes) will also be renamed.

Sorting of #include directives

We presented this feature in the 2019.3 release, but we have already received a lot of feedback, and we're ready to make the sorting rules even more customizable. There are two new options:

- Case-sensitive sort – used to place all includes starting with uppercase letters before the lowercase ones.
- Group headers by directory – used to create groups of headers based on their location.

Other changes

A small but pleasant improvement to typing assistance: you can now select any piece of code and enter a single parenthesis/bracket/quote to surround the selection with the corresponding characters.

We've also added new filter categories in Go to for concepts and namespaces. Press Ctrl+T and type "/" to see all the available filters.

We have resolved the performance issue related to type hints in dependent code. They are now shown by default without any problems!

The full list of all issues fixed in this EAP build can be found in our issue tracker. If you want to know more about what is coming in future builds, check out our roadmap. Download, try it out, and share your feedback with us!

DOWNLOAD RESHARPER C++ 2020.1 EAP

Your ReSharper C++ team
JetBrains
The Drive to Develop

2 Responses to ReSharper C++ 2020.1 Early Access Program Is Now Open

sergegers says: February 27, 2020
Hmm… Abbreviated function templates are not supported by the compiler yet.

Igor Akhmetov says: February 27, 2020
Abbreviated function templates aren't supported by MSVC yet, but there are several good reasons why we added them:
1) Clang 10 supports them (as does GCC 10), and you can use Clang from VS either for production builds or to experiment with new language features.
2) We anticipate that this feature will be implemented in MSVC very soon, and we'd like you to be able to use R++ with newly supported language features right away. Implementing a big feature like that takes some time on our part, and then it also has to get tested and shipped in a release build.
3) We wanted to finalize our work on concepts support.
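For readers following along without the post's screenshots, here is a small, generic C++20 sketch of the syntax features discussed above: using enum, an abbreviated function template with a constrained auto parameter, and the [[nodiscard]] and [[maybe_unused]] attributes. The enum, function names, and values are invented for illustration; this is not code taken from the post.

    #include <concepts>
    #include <iostream>
    #include <string>

    enum class Color { Red, Green, Blue };

    // C++20 "using enum": enumerators can be named without the Color:: prefix.
    std::string to_string(Color c)
    {
        using enum Color;
        switch (c)
        {
            case Red:   return "red";
            case Green: return "green";
            case Blue:  return "blue";
        }
        return "unknown";
    }

    // Abbreviated function template: the constrained auto parameter makes this a template.
    // [[nodiscard]] warns if a caller ignores the return value.
    [[nodiscard]] auto square(std::integral auto x) { return x * x; }

    int main([[maybe_unused]] int argc, [[maybe_unused]] char** argv)
    {
        std::cout << to_string(Color::Green) << ' ' << square(7) << '\n';
        return 0;
    }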
https://blog.jetbrains.com/rscpp/2020/02/27/resharper-cpp-2020-1-eap/
CC-MAIN-2021-04
en
refinedweb
Program hardware from the Linux command line

Programming hardware has become more common thanks to the rise of the Internet of Things (IoT). RT-Thread lets you contact devices from the Linux command line with FinSH.

RT-Thread is an open source real-time operating system used for programming Internet of Things (IoT) devices. FinSH is RT-Thread's command-line component; it provides a set of operation interfaces that let users contact a device from the command line. It's mainly used to debug or view system information.

Usually, development debugging is done with hardware debuggers and printf logs. In some cases, however, these two methods are not very useful, because they are abstracted from what's running and they can be difficult to parse. RT-Thread is a multi-thread system, though, which is helpful when you want to know the state of a running thread or the current state of a manual control system. Because it's multi-threaded, you're able to have an interactive shell, so you can enter commands, call a function directly on the device to get the information you need, or control the program's behavior. This may seem ordinary to you if you're only used to modern operating systems such as Linux or BSD, but for hardware hackers this is a profound luxury, and a far cry from wiring serial cables directly onto boards to get glimpses of errors.

FinSH has two modes:

- A C-language interpreter mode, known as c-style
- A traditional command-line mode, known as msh (module shell)

In the C-language interpretation mode, FinSH can parse expressions that execute most of the C language and can access functions and global variables on the system through function calls. It can also create variables from the command line. In msh mode, FinSH operates similarly to traditional shells such as Bash.

The GNU command standard

When we were developing FinSH, we learned that before you can write a command-line application, you need to become familiar with GNU command-line standards. This framework of standard practices helps bring familiarity to an interface, which helps developers feel comfortable and productive when using it.

A complete GNU command consists of four main parts:

- Command name (executable): the name of the command-line program
- Sub-command: the sub-function name of the command program
- Options: configuration options for the sub-command function
- Arguments: the corresponding arguments for the configuration options of the sub-command function

You can see this in action with any command. Taking Git as an example:

    git reset --hard HEAD~1

Which breaks down as: the executable command is git, the sub-command is reset, the option used is --hard, and the argument is HEAD~1.

Another example:

    systemctl enable --now firewalld

The executable command is systemctl, the sub-command is enable, the option is --now, and the argument is firewalld.

Imagine you want to write a command-line program that complies with the GNU standards using RT-Thread. FinSH has everything you need and will run your code as expected. Better still, you can rely on this compliance so you can confidently port your favorite Linux programs.

Write an elegant command-line program

Here's an example of RT-Thread running a command that RT-Thread developers use every day.
    usage: env.py package [-h] [--force-update] [--update] [--list] [--wizard]
                          [--upgrade] [--printenv]

    optional arguments:
      -h, --help      show this help message and exit
      --force-update  force update and clean packages, install or remove the
                      packages by your settings in menuconfig
      --update        update packages, install or remove the packages by your
                      settings in menuconfig
      --list          list target packages
      --wizard        create a new package with wizard
      --upgrade       upgrade local packages list and ENV scripts from git repo
      --printenv      print environmental variables to check

As you can tell, it looks familiar and acts like most POSIX applications that you might already run on Linux or BSD. Help is provided when incorrect or insufficient syntax is used, both long and short options are supported, and the general user interface is familiar to anyone who's used a Unix terminal.

Kinds of options

There are many different kinds of options, and they can be divided into two main categories by length:

- Short options: consist of one hyphen plus a single letter, e.g., the -h option in pkgs -h
- Long options: consist of two hyphens plus words or letters, e.g., the --target option in scons --target=mdk5

You can also divide options into three categories, determined by whether they take arguments:

- No arguments: the option cannot be followed by arguments
- Arguments must be included: the option must be followed by arguments
- Arguments optional: arguments after the option are allowed but not required

As you'd expect from most Linux commands, FinSH option parsing is pretty flexible. It can distinguish an option from an argument based on a space or equals sign as delimiter, or just by extracting the option itself and assuming that whatever follows is the argument (in other words, no delimiter at all):

    wavplay -v 50
    wavplay -v50
    wavplay --vol=50

Using optparse

If you've ever written a command-line application, you may know there's generally a library or module for your language of choice called optparse. It's provided to programmers so that options (such as -v or --verbose) entered as part of a command can be parsed in relation to the rest of the command. It's what helps your code know an option from a sub-command or argument.

When writing a command for FinSH, the optparse package expects this format:

    MSH_CMD_EXPORT_ALIAS(pkgs, pkgs, this is test cmd.);

You can implement options using the long or short form, or both. For example:

    static struct optparse_long long_opts[] =
    {
        {"help"        , 'h', OPTPARSE_NONE}, // Long option: help, corresponding to short option h, without arguments
        {"force-update", 0 , OPTPARSE_NONE}, // Long option: force-update, without arguments
        {"update"      , 0 , OPTPARSE_NONE},
        {"list"        , 0 , OPTPARSE_NONE},
        {"wizard"      , 0 , OPTPARSE_NONE},
        {"upgrade"     , 0 , OPTPARSE_NONE},
        {"printenv"    , 0 , OPTPARSE_NONE},
        { NULL         , 0 , OPTPARSE_NONE}
    };

After the options are created, write the command and instructions for each option and its arguments:

    static void usage(void)
    {
        rt_kprintf("usage: env.py package [-h] [--force-update] [--update] [--list] [--wizard]\n");
        rt_kprintf("                      [--upgrade] [--printenv]\n\n");
        rt_kprintf("optional arguments:\n");
        rt_kprintf("  -h, --help      show this help message and exit\n");
        rt_kprintf("  --force-update  force update and clean packages, install or remove the\n");
        rt_kprintf("                  packages by your settings in menuconfig\n");
        rt_kprintf("  --update        update packages, install or remove the packages by your\n");
        rt_kprintf("                  settings in menuconfig\n");
        rt_kprintf("  --list          list target packages\n");
        rt_kprintf("  --wizard        create a new package with wizard\n");
        rt_kprintf("  --upgrade       upgrade local packages list and ENV scripts from git repo\n");
        rt_kprintf("  --printenv      print environmental variables to check\n");
    }

The next step is parsing. While you can't implement its functions yet, the framework of the parsing code is the same:

    int pkgs(int argc, char **argv)
    {
        int ch;
        int option_index;
        struct optparse options;

        if (argc == 1)
        {
            usage();
            return RT_EOK;
        }

        optparse_init(&options, argv);
        while ((ch = optparse_long(&options, long_opts, &option_index)) != -1)
        {
            ch = ch;

            rt_kprintf("\n");
            rt_kprintf("optopt = %c\n", options.optopt);
            rt_kprintf("optarg = %s\n", options.optarg);
            rt_kprintf("optind = %d\n", options.optind);
            rt_kprintf("option_index = %d\n", option_index);
        }
        rt_kprintf("\n");

        return RT_EOK;
    }

Here are the header files the function needs:

    #include "optparse.h"
    #include "finsh.h"

Then, compile and download onto a device.

Hardware hacking

Programming hardware can seem intimidating, but with IoT it's becoming more and more common. Not everything can or should be run on a Raspberry Pi, but with RT-Thread you can maintain a familiar Linux feel, thanks to FinSH. If you're curious about coding on bare metal, give RT-Thread a try.

1 Comment

a good read - thanks for the useful information
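The article's example only uses options without arguments (OPTPARSE_NONE). As a possible extension, here is a sketch of a command whose option requires an argument. The OPTPARSE_REQUIRED constant mirrors the upstream optparse library that FinSH's optparse package is based on, and the vol command, its option names, and the volume handling are invented for illustration; check them against your RT-Thread version before relying on them.

    #include <stdlib.h>
    #include "optparse.h"
    #include "finsh.h"

    static struct optparse_long vol_opts[] =
    {
        {"help", 'h', OPTPARSE_NONE},     // no argument
        {"vol" , 'v', OPTPARSE_REQUIRED}, // argument must be included, e.g. -v 50, -v50, --vol=50
        {NULL  , 0  , OPTPARSE_NONE}
    };

    int vol(int argc, char **argv)
    {
        int ch;
        int option_index;
        struct optparse options;

        optparse_init(&options, argv);
        while ((ch = optparse_long(&options, vol_opts, &option_index)) != -1)
        {
            switch (ch)
            {
            case 'v':
                // options.optarg holds the argument text supplied with -v/--vol
                rt_kprintf("volume set to %d\n", atoi(options.optarg));
                break;
            case 'h':
            default:
                rt_kprintf("usage: vol [-h] [-v VOLUME]\n");
                break;
            }
        }
        return RT_EOK;
    }

    MSH_CMD_EXPORT_ALIAS(vol, vol, set playback volume);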
https://opensource.com/article/20/9/hardware-command-line
CC-MAIN-2021-04
en
refinedweb
Can two projects be combined without the namespaces interfering?

I would like to have a single compile produce a hex file that contains both the DFU and application portions of the memory content, without the two interfering. The DFU only knows the start address of the application, and the application doesn't know anything about the DFU.

Hello Marc:

Is this a Kinetis project? Typically, to merge two projects (bootloader + application) you can add the binary of one of them into the other project using the linker file. Check the following threads, which are related to your question:

- AN2295 Integration... a few questions (long thread, but check Kan_Li's suggestions in the middle).
- Re: 2 separate programs on one processor (Bootloader- and Applicationcode) flashed with Open SDA.

I hope this helps.

Regards,
Jorge Gonzalez
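As a rough illustration of the linker-file approach described in the reply (this is not code from the linked threads): with a GNU toolchain, the already-built application binary can be pulled into the bootloader link as a raw blob and placed at the application's flash address, so a single build emits one combined image. The file names, section name, and addresses below are placeholders and must match your own memory map and the address the DFU jumps to.

    /* app_blob.s : assembled as part of the bootloader project (GNU as syntax).    */
    /* Embeds the separately built application image as read-only data.             */
        .section .app_image, "a"
        .incbin  "application.bin"      /* placeholder path to the application binary */

    /* Fragment to merge into the bootloader's linker script (GNU ld syntax).       */
    /* 0x0000C000 is only an example application start address.                     */
    MEMORY
    {
        FLASH_BOOT (rx) : ORIGIN = 0x00000000, LENGTH = 48K
        FLASH_APP  (rx) : ORIGIN = 0x0000C000, LENGTH = 208K
    }
    SECTIONS
    {
        .app_image :
        {
            KEEP(*(.app_image))
        } > FLASH_APP
    }

The project's usual objcopy step to Intel HEX then produces a single hex file that covers both the bootloader and the embedded application region.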
https://community.nxp.com/thread/329217
CC-MAIN-2018-22
en
refinedweb