Dataset columns: text (string, lengths 20 to 1.01M), url (string, lengths 14 to 1.25k), dump (string, lengths 9 to 15), lang (4 classes), source (4 classes)
[ ] Jing Zhao updated HDFS-6353:
----------------------------
    Status: Patch Available  (was: Open)

> Handle checkpoint failure more gracefully
> -----------------------------------------
>
>                 Key: HDFS-6353
>                 URL:
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>          Components: namenode
>            Reporter: Suresh Srinivas
>            Assignee: Jing Zhao
>         Attachments: HDFS-6353.000.patch, HDFS-6353.001.patch
>
>
> One of the failure patterns I have seen is that, in some rare circumstances, the secondary or standby fails to consume the editlog due to some inconsistency. The only solution when this happens is to save the namespace at the current active namenode. But sometimes when this happens, an unsuspecting admin might end up restarting the namenode, requiring a more complicated solution to the problem (such as ignoring the editlog record that cannot be consumed, etc.).
> How about adding the following functionality:
> When the checkpointer (standby or secondary) fails to consume the editlog, it lets the active namenode know about the failure, based on a configurable on/off flag. The active namenode can then enter safemode and save its namespace. When in this type of safemode, the namenode UI also shows information about the checkpoint failure and that the namespace is being saved. Once the namespace is saved, the namenode can come out of safemode.
> This means service unavailability (even in an HA cluster). But it might be worth it to avoid long startup times or the need for other manual fixes. Thoughts?

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
http://mail-archives.apache.org/mod_mbox/hadoop-hdfs-issues/201501.mbox/%3CJIRA.12712918.1399488774000.66674.1421114794995@Atlassian.JIRA%3E
CC-MAIN-2017-39
en
refinedweb
This is an excerpt from the Scala Cookbook (partially modified for the internet). This is a short recipe, Recipe 7.3, “How to rename members on import in Scala.”

Problem

You want to rename Scala members when you import them to help avoid namespace collisions or confusion.

Solution

Give the class you’re importing a new name when you import it with this import syntax:

import java.util.{ArrayList => JavaList}

Then, within your code, refer to the class by the alias you’ve given it:

val list = new JavaList[String]

You can also rename multiple classes at one time during the import process:

import java.util.{Date => JDate, HashMap => JHashMap}

Because you’ve created these aliases during the import process, the original (real) name of the class can’t be used in your code. For instance, in the last example, the following code will fail because the compiler can’t find the java.util.HashMap class:

// error: this won't compile because HashMap was renamed during the import process
val map = new HashMap[String, String]

Discussion

As shown, you can create a new name for a class when you import it, and can then refer to it by the new name, or alias. The book Programming in Scala, by Odersky et al. (Artima), refers to this as a renaming clause.

This can be very helpful when trying to avoid namespace collisions and confusion. Class names like Listener, Handler, Client, Server, and many more are all very common, and it can be helpful to give them an alias when you import them.

From a strategy perspective, you can either rename all of the classes that might conflict or be confusing:

import java.util.{HashMap => JavaHashMap}
import scala.collection.mutable.{Map => ScalaMutableMap}

or you can rename just one class to clarify the situation:

import java.util.{HashMap => JavaHashMap}
import scala.collection.mutable.Map

As an interesting combination of several recipes, not only can you rename classes on import, you can even rename class members. As an example of this, in shell scripts I tend to rename the println method to a shorter name, as shown here in the REPL:

scala> import System.out.{println => p}
import System.out.{println=>p}

scala> p("hello")
hello

The Scala Cookbook

This tutorial is sponsored by the Scala Cookbook, which I wrote for O’Reilly.
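As a final quick sketch (not from the book; the AliasDemo object below is made up for illustration), the two aliases from the strategy example above let both map types be used side by side in one file without any ambiguity:

import java.util.{HashMap => JavaHashMap}
import scala.collection.mutable.{Map => ScalaMutableMap}

object AliasDemo extends App {
  // each type was renamed at import time, so neither "HashMap" nor "Map" is ambiguous
  val jmap = new JavaHashMap[String, Int]
  jmap.put("one", 1)

  val smap = ScalaMutableMap("two" -> 2)

  println(jmap.get("one"))  // prints 1
  println(smap("two"))      // prints 2
}

Running it prints 1 and 2; the Java and Scala map classes coexist because each was given its own alias when it was imported.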
https://alvinalexander.com/scala/how-to-rename-members-import-scala-classes-methods-functions
CC-MAIN-2017-39
en
refinedweb
05 September 2008 19:55 [Source: ICIS news]

WASHINGTON -- The Synthetic Organic Chemical Manufacturers Association (SOCMA) said it has begun a series of meetings with regulatory officials of the outgoing Bush administration and with staff in the election campaigns of Republican presidential nominee John McCain and Democratic candidate Barack Obama.

“We want to do what we can to help wrap up pending matters at regulatory agencies or on Capitol Hill before the Bush administration leaves office,” said association president Joe Acker.

“Of course not everything we want can be concluded in the few months remaining to the Bush White House, so we’re also meeting with McCain’s people and with the Obama campaign folks so that they’ll at least be aware of things that are of interest to our industry,” he said.

For example, Acker said SOCMA officials are to meet with Environmental Protection Agency (EPA) staff to gauge and urge progress on the North American chemical assessment and management programme (ChAMP) that was agreed to by the

ChAMP is seen by SOCMA and other chemical industry trade groups as a more palatable and workable alternative to the EU’s wide-ranging plan for the registration, evaluation and authorisation of chemicals (Reach), now being put in force. ChAMP is a risk-based approach to chemicals control while Reach is based chiefly on the precautionary principle.

“We want to find out from EPA where development of that ChAMP programme is at,” Acker said. “Is it being pushed hard? Will it carry over to the next administration, or is it losing energy?”

Acker said the association’s representatives also will meet with the Department of Homeland Security (DHS) to see how well implementation of the Chemical Facility Anti-Terrorism Standards (CFATS) is progressing. A new and more Democrat-dominated Congress may want to enact tougher antiterrorism security requirements for the some 7,000

Specialty and batch chemical producers represented by SOCMA and a broad array of other manufacturers also will press key members of Congress in the next few weeks to complete work on legislation to renew the federal research and development (R&D) tax credit. That tax credit expired at the end of 2007, and SOCMA and others are anxious to see it renewed before the end of this Congress, warning that

Congress returns from its August recess next Monday but will be in session for only a few weeks before the legislators break again to resume re-election campaigning in advance of the nationwide
http://www.icis.com/Articles/2008/09/05/9154478/us-chems-worry-that-key-issues-will-falter.html
CC-MAIN-2015-06
en
refinedweb
This document is a NOTE made available by the W3 Consortium for discussion only. This indicates no endorsement of its content by the W3C, nor that the Consortium has allocated, is allocating, or will be allocating any resources to the issues addressed by the NOTE. It replaces the previous version of the SOX language specification and represents the current, implemented version of the language. This document is a submission to W3C from Commerce One, Inc.. Please see Acknowledged Submissions to W3C regarding its disposition. This document describes SOX 2.0, the second version of the Schema for Object-Oriented XML. SOX is a schema language (or metagrammar) for defining the syntactic structure and partial semantics of XML document types. As such, SOX is an alternative to XML DTDs and can be used to define the same class of document types (with the exception of external parsed entities). However, SOX extends the language of DTDs by supporting: All of these features are supported with strong type-checking and validation. A SOX schema is also a valid XML instance according to the SOX DTD, enabling the application of XML content management tools to schema management. SOX was initially developed to support the development of large-scale, distributed electronic commerce applications but is applicable across the whole range of applications of markup. As compared to XML DTDs, SOX dramatically decreases the complexity of supporting interoperation among heterogenous applications by facilitating software mapping of XML data structures, expressing domain abstractions and common relationships directly and explicitly, enabling reuse at the document design and the application programming levels, and supporting the generation of common application components. Although SOX 2.0 retains many of the features of SOX 1.0, it represents an additional year of actual implementation experience. Commerce One has a working implementation of this language and will be releasing products based on it. Our goals in releasing this second version are two-fold: From the markup world, the SOX proposal is informed by the XML 1.0 [XML] specification as well as the XML-Data submission [XML-Data] and the Document Content Description submission[DCD]. However many of SOX' requirements come from the distributed computing world and SOX features have been heavily influenced by the Java[JAVA] programming language. As a SOX document is not defined by a DTD, the native XML 1.0 Doctype declaration is not appropriate for a document instance to declare SOX Schema information (note that this may not continue to be true once the W3C Schema WG defines an official mechanism). Therefore we have created the soxtype declaration , which mimics the XML doctype declaration, but using a processing instruction. The soxtype declaration must be the first statement in a document following the optional XML declaration and declares that the default namespace [XML-Namespaces] of this document is that of the given SOX Schema, just as a doctype declaration declares that an XML 1.0 document conforms to the given Document Type Declaration (DTD). The soxtype declaration is not currently intended to coexist in the same document with a doctype declaration, but there is no particular restriction prohibiting this. 
The soxtype declaration includes two parts: An example declaration would look like:

<?soxtype urn:x-commerceone:document:com:commerceone:schema1.sox$1.0?>

where the schema definition is located through resolving the urn urn:x-commerceone:document:com:commerceone:schema1.sox$1.0. The part following the "$" gives the version number. A small sample document would look like:

<?soxtype urn:x-commerceone:document:com:commerceone:schema1.sox$1.0?>
<Root>
  <Body/>
</Root>

The root element of the instance is not required to belong to the default namespace. This will be further elaborated in the discussion of the import PI below.

With the arrival of namespaces and polymorphism, it is no longer necessarily possible to indicate a single schema to which the entire instance conforms. Nevertheless, we require a mechanism to indicate a set of schemata containing definitions for all the element and datatypes appearing in an instance. This function is handled by the import processing instruction. The import PI contains one argument, an absolute URI indicating a schema. An example import PI would look like:

<?import urn:x-commerceone:document:com:commerceone:schema1.sox$1.0?>

where the schema definition is located through resolving the URI: urn:x-commerceone:document:com:commerceone:schema1.sox$1.0. The part following the "$" gives the version number. A small sample document would look like:

<?soxtype urn:x-commerceone:document:com:commerceone:schema1.sox$1.0?>
<?import urn:x-commerceone:document:com:commerceone:schema2.sox$1.0?>
<Root>
  <s2:Body xmlns:s2="urn:x-commerceone:document:com:commerceone:schema2.sox$1.0"/>
</Root>

In the following, the namespace attribute on Root overrides the default namespace declared in the soxtype PI, and Body is assumed to also be in schema2.sox.

<?soxtype urn:x-commerceone:document:com:commerceone:schema1.sox$1.0?>
<?import urn:x-commerceone:document:com:commerceone:schema2.sox$1.0?>
<Root xmlns="urn:x-commerceone:document:com:commerceone:schema2.sox$1.0">
  <Body/>
</Root>

It is not necessary that for every element type appearing in the instance there be a corresponding import or soxtype processing instruction. Nevertheless, all schema information must be available before processing of the root element begins. We can define a transitive closure property over schemata to accomplish this. We require each schema declared in a soxtype or import PI to be processed. Furthermore, if any definition in a schema being processed refers to a definition (elementtype or datatype) in another schema, that other schema must be processed. The set of imports must be sufficient so that starting with the soxtype and the imports, and processing all schemata transitively referenced by them, all elementtypes and datatypes found in the instance will have been processed.

The definition of an XML schema is performed with the schema element. The corresponding DTD fragment is:

<!ELEMENT schema (intro?, (datatype | elementtype | join | comment | namespace)*)>
<!ATTLIST schema
  prefix NMTOKEN #IMPLIED
  uri CDATA #REQUIRED
  soxlang-version NMTOKEN #FIXED "V2.0">

The following is a minimal valid instance of a schema:

<schema uri="urn:x-commerceone:sampleSchema"/>

However it is unlikely there will be many schemata containing no definitions at all. A schema consists of any number of definitions of datatypes or element types (both of which will be explained subsequently).
In other words, a schema is a set of definitions, not the set of files, database entries, etc., we use to store a representation of these definitions: the physical storage mechanism we use may change frequently, without the set of definitions being affected at all. A schema may start out as a single file, then be split among several files (which would currently be linked using the join mechanism described below), before being stored in a database. But despite being stored in three different ways, the set of definitions remains the same, and it is that set which comprises the schema. Each schema has a unique name to identify it. This name is a URI, given in the uri attribute of the schema element in a fragment. In essence, it proclaims the set of definitions in this schema element to be a subset of the definitions forming the schema identified by the uri attribute. The join construct allows definitions from externally defined fragments belonging to the same schema to be pulled in. There are three main classes of symbols created during the construction of a Schema: elementtype names, datatype names, and namespace prefixes. In the current version of the language, both elementtype names and datatype names are maintained in a single set; it is illegal to use the same name to represent both an element type and a datatype. Prefixes for namespaces, however, are kept separately. It is entirely legal, although potentially confusing, to use the same name for a namespace prefix as for an elementtype or datatype . NOTE: It is our intention to separate the datatype and elementtype namespaces in a future version of SOX. When we do so, this will be entirely backwards compatible. The expressive power of SOX, however, is unchanged whether the namespaces are separated or coallesced. The prefix attribute specifies a prefix available for referencing names created in this schema. As the current schema is the default namespace for dereferencing names, this is not strictly necessary, however it can be useful when definitions from several namespaces are mixed in close proximity. The uri attribute provides a URI establishing the namespace of this schema fragment. This attribute has become required to prevent accidental name capture through the join mechanism described below. This must be an absolute URI. The namespace element is considered a declaration, not a definition. The namespace being declared is defined elsewhere (probably in a Schema file) N.B. SOX does not deal with a number of linking issues related to the organization of schemata into multiple files. The intro element is available to provide an introduction to the schema as a whole. It consists of a number of HTML elements. The exact allowable contents of intro is available in the htmltext.ent file reproduced at the end of this document. The following are the constraints placed on Schema files, aside from strict structural conformance to the DTD. A Schema is about types. Types are both defined and referenced. Referencing is done using names. Once a type (or namespace) is given a name (or prefix) in a fragment, that name that name is bound in that fragment. Names that are only referenced are free in that fragment. A Schema fragment is processed in an environment. That environment extends beyond the fragment to include other schemata and the SOX definition itself. This document defines how the environment of a fragment is defined, but does not discuss how it is physically constructed. 
A Schema fragment cannot be successfully processed unless all the names that are free in it are bound somewhere in its environment. It is an error for a name to be bound twice in a fragment's environment. A fragment specifies its environment by providing bindings for all the free names it contains. There are four pieces to its environment that a fragment must specify:

Names are either qualified or unqualified. Unqualified names must be defined either in the current fragment, in the rest of the schema, or in the SOX definition; a name can only be defined in one of the three. Qualified names must be defined in the schema declared for that name with a namespace declaration. For each qualified name in a fragment there must be a corresponding namespace declaration in the same fragment. The schema element itself is considered a namespace declaration for the current namespace, so qualified names using the value in the prefix attribute of the schema element must resolve to the current schema (although not necessarily in the same fragment). In this version of the language it is an error to create a definition whose local name is one of the intrinsic datatypes.

<!ELEMENT namespace (explain?)>
<!ATTLIST namespace
  prefix NMTOKEN #REQUIRED
  namespace CDATA #REQUIRED >

All of the names defined in a single Schema belong to the same namespace, and can be used without a qualifier. Schemata, however, frequently need to refer to definitions in other namespaces. A namespace declaration allows access to definitions in the referenced schema when appearing with an appropriate prefix attribute. A namespace declaration is scoped to the current schema fragment only. Namespace declarations made in one schema fragment are not visible in other fragments belonging to the same schema, even when referenced through a join.

NOTE: We will use a prefix attribute for the prefix, instead of using colonized names, in accordance with the XSDL spec [XSDL].

The prefix attribute shows up on various elements, including attdef, scalar, and extends, all of which include a reference to a definition. XML requires the use of qualified names to make such references in document instances. A qualified name in an instance consists of two parts separated by a colon:

SOX implements qualified names through the use of two attributes, one for the name, and one for the prefix. Qualified names can be used wherever a reference to a definition is allowed - a schema cannot define a name in another schema, but it can extend an elementtype from another schema, or use a datatype from another Schema. For example, the following fragment declares the urn:foo namespace and associates it with the prefix bar:

<namespace prefix="bar" namespace="urn:foo"/>

If we later need to include a foobar element from the urn:foo namespace in an element type, that would be done using the following fragment:

<elementtype name="et">
  <model>
    <element prefix="bar" type="foobar" name="whatever"/>
    <element prefix="bar" type="foobar"/>
  </model>
</elementtype>

A valid instance of this would be:

<et><whatever><bar:foobar xmlns:bar="urn:foo"/></whatever>
<bar:foobar xmlns:bar="urn:foo"/></et>

Each prefix must be unique within a schema fragment. While it is not a fatal error to declare a non-existent namespace, it is a fatal error to reference an element in a non-existent namespace, or to reference a non-existent element in an existing namespace. Both of these are semantic errors. It is a fatal runtime error if the processor is unable to retrieve a definition during processing.
A processor should distinguish among these cases. This specification does not address the issue of how to retrieve definitions. The value of the namespace attribute of the namespace declaration must be the URI of a schema, as described above. In other words, it must be the same as the value appearing in the uri attribute of the schema element of files which define that schema. The explain element exists inside several different SOX constructs. It provides a hook for including documentation within a schema and exploits commonly known HTML [HTML-4] constructs. <!ELEMENT explain (title?, synopsis?, (%html.block;)+) > The title and html.block elements are common HTML constructs whose exact definitions are specified in the htmltext.ent file included at the end of this document. The synopsis is used to give a purpose or synopsis to the thing being explain ed. It is a single paragraph of text. Because it is HTML embedded in XML, all the HTML constructs must be used in a well-formed manner. It is common to take advantage of SGML tag minimization in writing HTML documents, but that would result in well-formedness errors in a SOX schema. In XML Schema documents, element type definitions reproduce the expressiveness of XML element type declarations using explicit element and attribute markup. An element type may be defined by using the elementtype element with the required name attribute, and a subordinate model or extends element (both of which will be described subsequently). The corresponding DTD fragment is: <!ELEMENT elementtype (explain?, (extends | ((empty|model), (attdef)*)))> <!ATTLIST elementtype name NMTOKEN #REQUIRED > The following example defines an element type of name inline : <elementtype name="inline"> <explain> <synopsis>This defines the <em>inline</em> element</synopsis> </explain> <model> <string/> </model> </elementtype> A mechanism for attaching attributes to an element type is described later. A valid instance for this fragment would be: <inline>This is a string</inline> The name of an element type may be any valid unqualified XML element type name corresponding to the Name production in the XML 1.0 language definition. The name must be unique among the names of element types and datatypes defined in the current Schema, which includes the current document or other documents belonging to the same Schema processed through the join mechanism or other resolution mechanism. An element type may be referenced by the element and extends elements. It is a fatal error to re-assign an element name, or to reference an element type which is not defined. The value of the name attribute must be unique across all elementtype names defined in this schema. The content model of an element type defines the structure and composition of an element of that type in an XML instance. The definition of a content model in XML Schema documents extends the expressiveness of XML DTDs by providing greater specificity of the minimum and maximum number of times some content model atom may be repeated. This allows an XML Schema designer with more precise control than is offered by XML's *, ? and + occurrence indicators. 
The DTD fragment corresponding to an content model definition is: <!ELEMENT model (string|element|choice|sequence)> An empty atom is used to indicate that an element may not contain any content, as in the case of the BR element below: <elementtype name="BR"> <empty/> </elementtype> In order to properly support extensibility (explained below) an empty content model is considered to be an empty sequence . Valid instances are: <BR/> or: <BR></BR> The string atom indicates that a content model is simply string content and is an evolution of #PCDATA . It can be used as in the example above. In addition, the string value may be constrained to be of a particular datatype defined by the optional datatype attribute (and prefix , if the datatype is in another schema) as can be seen in the DTD fragment: <!ELEMENT string EMPTY> <!ATTLIST string prefix NMTOKEN #IMPLIED datatype NMTOKEN "string" > Any element type with string in its content model is considered to be a choice group with an occurs value of "*". This is consistent with the XML 1.0 spec which requires that any content model containing #PCDATA be in a choice with a Kleene star. It is unclear if this restriction will be maintained in the Schema world. In the example below, the size element type's content model is string content constrained to be an int : <elementtype name="size"> <model> <string datatype="int" /> </model> </elementtype> A valid example of this would be: <size>12345</size> However the following would not be valid: <size>12r34</size> A content model may also comprise zero or more repetitions of another element. The DTD fragment for this definition is: <!ELEMENT element EMPTY> <!ATTLIST element prefix NMTOKEN #IMPLIED type NMTOKEN #REQUIRED name NMTOKEN #IMPLIED occurs CDATA #IMPLIED > The defined element is an instance of either a previously defined datatype or element type, which is refered to by the required type attribute. As before, it is a fatal error to reference a datatype or element type that is not defined. For purposes of extensibility, a content model with just one element is considered a sequence of length one. The name attribute may be used to assign a name to the defined element when it appears in an instance. As datatypes are not also element types, the name attribute must have a value when type references a datatype. When name is specified, this shows up as an additional element wrapping an element of the referenced type. The following fragment demonstrates the use of name . The type int refers to the built-in integer datatype: <elementtype name="paragraph"> <model> <string/> </model> </elementtype> <elementtype name="block"> <model> <sequence> <element name="p" type="paragraph"/> <element name="position" type="int"/> </sequence> </model> </elementtype> A valid instance would look like this: <block><p><paragraph>this is the paragraph</paragraph></p><position>12345</position></block> The following would not be valid: <block><paragraph>you must use the name</paragraph><int>1</int></block> The occurs attribute indicates the number of repetitions of the instanced element . It can take on the values of: We will call an occurs where N1 is not equal to N2 an indefinite occurs. The degenerate case of "0,0" is allowed and means exactly 0 repetitions, which is treated the same as if the declaration did not occur. NOTE: Values of the form " N1,N2 " and " N1,* " are not currently supported. They will be treated as an occurs of " + " if N1 is greater than 0, or as a " * " otherwise. 
In the following example, the definition of the content model for a list element type specifies that it contains a minimum of 2 and a maximum of 9 item elements. <elementtype name="list"> <model> <element type="item" occurs="2,9"/> </model> </elementtype> A valid instance of the above would be: <list><item/><item/></list> The choice atom defines a content model to comprise one of a set of choices of element , choice or sequence content models. The relevant DTD fragment is: <!ELEMENT choice ((element|choice|sequence), (element|choice|sequence)+) > <!ATTLIST choice name NMTOKEN #IMPLIED occurs CDATA #IMPLIED > As with element , the occurs attribute specifies the number of repetitions, and it can take the same values as defined earlier. In the following example, the dl element type's content model specifies that either a single dt or a single dd element is allowed. <elementtype name="dl"> <model> <choice> <element type="dt"/> <element type="dd"/> </choice> </model> </elementtype> A valid instance would be: <dl><dt/></dl> or <dl><dd/></dl> but not: <dl><dd/><dt/></dl> The sequence atom defines a content model to consist of the specified element , choice or sequence content models appended together in the order specified. The relevant DTD fragment is: <!ELEMENT sequence ((element|choice|sequence), (element|choice|sequence)+) > <!ATTLIST sequence name NMTOKEN #IMPLIED occurs CDATA #IMPLIED > As before the occurs attribute specifies the number of repetitions of the entire sequence , and it can take the same values as defined earlier. In the following example, the dl element type's content model specifies that a single dt followed by a single dd is allowed. <elementtype name="dl"> <model> <sequence> <element type="dt"/> <element type="dd"/> </sequence> </model> </elementtype> A valid instance would be: <dl><dt/><dd/></dl> The various content model atoms defined above may be combined to allow the definition of complex content models. For example, the dl element type's content model below specifies that a dh is followed by two or more dt or dd elements. <elementtype name="dl"> <model> <sequence> <element type="dh"/> <choice occurs="2,*"> <element type="dt"/> <element type="dd"/> </choice> </sequence> </model> </elementtype> A valid instance would be: <dl><dh/><dt/><dd/><dd/><dt/></dl> The occurs attribute may not occur on the outermost sequence or choice in an elementtype definition. This is the sequence or choice immediately contained within model . The outermost sequence or choice within the model must occur exactly once. The value of the name attribute (if any) given to an element, choice, or sequence, must be unique within the innermost enclosing construct. For example, the following is legal: <sequence> <element name="a" type="string"/> <element name="b" type="int"/> </sequence> while the following is not: <sequence> <element name="a" type="string"/> <element name="a" type="int"/> </sequence> Likewise, the following is valid: <sequence> <element name="a" type="string"/> <sequence name="c"> <element name="a" type="string"/> <element name="c" type="int"/> </sequence> <element name="b" type="string"/> </sequence> As in object-oriented inheritance, an element may specialize (or subclass) from another element by inheriting its structure and then adding on to its content model. 
Inheritance is specified using the extends construct, and the relevant DTD fragment is:

<!ELEMENT extends (append?, attdef*)>
<!ATTLIST extends
  prefix NMTOKEN #IMPLIED
  type NMTOKEN #REQUIRED >
<!ELEMENT append (element|choice|sequence)+>

The type attribute refers to the base element type that is being extended, and the structure of the append atom has the same contents as that of model. The base type must be already defined. The contents of the append element are added to the end of the parent's content model (the outermost sequence). Note that the append element has been made optional. This makes it possible to declare semantically distinct element types whose structures remain the same as that of some common parent.

In the following example, the element type datednote has the content model of the element type it extends (note) with an appended date (using the intrinsic date datatype). The multinote element type can polymorphically use either.

<elementtype name="note">
  <model><element type="p" occurs="+"/></model>
</elementtype>
<elementtype name="datednote">
  <extends type="note">
    <append>
      <element type="date" name="adate"/>
      <element type="time" name="atime" occurs="?"/>
    </append>
  </extends>
</elementtype>
<elementtype name="multinote">
  <model><element type="note" occurs="+"/></model>
</elementtype>

The following is a valid instance of multinote:

<multinote>
  <note><p>This is a plain note</p></note>
  <datednote>
    <p>This is a dated note</p>
    <adate>19981209</adate>
    <atime>10:23:32</atime>
  </datednote>
</multinote>

SOX permits an elementtype in one namespace to extend an elementtype in another Schema. In order to work correctly with the current draft of the W3C Namespace Recommendation, each element in an instance belongs to the namespace in which it was declared. In the above example, if note were declared in schema Foo.sox and both datednote and multinote in Bar.sox, then the following would be an appropriately prefixed version of the above example:

<bar:multinote xmlns:
  <foo:note><foo:p>This is a plain note</foo:p></foo:note>
  <bar:datednote>
    <foo:p>This is a dated note</foo:p>
    <bar:adate>19981209</bar:adate>
    <bar:atime>10:23:32</bar:atime>
  </bar:datednote>
</bar:multinote>

A preferred solution would be to consider local names as being in the namespace of the elementtype they are declared in (or its subtypes) and therefore need not be prefixed. This would be similar to the treatment of attributes.

In order to support extensibility, each elementtype must be either a choice or a sequence. By default:

There are some constraints placed on the form of content models to avoid ambiguous models and interminable documents. These are an extension of similar restrictions in XML 1.0. For any possible path in the parse tree from an element to a descendant of itself there must be an intervening optional node (?, *, or indefinite occurs) or an intervening choice node with at least two children. This assures that infinite documents are not required. For any optional node in the parse tree (?, *, +, indefinite occurs, or choice), none of the descendants of elements in its first set may be in its follow set. These conditions continue to hold when extending an elementtype.

Due to the desire to maintain substitutability of extended elementtypes with their base type, SOX 2.0 does not allow the extension of elementtypes whose content model is choice. In order to maintain substitutability, any element extending a choice would need to be a subtype of something already in the choice, so it would already be valid in the parent type.
When extending a sequence with an occurs of indefinite extent, the resulting content model must be checked for ambiguity with itself. In other words, if the parent model was x*, where x is some content model, then the content model xx (x followed immediately by x) must have been unambiguous. If the child model is now x+e (i.e., x plus some additional element), then the content model (x+e)(x+e) must still be unambiguous.

Within an element, the value of the name attribute is scoped to the surrounding elementtype. It neither creates nor prevents the creation of a top level definition using the same name. However, in order to maintain backwards compatibility with XML 1.0, the binding between name and type must be global. In other words, once a name has been used with a particular type, it cannot be used with another type. We expect this restriction to be relaxed in a future version.

Along the lines of being able to define element types, XML Schema provides the means to define and refer to datatypes. XML Schema defines a set of intrinsic datatypes, listed below, from which user-defined datatypes may be derived. This list is derived from existing ISO standards [ISO-31, ISO-8601, DATETIME] and common programming language practice. Some of these have been referred to elsewhere in this document. It also (currently) provides the datatype element for actually defining new datatypes. The appropriate DTD fragment is:

<!ELEMENT datatype (explain?, (enumeration|scalar|varchar)) >
<!ATTLIST datatype
  name NMTOKEN #REQUIRED >

The value of the name attribute specifies the name of the new datatype. This must be unique across all the datatypes defined for this schema. The three operators, enumeration, scalar, and varchar (all further described below), each derive a new datatype from an existing datatype. The existing datatype can be either one of the intrinsic types, or some other user-defined datatype, whether in this schema or another. Both scalar and varchar have some restrictions on which datatypes they can extend, as described below.

The intrinsic datatypes define the domains of the atomic data units in XML Schema documents. The list below includes those datatypes defined intrinsic to this version of the Schema language.

XML Schema documents provide a mechanism for defining enumerations to constrain attribute or element string content. An enumeration datatype is a finite set of values enumerated by the option elements inside the enumeration element.

<!ELEMENT enumeration (explain?, option)+ >
<!ATTLIST enumeration
  prefix NMTOKEN #IMPLIED
  datatype NMTOKEN #REQUIRED >
<!ELEMENT option (#PCDATA)* >

The datatype attribute of the enumeration specifies the intrinsic (see list above) or user-defined datatype (i.e., another enumeration, varchar, or scalar) being refined. If the datatype is not defined in this Schema, then the prefix of the appropriate schema must be specified. Each option has a value representing a valid value for the datatype being extended.
The following example demonstrates the definition and use of an enumeration datatype:

<datatype name="colortype">
  <enumeration datatype="NMTOKEN">
    <option>Red</option>
    <option>Blue</option>
    <option>Green</option>
  </enumeration>
</datatype>
<elementtype name="car">
  <empty/>
  <attdef name="color" datatype="colortype"><required/></attdef>
</elementtype>
<elementtype name="bus">
  <model>
    <element name="color" type="colortype"/>
  </model>
</elementtype>

The following are valid instances:

<car color="Red"/>
<bus><color>Blue</color></bus>

Scalar datatypes are used for creating subtypes of number. The relevant DTD fragment is:

Digits specifies the maximum number of digits for the integral part of the number; decimals specifies the maximum number of digits for the fraction part of the number. The following constraints must hold:

An example scalar would be:

<scalar digits="4" decimals="3" minvalue="-9999" maxvalue="8888" minexclusive="true"/>

The following are valid: -9998.999, 8887.999, 0.0, 8888. The following are invalid: -9999, 8888.001.

The number datatype covers all rational numbers which can be finitely specified as a decimal (i.e., it does not cover infinitely repeating decimals, such as 1/9, or irrational reals, such as π). The other intrinsic classes are designed to be easily mappable to existing datatypes in common programming languages.

Varchar, adapted from SQL [SQL], is for specifying string types with a fixed maximum length.

<!ELEMENT varchar EMPTY>
<!ATTLIST varchar
  prefix NMTOKEN #IMPLIED
  datatype NMTOKEN "string"
  maxlength CDATA #REQUIRED>

The value of maxlength must be a non-negative integer. An example varchar use is:

<datatype name="var">
  <varchar maxlength="4"/>
</datatype>
<elementtype name="wrap">
  <model>
    <string datatype="var"/>
  </model>
</elementtype>

A valid instance would be:

<wrap>abc</wrap>

as would:

<wrap>abcd</wrap>

An invalid instance would be:

<wrap>abcde</wrap>

Float, double, int, long, and byte are all predefined scalar types. This means they can be used as base types for any new definition of a scalar and can be referenced as datatypes. The value of maxlength in varchar must be greater than or equal to 0. Note that a length of 0 implies an empty string. The datatype of a varchar must be string, another varchar, or one of NMTOKEN, NMTOKENS, ID, IDREF, or IDREFS. The names of the intrinsic types are reserved by SOX. It is an error to define a datatype or elementtype using the name of one of the intrinsic types.

Attribute definitions in XML Schema documents may be defined as part of the element type definition. An attribute definition has a name and a type, and must include a presence element. The relevant fragment of the DTD is:

<!ELEMENT attdef (explain?, (enumeration | scalar | varchar)?, (required|implied|default|fixed)?)>
<!ATTLIST attdef
  name NMTOKEN #REQUIRED
  prefix NMTOKEN #IMPLIED
  datatype NMTOKEN #IMPLIED>

The name of the attribute is defined by the value of name, and it may be of a certain datatype. It must be unique among the attributes for this elementtype. If a value is not given for the datatype attribute and the attdef does not contain an enumeration, scalar, or varchar, then a datatype of string is assumed. An attribute value's presence in an instance may be specified as default, fixed, required or implied, as in the XML specification. If no value is given for the presence, then it defaults to implied. In all cases, the value of an attribute must be valid for its datatype.
The DTD fragment for the presence elements is: <!ELEMENT default (#PCDATA) > <!ELEMENT fixed (#PCDATA) > <!ELEMENT required EMPTY > <!ELEMENT implied EMPTY > A default presence indicates that the value of the attribute is automatically set to the default value if none is specified. In an instance, if the attribute is defined to have another value, the default is ignored. A fixed presence indicates that the value of the attribute is assumed to be the fixed value if no value is specified. The attribute may also be explicitly assigned exactly this fixed value. In an instance, if the attribute is defined to have a different value, this signals a fatal error. An empty value can be specified for default or fixed. A required presence indicates that in an instance, whenever the parent element appears, this attribute must be assigned a value. And finally, an implied presence indicates that the application is given the responsibility of filling in a default value if no attribute value is defined in an instance. In XML Schema documents, unlike XML DTDs, enumerations may be specified for any attribute type. This information will be lost if an XML DTD is generated from an XML Schema document, except for attributes of type NMTOKEN , indicating a name token. If there is an enumeration specified for an attribute and also a fixed or default value, then that fixed or default value must be a member of the enumeration . Thus, example attribute definitions (within the definition of an elementtype , of course) might be: <elementtype name="car"> <empty/> <attdef name="owner" datatype="string"/> <attdef name="color"> <enumeration datatype="NMTOKEN"> <option>Red</option> <option>Blue</option> <option>Green</option> </enumeration> <required/> </attdef> </elementtype> An instance corresponding to this definition would look like: <car color="Blue" owner="John Smith"/> It is a fatal error for the datatype attribute of the attdef element to have a value if there is an enclosed enumeration , scalar , or varchar. An externally-defined schema file whose definitions belong to the same namespace may be pulled in and parsed with the current schema definition. This is accomplished using the join construct. The relevant fragment of the DTD is: <!ELEMENT join (explain?)> <!ATTLIST join datatype NMTOKEN #FIXED "schema" public CDATA #IMPLIED system CDATA #REQUIRED> The datatype attribute is fixed as only schemas may be included for now. The public attribute is the public identifier of the file as defined in the XML 1.0 specification. The system attribute is the URI of the file containing the schema definition to be lexically included. The entity manager must resolve this URI. A join ed file is read only once by the parser. The parser determines identity between files by comparison of the values of the system attribute. It is not illegal for two join elements to reference the same URI, but one will be ignored. It is a user error to use two different URIs which ultimately map to the same file. Note that reading a fragment twice will cause an error as the schema processor will create all its definitions twice. The joined file must belong to the same namespace as the joining one, as identified by the URI attribute of the root elements. In the current implementation, both namespace and join elements use URIs to point to external files, and processing a namespace involves retrieving the physical file referenced by the namespace element. 
It is not an error for a join element to reference this file again, however it is a runtime error for that file to actually be retrieved a second time. In other words, no fragment for a schema is to be processed more than once. <!-- ************************************************************* --> <!-- XML Schema DTD --> <!-- PUBLIC "-//Commerce One Inc.//DTD XML Schema 2.0//EN" --> <!-- SYSTEM "schema.dtd" --> <!-- Copyright: Commerce One Inc., 1997, 1998, 1999 Date created: 17 Dec 1997 Date revised: 03 June 1999 Version: 2.0 --> <!-- ************************************************************* --> <!-- ************************************************************* --> <!-- XML Schema ************************************************** --> <!-- ************************************************************* --> <!ENTITY % htmltext SYSTEM "htmltext.ent"> %htmltext; <!ELEMENT schema (intro?, (datatype | elementtype | join | comment | namespace)*) > <!ATTLIST schema prefix NMTOKEN #IMPLIED uri CDATA #REQUIRED soxlang-version NMTOKEN #FIXED "V0.2.2"> <!-- ************************************************************* --> <!-- ELEMENTS **************************************************** --> <!-- ************************************************************* --> <!-- An Element Type definition requires a name. It is defined to extend a named element, as an instance of a named element, as an EMPTY or ANY element with optional attribute definitions, or with a content model with optional attribute definitions. --> <!ELEMENT elementtype (explain?, (extends | ((empty|model), (attdef)*)))> <!ATTLIST elementtype name NMTOKEN #REQUIRED > <!ELEMENT empty EMPTY > <!-- ************************************************************* --> <!-- MODEL ******************************************************* --> <!-- ************************************************************* --> <!ELEMENT model (string|element|choice|sequence)> <!ELEMENT extends (append?, attdef*)> <!ATTLIST extends prefix NMTOKEN #IMPLIED type NMTOKEN #REQUIRED > <!ELEMENT append (element|choice|sequence)+> <!ELEMENT element EMPTY > <!ATTLIST element prefix NMTOKEN #IMPLIED type NMTOKEN #REQUIRED name NMTOKEN #IMPLIED occurs CDATA #IMPLIED > <!ELEMENT string EMPTY > <!ATTLIST string prefix NMTOKEN #IMPLIED datatype NMTOKEN "string" > <!ELEMENT choice ((element|choice|sequence), (element|choice|sequence)+) > <!ATTLIST choice name NMTOKEN #IMPLIED occurs CDATA #IMPLIED > <!ELEMENT sequence ((element|choice|sequence), (element|choice|sequence)+) > <!ATTLIST sequence name NMTOKEN #IMPLIED occurs CDATA #IMPLIED > <!-- replacement for "include" --> <!ELEMENT join (explain?)> <!ATTLIST join datatype NMTOKEN #FIXED "schema" public CDATA #IMPLIED system CDATA #REQUIRED> <!-- ************************************************************* --> <!-- ATTRIBUTES ************************************************** --> <!-- ************************************************************* --> <!-- An attribute definition has a name and datatype, and must have a presence element "required|implied|default|fixed" included. 
It may have a namespace associated with it, or inherit enumeration is supposed to define the domain of acceptable value --> <!ELEMENT attdef (explain?, (enumeration | scalar | varchar)?, (required|implied|default|fixed)?)> <!ATTLIST attdef name NMTOKEN #REQUIRED prefix NMTOKEN #IMPLIED datatype NMTOKEN #IMPLIED > <!ELEMENT default (#PCDATA) > <!ELEMENT fixed (#PCDATA) > <!ELEMENT required EMPTY > <!ELEMENT implied EMPTY > <!-- ************************************************************* --> <!-- DATATYPE **************************************************** --> <!-- ************************************************************* --> <!ELEMENT datatype (explain?, (enumeration|scalar|varchar)) > <!ATTLIST datatype name NMTOKEN #REQUIRED > <!ELEMENT enumeration (explain?, option)+ > <!ATTLIST enumeration prefix NMTOKEN #IMPLIED datatype NMTOKEN #REQUIRED > <!ELEMENT option (#PCDATA)* > <" > <!ELEMENT varchar EMPTY> <!ATTLIST varchar prefix NMTOKEN #IMPLIED datatype NMTOKEN "string" maxlength CDATA #REQUIRED> <!-- ************************************************************* --> <!-- COMMENT ***************************************************** --> <!-- ************************************************************* --> <!ELEMENT comment (#PCDATA)> <!-- Namespaces --> <!ELEMENT namespace (explain?) > <!ATTLIST namespace prefix NMTOKEN #REQUIRED namespace CDATA #REQUIRED > <!-- ************************************************************* --> <!-- HTML Text: SOX uses HTML element types for convenience. --> <!-- ************************************************************* --> <!-- Copyright: Commerce One Systems Inc., 1997, 1998 Date created: 17 Dec 1997 Date revised: 01 Mar 1999 Version: 1.0 --> <!-- ************************************************************* --> <!ENTITY % html.nonheading " table | p | bq | pre | ol | ul | dl" > <!ENTITY % html.text "#PCDATA| a | abbr | b | big | br | cite | code | em | i | img | q | small | span | strike | strong | sub | sup | tt | u " > <!ENTITY % html.heading.text "#PCDATA| a | abbr | b | big | br | cite | code | em | i | img | q | small | span | strike | strong | sub | sup | tt | u " > <!ENTITY % html.block "h1|h2|h3|h4|h5|DT|%html.nonheading;" > <!-- ************************************************************* --> <!-- The intro element type is used to introduce a schema. It contains a general description of the purpose and use of the schema's document type or components. --> <!ELEMENT intro ((%html.block;)+) > <!-- The explain element is use to document a component within a schema. --> <!ELEMENT explain (title?, synopsis?, (%html.block;)+) > <!-- The title is used to give a human readable title to some type name. --> <!ELEMENT title (%html.text;)* > <!-- The synopsis is used to give a purpose or synopsis to the thing being explained. It is a single paragraph of text. 
--> <!ELEMENT synopsis (%html.text;)* > <!-- ************************************************************* --> <!ELEMENT h1 (%html.heading.text;)* > <!ELEMENT h2 (%html.heading.text;)* > <!ELEMENT h3 (%html.heading.text;)* > <!ELEMENT h4 (%html.heading.text;)* > <!ELEMENT h5 (%html.heading.text;)* > <!ELEMENT DT (%html.heading.text;)* > <!-- ************************************************************* --> <!ELEMENT b (#PCDATA)* > <!ELEMENT br EMPTY > <!ELEMENT big (#PCDATA)* > <!ELEMENT i (#PCDATA)* > <!ELEMENT small (#PCDATA)* > <!ELEMENT sub (#PCDATA)* > <!ELEMENT sup (#PCDATA)* > <!ELEMENT strike (#PCDATA)* > <!ELEMENT tt (#PCDATA)* > <!ELEMENT u (#PCDATA)* > <!ELEMENT abbr (#PCDATA)* > <!ELEMENT cite (#PCDATA)* > <!ELEMENT code (#PCDATA)* > <!ELEMENT em (#PCDATA)* > <!ELEMENT q (#PCDATA)* > <!ELEMENT span (#PCDATA)* > <!ELEMENT strong (#PCDATA)* > <!-- ************************************************************* --> <!ELEMENT a (%html.text;)* > <!ATTLIST a name CDATA #IMPLIED href CDATA #IMPLIED title CDATA #IMPLIED > <!-- ************************************************************* --> <!ELEMENT img (explain?) > <!ATTLIST img src CDATA #REQUIRED alt CDATA #REQUIRED longdesc CDATA #IMPLIED usemap CDATA #IMPLIED > <!-- ************************************************************* --> <!ELEMENT pre (%html.text;)* > <!ATTLIST pre xml:space (preserve) #REQUIRED > <!-- ************************************************************* --> <!ELEMENT p (%html.text;)* > <!ELEMENT bq (%html.text;)* > <!ELEMENT ol (lh?, li+) > <!ELEMENT ul (lh?, li+) > <!ELEMENT lh (%html.heading.text;)* > <!ELEMENT li (%html.text;|%html.block;)* > <!ELEMENT dl (dh?,(dt,dd)+) > <!ELEMENT dh (%html.heading.text;)* > <!ELEMENT dt (%html.text;|%html.block;)* > <!ELEMENT dd (%html.text;|%html.block;)* > <!-- ************************************************************* --> <!ELEMENT table (thead?, tbody) > <!ATTLIST table cols CDATA #IMPLIED width CDATA #IMPLIED height CDATA #IMPLIED align (left|center|right|justify) #IMPLIED valign (top | middle | bottom | baseline) #IMPLIED vspace CDATA #IMPLIED hspace CDATA #IMPLIED cellpadding CDATA #IMPLIED cellspacing CDATA #IMPLIED border CDATA #IMPLIED frame (box|void|above|below|hsides|vsides|lhs|rhs) #IMPLIED rules (none|groups|rows|cols|all) #IMPLIED > <!ELEMENT thead (tr)+ > <!ATTLIST thead align (left|center|right|justify) #IMPLIED valign (top|middle|bottom|baseline) #IMPLIED > <!ELEMENT tbody (tr)+ > <!ATTLIST tbody align (left|center|right|justify) #IMPLIED valign (top|middle|bottom|baseline) #IMPLIED > <!ELEMENT tr (th | td)+ > <!ATTLIST tr align (left|center|right|justify) #IMPLIED valign (top | middle | bottom | baseline) #IMPLIED > <!ELEMENT th (%html.text;|%html.block;)* > <!ATTLIST th colspan CDATA #IMPLIED rowspan CDATA #IMPLIED width CDATA #IMPLIED height CDATA #IMPLIED align (left|center|right|justify) #IMPLIED valign (top | middle | bottom | baseline) #IMPLIED > <!ELEMENT td (%html.text;|%html.block;)* > <!ATTLIST td colspan CDATA #IMPLIED rowspan CDATA #IMPLIED width CDATA #IMPLIED height CDATA #IMPLIED align (left|center|right|justify) #IMPLIED valign (top | middle | bottom | baseline) #IMPLIED > <!-- ************************************************************* -->
http://www.w3.org/TR/NOTE-SOX/
CC-MAIN-2015-06
en
refinedweb
I am completely convinced that the patch you are using (which I believe is the same as the one Andrew calls "tty-shutdown-race-fix.patch") is the problem. What happens is that release_dev() in tty_io.c calls cancel_delayed_work(), which calls del_timer_sync() without decrementing nr_queued for keventd_wq. When flush_scheduled_work() gets called it sleeps on the work_done waitqueue. The only place work_done gets woken up is in run_workqueue, and it only happens if atomic_dec_and_test(&cwq->nr_queued) decrements nr_queued to 0. But after calling cancel_delayed_work(), that can never happen (we deleted the timer that was going to add the work that we're waiting for).

It seems to me that the implementation of cancel_delayed_work() is not quite right. We need to decrement nr_queued if we actually stopped the work from being added to the workqueue.

Andrew, I've never seen a reply from you about this, can you tell me if I'm missing something here?

By the way, I assume that the process below is the one that's hung:

bash          D C04CDC68  4233453816  8524      1          8387 (L-TLB)
Call Trace:
 [<c010cd75>] do_IRQ+0x235/0x370
 [<c01394a5>] flush_workqueue+0x305/0x450
 [<c010ac18>] common_interrupt+0x18/0x20
 [<c011de30>] default_wake_function+0x0/0x20
 [<c011de30>] default_wake_function+0x0/0x20
 [<c0257a44>] release_dev+0x6a4/0x860
 [<c01566ab>] zap_pmd_range+0x4b/0x70
 [<c0258204>] tty_release+0x94/0x1b0
 [<c016dd7c>] __fput+0xac/0x100
 [<c0258170>] tty_release+0x0/0x1b0

It would seem to be stuck in the flush_workqueue() called from release_dev(), just as I would expect.

Shawn, can you try the patch below instead of Andrew's ttyfix2?

 - Roland

===== drivers/char/tty_io.c 1.72 vs edited =====
--- 1.72/drivers/char/tty_io.c	Thu Apr  3 10:20:22 2003
+++ edited/drivers/char/tty_io.c	Tue Apr  8 20:23:44 2003
@@ -1286,8 +1286,15 @@
 	}

 	/*
-	 * Make sure that the tty's task queue isn't activated.
+	 * Prevent flush_to_ldisc() from rescheduling the work for later. Then
+	 * kill any delayed work.
 	 */
+	clear_bit(TTY_DONT_FLIP, &tty->flags);
+	cancel_delayed_work(&tty->flip.work);
+
+	/*
+	 * Wait for ->hangup_work and ->flip.work handlers to terminate
+	 */
 	flush_scheduled_work();

 	/*
===== include/linux/workqueue.h 1.4 vs edited =====
--- 1.4/include/linux/workqueue.h	Mon Nov  4 13:12:06 2002
+++ edited/include/linux/workqueue.h	Tue Apr  8 20:42:41 2003
@@ -63,5 +63,12 @@

 extern void init_workqueues(void);

+/*
+ * Kill off a pending schedule_delayed_work(). Note that the work callback
+ * function may still be running on return from cancel_delayed_work(). Run
+ * flush_scheduled_work() to wait on it.
+ */
+extern int cancel_delayed_work(struct work_struct *work);
+
 #endif
===== kernel/workqueue.c 1.6 vs edited =====
--- 1.6/kernel/workqueue.c	Tue Feb 11 14:57:54 2003
+++ edited/kernel/workqueue.c	Tue Apr  8 20:27:50 2003
@@ -125,6 +125,24 @@
 	return ret;
 }

+int cancel_delayed_work(struct work_struct *work)
+{
+	struct cpu_workqueue_struct *cwq = work->wq_data;
+	int ret;
+
+	ret = del_timer_sync(&work->timer);
+	if (ret) {
+		/*
+		 * Wake up 'work done' waiters (flush) if we just
+		 * removed the last thing on the workqueue.
+		 */
+		if (atomic_dec_and_test(&cwq->nr_queued))
+			wake_up(&cwq->work_done);
+
+	}
+
+	return ret;
+}
+
 static inline void run_workqueue(struct cpu_workqueue_struct *cwq)
 {
 	unsigned long flags;
@@ -378,5 +396,5 @@

 EXPORT_SYMBOL(schedule_work);
 EXPORT_SYMBOL(schedule_delayed_work);
+EXPORT_SYMBOL(cancel_delayed_work);
 EXPORT_SYMBOL(flush_scheduled_work);
http://lkml.org/lkml/2003/4/9/2
CC-MAIN-2015-06
en
refinedweb
Sentiment analysis is becoming a popular area of research and social media analysis, especially around user reviews and tweets. It is a special case of text mining generally focused on identifying opinion polarity, and while it's often not very accurate, it can still be useful. For simplicity (and because the training data is easily accessible) I'll focus on 2 possible sentiment classifications: positive and negative.

NLTK Naive Bayes Classification

NLTK comes with all the pieces you need to get started on sentiment analysis: a movie reviews corpus with reviews categorized into pos and neg categories, and a number of trainable classifiers. We'll start with a simple NaiveBayesClassifier as a baseline, using boolean word feature extraction.

Bag of Words Feature Extraction

All of the NLTK classifiers work with featstructs, which can be simple dictionaries mapping a feature name to a feature value. For text, we'll use a simplified bag of words model where every word is a feature name with a value of True. Here's the feature extraction method:

	def word_feats(words):
	    return dict([(word, True) for word in words])

Training Set vs Test Set and Accuracy

The movie reviews corpus has 1000 positive files and 1000 negative files. We'll use 3/4 of them as the training set, and the rest as the test set. This gives us 1500 training instances and 500 test instances. The classifier training method expects to be given a list of tokens in the form of [(feats, label)] where feats is a feature dictionary and label is the classification label. In our case, feats will be of the form {word: True} and label will be one of 'pos' or 'neg'. For accuracy evaluation, we can use nltk.classify.util.accuracy with the test set as the gold standard.

Training and Testing the Naive Bayes Classifier

Here's the complete python code for training and testing a Naive Bayes Classifier on the movie review corpus (a reconstructed full listing appears at the end of this post; only the opening and closing lines are shown here):

	import nltk.classify.util
	...
	print 'accuracy:', nltk.classify.util.accuracy(classifier, testfeats)
	classifier.show_most_informative_features()

And the output is:

	train on 1500 instances, test on 500 instances
	accuracy: 0.728

As you can see, the 10 most informative features are, for the most part, highly descriptive adjectives. The only 2 words that seem a bit odd are "vulnerable" and "avoids". Perhaps these words refer to important plot points or character development that signify a good movie. Whatever the case, with simple assumptions and very little code we're able to get almost 73% accuracy. This is somewhat near human accuracy, as apparently people agree on sentiment only around 80% of the time.

Future articles in this series will cover precision & recall metrics, alternative classifiers, and techniques for improving accuracy.
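For reference, here is a self-contained version of the script described above. It is a reconstruction, not the original listing: the corpus-loading and train/test-split code in the middle is re-created from the prose (movie_reviews corpus, one bag-of-words feature dictionary per review file, 3/4 of each class for training), using NLTK's corpus API and the Python 2 print syntax the post uses:

	import nltk.classify.util
	from nltk.classify import NaiveBayesClassifier
	from nltk.corpus import movie_reviews

	def word_feats(words):
	    return dict([(word, True) for word in words])

	negids = movie_reviews.fileids('neg')
	posids = movie_reviews.fileids('pos')

	# one labeled feature dictionary per review file
	negfeats = [(word_feats(movie_reviews.words(fileids=[f])), 'neg') for f in negids]
	posfeats = [(word_feats(movie_reviews.words(fileids=[f])), 'pos') for f in posids]

	# 3/4 of each class for training, the rest for testing
	negcutoff = len(negfeats) * 3 / 4
	poscutoff = len(posfeats) * 3 / 4

	trainfeats = negfeats[:negcutoff] + posfeats[:poscutoff]
	testfeats = negfeats[negcutoff:] + posfeats[poscutoff:]
	print 'train on %d instances, test on %d instances' % (len(trainfeats), len(testfeats))

	classifier = NaiveBayesClassifier.train(trainfeats)
	print 'accuracy:', nltk.classify.util.accuracy(classifier, testfeats)
	classifier.show_most_informative_features()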
http://streamhacker.com/2010/05/10/text-classification-sentiment-analysis-naive-bayes-classifier/comment-page-2/
CC-MAIN-2015-06
en
refinedweb
Template:Construction From Uncyclopedia, the content-free encyclopedia Note: Pages with this template are added to Category:Work In Progress, unless they are in the user namespace. Note 2: Electric Boogaloo: This template will not save your page if it is in violation of the vanity policies, etc, or if it is the only thing on the page or the article is short and crap (such as 3 lines long).
http://uncyclopedia.wikia.com/wiki/Template:Construction?oldid=5254973
CC-MAIN-2015-06
en
refinedweb
BeanShell Scripting

From Fiji

BeanShell is the scripting language in Fiji which is similar both to the ImageJ macro language and to Java. In fact, you can even execute almost verbatim Java code, but the common case is to write scripts, i.e. leave out all the syntactic sugar to make your code part of a class. BeanShell also does not require strict typing (read: you do not need to declare variables with types), making it easy to turn prototype code into proper Java after seeing that the code works.

Quickstart

If you are already familiar with Java or the macro language, the syntax of Beanshell will be familiar to you. The obligatory Hello, World! example:

	// This prints 'Hello, World!' to the output area
	print("Hello, World!");

Variables can be assigned values:

	someName = 1;
	someName = "Hello";

Variables are not strongly typed in BeanShell by default; if you use a variable name without specifying the type of it, you can assign anything to it. Optionally, you can declare variables with a data type, in which case the type is enforced:

	String s;
	s = 1; // this fails

Note: The builtin functions of the ImageJ Macro language are not available in Beanshell.

Syntax

Variables

- A variable is a placeholder for a changing entity
- Each variable has a name
- Each variable has a value
- Values can be any data type (numeric, text, etc)
- Variables can be assigned new values

Set variables' values

Variables are assigned a value by statements of the form name = value ended by a semicolon. The value can be an expression.

	intensity = 255;
	value = 2 * 8 + 1;
	title = "Hello, World!";
	text = "title";

Using variables

You can set a variable's value to the value of a second variable:

	text = title;

Note that the variable name on the left hand side of the equal sign refers to the variable itself (not its value), but on the right hand side, the variable name refers to the current value stored in the variable. As soon as the variables are assigned a new value, they simply forget the old value:

	x = y;
	y = x;

After the first statement, x took on the value of y, so that the second statement does not change the value of y. The right hand side of an assignment can contain complicated expressions:

	x = y * y - 2 * y + 3;

Note that the right hand side needs to be evaluated first before the value is assigned to the variable:

	intensity = intensity * 2;

This statement just doubled the value of intensity.

Comments

It is important to use comments in your source code, not only for other people to understand the intent of the code, but also for yourself, when you come back to your code in 6 months. Comments look like this:

	// This is a comment trying to help you to
	// remember what you meant to do here:
	a = Math.exp(x * Math.sin(y)) + Math.atan(x * y - a);

You should not repeat the code in English, but describe the important aspects not conveyed by the code. For example, there might be a bug requiring a workaround, and you might want to explain that in the comments (lest you try to "fix" the workaround). Comments are often abused to disable code, such as debugging statements:

	// x = 10; // hard-code x to 10 for now, just for debugging

If you have a substantial amount of things to say in a comment, you might use multi-line comments:

	/*
	Multi-line comments can be started by a slash followed by a star,
	and their end is marked by a star followed by a slash:
	*/

Further reading

For more information, see BeanShell's Quickstart page.

Tips
You can source scripts (i.e. interpret another script before continuing to interpret the current script) using this line:

	this.interpreter.source("the-other-script.bsh");

Examples

Add CIEL*a*b numbers to the status bar

If your monitor is calibrated to sRGB, there is an easy way to also display the L, a and b values in the status bar:

	import color.CIELAB;
	import java.awt.Label;
	import java.awt.event.KeyEvent;
	import java.awt.event.KeyListener;
	import java.util.regex.Matcher;
	import java.util.regex.Pattern;

	// IJ1's API does not offer all I want
	setAccessibility(true);

	// press Escape on the Fiji window to stop it
	class Add_CIELab_to_Status extends Thread implements KeyListener {
		protected ImageJ ij;
		protected Label status;
		protected Pattern pattern = Pattern.compile("^.* value=([0-9]*),([0-9]*),([0-9]*)$");
		protected float[] lab, rgb;

		public Add_CIELab_to_Status() {
			ij = IJ.getInstance();
			status = ij.statusLine;
			ij.addKeyListener(this);
			lab = new float[3];
			rgb = new float[3];
		}

		public void run() {
			try {
				for (;;) {
					String text = status.getText();
					Matcher matcher = pattern.matcher(text);
					if (matcher.matches()) {
						for (int i = 0; i < 3; i++)
							rgb[i] = Float.parseFloat(matcher.group(i + 1)) / 255;
						CIELAB.sRGB2CIELAB(rgb, lab);
						status.setText(text + ", L=" + IJ.d2s(lab[0], 2) + ",a*=" + IJ.d2s(lab[1], 3) + ",b*=" + IJ.d2s(lab[2], 3));
					}
					Thread.sleep(5);
				}
			} catch (InterruptedException e) {}
		}

		public void keyPressed(KeyEvent e) {
			if (e.getKeyCode() == KeyEvent.VK_ESCAPE) {
				ij.removeKeyListener(this);
				interrupt();
			}
		}

		public void keyReleased(KeyEvent e) {}
		public void keyTyped(KeyEvent e) {}
	}

	new Add_CIELab_to_Status().start();

This example starts a new thread (make sure to implement the run() method but actually call the start() method!) which polls the status bar. It also registers itself as a key listener so it can stop the process when the user hits the Escape key when the main window is in focus.
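A tiny illustration of the sourcing tip above — the file name and the function it defines are invented for the example, not part of Fiji:

	// Suppose helpers.bsh (a hypothetical file next to this script) defines:
	//   scale(x, factor) { return x * factor; }

	// Sourcing it makes its methods and variables available here:
	this.interpreter.source("helpers.bsh");
	print(scale(21, 2)); // prints 42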
http://fiji.sc/wiki/index.php/Beanshell_Scripting
CC-MAIN-2015-06
en
refinedweb
Patent application title: Methods and Apparatus for Scalable Array Processor Interrupt Detection and Response
Inventors: Edwin Frank Barry (Vilas, NC, US) Patrick R. Marchand (Apex, NC, US) Gerald G. Pechanek (Cary, NC, US) Larry D. Larsen (Raleigh, NC, US)
Assignees: Altera Corporation
IPC8 Class: AG06F938FI
USPC Class: 712205
Class name: Electrical computers and digital processing systems: processing architectures and instruction processing (e.g., processors) instruction fetching
Publication date: 2012-07-05
Patent application number: 20120173849
Abstract: Hardware and software techniques for interrupt detection and response in a scalable pipelined array processor environment are described. Utilizing these techniques, a sequential program execution model with interrupts can be maintained in a highly parallel scalable pipelined array processor containing multiple processing elements and distributed memories and register files. When an interrupt occurs, interface signals are provided to all PEs to support independent interrupt operations in each PE dependent upon the local PE instruction sequence prior to the interrupt. Processing element exception interrupts are supported and low latency interrupt processing is also provided for embedded systems where real time signal processing is required. Further, a hierarchical interrupt structure is used allowing a generalized debug approach using debug interrupts and a dynamic debug monitor mechanism.
Claims: 1-20. (canceled) 21. A hardware system providing array conditional execution comprising: a sequence processor (SP) controller coupled to an array of two or more processing elements (PEs), wherein the SP controller is configured to distribute a first instruction and then a conditional execute instruction to each PE of the array of two or more PEs; a program settable set condition code (SetCC) register in each PE comprising SetCC bits that are configured to specify a condition as a combination of internal PE flags that are locally set resulting from execution on a functional unit in each PE; a condition generation unit (CGU) in each PE configured to set the condition in a conditional execute flag in response to execution of the first instruction on the functional unit in each PE; and conditional execution control logic in each PE configured to control the functional unit in each PE to execute the conditional execute instruction based on a determination that a state of the conditional execute flag matches a specified state encoded in the conditional execute instruction. 22. The hardware system of claim 21, wherein one or more instructions that do not affect the condition on each PE are distributed for execution to each PE after the first instruction is distributed to each PE and before the conditional execute instruction is distributed to each PE. 23. The hardware system of claim 21 further comprising: SetCC decode logic configured to decode the SetCC bits to determine the combination of internal PE flags. 24. The hardware system of claim 21, wherein the conditional execute instruction is conditionally executed on the array in parallel on the two or more PEs. 25. The hardware system of claim 21, wherein the internal PE flags comprise a carry flag, a negative flag, an overflow flag, and a zero flag determined for each execution operation on the functional unit in each PE. 26.
The hardware system of claim 25, wherein each execution operation is associated with a data type specified by the conditional execute instruction. 27. The hardware system of claim 21, wherein the conditional execute flag is a first conditional execute flag selected from a plurality of conditional execute flags and the first instruction is encoded with a first specification to set the first conditional execute flag. 28. The hardware system of claim 21, wherein the conditional execute flag is a first conditional execute flag selected from a plurality of conditional execute flags and the first instruction is encoded with a second specification to individually set each conditional execute flag of the plurality of conditional execute flags, wherein each conditional execute flag is associated with a different data element of a packed data operation. 29. The hardware system of claim 21, wherein the specified state encoded in the conditional execute instruction is a true state indicating the conditional execute instruction is to be executed if the state of the conditional execute flag is true. 30. A method for array conditional execution, the method comprising: distributing a first instruction and then a conditional execute instruction from a sequence processor (SP) controller to each processing element (PE) of an array of two or more PEs; loading set condition code (SetCC) bits in a program settable SetCC register in each PE, wherein the SetCC register is configured to specify a condition as a combination of internal PE flags that are locally resulting from execution on a functional unit in each PE; executing the first instruction on the function unit in each PE to set the condition in a conditional execute flag; and executing the conditional execute instruction based on a determination that a state of the conditional execute flag matches a specified state encoded in the conditional execute instruction. 31. The method of claim 30, wherein the conditional execute flag comprises a plurality of data element conditional execute flags, wherein each data element conditional execute flag corresponds with a data element of a packed data operation, and wherein the first instruction is encoded with a specification to individually enable the setting of the data element conditional execute flags according to the number of data element operations specified by the first instruction and whether a condition specified by the first instruction occurs during execution of the corresponding data element operation. 32. The method of claim 31 further comprising: operating on each data element corresponding to a conditional execute flag of the multiple conditional execute flags which has a state that matches a specified state encoded in the conditional execute instruction. 33. The method of claim 32, wherein the specified state encoded in the conditional execute instruction is a false state indicating the conditional execute instruction is to execute on each data element having a corresponding conditional execute flag in the false state. 34. The method of claim 30 further comprising: distributing to each PE one or more instructions for execution that do not affect the condition on each PE after the first instruction and before the conditional execute instruction. 35. The method of claim 30, wherein the SetCC register specifies exceptions to be detected in function units in each PE and once detected forwarded to the SP controller. 36. 
A method for array conditional execution, the method comprising: receiving a first instruction and then a conditional execute instruction in a first processing element (PE) and a second PE of an array of two or more PEs; executing the first instruction on a function unit in each PE to set a conditional execute indication according to a combination of internal PE flags that are locally set as a side effect of executing the first instruction; and executing the conditional execute instruction based on a determination that a state of the conditional execute indication matches a specified state encoded in the conditional execute instruction. 37. The method of claim 36, wherein the combination of internal PE flags is specified by a bit field in a program loaded set condition code (SetCC) register. 38. The method of claim 36, wherein the first instruction is a packed data instruction and the conditional execute indication is a set of conditional execute flags that corresponds to a set of packed data operations specified by the first instruction, wherein each conditional execute flag is associated with a different data element of the packed data operation. 39. The method of claim 38 further comprising: operating on each data element corresponding to a conditional execute flag of the set of conditional execute flags which has a state that matches a specified state encoded in the conditional execute instruction. 40. The method of claim 38, wherein the combination of internal PE flags is a plurality of separate combinations of internal PE flags each of the separate combinations associated with internal PE flags that are locally set as a side effect of operating on each data element of the packed data operation. Description: [0001] The present application is a divisional of U.S. application Ser. No. 12/956,316 filed Nov. 30, 2010 which is a divisional of U.S. application Ser. No. 12/120,543 filed May 14, 2008 which is a divisional of U.S. application Ser. No. 10/931,751 filed Sep. 1, 2004 which is a divisional of U.S. application Ser. No. 09/791,256 filed Feb. 23, 2001 and claims the benefit of U.S. Provisional Application Ser. No. 60/184,529 filed Feb. 24, 2000 which is incorporated by reference herein in its entirety. FIELD OF THE INVENTION [0002] The present invention relates generally to improved techniques for interrupt detection and response in a scalable pipelined array processor. More particularly, the present invention addresses methods and apparatus for such interrupt detection and response in the context of highly parallel scalable pipeline array processor architectures employing multiple processing elements, such as the manifold array (ManArray) architecture. BACKGROUND OF THE INVENTION [0003] The typical architecture of a digital signal processor is based upon a sequential model of instruction execution that keeps track of program instruction execution with a program counter. When an interrupt is acknowledged in this model, the normal program flow is interrupted and a branch to an interrupt handler typically occurs. After the interrupt is handled, a return from the interrupt handler occurs and the normal program flow is restarted. This sequential model must be maintained in pipelined processors even when interrupts occur that modify the normal sequential instruction flow. 
The sequential model of instruction execution is used in the advanced indirect very long instruction word (iVLIW) scalable ManArray processor even though multiple processor elements (PEs) operate in parallel each executing up to five packed data instructions. The ManArray family of core processors provides multiple cores 1×1, 1×2, 2×2, 2×4, 4×4, and so on that provide different performance characteristics depending upon the number of and type of PEs used in the cores. [0004] Each PE typically contains its own register file and local PE memory, resulting in a distributed memory and distributed register file model. Each PE, if not masked off, executes instructions in synchronism and in a sequential flow as dictated by the instruction sequence fetched by a sequence processor (SP) array controller. The SP controls the fetching of the instructions that are sent to all the PEs. This sequential instruction flow must be maintained across all the PEs even when interrupts are detected in the SP that modify the instruction sequence. The sequence of operations and machine state must be the same whether an interrupt occurs or not. In addition, individual PEs can cause errors which can be detected and reported by a distributed interrupt mechanism. In a pipelined array processor, determining which instruction, which PE, and which data element in a packed data operation may have caused an exception type of interrupt is a difficult task. [0005] In developing complex systems and debugging of complex programs, it is important to provide mechanisms that control instruction fetching, provide single-step operation, monitor for internal core and external core events, provide the ability to modify registers, instruction memory, VLIW memory (VIM), and data memory, and provide instruction address and data address eventpoints. There are two standard approaches to achieving the desired observability and controllability of hardware for debug purposes. [0006] One approach involves the use of scan chains and clock-stepping, along with a suitable hardware interface, possibly via a joint test action group (JTAG) interface, to a debug control module that supports basic debug commands. This approach allows access on a cycle by cycle basis to any resources included in the scan chains, usually registers and memory. It relies on the library/process technology to support the scan chain insertion and may change with each implementation. [0007] The second approach uses a resident debug monitor program, which may be linked with an application or reside in on-chip read only memory ROM. Debug interrupts may be triggered by internal or external events, and the monitor program then interacts with an external debugger to provide access to internal resources using the instruction set of the processor. [0008] It is important to note that the use of scan chains is a hardware intensive approach which relies on supporting hardware external to the core processor to be available for testing and debug. In a system-on-chip (SOC) environment where processing cores from one company are mixed with other hardware functions, such as peripheral interfaces possibly from other companies, requiring specialized external hardware support for debug and development reasons is a difficult approach. In the second approach described above, requiring the supporting debug monitor program be resident with an application or in an on-chip ROM is also not desirable due to the reduction in the application program space. 
[0009] Thus, it is recognized that it will be highly advantageous to have a multiple-PE synchronized interrupt control and a dynamic debug monitor mechanism provided in a scalable processor family of embedded cores based on a single architecture model that uses common tools to support software configurable processor designs optimized for performance, power, and price across multiple types of applications using standard application specific integrated circuit (ASIC) processes as discussed further below. SUMMARY OF THE INVENTION [0010] In one aspect of the present invention, a manifold array (ManArray) architecture is adapted to employ the present invention to solve the problem of maintaining the sequential program execution model with interrupts in a highly parallel scalable pipelined array processor containing multiple processing elements and distributed memories and register files. In this aspect, PE exception interrupts are supported and low latency interrupt processing is provided for embedded systems where real time signal processing is required. In addition, the interrupt apparatus proposed here provides debug monitor functions that allow for a debug operation without a debug monitor program being loaded along with or prior to loading application code. This approach provides a dynamic debug monitor, in which the debug monitor code is dynamically loaded into the processor and executed on any debug event that stops the processor, such as a breakpoint or "stop" command. The debug monitor code is unloaded when processing resumes. This approach may also advantageously include a static debug monitor as a subset of its operation and it also provides some of the benefits of fully external debug control which is found in the scan chain approach. [0011] Various further aspects of the present invention include effective techniques for synchronized interrupt control in the multiple PE environment, interruptible pipelined 2-cycle instructions, and condition forwarding techniques allowing interrupts between instructions. Further, techniques for address interrupts which provide a range of addresses on a master control bus (MCB) to which mailbox data may be written, with each address able to cause a different maskable interrupt, are provided. Further, special fetch control is provided for addresses in an interrupt vector table (IVT) which allows fetch to occur from within the memory at the specified address, or from a general coprocessor instruction port, such as the debug instruction register (DBIR) at interrupt vector 1 of the Manta implementation of the ManArray architecture, by way of example. [0012] These and other advantages of the present invention will be apparent from the drawings and the Detailed Description which follow. BRIEF DESCRIPTION OF THE DRAWINGS [0013] FIG. 1 illustrates a ManArray 2×2 iVLIW processor which can suitably be employed with this invention; [0014] FIG. 2A illustrates an exemplary encoding and syntax/operation table for a system call interrupt (SYSCALL) instruction in accordance with the present invention; [0015] FIG. 2B illustrates a four mode interrupt transition state diagram; [0016] FIG. 3 illustrates external and internal interrupt requests to and output from a system interrupt select unit in accordance with the present invention; [0017] FIG. 4 illustrates how a single general purpose interrupt (GPI) bit of an interrupt request register (IRR) is generated in accordance with the present invention; [0018] FIG.
5 illustrates how a non maskable interrupt bit in the IRR is generated from an OR of its sources; [0019] FIG. 6 illustrates how a debug interrupt bit in the IRR is generated from an OR of its sources; [0020] FIG. 7 illustrates an exemplary interrupt vector table (IVT) which may suitably reside in instruction memory; [0021] FIG. 8 illustrates a SYSCALL instruction vector mapping in accordance with the present invention; [0022] FIG. 9 illustrates the registers involved in interrupt processing; [0023] FIG. 10A illustrates a sliding interrupt processing pipeline diagram; [0024] FIG. 10B illustrates interrupt forwarding registers used in the SP and all PEs; [0025] FIG. 10C illustrates pipeline flow when an interrupt occurs and the saving of flag information in saved status registers (SSRs); [0026] FIG. 10D illustrates pipeline flow for single cycle short instruction words when a user mode program is preempted by a GPI; [0027] FIG. 11 illustrates a CE3c encoding description for 3-bit conditional execution; [0028] FIG. 12 illustrates a CE2b encoding description for 2-bit conditional execution; [0029] FIG. 13 illustrates a status and control register 0 (SCR0) bit placement; [0030] FIG. 14A illustrates a SetCC register 5-bit encoding description for conditional execution and PE exception interrupts; [0031] FIG. 14B illustrates a SetCC register 5-bit encoding description for conditional execution and PE exception interrupts; [0032] FIG. 15 illustrates an alternative implementation for a PE exception interface to the SP; [0033] FIG. 16 illustrates an alternative implementation for PE address generation for a PE exception interface to the SP; [0034] FIG. 17 illustrates aspects of an interrupt vector table for use in conjunction with the present invention; [0035] FIG. 18 illustrates aspects of the utilization of a debug instruction register (DBIR); [0036] FIG. 19 illustrates aspects of the utilization of DSP control register (DSPCTL); [0037] FIG. 20 illustrates aspects of the utilization of a debug status register (DBSTAT); [0038] FIGS. 21 and 22 illustrate aspects of the utilization of a debug-data-out (DBDOUT) and debug-data-in (DBDIN) register, respectively; and [0039] FIG. 23 illustrates aspects of an exemplary DSP ManArray residing on an MCB and ManArray data bus (MDB). DETAILED DESCRIPTION [0040] Further details of a presently preferred ManArray core, architecture, and instructions for use in conjunction with the present invention are found in: [0041] U.S. Pat. No. 6,023,753; [0042] U.S. Pat. No. 6,167,502; [0043] U.S. Pat. No. 6,343,356; [0044] U.S. Pat. No. 6,167,501; [0045] U.S. Pat. No. 6,219,776; [0046] U.S. Pat. No. 6,151,668; [0047] U.S. Pat. No. 6,173,389; [0048] U.S. Pat. No. 6,216,223; [0049] U.S. Pat. No. 6,366,999; [0050] U.S. Pat. No. 6,446,190; [0051] U.S. Pat. No. 6,356,994; [0052] U.S. Pat. No. 6,408,382; [0053] U.S. Pat. No. 6,697,427; [0054] U.S. Pat. No. 6,260,082; [0055] U.S. Pat. No. 6,256,683; [0056] U.S. Pat. No. 6,397,324; [0057] U.S. patent application Ser. No. 09/598,567 entitled "Methods and Apparatus for Improved Efficiency in Pipeline Simulation and Emulation" filed Jun. 21, 2000; [0058] U.S. Pat. No. 6,622,234; [0059] U.S. Pat. No. 6,735,690; [0060] U.S. Pat. No. 6,654,870; [0061] U.S. patent application Ser. No. 09/599,980 entitled "Methods and Apparatus for Parallel Processing Utilizing a Manifold Array (ManArray) Architecture and Instruction Syntax" filed Jun. 22, 2000; [0062] U.S. patent application Ser. No. 
09/791,940 entitled "Methods and Apparatus for Providing Bit-Reversal and Multicast Functions Utilizing DMA Controller" filed Feb. 23, 2001; and [0063] U.S. patent application Ser. No. 09/792,819 entitled "Methods and Apparatus for Flexible Strength Coprocessing Interface" filed Feb. 23, 2001; [0064] all of which are assigned to the assignee of the present invention and incorporated by reference herein in their entirety. [0065] In a presently preferred embodiment of the present invention, a ManArray 2×2 iVLIW single instruction multiple data stream (SIMD) processor 100 as shown in FIG. 1 may be adapted as described further below for use in conjunction with the present invention. Processor 100 comprises a sequence processor (SP) controller combined with a processing element-0 (PE0) to form an SP/PE0 combined unit 101, as described in further detail in U.S. patent application Ser. No. 09/169,072 entitled "Methods and Apparatus for Dynamically Merging an Array Controller with an Array Processing Element". Three additional PEs 151, 153, and 155 are also utilized to demonstrate the apparatus for scalable array processor interrupt detection and response mechanism. It is noted that the PEs can be also labeled with their matrix positions as shown in parentheses for PE0 (PE00) 101, PE1 (PE01) 151, PE2 (PE10) 153, and PE3 (PE11) 155. The SP/PE0 101 contains an instruction fetch (I-fetch) controller 103 to allow the fetching of short instruction words (SIW) or abbreviated-instruction words from a B-bit instruction memory 105, where B is determined by the application instruction-abbreviation process to be a reduced number of bits representing ManArray native instructions and/or to contain two or more abbreviated instructions as further described in U.S. patent application Ser. No. 09/422,015 filed Oct. 21, 1999 and incorporated by reference herein in its entirety. If an instruction abbreviation apparatus is not used then B is determined by the SIW format. The fetch controller 103 provides the typical functions needed in a programmable processor, such as a program counter (PC), a branch capability, eventpoint loop operations (see U.S. Provisional Application Ser. No. 60/140,245 entitled "Methods and Apparatus for Generalized Event Detection and Action Specification in a Processor" filed Jun. 21, 1999 for further details), and support for interrupts. It also provides the instruction memory control which could include an instruction cache if needed by an application. In addition, the I-fetch controller 103 dispatches instruction words and instruction control information to the other PEs in the system by means of a D-bit instruction bus 102. D is determined by the implementation, which for the exemplary ManArray coprocessor D=32-bits. The instruction bus 102 may include additional control signals as needed in an abbreviated-instruction translation apparatus. [0066] In this exemplary system 100, common elements are used throughout to simplify the explanation, though actual implementations are not limited to this restriction. For example, the execution units 131 in the combined SP/PE0 101 can be separated into a set of execution units optimized for the control function, for example, fixed point execution units in the SP, and the PE0 as well as the other PEs can be optimized for a floating point application. For the purposes of this description, it is assumed that the execution units 131 are of the same type in the SP/PE0 and the PEs. 
In a similar manner, SP/PE0 and the other PEs use a five instruction slot iVLIW architecture which contains a VLIW memory (VIM) 109 and an instruction decode and VIM controller functional unit 107 which receives instructions as dispatched from the SP/PE0's I-fetch unit 103 and generates VIM addresses and control signals 108 required to access the iVLIWs stored in the VIM. Referenced instruction types are identified by the letters SLAMD in VIM 109, where the letters are matched up with instruction types as follows: Store (S), Load (L), Arithmetic Logic Unit or ALU (A), Multiply Accumulate Unit or MAU (M), and Data Select Unit or DSU (D). [0067] The basic concept of loading the iVLIWs is described in more detail in U.S. patent application Ser. No. 09/187,539 entitled "Methods and Apparatus for Efficient Synchronous MIMD Operations with iVLIW PE-to-PE Communication". Also contained in the SP/PE0 and the other PEs is a common PE configurable register file 127 which is described in further detail in U.S. patent application Ser. No. 09/169,255 entitled "Method and Apparatus for Dynamic Instruction Controlled Reconfiguration Register File with Extended Precision". Due to the combined nature of the SP/PE0 the data memory interface controller 125 must handle the data processing needs of both the SP controller, with SP data in memory 121, and PE0, with PE0 data in memory 123. The SP/PE0 controller 125 also is the controlling point of the data that is sent over the 32-bit or 64 various aspects of which are described in greater detail in U.S. patent application Ser. No. 08/885,310 entitled "Manifold Array Processor", and U.S. patent application Ser. No. 09/169,256 entitled "Methods and Apparatus for Manifold Array Processing", and U.S. patent application Ser. No. 09/169,256 entitled "Methods and Apparatus for ManArray PE-to-PE Switch Control". The interface to a host processor, other peripheral devices, and/or external memory can be done in many ways. For completeness, a primary interface mechanism is contained in a direct memory access (DMA) control unit 181 that provides a scalable ManArray data bus (MDB) 183 that connects to devices and interface units external to the ManArray core. The DMA control unit 181 provides the data flow and bus arbitration mechanisms needed for these external devices to interface to the ManArray core memories via the multiplexed bus interface represented by line 185. A high level view of a ManArray control bus (MCB) 191 is also shown in FIG. 1. The ManArray architecture uses two primary bus interfaces: the ManArray data bus (MDB), and the ManArray control bus (MCB). The MDB provides for high volume data flow in and out of the DSP array. The MCB provides a path for peripheral access and control. The width of either bus varies between different implementations of ManArray processor cores. The width of the MDB is set according to the data bandwidth requirements of the array in a given application, as well as the overall complexity of the on-chip system. Further details of presently preferred DMA control and coprocessing interface techniques are found in U.S. application Ser. No. 09/791,940 and Provisional Application Ser. No. 60/184,668 both of which are entitled "Methods and Apparatus for Providing Bit-Reversal and Multicast Functions Utilizing DMA Controller" and which were filed Feb. 23, 2001 and Feb. 24, 2000, respectively, and U.S. application Ser. No. 09/972,819 and Provisional Application Ser. No.
60/184,560 both entitled "Methods and Apparatus for Flexible Strength Coprocessing Interface" filed Feb. 23, 2001 and Feb. 24, 2000, respectively, all of which are incorporated by reference in their entirety herein. [0068] Interrupt Processing [0069] Up to 32 interrupts including general purpose interrupts (GPI-4-GPI-31), non-maskable interrupts (NMI), and others, are recognized, prioritized, and processed in this exemplary ManArray scalable array processor in accordance with the present invention as described further below. To begin with, a processor interrupt is an event which causes the preemption of the currently executing program in order to initiate special program actions. Processing an interrupt generally involves the following steps: [0070] Save the minimum context of the currently executing program, [0071] Save the current instruction address (or program counter), [0072] Determine the interrupt service routine (ISR) start address and branch to it, [0073] Execute the interrupt program code until a "return from interrupt" instruction is decoded, [0074] Restore the interrupted program's context, and [0075] Restore the program counter and resume the interrupted program. Interrupts are specified in three primary ways: a classification of the interrupt signals into three levels, whether they are asynchronous versus synchronous, and maskable versus non-maskable. Interrupt level is a classification of interrupt signals where the classification is by rank or degree of importance. In an exemplary ManArray system, there are three levels of interrupts where 1 is the lowest and 3 the highest. These ManArray interrupts levels are: interrupt level 1 is for GPI and SYSCALL; interrupt level 2 is for NMI; and interrupt level 3 is for Debug. SYSCALL is an instruction which causes the address of an instruction immediately following SYSCALL to be saved in a general-purpose interrupt link register (GPILR) and the PC is loaded with the specified vector from the system vector table. The system vector table contains 32 vectors numbered from 0 to 31. Each vector contains a 32-bit address used as the target of a SYSCALL. FIG. 2A shows an exemplary encoding 202 and a syntax/operation table 204 for a presently preferred SYSCALL instruction. [0076] By design choice, interrupts at one classification level cannot preempt interrupts at the same level or interrupts at a higher level, unless this rule is specifically overridden by software, but may preempt interrupts at a lower level. This condition creates a hierarchical interrupt structure. Synchronous interrupts occur as a result of instruction execution while asynchronous interrupts occur as a result of events external to the instruction processing pipeline. Maskable interrupts are those which may be enabled or disabled by software while non-maskable interrupts may not be disabled, once they have been enabled, by software. Interrupt enable/disable bits control whether an interrupt is serviced or not. An interrupt can become pending even if it is disabled. [0077] Interrupt hardware provides for the following: [0078] Interrupt sources and source selection, [0079] Interrupt control (enable/disable), [0080] Interrupt mapping: source event-to-ISR, and [0081] Hardware support for context save/restore. These items are discussed further below. [0082] Interrupt Modes and Priorities [0083] In ManArray processors, there are four interrupt modes of operation not including low power modes, and three levels of interrupts which cause the processor to switch between modes. 
The modes shown in the four mode interrupt transition state diagram 200 of FIG. 2B are: a user mode 205, a system mode 210, an NMI mode 215, and a debug mode 220. User mode is the normal mode of operation for an application program; system mode is the mode of operation associated with handling a first level type of interrupt, such as a GPI or SYSCALL; NMI mode is the mode of operation associated with the handling of a non-maskable interrupt, for example the processing state associated with a loss of power interrupt; and debug mode is the mode of operation associated with the handling of a debug interrupt, such as single step and break points. [0084] A processor mode of operation is characterized by the type of interrupts that can, by default, preempt it and the hardware support for context saving and restoration. In an exemplary ManArray core, there are up to 28 GPI level interrupts that may be pending, GPI-04 through GPI-31, with GPI-04 having highest priority and GPI-31 lowest when more than one GPI is asserted simultaneously. State diagram 200 of FIG. 2B illustrates the processor modes and how interrupts of each level cause mode transitions. The interrupt hardware automatically masks interrupts (disables interrupt service) at the same or lower level once an interrupt is accepted for processing (acknowledged). The software may reenable a pending interrupt, but this should be done only after copying to memory the registers which were saved by hardware when the interrupt being processed was acknowledged, otherwise they will be overwritten. The default rules are: [0085] GPI 233, SYSCALL 234, NMI 232 and debug interrupts 231 may preempt a user mode 205 program. SYSCALL 234 does this explicitly. [0086] NMI 237 and debug interrupts 236 may preempt a GPI program (ISR) running in system mode 210. [0087] Debug interrupts 238 may preempt an NMI mode 215 program (ISR). [0088] GPIs save status (PC and flags) and 2-cycle instruction data registers when acknowledged. SYSCALL 234 operates the same as a GPI 233 from the standpoint of saving state, and uses the same registers as the GPIs 233. [0089] Debug interrupts 231 save status and 2-cycle instruction data registers when they preempt user mode 205 programs, but save only status information when they preempt system mode ISRs 210 or NMI ISRs 215. The state saved during interrupt processing is discussed further below. [0090] NMI interrupts 237 save status but share the same hardware with system mode 210. Therefore, non-maskable interrupts are not fully recoverable to the pre-interrupt state, but the context in which they occur is saved. [0091] 3--Interrupt Sources [0092] There are multiple sources of interrupts to a DSP core, such as the ManArray processor described herein. These sources may be divided into two basic types, synchronous and asynchronous. Synchronous interrupts are generated as a direct result of instruction execution within the DSP core. Asynchronous interrupts are generated as a result of other system events. Asynchronous interrupt sources may be further divided into external sources (those coming from outside the ManArray system core) and internal sources (those coming from devices within the system core). Up to 32 interrupt signals may be simultaneously asserted to the DSP core at any time, and each of these 32 may arise from multiple sources.
A module called the system interrupt select unit (SISU) gathers all interrupt sources and, based on its configuration which is programmable in software, selects which of the possible 32 interrupts may be sent to the DSP core. There is a central interrupt controller 320 shown in FIG. 3 called the interrupt control unit (ICU) within the DSP core. One task of the ICU is to arbitrate between the 32 pending interrupts which are held in an interrupt request register (IRR) within the ICU. The ICU arbitrates between pending interrupts in the IRR on each cycle. [0093] Synchronous Interrupt Sources [0094] One method of initiating an interrupt is by directly setting bits in the interrupt request register (IRR) that is located in the DSP interrupt control unit (ICU) 320. This direct setting may be done by load instructions or DSU COPY or BIT operations. [0095] Another method of initiating an interrupt is by using a SYSCALL instruction. This SYSCALL initiated interrupt is a synchronous interrupt which operates at the same level as GPIs. SYSCALL is a control instruction which combines the features of a call instruction with those of an interrupt. The argument to the SYSCALL instruction is a vector number. This number refers to an entry in the SYSCALL table 800 of FIG. 8 which is located in SP instruction memory starting at address 0x00000080 through address 0x000000FF containing 32 vectors. A SYSCALL is at the same level as a GPI and causes GPIs to be disabled via the general purpose interrupt enable (GIE) bit in status and control register 0 (SCR0). It also uses the same interrupt status and link registers as a GPI. [0096] Asynchronous Interrupt Sources [0097] Asynchronous interrupt sources are grouped under their respective interrupt levels, Debug, NMI and GPI. The address interrupt described further below can generate any of these three levels of interrupts. [0098] Debug and Address Interrupts [0099] Debug interrupt resources include the debug control register, debug instruction register and debug breakpoint registers. Examples of debug interrupts in the context of the exemplary ManArray processor are for software break points and for single stepping the processor. [0100] Address interrupts are a mechanism for invoking any interrupt by writing to a particular address on the MCB as listed in table 700 of FIG. 7. When a write is detected to an address mapped to an address interrupt, the corresponding interrupt signal is asserted to the DSP core interrupt control unit. There are four ranges of 32 byte addresses each of which are defined to generate address interrupts. A write to an address in a first range (Range 0) 720 causes the corresponding interrupt, a single pulse on the wire to the ICU. A write to a second range (Range 1) 725 causes assertion of the corresponding interrupt signal and also writes the data to a register "mailbox" (MBOX1). A write to further ranges (Ranges 2 and 3) 730 and 735, respectively, has the same effect as a write to Range 1, with data going to register mailboxes 2 and 3, respectively. In another example, an address interrupt may be used to generate an NMI to the DSP core by writing to one of the addresses associated with an NMI row 740 and one of the columns 710. For further details, see the interrupt source/vector table of FIG. 7 and its discussion below. [0101] NMI [0102] The NMI may come from either an internal or external source. It may be invoked by either a signal or by an address interrupt. 
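In software terms, the mailbox form of address interrupt described in paragraph [0100] above supports simple message passing between a bus master and the DSP. The sketch below is illustrative only — the accessor functions and names are hypothetical; only the behavior (a single MCB write that both raises the selected GPI and latches the written word into MBOX1) follows the text, and the specific address is taken from the GPI-04 example given later in the description:

	/* Hypothetical producer side (host or another MCB bus master): one write
	 * to a Range 1 address asserts the chosen GPI and deposits the word in
	 * MBOX1.
	 */
	#define GPI04_RANGE1_ADDR ((volatile unsigned int *)0x00300224u) /* illustrative */

	static void post_message(unsigned int msg)
	{
		*GPI04_RANGE1_ADDR = msg; /* asserts GPI-04, msg latched in MBOX1 */
	}

	/* Hypothetical DSP-side handler, reached through the GPI-04 entry of the
	 * interrupt vector table; mbox1_read() stands in for whatever MBOX1
	 * access a given implementation provides.
	 */
	void gpi04_isr(void)
	{
		unsigned int msg = mbox1_read();
		handle_message(msg); /* application-defined */
	}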
[0103] GPI Level Interrupts [0104] The general purpose interrupts may suitably include, for example, DMA, timer, bus errors, external interrupts, and address interrupts. There are four DMA interrupt signals (wires), two from each DMA lane controller (LC). LCs are also capable of generating address interrupts via the MCB. [0105] A system timer is designed to provide a periodic interrupt source and an absolute time reference. [0106] When a bus master generates a target address which is not acknowledged by a slave device, an interrupt may be generated. [0107] External interrupts are signals which are inputs to the processor system core interface. [0108] An address interrupt may be used to generate any GPI to the DSP core, in a similar manner to that described above in connection with debug and address interrupts. [0109] Interrupt Selection [0110] External and internal interrupt signals converge at a system interrupt select unit (SISU) 310 shown in interrupt interface 300 of FIG. 3. Registers in this unit allow selection and control of internal and external interrupt sources for sending to the DSP ICU. A single register, the interrupt source control register (INTSRC) determines if a particular interrupt vector will respond to an internal or external interrupt. FIG. 3 shows the interrupt sources converging at the SISU 310 and the resulting set of 30 interrupt signals 330 sent to the interrupt request register (IRR) in the DSP ICU 320. [0111] FIG. 4 shows logic circuitry 400 to illustrate how a single GPI bit of the interrupt request register (IRR) is generated. A core interrupt select register (CISRS) bit 412 selects via multiplexer 410 between an external 415 or internal 420 interrupt source. An address interrupt 425 enabled by an address interrupt enable register (AIER) 435 or a selected interrupt source 430 generates the interrupt request 440. FIG. 5 shows logic circuitry 500 which illustrates how the NMI bit in the IRR is generated from its sources. Note that the sources are ORed (510, 520) together rather than multiplexed, allowing any NMI event to pass through unmasked. FIG. 6 shows logic circuitry 600 illustrating how the DBG bit in the IRR is generated from its sources. Note again that the sources are ORed (610, 620) together rather than multiplexed. [0112] Mapping Interrupts to Interrupt Service Routines (ISRs) [0113] There are two mechanisms for mapping interrupt events to their associated ISRs. Asynchronous interrupts are mapped to interrupt handlers through an interrupt vector table (IVT) 700 shown in FIG. 7 which also describes the assignment of interrupt sources to their corresponding vectors in the interrupt vector table. [0114] Software generated SYSCALL interrupts are mapped to interrupt handlers through a SYSCALL vector table 800 shown in FIG. 8. The interrupt vector table 700 may advantageously reside in a processor instruction memory from address 0x00000000 through address 0x0000007F. It consists of 32 addresses, each of which contains the address of the first instruction of an ISR corresponding to an interrupt source. [0115] An example of operation in accordance with the present invention is discussed below. Interrupt GPI-04 715 of FIG. 7 has an associated interrupt vector (address pointer) 04 at address 0x00000010 in instruction memory which should be initialized to contain the address of the first instruction of an ISR for GPI-04. This vector may be invoked by an external interrupt source, if the external source is enabled in the INTSRC register.
In the exemplary ManArray processor, when GPI-04 is configured for an internal source, the interrupt may be asserted by the DSP system timer. In addition, MCB data writes to addresses 0x00300204, 0x00300224, 0x00300244, and 0x00300264 will cause this interrupt to be asserted if their respective ranges are enabled in the address interrupt enable register (ADIEN). Writes to the last three addresses will additionally latch data in the corresponding "mailbox" register MBOX1, MBOX2, or MBOX3 which can be used for interprocessor communication. [0116] FIG. 8 shows SYSCALL vector mapping 800. ISRs which are invoked with SYSCALL have the same characteristics as GPI ISRs. [0117] Interrupt Control [0118] Registers involved with interrupt control are shown in register table 900 of FIG. 9. [0119] Further details of the presently preferred interrupt source control register and the address interrupt enable register are shown in the tables below.

TABLE-US-00001 INTSRC Interrupt Source Configuration Register (Reset value: 0x00000000)
  Bits 31-4: EXT31 ... EXT04 (one source-select bit per interrupt)
  Bits 3-0:  R (Reserved)
  EXTxx: 0 = Internal source, 1 = External source

[0120] ADIEN Address Interrupt Enable Register

TABLE-US-00002 ADIEN Address Interrupt Enable Register
  Bits 3-0: AIR3 AIR2 AIR1 AIR0
  AIRx: Enable Address Interrupt Range `x'
        0 = Address Interrupt for range `x' disabled
        1 = Address Interrupt for range `x' enabled

Address interrupts are triggered by writes to specific addresses (mapped to the ManArray Control Bus). Each range contains 32 (byte) addresses. When a range's AIR bit is set, a write to a particular address in the range causes the corresponding interrupt to be asserted to the DSP core. [0121] Interrupt Processing Specifics Interrupt processing involves the following steps: [0122] 1. Interrupt detection, [0123] 2. Interrupt arbitration, [0124] 3. Save essential program state (PC, flags, 2-cycle target data), [0125] 4. Fetch IVT vector into PC, [0126] 5. Execute ISR, [0127] 6. Execute RETI, [0128] 7. Restore essential program state, and [0129] 8. Restore PC from appropriate interrupt link register. Some specific points of the exemplary ManArray processor implementation are: [0130] When multiple interrupts are pending their service order is as follows: Debug, NMI, and GPI-04, GPI-05, . . . etc. [0131] A SYSCALL instruction, if in decode, will execute as if it were of higher priority than any GPI. If there is an NMI or Debug interrupt pending, then the SYSCALL ISR will be preempted after the first instruction is admitted to the pipe (only one instruction of the ISR will execute). [0132] One instruction is allowed to execute at any level before the next interrupt is allowed to preempt. This constraint means that if an RETI is executed at the end of a GPI ISR and another GPI is pending, then exactly one instruction of the USER level program will execute before the next GPI's ISR is fetched. [0133] The Debug interrupt saves PC, flags and interrupt forwarding registers (IFRs) when it is accepted for processing (acknowledged) while in User mode. If it is acknowledged while in GPI mode or NMI mode, it will only save PC and flags as it uses the same IFRs as the GPI level.
[0134] If processing a Debug interrupt ISR, and the Debug IRR bit is set, then an RETI will result in exactly one instruction executing before returning to the Debug ISR. [0135] Load VLIW (LV) instructions are not interruptible and therefore are considered one (multi-cycle) instruction. Further details of LV instructions are provided in U.S. Pat. No. 6,151,668 which is incorporated by reference herein in its entirety. [0136] Interrupt Pipeline Diagrams [0137] FIG. 10A depicts an interrupt pipeline diagram 1000 that can be used to depict the events that happen in an instruction flow when an interrupt occurs. To use the diagram for this purpose, follow these directions: [0138] 1. Cut FIG. 10A along dashed line 1002, and [0139] 2. Slide "instruction stream" I0-I7 1030 under execution units fetch (F), decode (DEC), execute 1 (EX1), condition return 1/execute 2 (CR1/EX2) and condition return 2 (CR2) 1032 to observe flag generation and condition feedback visually. FIG. 10B illustrates a system 1050 with interrupt forwarding registers used in an SP and all PEs with functional units, load unit (LU) 1052, store unit (SU) 1054, DSU 1056, ALU 1058, MAU 1060 and condition generation unit (CGU) 1062. Configurable register file, also known as compute register file (CRF) 1064 is also shown. FIG. 10C shows a flag table 1080 illustrating saved flag information within the saved status registers (SSRs). [0140] FIG. 10A is based upon the following assumptions: [0141] 1. Only current flags 1026 and hot conditions 1034 from condition return 1 (CR1) 1004 and hot conditions 1036 from CR2 1006 affect conditional execution. Hot conditions are the condition information generated in the last stage of an execution unit's operation and are available in the condition return stage of the pipeline prior to their being latched at the end of the condition return stage. The net result of condition generation unit (CGU) 1062 condition reduction is labeled "Condex flags" (1038). [0142] 2. Execution unit updates (EX Flag Updates) 1040 do not affect conditional execution until the instruction which generates them reaches CR1 phase. [0143] 3. Interrupt acknowledge occurs between I3 1008 and I4 1010. On RETI, the state of the pipe must be restored so that it appears to I4 as if no interrupt had occurred. [0144] 4. Each execution unit supplies hot condition flags and pipe phase information. The CGU 1062 must decode this information into a set of flags from each phase or "no flags" if a phase does not have an instruction which updates flags. Using this information, it can supply the correct "Condex flags" 1038 to the DEC and EX1 in stages 1012 and 1014, and update the latched flags 1042 correctly. [0145] 5. Note that the muxes 1016, 1018 and 1020 represent the logical "selection" between flag information from each phase. [0146] Referring to FIG. 10A and sliding the instructions I0-I7 1030 right along the execution units 1032, interrupt processing proceeds as follows: [0147] 1. When instruction 3 (I3) 1008 is in DEC 1012: The interrupt is acknowledged. The fetch program counter (PC) which contains the address of I4 1010 is saved to the correct interrupt link register (ILR). [0148] 2. When I3 is in execute 1 (EX1) pipeline stage 1014: Update all flags according to I1 1022, I2 1023 and I3 1008 normally. Save the Condex flags. These are the "hot" flags which are to be supplied to I4 1010 when it is in decode. [0149] 3.
When I3 1008 is in CR1 1004: Save the status and control register (SCR0) since this might be read by I4 in EX1 and it might have been updated by I3 in EX1. Update Condex flags based on I2 and I3, and save the Condex flags. These will be fed back to I4 1010 and I5 1024 and provided as input to flag update mux 1016 (selecting between Condex flags and EX Flag Updates). If I3 contains a 2-cycle instruction, execution unit result data must be saved to an interrupt forwarding register (IFR). Both ALU 1058 and MAU 1060 require 64-bit IFRs to save this data. [0150] 4. When I3 is in CR2: Since I3 might be a 2-cycle instruction, save CR2 flags (shown in figure). These flags will be fed into the CR1/CR2 flag select mux 1020 when I4 reaches CR1. All other select inputs will by then be supplied by new instructions I4 and I5. [0151] On the return from interrupt (RETI), the following events occur: [0152] 1. Restore ILR to fetch PC and fetch I4. [0153] 2. I4 in DEC: Supply Condex flags that were saved in step 2 above. These flags will be used for conditional execution. Restore saved SCR0 (from Step 3) since this SCRO is read by I4. [0154] 3. I4 in EX1: Supply Condex flags that were saved in Step 3 above for I4 and I5 conditional execution. Condex flags are also supplied to EX/Condex Flag select mux 1016. Since I4 provides flag information to the CGU, the CGU determines the proper update based on the saved Condex flag information and new I4 EX flag update information. If 2-cycle data from I3 was saved, supply this to the write-back path of CRF 1064 via multiplexers 1065 and 1066. This will update the CRF 1064 unless I4 contains 1-cycle instructions in the same unit(s) that I3 used for 2-cycle instructions. [0155] 4. I4 in CR1: Supply CR2 flags to CR1/CR2 mux 1020, with all other mux controls provided normally by CGU based on inputs from instructions (I4 and I5) in earlier stages. [0156] 5. Done, instruction processing continues normally. [0157] The hardware provides interrupt forwarding registers 1070-1076 as illustrated in the system 1050 of FIG. 10B, in the SP and all PEs that are used as follows: [0158] (1) When an interrupt occurs and is acknowledged, all instructions in the decode phase are allowed to proceed through execute. One-cycle instructions are allowed to complete and update their target registers and flags. Any two-cycle instructions are allowed to complete also, but their output, which includes result data, result operand register addresses and flag information, is saved in a set of special purpose registers termed the "interrupt forwarding registers" (IFRs) 1070-1076 as shown in FIG. 10B, and no update is made to the register file (CRF) 1064 or status registers. [0159] Uniquely, when an interrupt occurs, interface signals are provided to all PEs to support the following operations independently in each PE dependent upon the local PE instruction sequence prior to the interrupt. For example, there can be a different mixture of 1-cycle and 2-cycle instructions in each PE at the time of an interrupt and by using this signal interface and local information in each PE the proper operation will occur in each PE on the return from interrupt, providing synchronized interrupt control in the multiple PE environment. These interface signals include save/restore signals, interrupt type, and extended or normal pipe status. 
Specifically, these interface signals are: [0160] Save SSR State Machine State (SP_VCU_s_ssr_state[1:0]) [0161] These two bits indicate the state of an internal Save Saved Status Register (SSR) state machine. The signals represent 4 possible states (IDLE, I4_EX, I5_EX, I6_EX). When not in the idle state, the Save SSR state machine indicates the phase of the pipe that the interrupted instruction would be in had an interrupt not occurred. If you consider a sequence of 6 instructions (I1, I2, . . . , I6), and the fourth instruction is interrupted, the listed state machine labels indicate when the 4th, 5th and 6th instructions would have been in the execute phase of the pipeline. This machine state information is used locally in each PE as one of the indicators for when the IFRs need to be saved and what state needs to be saved to SSRs. [0162] Restore SSR State Machine State (SP_VCU_r_ssr_state[1:10]) [0163] These bits indicate the state of an internal Restore SSR state machine. The signals represent 4 possible states (IDLE, I4_DC, I5_DC, I6_DC). When not in the idle state, the Restore SSR state machine indicates the phase of the pipe that the interrupted instruction is in after it is fetched and put into the pipe again (i.e., from a return from interrupt). If you consider a sequence of 6 instructions (I1, I2, . . . ,I6), and the fourth instruction is interrupted, the state machine labels indicate when the 4th, 5th and 6th instructions are in the decode phase of the pipeline. This machine state information is used locally in each PE as one of the indicators for when the IFRs need to be restored and when state needs to be restored from the SSRs. [0164] Save SSRs (SP_VCU_save_ssr) [0165] This bit indicates when the SSRs must be saved. [0166] Transfer System SSRs to User SSRs (SP_VCU_xfer_ssr) [0167] This signal indicates the System SSRs must be transferred to the User SSRs. [0168] Select User SSRs (VCU_sel_gssr) [0169] This signal indicates which SSRs (System or User SSRs) should be used when restoring the SSR to the hot flags and SCR0. It is asserted when restoring flags from the System SSRs. [0170] Extend Pipe when Returning from Interrupt Service Routine (SP_VCU_reti_extend pipe) [0171] When asserted, this bit indicates that a return from interrupt will need to extend the pipe. [0172] (2) The address of the instruction in FETCH phase (current PC) is saved to the appropriate link register. [0173] (3) The interrupt handler is invoked through the normal means such as a vector table lookup and branch to target address. [0174] (4) When the RETI instruction is executed, it causes the restoration of the saved SCRO and link address from the appropriate link and saved-status registers. [0175] (5) When the instruction at the link address reaches the EXECUTE phase, the data in the interrupt forwarding registers, for those units whose last instruction prior to interrupt handling was a two-cycle instruction, is made available to the register file 1064 and the CGU 1062 instead of the data coming from the corresponding unit. From the CGU and register file point of view, this operation has the same behavior that would have occurred if the interrupt had never happened. [0176] FIGS. 10C and 10D illustrate interrupt pipeline diagrams 1080 and 1090 for an example of interrupt processing as currently implemented. The columns SSR Save 1084, SSR-XFER 1086, OP in Fetch 1088, System Mode 1090 and User Mode 1092 in FIG. 
10C show the state of the interrupt state machine for each cycle indicated in the cycle column 1082. Further, FIG. 10D shows the pipeline state of a bit within the interrupt request register (IRR) 1095, the instruction whose address is contained in the interrupt link register (ILR) 1096, the state of the interrupt status register (ISR) 1097, the state of the GPIE interrupt enable bit found in SCR0 1098, the interrupt level (ILVL) 1099, and the instruction being processed in the set of pipeline stages (fetch (F) 1021, predecode (PD) 1023, decode (D) 1025, execute 1 (EX1) 1027, and condition return (CR) 1029). It is assumed that the individually selectable general purpose interrupts are enabled and the interrupt vector number that is stored in SCR1 gets updated at the same time that IMOD is updated in SCR0. [0177] In the present exemplary processes, any time an interrupt is taken, there will be 3 cycles during which information needed to restore the pipeline is saved away in the saved status registers (SSR0, SSR1, and SSR2). The information is saved when the SSR-SAVE column 1084 in table 1080 has a "1" in it. The easiest way to understand how the three 32-bit SSR registers are loaded is by breaking them down into six 16-bit fields. SSR0 is made up of the user mode decode phase (UMDP) and user mode execute phase (UMEP) components. SSR1 is made up of the user mode condition return phase (UMCP) and system mode condition return phase (SMCP) components. SSR2 is made up of the system mode decode phase (SMDP) and system mode execute phase (SMEP) components. [0178] SMCP--System Mode Condition Return Phase (Upper Half of SSR1) [0179] SMEP--System Mode Execution Phase (Upper Half of SSR2) [0180] SMDP--System Mode Decode Phase (Lower Half of SSR2) [0181] UMCP--User Mode Condition Return Phase (Lower Half of SSR1) [0182] UMEP--User Mode Execute Phase (Upper Half of SSR0) [0183] UMDP--User Mode Decode Phase (Lower Half of SSR0) When interrupt processing begins, the data is first stored to the system mode registers. Then, depending on the mode of operation before and after the interrupt, the system mode registers, may be transferred to the user mode registers. For example, if the mode of operation before the interrupt is taken is a USER mode, the SSR-XFER will be asserted. If the SSR-XFER bit in column 1086 is asserted, the contents of the system mode registers are transferred to the user mode registers. [0184] In the example shown in FIG. 10C, the floating point subtract (Fsub), a 2-cycle instruction, is preempted by an interrupt. The Hot State Flags (HOTSFs) are control bits indicating local machine state in the exemplary implementation and these are as follows: [0185] HOTSFs={HOTSF3, HOTSF2,HOTSF1,HOTSF0}; [0186] HOTSF3=bit indicating that a 2-cycle operation is in execute and it could have control of the flag update. [0187] HOTSF2=bit indicating that a 2-cycle ALU instruction is in the execute (EX1) pipeline stage. [0188] HOTSF1=bit indicating that a 2-cycle MAU instruction is in the execute (EX1) pipeline stage. [0189] HOTSF0=bit indicating that a LU or DSU instruction that is targeted at SCRO is in the execute (EX1) pipeline stage. [0190] In cycle 4, 1081, since the SSR-SAVE signal was asserted, the FSub hotflags and hot state flags will be saved into SMCP. The SMCP is loaded with the Hotflags, arithmetic scalar flags (CNVZ) arithmetic condition flags (F0-F7), and the HOTSFs signals for the instruction that would be in Execute if the interrupt had not occurred, in this example, the FSub. 
In cycle 5 1083, SMEP is loaded with the contents of SMCP, and SMCP is loaded with the current hotflags and the hot state flags from cycle 4. The SMCP is loaded with the Hotflags (CNVZ & F0-F7) and the HOTSFs from the previous cycle. In cycle 6 1085, SMDP gets the contents of SMEP, SMEP gets the contents of SMCP, and SMCP gets loaded with the current hotflags, and the hot state flags for cycle 4. The SMCP is loaded with the Hotflags (CNVZ & F0-F7) and the HOTSFs from two cycles before. [0191] In cycle 7 1087, since the SSR-XFER signal was asserted in the previous cycle, the user mode phase components are loaded with copies of the system mode phase components. [0192] Whenever the SSR-save bit is asserted and a 2-cycle operation (ALU or MAU) is in the EX2 pipeline stage, the target compute register of the 2-cycle operation is not updated. Rather, the data, address, and write enables, i.e., bits indicating data type are stored in the corresponding execution unit forwarding registers. [0193] In more detail, the pipeline diagram of FIG. 10D depicts the events that occur when a GPI preempts a user mode program after the fetch of a single cycle subtract (Sub) short instruction word with a nonexpanded normal pipe. Note that the SSR-XFER bit 1094 is asserted in this case since it is a GPI taking the mode of operation from a user mode (ILVI=USR) to a system mode (ILVL=GPI). It would also be asserted when taking an interrupt that leaves the mode of operation in the same mode as it was before the interrupt came along (i.e., nesting general purpose interrupts). For the interrupt request register (IRR) 1095, the bit corresponding to the interrupt taken is cleared in the IRR. The general purpose or debug interrupt link register (ILR) 1096, holds the address of the instruction that will be executed following the interrupt. In FIG. 10D, only one of these registers (GPISR) is shown in column 1096. The general purpose or debug interrupt status register (GPISR or DBISR) 1097 contains a copy of SCRO, so that flag state may be restored following the interrupt. Here, only one of these registers (GPISR) is shown in column 1097. Interrupt enable (IE), bits 31-29 of SCRO are GPI enable, NMI enable, and DBI enable--here only the applicable enable bit (GPIE) 1098 is shown. Bits 28 and 27 of SCRO contain the interrupt mode (IMode) which encodes the four, user, GPI, NMI, or debug modes. [0194] CE3c Extension [0195] In the exemplary ManArray processor, a hierarchical conditional execution architecture is defined comprising 1-bit, 2-bit, and 3-bit form is. The 1-bit form is a subset of the 2-bit and 3-bit forms and the 2-bit form is a subset of the 3-bit form. In the exemplary ManArray processor, the load and store units use a CE1 1-bit form, the MAU, ALU, and DSU use the 3-bit CE3 form, though different implementations may use subsets of the 3-bit form depending upon algorithmic needs. The hierarchical conditional execution architecture is further explained in U.S. patent application Ser. No. 09/238,446 entitled "Methods and Apparatus to Support Conditional Execution in a VLIW-Based Array Processor With Subword Execution" filed on Jan. 28, 1999 and incorporated herein in its entirety. [0196] Two 3-bit forms of conditional execution, CE3a and CE3b, specify how to set the ACFs using C, N, V, or Z flags. These forms are described in greater detail in the above mentioned application. A new 3-bit form is specified in the present invention and labeled CE3c. 
The N and Z options available in the 3-bit CE3a definition are incorporated in the new CE3c encoding format 1100 encodings 1105 and 1106 respectively, illustrated in FIG. 11. The present invention addresses the adaptation of CE2 to use its presently reserved encoding for a registered SetCC form of conditional execution. The new form of CE2, which is a superset of the previous CE2, is referred to as CE2b whose encoding format is shown in table 1200 of FIG. 12. A new programmable register is used in conjunction with the CE2b and CE3c encodings and is named the SetCC field of SCRO as addressed further below. These bits are used to specify many new combinations of the arithmetic side effect (C, N, V, and Z) flags to cover exceptions detected in the execution units and to provide enhanced flexibility in each of the instructions for algorithmic use. Due to the improved flexibility, it may be possible to replace the original 3-bit CE3a or CE3b with CE3c in future architectures. Alternatively, a mode bit or bits of control could be provided and the hardware could then support the multiple forms of CE3. These CE3 encodings specify whether an instruction is to unconditionally execute and not affect the ACFs, conditionally execute on true or false and not affect the ACFs, or provide a register specified conditional execution function. The ASFs are set as defined by the instruction. in an exemplary implementation for a ManArray processor, the SetCC field of 5-bits 1310 which will preferably be located in an SCR0 register 1300 as shown in FIG. 13. The new format of SCR0 includes the addition of the SetCC bits 12-8 1310, an exception mask bit-13 1315, and the maskable PE exception interrupt signal bit 20 1320. C, N, V, Z, cc, SetCC, ccmask, and F7-F0 bits are always set to 0 by reset. The proposed SetCC definition shown in encoding table 1400 of FIGS. 14A and 14B, specifies some logical combination of flags such as packed data ORing of flags. The encoding also reserves room for floating point exception flags, or the like, for future architectures. [0197] A proposed syntax defining the SetCC operations is "OptypeCC" where the CC represents the options given in FIGS. 14A and 14B for a number of logical combinations of the ASFs. The number of ACFs affected is determined by the packed data element count in the current instruction and shown in FIGS. 14A and 14B. FIGS. 14A and 14B specify the use of packed data side effect signals C, N, V, and Z for each elemental operation of a multiple element packed data operation. These packed data side-effect signals are not programmer visible in the exemplary ManArray system. Specifically, the C7-C0, N7-N0, V7-V0, and Z7-Z0 terms represent internal flag signals pertinent for each data element operation in a packed data operation. "Size" is a packed data function that selects the appropriate affected C7-C0, N7-N0, V7-V0, and Z7-Z0 terms to be ORed based on the number of data elements involved in the packed data operation. For example, in a quad operation, the internal signals C3-C0, N3-N0, V3-V0, and Z3-Z0 may be affected by the operation and would be involved in the ORing while C7-C4, N7-N4, V7-V4, and Z7-Z4 are not affected and would not be involved in the specified operation. [0198] A new form of CE3 conditional execution architecture is next addressed with reference to FIG. 11. Two of the CE3c encodings 1103 and 1104 specify the partial execution of packed data operations based upon the ACFs. 
CE3c also includes the CE2b general extension that controls the setting of the ACFs based upon the registered SetCC parameter 1102. The proposed CE3c 3-bit conditional execution architecture in ManArray provides the programmer with five different levels of functionality: [0199] 1. unconditional execution of the operation, does not affect the ACFs, [0200] 2. conditional execution of the operation on all packed data elements, does not affect the ACFs, [0201] 3. unconditional execution of the operation, ACFs set as specified by the SetCC register, [0202] 4. conditional selection of data elements for execution, does not affect the ACFs, and [0203] 5. unconditional execution of the operation with control over ACF setting. [0204] In each case, data elements will be affected by the operation in different ways: [0205] 1. In the first case, the operation always occurs on all data elements. [0206] 2. In the second case, the operation either occurs on all data elements or the operation does not occur at all. [0207] 3. In the third case, the operation always occurs on all data elements and the ACFs are set in the CR phase of this operation. The 011 CE3c encoding 1102 shown in FIG. 11 would allow the ACFs F7-F0 to be set as specified by a SetCC register as seen in FIGS. 14A and 14B. [0208] 4. In the fourth case, the operation always occurs but only acts on those data elements that have a corresponding ACF of the appropriate value for the specified true or false coding. In this fourth case, the packed data instruction is considered to partially execute in that the update of the destination register in the SP or in parallel in the PEs only occurs where the corresponding ACF is of the designated condition. [0209] 5. In the fifth case, the N and Z flags represent two side effects from the instruction that is executing. An instruction may be unconditionally executed and affect the flags based on one of the conditions, N or Z. [0210] The syntax defining the fourth case operations is "Tm" and "Fm," for "true multiple" and "false multiple." The "multiple" case uses the packed data element count in the current instruction to determine the number of flags to be considered in the operation. For example, an instruction Tm.add.sa.4h would execute the add instruction on each of the 4 halfwords based on to the current settings of F0, F1, F2, and F3. This execution occurs regardless of how these four flags were set. This approach enables the testing of one data type with the operation on a second data type. For example, one could operate on quad bytes setting flags F3-F0, then a conditional quad half-word operation can be specified based on F3-F0 providing partial execution of the packed data type based on the states of F3-F0. Certain instructions, primarily those in the MAU and ALU, allow a conditional execution CE3c 3-bit extension field to be specified. [0211] PE Exception Interrupts [0212] Since the interrupt logic is in an SP, such as the SP 101, a mechanism to detect exceptions and forward the PE exception information to the SP is presented next. In addition, a method of determining which instruction caused the exception interrupt, in which PE, and in which sub data type operation is also discussed. [0213] One of the first questions to consider is when can an exception be detected and how will this detection be handled in the pipeline. The present invention operates utilizing a PE exception which can cause an interrupt to the SP and the PE exception is based upon conditions latched at the end of the CR phase. 
A whole cycle is allowed to propagate any exception signal from the PEs to the interrupt logic in the SP. Each PE is provided with an individual wire for the exception signal to be sent back to the SP where it is stored in an MRF register. These PE exception signals are also ORed together to generate a maskable PE exception interrupt. The cc flag represents the maskable PE exception interrupt signal. By reading the PE exception field in an MRF register, the SP can determine which PE or PEs have exceptions. Additional details relating to the PE exception are obtained by having the SP poll the PE causing an exception to find out the other information concerning the exception such as which data element in a packed operation caused the problem. This PE-local information is stored in a PE MRF register. One acceptable approach to resetting stored exception information is to reset it automatically whenever the values are read. [0214] In certain implementations, it is possible to make selectable the use of the SetCC register to either set the ACFs, cause an exception interrupt, or both for the programmed SetCC register specified condition. If the SetCC is enabled for exception interrupts and if the specified condition is detected, then an exception interrupt would be generated from the PE or PEs detecting the condition. This exception interrupt signal is maskable. If SetCC is to be used for setting ACFs and generating exception interrupts, then, depending upon system requirements, two separate SetCC type registers can be defined in a more optimum manner for each intended use. When a single SetCC register is used for both ACF and exception interrupt, note that the exception cc is tested for every cycle while the F0 flag can only be set when an instruction is issued using 011 CE3c encoding 1102 as shown in FIG. 11. [0215] For determining which instruction caused an exception interrupt, a history buffer in the SP is used containing a set number of instructions in the pipeline history so that the instruction that indirectly caused the PE exception can be determined. The number of history registers used depends upon the length of the instruction pipeline. A method of tagging the instructions in the history buffer to identify which one caused the exception interrupt is used. Even in SMIMD operation, this approach is sufficient since the contents of the VIM can be accessed if necessary. An ACF history buffer in each PE and the SP can also be used to determine which packed data element caused the exception. [0216] Alternatives for the Arithmetic Scalar Flag (ASF) Definition [0217] The definition of the C, N, V, Z flags, known collectively as the ASFs to be used in an exemplary system specifies the ASFs to be based on the least significant operation of a packed data operation. For single or one word (1W) operations, the least significant operation is the same as the single word operation. Consequently, the JMPcc instruction based on C, N, V, Z flags set by the 1W operation is used regularly. Setting of the C, N, V, Z flags by any other type of packed data operation in preparation for a JMPcc conditional branch is not always very useful so improving the definition of the ASFs would be beneficial. [0218] Improvements to the ASF definition addressed by the present invention are described below. The present C flag is replaced with a new version C' that is an OR of the packed data C flags. 
Likewise the N flag is replaced with a new version N' that is an OR of the packed data N flags, a V' that is an OR of the packed data V flags, and a Z' that is an OR of the packed data Z flags. The OR function is based upon the packed data size, i.e. 4H word OR four flags and an 8B word OR eight. In the 1W case, any existing code for an existing system which uses the JMPcc based upon 1W operations would also work in the new system and no change to the existing code would be needed. With the OR of the separate flags across the data types, some unique capabilities are obtained. For example, if any packed data result produced an overflow, a conditional JMP test could be easily done to branch to an error handling routine. [0219] In a first option, for JMPcc conditions based upon logical combinations of C', N', V, and Z', the preceding operation would need to be of the 1W single word type, otherwise the tested condition may not be very meaningful. To make JMPec type operations based upon logical combinations of the ASF' flags more useful, a further change is required. The execution units which produce C, N, V, and Z flags must latch the individual packed data C, N, V, and Z flag information at the end of an instruction's execution cycle. In the condition return phase, these individually latched packed data C, N, V, and Z information flags are logically combined to generate individual packed data GT, LE, and the like signals. These individual packed data GT, LE, and the like, signals can then be ORed into hot flag signals for use by the JMPcc type instructions. These OR conditions are shown in FIGS. 14A and 14B and are the same logical combinations used in the SetCC register specified conditions. Then, a JMPGT would branch, if "any" of the packed data operations resulted in a GT comparison. For example, following a packed data SUB instruction with a JMPGT becomes feasible. Rather than saving all packed data flags in a miscellaneous register file (MRF) register only the single hot flag state "cc" being tested for is saved in SCRO. Once the "cc" state has been latched in SCRO it can be used to cause an exception interrupt as defined further in the PE exception interrupt section below, if this interrupt is not masked. [0220] As an alternate second option, it is possible to define, for both Manta and ManArray approaches that only the 1W case is meaningful for use with the JMPec, CALLcc, and other conditional branch type instructions. By using the SetCC register and conditional execution with CE3b and CE3c, it will be possible to set the ACFs based upon a logical combination of the packed data ASFs and then use true (T.) or false (F.) forms of the JMP, CALL, and other conditional instructions to accomplish the same task. [0221] For ManArray, the generic ASF is as follows: [0222] Arithmetic Scalar Flags Affected [0223] C=1 if a carry occurs on any packed data operation, 0 otherwise, [0224] N=MSB of result of any packed data operation, [0225] V=1 if an overflow occurs on any packed data operation, 0 otherwise, and [0226] Z=1 if result is zero on any packed data operation, 0 otherwise. [0227] PE Exception Interrupts Alternative [0228] Rather than have each PE supply a separate exception wire, an alternative approach is to use a single wire that is daisy-chain ORed as the signal propagates from PE to PE, as shown for PE0-PEn for system 1560 of FIG. 15. In FIG. 
15, a single line ORed exception signal and an exemplary signal flow are illustrated where the exception cc is generated in each PE assuming that cc=0 for no exception and cc=1 for an exception. The exception cc is generated every instruction execution cycle as specified by the SetCC register. If multiple PEs cause exceptions at the same time, each exception is handled sequentially until all are handled. [0229] The PE addresses are handled in a similar manner as the single exception signal. An additional set of "n" wires for a 2n array supplies the PE address. For example, a 4×4 array would require only five signal lines, four for the address and one for the exception signal. An exemplary functional view of suitable address logic 1600 for each PE in a 2×2 array is shown in FIG. 16. The logic 1600 is implemented using a 2×2 AND-OR, such as AND-ORs 1602 and 1604 per PE address bit. [0230] With this approach, the PE closest to the SP on the chain will block PE exception addresses behind it until the local PE's exception is cleared. It is noted that if each PE can generate multiple exception types and there becomes associated with each type a priority or level of importance, then additional interface signals can be provided between PEs to notify the adjacent PEs that a higher priority exception situation is coming from a PE higher up in the chain. This notification can cause a PE to pass the higher priority signals. In a similar manner, an exception interface can be provided that gives the exception type information along with the PE address and single exception signal. The exception types can be monitored to determine priority levels and whether a PE is to pass a signal to the next PE or not. [0231] Debug Interrupt Processing [0232] There is a region of DSP instruction memory called an "interrupt vector table" (IVT) 1701 and shown in FIG. 17 which contains a sequence of instruction addresses. For the exemplary system this table resides at instruction memory address 0x0000 through 0x007F, where each entry is itself the 32-bit (4 byte) address of the first instruction to be fetched after the interrupt control unit accepts an interrupt signal corresponding to the entry. The first entry at instruction memory address 0x0000 (1740) contains the address of the first instruction to fetch after RESET is removed. The third entry at instruction memory address 0x0008 (1722) contains the address of the first instruction to be fetched when a debug interrupt occurs. Debug interrupts have the highest interrupt priority and are accepted at almost any time and cannot be masked. There are a few times at which a debug interrupt is not immediately acknowledged, such as when a load-VLIW (LV) instruction sequence is in progress, but there are few of these cases. There is a special table entry at instruction memory address 0x0004 (1720) in the exemplary system. [0233] This entry has a "shadow" register 1800 associated with it called the Debug Instruction Register (DBIR) shown in FIG. 18. In addition, there are a set of control bits that are used to determine its behavior. Normally, in responding to an interrupt, a value is fetched from the IVT and placed into the program counter (PC) 1760, and it determines where the next instruction will be fetched. If a program branch targets an address in the IVT memory range, then the value fetched would be assumed to be an instruction and placed into the instruction decode register (IDR) 1750. Since the IVT contains addresses and not instructions, this would normally fail. 
However, in the case of address 0x0004, an instruction fetch targeting this address will cause the processor to attempt to fetch from its "shadow" register, the DBIR (if it is enabled). If there is an instruction in the DBIR, then it is read and placed into the IDR for subsequent decode. If there is not an instruction in the DBIR, the processor stalls immediately, does not advance the instructions in the pipeline, and waits for an instruction to be written to the DBIR. There are three control bits which relate to the DBIR. The debug instruction register enable (DBIREN) bit 1920 of the DSP control register (DSPCTL) 1900 shown in FIG. 19 when set to 1 enables the DBIR "shadow" register. If this bit is 0, then a fetch from 0x0004 will return the data from that instruction memory location with no special side-effects. Two other bits residing in the Debug Status Register (DBSTAT) 2000 of FIG. 20 are the "debug instruction present" (DBIP) bit 2030, and the "debug stall" (DBSTALL) bit 2020. The DBIP bit is set whenever a value is written to the DBIR either from the MCB or from the SPR bus. This bit is cleared whenever an instruction fetch from 0x0004 occurs (not an interrupt vector fetch). When this bit is cleared and an instruction fetch is attempted from 0x0004 then the DBSTALL bit of the DBSTAT register is set and the processor stalls as described above. When this bit is set and an instruction fetch is attempted, the contents of the DBIR are sent to the IDR for decoding and subsequent execution. [0234] When the debug interrupt vector at instruction memory address 0x0008 is loaded with a value of 0x0004, and the DBIREN bit of the DSPCTL register is set to 1 (enabling the DBIR), then when a debug interrupt occurs, 0x0004 is first loaded into the PC (vector load) and the next instruction fetch is attempted at address 0x0004. When this occurs, the processor either stalls (if DBIP=0) or fetches the instruction in the DBIR and executes it. Using this mechanism it is possible to stop the processor pipeline (having saved vital hardware state when the interrupt is accepted) and have an external agent, a test module (or debugger function), take over control of the processor. [0235] As an additional note, on returning from any interrupt, at least one instruction is executed before the next interrupt vector is fetched, even if an interrupt is pending when the return-from-interrupt instruction (RETI) is executed. In the case where a debug interrupt is pending when the RETI instruction is executed, exactly one instruction is executed before fetching from the first address of the debug service routine (or from the DBIR if the vector is programmed to 0x0004). This behavior allows the program to be single-stepped by setting the debug interrupt request bit in the interrupt request register (IRR) while still in the debug interrupt handler. Then when the RETI is executed, a single instruction is executed before reentering the debug interrupt mode. [0236] Two additional registers along with two control bits are used during debug processing to allow a debug host or test module to communicate with debug code running in the target processor. The debug-data-out (DBDOUT) register 2100 of FIG. 21 and the debug-data-in (DBDIN) register 2200 of FIG. 22 are used for sending data out from the processor and reading data into the processor respectively. A write to the DBDOUT register causes a status bit, debug data output buffer full bit (DBDOBF) 2040 of the DBSTAT register to be set. 
This bit also controls a signal which may be routed to an interrupt on an external device (e.g. the test module or debug host). The complement of this signal is routed also to an interrupt on the target processor so that it may use interrupt notification when data has been read from the DBDOUT register. The DBDOUT register is visible to MCB bus masters and when read, the DBDOBF bit to be cleared. An alternate read address is provided which allows the DBDOUT data to be read without clearing the DBDOBF bit. When an external debug host or test module writes to the DBDIN register, the debug data input-buffer-full bit (DBDIBF) 2050 of the DBSTAT register is set. This bit also controls a signal which is routed to an interrupt on the processor target. The complement of this signal is available to be routed back to the debug host or test module as an optional interrupt source. When the target processor reads the DBDIN register, the DBDIBF bit is cleared. [0237] Given the preceeding background, the following discussion describes a typical debug sequence assuming that the debug interrupt vector in the IVT is programmed with a 0x0004 (that is, pointing to the DBIR register) and the DBIR is enabled (DBIREN=1). FIG. 23 illustrates an exemplary DSP ManArray processor 2310 residing on an MCB 2030 and an MDB 234. An external device which we will call the "test module" residing on the MCB, initiates a debug interrupt on the target processor core. The test module is assumed be an MCB bus master supporting simple read and write accesses to slave devices on the bus. The test module actually provides an interface between some standard debug hardware (such as a JTAG port or serial port) and the MCB, and translates read/write requests into the MCB protocol. A debug interrupt may be initiated by writing to a particular MCB address, or configuring an instruction event point register described in further detail in U.S. application Ser. No. 09/598,566 to cause a debug interrupt when a particular DSP condition occurs such as fetching an instruction from a specified address, or fetching data from a particular address with a particular value. [0238] The processor hardware responds to the interrupt by saving critical processor state, such as the program status and control register, SCR0, and several other internal bits of state. The debug interrupt vector is fetched (having contents 0x0004) into the PC and then the processor attempts to read an instruction from 0x0004 causing an access to the DBIR register. If the DBIP bit of the DBSTAT register is 0, then the processor stalls waiting for an action from the test module. When the processor stalls, the DBSTALL bit of the DBSTAT register is set to 1. This bit is also connected to a signal which may be routed (as an interrupt for example) to the test module. This is useful if an event point register is used to initiate the debug interrupt. Rather than polling the DBSTAT register, the test module may be configured to wait for the DBSTALL signal to be asserted. If the DBIP bit is set to 1, then the processor fetches the value in the DBIR and attempts to execute it as an instruction. Typically, the DBIR does not have an instruction present when the debug interrupt is asserted, allowing the processor to be stopped. [0239] The debugger then reads a segment of the DSP instruction memory via the test module, and saves it in an external storage area. It replaces this segment of user program with a debug monitor program. 
[0240] The test module then writes a jump-direct (JMPD) instruction to the DBIR. When this occurs the DBIP bit is set, and the processor fetches this instruction into the IDR for decode, after which it is cleared again. The debugger design must make sure that no programmer visible processor state is changed until it has been saved through the test module. This JMPD instruction targets the debug monitor code. [0241] The monitor code is executed in such a way as to retain the program state. The DBDOUT register is used to write data values and processor state out to the test module. [0242] To resume program execution, the test module writes state information back to the processor using the DBDIN register. When all state has been reloaded, the debug monitor code jumps to instruction address 0x0004 which results in a debug stall. [0243] The test module lastly writes an RETI instruction to the DBIR which causes the internal hardware state to be restored and execution resumed in the program where it was interrupted. [0244] It will be noted that the debug sequence mentioned above could take place in several stages with successive reloads of instructions, using very little instruction memory. [0245] It should also be noted that it is possible to execute the state save/restore sequence by just feeding instructions through the DBIR. Doing this requires that the PC be "locked", that is, prevented from updating by incrementing. This is done using a bit of the DSP control register (DSPCTL) called the "lock PC" (LOCKPC) bit 1930. When this bit is 1, the PC is not updated as a result of instruction fetch or execution. This means when the LOCKPC bit is 1, branch instructions have no effect, other than updating the state of the user link register (ULR) (for CALL-type instructions). Typically a small amount of instruction memory is used to "inject" a debug monitor program since this allows execution of state save/restore using loop instructions providing a significant performance gain. [0246] If a debug monitor is designed to be always resident in processor memory, when the debug interrupt occurs, it does not need to be directed to the DBIR, but rather to the entry point of the debug monitor code. [0247] Reset of the processor is carried out using the RESETDSP bit 1940 of the DSPCTL register. Setting this bit to 1 puts the processor into a RESET state. Clearing this bit allows the processor to fetch the RESET vector from the IVT into the PC, then fetch the first program instruction from this location. It is possible to enter the debug state immediately from RESET if the value 0x0004 is placed in the reset vector address (0x0000) of the IVT, and the DBIREN bit of the DSPCTL register is set to 1. This results in the first instruction fetch coming from the DBIR register. If no instruction is present then the processor waits for an instruction to be loaded. [0248], or to the ManArray architecture as it evolves in the future.
http://www.faqs.org/patents/app/20120173849
CC-MAIN-2015-06
en
refinedweb
Re: Avoiding repeating code in PL/SQL DML Martin T. wrote: > Hey all. (Oracle 9.2.0.1.0 on Windows XP) > > I have the following DML in my PL/SQL code: > ---- > delete from machine_down_times > where start_measure_id in ( > SELECT m.id > from measures m > where m.order_id = p_order_id > and time_stamp >= v_delete_from_date > and time_stamp < v_delete_to_date > ) > and stop_measure_id in ( > SELECT m.id > from measures m > where m.order_id = p_order_id >) > > * VIEW -- ... clutters schema namespace with something only used in my > package > * PL/SQL Collection -- ... performance? It would have to be public to > be used as TABLE(v_collection) (?) > * Inline VIEW -- (what if I have the same SELECT in multiple DML > stmts?) > > thanks a bunch! > > best, > Martin In 10g I'd recommend using the WITH clause. In 9i an inline view. But is this a real problem or just a matter of elegance? -- Daniel A. Morgan University of Washington damorgan_at_x.washington.edu (replace x with u to respond) Puget Sound Oracle Users Group Received on Mon Jul 31 2006 - 10:13:48 CDT
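To make the WITH-clause suggestion concrete, here is a sketch that factors the repeated measures lookup into one named subquery. It is written as a SELECT that previews the rows the DELETE would touch, because the subquery factoring clause belongs to SELECT statements; table, column and variable names are taken from the post, and this is only an illustration, not tested code.

WITH order_measures AS (
    SELECT m.id, m.time_stamp
    FROM   measures m
    WHERE  m.order_id = p_order_id
)
SELECT d.*
FROM   machine_down_times d
WHERE  d.start_measure_id IN (SELECT id
                              FROM   order_measures
                              WHERE  time_stamp >= v_delete_from_date
                              AND    time_stamp <  v_delete_to_date)
AND    d.stop_measure_id  IN (SELECT id FROM order_measures);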
http://www.orafaq.com/usenet/comp.databases.oracle.misc/2006/07/31/0617.htm
CC-MAIN-2015-06
en
refinedweb
I am making my first text based game. When I show the code please don't gasp at how horribly I organize my code. I know it is bad, but that isn't the issue. I checked my last function but every time I try to build and compile it, it gives me this error- "1>c:\documents and settings\owner\my documents\visual studio 2005\projects\textbasedgame\textbasedgame\textgame .cpp(167) : error C2059: syntax error : '}'" This shows an error on the last line of the code here is the code: Please helpPlease helpCode:#include<iostream> #include<time.h> #include<cstring> #include<string> using namespace std; int lvl; int experience; int base; int action; int damage; int magicdamage; string type; int money = 500; string gender; int health = 100; int excess; string name; void goblinbattle(); int battleaction; string battleaction2; int wis = 5; int str = 5; int agi = 5; int main() { int gender1; cout<<"Welcome to the world of Leflorin.\n"; cout<<"Here you will fight evil monsters and become powerful.\n"; cout<<"You fight for your nation, you fight to bring down the evil Scylla!\n"; cout<<"Are you ready... \n"; system("PAUSE"); while(base > 3 || base < 1) { system("cls"); cout<<"Choose your base stat: \n"; cout<<"1. Wizard: Powerful in all magics\n"; cout<<"2. Warrior: Powerful with all weapons\n"; cout<<"3. Thief: Quick and nimble\n"; cout<<"Choose now... "; cin>>base; if(base > 3 || base < 1 || !base) { cout<<"Incorrect entry"; cin.clear(); cin.ignore(80, '\n'); } } switch(base) { case 1: { wis++; type = "wizard"; } break; case 2: { str++; type = "warrior"; } break; case 3: { agi++; type = "Thief"; } break; default: { cout<<"Incorrect entry"; } break; } system("cls"); cout<<"Congratulations, after much hard work, you have become a "<<type<<"!\n"; cout<<"Your stats are now: \n"; cout<<"wisdom: "<<wis<<"\nStrength: "<<str<<"\nAgility: "<<agi<<"\n"; system("PAUSE"); system("cls"); cin.clear(); cin.ignore(80, '\n'); cout<<"Now choose your gender:\n1. Male\n2. Female\n"; cin>>gender1; if(!gender1) { cout<<"Incorrect entry."; cin.clear(); cin.ignore(80, '\n'); } switch(gender1) { case 1: { gender = "Male"; } break; case 2: { gender = "Female"; } break; default: { cout<<"Incorrect entry."; } } cout<<"Now that we know your a "<<gender<<", we need to know\nwhat your name is.\n"; cin>>name; cout<<"\n"; cout<<"What a powerful name, "<<name<<" is!\n"; system("cls"); cout<<"Lets review your information:\nType:\n "<<type<<"\nGender:\n "<<gender<<"\nName:\n "<<name<<"\n"; cout<<"Now you are ready to begin your struggle!"; system("PAUSE"); system("cls"); cout<<"You lie on a cot in your barn after many hours of hard work. When you wake up you hear\na crackle as if something is burning..."; cout<<"\nYou hasten to the door where you see a goblin with a torch.\nYour whole house is a giant, smoldering, decayed pile of ash.\n"; cout<<"The goblin turns to look at you, his eyes a dull grey.\nYou slowly draw your sword, deep inside yourself\nyou draw on your abilities."; cout<<"Anger, hate, revenge swells up inside of you!\nIt is time to fight\n"; system("PAUSE"); goblinbattle(); system("PAUSE"); return 0; } void goblinbattle() { do { srand ( time(NULL) ); damage = rand() % str + 1; magicdamage = rand() % wis + 1; int gobstr = 4; int gobhealth; gobhealth = 25; cout<<"Choose your action: \n"; cout<<"1. Melee\n2. Magic\n3. 
Rest\n"; cin>>battleaction; switch(battleaction) { case 1: { action = rand() % 100 + 1; if(action <= (35 - str)) { cout<<"You miss the enemy with your sword.\n"; cout<<"The enemy has "<<gobhealth<<" health left.\n"; } else { cout<<"You cut deep into the enemy.\n"; cout<<"You do " <<damage<<" damage to the enemy.\n"; cout<<"He has " <<(gobhealth - damage)<<" health left.\n"; } } break; default: { cout<<"Incorrect entry"; } break; } } }//here
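For the record, the reported error at the closing brace is consistent with the do { ... } block in goblinbattle() never being closed with a while (condition); clause — a do-while loop needs one even when the body decides everything else. A minimal sketch of the shape the compiler expects (the loop condition shown is only a guess at what was intended):

void goblinbattle()
{
    int gobhealth = 25;          // hoisted out of the loop so the condition can see it
    do
    {
        // ... one battle turn: read battleaction, apply damage, print status ...
    } while (gobhealth > 0 && health > 0);   // <-- this terminating clause appears to be missing
}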
http://cboard.cprogramming.com/cplusplus-programming/90214-can%27t-find-error.html
CC-MAIN-2015-06
en
refinedweb
Windows PowerShell (a.k.a. Monad) is a new CLI (Command Line Interface) provided by Microsoft. PowerShell is based on .NET Framework 2.0, and passes data as .NET objects. In this article, you'll see how to develop a commandlet (cmdlet, a PowerShell command) which supports wildcards and uses ETS (Extended Type System), and how to use CustomPSSnapIn. The sample uses IIS 7 and the IIS 7 .NET libraries (Microsoft.Web.Administration) to retrieve the list of websites in the local IIS 7 server. Cmdlets are tiny .NET classes derived from System.Management.Automation.Cmdlet or from System.Management.Automation.PSCmdlet, and override a few methods with your own logic. The cmdlets are installed to PowerShell, and can be used from PowerShell, or from other applications which use PowerShell to invoke cmdlets. A cmdlet class can be derived from two different classes: Cmdlet and PSCmdlet. The difference is how much you depend on the Windows PowerShell environment. When deriving from Cmdlet, you aren't really depending on PowerShell. You are not impacted by any changes in the PowerShell runtime. In addition, your cmdlet can be invoked directly from any application instead of invoking it through the Windows PowerShell runtime. In most cases, deriving from Cmdlet is the best choice, except when you need full integration with the PowerShell runtime, access to session state data, call scripts etc. Then, you'll derive from PSCmdlet. Every cmdlet has a name in the same template: verb-noun. The verb (get, set, new, add, etc.) is from a built-in list of verb names. The noun is up to you. In this sample the verb is Get (taken from the VerbsCommon class) and the noun is "ws"; the Cmdlet attribute can also state whether the cmdlet supports ShouldProcess. Note that at the top of the code, I have these using statements:
using System;
using System.Collections.Generic;
using System.Text;
using System.Management.Automation;
using System.Collections;
using Microsoft.Web.Administration;
using System.Collections.ObjectModel;
using System.Diagnostics;
using System.Reflection;
The ones in bold are "special" namespaces which are relevant for PowerShell, except Microsoft.Web.Administration which is used to manage IIS 7. Almost any PowerShell cmdlet will use parameters to help users get relevant information. The parameters are, actually, properties which have the ParameterAttribute before:
[Parameter(Position = 0, Mandatory = true, ValueFromPipeline = true, ValueFromPipelineByPropertyName = true, HelpMessage = "Enter filter by site name (support wildcard)")]
[Alias("SiteName")]
public string Name { set { names = value; } }
Parameters can be accessed using position or property name. This means that if we set the parameter at position 0, you can call the cmdlet like this: get-websites *. The "*" is the parameter, or using property name: get-websites -Name *. Here, we also define the alias: get-websites -SiteName *. A mandatory parameter means the user must enter a value for the parameter. In our cmdlet, we can override a few methods. We must override at least one from this list:
- BeginProcessing: The code here, in most cases, is used to prepare the cmdlet. This code runs only once, when the cmdlet is called.
- ProcessRecord: This is the most commonly overridden method. This method includes the main logic. The code can run more than once, as required.
- EndProcessing: This overridden method is used to finalize the cmdlet operation.
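To make the structure concrete before walking through the details below, here is a minimal sketch of what the whole class can look like. It is a reconstruction, not the article's exact source: the class name is invented, it relies on the using directives listed above, and where the article later speaks of writing the result to the host, the sketch uses the base class's WriteObject method, which is the standard way to emit pipeline output.

[Cmdlet(VerbsCommon.Get, "ws")]
public class GetWebSitesCommand : Cmdlet
{
    private string names;

    [Parameter(Position = 0, Mandatory = true,
               ValueFromPipeline = true, ValueFromPipelineByPropertyName = true,
               HelpMessage = "Enter filter by site name (support wildcard)")]
    [Alias("SiteName")]
    public string Name { set { names = value; } }

    protected override void ProcessRecord()
    {
        // Match site names against the user-supplied wildcard.
        WildcardPattern wildcard = new WildcardPattern(
            names, WildcardOptions.IgnoreCase | WildcardOptions.Compiled);

        ServerManager manager = new ServerManager();
        foreach (Site site in manager.Sites)
        {
            if (!wildcard.IsMatch(site.Name)) continue;

            // Wrap the IIS site object and attach an extended (ETS) note property.
            PSObject ps = new PSObject(site);
            ps.Properties.Add(new PSNoteProperty("MaxBandwidthMB",
                                                 site.Limits.MaxBandwidth / 1024));
            WriteObject(ps);
        }
    }
}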
We can also override the StopProcessing method, which includes code that will run on an unexpected stop of a cmdlet (for example, the user uses Ctrl+c). StopProcessing I override only the ProcessRecord method. First, I create an instance of the generic System.Collections.ObjectModel.Collection<> collection. This is the collection type PowerShell uses. The type is PSObject, which is the main object PowerShell uses. ProcessRecord System.Collections.ObjectModel.Collection<> PSObject Because we want to support wildcards, I use the built-in wildcard classes that comes with PowerShell: WildcardOptions options = WildcardOptions.IgnoreCase |WildcardOptions.Compiled; WildcardPattern wildcard = new WildcardPattern(names, options); Then, we create an instance of Microsoft.Web.Administration.ServerManager, the object used to manage IIS 7 websites. Microsoft.Web.Administration.ServerManager In the foreach loop, we check for every site to see if its name matches the wildcard. If it does, we convert it to PSObject. foreach Extended Type System is one of the main and most interesting PowerShell concepts. We can extend any type we want, and add members in addition to the built-in ones. The PSObject object is the main object in PowerShell, because it includes the original object and the extended members in the same object, and gives the user who invokes this cmdlet the option to use any member - the original members and the extended ones. ps.Properties.Add(new PSNoteProperty("MaxBandwidthMB", site.Limits.MaxBandwidth / 1024)); Here, we add a new property called MaxBandwidthMB, and its value is the the original bandwidth value / 1024. MaxBandwidthMB Types can be extended from code, or from XML files, in a specific format. Here, we will see an example to extend a type with a new property - but we can add properties and methods from a lot of types: aliases, scripts, code methods, etc. Finally, we add the PSObject instance which includes the original (early bound object) and the extended members to the collection, and use the WriteHost method to write it to the host (can be PowerShell command line host, or another application that invoke our cmdlet). WriteHost After you'll finish the cmdlet, you can get a list of the members of the object returned from the cmdlet. Here you can see our extended property (marked): We use a try...catch statement, and if an exception occurs, we use the WriteError method to write information about the error to the host. try...catch WriteError If we use this cmdlet now from the console, we will get a strange output, which includes the object types and a few values. We have to specify the default output view we want. We do this in the format.ps1xml file. Note that we use the MaxBandwidthMB property, which is an extended one. This is the output without the format file: The snap-in includes the details PowerShell needs to install the cmdlet. We can derive it from PSSnapIn which is the "default" - install everything you can, or from CustomPSSnapIn, then we set exactly what to do. PSSnapIn CustomPSSnapIn Here, we add the cmdlet and the format file. First, we define a collection for cmdlets, formats, types, and providers. In the constructor, we add the cmdlet and the format file. We also override a few properties to include information about our snap-in. And, we override the includes the DLL of the project. Now, you have to add the snap-in: add-pssnapin cpdemo Note that you may have to change the path of the format file in the snapin.cs class file. And that's all! 
The cmdlet is ready to use and returns a collection of PSObjects which includes Microsoft.Web.Administration.Site and an extended property. Microsoft.Web.Administration.Site In PowerShell, you can use the object that the cmdlet returns for more things. This command, for example, will save the output to a CSV file: get-ws d* | Where{$_.MaxBandwidthMB -gt 4000000} | Select-Object Name,MaxBandwidthMB | out-csv c:\csv.csv This command will save a new CSV file which includes the list of names and the MaxBandwidthMB property for all sites for which the name begins with "d" and where the value of the property MaxBandwidthMB > 4.
http://www.codeproject.com/Articles/20867/Build-a-PowerShell-cmdlet
CC-MAIN-2015-06
en
refinedweb
iTextureList Struct Reference
This class represents a list of texture wrappers. More... [Textures & Materials]
#include <iengine/texture.h>
Inheritance diagram for iTextureList:
Detailed Description
This class represents a list of texture wrappers.
Main ways to get pointers to this interface:
Definition at line 168 of file texture.h.
Member Function Documentation
- Add a texture.
- Find a texture and return its index.
- Find a texture by name.
- Return a texture by index.
- Return the number of textures in this list.
- Create an engine wrapper for a pre-prepared iTextureHandle. The handle will be IncRefed.
- Create a new texture.
- Remove the nth texture.
- Remove a texture.
- Remove all textures.
The documentation for this struct was generated from the following file: iengine/texture.h
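A hedged sketch of how such a list is typically reached and queried from application code. The accessor on iEngine and the exact member names are inferred from the descriptions above and from the usual Crystal Space list conventions (the signatures did not survive in this copy of the page), so verify them against the real iengine/texture.h before relying on them.

#include <iengine/engine.h>
#include <iengine/texture.h>

// Returns true if the engine already has a texture registered under 'name'.
bool HasTexture(iEngine* engine, const char* name)
{
  iTextureList* textures = engine->GetTextureList();   // assumed accessor
  return textures->FindByName(name) != 0;              // "Find a texture by name"
}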
http://www.crystalspace3d.org/docs/online/api-1.0/structiTextureList.html
CC-MAIN-2015-06
en
refinedweb
Object Attribute DllImportAttribute mscorlib RuntimeInfrastructure Indicates that the target method of this attribute is an export from an unmanaged shared library. This attribute provides the information needed to call a method exported from an unmanaged shared library. This attribute provides the name of the shared library file, the name of the method within that library, the calling convention, and character set of the unmanaged function. [Note: A shared library refers to Dynamically Linked Libraries on Windows systems, and Shared Libraries on Unix systems.] Compilers are required to not preserve this type in metadata as a custom attribute. Instead, compilers are required to emit it directly in the file format, as described in Partition II of the CLI Specification. Metadata consumers, such as the Reflection API, are required to retrieve this data from the file format and return it as if it were a custom attribute. The following example demonstrates the use of the DllImportAttribute. [Note: The non-standard GetLocalTimeAPI used in this example indicates the current local system time.] using System; using System.Runtime.InteropServices; [ StructLayout( LayoutKind.Sequential )] public class SystemTime { public ushort year; public ushort month; public ushort dayOfWeek; public ushort day; public ushort hour; public ushort minute; public ushort second; public ushort milliseconds; } public class LibWrap { [ DllImportAttribute( "Kernel32", CharSet=CharSet.Auto, CallingConvention=CallingConvention.StdCall, EntryPoint="GetLocalTime" )] public static extern void GetLocalTime( SystemTime st ); } public class DllImportAttributeTest { public static void Main() { SystemTime st = new SystemTime(); LibWrap.GetLocalTime( st ); Console.Write( "The Date and Time is: " ); Console.Write( "{0:00}/{1:00}/{2} at ", st.month, st.day, st.year ); Console.WriteLine( "{0:00}:{1:00}:{2:00}", st.hour, st.minute, st.second ); } }When run at the given time on the given date, the output produced was The Date and Time is: 05/16/2001 at 11:39:17 AttributeUsageAttribute(AttributeTargets.Method, AllowMultiple=false, Inherited=false) System.Runtime.InteropServices Namespace DllImportAttribute Constructors DllImportAttribute Constructor DllImportAttribute Fields DllImportAttribute.CallingConvention Field DllImportAttribute.CharSet Field DllImportAttribute.EntryPoint Field DllImportAttribute.ExactSpelling Field DllImportAttribute Properties DllImportAttribute.Value Property Constructs and initializes a new instance of the DllImportAttribute class. - dllName - A String that specifies the name of the shared library containing the unmanaged method to import. If the shared library specified in dllName is not found, an error occurs at runtime. System.Runtime.InteropServices.DllImportAttribute Class, System.Runtime.InteropServices Namespace A CallingConvention value that specifies the calling convention used when passing arguments to the unmanaged implementation of a method in a shared library. The default CallingConvention value is System.Runtime.InteropServices.CallingConvention.StdCall. System.Runtime.InteropServices.DllImportAttribute Class, System.Runtime.InteropServices Namespace A CharSet value that controls function name modification and indicates how the String arguments to the method will be marshaled. This field is set to one of the CharSet values to indicate the required modifications to the name of the imported function and to the String arguments of the function. 
The default value for System.Runtime.InteropServices.DllImportAttribute.CharSet is System.Runtime.InteropServices.CharSet.Ansi. If System.Runtime.InteropServices.DllImportAttribute.CharSet is set to System.Runtime.InteropServices.CharSet.Unicode, all string arguments are converted to Unicode characters before being passed to the unmanaged implementation. If the field is set to System.Runtime.InteropServices.CharSet.Ansi the string characters are converted to ANSI characters. If System.Runtime.InteropServices.DllImportAttribute.CharSet is set to System.Runtime.InteropServices.CharSet.Auto, the String and function name conversion is platform dependent. The System.Runtime.InteropServices.DllImportAttribute.CharSet field might also be used to determine which version of a function is imported from the specified shared library by modifying the provided name of the function. The name modification is platform specific, and includes additional characters to indicate the character set. The default value of this field is System.Runtime.InteropServices.CharSet.Ansi. System.Runtime.InteropServices.DllImportAttribute Class, System.Runtime.InteropServices Namespace A String that specifies the name of the shared library entry point. System.Runtime.InteropServices.DllImportAttribute Class, System.Runtime.InteropServices Namespace A Boolean value indicating whether the name of the entry point in the unmanaged library is modified to correspond to the CharSet value specified in the System.Runtime.InteropServices.DllImportAttribute.CharSet field. System.Runtime.InteropServices.DllImportAttribute Class, System.Runtime.InteropServices Namespace Gets the name of the shared library file with the entry point. A String containing the name of the shared library file from which a function implementation is imported. This property is read-only. System.Runtime.InteropServices.DllImportAttribute Class, System.Runtime.InteropServices Namespace
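To make the field descriptions above concrete, here is a small illustrative declaration; the user32 library, the MessageBoxW entry point and its signature are an example chosen for illustration and are not part of the specification above:

using System;
using System.Runtime.InteropServices;

public class NativeMethods
{
    // Import MessageBoxW from user32, forcing the Unicode entry point.
    // EntryPoint, CharSet, ExactSpelling and CallingConvention correspond
    // to the fields documented above.
    [DllImport("user32", EntryPoint = "MessageBoxW",
               CharSet = CharSet.Unicode, ExactSpelling = true,
               CallingConvention = CallingConvention.StdCall)]
    public static extern int MessageBox(IntPtr hWnd, string text,
                                        string caption, uint type);
}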
http://www.gnu.org/software/dotgnu/pnetlib-doc/System/Runtime/InteropServices/DllImportAttribute.html
CC-MAIN-2015-06
en
refinedweb
iofunc_func_init() Initialize the default POSIX-layer function tables

Synopsis:

#include <sys/iofunc.h>

void iofunc_func_init( unsigned nconnect,
                       resmgr_connect_funcs_t *connect,
                       unsigned nio,
                       resmgr_io_funcs_t *io );

Since: BlackBerry 10.0.0

Arguments:
- nconnect - The number of entries in the connect table that you want to fill. Typically, you pass _RESMGR_CONNECT_NFUNCS for this argument.
- connect - A pointer to a resmgr_connect_funcs_t structure that you want to fill with the default connect functions.
- nio - The number of entries in the io table that you want to fill. Typically, you pass _RESMGR_IO_NFUNCS for this argument.
- io - A pointer to a resmgr_io_funcs_t structure that you want to fill with the default I/O functions.

Library: libc
Use the -l c option to qcc to link against this library. This library is usually included automatically.

Description: iofunc_func_init() fills the given connect and I/O tables with pointers to the default POSIX-layer handlers (for example iofunc_open_default(), iofunc_read_default(), and iofunc_write_default()). A resource manager typically calls it first and then overrides only the entries it needs to customize.

Examples: see the sketch below.

Last modified: 2014-06-24
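A minimal resource-manager skeleton showing the typical call; the pathname /dev/sample, the buffer sizes, and the omitted error handling are illustrative assumptions, not content from the reference page:

#include <string.h>
#include <sys/stat.h>
#include <sys/iofunc.h>
#include <sys/dispatch.h>

static resmgr_connect_funcs_t connect_funcs;
static resmgr_io_funcs_t      io_funcs;
static iofunc_attr_t          attr;

int main(void)
{
    dispatch_t         *dpp = dispatch_create();
    dispatch_context_t *ctp;
    resmgr_attr_t       resmgr_attr;

    memset(&resmgr_attr, 0, sizeof resmgr_attr);
    resmgr_attr.nparts_max = 1;          /* illustrative values */
    resmgr_attr.msg_max_size = 2048;

    /* Fill both tables with the default POSIX-layer handlers. */
    iofunc_func_init(_RESMGR_CONNECT_NFUNCS, &connect_funcs,
                     _RESMGR_IO_NFUNCS, &io_funcs);

    /* Override individual entries here, e.g. io_funcs.read = my_read; */

    iofunc_attr_init(&attr, S_IFNAM | 0666, NULL, NULL);
    resmgr_attach(dpp, &resmgr_attr, "/dev/sample", _FTYPE_ANY, 0,
                  &connect_funcs, &io_funcs, &attr);

    ctp = dispatch_context_alloc(dpp);
    while (1) {                          /* dispatch loop */
        ctp = dispatch_block(ctp);
        dispatch_handler(ctp);
    }
    return 0;
}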
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/i/iofunc_func_init.html
CC-MAIN-2015-06
en
refinedweb
This tutorial introduces texture mapping. It's the first in a series of tutorials about texturing in GLSL shaders in Blender.Edit). The horizontal coordinate is officially.”. For example,. The OpenGL's wrap mode corresponds to Blender's settings under Properties > Texture tab > Image Mapping. Unfortunately, Blender doesn't appear to set the OpenGL wrap mode but it is always set to “repeat”. Texturing a Sphere in BlenderEdit To map the image of the Earth's surface to the left onto a sphere in Blender, you first have to download this image to your computer: click the image to the left until you get to a larger version and save it (usually with a right-click) to your computer (remember where you saved it). Then switch to Blender and add a sphere (in an Info window choose Add > Mesh > UV Sphere), select it in the 3D View (by right-clicking), activate smooth shading (in the Tool Shelf of the 3D View, press t if it is not active), make sure that Display > Shading: GLSL is set in the Properties of the 3D View (press n if they aren't displayed), and switch the Viewport Shading of the 3D View to Textured (the second icon to the right of the main menu in the 3D View). Now (with the sphere still being selected) add a material (in a Properties window > Material tab > New). Then add a new texture (in the Properties window > Textures tab > New) and select Image or Movie for the Type and click Image > Open. Select your file in the file browser and click on Open Image (or double-click it in the file browser). The image should now appear in the preview section of the Textures tab and Blender should put it onto the sphere in the 3D View. Now you should make sure that the Coordinates in the Properties window > Textures tab > Mapping are set to Generated. This means that our texture coordinates will be set to the coordinates in object space. Specifying or generating texture coordinates (i.e. UVs) in any modeling tool is a whole different topic which is well beyond the scope of this tutorial. With these settings, Blender will also send texture coordinates to the vertex shader. (Actually, we could also use the object coordinates in gl_Vertex because they are the same in this case.) Thus, we can write a vertex shader that receives the texture coordinates and hands them through to the fragment shader. The fragment shader then does some computation on the four-dimensional texture coordinates to compute the longitude and latitude (scale to the range from 0 to 1), which will be used as texture coordinates here. Usually this step would be unnecessary since the texture coordinates should already correctly specify where to look up the texture image. (In fact, any such processing of texture coordinates in the fragment shader should be avoided for performance reasons; here I'm only using this trick to avoid setting up appropriate UV texture coordinates.) 
The Python script to set up the shader could be: import bge cont = bge.logic.getCurrentController() VertexShader = """ varying vec4 texCoords; // texture coordinates at this vertex void main() { texCoords = gl_MultiTexCoord0; // in this case equal to gl_Vertex gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; } """ FragmentShader = """ varying vec4 texCoords; // interpolated texture coordinates for this fragment uniform sampler2D textureUnit; // a small integer identifying a texture image void main() { vec2 longitudeLatitude = vec2( (atan(texCoords.y, texCoords.x) / 3.1415926 + 1.0) * 0.5, 1.0 - acos(texCoords.z) / 3.1415926); // processing of the texture coordinates; // this is unnecessary if correct texture coordinates // are specified within Blender gl_FragColor = texture2D(textureUnit, longitudeLatitude); // look up the color of the texture image specified // by the uniform "textureUnit" at the position // specified by "longitudeLatitude.x" and // "longitudeLatitude.y" and return it in "gl_FragColor" } """ mesh = cont.owner.meshes[0] for mat in mesh.materials: shader = mat.getShader() if shader != None: if not shader.isValid(): shader.setSource(VertexShader, FragmentShader, 1) shader.setSampler('textureUnit', 0) Note the last line shader.setSampler('textureUnit', 0) in the Python script: it sets the uniform variable textureUnity to 0. This specifies that the texture should be used which is first in the list in the Properties window > Textures tab. A value of 1 would select the second in the list, etc. In fact, for each sampler2D variable that you use in a fragment shader, you have to set its value with a call to setSampler in the Python script as shown above. Actually, a sampler2D uniform specifies the texture unit of the GPU. (A texture unit is a part of the hardware that is responsible for the lookup and interpolation of colors in texture images.) The number of texure units of GPUs is available in the built-in constant gl_MaxTextureUnits, which is usually 4 or 8. Thus, the number of different texture images available in a fragment shader is limited to this number. If everything went right, the texture image should now appear correctly mapped onto the sphere when you start the game engine by pressing p. (Otherwise Blender maps it differently onto the sphere.) Congratulations! How It WorksEdit Since many techniques use texture mapping, it pays off very well to understand what is happening here. Therefore, let's review the shader code: The vertices of Blender's sphere object come with attribute data in gl_MultiTexCoord0 for each vertex, which specifies texture coordinates that are in our particular example the same values as in the attribute gl_Vertex, which specifies a position in object space.. In this particular example, the fragment shader computes new texture coordinates in longitudeLatitude. Usually, this wouldn't be necessary because correct texture coordinates should be specified within Blender using UV mapping. The fragment shader then uses the texture coordinates to look up a color in the texture image specified by the uniform textureUnit. SummaryEdit You have reached the end of one of the most important tutorials. We have looked at: - How to set up a Blender object for texturing. - How to import a texture image. - How a vertex shader and a fragment shader work together to map a texture image onto a mesh.”. < GLSL Programming/Blender
http://en.m.wikibooks.org/wiki/GLSL_Programming/Blender/Textured_Spheres
CC-MAIN-2015-06
en
refinedweb
GLPK/Python

There are several Python language bindings to choose from. Each provides a differing level of abstraction. All are open source software. The Scripting plus MathProg page offers further information on the use of Python and GLPK.

Python-GLPK

The following minimalistic program will show the GLPK version number:

import glpk
print glpk.glp_version()

Build and install from source

Simple swig bindings for the GNU Linear Programming Kit. A description, installation instructions, and an example are available on PyPI; the source is available on GitHub.

User recommendations

- ↑ Hart, William E. (2008). Python optimization modeling objects (Pyomo).
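To give one concrete picture of driving GLPK from Python (here via the Pyomo modeling layer cited in the reference above, rather than the low-level bindings themselves), a small linear program might look like the sketch below. The model and its numbers are invented for illustration; Pyomo and the glpsol solver must be installed:

from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                           SolverFactory, NonNegativeReals, maximize)

model = ConcreteModel()
model.x = Var(within=NonNegativeReals)
model.y = Var(within=NonNegativeReals)

# maximize 3x + 2y subject to two resource constraints
model.obj = Objective(expr=3 * model.x + 2 * model.y, sense=maximize)
model.c1 = Constraint(expr=model.x + model.y <= 4)
model.c2 = Constraint(expr=model.x + 3 * model.y <= 6)

SolverFactory('glpk').solve(model)   # hands the model to the glpsol backend
print(model.x.value, model.y.value)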
http://en.wikibooks.org/wiki/GLPK/Python
CC-MAIN-2015-06
en
refinedweb
This article is in the Product Showcase section for our sponsors at CodeProject. These articles are intended to provide you with information on products and services that we consider useful and of value to developers. A lot of buzz talks over Internet which suggests that machine learning and Artificial Intelligence (AI) are basically the same thing, but this is a misunderstanding. Both machine learning and Knowledge Reasoning have the same concern: the construction of intelligent software. However, while machine learning is an approach to AI based on algorithms whose performance improve as they are exposed to more data over time, Knowledge Reasoning is a sibling approach based on symbolic logic. Knowledge Reasoning’s strategy is usually developed by using functional and logic based programming languages such as Lisp*, Prolog*, and ML* due to their ability to perform symbolic manipulation. This kind of manipulation is often associated with expert systems, where high level rules are often provided by humans and used to simulate knowledge, avoiding low-level language details. This focus is called Mind Centered. Commonly, some kind of (backward or forward) logical inference is needed. Machine learning, on its turn, is associated with low-level mathematical representations of systems and a set of training data that lead the system toward performance improvement. Once there is no high-level modeling, the process is called Brain Centered. Any language that facilitates writing vector algebra and numeric calculus over an imperative paradigm works just fine. For instance, there are several machine learning systems written in Python* simply because the mathematical support is available as libraries for such programming language. This article aims to explore what happens when Intel solutions support functional and logic programming languages that are regularly used for AI. Despite machine learning systems success over the last two decades, the place for traditional AI has neither disappeared nor diminished, especially in systems where it is necessary to explain why a computer program behaves the way it does. Hence, it is not feasible to believe that next generations of learning systems will be developed without high-level descriptions, and thus it is expected that some problems will demand symbolical solutions. Prolog and similar programming languages are valuable tools for solving such problems. As it will be detailed below, this article proposes a Prolog interpreter recompilation using Intel® C++ Compiler and libraries in order to evaluate their contribution to logic based AI. The two main products used are Intel® Parallel Studio XE Cluster Edition and SWI-Prolog interpreter. An experiment with a classical AI problem is also presented. 1. The following description uses a system equipped with: Intel® Core™ i7 4500U@1.8 GHz processor, 64 bits, Ubuntu 16.04 LTS operating system, 8GB RAM, and hyper threading turned on with 2 threads per core (you may check it by typing sudo dmidecode -t processor | grep -E '(Core Count|Thread Count)') . Different operating systems may require minor changes. sudo dmidecode -t processor | grep -E '(Core Count|Thread Count) 2. Preparing the environment. Optimizing performance on hardware is an iterative process. Figure 1 shows a flow chart describing how the various Intel tools help you in several stages of such optimization task. The most convenient way to install Intel tools is downloading and installing Intel® Parallel Studio XE 2017. 
Extracting the .tgz file, you will obtain a folder called parallel_studio_xe_2017update4_cluster_edition_online (or similar version). Open the terminal and then choose the graphical installation: parallel_studio_xe_2017update4_cluster_edition_online <user>@<host>:~% cd parallel_studio_xe_2017update4_cluster_edition_online <user>@<host>:~/parallel_studio_xe_2017update4_cluster_edition_online% ./install_GUI.sh Although you may prefer to perform a full install, this article will choose a custom installation with components that are frequently useful for many developers. It is recommended that these components also be installed to allow further use of such performance libraries in subsequent projects. The installation is very straight-forward, and it does not require many comments to be made. After finishing such task, you must test the availability of Intel® C++ Compiler by typing in your terminal: <user>@<host>:~% cd .. <user>@<host>:~% icc --version icc (ICC) 17.0.4 20170411 If the icc command was not found, it is because the environment variables for running the compiler environment were not set. You must do it by running a predefined script with an argument that specifies the target architecture: icc <user>@<host>:~% source /opt/intel/compilers_and_libraries/linux/bin/compilervars.sh -arch intel64 -platform linux If you wish, you may save disk space by doing: <user>@<host>:~% rm -r parallel_studio_xe_2017update4_cluster_edition_online 3. Building Prolog. This article uses the SWI-Prolog interpreter2, which is covered by the Simplified BSD license. SWI-Prolog offers a comprehensive free Prolog environment. It is widely used in research and education as well as commercial applications. You must download the sources in .tar.gz format. At the time this article was written, the available version is 7.4.2. First, decompress the download file: <user>@<host>:~% tar zxvf swipl-<version>.tar.gz Then, create a folder where the Prolog interpreter will be installed: <user>@<host>:~% mkdir swipl_intel After that, get ready to edit the building variables: <user>@<host>:~% cd swipl-<version> <user>@<host>:~/swipl-<version>% cp -p build.templ build <user>@<host>:~/swipl-<version>% <edit> build At the build file, look for the PREFIX variable, which indicates the place where SWI-Prolog will be installed. You must set it to: build PREFIX PREFIX=$HOME/swipl_intel Then, it is necessary to set some compilation variables. The CC variable must be changed to indicate that Intel® C++ Compiler will be used instead of other compilers. The COFLAGS enables optimizations for speed. The compiler vectorization is enabled at –O2. You may choose higher levels (–O3), but the suggested flag is the generally recommended optimization level. With this option, the compiler performs some basic loop optimizations, inlining of intrinsic, intra-file interprocedural optimization, and most common compiler optimization technologies. The –mkl=parallel option allows access to a set of math functions that are optimized and threaded to explore all the features of the latest Intel® Core™ processors. It must be used with a certain Intel® MKL threading layer, depending on the threading option provided. In this article, the Intel® TBB is such an option and it is used by choosing –tbb flag. At last, the CMFLAGS indicates the compilation will create a 64-bit executable. CC COFLAGS –O2 –O3 –mkl=parallel –tbb CMFLAGS export CC="icc" export COFLAGS="-O2 -mkl=parallel -tbb" export CMFLAGS="-m64" Save your build file and close it. 
Note that when this article was written, SWI-Prolog was not Message Passing Interface (MPI) ready3. Besides, when checking its source-code, no OpenMP* macros were found (OMP) and thus it is possible that SWI-Prolog is not OpenMP ready too. OMP If you already have an SWI-Prolog instance installed on your computer you might get confused with which interpreter version was compiled with Intel libraries, and which was not. Therefore, it is useful to indicate that you are using the Intel version by prompting such feature when you call SWI-Prolog interpreter. Thus, the following instruction provides a customized welcome message when running the interpreter: <user>@<host>:~/swipl-<version>% cd boot <user>@<host>:~/swipl-<version>/boot% <edit> messages.pl prolog_message(welcome) --> [ 'Welcome to SWI-Prolog (' ], prolog_message(threads), prolog_message(address_bits), ['version ' ], prolog_message(version), [ ')', nl ], prolog_message(copyright), [ nl ], prolog_message(user_versions), [ nl ], prolog_message(documentaton), [ nl, nl ]. and add @ Intel® architecture by changing it to: @ Intel® architecture prolog_message(welcome) --> [ 'Welcome to SWI-Prolog (' ], prolog_message(threads), prolog_message(address_bits), ['version ' ], prolog_message(version), [ ') @ Intel® architecture', nl ], prolog_message(copyright), [ nl ], prolog_message(user_versions), [ nl ], prolog_message(documentaton), [ nl, nl ]. Save your messages.pl file and close it. Start building. messages.pl <user>@<host>:~/swipl-<version>/boot% cd .. <user>@<host>:~/swipl-<version>% ./build The compilation performs several checking and it takes some time. Don’t worry, it is really very verbose. Finally, you will get something like this: make[1]: Leaving directory '~/swipl-<version>/src' Warning: Found 9 issues. No errors during package build Now you may run SWI-Prolog interpreter by typing: <user>@<host>:~/swipl-<version>% cd ~/swipl_intel/lib/swipl-7.4.2/bin/x86_64-linux <user>@<host>:~/swipl_intel/lib/swipl-<version>/bin/x86_64-linux% ./swipl Welcome to SWI-Prolog (threaded, 64 bits, version 7.4.2) @ Intel® architecture SWI-Prolog comes with ABSOLUTELY NO WARRANTY. This is free software. Please run ?- license. for legal details. For online help and background, visit For built-in help, use ?- help(Topic). or ?- apropos(Word). 1 ?- For exiting the interpreter, type halt. . Now you a ready to use Prolog, powered by Intel® architecture. halt. . You may also save disk space by doing: <user>@<host>:~/swipl_intel/lib/swipl-<version>/bin/x86_64-linux% cd ~ <user>@<host>:~% rm -r swipl-<version> Until now, there is an Intel compiled version of SWI-Prolog in your computer. Since this experiment intends to compare such combination with another environment, a SWI-Prolog interpreter using a different compiler, such as gcc 5.4.0, is needed. The procedure for building an alternative version is quite similar to the one described in this article. The Tower of Hanoi puzzle4 is a classical AI problem and it was used for probing the Prolog interpreters. The following code is the most optimized implementation: move(1,X,Y,_) :- write('Move top disk from '), write(X), write(' to '), write(Y), nl. move(N,X,Y,Z) :- N>1, M is N-1, move(M,X,Z,Y), move(1,X,Y,_), move(M,Z,Y,X). It moves the disks between pylons and logs their moments. When loading such implementation and running a 3 disk instance problem (move(3,left,right,center)), the following output is obtained after 48 inferences: true . 
This test intends to compare the performance of Intel SWI-Prolog version against gcc compiled version. Note that terminal output printing is a slow operation, so it is not recommended to use it in benchmarking tests since it masquerades results. Therefore, the program was changed in order to provide a better probe with a dummy sum of two integers. move(1,X,Y,_) :- S is 1 + 2. move(N,X,Y,Z) :- N>1, M is N-1, move(M,X,Z,Y), move(1,X,Y,_), move(M,Z,Y,X). Recall that the SWI-Prolog source-code did not seem to be OpenMP ready. However, most loops can be threaded by inserting the macro #pragma omp parallel for right before the loop. Thus, time-consuming loops from SWI-Prolog proof procedure were located and the OpenMP macro was attached to such loops. The source-code was compiled with –openmp option, a third compilation of Prolog interpreter was built, and 8 threads were used. If the reader wishes to build this parallelized version of Prolog, the following must be done. #pragma omp parallel for –openmp At ~/swipl-/src/pl-main.c add #include <omp.h> to the header section of pl-main.c; if you chose, you can add omp_set_num_threads(8) inside main method to specify 8 OpenMP threads. Recall that this experiment environment provides 4 cores and hyper threading turned on with 2 threads per core, thus 8 threads are used, otherwise leave it out and OpenMP will automatically allocate the maximum number of threads it can. ~/swipl-/src/pl-main.c #include <omp.h> pl-main.c; omp_set_num_threads(8) main int main(int argc, char **argv){ omp_set_num_threads(8); #if O_CTRLC main_thread_id = GetCurrentThreadId(); SetConsoleCtrlHandler((PHANDLER_ROUTINE)consoleHandlerRoutine, TRUE); #endif #if O_ANSI_COLORS PL_w32_wrap_ansi_console(); /* decode ANSI color sequences (ESC[...m) */ #endif if ( !PL_initialise(argc, argv) ) PL_halt(1); for(;;) { int status = PL_toplevel() ? 0 : 1; PL_halt(status); } return 0; } At ~/swipl-<version>/src/pl-prof.c add #include <omp.h> to the header section of pl-prof.c; add #pragma omp parallel for right before the for-loop from methods activateProfiler, add_parent_ref, profResumeParent, freeProfileNode, freeProfileData(void). int activateProfiler(prof_status active ARG_LD){ .......... < non relevant source code ommited > .......… LD->profile.active = active; #pragma omp parallel for for(i=0; i<MAX_PROF_TYPES; i++) { if ( types[i] && types[i]->activate ) (*types[i]->activate)(active); } .......... < non relevant source code ommited > .......... 
return TRUE; } static void add_parent_ref(node_sum *sum, call_node *self, void *handle, PL_prof_type_t *type, int cycle) { prof_ref *r; sum->calls += self->calls; sum->redos += self->redos; #pragma omp parallel for for(r=sum->callers; r; r=r->next) { if ( r->handle == handle && r->cycle == cycle ) { r->calls += self->calls; r->redos += self->redos; r->ticks += self->ticks; r->sibling_ticks += self->sibling_ticks; return; } } r = allocHeapOrHalt(sizeof(*r)); r->calls = self->calls; r->redos = self->redos; r->ticks = self->ticks; r->sibling_ticks = self->sibling_ticks; r->handle = handle; r->type = type; r->cycle = cycle; r->next = sum->callers; sum->callers = r; } void profResumeParent(struct call_node *node ARG_LD) { call_node *n; if ( node && node->magic != PROFNODE_MAGIC ) return; LD->profile.accounting = TRUE; #pragma omp parallel for for(n=LD->profile.current; n && n != node; n=n->parent) { n->exits++; } LD->profile.accounting = FALSE; LD->profile.current = node; } static void freeProfileNode(call_node *node ARG_LD) { call_node *n, *next; assert(node->magic == PROFNODE_MAGIC); #pragma omp parallel for for(n=node->siblings; n; n=next) { next = n->next; freeProfileNode(n PASS_LD); } node->magic = 0; freeHeap(node, sizeof(*node)); LD->profile.nodes--; } static void freeProfileData(void) { GET_LD call_node *n, *next; n = LD->profile.roots; LD->profile.roots = NULL; LD->profile.current = NULL; #pragma omp parallel for for(; n; n=next) { next = n->next; freeProfileNode(n PASS_LD); } assert(LD->profile.nodes == 0); } The test employs a 20 disk instance problem, which is accomplished after 3,145,724 inferences. The time was measured using Prolog function called time. Each test ran 300 times in a loop and any result that is much higher than others was discarded. Figure 2 presents the CPU time consumed by all three configurations. Considering the gcc compiled Prolog as baseline, the speedup obtained by Intel tools was 1.35. This is a good result since the source-code was not changed at all, parallelism was not explored by the developer and specialized methods were not called, that is, all blind duty was delegated to Intel® C++ Compiler and libraries. When Intel implementation of OpenMP 4.0 was used, the same speedup increased to 4.60x. This article deliberately paid attention to logic based AI. It shows that benefits with using Intel development tools for AI problems are not restricted to machine learning. A common distribution of Prolog was compiled with Intel® C++ Compiler, Intel® MKL and Intel implementation of OpenMP 4.0. A significant acceleration was obtained, even though the algorithm of Prolog inference mechanism is not easily optimized. Therefore, any solution for a symbolic logic problem, implemented in such Prolog interpreter, will be powered by an enhanced engine. 1. Intel. Getting Started with Intel® Parallel Studio XE 2017 Cluster Edition for Linux*, Intel® Parallel Studio 2017 Documentation, 2017. 2. SWI-Prolog, 2017., access on June 18th, 2017. 3. Swiprolog - Summary and Version Information, High Performance Computing, Division of Information Technology, University of Maryland, 2017., access on June 20th, 2017. 4. A. Beck, M. N. Bleicher, D. W. Crowe, Excursions into Mathematics, A K Peters, 2000. 5. Russell, Stuart; Norvig, Peter. Artificial Intelligence: A Modern Approach, Prentice Hall Series in Artificial Intelligence, Pearson Education Inc., 2nd edition, 2003. This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
https://codeproject.freetls.fastly.net/Articles/1203631/Building-and-Probing-Prolog-with-Intel-Architect?PageFlow=Fluid
CC-MAIN-2021-39
en
refinedweb
CONCEPTS USED: Dynamic programming.

DIFFICULTY LEVEL: Easy.

PROBLEM STATEMENT (SIMPLIFIED): Arnab is handed N rupees and is asked by his mom to buy at least two ingredients which, when used to make a dish, will make it the sweetest. The sweetness of the dish is the product of the costs of all the ingredients. Arnab's mom has given him the exact amount, so after buying the ingredients there must be no money left. The market Arnab visits has ingredients of every possible positive cost (cost > 0). Help Arnab find the maximum value of sweetness.

For example:
N=4: 4 can be divided into (1,3) and (2,2); (2,2) gives the maximum result, i.e. 4.
N=5: similarly, we can divide 5 as (1,4) and (2,3); 2*3 = 6 gives the maximum product.

OBSERVATION:

WRONG APPROACH: One may think that dividing the given number into two equal halves gives the maximum product, but this approach is wrong. Example: N=10; if you think (5,5) gives the maximum product, you are wrong, because the maximum product is 36: 10 can be divided as [3,3,4].

Mathematically, we are given n and we need to maximize a1 * a2 * a3 * ... * aK such that n = a1 + a2 + a3 + ... + aK and a1, a2, ..., aK > 0.

SOLVING APPROACH: This problem is similar to the Rod Cutting problem. We can get the maximum product by making a cut at different positions and comparing the values obtained after each cut, recursively calling the same function on the piece obtained after a cut. Can you think of the recursive function now?

maxProduct(n) = max( i*(n-i), i*maxProduct(n-i) ) for all i in {1, 2, 3, ..., n}, where maxProduct(n) is the maximum product of a division of n.

Refer to this image for better understanding with data structures and algorithms. You are encouraged to implement the above brute force on your own first, before looking at the solution. See original problem statement here.

Overlapping Subproblems: Let's consider the conditions for using DP to find an efficient solution:

Overlapping Sub-problems — Yes. From the image above, you can notice the overlapping subproblems. When you implement this using plain recursion, each subproblem will be computed several times.

Optimal Substructure — Yes. At each node in the call-tree, we're calling the recursive function on a smaller number. The decision of which path to go down is based on the max product returned for each sub-problem.

Recomputation of the same subproblems can be avoided by constructing a temporary array dp[] in a bottom-up manner.

O(n) approach: The idea is to break the number into multiples of 2 or 3. If you write out the breaking results for a couple of numbers like 7 to 10 you should get the idea.
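Before the bottom-up solutions that follow, here is a direct top-down rendering of the recurrence above with memoization, so each subproblem is computed only once. This is an illustrative sketch, not the site's reference code:

#include <bits/stdc++.h>
using namespace std;

long long memo[101];

// maxProduct(n): best product of at least two positive parts summing to n
long long maxProduct(int n) {
    if (n <= 1) return 0;              // cannot split 0 or 1 into two positive parts
    if (memo[n] != -1) return memo[n];
    long long best = 0;
    for (int i = 1; i < n; ++i)        // first part is i, remaining n - i
        best = max(best, max((long long)i * (n - i),      // stop splitting here
                             i * maxProduct(n - i)));     // keep splitting the rest
    return memo[n] = best;
}

int main() {
    memset(memo, -1, sizeof memo);
    cout << maxProduct(10) << "\n";    // prints 36 = 3 * 3 * 4
}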
Assuming the max number is 60, there is a simple dynamic solution (Leetcode-style):

class Solution {
    int dp[60];
public:
    int integerBreak(int n) {
        dp[1]=1, dp[2]=1, dp[3]=2, dp[4]=4, dp[5]=6, dp[6]=9;
        for(int i=7; i<=n; i++)
            dp[i] = max(dp[i-3]*3, dp[i-2]*2);
        return dp[n];
    }
};

SOLUTIONS:

#include <stdio.h>

int main()
{
    long long dp[101];
    dp[0]=0; dp[1]=0; dp[2]=1; dp[3]=2; dp[4]=4; dp[5]=6; dp[6]=9;
    for(int i=7; i<101; i++)
    {
        long long mx = -1;
        for(int j=1; j<=i; j++)
            mx = mx > dp[j]*(i-j) ? mx : dp[j]*(i-j);
        dp[i] = mx;
    }
    int t; scanf("%d",&t);
    while(t--)
    {
        int n; scanf("%d",&n);
        printf("%lld\n", dp[n]);
    }
    return 0;
}

import java.util.Scanner;
import java.util.*;

class HelloWorld{
    public static void main(String []args){
        Scanner myObj = new Scanner(System.in);
        long [] dp = new long[101];
        dp[1]=0; dp[2]=1; dp[3]=2; dp[4]=4; dp[0]=0; dp[5]=6; dp[6]=9;
        for(int i=7; i<101; i++){
            long mx = -1;
            for(int j=1; j<=i; j++){
                mx = Math.max(mx, dp[j]*(i-j));
            }
            dp[i] = mx;
        }
        int t = myObj.nextInt();
        while(t-- > 0){
            int n = myObj.nextInt();
            System.out.println(dp[n]);
        }
    }
}

#include <bits/stdc++.h>
using namespace std;

int main()
{
    long long dp[101];
    dp[0]=0; dp[1]=0; dp[2]=1; dp[3]=2; dp[4]=4; dp[5]=6; dp[6]=9;
    for(int i=7; i<101; i++)
    {
        long long mx = -1;
        for(int j=1; j<=i; j++)
            mx = max(mx, dp[j]*(i-j));
        dp[i] = mx;
    }
    int t; cin>>t;
    while(t--)
    {
        int n; cin>>n;
        cout << dp[n] << "\n";
    }
    return 0;
}

Space Complexity of the Dynamic Programming solution is O(n).
https://www.prepbytes.com/blog/dynamic-programming/foolish-items/
CC-MAIN-2021-39
en
refinedweb
#include <RWMutex.hpp> rw_mutex A mutex divided into reading and writing. Of course, rw_mutex is already defined in linux C. But it is dependent on the linux OS, so that cannot be compiled in Window having the rw_mutex. There's not a class like rw_mutex in STL yet. It's the reason why RWMutex is provided. As that reason, if STL supports the rw_mutex in near future, the RWMutex can be deprecated. Library - Critical Section Definition at line 29 of file RWMutex.hpp. Default Constructor. Definition at line 49 of file RWMutex.hpp. Lock on read. Increases a reading count. When write_lock is on a progress, wait until write_unlock to be called. Definition at line 67 of file RWMutex.hpp. Referenced by samchon::library::UniqueReadLock::lock(), samchon::library::SharedReadLock::lock(), and samchon::library::SharedReadLock::SharedReadLock(). Unlock of read. Decreases a reading count. When write_lock had done after read_lock, it continues by read_unlock if the reading count was 1 (read_unlock makes the count to be zero). Definition at line 90 of file RWMutex.hpp. Referenced by samchon::library::UniqueReadLock::unlock(), samchon::library::SharedReadLock::unlock(), samchon::library::SharedReadLock::~SharedReadLock(), and samchon::library::UniqueReadLock::~UniqueReadLock(). Lock on writing. Changes writing flag to true. If another write_lock or read_lock is on a progress, wait until them to be unlocked. Definition at line 117 of file RWMutex.hpp. Referenced by samchon::library::UniqueWriteLock::lock(), and samchon::library::SharedWriteLock::lock(). Unlock on writing. Definition at line 130 of file RWMutex.hpp. Referenced by samchon::library::UniqueWriteLock::unlock(), samchon::library::SharedWriteLock::unlock(), samchon::library::SharedWriteLock::~SharedWriteLock(), and samchon::library::UniqueWriteLock::~UniqueWriteLock().
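For context on the remark above that the STL might eventually provide this: C++17 did add std::shared_mutex, and the read/write pattern this class implements looks like the following sketch, which is independent of the samchon API (the shared_data container and the two functions are invented for illustration):

#include <shared_mutex>
#include <vector>

std::shared_mutex mtx;
std::vector<int> shared_data;

int read_sum()                       // many readers may run concurrently
{
    std::shared_lock<std::shared_mutex> lock(mtx);   // analogous to a read lock
    int sum = 0;
    for (int v : shared_data) sum += v;
    return sum;
}

void append(int value)               // writers get exclusive access
{
    std::unique_lock<std::shared_mutex> lock(mtx);   // analogous to a write lock
    shared_data.push_back(value);
}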
http://samchon.github.io/framework/api/cpp/df/d85/classsamchon_1_1library_1_1RWMutex.html
CC-MAIN-2022-27
en
refinedweb
Josh Bloch

I was reading Effective Java again - it's 99% applicable to .NET also. Now there is this whole section on making defensive copies and returning clones, all intended to prevent people from violating the invariants that your class is trying to enforce. I am 100,000% sure JB is absolutely correct in describing these vulnerabilities, but, wow, it seems like a TON of overhead both in terms of code and performance, for the remote case that someone is smart enough to try to mangle your code that way. Few of the books/examples/etc. that I come across suggest going to these lengths, so I am wondering, is JB's advice unduly coloured by his authoring ultra-widely used APIs at Sun? Or do most OO programmers really do this?

NetFreak
Tuesday, September 28, 2004

Oh, also, I should add, this is a Fantastic book, my skepticism about this particular piece of advice notwithstanding.

NetFreak
Tuesday, September 28, 2004

I would agree that these types of defensive mechanisms are primarily a concern when exposing the API to the outside world. In the .Net environment an example of an API that is "public" but not for reuse is the Context class in the Remoting namespace; the documentation for this class fully qualifies it with "this class is used internally by the .Net framework and is not intended to be used in your code". Of course people could still use this code, but at their own risk. This is in contrast to java.util.Hashtable, which is a public API both technically and explicitly.

BTW... Effective Java is easily the best book on API design I've come across.

~harris

Harris Reynolds
Wednesday, September 29, 2004

I agree completely. It's unfortunate that it appears at first glance to be Java specific.

NetFreak
Wednesday, September 29, 2004
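For readers without the book to hand, the defensive-copy idiom under discussion looks roughly like this, a sketch in the spirit of Bloch's Period example rather than a quotation from it:

import java.util.Date;

public final class Period {
    private final Date start;
    private final Date end;

    public Period(Date start, Date end) {
        // Defensive copies: callers cannot later mutate our internals
        // by changing the Date objects they passed in.
        this.start = new Date(start.getTime());
        this.end = new Date(end.getTime());
        if (this.start.compareTo(this.end) > 0)
            throw new IllegalArgumentException(start + " after " + end);
    }

    // Return copies so callers cannot mutate our fields either.
    public Date start() { return new Date(start.getTime()); }
    public Date end()   { return new Date(end.getTime()); }
}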
https://discuss.fogcreek.com/dotnetquestions/4403.html
CC-MAIN-2018-30
en
refinedweb
The most common problem we see with flask apps is that people try and call app.run() # don't do this! This actually tries to launch Flask's own development server. That's not necessary on PythonAnywhere, because we do the server part for you. All you need is to import your flask app into your wsgi file, something like this: from my_flask_app import app as application The app has to be renamed application, like that. Do not call app.run() anywhere in your code as it will conflict with the PythonAnywhere workers and cause 504 errors. Or, if you must call app.run() (eg to be able to run a test server on your own pc), then make sure it's inside an if __name__ == '__main__': block Other than that, be sure to check out our guide to Debugging import errors for general tips on dealing with problems in your wsgi config.
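If you do need a local app.run() for testing on your own machine, the guarded form mentioned above might look like this (my_flask_app.py and the route are placeholders):

# my_flask_app.py
from flask import Flask

app = Flask(__name__)

@app.route('/')
def index():
    return 'Hello from Flask'

if __name__ == '__main__':
    # Only runs when you launch the file directly on your own machine;
    # on PythonAnywhere the wsgi file imports `app`, so this never executes.
    app.run(debug=True)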
https://help.pythonanywhere.com/pages/Flask504Error/
CC-MAIN-2018-30
en
refinedweb
NSModule(3) NSModule(3) NAME NSModule - programmatic interface for working with modules and symbols SYNOPSIS #include long NSVersionOfLinkTimeLibrary( const char *libraryName); extern int _NSGetExecutablePath( char *buf, unsigned long *bufsize) extern void NSInstallLinkEditErrorHandlers( NSLinkEditErrorHandlers *handlers); extern void NSLinkEditError( NSLinkEditErrors *c, int *errorNumber, const char **fileName, const char **errorString);‐ lection of symbols. A dynamic shared library is composed of one or more modules with each of those modules containing a separate collec‐ tion of symbols. If a symbol is used from a module then all the sym‐‐ ited to only Mach-O MH_BUNDLE types which are used for plugins. A mod‐ ule name is specified when a module is linked so that later NSNameOf‐ Module can be used with the module handle and to do things like report errors. When a module is linked, all libraries referenced by the mod‐ ule‐ gram. If any errors occur the handlers installed with NSInstal‐_PRIVATE With this option the global symbols from the module are not made part of the global symbol table of the program. The global sym‐ bols of the module can then be looked up using NSLookupSymbolIn‐ Module. NSLINKMODULE_OPTION_RETURN_ON_ERROR With this option if errors occur while binding this module it is automaticly unloaded and NULL is returned as the module handle. To get the error information for the module that failed to load the routine NSLinkEditError is then used. It has the same parameters as the link edit error handler (see below) except all the parameters are pointers in which the information is returned indirectly. NSLINKMODULE_OPTION_DONT_CALL_MOD_INIT_ROUTINES With this option the module init routines are not called. This is only useful to the fix-and-continue implementation. With this option the parameter, moduleName is assumed to be a string with the logical name of the image with the physical name of the object file tailing after the NULL character of the logical name. This is only use‐ ful‐‐). NSIsSymbolNameDefinedInImage is passed a pointer to the mach_header of a mach_header structure of a dynamic library being used by the program and a symbol name. This returns TRUE or FALSE based on if the symbol is defined in the specified image or one of the image's sub-frameworks or sub-umbrellas. If the program was built with the ld(1) -force_flat_namespace flag or executed with the environment variable DYLD_FORCE_FLAT_NAMESPACE set and the pointer to a mach_header struc‐ ture is not of a bundle loaded with the NSLINKMODULE_OPTION_PRIVATE option of NSLinkModule(3) then the pointer to a mach_header is ignored and the symbol is looked up in all the images using the first defini‐ tion if found. The image handle parameter for NSLookupSymbolInImage and NSIsSymbol‐ NameDefinedInImage is a pointer to a read-only mach header structure of‐ bol. If any errors occur the handlers installed with NSInstal‐‐ ULE_OPTION_PRIVATE option of NSLinkModule(3) then the pointer to a mach_header is ignored and the symbol is looked up in all the images using the first definition found. If the option NSLOOKUPSYMBOLINIM‐ AGE_OPTION_RETURN_ON_ERROR is not used if any errors occur the handlers installed with NSInstallLinkEditErrorHandlers are called or the default action is taken if there are no handlers. The options of NSLookupSym‐ bolInImage are as follows: NSLOOKUPSYMBOLINIMAGE_OPTION_BIND Just bind the non-lazy symbols of module that defines the sym‐ bolName and let all lazy symbols in the module be bound on first call. 
This should be used in the normal case for a trusted mod‐ ule expected to bind without any errors like a module in a sys‐‐ out any errors. NSLOOKUPSYMBOLINIMAGE_OPTION_BIND_FULLY Bind all the symbols of the module that defines the symbolName and all the dependent symbols of all needed libraries. This should only be used for things like signal handlers and linkedit error handlers that can't bind other symbols when executing to handle the signal or error. NSLOOKUPSYMBOLINIMAGE_OPTION_RETURN_ON_ERROR With this option if errors occur while binding the module that defines the symbolName then the module is automaticly unloaded and NULL is returned as the NSSymbol. To get the error informa‐ tion for why the module that failed to bind the routine‐‐‐ tine NSLinkEditError is then used. It has the same parameters as the link edit error handler (see below) except all the param‐ eters are pointers in which the information is returned indi‐ rectly. if the image_name passed for the library has not already been loaded it is not loaded. Only if it has been loaded the pointer to the mach_header will not be NULL. NSADDIMAGE_OPTION_MATCH_FILENAME_BY_INSTALLNAME When this option is specified if a later load of a dependent dynamic library with a file system path is needed by an image that matches the install name of the dynamic library loaded with this option, then the dynamic library loaded with the call to NSAddImage() is used in place of the dependent dynamic library. NSVersionOfRunTimeLibrary is passed the install name of a dynamic buf‐ fer. If the buffer is not large enough, -1 is returned and the expected buffer size is copied in *bufsize. Note that _NSGetExecutablePath will return "a path" to the executable not a "real path" to the executable. That is the path may be a symbolic link and not the real file. And with deep directories the total bufsize needed could be more than MAX‐ PATHLEN. ERROR HANDLING NSInstallLinkEditErrorHandlers is passed a pointer to a NSLinkEditEr‐ rorHandlers which contains three function pointers to be used for han‐. If the user does not supply these functions, the default will be to write an error message on to file descriptor 2 (usually stderr) and exit the program (except for the linkEdit error handler when the NSLinkEditErrors is NSLinkEditWarningError, then the default is to do nothing). The specified undefined handler may make calls to any of the runtime loading functions to add modules based on the undefined symbol name. After dealing with this symbol name successfully (by doing a runtime loading operation to resolve the undefined reference) the handler sim‐ ply returns. If more symbol's names remain undefined the handler will be called repeatedly with an undefined symbol name. If the handler can't deal with the symbol it should not return (put up a panel, abort, etc) and cause the program to exit. Or it can remove itself as the undefined handler and return which will cause the default action of printing the undefined symbol names and exiting. The specified multiply defined symbol handler is called during the process of runtime linking and thus it may not call any of the runtime loading functions as only one set of linking operations can be per‐‐‐ ALSO SEE NSObjectFileImage(3), dyld(3) Apple Computer, Inc. March 10, 2001 NSModule(3)[top]
http://www.polarhome.com/service/man/?qf=NSModule&tf=2&of=OpenDarwin&sf=3
CC-MAIN-2018-30
en
refinedweb
Brief Overview of an "Object" in Scala Brief Overview of an "Object" in Scala In the Java world we are all familiar with the term object and interchangeably use it with the term instance. However, they could not be more different in Scala. Join the DZone community and get the full member experience.Join For Free Get the Edge with a Professional Java IDE. 30-day free trial. In different but somewhat related concept. It's totally different because “object” keyword represents a single instance of that with which it is used. And similar because it still represents some instance. I have divided the post into different sections namely: - Simple example of object - An example of inbuilt object in Scala library - Companion classes and Companion objects - Companion object with apply method Simple example of “object” object SimpleObject{ val param1 : Int = 10 var param2 : String = "Yes" def method1 = "Method 1" def sum(a:Int, b:Int) = a + b } object Main { def main(args: Array[String]) = { println(SimpleObject.param1) println(SimpleObject.param2) SimpleObject.param2 = "No" println(SimpleObject.param2) println(SimpleObject.method1) println(SimpleObject.sum(10, 15)); } } In the above object- SimpleObject we have declared 2 parameters and 2 methods. To know the different between var and val please read val versus var in scala. The SimpleObject object is used in another object Main which declares the main method. The Main object containing the main is the starting point of the application. People from the Java world will immediately relate the use of the SimpleObject to the static members in Java. I will touch upon that concept as well towards the end of the post when I explain about Companion classes. An important feature of an object in Scala is that it is Singleton i.e there is only one instance. And you cannot even use the new operator to create another instance. An example of inbuilt object in Scala library There are numerous examples of object‘s in Scala API and one among them is theConsole object which Implements functionality for printing Scala values on the terminal as well as reading specific values. Also defines constants for marking up text on ANSI terminals. Lets look at an example of using Console object: object Main { def main(args: Array[String]) = { Console.println("Enter some string: "); val input = Console.readLine; Console.println("The string entered is: "+input) } } Companion classes and Companion objects By now you have an brief idea of what an object is. Now lets see yet another concept associated with object called the Companion classes/Companion objects. When we have an object with same name as that of the class then we say that the object is aCompanion object and the class is companion class. In addition to having the same name, they should be part of the same package and be defined in the same file. The below code shows the Companion objects in action: object Main { def main(args: Array[String]) = { var emp = new Employee(123,"Sana", "Blore") println(emp)//123 Sana, Blore Employee.saveToDb(emp)//Saving: Sana to db } } //Companion class class Employee(id:Int, n:String, p: String){ val empId = id val name = n val place = p override def toString() = this.empId+" "+this.name+", "+this.place } //Companion object object Employee{ def saveToDb(emp: Employee){ println("Saving: " + emp.name+" to db"); } } In the above code we have an Employee class and an associated companion object. The companion object defines a method called saveToDb. 
In the same code I have shown how the companion class and companion object are used. The method saveToDb exhibits behaviour similar to that of the statics in Java. In scala there is no concept of “static” members, the same functionality is provided by the companion object. Looking at the same example in Java we would have: public class StaticSample { public static void main(String[] args){ Employee emp = new Employee(123, "Sana", "Blore"); System.out.println(emp);//123 Sana, Blore //Accessing the static method using the object. emp.saveToDb(emp);//Saving employee: Sana //Accessing the static method using class name. Employee.saveToDb(emp);//Saving employee: Sana } } class Employee{ String name; String place; int id; public Employee(int id, String name, String place) { this.id = id; this.name = name; this.place = place; } public String toString(){ return this.id+" "+this.name+", "+this.place; } public static void saveToDb(Employee emp){ System.out.println("Saving employee: "+emp.name); } } One can notice that in Java the static members can be accessed both using the instance and the class name, which can be quite confusing. But the same is not possible in Scala because the only way to access the members in the companion object is to use the companion object and not the instance of the companion class. Companion class/object in Scala API In Scala API there are numerous places where concept of companion object is leverage and one such usage is the Array class. The companion object for Array class consists of methods like apply, copy, empty among others. Companion object with apply method Lets consider the Employee example from above and add an apply method to it. The use of apply method is that it provides a way to create instances of the companion class without using the new operator directly and instead use the object literal syntax to create the instance of the companion class. Lets look at the sample code: //replace this with the use of new operator above. var emp = Employee(123,"Sana", "Blore") object Employee{ //Rest of the definition is as above. def apply(id: Int, name: String, place: String) = new Employee(id, name, place) } The apply method can be used to create Factories for creating instances of the companion class. In the Scala API Array class contains apply method which enables us to create arrays like: var arr1 = Array(1,2,3,4,5) var arr2 = Array("a","b","c") This was in brief about object in Scala. Get the Java IDE that understands code & makes developing enjoyable. Level up your code with IntelliJ IDEA. Download the free trial. }}
https://dzone.com/articles/brief-overview-object-scala
CC-MAIN-2018-30
en
refinedweb
Programming in C# – GPIB interface and instruments. GPIB is the oldest communication interface among instruments, so it is probably equipped with the largest number of instruments. Therefore, most people still use GPIB. This article shows you how to communicate with an external device using GPIB connection in C#. About GPIB The most used GPIB interface is the GPIB-USB-HS from Natinal Instruments (NI). There are other interfaces from Keysight Technology, but I can not find much information. The amount of information about programming in GPIB is small, and programming is difficult. In this blog, I would like to explain how to make a GPIB program as easy as possible. Installation of driver In this article, GPIB interface is assumed to use GPIB-USB-HS. It is necessary to install the driver before using GPIB-USB-HS. If you have a purchased CD, you can use that CD. If you do not have it, download the NI-488.2 driver from the NI company’s HP. Download the OS and version from the NI site. Once downloaded, run NI-488.3. During the installation, you have a choice of features to install. At this time, be sure to install .NET Framework 4 or later. .NET Framework is difficult to find in the list, you may not be able to install it without noticing it. Please install with default setting except this. Add class library Start Visual Studio C# and open a new project. At the top right of the project in “Solution” in “Solution Explorer tree” there is a category called “Reference Settings”. To add a new reference, right-click the References category and select “Add Reference”. To use the NI GPIB library, add NationalInstruments.Common and NationalInstruments.VisaNS to your project. These two files are located in different places depending on the driver and the version of windows, so please search in windows. Add code Move to the code editor screen. Since the line of using … is lined up at the top of the file, add the following line at the bottom. using NationalInstruments.VisaNS; Next we go back to the designer and add ritchTextBox and button. Double-click button1 to move to the code editor. Add the following line under the public partial class Form1: Form in the code editor. private MessageBasedSession mbSession; This is a class used for GPIB communication, and declares the class. Next, write the following 3 lines in button1_Click. mbSession=(MessageBasedSession)ResourceManager.GetLocalManager().Open("GPIB0::17::INSTR"); string responseString = mbSession.Query("*IDN?"); mbSession.Dispose(); The first line of the above code prepares (Open) communication with the external device via GPIB. In the above example, the GPIB address is 17 but change according to the settings of the external device. Next, send a command to the external device with mbsession.Query (). In the above example we sent the *IDN? Command. This can be used commonly for all GPIB standard commands. When the external device receives this command, the device name and serial number will be returned. The reply from the external device is assigned to the argument of mbSession.Query. Communication is terminated with the mbSession.Dispose (). Normally, the execution part of the communication is bound by try in preparation for sudden interruption. It will look like this when viewed through the program so far. Use “try” to prepare for a sudden communication outage. The program so far is as follows. 
using System; ・ ・ using System.Windows.Forms; using NationalInstruments.VisaNS; namespace WindowsFormsApplication1 { public partial class Form1 : Form { private MessageBasedSession mbSession; public Form1() { InitializeComponent(); } private void button1_Click(object sender, EventArgs e) { try { mbSession = (MessageBasedSession)ResourceManager.GetLocalManager().Open("GPIB0::17::INSTR"); richTextBox1.Text = mbSession.Query("*IDN?"); mbSession.Dispose(); } catch { MessageBox.Show("通信エラー"); } } } } Summary In this article, I introduced you to communicate with external devices via C # and GPIB. This time, I explained only the part that communicates with the external device in C#, for the time being the command of the device. Next time, I will explain the specific programming by limiting the model of the external device.
https://kesoku-blog.com/?p=2024&lang=en
CC-MAIN-2021-04
en
refinedweb
The problem Convert a Number to Hexadecimal Leetcode Solution provides us with an integer and asks us to convert it from the decimal number system to the hexadecimal number system. More formally, the question requires us to convert an integer given in base 10 to a base 16 representation. We had already solved a problem where we were given a number in the decimal number system and had to convert it into base 7. So, before moving on, let's take a look at a few examples.

Example

Input: 26
Output: 1a

Explanation: This conversion is easy if you happen to know about the hexadecimal number system. If you are unaware of it, just convert the given number into its base 16 representation. We do that by repeated division, storing the remainders. One thing to note is that 10 is represented using 'a' in hexadecimal notation.

Input: -1
Output: ffffffff

Explanation: Negative numbers are stored in their 2's complement notation. The 2's complement notation of -1 is 11111111111111111111111111111111. So, we just convert this into hexadecimal, which is shown in the output.

Approach for Convert a Number to Hexadecimal Leetcode Solution

Before diving deep into the problem, let's first familiarize ourselves with the hexadecimal number system. The hexadecimal number system is like the decimal number system, but the numbers 10 to 15 are represented using the lower-case letters 'a' to 'f'. So, we can simply convert an integer in the decimal number system to its base 16 representation, and after the conversion we replace the digits 10 – 15 with a – f. But what do we do with negative numbers? Since negative numbers are stored in 2's complement notation in the binary system, we simply store the number in an unsigned int and convert it into base 16.

The code in the Java language performs the same thing but is implemented in a slightly different manner using bit manipulation. First we take & (bitwise AND) of the given number with 15, which is equivalent to taking the number mod 16. Then an unsigned right shift by 4 bits is equivalent to dividing by 16.
https://www.tutorialcup.com/leetcode-solutions/convert-a-number-to-hexadecimal-leetcode-solution.htm
CC-MAIN-2021-04
en
refinedweb
While entering code, you forgot the name of either a method you wanted to call or some of a method’s parameters. Use Eclipse’s code assist (also called content assist) to help out. When you enter the name of an object or class in the JDT code editor followed by a period (.) and then pause, code assist displays the members of that object or class, and you can select the one you want. You also can bring up code assist at any time (e.g., when you’ve positioned the cursor inside a method’s parentheses, and you want to see what arguments that method takes) by pressing Ctrl-Space or by selecting Edit→ Content Assist. Code (or content) assist is one of the good things about using a full Java IDE. It’s an invaluable tool that accelerates development, and it’s a handy resource that you’ll probably find yourself relying on in time. In the code example we’ve been developing over the previous few recipes, enter the following code to display some text: public class FirstApp { public static void main(String[] args) { System.out.println("Stay cool."); } } To work with code assist, enter System. in the main method of the FirstApp project, then pause. Code assist displays the classes and methods in the System namespace, as shown in Figure 1-11. Double-click out in the code assist list so that code assist inserts that member into your code, insert a period so that the phrase now reads System.out., and pause again. Code assist now displays the methods of the out class. Double-click the code assist suggestion println(String arg0), and code assist inserts the following code into the main method: public class FirstApp { public static void main(String[] args) { System.out.println( ) } } Edit this to add the text Stay cool. . Note that code assist adds the closing quotation mark automatically as you type: public class FirstApp { public static void main(String[] args) { System.out.println("Stay cool.") } } As soon as you enter this code, Eclipse displays it with a wavy red underline, shown in Figure 1-12, to indicate that a syntax problem exists. Rest the mouse cursor over the new code, and a tool tip appears, also shown in Figure 1-12, indicating that a semicolon is missing. Note also that a red box (displayed in stunning black and white in the figure) appears in the overview bar to the right of the code. Clicking that box jumps to the error, which is handy if you’ve got a lot of errors and a long code file. Tip Deprecated methods also are underlined automatically in the JDT editor, but in yellow, not red. Syntax warnings in general are displayed with yellow boxes in the overview bar. Add that semicolon now to the end of the line to give you the complete code and to make the wavy red line disappear. Tip Eclipse can format your code automatically, adding indents and cleaning up the source code nicely, which is great if you’re pasting code from somewhere else. Just select Source→ Format, and Eclipse will handle the details. In time, you’ll probably find yourself using this feature more often than you expected. Finally, save the file by clicking the disk icon in the toolbar or by selecting File→ Save. An unsaved file appears with an asterisk before its name in its editor tab (as shown in Figure 1-12), but the asterisk disappears when the file is saved. If you don’t save a code file before trying to compile and run that code, Eclipse will prompt you to do so. We’ll run this code in the next recipe. To sum up, code assist is a great tool for code completion, and it will start automatically when you insert a period (.) 
in the JDT editor after the name of an object or class. You also can make code assist appear at any time while you’re typing code; just press Ctrl-Space or select Edit→ Content Assist. Recipe 1.10 on running your code; Chapter 1 of Eclipse (O’Reilly).
https://www.oreilly.com/library/view/eclipse-cookbook/0596007108/ch01s10.html
CC-MAIN-2021-04
en
refinedweb
Introduction to SDET Interview Questions and Answers

SDET stands for Software Design Engineer in Test (or Software Development Engineer in Test) and refers mainly to the testing performed on a software product. The role calls for a candidate who is able both to develop and to perform testing. It was initially started by Microsoft, but currently other organizations are also very conscious of it, and they are really looking for someone with SDET expertise who can be involved in the full development of their product as well as in designing the testing that needs to be performed for each individual development. Being able to use the same resource for these two key tasks is always profitable for the organization. Here we will discuss the top SDET Interview Questions.

Now, if you are looking for a job related to SDET, then you need to prepare for the 2020 SDET Interview Questions. It is true that every interview is different for different job profiles. Here, we have prepared the important SDET Interview Questions and Answers which will help you succeed in your interview. In this 2020 SDET Interview Questions article, we shall present the 10 most important and frequently asked SDET interview questions. These interview questions are divided into two parts, as follows:

Part 1 – SDET Interview Questions (Basic)

This first part covers basic Interview Questions and Answers.

Q1. Explain in detail the differences between Software Development Engineering in Test (SDET) and testing software manually?
Answer: SDET mainly involves automation testing, meaning a developed product can be tested automatically without manual intervention, whereas manual testing does not meet these criteria.

Q2. Write a program to reverse a number in any language?
Answer:
public class reverseNumber {
    public long reverse(long num) {
        long temp = 0;
        while (num != 0) {
            temp = (temp * 10) + (num % 10);
            num = num / 10;
        }
        return temp;
    }

    public static void main(String args[]) {
        long n = 654312;
        reverseNumber inp = new reverseNumber();
        System.out.println("Given number is " + n);
        System.out.println("Reverse of given number is " + inp.reverse(n));
    }
}

Q3. a very small period of time. Documentation or planning is not always possible for that, but some organizations maintain specific tools for tracking this kind of task, especially for additional billing. Let us move to the next SDET Interview Questions.

Q4. Two keywords are normally very useful for the tester: one is priority and the other is severity. Explain the difference between them in detail.

Q5. Give a detailed explanation of the job responsibilities of a tester or Software Development Engineering in Test role?
Answer: This is a common SDET Interview Question asked in an interview. Several responsibilities normally need to be covered by an SDET tester in the current IT industry:
- Writing test automation and setting it up for a variety of platforms such as web or mobile.
- Managing and handling bug reports.
- Maintaining a proper communication channel between the developer and the client.
- Preparing and delivering test cases.

Q6. What is ad-hoc testing?
Answer: Ad-hoc testing is defined as testing done on an ad-hoc basis, without any reference or proper inputs to the test case and without any plan, test cases, or documentation. The main objective of this type of testing is to find defects and break the application by executing different flows of the application or random functionality.
Ad-hoc testing is an informal way of finding bugs in an application and can be performed by anyone on the team. It is difficult to find bugs without test cases, but sometimes ad-hoc testing finds bugs that we did not find through normal testing or the existing test cases.

Q7. Give some examples, with details, of the typical or heavily loaded working day of a tester or Software Development Engineer in Test (SDET)?
Answer: Three key tasks always take a large amount of the tester's time on any day:
- Understanding the requirements of the project.
- Preparing and executing the required test cases based on the functionality the client expects.
- Reporting the bugs identified in each piece of functionality developed for the client to the developer, and retesting after redelivery by the developer to ensure the expected functionality is properly delivered without any common bug.

Part 2 – SDET Interview Questions (Advanced)

Let us now have a look at the advanced Interview Questions and Answers.

Q8.
- Validating bug reports provided by the tester, including how a bug got resolved and whether retesting was done by the tester or not.
- Validating all the test cases written by the tester for that specific functionality, the documentation, and the confirmation taken from the tester on the same.
- Running automated test cases to ensure new functionality does not break any existing functionality.
- Sometimes validating the test coverage report, which ensures all the developed components have been covered by the written test cases.

Q9. Write a program to swap two numbers without using any temp variable?
Answer: one possible approach is sketched below.

Q10. If someone needs a specific format of bug report from a tester, then what is the best way or approach the tester can take to provide it?
Answer: A bug report normally contains the following:
- Bug summary
- Steps to reproduce
- Expected behavior and current behavior of the specific bug.
Let us move to the next SDET Interview Questions.

Q11. Explain in detail the different kinds of testing called Alpha and Beta?
Answer: Alpha testing is done by the tester to identify bugs before moving the product to the live environment or to the end user. Beta bugs are normally identified by the end users, who are the actual users of the product or application.

Q12.

Q13. Normally there are different categories available for grouping the various kinds of test cases; give an explanation of them?
Answer: This is the most popular SDET Interview Question asked in an interview. Some popular kinds of test cases in the current IT industry are listed below:
- Functional testing
- Frontend or user interface testing
- Performance testing
- Integration testing
- Load testing or user usability testing
- Security testing

Q14. A common challenge a software tester normally faces is that proper documentation is not maintained for testing. In that case, how can we overcome it?
Answer: It is one of the common scenarios where proper documentation is not available for all kinds of testing.

Recommended Articles

This has been a guide to the list of SDET Interview Questions and Answers so that the candidate can crack these Interview Questions easily. Here in this post, we have studied the top SDET Interview Questions which are often asked in interviews.
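The answer to Q9 is not spelled out above, so here is one possible sketch in Python (chosen purely for illustration; the question allows any language, and this is not necessarily the article's original answer):

# Possible sketch for Q9: swap two numbers without a temporary variable,
# using the usual arithmetic trick.
def swap_without_temp(x, y):
    x = x + y   # x now holds the sum of both values
    y = x - y   # y becomes the original x
    x = x - y   # x becomes the original y
    return x, y

print(swap_without_temp(10, 5))   # prints (5, 10)

In Python a plain tuple assignment (x, y = y, x) also avoids a temporary variable, although interviewers usually expect the arithmetic version shown here.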
https://www.educba.com/sdet-interview-questions/?source=leftnav
CC-MAIN-2021-04
en
refinedweb
It seems that in the vain of several py36-packages a conflict arises between net-mgmt/netbox and sysutils/py-salt. Having installed sysutils/py-salt@py36 on all to maintain systems, the installation of net-mgmt/netbox requires to to delete sysutils/py36-salt and security/py36-pycrypto (in our case), obviously triggered by the conflict between the packages security/py-pycrypto and security/py-pycryptodome (FLAVOR set to py36 by default): [...] All repositories are up to date. Checking integrity... done (1 conflicting) - py36-pycryptodome-3.9.0 conflicts with py36-pycrypto-2.6.1_3 on /usr/local/lib/python3.6/site-packages/Crypto/Cipher/AES.py Checking integrity... done (0 conflicting) The following 65 package(s) will be affected (of 0 checked): Installed packages to be REMOVED: py36-pycrypto-2.6.1_3 py36-salt-2019.2.2_1 The packages were build via a poudriere-based builder on local site. (In reply to O. Hartmann from comment #0) Thank you for the report. It's indeed an interesting case of a Python package collision. Here are some quick facts about the related Python packages: security/py-pycrypto: ~~~~~~~~~~~~~~~~~~~~~ The project seems to be dead as the last update was in 2013. (see Github issue #173) security/py-pycryptodome: ~~~~~~~~~~~~~~~~~~~~~~~~~ This a fork of security/py-pycrypto which is actively maintained and can be used as a drop-in replacement for security/py-pycrypto. The trade off: security/py-pycrypto and security/py-cryptodome cannot coexist because both use the same package name (= "Crypto"). security/py-pycryptodomex: ~~~~~~~~~~~~~~~~~~~~~~~~~~ Like security/py-cryptodome (= same upstream) but uses a different package name (= "Cryptodome" instead "Crypto") thus it can coexist with security/py-pycrypto. Let's go over to NetBox and Salt: net-mgmt/netbox: ~~~~~~~~~~~~~~~~ Upstream switched from security/py-pycrypto to security/py-pycryptodome in 2017. (see GitHub issue #1527) sysutils/py-salt: ~~~~~~~~~~~~~~~~~ The requirement for security/py-pycrypto is pulled in via the ZEROMQ or TCP options. The ZEROMQ option is also set as default one and reflects the actual requirements by upstream as given in sysutils/py-salt's 'requirements/zeromq.txt': > # PyCrypto has issues on Windows, while pycryptodomex does not > pycrypto>=2.6.1; sys.platform != 'win32' > pycryptodomex; sys.platform == 'win32' There was a PR (see Github pull request #45971) in 2018 for the ZeroMQ dependency to use security/py-cryptodome in favor of security/py-pycrypto. But that PR was then closed after some discussion because upstream had tried it a while ago which led to some problems due a bug with a specific version of security/py-pycryptodome. Conclusion: ~~~~~~~~~~~ Because sysutils/py-salt is IMHO a pretty complex and frequently used port that need good care a patching of its 'requirements/zeromq.txt' is not trivial. I did also some quick comparison of the code: net-mgmt/netbox: ~~~~~~~~~~~~~~~~ > $ grep -r -e Crypto ./netbox-2.6.7/* |wc -l > 8 sysutils/py-salt: ~~~~~~~~~~~~~~~~~ > $ grep -r -e Crypto ./salt-2019.2.2/ |wc -l > 244 So I'll create a PR for NetBox that switches from security/py-cryptodome to security/py-cryptodomex as it seems the best (and quickest) option to solve the issue. Created attachment 209127 [details] netbox-switch-to-py-pycryptodomex.patch Attached is a patch that switches all relevant code to security/py-pycryptodomex . 
My tests were successful so far: - Login/Logoff -> OK - Unlock/Lock secrets via private key -> OK - Create new user key pair -> OK - Active new user key pair -> OK - Unlock/Lock secrets with new user key pair -> OK I'll upstream that patch with some additional changes (related to documentation) as soon as possible. Upstream of Netbox closed my PR without merging it. I asked again if the decision could be reconsidered but I guess the odds aren't very high. I'll use then the set of patches attached in this PR for a while to get the issue with the Python package collision resolved. Even if it's means additional QA work in the future with every update of Netbox. From a long-term perspective the developers of Salt should really try again to switch fully from security/py-pycrypto to security/py-cryptodome or security/py-cryptodomex. (In reply to O. Hartmann from comment #0) Did you already have the opportunity to test the attached patch? As already mentioned in comment #2 there shouldn't be any problems with switching from `pycryptodome` to `pycryptodomex`. Comment on attachment 209127 [details] netbox-switch-to-py-pycryptodomex.patch Pull my proposed patch for net-mgmt/netbox back as it wasn't accepted by upstream and I won't have always the time to do full QA against an unsupported version of net-mgmt/netbox. After some thinking and internal discussions I close this bug as "not a bug" because there's little we can do to resolve the issue from a Ports related perspective. There's still some confusion (in the Ports and Python world) about the existence of both security/py-pycryptodome and security/py-pycryptodomex ports. While security/py-pycryptodome is meant as a drop-in replacement for security/py-pycrypto that seems to be no longer 100% true due some API incompatibilities (see also issue #89). And security/py-pycryptodomex seems to be hardly used in the Python world at all. To keep it short: - The problem can only be resolved at upstream level. - Upstream of security/py-pycryptodome should start using it's own namespace (= "Cryptodome" instead "Crypto") and make the security/py-pycryptodomex package obsolete. OR - Upstream of sysutils/py-salt tries again to fully abandon security/py-pycrypto (remove 'pycrypto' from the requirements for ZeroMQ). According to ##52674 ( - "PyCryptodome as replacement for PyCrypto", Salt _can_ use security/py-pycryptodome, but doesn't do to pycrypto being mentioned in the requirements.txt file for ZeroMQ (even though it may or may not use it directly). From the issue text: ... and there is a picking[sic] order as to which package is used as follows: PyCrypto - basic level PyCryptodomx - preferred over PyCrypto if installed M2Crypto - preferred over PyCryptodome and PyCrypto if installed Although 52674 is closed, David Murphy (dmurphy18) goes on to talk about how this will likely be revisited once Python2 has reached end-of-life (in a little under a month and a half at this writing) -- around the Neon release (probably in 2020).
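As a small illustration of the namespace point made in this report, the difference between the packages is only the top-level module they install; the import paths below follow the package names discussed above and are shown purely as a sketch:

# pycrypto and pycryptodome both install the top-level package "Crypto",
# which is why only one of them can be present at a time:
from Crypto.Cipher import AES                 # satisfied by either pycrypto or pycryptodome

# pycryptodomex ships the separate "Cryptodome" namespace and can therefore
# coexist with pycrypto on the same system:
from Cryptodome.Cipher import AES as AES_x    # provided only by pycryptodomex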
https://bugs.freebsd.org/bugzilla/show_bug.cgi?format=multiple&id=241913
CC-MAIN-2021-04
en
refinedweb
7.0 GORM 7.0 brings support for the latest versions of key dependencies including: Java 8 minimum (Java 11 supported) Hibernate 5.3 minimum Spring 5.2 minimum 1.1.3..8.. Dependency Upgrades GORM 7.0 supports a minimum version of Java 8, Hibernate 5.3.x and Spring 5.2.x. Each of these underlying components may have changes that require altering your application. These changes are beyond the scope of this documentation. 1.2.2. Package Restructuring and Deprecations Previously deprecated classes have been deleted from this release and in order to support Java 11 modules in the future some package re-structuring has occurred. 1.2.3. Changes to Proxy Handling GORM no longer creates custom proxy factories nor automatically unwraps Hibernate proxies. This makes it more consistent to the way regular Hibernate behaves and reduces the complexity required at the framework level. You may need to alter instanceof checks are manually unwrap proxies in certain cases. 1.2.4. Module grails-validation Deprecated and Removed In GORM 6.x the grails-validation module was deprecated and replaced by grails-datastore-gorm-validation. Deprecated interfaces were maintained for backwards compatibility. In GORM 7.0 these deprecated classes have been removed and all dependency on grails-validation removed. 1.2.5. Transactions Now Required for all Operations Previous versions of Hibernate allowed read operations to be executed without the presence of a declaration transaction. Hibernate 5.2 and above require the presence of an active transaction. If see a javax.persistence.TransactionRequiredException exception it means your method lacks a @Transactional annotation around it. 2. Getting Started To use GORM 7.0.4 for Hibernate in Grails 3 you can specify the following configuration in build.gradle: dependencies { compile "org.grails.plugins:hibernate5:7.0.4"=7.0.4("7.0.4 7.0.4:7.0.4.RELEASE") compile "org.hibernate:hibernate-core" compile "org.hibernate:hibernate-ehcache":7.0.4. Consider the following domain class: package org.bookstore class Book { } This class will map automatically to a table in the database called book (the same name as the.): Example B class Face { Nose: Flight { Airport departureAirport Airport destinationAirport } 5.1.3. Many-to-many GORM } GORM.: | --------------------------------------------- 5.2. Composition in GORM As well as associations, GORM: 5.3. Inheritance in GORM. 5.3.1. Considerations At the database level GORM by default uses table-per-hierarchy mapping with a discriminator column called class so the parent class ( Content) and its subclasses ( BlogEntry, Book etc.), share the same table.. 5.3.2. Polymorphic Queries 5.4. Sets, Lists and Maps 5.4.1. Sets of Objects:() 5.4.3. Bags of Objects If ordering and uniqueness aren’t a concern (or if you manage these explicitly) then you can use the Hibernate Bag type to represent mapped collections. The only change required for this is to define the collection type as a. 5.4.4. Maps of Objects If you want a simple map of string/value pairs GORM can map this with the following: class Author { Map books // map of ISBN:book names } def a = new Author() a.books = ['1590597583':"My. 5.4.5. A Note on Collection Types and Performance.. If you are using Grails this typically done for you automatically, which manages your Hibernate session. If you are using GORM outside of Grails then you may need to manually flush the session at the end of your operation.. 6.2. Saving and Updating GORM. 6.3. 
Deleting Objects def p = Person.get(1) p.delete() As with saves, Hibernate will use transactional write-behind to perform the delete; to perform the delete in-place you can use the flush argument::] }). 6.6. Configuring Eager Fetching.: class Flight { ... static mapping = { batchSize 10 } } 6.9. Pessimistic and Optimistic Locking 6.9.1. Optimistic. isDirty and Proxies Dirty checking uses the equals() method to determine if a property has changed. In the case of associations, it is important to recognize that if the association is a proxy, comparing properties on the domain that are not related to the identifier will initialize the proxy, causing another database query. If the association does not define equals() method, then the default Groovy behavior of verifying the instances are the same will be used. Because proxies are not the same instance as an instance loaded from the database, which can cause confusing behavior. It is recommended to implement the equals() method if you need to check the dirtiness of an association. For example: class Author { Long id String name /** * This ensures that if either or both of the instances * have a null id (new instances), they are not equal. */ @Override boolean equals(o) { if (!(o instanceof Author)) return false if (this.is(o)) return true Author that = (Author) o if (id !=null && that.id !=null) return id == that.id return false } } class Book { Long id String title Author author } 6.10.3. getDirtyPropertyNames } 6.10.4. getPersistentValue You can use the getPersistentValue(fieldName) } } 7. Querying with GORM. 7.1. Listing instances def books = Book.list(). 7.2. Retrieval by Database Identifier def book = Book.get(23) def books = Book.getAll(23, 93, 81) 7.3. Dynamic Finders) 7.3.1. Method Expressions. The possible comparators include: InList- In the list of given values LessThan- less than)() 7.3.2. Boolean logic (AND/OR): def books = Book.findAllByTitleLikeOrReleaseDateGreaterThan( "%Java%", new Date() - 30) 7.3.3. Querying Associations.>. 7.4.3. Conjunction, Disjunction and Neg') } 7.4.4. Property Comparison Queries: 7.4.5. Querying Associations::" } } 7.4.8. More Advanced Subqueries in GORM():. 7.5.3. Querying with Projections.. Consider that the following table represents the data in the BOX table. The query above would return results like this: [[18, 14], [20, 16], [22, 18], [26, 36]] Each of the inner lists contains the 2 projected values for each Box, perimeter and area.. 7.5.6. Using SQL Restrictions] } Also note that the SQL used here is not necessarily portable across databases. 7.5.7. Using Scrollable Results. 7.5.8. Setting properties in the Criteria instance: import org.hibernate.FetchMode as FM ... def results = c.list { maxResults(10) firstResult(50) fetchMode("aRelationship", FM.JOIN) } 7.5.9. Querying with Eager Fetching: import org.hibernate.FetchMode as FM ... def results = Airport.withCriteria { eq "region", "EMEA" fetchMode "flights", FM.SELECT } Although this approach triggers a second query to get the flights association, you will get reliable results - even with the maxResults option.. 7.5.10. Method Reference If you invoke the builder with no method name such as: c { ... } The build defaults to listing all the results and hence the above is equivalent to: c.list { ... } 7.5.11. Combining Criteria.:' } 7.6.2. Executing Detached Criteria Queries Unlike regular criteria, Detached Criteria are lazy, in that no query is executed at the point of definition. 
Once a Detached Criteria query has been constructed then there are a number of useful query methods which are summarized in the table below::: def results = Person.withCriteria { gtAll "age", { projections { property "age" } between 'age', 18, 65 } order "firstName" } The following table summarizes criteria methods for operating on subqueries that return multiple results: 7.6.4. Batch Operations with Detached Criteria") To batch delete records you can use the deleteAll method: def criteria = new DetachedCriteria(Person).build { eq 'lastName', 'Simpson' } int total = criteria.deleteAll() 7.7. Hibernate Query Language (HQL)%'")]) 8. Advanced GORM Features The following sections cover more advanced usages of GORM including caching, custom mapping and events. 8.1. Events and Auto Timestamping which implements PostInsertEventListener, PostUpdateEventListener, and PostDeleteEventListener using the following in an application: beans = { auditListener(AuditEventListener) hibernateEventListeners(HibernateEventListeners) { listenerMap = ['post-insert': auditListener, 'post-update': auditListener, 'post-delete': auditList type:. One-to-Many Mapping'] } }' } } 8.2.2. Caching Strategy Setting up caching.` } Cache usages Below (i.e. if it is (GORM adds it for you) you can still configure its mapping like the other properties. For example to customise the column for the id property you can do: class Person { ... static mapping = { table 'people' version false id column: 'person_id' } } 8.2.5. Composite Primary Keys: class Address { Person person static mapping = { columns { person { column name: "FirstName" column name: "LastName" } } } } 8.2.6. Database Indices. 8.2.7. Optimistic Locking and Versioning } } Version columns types By default GORM maps the version property as a Long that gets incremented by one each time an instance is updated. But Hibernate also supports using a Timestamp, for example:.. Note that ORM DSL does not currently support the "subselect" fetching strategy. Lazy Single-Ended Associations. Lazy Single-Ended Associations and Proxies Pet { String name } class Dog extends Pet { } class Person { String name Pet pet } and assume that we have a single Person instance with a Dog as the pet. The following code will work as you would expect: method by GORM:. 8.2.9. Custom Cascade Behaviour.. The Hibernate reference manual has some information on custom types, but here we will focus on how to map them in GORM. explicitly define in the mapping what columns to use, since Hibernate can only use the property name for a single column. Fortunately, GORM. 8.2.11. Derived Properties. With that in place, when a Product is retrieved with something like Product.get(42), the SQL that is generated to support that will look something like:) } } 8.3. Default Sort Order.: Exception (both checked or runtime exception) or Error.:7.0.4:7.0.4.RELEASE" 12.6.3. The AllTenantsResolver interface If you are using discriminator-based multi-tenancy then you may need to implement the AllTenantsResolver interface in your TenantResolver implementation. Cascade constraints validation If GORM entity references some other entities, then during its constraints evaluation (validation) the constraints of the referenced entity could be evaluated also, if needed. There is a special parameter cascadeValidate in the entity mappings section, which manage the way of this cascaded validation happens. 
class Author { Publisher publisher static mapping = { publisher(cascadeValidate: "dirty") } } class Publisher { String name static constraints = { name blank: false } } The following table presents all options, which can be used: It is possible to set the global option for the cascadeValidate: grails.gorm.default.mapping = { '*'(cascadeValidate: 'dirty') } 13.4. Constraints Reference The following table summarizes the available constraints with a brief example: 13,. 13.5.1. Constraints Affecting String Properties inList.5.2. Constraints Affecting Numeric Properties min max range If the max, min, or range constraint is defined, GORM GORM uses the minimum precision value from the constraints. (GORM uses the minimum of the two, because any length that exceeds that minimum precision will result in a validation error.) scale If the scale constraint is defined, then GORM G } }
http://gorm.grails.org/latest/hibernate/manual/
CC-MAIN-2021-04
en
refinedweb
help fsolve (numpy) - alexandrepfurlan

Hi all, I'm trying to obtain the roots of a function that depends on a parameter. For example, the equation

eos = math.log(1.-x) + x**2*(e22*y + e11*(1-y) + 2*y*(1-y)*e12)

I need to obtain the roots (x) such that eos = 0 for a specific value of y. In other words, I fix y and I solve eos (using fsolve). I'm trying to do:

def EOS(x,y) :
    e11=1.00 ; e22=0.40 ; e12=0.60
    return math.log(1.-x)+x**2*(e22*y+ e11*(1-y) + 2*x2*(1-y)*e12)

for i in arange(1,99,1) :
    y=i*0.01
    ans[i]=fsolve(lambda x: EOS(x,y),x0)

But I get a wrong answer. Does someone know how to use fsolve (or another way) with y as a parameter (not a variable)? Could someone help me? Many thanks for the help. Best, Alexandre

- Webmaster4o
You'll probably have better luck posting on Stack Overflow. It's a wider community.

The sympy library (preinstalled in Pythonista) might be useful for this; it allows you to work with formulas and equations that contain symbols, and I think you can substitute symbols and solve for a symbol as a variable and such. I haven't used those features of sympy much (I use it mostly as an advanced calculator) so I can't help you with any details, sorry. This podcast might be of interest for a recent description of Sympy from its author.

alex, Pythonista does not, as far as I can tell, come with fsolve. Perhaps you can elaborate about what is failing. If you plug your answer back into EOS, does it return something near zero? If so, you have found your root. If not, is it possible that a real root does not exist? What are you using for x0 and x2 in the code you posted? One problem you may be having: x=0 is a root of your above equation for all y, and in fact is the only root unless x2 is less than -0.8333. Perhaps this is not the function you meant to be finding roots of (missing parenthesis, etc.?) It might help you to plot these functions vs x.

import matplotlib.pyplot as plt
import numpy as np

x2=-1 #roots only exist if x2 is negative, and <-0.83

def EOS(x,y) :
    e11=1.00 ; e22=0.40 ; e12=0.60
    return np.log(1.-x)+x**2*(e22*y+ e11*(1-y) + 2*x2*(1-y)*e12)

for i in np.arange(1,99,1) :
    y=i*0.01
    #ans[i]=fsolve(lambda x: EOS(x,y),x0)
    x=np.linspace(-5,1,100)
    plt.plot(x,EOS(x,y))

plt.show()
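For readers with a full NumPy/SciPy installation (Pythonista itself does not ship fsolve, as one reply notes), the usual way to keep y fixed is to pass it through fsolve's args keyword instead of a lambda. A minimal sketch under that assumption, reusing the poster's coefficients; note that with these values the solver may simply converge to the trivial root x = 0, so the initial guess x0 matters:

import numpy as np
from scipy.optimize import fsolve

def EOS(x, y):
    # same functional form as in the question, with y treated as a fixed parameter
    e11, e22, e12 = 1.00, 0.40, 0.60
    return np.log(1. - x) + x**2 * (e22*y + e11*(1 - y) + 2*y*(1 - y)*e12)

x0 = 0.5                                  # initial guess; keep x < 1 so log(1 - x) is defined
for i in range(1, 99):
    y = i * 0.01
    root = fsolve(EOS, x0, args=(y,))[0]  # y is held constant during this solve
    # print(y, root, EOS(root, y))        # EOS(root, y) should be close to zero at a true root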
https://forum.omz-software.com/topic/2925/help-fsolve-numpy
CC-MAIN-2021-04
en
refinedweb
Hello and welcome back, in this article we will continue to develop the cryptocurrency application. In the previous few chapters, we had only used the

At the beginning of the program we will import the exchangerates module as well as get the ticker object from blockchain (REST call).

from blockchain import exchangerates

try:
    ticker = exchangerates.get_ticker() # get the ticker object from blockchain
except:
    print("An exception occurred")

Next, we will comment out the line to use the

for key, value in exchange_rate_s.items(): # populate exchange rate string and the currency tuple
    #sell_buy += base_crypto + ":" + key + " " + str(value) + "\n"
    curr1 += (key,)

sell_buy += "Bitcoin : Currency price every 15 minute:" + "\n\n"

# print the 15 min price for every bitcoin/currency
for k in ticker:
    sell_buy += "BTC:" + str(k) + " " + str(ticker[k].p15min) + "\n"

If we load the data we will see the outcome below. If you want to see the entire source code then please go back to the previous chapter to read it.

Are you a developer or a Python programmer? Join my private chat room through this link to see what I am working on right now.
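A self-contained version of the same idea, assuming the blockchain package used above and that each ticker entry exposes a p15min attribute as in the loop shown earlier (this is only a sketch, not the application's full source):

from blockchain import exchangerates

def print_15_minute_prices():
    try:
        ticker = exchangerates.get_ticker()   # maps currency symbol -> rate object
    except Exception as exc:
        print("Could not fetch the ticker:", exc)
        return
    for symbol in sorted(ticker):
        print("BTC:%s %s" % (symbol, ticker[symbol].p15min))

print_15_minute_prices()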
https://kibiwebgeek.com/use-blockchain-api-to-retrieve-the-bitcoin-exchange-rate-within-the-15-minutes-period-of-the-time/
CC-MAIN-2021-04
en
refinedweb
DeskPi Pro + 8GB Pi 4 Despite having worked on a number of ARM platforms I’ve never actually had an ARM based development box at home. I have a Raspberry Pi B Classic (the original 256MB rev 0002 variant) a coworker gave me some years ago, but it’s not what you’d choose for a build machine and generally gets used as a self contained TFTP/console server for hooking up to devices under test. Mostly I’ve been able to do kernel development with the cross compilers already built as part of Debian, and either use pre-built images or Debian directly when I need userland pieces. At a previous job I had a Marvell MACCHIATObin available to me, which works out as a nice platform - quad core A72 @ 2GHz with 16GB RAM, proper SATA and a PCIe slot. However they’re still a bit pricey for a casual home machine. I really like the look of the HoneyComb LX2 - 16 A72 cores, up to 64GB RAM - but it’s even more expensive. So when I saw the existence of the 8GB Raspberry Pi 4 I was interested. Firstly, the Pi 4 is a proper 64 bit device (my existing Pi B is ARMv6 which means it needs to run Raspbian instead of native Debian armhf), capable of running an upstream kernel and unmodified Debian userspace. Secondly the Pi 4 has a USB 3 controller sitting on a PCIe bus rather than just the limited SoC USB 2 controller. It’s not SATA, but it’s still a fairly decent method of attaching some storage that’s faster/more reliable than an SD card. Finally 8GB RAM is starting to get to a decent amount - for a headless build box 4GB is probably generally enough, but I wanted some headroom. The Pi comes as a bare board, so I needed a case. Ideally I wanted something self contained that could take the Pi, provide a USB/SATA adaptor and take the drive too. I came across the pre-order for the DeskPi Pro, decided it was the sort of thing I was after, and ordered one towards the end of September. It finally arrived at the start of December, at which point I got round to ordering a Pi 4 from CPC. Total cost ~ £120 for the case + Pi. The Bad First, let’s get the bad parts out of the way. I managed to break a USB port on the Desk Pi. It has a pair of forward facing ports, I plugged my wireless keyboard dongle into it and when trying to remove it the solid spacer bit in the socket broke off. I’ve never had this happen to me before and I’ve been using USB devices for 20 years, so I’m putting the blame on a shoddy socket. The first drive I tried was an old Crucial M500 mSATA device. I have an adaptor that makes it look like a normal 2.5” drive so I used that. Unfortunately it resulted in a boot loop; the Pi would boot its initial firmware, try to talk to the drive and then reboot before even loading Linux. The DeskPi Pro comes with an m2 adaptor and I had a spare m2 drive, so I tried that and it all worked fine. This might just be power issues, but it was an unfortunate experience especially after the USB port had broken off. (Given I ended up using an M.2 drive another case option would have been the Argon ONE M.2, which is a bit more compact.) The Annoying The case is a little snug; I was worried I was going to damage things as I slid it in. Additionally the construction process is a little involved. There’s a good set of instructions, but there are a lot of pieces and screws involved. This includes a couple of FFC cables to join things up. 
I think this is because they’ve attempted to make a compact case rather than allowing a little extra room, and it does have the advantage that once assembled it feels robust without anything loose in it. I hate the need for an external USB3 dongle to bridge from the Pi to the USB/SATA adaptor. All the cases I’ve seen with an internal drive bay have to do this, because the USB3 isn’t brought out internally by the Pi, but it just looks ugly to me. It’s hidden at the back, but meh. Fan control is via a USB/serial device, which is fine, but it attaches to the USB C power port which defaults to being a USB peripheral. Raspbian based kernels support device tree overlays which allows easy reconfiguration to host mode, but for a Debian based system I ended up rolling my own dtb file. I changed #include "bcm283x-rpi-usb-peripheral.dtsi" to #include "bcm283x-rpi-usb-host.dtsi" in arch/arm/boot/dts/bcm2711-rpi-4-b.dts and then I did: cpp -nostdinc -I include -I arch -undef -x assembler-with-cpp \ arch/arm/boot/dts/bcm2711-rpi-4-b.dts > rpi4.preprocessed dtc -I dts -O dtb rpi4.preprocessed -o bcm2711-rpi-4-b.dtb and the resulting bcm2711-rpi-4-b.dtb file replaced the one in /boot/firmware. This isn’t a necessary step if you don’t want to use the cooling fan in the case, or the front USB ports, and it’s not really anyone’s fault, but it was an annoying extra step to have to figure out. The DeskPi came with a microSD card that was supposed to have RaspiOS already on it. It didn’t, it was blank. In my case that was fine, because I wanted to use Debian, but it was a minor niggle. The Good I used Gunnar’s pre-built Pi Debian image and it Just Worked; I dd’d it to the microSD as instructed and the Pi 4 came up with working wifi, video and USB enabling me to get it configured for my network. I did an apt upgrade and got updated to the Buster 10.7 release, as well as the latest 5.9 backport kernel, and everything came back without effort after a reboot. It’s lovely to be able to run Debian on this device without having to futz around with self-compiled kernels. The DeskPi makes a lot of effort to route things externally. The SD slot is brought out to the front, making it easy to fiddle with the card contents without having to open the case to replace it. All the important ports are brought out to the back either through orientation of the Pi, or extenders in the case. That means the built in Pi USB ports, the HDMI sockets (conveniently converted to full size internally), an audio jack and a USB-C power port. The aforementioned USB3 dongle for the bridge to the drive is the only external thing that’s annoying. Thermally things seem good too. I haven’t done a full torture test yet, but with the fan off the system is sitting at about 40°C while fairly idle. Some loops in bash that push load up to above 2 get the temperature up to 46°C or so, and turning the fan on brings it down to 40°C again. It’s audible, but quieter than my laptop and not annoying. I liked the way the case came with everything I needed other than the Pi 4 and a suitable disk drive. There was an included PSU (a proper USB-C PD device, UK plug), the heatsink/fan is there, the USB/SATA converter is there and even an SD card is provided (though that’s just because I had a pre-order). Speaking of the SD, I only needed it for initial setup. Recent Pi 4 bootloaders are capable of booting directly from USB mass storage devices. 
So I upgraded using the RPi EEPROM Recovery image (which just needs extracted to the SD FAT partition, no need for anything complicated - boot with it and the screen goes all green and you know it’s ok), then created a FAT partition at the start of the drive for the kernel / bootloader config and a regular EXT4 partition for root. Copies everything over, updated paths, took out the SD and it all just works happily. Summary My main complaint is the broken USB port, which feels like the result of a cheap connector. For a front facing port expected to see more use than the rear ports I think there’s a reasonable expectation of robustness. However I’m an early adopter and maybe future runs will be better. Other than that I’m pretty happy. The case is exactly the sort of thing I wanted; I was looking for something that would turn the Pi into a box that can sit on my desk on the network and that I don’t have to worry about knocking wires out of or lots of cables hooking bits up. Everything being included made it very convenient to get up and running. I still haven’t poked the Pi that hard, but first impressions are looking good for it being a trouble free ARM64 dev box in the corner, until I can justify a HoneyComb.
http://www.earth.li/~noodles/blog/2020/12/deskpi-pro-and-pi4.html
CC-MAIN-2021-04
en
refinedweb
#include <deal.II/base/tensor_function.h> This class is a model for a tensor valued function. The interface of the class is mostly the same as that for the Function class, with the exception that it does not support vector-valued functions with several components, but that the return type is always tensor-valued. The returned values of the evaluation of objects of this type are always whole tensors, while for the Function class, one can ask for a specific component only, or use the vector_value function, which however does not return the value, but rather writes it into the address provided by its second argument. The reason for the different behavior of the classes is that in the case of tensor valued functions, the size of the argument is known to the compiler a priori, such that the correct amount of memory can be allocated on the stack for the return value; on the other hand, for the vector valued functions, the size is not known to the compiler, so memory has to be allocated on the heap, resulting in relatively expensive copy operations. One can therefore consider this class a specialization of the Function class for which the size is known. An additional benefit is that tensors of arbitrary rank can be returned, not only vectors, as for them the size can be determined similarly simply. Definition at line 56 of file tensor_function. Constructor. May take an initial value for the time variable, which defaults to zero. Virtual destructor; absolutely necessary in this case, as classes are usually not used by their true type, but rather through pointers to this base class. Return the value of the function at the given point. Reimplemented in TensorFunctionParser< rank, dim, Number >, and ConstantTensorFunction< rank, dim, Number >..
https://dealii.org/developer/doxygen/deal.II/classTensorFunction.html
CC-MAIN-2021-04
en
refinedweb
I am working on a simple program that will grab the memory address of a given variable, up to 64 bits (unsigned long). Currently this is the code I have, but for some reason the compiler is throwing me warnings saying that my method is returning the address of a local variable, when that is what I intended.

int main(int argc, char *argv[]) {
    char* one = argv[1];
    long memaddress = address(one);
}

uint64_t address( char * strin) {
    return (uint64_t) &strin;
}

You can imagine the function definition and its call

long address = address(one);
//...
uint64_t address( char * strin) {
    return (uint64_t) &strin;
}

the following way

long address = address(one);
//...
uint64_t address( void ) {
    char * strin = one;
    return (uint64_t) &strin;
}

As you see, the variable strin is a local variable of the function. It will be destroyed after exiting the function, so its address after exiting the function will be invalid, and the compiler warns you about this. To avoid the warning you could write the function at least the following way

uint64_t address( char ** strin) {
    return (uint64_t) &*strin;
}

and call it like

long address = address(&one);
https://codedump.io/share/3WSPaeDI7PBt/1/function-returns-address-of-local-variable--wreturn-local-addr
CC-MAIN-2017-17
en
refinedweb
package org.xquark.extractor.mysql.sql;

import org.xquark.extractor.sql.Context;

public class SqlTable extends org.xquark.extractor.sql.SqlTable
{

    private static final String RCSRevision = "$Revision: 1.3 $";
    private static final String RCSName = "$Name: $";

    public SqlTable() {
    }

    public SqlTable(String name) {
        super(name);
    }

    public SqlTable(String catalogName, String schemaName, String tableName) {
        super(catalogName, schemaName, tableName);
    }

    public String toSql(Context context) {
        return super.toSql(context, false);
    }
}
http://kickjava.com/src/org/xquark/extractor/mysql/sql/SqlTable.java.htm
CC-MAIN-2017-17
en
refinedweb
#include <RenderState.h> An abstract base class that can be extended to support custom material auto bindings. Implementing a custom auto binding resolver allows the set of built-in parameter auto bindings to be extended or overridden. Any parameter auto binding that is set on a material will be forwarded to any custom auto binding resolvers, in the order in which they are registered. If a registered resolver returns true (specifying that it handles the specified autoBinding), no further code will be executed for that autoBinding. This allows auto binding resolvers to not only implement new/custom binding strings, but it also lets them override existing/built-in ones. For this reason, you should ensure that you ONLY return true if you explicitly handle a custom auto binding; return false otherwise. Note that the custom resolver is called only once for a RenderState object when its node binding is initially set. This occurs when a material is initially bound to a renderable (Model, Terrain, etc) that belongs to a Node. The resolver is NOT called each frame or each time the RenderState is bound. Therefore, when implementing custom auto bindings for values that change over time, you should bind a method pointer to the passed in MaterialParaemter using the MaterialParameter::bindValue method. This way, the bound method will be called each frame to set an updated value into the MaterialParameter. If no registered resolvers explicitly handle an auto binding, the binding will attempt to be resolved using the internal/built-in resolver, which is able to handle any auto bindings found in the RenderState::AutoBinding enumeration. When an instance of a class that extends AutoBindingResolver is created, it is automatically registered as a custom auto binding handler. Likewise, it is automatically deregistered on destruction. Destructor. Constructor. Called when an unrecognized material auto binding is encountered during material loading. Implemenations of this method should do a string comparison on the passed in name parameter and decide whether or not they should handle the parameter. If the parameter is not handled, false should be returned so that other auto binding resolvers get a chance to handle the parameter. Otherwise, the parameter should be set or bound and true should be returned.
http://gameplay3d.github.io/GamePlay/api/classgameplay_1_1_render_state_1_1_auto_binding_resolver.html
CC-MAIN-2017-17
en
refinedweb
I am a beginner in programming. I am learning C, and our teacher has asked us to swap two numbers without using any other variables.

It is simpler than you thought. Here is the code:

#include <stdio.h>

int main() {
    int x = 10, y = 5;

    // Code to swap 'x' and 'y'
    x = x + y; // x now becomes 15
    y = x - y; // y becomes 10
    x = x - y; // x becomes 5

    printf("After Swapping: x = %d, y = %d", x, y);

    return 0;
}
https://codedump.io/share/0NFJom6VAvOB/1/how-to-swap-two-number-with-using-no-other-variable
CC-MAIN-2017-17
en
refinedweb
The detection utilities are a set of scripts in the core asset package that provide a convenient way to detect what a user’s hand is doing. For example, you can detect when the fingers of a hand are curled or extended, whether a finger or palm are pointing in a particular direction, or whether the hand or fingertip are close to one of a set of target objects. the Detection Examples package, which also contains an additional scene that illustrates how to use detectors. To use a detector, add it to a scene as a component of a game object. In general, it makes the most sense to put the script on an object related to its function, for example, to put a detector you are going to use to detect the state of a thumb on the thumb object itself. You can use the HandAttachments prefab from the Attachments module to separate the concerns of visual representation and physics interaction from game logic. The HandAttachments prefab exposes the most important transforms for the parts of the hand. Once the detector is added to the scene, you can set its properties in the Unity Inspector. The properties vary by detector, but most include the following: When a detector turns on (activates) or turns off (deactivates), it dispatches standard Unity events. You can hook up these events in the Unity Inspector panel to pretty much anu Unity game object or its components, or to scripts that you write yourself. The primary Detector events are OnActivate and OnDeactivate. Some detectors dispatch additional events. You can combine multiple detectors to create more complex behavior using the DetectorLogicGate script. A logic gate takes any number of other detectors as input and outputs a single boolean. It is a type of Detector object, so it also dispatches OnActivate and OnDeactivate events. You can set logic gates to be AND gates (all inputs must be true for the output to be true) or OR gates (the output is true if any input is true). You can also negate the output to configure the gate as a NAND or NOR gate. Since a logic gate is, itself, a Detector, you can hook up multiple logic gates to create arbitrarily complex logic. However, if you have more than a couple of gates, you should consider whether it is more maintainable to just write a script that encompasses that logic. If you do connect multiple gates, uncheck the “Add All Sibling Detectors” option and drag the proper Detectors to the gate’s Detector list manually. The HandAttachments prefab has a AttachmentController script that exposes two methods, Activate() and Deactivate() that you can hook directly to the event dispatchers of a detector. The AttachmentController script enables child game objects of the attachment controller when Activate() is called and disables them when Deactivate() is called. Thus, you can use detectors to turn objects attached to the hand on or off. The following collection of ideas illustrate how to use detectors to implement behaviors and interaction in your application. To detect a “Thumb’s Up” use an ExtendedFingerDetector to check that the thumb is the only extended finger and a FingerDirectionDetector to check detect when the thumb is pointing up. Combine these detectors with an AND-type logic gate: You can place these components together on the Thumb transform of a HandAttachment (or anywhere convenient on a HandModel really). To detect when a palm is facing the camera use an ExtendedFingerDetector to check that all the fingers are extended and a PalmDirectionDetector to check detect when the palm is facing the camera. 
Combine these detectors with an AND-type logic gate: You can place these components together on the Palm transform of a HandAttachment (or anywhere convenient on a HandModel really). Picking up objects in Unity can be fairly complex and have many edge-cases that need to be taken into account. Things get especially complex when you want to pick up objects that also have rigidbodies that you want to otherwise collide with the hands. This challenge is the reason Leap Motion created the Interaction Engine. For the simplest cases, however, you can use detectors to determine when the hand should pickup or release an object and write Unity scripts to do the actual object movement or re-parenting. The following example uses a ProximityDetector to select the object to pick up; a PinchDetector to trigger pick-up and release, and a simple custom script to attach and detach objects to the hand’s pick up point. Add the following components to the PinchPoint game object of a HandAttachment: ProximityDetector – add any objects eligible to be picked up to the Target Objects list. (You can also use tags or layers to identify potential targets.) PinchDetector – The default settings work, but you can adjust the Activate and Deactivate Pinch Distance properties to adjust how far or close the thumb and index finger must be apart to pick up or release an object. Add a new script component named “Pickup” using the following code: using UnityEngine; using Leap.Unity; public class Pickup : MonoBehaviour { GameObject _target; public void setTarget(GameObject target) { if (_target == null) { _target = target; } } public void pickupTarget() { if (_target) { StartCoroutine(changeParent()); Rigidbody rb = _target.gameObject.GetComponent<Rigidbody>(); if(rb != null) { rb.isKinematic = true; } } } //Avoids object jumping when passing from hand to hand. IEnumerator changeParent() { yield return null; if(_target != null) _target.transform.parent = transform; } public void releaseTarget() { if (_target && _target.activeInHierarchy) { if (_target.transform.parent == transform) { //Only reset if we are still the parent Rigidbody rb = _target.gameObject.GetComponent<Rigidbody>(); if (rb != null) { rb.isKinematic = false; } _target.transform.parent = null; } _target = null; } } public void clearTarget(){ _target = null; } } Finally, set the detector event dispatchers to call the Pickup script methods: This script is sufficient to pick up game objects with and without rigidbodies and pass them from hand-to-hand. To avoid collision problems, the script turns rigid bodies to kinematic when they are picked up and turns them to non-kinematic when they are released. To create your own Detector classes, you must extend the Detector base class and implement logic that calls Activate() when your detector turns on and Deactivate() when it turns off. Most of the provided detector scripts use a coroutine that checks the watched state. You can do the computation in one of Unity’s Update() callbacks, but it may be less efficient to do so if you don’t need to check every Unity frame. It is also a good idea to implement a gizmo drawing script when possible so that you can see how the detector is working while looking at your hands. The following code is a bare-bones template for a Detector implementation. To complete this template, you would, at a minimum, add the logic to access the tracking data you are interested in and check whether it satisfies some criteria. You can use the existing Detector implementations for further examples. 
using UnityEngine; using UnityEngine.Events; using Leap; using Leap.Unity; public class CustomDetector : Detector { public float Period = .1f; //seconds public float OnValue = 1.0f; public float OffValue = 1.5f; private float gizmoSize = .1f; private IEnumerator watcherCoroutine; void Awake(){ watcherCoroutine = watcher(); } void OnEnable () { StopCoroutine(watcherCoroutine); StartCoroutine(watcherCoroutine); } void OnDisable () { StopCoroutine(watcherCoroutine); } IEnumerator watcher(){ float watchedValue = 20; while(true){ //Your logic to compute or check the current watchedValue goes here if(watchedValue > OffValue){ Activate(); } if(watchedValue < OnValue){ Deactivate(); } yield return new WaitForSeconds(Period); } } #if UNITY_EDITOR void OnDrawGizmos(){ if(IsActive){ Gizmos.color = OnColor; } else { Gizmos.color = OffColor; } Gizmos.DrawWireSphere(transform.position, gizmoSize, OnValue); Gizmos.color = LimitColor; Gizmos.DrawWireSphere(transform.position, gizmoSize, OffValue); } #endif }
https://developer-archive.leapmotion.com/documentation/unity/unity/Unity_DetectionUtilities.html
CC-MAIN-2017-17
en
refinedweb
thread_pool_create() Create a thread pool handle Synopsis: #include <sys/iofunc.h> #include <sys/dispatch.h> thread_pool_t * thread_pool_create ( thread_pool_attr_t * pool_attr, unsigned flags ); Since: BlackBerry 10.0.0:; const char *tid_name; unsigned reserved[7]; } thread_pool_attr_t;. - tid_name - NULL, or a pointer to a null-terminated name for the threads in the pool. If set, this string is passed to pthread_setname_np() when the thread pool creates a new thread. The scope of the tid_name string must match the lifetime of the thread pool itself. For example, this is valid: pool_attr.tid_name = "fsys_resmgr"; but using a local or automatic variable like this isn't: { char name[32]; snprintf(name, sizeof(name), "cam %d:%d", cam.path, cam.target); pool_attr.tid_name = name; ... return; } Errors: - ENOMEM - Insufficient memory to allocate internal data structures. Examples: Here's a simple multithreaded resource manager: /* Define an appropriate interrupt number: */ #define INTNUM 0 #include <stdio.h> #include <stddef.h> #include <stdlib.h> #include <string; int id; if((dpp = dispatch_create()) == NULL) { fprintf( stderr, "%s: Unable to allocate dispatch handle.\n", argv[0] ); return EXIT_FAILURE; } memset( &pool_attr, 0, sizeof pool_attr ); pool_attr.handle = dpp; pool_attr.context_alloc = (void *) dispatch_context_alloc; pool_attr.block_func = (void *) dispatch_block; pool_attr.unblock_func = (void *) dispatch_unblock; pool_attr.handler_func = (void *) dispatch_handler; pool_attr.context_free = (void *) dispatch_context_free; pool_attr.lo_water = 2; pool_attr.hi_water = 4; pool_attr.increment = 1; pool_attr.maximum = 50; pool_attr.tid_name = "my_thread_pool"; if((tpp = thread_pool_create( &pool_attr, POOL_FLAG_EXIT_SELF)) == NULL ) { fprintf(stderr, "%s: Unable to initialize thread pool.\n", argv[0]); return EXIT_FAILURE; } iofunc_func_init( _RESMGR_CONNECT_NFUNCS, &connect_funcs, _RESMGR_IO_NFUNCS, &io_funcs ); iofunc_attr_init( &attr, S_IFNAM | 0666, 0, 0 ); memset( &resmgr_attr, 0, sizeof resmgr_attr ); resmgr_attr.nparts_max = 1; resmgr_attr.msg_max_size = 2048; if((id = resmgr_attach( dpp, &resmgr_attr, "/dev/mynull", _FTYPE_ANY, 0, &connect_funcs, &io_funcs, &attr )) == -1) { fprintf( stderr, "%s: Unable to attach name.\n", argv[0] ); return EXIT_FAILURE; } /* Start the thread which will handle interrupt events. */ pthread_create ( NULL, NULL, interrupt_thread, NULL ); /* Never returns */ thread_pool_start( tpp ); return EXIT_SUCCESS; } For more examples using the dispatch interface, see dispatch_create(), message_attach(), and resmgr_attach(). Classification: Last modified: 2014-06-24 Got questions about leaving a comment? Get answers from our Disqus FAQ.comments powered by Disqus
http://developer.blackberry.com/native/reference/core/com.qnx.doc.neutrino.lib_ref/topic/t/thread_pool_create.html
CC-MAIN-2017-17
en
refinedweb
GitPython Tutorial¶ GitPython provides object model access to your git repository. This tutorial is composed of multiple sections, most of which explains a real-life usecase. All code presented here originated from test_docs.py to assure correctness. Knowing this should also allow you to more easily run the code for your own testing purposes, all you need is a developer installation of git-python. Meet the Repo type¶ The first step is to create a git.Repo object to represent your repository. from git import Repo join = osp.join # rorepo is a Repo instance pointing to the git-python repository. # For all you know, the first argument to Repo is a path to the repository # you want to work with repo = Repo(self.rorepo.working_tree_dir) assert not repo.bare In the above example, the directory self.rorepo.working_tree_dir equals /Users/mtrier/Development/git-python and is my working repository which contains the .git directory. You can also initialize GitPython with a bare repository. bare_repo = Repo.init(join(rw_dir, 'bare-repo'), bare=True) assert bare_repo.bare A repo object provides high-level access to your data, it allows you to create and delete heads, tags and remotes and access the configuration of the repository. repo.config_reader() # get a config reader for read-only access with repo.config_writer(): # get a config writer to change configuration pass # call release() to be sure changes are written and locks are released Query the active branch, query untracked files or whether the repository data has been modified. assert not bare_repo.is_dirty() # check the dirty state repo.untracked_files # retrieve a list of untracked files # ['my_untracked_file'] Clone from existing repositories or initialize new empty ones. cloned_repo = repo.clone(join(rw_dir, 'to/this/path')) assert cloned_repo.__class__ is Repo # clone an existing repository assert Repo.init(join(rw_dir, 'path/for/new/repo')).__class__ is Repo Archive the repository contents to a tar file. with open(join(rw_dir, 'repo.tar'), 'wb') as fp: repo.archive(fp) Advanced Repo Usage¶ And of course, there is much more you can do with this type, most of the following will be explained in greater detail in specific tutorials. Don’t worry if you don’t understand some of these examples right away, as they may require a thorough understanding of gits inner workings. Query relevant repository paths ... assert osp.isdir(cloned_repo.working_tree_dir) # directory with your work files assert cloned_repo.git_dir.startswith(cloned_repo.working_tree_dir) # directory containing the git repository assert bare_repo.working_tree_dir is None # bare repositories have no working tree Heads Heads are branches in git-speak. References are pointers to a specific commit or to other references. Heads and Tags are a kind of references. GitPython allows you to query them rather intuitively. self.assertEqual(repo.head.ref, repo.heads.master, # head is a sym-ref pointing to master "It's ok if TC not running from `master`.") self.assertEqual(repo.tags['0.3.5'], repo.tag('refs/tags/0.3.5')) # you can access tags in various ways too self.assertEqual(repo.refs.master, repo.heads['master']) # .refs provides all refs, ie heads ... if 'TRAVIS' not in os.environ: self.assertEqual(repo.refs['origin/master'], repo.remotes.origin.refs.master) # ... remotes ... self.assertEqual(repo.refs['0.3.5'], repo.tags['0.3.5']) # ... and tags You can also create new heads ... new_branch = cloned_repo.create_head('feature') # create a new branch ... 
assert cloned_repo.active_branch != new_branch # which wasn't checked out yet ... self.assertEqual(new_branch.commit, cloned_repo.active_branch.commit) # pointing to the checked-out commit # It's easy to let a branch point to the previous commit, without affecting anything else # Each reference provides access to the git object it points to, usually commits assert new_branch.set_commit('HEAD~1').commit == cloned_repo.active_branch.commit.parents[0] ... and tags ... past = cloned_repo.create_tag('past', ref=new_branch, message="This is a tag-object pointing to %s" % new_branch.name) self.assertEqual(past.commit, new_branch.commit) # the tag points to the specified commit assert past.tag.message.startswith("This is") # and its object carries the message provided now = cloned_repo.create_tag('now') # This is a tag-reference. It may not carry meta-data assert now.tag is None You can traverse down to git objects through references and other objects. Some objects like commits have additional meta-data to query. assert now.commit.message != past.commit.message # You can read objects directly through binary streams, no working tree required assert (now.commit.tree / 'VERSION').data_stream.read().decode('ascii').startswith('2') # You can traverse trees as well to handle all contained files of a particular commit file_count = 0 tree_count = 0 tree = past.commit.tree for item in tree.traverse(): file_count += item.type == 'blob' tree_count += item.type == 'tree' assert file_count and tree_count # we have accumulated all directories and files self.assertEqual(len(tree.blobs) + len(tree.trees), len(tree)) # a tree is iterable on its children Remotes allow to handle fetch, pull and push operations, while providing optional real-time progress information to progress delegates._tree_dir) assert origin.exists() for fetch_info in origin.fetch(progress=MyProgressPrinter()): print("Updated %s to %s" % (fetch_info.ref, fetch_info.commit)) # create a local branch at the latest fetched master. We specify the name statically, but you have all # information to do it programatically as well. bare_master = bare_repo.create_head('master', origin.refs.master) bare_repo.head.set_reference(bare_master) assert not bare_repo.delete_remote(origin).exists() # push and pull behave very similarly The index is also called stage in git-speak. It is used to prepare new commits, and can be used to keep results of merge operations. Our index implementation allows to stream date into the index, which is useful for bare repositories that do not have a working tree. 
self.assertEqual(new_branch.checkout(), cloned_repo.active_branch) # checking out branch adjusts the wtree self.assertEqual(new_branch.commit, past.commit) # Now the past is checked out new_file_path = osp.join(cloned_repo.working_tree_dir, 'my-new-file') open(new_file_path, 'wb').close() # create new file in working tree cloned_repo.index.add([new_file_path]) # add it to the index # Commit the changes to deviate masters history cloned_repo.index.commit("Added a new file in the past - for later merege") # prepare a merge master = cloned_repo.heads.master # right-hand side is ahead of us, in the future merge_base = cloned_repo.merge_base(new_branch, master) # allwos for a three-way merge cloned_repo.index.merge_tree(master, base=merge_base) # write the merge result into index cloned_repo.index.commit("Merged past and now into future ;)", parent_commits=(new_branch.commit, master.commit)) # now new_branch is ahead of master, which probably should be checked out and reset softly. # note that all these operations didn't touch the working tree, as we managed it ourselves. # This definitely requires you to know what you are doing :) ! assert osp.basename(new_file_path) in new_branch.commit.tree # new file is now in tree master.commit = new_branch.commit # let master point to most recent commit cloned_repo.head.reference = master # we adjusted just the reference, not the working tree or index Submodules represent all aspects of git submodules, which allows you query all of their related information, and manipulate in various ways. # create a new submodule and check it out on the spot, setup to track master branch of `bare_repo` # As our GitPython repository has submodules already that point to github, make sure we don't # interact with them for sm in cloned_repo.submodules: assert not sm.remove().exists() # after removal, the sm doesn't exist anymore sm = cloned_repo.create_submodule('mysubrepo', 'path/to/subrepo', url=bare_repo.git_dir, branch='master') # .gitmodules was written and added to the index, which is now being committed cloned_repo.index.commit("Added submodule") assert sm.exists() and sm.module_exists() # this submodule is defintely available sm.remove(module=True, configuration=False) # remove the working tree assert sm.exists() and not sm.module_exists() # the submodule itself is still available # update all submodules, non-recursively to save time, this method is very powerful, go have a look cloned_repo.submodule_update(recursive=False) assert sm.module_exists() # The submodules working tree was checked out by update Examining References¶ References are the tips of your commit graph from which you can easily examine the history of your project. import git repo = git.Repo.clone_from(self._small_repo_url(), osp.join(rw_dir, 'repo'), branch='master') heads = repo.heads master = heads.master # lists can be accessed by name for convenience master.commit # the commit pointed to by head called master master.rename('new_name') # rename heads master.rename('master') Tags are (usually immutable) references to a commit and/or a tag object. tags = repo.tags tagref = tags[0] tagref.tag # tags may have tag objects carrying additional information tagref.commit # but they always point to commits repo.delete_tag(tagref) # delete or repo.create_tag("my_tag") # create tags using the repo for convenience A symbolic reference is a special case of a reference as it points to another reference instead of a commit. 
head = repo.head # the head points to the active branch/ref master = head.reference # retrieve the reference the head points to master.commit # from here you use it as any other reference Access the reflog easily. log = master.log() log[0] # first (i.e. oldest) reflog entry log[-1] # last (i.e. most recent) reflog entry Modifying References¶ You can easily create and delete reference types or modify where they point to. new_branch = repo.create_head('new') # create a new one new_branch.commit = 'HEAD~10' # set branch to another commit without changing index or working trees repo.delete_head(new_branch) # delete an existing head - only works if it is not checked out Create or delete tags the same way except you may not change them afterwards. new_tag = repo.create_tag('my_new_tag', message='my message') # You cannot change the commit a tag points to. Tags need to be re-created self.failUnlessRaises(AttributeError, setattr, new_tag, 'commit', repo.commit('HEAD~1')) repo.delete_tag(new_tag) Change the symbolic reference to switch branches cheaply (without adjusting the index or the working tree). new_branch = repo.create_head('another-branch') repo.head.reference = new_branch Understanding Objects¶ In GitPython, all objects can be accessed through their common base, can be compared and hashed. They are usually not instantiated directly, but through references or specialized repository functions. hc = repo.head.commit hct = hc.tree hc != hct # @NoEffect hc != repo.tags[0] # @NoEffect hc == repo.head.reference.commit # @NoEffect Common fields are ... self.assertEqual(hct.type, 'tree') # preset string type, being a class attribute assert hct.size > 0 # size in bytes assert len(hct.hexsha) == 40 assert len(hct.binsha) == 20 Index objects are objects that can be put into git’s index. These objects are trees, blobs and submodules which additionally know about their path in the file system as well as their mode. self.assertEqual(hct.path, '') # root tree has no path assert hct.trees[0].path != '' # the first contained item has one though self.assertEqual(hct.mode, 0o40000) # trees have the mode of a linux directory self.assertEqual(hct.blobs[0].mode, 0o100644) # blobs have specific mode, comparable to a standard linux fs Access blob data (or any object data) using streams. hct.blobs[0].data_stream.read() # stream object to read data from hct.blobs[0].stream_data(open(osp.join(rw_dir, 'blob_data'), 'wb')) # write data to given stream The Commit object¶ Commit objects contain information about a specific commit. Obtain commits using references as done in Examining References or as follows. Obtain commits at the specified revision repo.commit('master') repo.commit('v0.8.1') repo.commit('HEAD~10') Iterate 50 commits, and if you need paging, you can specify a number of commits to skip. 
fifty_first_commits = list(repo.iter_commits('master', max_count=50)) assert len(fifty_first_commits) == 50 # this will return commits 21-30 from the commit list as traversed backwards master ten_commits_past_twenty = list(repo.iter_commits('master', max_count=10, skip=20)) assert len(ten_commits_past_twenty) == 10 assert fifty_first_commits[20:30] == ten_commits_past_twenty A commit object carries all sorts of meta-data headcommit = repo.head.commit assert len(headcommit.hexsha) == 40 assert len(headcommit.parents) > 0 assert headcommit.tree.type == 'tree' assert headcommit.author.name == 'Sebastian Thiel' assert isinstance(headcommit.authored_date, int) assert headcommit.committer.name == 'Sebastian Thiel' assert isinstance(headcommit.committed_date, int) assert headcommit.message != '' Note: date time is represented in a seconds since epoch format. Conversion to human readable form can be accomplished with the various time module methods. import time time.asctime(time.gmtime(headcommit.committed_date)) time.strftime("%a, %d %b %Y %H:%M", time.gmtime(headcommit.committed_date)) You can traverse a commit’s ancestry by chaining calls to parents assert headcommit.parents[0].parents[0].parents[0] == repo.commit('master^^^') The above corresponds to master^^^ or master~3 in git parlance. The Tree object¶ A tree records pointers to the contents of a directory. Let’s say you want the root tree of the latest commit on the master branch tree = repo.heads.master.commit.tree assert len(tree.hexsha) == 40 Once you have a tree, you can get its contents assert len(tree.trees) > 0 # trees are subdirectories assert len(tree.blobs) > 0 # blobs are files assert len(tree.blobs) + len(tree.trees) == len(tree) It is useful to know that a tree behaves like a list with the ability to query entries by name self.assertEqual(tree['smmap'], tree / 'smmap') # access by index and by sub-path for entry in tree: # intuitive iteration of tree members print(entry) blob = tree.trees[0].blobs[0] # let's get a blob in a sub-tree assert blob.name assert len(blob.path) < len(blob.abspath) self.assertEqual(tree.trees[0].name + '/' + blob.name, blob.path) # this is how relative blob path generated self.assertEqual(tree[blob.path], blob) # you can use paths like 'dir/file' in tree There is a convenience method that allows you to get a named sub-object from a tree with a syntax similar to how paths are written in a posix system assert tree / 'smmap' == tree['smmap'] assert tree / blob.path == tree[blob.path] You can also get a commit’s root tree directly from the repository # This example shows the various types of allowed ref-specs assert repo.tree() == repo.head.commit.tree past = repo.commit('HEAD~5') assert repo.tree(past) == repo.tree(past.hexsha) self.assertEqual(repo.tree('v0.8.1').type, 'tree') # yes, you can provide any refspec - works everywhere As trees allow direct access to their intermediate child entries only, use the traverse method to obtain an iterator to retrieve entries recursively assert len(tree) < len(list(tree.traverse())) Note If trees return Submodule objects, they will assume that they exist at the current head’s commit. The tree it originated from may be rooted at another commit though, that it doesn’t know. That is why the caller would have to set the submodule’s owning or parent commit using the set_parent_commit(my_commit) method. The Index Object¶ The git index is the stage containing changes to be written with the next commit or where merges finally have to take place. 
You may freely access and manipulate this information using the IndexFile object. Modify the index with ease index = repo.index # The index contains all blobs in a flat list assert len(list(index.iter_blobs())) == len([o for o in repo.head.commit.tree.traverse() if o.type == 'blob']) # Access blob objects for (path, stage), entry in index.entries.items(): # @UnusedVariable pass new_file_path = osp.join(repo.working_tree_dir, 'new-file-name') open(new_file_path, 'w').close() index.add([new_file_path]) # add a new file to the index index.remove(['LICENSE']) # remove an existing one assert osp.isfile(osp.join(repo.working_tree_dir, 'LICENSE')) # working tree is untouched self.assertEqual(index.commit("my commit message").type, 'commit') # commit changed index repo.active_branch.commit = repo.commit('HEAD~1') # forget last commit from git import Actor author = Actor("An author", "author@example.com") committer = Actor("A committer", "committer@example.com") # commit by commit message and author and committer index.commit("my commit message", author=author, committer=committer) Create new indices from other trees or as result of a merge. Write that result to a new index file for later inspection. from git import IndexFile # loads a tree into a temporary index, which exists just in memory IndexFile.from_tree(repo, 'HEAD~1') # merge two trees three-way into memory merge_index = IndexFile.from_tree(repo, 'HEAD~10', 'HEAD', repo.merge_base('HEAD~10', 'HEAD')) # and persist it merge_index.write(osp.join(rw_dir, 'merged_index')) Handling Remotes¶ Remotes are used as alias for a foreign repository to ease pushing to and fetching from them empty_repo = git.Repo.init(osp.join(rw_dir, 'empty')) origin = empty_repo.create_remote('origin', repo.remotes.origin.url) assert origin.exists() assert origin == empty_repo.remotes.origin == empty_repo.remotes['origin'] origin.fetch() # assure we actually have data. fetch() returns useful information # Setup a local tracking branch of a remote branch empty_repo.create_head('master', origin.refs.master) # create local branch "master" from remote "master" empty_repo.heads.master.set_tracking_branch(origin.refs.master) # set local "master" to track remote "master empty_repo.heads.master.checkout() # checkout local "master" to working tree # Three above commands in one: empty_repo.create_head('master', origin.refs.master).set_tracking_branch(origin.refs.master).checkout() # rename remotes origin.rename('new_origin') # push and pull behaves similarly to `git push|pull` origin.pull() origin.push() # assert not empty_repo.delete_remote(origin).exists() # create and delete remotes You can easily access configuration information for a remote by accessing options as if they where attributes. The modification of remote configuration is more explicit though. assert origin.url == repo.remotes.origin.url with origin.config_writer as cw: cw.set("pushurl", "other_url") # Please note that in python 2, writing origin.config_writer.set(...) is totally safe. # In py3 __del__ calls can be delayed, thus not writing changes in time. You can also specify per-call custom environments using a new context manager on the Git command, e.g. for using a specific SSH key. 
The following example works with git starting at v2.3: ssh_cmd = 'ssh -i id_deployment_key' with repo.git.custom_environment(GIT_SSH_COMMAND=ssh_cmd): repo.remotes.origin.fetch() This one sets a custom script to be executed in place of ssh, and can be used in git prior to v2.3: ssh_executable = os.path.join(rw_dir, 'my_ssh_executable.sh') with repo.git.custom_environment(GIT_SSH=ssh_executable): repo.remotes.origin.fetch() Here’s an example executable that can be used in place of the ssh_executable above: #!/bin/sh ID_RSA=/var/lib/openshift/5562b947ecdd5ce939000038/app-deployments/id_rsa exec /usr/bin/ssh -o StrictHostKeyChecking=no -i $ID_RSA "$@" Please note that the script must be executable (i.e. chomd +x script.sh). StrictHostKeyChecking=no is used to avoid prompts asking to save the hosts key to ~/.ssh/known_hosts, which happens in case you run this as daemon. You might also have a look at Git.update_environment(...) in case you want to setup a changed environment more permanently. Submodule Handling¶ Submodules can be conveniently handled using the methods provided by GitPython, and as an added benefit, GitPython provides functionality which behave smarter and less error prone than its original c-git implementation, that is GitPython tries hard to keep your repository consistent when updating submodules recursively or adjusting the existing configuration. repo = self.rorepo sms = repo.submodules assert len(sms) == 1 sm = sms[0] self.assertEqual(sm.name, 'gitdb') # git-python has gitdb as single submodule ... self.assertEqual(sm.children()[0].name, 'smmap') # ... which has smmap as single submodule # The module is the repository referenced by the submodule assert sm.module_exists() # the module is available, which doesn't have to be the case. assert sm.module().working_tree_dir.endswith('gitdb') # the submodule's absolute path is the module's path assert sm.abspath == sm.module().working_tree_dir self.assertEqual(len(sm.hexsha), 40) # Its sha defines the commit to checkout assert sm.exists() # yes, this submodule is valid and exists # read its configuration conveniently assert sm.config_reader().get_value('path') == sm.path self.assertEqual(len(sm.children()), 1) # query the submodule hierarchy In addition to the query functionality, you can move the submodule’s repository to a different path < move(...)>, write its configuration < config_writer().set_value(...).release()>, update its working tree < update(...)>, and remove or add them < remove(...), add(...)>. If you obtained your submodule object by traversing a tree object which is not rooted at the head’s commit, you have to inform the submodule about its actual commit to retrieve the data from by using the set_parent_commit(...) method. The special RootModule type allows you to treat your master repository as root of a hierarchy of submodules, which allows very convenient submodule handling. Its update(...) method is reimplemented to provide an advanced way of updating submodules as they change their values over time. The update method will track changes and make sure your working tree and submodule checkouts stay consistent, which is very useful in case submodules get deleted or added to name just two of the handled cases. Additionally, GitPython adds functionality to track a specific branch, instead of just a commit. 
Supported by customized update methods, you are able to automatically update submodules to the latest revision available in the remote repository, as well as to keep track of changes and movements of these submodules. To use it, set the name of the branch you want to track to the submodule.$name.branch option of the .gitmodules file, and use GitPython update methods on the resulting repository with the to_latest_revision parameter turned on. In the latter case, the sha of your submodule will be ignored, instead a local tracking branch will be updated to the respective remote branch automatically, provided there are no local changes. The resulting behaviour is much like the one of svn::externals, which can be useful in times. Obtaining Diff Information¶ Diffs can generally be obtained by subclasses of Diffable as they provide the diff method. This operation yields a DiffIndex allowing you to easily access diff information about paths. Diffs can be made between the Index and Trees, Index and the working tree, trees and trees as well as trees and the working copy. If commits are involved, their tree will be used implicitly. hcommit = repo.head.commit hcommit.diff() # diff tree against index hcommit.diff('HEAD~1') # diff tree against previous tree hcommit.diff(None) # diff tree against working tree index = repo.index index.diff() # diff index against itself yielding empty diff index.diff(None) # diff index against working copy index.diff('HEAD') # diff index against current HEAD tree The item returned is a DiffIndex which is essentially a list of Diff objects. It provides additional filtering to ease finding what you might be looking for. # Traverse added Diff objects only for diff_added in hcommit.diff('HEAD~1').iter_change_type('A'): print(diff_added) Use the diff framework if you want to implement git-status like functionality. - A diff between the index and the commit’s tree your HEAD points to - use repo.index.diff(repo.head.commit) - A diff between the index and the working tree - use repo.index.diff(None) - A list of untracked files - use repo.untracked_files Switching Branches¶ To switch between branches similar to git checkout, you effectively need to point your HEAD symbolic reference to the new branch and reset your index and working copy to match. A simple manual way to do it is the following one # Reset our working tree 10 commits into the past past_branch = repo.create_head('past_branch', 'HEAD~10') repo.head.reference = past_branch assert not repo.head.is_detached # reset the index and working tree to match the pointed-to commit repo.head.reset(index=True, working_tree=True) # To detach your head, you have to point to a commit directy repo.head.reference = repo.commit('HEAD~5') assert repo.head.is_detached # now our head points 15 commits into the past, whereas the working tree # and index are 10 commits in the past The previous approach would brutally overwrite the user’s changes in the working copy and index though and is less sophisticated than a git-checkout. The latter will generally prevent you from destroying your work. Use the safer approach as follows. # checkout the branch using git-checkout. It will fail as the working tree appears dirty self.failUnlessRaises(git.GitCommandError, repo.heads.master.checkout) repo.heads.past_branch.checkout() Initializing a repository¶ In this example, we will initialize an empty repository, add an empty file to the index, and commit the change. 
import git repo_dir = osp.join(rw_dir, 'my-new-repo') file_name = osp.join(repo_dir, 'new-file') r = git.Repo.init(repo_dir) # This function just creates an empty file ... open(file_name, 'wb').close() r.index.add([file_name]) r.index.commit("initial commit") Please have a look at the individual methods as they usually support a vast amount of arguments to customize their behavior. Using git directly¶ In case you are missing functionality as it has not been wrapped, you may conveniently use the git command directly. It is owned by each repository instance. git = repo.git git.checkout('HEAD', b="my_new_branch") # create a new branch git.branch('another-new-one') git.branch('-D', 'another-new-one') # pass strings for full control over argument order git.for_each_ref() # '-' becomes '_' when calling it The return value will by default be a string of the standard output channel produced by the command. Keyword arguments translate to short and long keyword arguments on the command-line. The special notion git.command(flag=True) will create a flag without value like command --flag. If None is found in the arguments, it will be dropped silently. Lists and tuples passed as arguments will be unpacked recursively to individual arguments. Objects are converted to strings using the str(...) function. Object Databases¶ git.Repo instances are powered by its object database instance which will be used when extracting any data, or when writing new objects. The type of the database determines certain performance characteristics, such as the quantity of objects that can be read per second, the resource usage when reading large data files, as well as the average memory footprint of your application. GitDB¶ The GitDB is a pure-python implementation of the git object database. It is the default database to use in GitPython 0.3. Its uses less memory when handling huge files, but will be 2 to 5 times slower when extracting large quantities small of objects from densely packed repositories: repo = Repo("path/to/repo", odbt=GitDB) GitCmdObjectDB¶ The git command database uses persistent git-cat-file instances to read repository information. These operate very fast under all conditions, but will consume additional memory for the process itself. When extracting large files, memory usage will be much higher than the one of the GitDB: repo = Repo("path/to/repo", odbt=GitCmdObjectDB) Git Command Debugging and Customization¶ Using environment variables, you can further adjust the behaviour of the git command. - GIT_PYTHON_TRACE - If set to non-0, all executed git commands will be shown as they happen - If set to full, the executed git command _and_ its entire output on stdout and stderr will be shown as they happen NOTE: All logging is outputted using a Python logger, so make sure your program is configured to show INFO-level messages. If this is not the case, try adding the following to your program:import logging logging.basicConfig(level=logging.INFO) - GIT_PYTHON_GIT_EXECUTABLE - If set, it should contain the full path to the git executable, e.g. c:\Program Files (x86)\Git\bin\git.exe on windows or /usr/bin/git on linux.
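For example, tracing can be switched on from inside a script before GitPython is imported (a small sketch — exporting the variable in the shell that launches the process works just as well):

import os
os.environ["GIT_PYTHON_TRACE"] = "full"   # must be set before the first `import git`

import logging
logging.basicConfig(level=logging.INFO)   # make the INFO-level trace visible

import git
repo = git.Repo("path/to/repo")
repo.git.status()   # the underlying `git status` call and its output are now logged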
http://gitpython.readthedocs.io/en/stable/tutorial.html
CC-MAIN-2017-17
en
refinedweb
cabal-install. But unfortunately I run into the same problem when installing the HTTP package, which is needed for cabal itself. So I am in a (pun totally intended, as you'll see later on) catch-22, and each time I feel being lost completely. But there is a cure to this particular problem: create a modified mv dist/build/autogen/Paths_HTTP.hs dist/build and patch it with $ diff -u dist/build/autogen/Paths_HTTP.hs dist/build --- dist/build/autogen/Paths_HTTP.hs 2013-02-27 20:07:01.437225000 +0100 +++ dist/build/Paths_HTTP.hs 2013-02-27 20:20:36.735526000 +0100 @@ -6,6 +6,7 @@ import Data.Version (Version(..)) import System.Environment (getEnv) +import Control.Exception version :: Version version = Version {versionBranch = [4000,2,8], versionTags = []} @@ -17,11 +18,14 @@ datadir = "/home/ggreif/share/HTTP-4000.2.8" libexecdir = "/home/ggreif/libexec" +hardCoded :: FilePath -> IOException -> IO FilePath +hardCoded dir = const $ return dir + getBinDir, getLibDir, getDataDir, getLibexecDir :: IO FilePath -getBinDir = catch (getEnv "HTTP_bindir") (\_ -> return bindir) -getLibDir = catch (getEnv "HTTP_libdir") (\_ -> return libdir) -getDataDir = catch (getEnv "HTTP_datadir") (\_ -> return datadir) -getLibexecDir = catch (getEnv "HTTP_libexecdir") (\_ -> return libexecdir) +getBinDir = catch (getEnv "HTTP_bindir") (hardCoded bindir) +getLibDir = catch (getEnv "HTTP_libdir") (hardCoded libdir) +getDataDir = catch (getEnv "HTTP_datadir") (hardCoded datadir) +getLibexecDir = catch (getEnv "HTTP_libexecdir") (hardCoded libexecdir) getDataFileName :: FilePath -> IO FilePath getDataFileName name = do This file is then preferably found by GHC and all is okay. Incidentally I already employed this trick in the past, but forgot about the details so I had to reinvent it again. After escaping this particular subhell of cabal I came up with this blog post in order to not lose my way in the future. Hopefully it helps you too.
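For readability, here are the four accessor functions as they look after the patch is applied (taken straight from the + lines of the diff above; bindir, libdir, datadir and libexecdir are the hard-coded paths already defined in Paths_HTTP.hs):

import Control.Exception
import System.Environment (getEnv)

hardCoded :: FilePath -> IOException -> IO FilePath
hardCoded dir = const $ return dir

getBinDir, getLibDir, getDataDir, getLibexecDir :: IO FilePath
getBinDir     = catch (getEnv "HTTP_bindir")     (hardCoded bindir)
getLibDir     = catch (getEnv "HTTP_libdir")     (hardCoded libdir)
getDataDir    = catch (getEnv "HTTP_datadir")    (hardCoded datadir)
getLibexecDir = catch (getEnv "HTTP_libexecdir") (hardCoded libexecdir)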
http://heisenbug.blogspot.de/2013/02/
CC-MAIN-2017-17
en
refinedweb
Odoo Help

I can't install my Odoo v7 module on Odoo v8?

I get an error when I try to install my customized module on Odoo v8:

ImportError: No module named osv

Answer: Change your import to:

from openerp.osv import fields, osv

Comment: Thank you for helping me, but I also need to install the restaurant module from v8 on v7 — can I do that?

Comment: I really need your help, I have the same problem using Odoo v8. You say:

from openerp.osv import fields, osv

But where is this done, and how? Thank you.
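For reference, a minimal v8-compatible model using that import looks like this (the model and field names below are placeholders for the example, not part of the original question):

from openerp.osv import fields, osv

class restaurant_table(osv.osv):
    _name = "restaurant.table"
    _columns = {
        'name': fields.char('Table Name'),
    }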
https://www.odoo.com/forum/help-1/question/i-can-t-install-my-odoo-v7-module-on-odoo-v8-module-64013
CC-MAIN-2017-17
en
refinedweb
. Extends "DESCRIPTION" in OODoc::Object. Extends "OVERLOADED" in OODoc::Object. Extends "METHODS" in OODoc::Object. Extends "Constructors" in OODoc::Object. -Option --Default skip_links undef The parser should not attempt to load modules which match the REGEXP or are equal or sub-namespace of STRING. More than one of these can be passed in an ARRAY. Extends "Inheritance knowledge" in OODoc::Object. Inherited,). Extends "Commonly used functions" in OODoc::Object. Inherited, see "Commonly used functions" in OODoc::Object Inherited, see "Commonly used functions" in OODoc::Object Extends Text blocks have to get the finishing touch in the final formatting phase. The parser has to fix the text block segments to create a formatter dependent output. Only a few formatters are predefined. A call to addManual() expects a new manual object (a OODoc::Manual), however an incompatible thing was passed. Usually, intended was a call to manualsForPackage() or mainManual(). This module is part of OODoc distribution version 2.01, built on November 11, 2015. Website: This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. See
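As an illustration of the skip_links constructor option described above (the module names and regexp here are made up for the example, not taken from the distribution):

use OODoc::Parser;

my $parser = OODoc::Parser->new
 ( skip_links => [ 'Win32::OLE', qr/^My::Private::/ ]
 );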
http://search.cpan.org/~markov/OODoc-2.01/lib/OODoc/Parser.pod
CC-MAIN-2017-17
en
refinedweb
Mark Thanks for your reply. If you can give me some hints to the following, I'll have all the ideas I need to be able to tell Breeze Designer developer what is going wrong with his implemetation of Python ActiveScripting macro capability. I am quite sure now that it is quite buggy and hopefully he can fix things. My first question: Breeze Desinger comes with Type Library used for reference. The file is: Breeze20.tlb. I was able to load it in Visual Basic 6.0 (from the References menu by browsing and choosing the tlb file) and look at the methods and properties of the scene object. However in PythonWin: >>> from win32com.client import pythoncom >>> pythoncom.LoadTypeLib("d:\winpov\breeze\program\Breeze20.tlb") Traceback (most recent call last): File "<interactive input>", line 1, in ? com_error: (-2147312566, 'Error loading type library/DLL.', None, None) Breeze20.tlb is used as a reference file. Why can't PythonWin load it? Now for the rest of the post: "Mark Hammond" <MarkH at ActiveState.com> wrote in message news:3B942FF4.9040006 at ActiveState.com... > Maan Hamze wrote: > > 2. Using Python ActiveScripting > > From inside the applicatio itself (Breeze Designer) there is a facility to > > run Macros with languages with ActiveScripting capabilities including > > Python. > > "Breeze.Scene" is created automatically when Breeze is started. > > a macro starts with $Scripting_Language > > > "Breeze.Scene" should not be created automatically - that would suck. > Hopefully what you mean is that a "Breeze" object is created, and it > should have a "Scene" attribute. > My mistake Mark. It was a Typo. This is what the Breeze doc mentions: "All functions are available though the Breeze Designer **scene** object. This object is pre-created when using macros **from within Breeze Designer**. That is to use any on the following functions a scene. should be added to the front of the function." > You could try printing "globals()" to see exactly what is in the > ActiveScripting namespace. If "Breeze.Scene" really does exist in the > namespace, then we will need to pull some tricks to work around a very > poor decision by the povray people. > Actually Breeze Designer is a povray modeller, but it is not done by the povray people. It translates a scene/model into povray syntax. >From inside Breeze Designer I had the macro: $Python import win32traceutil from win32com.client import Dispatch print globals() #to print into the Python Trace Collector of PythonWin scene=Dispatch("breeze.scene") #notice small case letters :) rest of code...... Is that the usual way of getting the object when scripting inside an application (not through PythonWin)? What I do not like about this program is that with VBScript, scene object seems to be given. But with other scripting languages one has to be able to get the object. So there must be some default implementation that allows VBscript to see it, but for Python to get it before using it (otherwise Python reports that scene is not a defined name). Please note that I used "breeze.scene" this time. I was using "Breeze.Scene" before because that is what is in the Windows Registry (it is not listed in PythonWin COM browser so I looked in the registry, found it, and used it.) And Breeze.Scene was working while scripting from within PythonWin. So it would never have occured to me to use breeze.scene. 1. breeze.scene IS working now from within a macro in Breeze. But Breeze is still crashing sometimes and it got nothing to do with Python. 
The problem is with Breeze Designer when it tries to open the OpenGL Perspective window. 2. That is what I am getting in the Python Trace Collector (by using print globals() in the macro): {'ax': <win32com.axscript.client.pyscript.AXScriptAttribute instance at 02A815BC>, 'win32traceutil': <module 'win32traceutil' from 'd:\python\win32\lib\win32traceutil.pyc'>, 'Scene': <NamedItemAttribute<ScriptItem at 44574700: Scene>>, etc......etc......... So, yep, it is....... Scene (not scene). But it is breeze.scene that is working not Breeze.Scene. Maan
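P.S. One thing I still want to rule out for the LoadTypeLib failure above: the path literal "d:\winpov\breeze\program\Breeze20.tlb" contains "\b", which Python reads as a backspace escape, so the library may simply be getting a mangled filename. A raw string (or forward slashes) avoids that:

from win32com.client import pythoncom
tlb = pythoncom.LoadTypeLib(r"d:\winpov\breeze\program\Breeze20.tlb")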
https://mail.python.org/pipermail/python-list/2001-September/102834.html
CC-MAIN-2017-17
en
refinedweb
Odoo Help

How to restrict the number of records that can be created

Hi friends, is there any way to restrict the number of records that can be created for a single object or table? My requirement is to allow only 3 contacts for my company. Is there any way to do so? Thanks & Regards, Atchuthan

Hello, you can do this by overriding create(), like this:

class test(osv.osv):
    _name = "test"

    def create(self, cr, uid, vals, context=None):
        limit = len(self.search(cr, uid, [], context=context))
        if limit >= 15:
            raise osv.except_osv(_("Warning!"), _("Message to display"))
        else:
            return super(test, self).create(cr, uid, vals, context=context)

Here, test should be the name of your object. I have set the limit to 15 records.

Comment: Exactly what I would have answered. Maybe you must override write() too.

Comment: write() is used to update a record, and the question is about restricting the number of records created, so there is no need to override write().

Comment: When you add a contact to an already existing customer, doesn't it go through the customer's write()?

Comment: @Xsias I have fetched all the records of the 'test' object. We need to pass a domain according to our requirement.
https://www.odoo.com/forum/help-1/question/how-to-restrict-number-of-records-to-create-27099
CC-MAIN-2017-17
en
refinedweb
t_rcvuderr - receive a unit data error indication

#include <xti.h>

int t_rcvuderr(
    int fd,
    struct t_uderr *uderr)

This function is used in connectionless-mode to receive information concerning an error on a previously sent data unit. If uderr is a null pointer, the error information to be returned in uderr will be discarded.

- [TNOTSUPPORT] - This function is not supported by the underlying transport provider.
- [TNOUDERR] - No unit data error indication currently exists on the specified transport endpoint.

See also: t_rcvudata(), t_sndudata().
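A minimal usage sketch (the buffer sizes and the surrounding event handling are illustrative only; the t_uderr members addr, opt and error follow the XTI definition):

#include <xti.h>
#include <stdio.h>

int report_uderr(int fd)
{
    struct t_uderr uderr;
    char addrbuf[128], optbuf[128];

    uderr.addr.buf = addrbuf;
    uderr.addr.maxlen = sizeof addrbuf;
    uderr.opt.buf = optbuf;
    uderr.opt.maxlen = sizeof optbuf;

    /* typically called after t_look(fd) has reported T_UDERR */
    if (t_rcvuderr(fd, &uderr) < 0)
        return -1;   /* t_errno holds TNOUDERR, TNOTSUPPORT, ... as listed above */

    fprintf(stderr, "unit data error, protocol code %ld\n", (long) uderr.error);
    return 0;
}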
http://pubs.opengroup.org/onlinepubs/007908775/xns/t_rcvuderr.html
CC-MAIN-2017-17
en
refinedweb
I bought a mechanical arm a few days ago. It looks a little cute in pictures, and it comes in two colors, black and white; I chose the black one. It has four servos. The first one controls the mechanical hand, so it can catch something. The second one controls the height of the arm and hand. The third servo controls the distance between the mechanical hand and the object. The last one turns the whole arm left or right.

When you finish the construction, how do you make it work?

- I chose an Arduino UNO board named Freaduino to control it.
- Connect the first servo to pin D0 on the UNO board.
- Connect the second servo to pin D1 on the UNO board.
- Connect the third servo to pin D2 on the UNO board.
- Connect the fourth servo to pin D3 on the UNO board.
- Check that every servo works normally. At the end of this post I will upload test code.

Now for the part I am really excited about: how to control this mechanical arm. We could use a potentiometer, a rocker (joystick) or Bluetooth. A potentiometer is awkward to use here, and a rocker is too common and not cool enough, so in the end I chose a Bluetooth module. The last step is connecting the Bluetooth module to the board. Because four servos are connected to the board, we give the board extra power; if we don't, the USB supply is not enough to drive them.

If we want to control the servos from the UNO board, we must put the Servo library into the libraries folder. The library will be uploaded at the end of this post. We can use the Servo class to create four variables, such as:

Servo Servo_Catch;
Servo Servo_Height;
Servo Servo_Distance;
Servo Servo_Direction;

And we must give them pin numbers so they control the corresponding servos. We use the attach() function to do it, such as:

Servo_Catch.attach(0);
Servo_Height.attach(1);
Servo_Distance.attach(2);
Servo_Direction.attach(3);

This initializes the servos. The most useful function of a servo is write(), which controls the servo's angle: pass a number as the parameter and the servo turns to the corresponding degree. For example, if I want the catching servo to turn to 30 degrees, I can write:

Servo_Catch.write(30);

Is it easy to use? Next, let's try to write a program that controls the mechanical arm over Bluetooth. Here I use the ElecfreaksCar library directly to receive data from the Bluetooth module; it makes things easy. We can use the ElecfreaksCar class to create a new variable named BluetoothModule, such as:

ElecfreaksCar BluetoothModule;

We use the recievedData() function to receive data from Bluetooth, and the getRoll() and getPitch() functions to read the rocker data from the app that controls the mechanical arm. Here is an example that uses the app to turn the LED on the UNO board on and off:

#include "ElecfreaksCar.h"

ElecfreaksCar BluetoothModule;
int ledPin = 13;

void setup()
{
  Serial.begin(115200);
  pinMode(ledPin, OUTPUT);
}

void loop()
{
  while(Serial.available())
  {
    uint8_t c = Serial.read();
    BluetoothModule.recievedData(&c, 1);
  }
  if(BluetoothModule.getRoll() > 125)
  {
    digitalWrite(ledPin, LOW);
  }
  else
  {
    digitalWrite(ledPin, HIGH);
  }
}
Now I list out functions about class of ElecfreaksCar: getRoll(); //it will return data of roll, from 0 to 250 getPitch(); //it will return data of pitch, from 0 to 250 setFun(void (*Function)()); //it will run the sentence of Function when user presses button of APP. This is program about the mechanical arm witch is written by me. And you can change it to create yourself program of mechanical arm. #include "ElecfreaksCar.h" #include "Servo.h" ElecfreaksCar BluetoothModule; //define a variable of class of ElecfreaksCar which is named BluetoothModule Servo Servo_Roll; //This is a servo which to turn direction Servo Servo_Distance; //This is a servo which is to control the distance between the machanical hand and object. Servo Servo_Catch; //THis is a servo which is to control the manchanical hand to catch or let go. float P=125.00; //This is value of pitch. This value is middle value when the rocker in the middle float R=125.00; //This is value of roll. This value is middle value when the rocker in the middle unsigned char Flag_Catch = 0; //This is a flag about if the machanical hand catch or not void Button() //This function is to be run when user touch the button of APP { if(Flag_Catch == 1) { Servo_Catch.write(0); //Catch Flag_Catch = 0; } else { Servo_Catch.write(90); //let go Flag_Catch = 1; } } void setup() { Serial.begin(115200); //baud rate BluetoothModule.setFun(Button); //The arduino will run the function which is in the parameter. In here, it will run the function of "Button" Servo_Catch.attach(2); //the servo of catch is connected with pin D2 Servo_Distance.attach(4); //the servo of controlling distance between machanical hand and object is connected with pin D4 Servo_Roll.attach(5); //the servo of controlling direction is connected with pin D5 } void loop() { while(Serial.available()) //if there is any data come from bluetooth, it will into the function of while { uint8_t c = Serial.read(); //read the data of serial com BluetoothModule.recievedData(&c, 1); //recieve the data P=(float)BluetoothModule.getPitch(); //get the data of pitch R=(float)BluetoothModule.getRoll(); //get the data of roll P = P * 0.72; //This is important. the value of the rocker of APP is from 0 to 250, But the degree of servo is from 0 degree to 180 degrees. //So we must make the value of pitch to multiplicative (180/250). R = R * 0.72; //the same as pitch } Servo_Distance.write((int)P); //make the servo to run the degree Servo_Roll.write((int)R); } I’m Yuno. See you next time. 😀 ElecfreaksCar APP Download: Libraries Download:
http://www.elecfreaks.com/8331.html
CC-MAIN-2017-17
en
refinedweb
Returning a pointer to a local variable from a function is something you should never do: the returned address still exists, but the object it points to has gone out of scope by the time the caller uses it. Nevertheless, people often disregard this, and so GCC 5.x and above try to make sure you can't rely on it.

Starting with GCC 5.0.0 (and, by extension, G++), a returned pointer to a local variable is replaced with a null pointer. Thus a program like this -

#include <iostream>
using namespace std;

int * createArr () {
    int arr[3] = {1,2,3};
    cout << "function " << arr << endl;
    return arr;
}

int main() {
    int * val = createArr();
    cout << " main " << val;
}

will print this with GCC 5:

function 0x7fff5e9e18f0
 main 0

In earlier versions of GCC (4.9 and below) it used to print this -

function 0x7fff5e9e18f0
 main 0x7fff5e9e18f0

TL;DR; In GCC 5.x and above, a pointer to a local variable returned from a function arrives in the caller as 0.
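If you do need to hand an array back to the caller, the usual fixes are to return storage that owns itself or to let the caller supply the buffer. A quick sketch (not from the original post):

#include <iostream>
#include <vector>
using namespace std;

// Option 1: return an object that owns its storage
vector<int> createVec() {
    return {1, 2, 3};
}

// Option 2: the caller owns the buffer
void fillArr(int *out, int n) {
    for (int i = 0; i < n; i++)
        out[i] = i + 1;
}

int main() {
    vector<int> v = createVec();
    cout << v.data() << endl;            // valid for as long as v lives

    int buf[3];
    fillArr(buf, 3);
    cout << buf[0] + buf[1] + buf[2] << endl;   // prints 6
}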
http://blog.codingblocks.com/2016/local-variable-pointer-returning-changes-in-gcc-5
CC-MAIN-2018-43
en
refinedweb
For Day 5, the challenge is to calculate the number of steps needed to bounce around an array following one of two rulesets. My Python 2 solution is below:

def simple_increment(i):
    return i + 1

def part_2_increment(i):
    return i - 1 if i >= 3 else i + 1

def play_maze(maze, increment=simple_increment):
    maze = maze[:]
    i = 0
    steps = 0
    while i >= 0 and i < len(maze):
        new_i = i + maze[i]
        maze[i] = increment(maze[i])
        i = new_i
        steps += 1
    return steps

with open("input.txt", "r") as f:
    maze = [int(x.strip("\n")) for x in f.readlines()]

print "Part 1: ", play_maze(maze)
print "Part 2: ", play_maze(maze, part_2_increment)

Advent of Code runs every day up to Christmas, you should join in!
https://blog.jscott.me/advent-of-code-day-5/
CC-MAIN-2018-43
en
refinedweb
We all know C doesn’t support polymorphism. You all have taken CS 10 in Java, so you know what that means. It means you can’t have something like the following: int sum(int a, int b); double sum(double a, double b); int sum(int a, int b, int c); double sum(double a, double b, double c); Nope. We have to have independent sum functions like this: int sum2int(int a, int b); double sum2double(double a, double b); You get the point. Anyone disagree / find a way to do polymorphism in C? No? Okay, so we all agree. This is a limitation on the language. Well if that’s the case, how does a function like printf exist? Isn’t this a perfect example of a function that takes multiple arguments? Hmm. This is today’s puzzle. As it turns out, there is another feature of C that you haven’t learned about! Functions in C can accept a variable number of arguments. These are known as variadic functions. Let’s take a look at the file stuff.c: #include <stdarg.h> #include <stdio.h> double f1(int argNum, ...) { va_list arguments; va_start(arguments, argNum); double sum = 0; for (int i = 0; i < argNum; i++) sum += va_arg(arguments, int); va_end(arguments); return sum / argNum; } int main(int argc, const char *argv[]) { printf("average: %lf\n", f1(4, 1, 2, 3, 4)); return 0; } Pretty cool, eh? The ellipses ... are used to indicate a function takes a variable number of arguments. Note that the compiler cannot know how many arguments are being passed, so the first argument must be the number of arguments. The function printf gets around this by having the first parameter be the format of the string, and the code then deduces how many parameters there must be from the format of the string. Note the inclusion of stdarg.h which is required to define the va_list structure along with va_start, va_arg, and va_end functions. In Java terms, va_list arguments is declaring an iterator arguments, while the line va_start(arguments, argNum) is establishing how many arguments there are to iterate over. The actual .next() “method” of the iterator is called using va_arg, which simply returns the next argument. Once this is done, you call va_end to cleanup any memory associated with the argument parsing. Obviously, the code prints 2.5000 as the result. So is this polymorphism? No, not really. Note that the second argument for va_arg is the type of the argument! The compiler really does not know what the argument can be. If the caller has to specify the type of the arguments at every point of invocation, then it removes the entire benefit of polymorphism in the first place. This is the first step necessary to achieve polymorphism, but until you can deduce the type you cannot achieve the full power of polymorphism. Another useful function in this set is vsprintf. You can use this to copy the contents of the variable arguments into a character buffer, just like sprintf. Here is an example. void function(const char *fmt, ...) { // setup char buffer[MAX_BUFFER_SIZE]; va_list arguments; va_start(arguments, fmt); // copy content vsprintf(buffer, fmt, arguments); // cleanup va_end(arguments); fprintf(stdout, "%s\n", buffer); } One of the new features introduced in the new C99 standard is variadic macros. As the name suggests, they are macros that also take a various number of arguments. You may remember from two recitations ago that Vipul mentioned you can use __FILE__ and __LINE__ within your print statements to help debug what file and what line this particular statement is printing from. That’s pretty cool. 
But what if we could create a wrapper around printf that did that for us automatically so we could do even less work? Here is our wrapper around printf and the macro that calls the wrapper: #define dbgprintf(...) realdbgprintf (__FILE__, __LINE__, __VA_ARGS__) void realdbgprintf (const char *SourceFilename, int SourceLineno, const char *CFormatString, ...); Nice! How does this work? The call dbgprintf ("Hello, world"); expands out to realdbgprintf (__FILE__, __LINE__, "Hello, world"); Another example? Why not. dbgprintf("%d + %d = %d", 2, 2, 5); becomes realdbgprintf(__FILE__, __LINE__, "%d + %d = %d", 2, 2, 5); Without variadic macros, writing wrappers to printf is not directly possible. – The Wisdom of Wikipedia But what did people do before the existance of C99 standard that allowed this feature within macros? Remember that old parentheses trick with macros? We exploit the same thing. #define dbgprintf(x) realdbgprintf x Then you would call it with something like this: dbgprintf (("Hello, world %d", 27)); Which would expand to realdbgprintf ("Hello, world %d", 27); Which is what we wanted! Cute. Reading is good! Sort of. Minimal reading is good – we’re not the humanities. So here is some minimal reading. Understanding va_list Variadic functions in C Example above came directly from C++ . com and Alex Allain’s brilliant tutorials. If you haven’t guessed by now, I’m a big fan of Alex. Variadic macros Interesting Case – read this
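To close the loop on the dbgprintf example, here is one plausible body for realdbgprintf — a sketch, not the original recitation code — which uses vsnprintf so the buffer cannot overflow:

#include <stdarg.h>
#include <stdio.h>

void realdbgprintf(const char *SourceFilename, int SourceLineno,
                   const char *CFormatString, ...)
{
    char buffer[1024];
    va_list args;

    va_start(args, CFormatString);
    vsnprintf(buffer, sizeof buffer, CFormatString, args);   // bounded, unlike vsprintf
    va_end(args);

    fprintf(stderr, "%s:%d: %s\n", SourceFilename, SourceLineno, buffer);
}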
https://cs50.notablog.xyz/puzzle/Puzzle6.html
CC-MAIN-2018-43
en
refinedweb
A simple Python package for creating or reading GDSII layout files. Project description. Documentation - Complete documentation can be found at: - Download - The package can be downloaded for installation via easy_install at - Gallery A Simple Example Here is a simple example that shows the creation of some text with alignment features. It involves the creation of drawing geometry, Cell and a Layout . The result is saved as a GDSII file, and also displayed to the screen: import os.path from gdsCAD import * # Create some things to draw: amarks = templates.AlignmentMarks(('A', 'C'), (1,2)) text = shapes.Label('Hello\nworld!', 200, (0, 0)) box = shapes.Box((-500, -400), (1500, 400), 10, layer=2) # Create a Cell to hold the objects cell = core.Cell('EXAMPLE') cell.add([text, box]) cell.add(amarks, origin=(-200, 0)) cell.add(amarks, origin=(1200, 0)) # Create two copies of the Cell top = core.Cell('TOP') cell_array = core.CellArray(cell, 1, 2, (0, 850)) top.add(cell_array) # Add the copied cell to a Layout and save layout = core.Layout('LIBRARY') layout.add(top) layout.save('output.gds') layout.show() Recent Changes - v0.4.5 (05.02.15) - Added to_path and to_boundary conversion methods - Added experimental DXFImport - v0.4.4 (12.12.14) - Added Ellipse boundary (cjermain) - Added missing area method to base classes - Fixed bug when objects are defined with integers then translated by float (cjermain) - Added missing flatten method - v0.4.3 (07.10.14) - (bugfix) Boundaries to again accept non-numpy point lists - Removed deprecated labels attribute from Cell - Reduced internal uses of Cell._references - v0.4.2 (15.09.14) - (bugfix) Boundaries are now closed as they should be (thanks Phil) - gdsImport loads all Boundary points (including final closing point) from file - v0.4.1 (05.06.14) - Allow Boundaries with unlimited number of points via multiple XY entries - v0.4.0 (07.05.14) - Several performance improvements: Layout saving, reference selection, and bounding boxes should all be faster - Layout save now only uniquifies cell names that are not already unique - v0.3.7 (14.02.14) - More colors for layer numbers greater than six (Matthias Blaicher) - v0.3.6 (12.12.13) bugfix - Fixed installation to include missing resource files - v0.3.5 (11.12.13 PM) bugfix - Introduced automatic version numbering - git_version module is now included in distribution (Thanks Matthias) - v0.3.2 (11.12.13) - CellArray spacing can now be non-orthogonal - Block will now take cell spacing information from the attribute cell.spacing - v0.3.1 (06.12.13) - Added support for Hershey Fonts. - Thanks to Matthias Blaicher. Project details Release history Release notifications Download files Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
https://pypi.org/project/gdsCAD/
CC-MAIN-2018-43
en
refinedweb
Suppose you have a series of pages, all of which have the same navigation bar, contact information, or footer. What can you do? Well, one common "solution" is to cut and paste the same HTML snippets into all the pages. This is a bad idea because when you change the common piece, you have to change every page that uses it. Another common solution is to use some sort of server-side include mechanism whereby the common piece gets inserted as the page is requested . This general approach is a good one, but the typical mechanisms are server specific. Enter jsp:include , a portable mechanism that lets you insert any of the following into the JSP output: The content of an HTML page. The content of a plain text document. The output of JSP page. The output of a servlet. The jsp:include action includes the output of a secondary page at the time the main page is requested. Although the output of the included pages cannot contain JSP, the pages can be the result of resources that use servlets or JSP to create the output. That is, the URL that refers to the included resource is interpreted in the normal manner by the server and thus can be a servlet or JSP page. The server runs the included page in the usual way and places the output into the main page. This is precisely the behavior of the include method of the RequestDispatcher class (see Chapter 15, "Integrating Servlets and JSP: The Model View Controller (MVC) Architecture"), which is what servlets use if they want to do this type of file inclusion. You designate the included page with the page attribute, as shown below. This attribute is required; it should be a relative URL referencing the resource whose output should be included. <jsp:include Relative URLs that do not start with a slash are interpreted relative to the location of the main page. Relative URLs that start with a slash are interpreted relative to the base Web application directory, not relative to the server root. For example, consider a JSP page in the headlines Web application that is accessed by the URL http:// host /headlines/sports/table-tennis.jsp . The table-tennis.jsp file is in the sports subdirectory of whatever directory is used by the headlines Web application. Now, consider the following two include statements. <jsp:include <jsp:include In the first case, the system would look for cheng-yinghua.jsp in the bios subdirectory of sports (i.e., in the sports/bios sub-subdirectory of the main directory of the headlines application). In the second case, the system would look for footer.jsp in the templates subdirectory of the headlines application, not in the templates subdirectory of the server root. The jsp:include action never causes the system to look at files outside of the current Web application. If you have trouble remembering how the system interprets URLs that begin with slashes , remember this rule: they are interpreted relative to the current Web application whenever the server handles them; they are interpreted relative to the server root only when the client (browser) handles them. For example, the URL in <jsp:include is interpreted within the context of the current Web application because the server resolves the URL; the browser never sees it. But, the URL in <IMG SRC="/path/file" ...> is interpreted relative to the server's base directory because the browser resolves the URL; the browser knows nothing about Web applications. For information on Web applications, see Section 2.11. 
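Written out with their page attributes, the two include statements discussed above (the bios page and the footer) look like this:

<jsp:include page="bios/cheng-yinghua.jsp" />
<jsp:include page="/templates/footer.jsp" />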
Core Note URLs that start with slashes are interpreted differently by the server than by the browser. The server always interprets them relative to the current Web application. The browser always interprets them relative to the server root. Finally, note that you are permitted to place your pages in the WEB-INF directory. Although the client is prohibited from directly accessing files in this directory, it is the server, not the client, that accesses files referenced by the page attribute of jsp:include . In fact, placing the included pages in WEB-INF is a recommended practice; doing so will prevent them from being accidentally accessed by the client (which would be bad, since they are usually incomplete HTML documents). Core Approach To prevent the included files from being accessed separately, place them in WEB-INF or a subdirectory thereof. The jsp:include action is one of the first JSP constructs we have seen that has only XML syntax, with no equivalent "classic" syntax. If you are unfamiliar with XML, note three things: XML element names can contain colons. So, do not be thrown off by the fact that the element name is jsp:include . In fact, the XML-compatible version of all standard JSP elements starts with the jsp prefix (or namespace). XML tags are case sensitive. In standard HTML, it does not matter if you say BODY , body , or Body . In XML, it matters. So, be sure to use jsp:include in all lower case. XML tags must be explicitly closed. In HTML, there are container elements such as H1 that have both start and end tags ( <H1> ... </H1> ) as well as standalone elements such as IMG or HR that have no end tags ( <HR> ). In addition, the HTML specification defines the end tags of some container elements (e.g., TR , P ) to be optional. In XML, all elements are container elements, and end tags are never optional. However, as a convenience, you can replace bodyless snippets such as <blah></blah> with <blah / > . So when using jsp:include , be sure to include that trailing slash. In addition to the required page attribute, jsp:include has a second attribute: flush , as shown below. This attribute is optional; it specifies whether the output stream of the main page should flushed before the inclusion of the page (the default is false ). Note, however, that in JSP 1.1, flush was a required attribute and the only legal value was true . <jsp:include As an example of a typical use of jsp:include , consider the simple news summary page shown in Listing 13.1. Page developers can change the news items in the files Item1.html through Item3.html (Listings 13.2 through 13.4) without having to update the main news page. Figure 13-1 shows the result. Notice that the included pieces are not complete Web pages. The included pages can be HTML files, plain text files, JSP pages, or servlets (but with JSP pages and servlets, only the output of the page is included, not the actual code). In all cases, however, the client sees only the composite result. So, if both the main page and the included pieces contain tags such as DOCTYPE , BODY , etc., the result will be illegal HTML because these tags will appear twice in the result that the client sees. With servlets and JSP, it is always a good habit to view the HTML source and submit the URL to an HTML validator (see Section 3.5, "Simple HTML-Building Utilities"). When jsp:include is used, this advice is even more important because beginners often erroneously design both the main page and the included page as complete HTML documents. 
Do not use complete HTML documents for your included pages. Include only the HTML tags appropriate to the place where the included files will be inserted.

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<HTML>
<HEAD>
<TITLE>What's New at JspNews.com</TITLE>
<LINK REL=STYLESHEET
</HEAD>
<BODY>
<TABLE BORDER=5
<TR><TH CLASS="TITLE">
What's New at JspNews.com</TABLE>
<P>
Here is a summary of our three most recent news stories:
<OL>
  <LI> <jsp:include
  <LI> <jsp:include
  <LI> <jsp:include
</OL>
</BODY></HTML>

<B>Bill Gates acts humble.</B> In a startling and unexpected development, Microsoft big wig Bill Gates put on an open act of humility yesterday. <A HREF="">More details...</A>

<B>Scott McNealy acts serious.</B> In an unexpected twist, wisecracking Sun head Scott McNealy was sober and subdued at yesterday's meeting. <A HREF="">More details...</A>

<B>Larry Ellison acts conciliatory.</B> Catching his competitors off guard yesterday, Oracle prez Larry Ellison referred to his rivals in friendly and respectful terms. <A HREF="">More details...</A>

The included page uses the same request object as the originally requested page. As a result, the included page normally sees the same request parameters as the main page. If, however, you want to add to or replace those parameters, you can use the jsp:param element (which has name and value attributes) to do so. For example, consider the following snippet.

<jsp:include
  <jsp:param
</jsp:include>

Now, suppose that the main page is invoked by means of a URL that supplies fgColor=RED in its query string (e.g., http://host/path/MainPage.jsp?fgColor=RED). In such a case, the following list summarizes the results of various getParameter calls.

In the main page (MainPage.jsp), regardless of whether the getParameter calls are before or after the file inclusion:
- request.getParameter("fgColor") returns "RED".
- request.getParameter("bgColor") returns null.

In the included page (StandardHeading.jsp):
- request.getParameter("bgColor") returns "YELLOW".

If the main page receives a request parameter that is also specified with the jsp:param element, the value from jsp:param takes precedence only in the included page.
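To make the parameter handling concrete, here is a small sketch of what the included StandardHeading.jsp might do with the bgColor value supplied through jsp:param. That file is not shown in this excerpt, so the markup below is illustrative only and not a listing from the book.

<%-- StandardHeading.jsp: illustrative sketch --%>
<% String bg = request.getParameter("bgColor");
   if (bg == null) { bg = "WHITE"; } %>
<TABLE WIDTH="100%" BGCOLOR="<%= bg %>">
  <TR><TH>JspNews.com</TH></TR>
</TABLE>

Because the included page reads bgColor from the same request object, it picks up the jsp:param value ("YELLOW"), while the main page, which never receives that parameter, still sees null.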
https://flylib.com/books/en/1.94.1.127/1/
CC-MAIN-2018-43
en
refinedweb
For Day 16, the challenge is to rearrange characters according to a given "dance" a billion times and find where they end up. My Python solution is below:

import string

with open("input.txt", "r") as o:
    commands = o.read().split(",")

order = list(string.ascii_lowercase[:16])

def dance(order, commands):
    order = order[:]
    for command in commands:
        if command[0] == "s":
            num = int(command[1:])
            order = order[-num:] + order[:-num]
        elif command[0] == "x":
            c = command[1:].split("/")
            a = int(c[0])
            b = int(c[1])
            order[a], order[b] = order[b], order[a]
        elif command[0] == "p":
            c = command[1:].split("/")
            a = order.index(c[0])
            b = order.index(c[1])
            order[a], order[b] = order[b], order[a]
    return order

print "Part 1", "".join(dance(order, commands))

def part2(order, commands):
    original_order = order
    # Find the smallest cycle
    i = 1
    order = dance(order, commands)
    while not order == original_order:
        i += 1
        order = dance(order, commands)
    # Skip as many cycles as possible
    for i in range(1000000000 % i):
        order = dance(order, commands)
    return order

print "Part 2", "".join(part2(order, commands))

I just brute force it until I find the first cycle - then skip as many as I can before finishing off the billion iterations. I think I could optimize the dance method a little by modifying the list in place rather than making new ones using the substring - but the code would be a lot less clean. Advent of Code runs every day up to Christmas, you should join in!
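For the in-place optimization mentioned above, here is a rough sketch (mine, not from the post): keep a single list and track the spin as an offset, so "s" commands move no elements at all. The name dance_inplace and the offset bookkeeping are illustrative.

def dance_inplace(progs, commands):
    # One working copy; a spin only shifts the logical starting point.
    progs = progs[:]
    n = len(progs)
    offset = 0  # physical index of the program currently standing in position 0
    for command in commands:
        kind, rest = command[0], command[1:]
        if kind == "s":
            offset = (offset - int(rest)) % n          # spin without touching the list
        elif kind == "x":
            a, b = (int(v) for v in rest.split("/"))   # exchange by position
            a, b = (offset + a) % n, (offset + b) % n
            progs[a], progs[b] = progs[b], progs[a]
        elif kind == "p":
            x, y = rest.split("/")                     # partner swap by name
            a, b = progs.index(x), progs.index(y)
            progs[a], progs[b] = progs[b], progs[a]
    return progs[offset:] + progs[:offset]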
https://blog.jscott.me/advent-of-code-day-16/
CC-MAIN-2018-43
en
refinedweb
Something on the lines of

class Foo
  attr_reader :Foo.bar
end

puts Foo.bar
Foo.bar = "wibble"
puts Foo.bar

although this gives the following:

/home/stehill1/tmp/attr.rb:4: undefined method `bar' for :Foo:Symbol (NameError)

Thanks
Steve...

Paul

class Dave
  class <<self
    attr_accessor :wombat
  end
  # etc...
end

Dave.wombat = 123

Dave...

ml ..

Hm there are a couple of typos in your solution (and Dave's wombat is a completely different animal - a.k.a. a ``per class instance'' attribute) -

------
--- accessors.rb.new  Wed Jan 16 18:11:59 2002
self.class_eval %{
  ## def self.#{symbol} end %}
  ## def self.#{symbol}(x) end %}
end
------

/Christoph

Thanks for pointing out my error; I've fixed this in CVS.
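For reference, a minimal working version of the class-level accessor pattern suggested above, applied to the original Foo example (my own consolidation, not a message from the thread):

class Foo
  class << self          # open Foo's singleton class
    attr_accessor :bar   # defines Foo.bar and Foo.bar=
  end
end

Foo.bar = "wibble"
puts Foo.bar             # => wibble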
http://computer-programming-forum.com/39-ruby/ee77df093c691fc4.htm
CC-MAIN-2018-43
en
refinedweb
Microsoft has published a report today detailing a never-before-seen series of attacks against Kubeflow, a toolkit for running machine learning (ML) operations on top of Kubernetes clusters. The attacks have been going on since April this year, and Microsoft says the attackers' end goal has been to install a cryptocurrency miner on Kubernetes clusters running Kubeflow instances exposed to the internet. According to Yossi Weizman, a security researcher with Microsoft's Azure Security Center, the company has detected these types of attacks against "tens of Kubernetes clusters" running Kubeflow. While the number of hijacked clusters is small in comparison to previous Kubernetes attacks, the profits for the crooks and the financial losses to server owners are most likely much higher than in other attacks seen before. "Nodes that are used for ML tasks are often relatively powerful, and in some cases include GPUs," Weizman explained. "This fact makes Kubernetes clusters that are used for ML tasks a perfect target for crypto mining campaigns, which was the aim of this attack." Attacks began in April this year Microsoft says it has been tracking these attacks since April, when it first saw them get underway and documented the first attack wave, before the crooks expanded their focus from general-purpose Kubernetes instances to ML-focused clusters running Kubeflow. As it learned more from its investigation into the early attacks, Microsoft now says it believes the most likely point of entry for the attacks is misconfigured Kubeflow instances. In its report, Microsoft said that Kubeflow admins most likely changed the Kubeflow default settings and exposed the toolkit's admin panel on the internet. By default, the Kubeflow management panel is exposed only internally and is accessible from inside the Kubernetes cluster. Kubernetes threat matrix for the attacks on Kubeflow instances. Image: Microsoft Weizman said that since April, a cryptomining gang has been scanning for these dashboards, accessing the internet-exposed admin panels, and deploying new server images to Kubeflow clusters, with these images focused on running XMRig, a Monero cryptocurrency mining application. How to detect hacked Kubeflows If server administrators want to investigate their clusters for any hacked Kubeflow instances, Weizman provided the following steps.
- Verify that the malicious container is not deployed in the cluster. The following command can help you check it: kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].image}" | grep -i ddsfdfsaadfs
- In case Kubeflow is deployed in the cluster, make sure that its dashboard isn't exposed to the internet: check the type of the Istio ingress service with the following command and make sure that it is not a load balancer with a public IP: kubectl get service istio-ingressgateway -n istio-system
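As a follow-up to the detection steps above, one way to close the exposure is to switch the Istio ingress gateway back to a cluster-internal service type. This is a sketch only: the service name and namespace below are the defaults cited in the article, so verify them against your own deployment before patching anything.

# Inspect the current service type; a LoadBalancer with an external IP means the dashboard is internet-facing
kubectl get service istio-ingressgateway -n istio-system

# Revert the service to a cluster-internal type so the Kubeflow dashboard is no longer publicly reachable
kubectl patch service istio-ingressgateway -n istio-system -p '{"spec": {"type": "ClusterIP"}}'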
https://nikolanews.com/microsoft-discovers-cryptomining-gang-hijacking-ml-focused-kubernetes-clusters/
CC-MAIN-2021-10
en
refinedweb
Introduction Two ubiquitous coding strategies are to use assertions and to generate debugging output. The .NET System.Diagnostics.Debug andSystem.Diagnostics.TRACE classes are designed to help programmers do these things. Unfortunately, the organization of these classes makes using them inappropriately in many cases. This article describes the reasoning behind this conclusion and gives a solution that does not have the problems of the Debug and TRACE classes. C# code is included. The solution involves four entities: a class that is used for assertions and only assertions, a preprocessor token that controls whether the class executes assertions, and a similar class and preprocessor token for development output. We also give a general introduction to using assertions. Outline An outline of this article is as follows. The next two sections respectively introduce assertions and development output. These introductions are followed by a discussion of some goals that should be met by classes that assist with assertions and development output. The next section discusses the .NET Debug and TRACE classes and shows how these classes do not meet the goals. The remainder of the article presents two standalone classes that can be used to meet the goals. Introduction to Assertions Assertions are a traditional software engineering tool that can be used with almost all programming languages. In his excellent book, “Large-Scale C++ Software Design” [2], John Lakos discusses using assertions in C and C++ (the text has been adapted slightly for this article): The Standard C library provides a macro called assert (see assert.h) for guaranteeing that a given expression evaluates to a non-zero (true) value; otherwise an error message is printed and program execution is terminated. Assertions are convenient to use and are a powerful implementation-level documentation tool for developers. Assert statements are like active comments — they not only make assumptions clear and precise, but if these assumptions are violated, they actually do something about it. The use of assert statements can be an effective way to catch program logic errors at runtime, and yet they are easily filtered out of production code. Once development is complete, the runtime cost of these redundant tests for coding errors can be eliminated simply by defining the preprocessor symbol NDEBUG during compilation. Be sure, however, to remember that code placed in the assert itself will be omitted in the production version. An assertion is best used to test a condition only when all of the following hold: - the condition should never be false if the code is correct, - the condition is not so trivial so as to obviously be always true, and - the condition is in some sense internal to a body of software. Assertions should almost never be used to detect situations that arise during software’s normal operation. For example, usually, assertions should not be used to check for errors in a user’s input. It may, however, make sense to use assertions to verify that a caller has already checked a user’s input. An “assertion failure” is said to occur when an assertion detects that its condition is false and takes appropriate action, such as throwing an exception. Since this is exactly what an assertion is supposed to do, the term “assertion failure” is something of a misnomer. Nevertheless, the term is standard and is useful because it provides a name for an important situation. Let’s look at an example of using assertions. 
This example deals with a small C# program; the true value of assertions may not become apparent until one has used assertions with larger programs. With this in mind, then, consider the following small example. (In this example we assume that the Assert.Test method is defined elsewhere and performs a function similar to that of C’s assert macro.) class Shift { /* This method returns a circular left shift of x's right 3 bits. If x is not between 0 and 7 inclusive, undefined behavior may result, and this undefined behavior may change over time and may depend on what algorithm this method uses. */ static int shift3(int x) { Assert.Test((x >= 0) && (x <= 7)); return ((x >> 2) & 1) | ((x << 1) & 6); } static void Main() { while (true) { string s = Console.ReadLine(); if ((s == null) || (s.Length == 0)) // no more input { break ; } char c = s[0]; #if inappropriate Assert.Test((c >= '0') && (c <= '7')); int x = c; Console.WriteLine("The result is {0}", shift3(x)); #else if (((c >= '0') && (c <= '7'))) { int x = c; Console.WriteLine("The result is {0}", shift3(x)); } else { Console.WriteLine( "Please enter a number between 0 and 7."); } #endif } } } The assertion in Main is inappropriate, because users may enter numbers outside of the range 0 to 7, and the program’s normal function includes detecting such entries and responding appropriately. The code inside the #else region checks the input appropriately. This example illustrates a general test that weeds out some inappropriate assertions: ask whether the program would function correctly with a given assertion removed and if the answer is “no,” then the assertion is probably inappropriate. Next, let’s consider the assertion in the shift3 method. This assertion is appropriate because shift3 explicitly assumes that its argument x is between 0 and 7. The documentation before shift3 implies that if the program is correct, then the calling method will ensure that the argument to shift3 is between 0 and 7, as does the code in Main‘s #else region. Were it shift3‘s responsibility to check its argument and return an error code for invalid values, then the assertion in shift3 would not be appropriate. Notice that the appropriateness of this assertion depends not only on the code but also on the documentation, i.e., on policies regarding the responsibilities of code. The Shift class contains a serious bug. When we run the program, we find that the assertion in shift3 throws an exception. How can we find out what’s going on? Well, the assertion did its job by throwing an exception, so we know immediately that the contract between shift3 and its caller has been broken. We can proceed by determining why the contract was broken, i.e., why shift3 received an improper value of x. The problem is that x in Main should be assigned the value c – ‘0’, not c. After changing the assignment and recompiling we find that the program works. The assertion in this example performs two valuable functions. First, it concisely summarizes the contract that shift3 has with its callers. The assertion makes it easy for a reader to quickly understand details of this contract. Second, if the contract is broken, the breaking of the contract is detected immediately. It is almost always easier to figure out what is wrong when a problem is exposed immediately before program execution has reached a later point that may be only tenuously related to the source of the problem. 
This service that assertions can provide — immediate detection of errors — is called “feedback at the point of failure.” More information about assertions can be found in web search engines and in software engineering textbooks. Introduction to Development Output We define development output to simply be program output that is intended to be generated only during the development phase of software production. Such output is often used for determining what is going on in a program, especially during debugging. For example, while debugging the code in the previous section, we might want to print some output. We could do this by adding Console.WriteLine calls, as follows. (The code differs slightly from that in the previous section.) static void Main() { while (true) { string s = Console.ReadLine(); if ((s == null) || (s.Length == 0)) { break ; } int x = s[0]; Console.WriteLine("s={0}", s); Console.WriteLine("x={0}", x); if (((x >= 0) && (x <= 7))) { Console.WriteLine("The result is {0}", shift3(x)); } else { Console.WriteLine( "Please enter a number between 0 and 7."); } } } After running the program and seeing the development output, we may realize that the value of x is not being computed correctly from the string s. This may help us understand that we need to subtract the character constant ‘0’ from s[0] when computing x. After fixing the bug and inspecting the development output in the fixed version of the program, we would likely remove the Console.WriteLine statements that produce development output. Using the Console.WriteLine method like this to produce development output is not too bad. In fact, in a small program, sometimes this is the best way. This technique does have some drawbacks, though, and these drawbacks become important in large projects. First, it can be difficult to distinguish between temporary and permanent Console.WriteLine calls. Second, temporary calls like the ones in the example can come to reside in a program for a long time, possibly permanently, and we do not want these calls to produce output in released software. What we would like is a way for this output to appear during the development process, but not with released versions of software. Goals for Providers of Assertion and Development Output Services Let’s look at the capabilities that we do and do not want from software that provides assertion and development output utility services. In particular, we will look at what types of software builds should have assertions execute, and what types of builds shouldn’t. We will also look at the same topic for development output. Usually, we want assertions on (executing) during development. It’s also useful to be able to turn assertions off during development, e.g., when cthe ode is temporarily structured in a way that causes assertions to fail. In release builds, we usually want assertions off. But for some release builds it makes sense to have assertions on, especially when developers have a close relationship to the environment in which the released product is being used, or when developers run with assertions on all the time, as many developers do. We also want to be able to turn development output on and off during development builds. Executables built in release builds should not produce development output. Our goals, then, are as follows: we want a choice about having assertions on or off for development builds, a choice about having assertions on or off for release builds, and a choice about having development output on or off for development builds. 
We want development output to always be off for release builds. Deficiencies of the .NET Debug and TRACE Classes Two obvious candidates for achieving our goals are the Debug and TRACE classes in .NET’s System.Diagnostics namespace. Let’s examine these classes to see if they help us meet our goals. First, let’s look at using the Debug class. We will call using the Debug class “Policy 1”; variants are called “Policy 1A”, “Policy 1B”, etc. Policy 1A. Use Debug for both assertions and development output. Policy 1A fails to meet our goals because it requires that if assertions are on in a given release build, then development output will also be on in the same build. This is because, as controlled by the DEBUG preprocessor token, either all the members of the Debug class are on, or none are. Policy 1B. Use Debug for assertions, and mandate not using the members of Debug that involve development output. One flaw with Policy 1B is that if we want assertions on in a release build, we have to define DEBUG for the release build, which is confusing at best. Also, this policy is difficult to maintain. Someone may simply forget to avoid using development output aspects of Debug. Or, when someone new to a project encounters code that uses Debug.Assert, she may start using the non-assert members of Debug because she may be used to this from other projects, or because when she sees Debug.Assert in the code, it may seem natural to use other parts ofDebug. Policy 1C. Use Debug for development output, and do not use the members of Debug that involve assertions. Policy 1C’s problems are essentially the same as Policy 1B’s. If one gives sufficient weight to the flaws described above, as we do, then one can conclude that the Debug class should be used for neither assertions nor development output. Now let’s look at using .NET’s TRACE class. Using the TRACE class has exactly the same problems as using the Debug class. There is another problem with using the TRACE class. Grimes[1] has observed that “Visual Studio.NET defines TRACE [the preprocessor token] for C# projects created with the project wizards.” This creates an expectation among Visual Studio.NET users that TRACE will be defined for release builds. Respecting this expectation would mean that - if TRACE controls development output, then release builds would contain development output, and - if TRACE controls assertions, then we could not turn assertions off for release builds. Both of these consequences violate the design goals in the previous section. As with the Debug class, we conclude that the TRACE class should be used for neither assertions nor for development output. The problems we have discussed with the .NET base class library’s Debug and TRACE classes stem from the dependencies they introduce between assertions and development output. It is probably better to separate assert functionality from development output functionality. For example, the Debug and TRACE classes might better have been designed to have no assert functionality, and assert functionality could have been implemented in a separate class that is used for only assertions. Our Solution Since .NET’s base class library does not provide a separate class that is used only for assertions, we developed our own system for assertions and development output. This system is very simple to understand and use. It comprises four entities: - the Assert class, - the Nib class, - the ASSERT preprocessor token, and - the NIB preprocessor token. 
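A quick usage note (my own sketch, not part of the original article): the two tokens are what switch the classes on and off, and they can be defined per build in the usual C# ways. For example:

#define ASSERT   // at the very top of a source file, before any other code
#define NIB

// or on the compiler command line for a development build (the file name is hypothetical):
//   csc /define:ASSERT;NIB Shift.cs

// or in the project's development configuration, e.g.:
//   <DefineConstants>ASSERT;NIB</DefineConstants>

Because Assert.Test and the Nib methods are marked with the Conditional attribute (see the listings below), the compiler removes calls to them wherever the corresponding token is not defined.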
The Assert class is for assertions and only assertions. The Nib class is for development output and only development output. Listing 1 shows the Assert class. The Assert class’s public methods execute only when the ASSERT preprocessor token is defined. This is the only thing controlled by the ASSERT token. Listing 2 shows the Nib class. The Nib class’s public methods execute only when the NIB preprocessor token is defined, and this is the only thing controlled by the NIB token. Below is an example where the Shift class has been written to use the Assert and Nib classes. (For the comments, see the previous examples.) class Shift { static int shift3(int x) { Assert.Test((x >= 0) && (x <= 7)); return ((x >> 2) & 1) | ((x << 1) & 6); } static void Main() { while (true) { string s = Console.ReadLine(); if ((s == null) || (s.Length == 0)) { break ; } char c = s[0]; if (((c >= '0') && (c <= '7'))) { int x = c; Nib.WriteLine("s={0}", s); Nib.WriteLine("c={0}", c); Nib.WriteLine("x={0}", x); Console.WriteLine("The result is {0}", shift3(x)); } else { Console.WriteLine( "Please enter a number between 0 and 7."); } } } } The use here of the Assert and Nib classes should be self-explanatory. It’s helpful to have a software engineering term that means, “something that controls development output.” In this regard, the term “nib” possesses the advantage of not having other meanings commonly associated with programming. Another nice feature is that that “nib” is only a few characters long. A bonus is that the meaning of the English word “nib” is related to the Nib class’s function — the nib of a pen is the part that applies ink to paper. Conclusion In this article, we have introduced assertions and development output, discussed goals for a provider of assertion and development output services, and reviewed problems with using the .NET Debug and TRACE classes for assertions and development output. We then presented a simple and easily understood system for using assertions and development output. This system uses two classes and two preprocessor tokens. We have found that this system is convenient and effective in practice. References [1] Richard Grimes, Developing Applications with Visual Studio.NET. Addison-Wesley, 2002, ISBN 0-201-70852-3. [2] John Lakos, Large-Scale C++ Software Design. Addison-Wesley, 1996, ISBN 0-201-63362-0. Listings Listing 1: The Assert Class. using System; class Assert { // Probably, FailedException instances should be created // only from within the Assert class. public class FailedException : ApplicationException { public FailedException(string s) : base(s) {} } [System.Diagnostics.Conditional("ASSERT")] public static void Test(bool condition) { if (condition) { return; } throw new FailedException("Assertion failed."); } [System.Diagnostics.Conditional("ASSERT")] public static void Test(bool condition, string message) { if (condition) { return; } throw new FailedException("Assertion '" + message + "' failed."); } } Listing 2: The Nib Class. // (This version of the Nib class always writes to System.Console. Later // versions might add functionality similar to System.Debug.Listeners.) 
using System; class Nib { [System.Diagnostics.Conditional("NIB")] public static void Write(object obj) { Console.Write(obj.ToString()); Console.Out.Flush(); } [System.Diagnostics.Conditional("NIB")] public static void W(object obj) // short name { Console.Write(obj.ToString()); Console.Out.Flush(); } [System.Diagnostics.Conditional("NIB")] public static void Write(string s, params object[] args) { Console.Write(s, args); Console.Out.Flush(); } [System.Diagnostics.Conditional("NIB")] public static void W(string s, params object[] args) // short name { Console.Write(s, args); Console.Out.Flush(); } [System.Diagnostics.Conditional("NIB")] public static void WriteLine(string s, params object[] args) { Console.WriteLine(s, args); Console.Out.Flush(); } [System.Diagnostics.Conditional("NIB")] public static void WL(string s, params object[] args) // short name { Console.WriteLine(s, args); Console.Out.Flush(); } } Revisions 4/17/04 – Original
http://csharp-station.com/Article/Index/Assertions
CC-MAIN-2021-10
en
refinedweb
The. CTRL+Q opens the quick launch so you can search an indexed list of every feature available in Visual Studio. For example, If you want to do add a new item, use the quick launch to with that as your search term and receive guidance on how to do that. In Visual Studio, users can apply Quick Launch to instantly explore and complete activities for IDE as elements like templates, options, and menus. One thing to remember is that users can’t apply Quick Launch to explore for code and figures. With a lot of nested statements, it can be tough to keep track of opening and closing braces which, if missing, can cause compiler errors. Use CTRL+ ] to find the matching closing brace of a function or class and reduce the chance of falling prey to annoying error messages. Sometimes making code work comes at the expense of making it look good. Proper indentation and spacing make code readable and that’s how CTRL+K+F works. Just highlight the section you need to format and it cleans up sloppy coding like magic. For loops and if-then conditions have a standard structure that’s tedious to type over and over. To automate that process, you just need to type the beginning of your condition. For example, type ‘Try,’ hit the TAB key twice, and you get access to the snippets that complete the condition for you down to the braces. All you have to do is modify the parameters and you’re good to go. This shortcut combines three debugging Visual Studio code commands in one. CTRL+SHIFT+F5 lets you end the debugging session, rebuild it, and create a new debugging session. Manually adding and removing ‘//’ is tedious especially, if you have a long piece of code you want to deactivate. CTRL+K+C is a quicker way to bulk comment. Just highlight the block and type the Visual Studio shortcuts. When you need to make those lines active again, highlight the block and use CTRL+K+U to uncomment. You can also use Ctrl+Shift+/ for toggling. The toggling can be used for block comments because Ctrl+/ is a shortcut for toggling line comments and block comments. To execute this, click on the settings and then click ‘Keyboard Shortcuts’. Here you will see a “toggle block.” Now, click and enter your combination. Having multiple screens open helps you multitask. But if you want to focus on one section, going full screen used to mean losing important panels like the menu bar. ALT+SHIFT+ENTER lets you go full screen, but you retain access to your menu and panels. Another benefit is that you gain access to another four to 10 extra lines of code, depending on your screen resolution. You’ve got your TRY-CATCH or IF loop structure but still need some code to put inside. Use Ctrl+K+S to open up a contextual menu from which you can choose the snippets you need to populate your condition. Bookmarks help you keep track of the special markers in your code. For example, if there’s a function that you’re constantly referring to, CTRL+K+K marks that line with a little dot at the left. Additionally, use CTRL+K+N to cycle to the next bookmark in the list and CTRL+K+P for previous bookmarks. Just remember that the bookmark tags the line of the code, not the code itself. The Clipboard Ring is a Visual Studio feature that allows copying multiple code blocks and pasting them. Users can copy various lines of code and put them in the clipboard. These lines of code can then be pasted when required. This improves development productivity. The copied code is stored in a memory, and users can use them in IDE. 
CTRL+C allows you to keep the last 15 copied pieces of content in the clipboard. CTRL+SHIFT+V gives you access to this clipboard ring where you can scroll through the list of copied lines until you find the one you want to paste. If your code file is too long and you want to make it more manageable, consider minimizing it with CTRL+M+M Visual Studio code shortcut keys. Just select the whole file and use this hotkey to collapse all functions to the most basic view. You can re-expand a specific section to see what you want. You can also use CTRL+M+O to collapse to the definition level, which may be a more useful view. You have a code block and want to edit an event so that it’s reflected throughout the other lines in the block. Instead of changing each line individually, hold ALT then click and drag to highlight that block. Type the change you want and you’ll see all selected lines change at once. In Microsoft, visual studio users can choose a block of text by pressing down the Alt key when choosing code and text with the mouse. This is particularly helpful for selecting a string of data or code as opposed to the whole line. These Microsoft Visual Studio shortcuts are faster alternatives to copy-move-paste. To change the location of a certain block of code, highlight the lines then click ALT+↑(up arrow) to move all lines up at once or ALT+↓ (down arrow) to move all likes down. In Visual Studio, users can use the Find All References to see where the required code details have referenced the codebase thoroughly. The Find All References is accessible on the context list or just press Shift + F12. To see the instance of a class, hover over the name and hit F12. To see everywhere you’ve used that class, use SHIFT+F12. These VS code hotkeys are absolutely necessary. Image, you’ve been scrolling down many lines of code and want to go back to some reference that’s 100 lines away. Instead of scrolling up or down to find that place, use CTRL+-(minus) to step backward through the navigation history, which shows everywhere, you’ve clicked and in the order, you clicked them. To go forward, use CTRL+SHIFT+-. Check all Visual Studio hotkeys in this video tutorial: The Build-in Microsoft visual studio means compile and connect only the root files that have been modified since the previous build. The Rebuild feature in Microsoft Visual studio means compile and connect all root files despite whether they changed or not. CTRL+SHIFT+B is a quicker way to build a solution. Use these Visual Studio keyboard shortcuts if you want to create a new task, for example. Type the word ‘task’ and use CTRL+. (dot) to see a menu. Press enter and you’ll see the namespace appear. Autocomplete helps with any coding issues, such as maintaining naming conventions. You do a build and find that you didn’t name a property properly. Instead of hunting for every reference, click on the variable and use CTRL+R+R. This hotkey will not only rename the property but also change the name wherever it’s referenced. When you click Apply, you’ll see all the references it will rename. These Visual Studio hotkeys can be useful when you’re debugging. If you want to step into a function as far as it can go and not just move to the next line, press F11 to jump into the constructor. Always remember that it doesn’t work if you are debugging a static constructor. If not, then it works as usual. For this, the constructor is only called the once. So, before the class is accessed for the first time always make sure that the debugger is attached to it. 
When you see the light bulb, it means there’s an easier action to take. You can take advantage of the hotkey shortcuts. For example, if you have unused USING statements or if you want to add a clause, ALT+ ENTER will get rid of unnecessary statements as well as add that recommended snippet to your clause. Visual Studio contains a characteristic that enables users to add a bookmark. This bookmark can be added to a line of code in a solution. As with a regular bookmark that instantly enables users to go back to the last place, the Visual Studio allows users to immediately find a labeled line in the code. Users can generate many bookmarks and they can instantly navigate between them. Now, to remove this bookmark we have a shortcut key Ctrl+K. These Visual Studio code hotkeys are useful for removing the syntax of the comment from the prevailing line or currently marked lines of code. For example, if you are using the code editor and you want to remove the already written syntax of comments then Ctrl-K comes under the text manipulation Visual Studio keyboard shortcuts. This key is the part of project-related shortcuts. For example, you are using a Microsoft Visual Studio and you have developed a new project called “MyProject”. Now, if you want to open this project in the code editor then Ctrl+Shift+O can be used. The project Visual Studio code shortcut keys are very useful if you are working on a big project and repositories. This shortcut key is also the part of project-related Visual Studio shortcut keys. For example, if you want to override base class methods. Now, you want to achieve this in an already derived class when an overridable method is called. To execute this in the Class View pane you can use Ctrl+Alt+Insert key. This shortcut key is the part of Search and replaces related Visual Studio hotkeys. This hotkey starts an incremental search. After pressing Ctrl+I, the user can insert the text. Once the text has been entered, this key will help in finding the text and the related pattern. The search and replace Visual Studio keyboard shortcuts are useful in finding various codes and comments from the code editor. This shortcut key is also the part of Search and replaces related Visual Studio code shortcuts. This key is used for selecting or clearing the Regular Expression option. With the help of Alt+F3, R the special characters can be used in the Find and Replace methods. This key is the part of Debugging related Visual Studio commands. This shortcut key displays the Memory 1 window to observe memory in the method being debugged. This is especially beneficial when you do not have debugging figures ready for the code. It is also important for studying at large buffers. This key is the part of Object browser-related Microsoft Visual Studio shortcuts. This displays the Object Browser to inspect the classes, attributes, processes, events, and constants specified either in the project or by elements and sample libraries referenced by the project. In the visual studio, the Tool window is a child window of the integrated development environment. The IDE is used to display various information. Each view includes two tool window collections. These are known as primary, the secondary. In this, only one tool window from each collection or group can be active. This shortcut key is the part of Tool window related commands. This switches the window inside or out of a method enabling text inside the window to be chosen. 
This key is the part of Windows manipulation related Visual Studio code shortcut keys. It allows moving to the next tab in the document or window. For example, if you can switch the HTML editor from its design view to HTML view. Visual Studio enables users to create cursors. In Visual Studio, users can create a cursor file. This File is a bitmap file with . cur extension. For creating this file, just right click on the selected project and select Add New Item. Now, select Cursor File and this will create a .cur file. This shortcut key is the part of General Visual Studio code commands. This key moves the cursor to the preceding item, for instance in the TaskList window or Find Results window. This hotkey displays the Solution Explorer. The solution explorer is responsible for listing the projects and files in the current solution of the project. The solution explorer is a window that allows users to explore and maintain all projects. This hotkey displays the Toolbox. The toolbox is an important component of VS. It includes controls and other objects that can be moved into editor window and designer windows. Many controls can be easily added to the projects with the help of a toolbox. This hotkey displays the property pages for the objects and controls currently selected. For instance, one can use Shift+F4 to display a project’s settings and many other such properties. Users can modify and see the configuration by using this hotkey. This hotkey is used to display the web browser window in the Visual Studio. The Ctrl+Alt+R enables users to view or monitor various web pages on the Internet. This hotkey is used to display the Macro Explorer window. It lists all available macros. Macros allow users to automate repetitive tasks in the IDE. The Alt+F8 is one of the important hotkeys in Visual Studio. The Ctrl+Shift+G is used to define the elements to be adjusted by utilizing a hidden grid. The grid spacing can be configured on the Design pane of HTML designer and the grid will automatically adjust itself the next time users open a document. This Visual Studio hotkey is used to display the bookmark dialog. Users can use bookmarks to identify or point particular code lines to comment on important messages or to quickly return to a particular location. The Ctrl+K shortcut is used to add a bookmark. The Ctrl+F9 enables or disables the breakpoint. It is used to define the breakpoint on the current line of code. It is one of the handy hotkeys of Visual Studio. The F5 hotkey is used to debug the application. It is used to run the application in the debugger mode and it displays what the code is doing when it runs. On the other hand, the ctrl + F5 hotkey is used to execute the application without the debugger. This Visual Studio shortcut comes under window management hotkeys. It is used to open the immediate window. The immediate window enables communication with parameters and variables when the written program is in the debug mode. It allows the modification and inspection of the variables or parameters of the written code. This hotkey is widely used by developers for checking things. This hotkey allows developers to navigate to the subsequent description, information, or reference of an object. It is accessible in the object browser and Class View window. It is also accessible in source editing windows with the Shift+F12 shortcut. It is one of the most widely used hotkeys. This hotkey is also used to invoke the View.BrowseNext. It comes under the View section of Visual Studio hotkeys. 
The hotkey Ctrl+Shift+2 is used to invoke the View.BrowsePrevious. In short, it comes under the View class and also called a navigation shortcut. This hotkey is used to invoke the CrossAppDomainDelegate. It is used to executes the code in a different application domain that is recognized by the named delegate. It is used in the system namespace and it is a part of mscorlib.dll assembly. This hotkey is also used to invoke the AppDomain.DoCallBack(CrossAppDomainDelegate) method. Every application domain has appdomain variable. In this, a constructor is launched when an attachment is packed into an application domain, and the destructor is launched when the application domain is relieved. The appdomain variable describes an application domain, which is a private setting where applications perform. This class cannot be derived. The hotkey “S” is used to invoke Stackalloc. It is used to allocate a block of memory on the stack. A block built through the method execution is implicitly abandoned when that method echoes. Users cannot explicitly release the memory designated with stackalloc. A stack-allocated memory block is not related to garbage collection and doesn’t have to be bound with a fixed statement. The method of stackalloc implicitly allows buffer overrun discovery characteristics in the typical (common) language runtime (CLR). If a buffer overrun is identified, the method is stopped as promptly as possible to reduce the risk that malicious code is performed. The A+B hotkey is used to invoke the AccessViolationException. An access violation happens in unmanaged or insecure code when the code tries to write to memory that has not been designated, or to which it does not have a path. This normally happens because a pointer has the wrong value. Not all writes within wrong pointers point to access infractions, so an access violation normally means that some reads or writes have transpired into bad pointers, and that memory might be damaged. Thus, access violations almost invariably mean severe coding errors. An AccessViolationException explicitly recognizes these grave errors. This hotkey is used to invoke the Console.WriteLine Method. It is used to write the defined data, supported by the prevailing line terminator, to the regular output stream. It can be used with various parameters. It is also used to write the text descript ion of the defined objects, supported by the prevailing line terminator, to the regular output utilizing the designated format data. The default line terminator is a line whose purpose is a position return accompanied by a line feed. You can adjust the line terminator by placing the TextWriter.NewLine section of the Out section to a different string.
https://bytescout.com/blog/visual-studio-hot-keys.html
CC-MAIN-2021-10
en
refinedweb
Today? Redux Redux actionsx reducer. BONUS: Redux middleware In some cases more is more. Preparation. Starting the development environment. Redux articles duck We will start by defining our imports and types. The Article type can be copied from our articles.tsx while the rest is new. // File: src/redux/ducks/article.ts /* eslint-disable no-param-reassign */ import { Middleware } from 'redux' import { createAction, createReducer } from '@reduxjs/toolkit' import { apiRequest } from './api' export type Article = { title: string author: string date: number tags: string[] excerpt: string urls: { page: string url: string }[] } export type RequestStatus = 'idle' | 'pending' type InitialState = { data: Article[] | null status: RequestStatus } For our actions, we need to be able to - request articles - store articles - set the status of the UI - handle a request error // File: src/redux/ducks/article.ts export const requestArticlesData = createAction( '[ARTICLE] request data' ) export const setArticlesStatus = createAction( '[ARTICLE] set status', (status: RequestStatus) => ({ payload: { status } }) ) export const storeArticlesData = createAction( '[ARTICLE] store data', (data: Article[]) => ({ payload: { data } }) ) export const cancelArticlesRequest = createAction( '[ARTICLE] cancel failed request', (error: string) => ({ payload: {. // File: src/redux/ducks/article.ts export const articleMiddleware: Middleware = ({ dispatch, getState }) => next => action => { next(action) if (requestArticlesData.match(action)) { const state = getState() if (!(state.article && state.article.status === 'pending')) { dispatch(apiRequest({ url: '/articles', method: 'GET', onSuccess: data => storeArticlesData(data), onError: error => cancelArticlesRequest(error) })) dispatch(setArticlesStatus('pending')) } } if (cancelArticlesRequest.match(action)) { const { error } = action.payload console.log("Error while requesting articles: ", error) // eslint-disable-line no-console dispatch(setArticlesStatus('idle')) } } Our last bit here is the default export for our articleReducer. We only need to handle actions that either store the article data or simply update the UI state. // File: src/redux/ducks/article.ts const articleReducer = createReducer(initialState, (builder) => { builder .addCase(setArticlesStatus, (state, action) => { const { status } = action.payload state.status = status }) .addCase(storeArticlesData, (state, action) => { const { data } = action.payload state.data = data state.status = 'idle' }) }) export default articleReducer Redux API duck. // File: src/redux/ducks/api.ts import { Middleware, Action } from 'redux' import { createAction } from '@reduxjs/toolkit' const API_HOST = '' export type SuccessAction<T> = (data: T) => Action export type ErrorAction = (message: string) => Action export type ApiBaseRequest = { url: string headers?: Record<string, string> } export type ApiGetRequest = ApiBaseRequest & { method: 'GET' } export type ApiPostRequest = ApiBaseRequest & { method: 'POST' data: Record<string, unknown> } export type ApiPutRequest = ApiBaseRequest & { method: 'PUT' data: Record<string, unknown> } export type ApiDeleteRequest = ApiBaseRequest & { method: 'DELETE' } export type ApiRequest = ApiGetRequest | ApiPostRequest | ApiPutRequest | ApiDeleteRequest export type ApiRequestPayload<T = never> = ApiRequest & { onSuccess: SuccessAction<T> onError: ErrorAction } Our actions are relatively simple, now that we have defined all the typings above. 
We have our apiRequest as well as the apiSuccess and apiError actions. // File: src/redux/ducks/api.ts export const apiRequest = createAction( "[API] Request", (api: ApiRequestPayload<any>) => ({ // eslint-disable-line @typescript-eslint/no-explicit-any payload: { ...api }, }) ) export const apiSuccess = createAction( "[API] Success", (onSuccess: SuccessAction<unknown>, data: unknown) => ({ payload: { onSuccess, data }, }) ) export const apiError = createAction( "[API] Error", (onError: ErrorAction, message: string) => ({ payload: { onError, message }, }) ). // File: src/redux/ducks/api.ts export const apiMiddleware: Middleware = ({ dispatch }) => next => action => { next(action) if (apiRequest.match(action)) { const { url, method, headers, onSuccess, onError, }: ApiRequestPayload<any> = action.payload // eslint-disable-line @typescript-eslint/no-explicit-any fetch(`${API_HOST}${url}`, { method, headers }) .then(response => response.json()) .then(reponseData => dispatch(apiSuccess(onSuccess, reponseData))) .catch(error => { dispatch(apiError(onError, error.message)) }) return } if (apiSuccess.match(action)) { const { onSuccess, data } = action.payload dispatch(onSuccess(data)) } if (apiError.match(action)) { const { onError, message } = action.payload dispatch(onError(message)) } } Redux - wiring everything up We now need to register our reducers with the rootReducer and add a rootMiddleware to register our new apiMiddleware and articlesMiddleware. // File: src/redux/rootReducer.ts import { combineReducers } from '@reduxjs/toolkit' import articleReducer from './ducks/articles' const rootReducer = combineReducers({ articles: articleReducer, }) export default rootReducer // File: src/redux/rootMiddleware.ts import { apiMiddleware } from './ducks/api' import { articlesMiddleware } from './ducks/articles' export default [ apiMiddleware,. React Redux, hook things up with the new store. // File: src/components/pages/articles.hooks.ts import { useEffect } from 'react' import { requestArticlesData, Article } from '../../redux/ducks/articles' import { useReduxDispatch, useReduxSelector } from '../../redux' export const useArticlesData = (): Article[] | null => { const data = useReduxSelector(state => { return state.articles.data || null }) const dispatch = useReduxDispatch() useEffect(() => { if (!data) { dispatch(requestArticlesData()) } }, [dispatch, data]) return data } With this in place, we can clean up our Articles.tsx and remove everything by replacing all the state logic with our new hook. // File: src/components/pages/articles.tsx import React from 'react' import { useArticlesData } from './articles.hooks' const Articles = (): React.ReactElement => { const data = useArticlesData() return ( // nothing changed here so I skipped this part ) } export default Articles './articles.hooks' as the linter thought .hooks was the file ending... we can't have that. "import/extensions": [ "error", "never", { "style": "always", "hooks": "always" // this is new } ],. Discussion (0)
https://dev.to/allbitsequal/react-bootstrapping-deep-dive-into-redux-messaging-patterns-1e7b
CC-MAIN-2021-10
en
refinedweb
These are chat archives for FreeCodeCamp/HelpFrontEnd c0d0er2 sends brownie points to @dwquach :sparkles: :thumbsup: :sparkles: ?sig={random number}to the end of the url to prevent this, for example: $("body").css("background-image", "url(" + "" + Math.random() + ")"); <h2>element for your name. The display style for that is display:block;which makes it occupy its own line. Try changing it to something else. Or putting a class on it with display: inline-block; col-s-3is not correct, it should be col-sm-3, for all of those, I think? Unless you're trying to do something else. varLink</a>"; dwquach sends brownie points to @tylermoeller :sparkles: :thumbsup: :sparkles: willstanleyus sends brownie points to @khaduch :sparkles: :thumbsup: :sparkles: victorhall sends brownie points to @mot01 :sparkles: :thumbsup: :sparkles: victorhall sends brownie points to @mot01 :sparkles: :thumbsup: :sparkles: :warning: victorhall already gave mot01 points quackidy sends brownie points to @dwquach and @khaduch :sparkles: :thumbsup: :sparkles: It's very unfinished, I'm not sure if I'm headed in the right directionIt's very unfinished, I'm not sure if I'm headed in the right direction function updateRecords(id, prop, value) { if(collection.tracks.hasOwnProperty()){ return collection; }else if (collection.tracks.hasOwnProperty === false){ collection.tracks.push } collectionhave a tracksproperty, ever? What kind of properties does collectionhave - the "first level" of properties. For that matter, what kind of structure is collection? collection, and an id, what would you do with that to try and find a tracksproperty? Your answer is close, but you have to make sure that you understand when you can use dot notation vs. bracket notation to access an object. johnnunns sends brownie points to @khaduch :sparkles: :thumbsup: :sparkles: function updateRecords(id, prop, value) { if(collection["1245"].tracks.hasOwnProperty()){ return collection; }else { collection['1245'].tracks.push("Don't know") } idvalues, and those values will be the numbers, so you should be able to use that variable in your data access. function updateRecords(id, prop, value) { if(collection["1245"].tracks.hasOwnProperty(prop)){ return collection; }else { collection['1245'].tracks.push("Don't know"); } id- there is no way to make it work otherwise! Check the test cases that will be run to validate your solution. idbecause when you call the function, as they have one sample call at the bottom of the edit window, that value will be avaiable in the idvariable and you'll be using it in the way that will be a general-purpose solution. But, for the sake of argument, you are using one of the values. So let's move on to the next thing. If you have this construct: collection["1245"]- what does that allow you to access within the data? @moT01 - well, it's better than counting into the negative values, I've seen that. So the next question is that I didn't hear any sound when it changed from the session to the break? I think that is one of the requirements, they mention it in the user stories, I think? One other thing, and if you have it working, it's fine - it's easier to keep your time value in seconds only, and not have to make calculations against seconds and minutes. But it looks good overall! collection["1245"], how would you determine whether or not it has a tracksproperty? collection["1245"].tracks.hasOwnProperty(prop)- what is that going to tell you? Is it correct? 
(I would say that tracksmight be there, it might not be there, and if it is there, it needs to be an array, as many of the records are initialized with an array.) But does an array have properties? // Setup var collection = { "2548": { "album": "Slippery When Wet", "artist": "Bon Jovi", "tracks": [ "Let It Rock", "You Give Love a Bad Name" ] }, "2468": { "album": "1999", "artist": "Prince", "tracks": [ "1999", "Little Red Corvette" ] }, "1245": { "artist": "Robert Palmer", "tracks": [ ] }, "5439": { "album": "ABBA Gold" } }; // Keep a copy of the collection for tests var collectionCopy = JSON.parse(JSON.stringify(collection)); // Only change code below this line function updateRecords(id, prop, value) { if(collection[1245].tracks.hasOwnProperty(value)){ return collection; }else { collection[1245].tracks.push("Addicted to Love"); } if(collection[5439].hasOwnProperty(prop)) { return collection; }else if(collection[5439]) } id object.newProperty = "something";for example. If you want to add something to an array, as with this problem, you have to use .push(), which you know. But again, in the general case, you have to make sure that there is an array there in order to push. So you cannot just do object.someOtherProperty.push("something");if there isn't a someOtherPropertythat doesn't contain an array. For the general case, you want to make sure that there is or isn't a property for the "tracks" case so you can properly create it if necessary, and .push()to it if it exists. mot01 sends brownie points to @khaduch :sparkles: :thumbsup: :sparkles: idwill contain that value, and you can use it as collection[id], and your code has to be written to just handle the other cases - for example, if value === ''is one of the things that they mention - the specific record that you have the ID for is the one that will be tested. It will work! @johnnunns when you call... updateRecords(5439, "tracks", ""); those values get plugged into your variable... updateRecords(id, prop, value); so using collection[id], in this case, will give you access to the collection with the id of 5439 maybe you already got that part, trying to bring more clarity @johnnunns - one example of one of these tests - the first one: updateRecords(5439, "artist", "ABBA"); Your function will be called - the function arguments will be: The test says that after the function runs: artistshould be "ABBA" So the testing code is going to read the collection object that is returned. Before your function runs, that record doesn't have an artist field. But you will add it, and since it is not a tracks property, no need to worry about array-type structures, you can just set that property using the variables and bracket notation - collection[id][prop] = value; That's a hint... It is equivalent to having the code collection["5439"].artist = "ABBA"; but it is reusable because it is parameterized with the function arguments that will assume the values for the current function invocation.{query.value}&format=json&origin=* if (collection[id].hasOwnProperty('someproperty'); .hasOwnProperty(), I think that's what you meant? And the point is that you want to be able to work on multiple records. The thing is, that the function is only called with one value at a time - so for each invocation of the function, you'll probably be using different values, but ONLY ONE AT A TIME! That's the beauty of it! 
You can, of course, have something that would take action on all of the records in the collection, say, if you wanted to add another field - but you'd have to be getting multiple pieces of data in an array or something - so calm yourself. :) It will work, I promise! ``` function updateRecords(id, prop, value) { if(collection[id].tracks.hasOwnProperty(value)){ return collection; }else { collection[id].tracks.push("Addicted to Love"); } if(collection[id].hasOwnProperty(prop)) { return collection; }else { collection[id].push("tracks","artists"); } } ``` @khaduch @moT01 function updateRecords(id, prop, value) { if(collection[id].tracks.hasOwnProperty(value)){ return collection; }else { collection[id].tracks.push("Addicted to Love"); } if(collection[id].hasOwnProperty(prop)) { return collection; }else { collection[id].push("tracks","artists"); } } replaced the specific song with 'value'replaced the specific song with 'value' function updateRecords(id, prop, value) { if(collection[id].tracks.hasOwnProperty(value)){ return collection; }else { collection[id].tracks.push(value); } if(collection[id].hasOwnProperty(prop)) { return collection; }else { collection[id].push("tracks","artists"); } @johnnunns - there are a few problems here. value === ''or value !== ''somewhere in your code to be able to handle that situation, as described in the problem description collection[id].tracks.hasOwnProperty(value)- this is not correct. tracksdoes not contain an object as its value, so it cannot be used with .hasOwnProperty()in the way you have it coded. (You would most likely get an error in the console if you looked at it when this code was attempted to be run.) collection[id].tracks.push("Addicted to Love");would work - if prop === "tracks"(you were supposed to be operating on the "tracks" property) and if value === "Addicted to Love"- In other words, you have hard-coded something that should be using the function arguments. if(collection[id].hasOwnProperty(prop)) {- this test is not useful here. The function should always return collection;after it updates the record. collection[id].push("tracks","artists");- this is also not useful here - there is no condition here where you should be pushing the words "tracks" and "artists" into an object (which probably will fail anyway, since it collection[id]is not an array... You have some of the concepts going in the right direction, but things are quite jumbled and confused, but let's try another approach. Look at the conditions that they want you to check - one of the biggies is value === "" or value !== "". If the value is blank, you are just supposed to delete the property that is given in the variable prop, at the ID that is given in the variable id. You have to use the delete function. You can write this code like this: if ( value === '' ) { delete collection[id][prop]; // because the value is blank, just delete this } else { // the value is not blank - there are other things to consider, most specifically, if `prop === "tracks"` if (prop === "tracks" ) { // do the things here to properly handle the "tracks" property } else { // the prop variable is not "tracks", just add the prop and value to the given record. } } return collection; // this is always done at the end of the function that is what you should have as a basic idea for your function - you need to fill in the details. Look at the description of the problem, with this framework in mind, and see if you can get some of the tests to pass. 
johnnunns sends brownie points to @khaduch and @mot01 :sparkles: :thumbsup: :sparkles: joshfilippi sends brownie points to @mot01 :sparkles: :thumbsup: :sparkles: .getJSON()but although the API is clearly returning data in a browser it doesn't appear to be hitting my script... $.getJSON("", function(data) { console.log("OPoop"); }); console.log()should be console.log(data); joshfilippi sends brownie points to @livonian-router :sparkles: :thumbsup: :sparkles: jamespayne sends brownie points to @joshfilippi :sparkles: :thumbsup: :sparkles: "data": { "locationName": "test tag", "address": "Bangladesh,Boalkhali", "assetTags": [ { "tagType": "du", "tagValue": "aadadasd" } ] } var option_cate = '<li class="item"><span>' + malwareLabel +' : ' + dummyJson.data[key] + ' </span></li>'; $(option_cate).appendTo('#malware-menu'); but it gets printed as locationName : test tag, address : Bangladesh,Boalkhali, Asset Tags : [object Object] The data for assetTags gets printed as object as it is an array of object How to print the values of the inner array also jamespayne sends brownie points to @davidminaz :sparkles: :thumbsup: :sparkles: .getJSON()with that API replaceWith();, replace();, show();, hide();, but I can't get it right. $(document).ready(function() { $resultsList = $('#resultsList'); $('#submit').click(function() { var query = $("#query").val(); var<div class="card-content"><span class="card-title">'+ data[1][i] +'</span><p>'+ data[2][i] +'</p></div><div class="card-action"><a href="'+ data[3][i] +'" target="_blank">read full article</a></div></div></li>'); } }); }); }); append();to show the data on the page davidminaz sends brownie points to @jamespayne :sparkles: :thumbsup: :sparkles: when i click military? $(document).ready(function(){ var int = setInterval(updateTime.bind(this, military),1000,military); $("#time").click(function(){ if(int) { clearInterval(int); int = setInterval(updateTime.bind(this, military),1000,military); } military = !military; }); }); format) in your updateTime function. function updateTime(){ var d = new Date(), hours = d.getHours(), displayHours = military == true ? hours : hours % 12, minutes = d.getMinutes() > 9 ? d.getMinutes() : ("0" + d.getMinutes()), time = displayHours + ":" + minutes; if(hours < 12){ time += " a.m."; }else{ time += " p.m."; } $("#time").html(time); } $(document).ready(function(){ $("#time").click(military, function(){ military = !military; }); setInterval(updateTime,1000); }); scopeand parameters to be passed to a function. callthe function .bindwill be reflected in your function. thisinside your function) and then 2nd arguments onward are parameters to be passed. uaefame sends brownie points to @adityaparab :sparkles: :thumbsup: :sparkles: :cookie: 767 | @adityaparab | trieucrew sends brownie points to @adityaparab :sparkles: :thumbsup: :sparkles: bahaaiman sends brownie points to @sorinr :sparkles: :thumbsup: :sparkles: var app = angular.module('wikiApp', []); app.controller('myCtrl', function ($scope, $http) { $scope.searchUrl = `? 
format=json &action=query &generator=search &gsrnamespace=0 &gsrlimit=10 &prop=pageimages|extracts &pilimit=max &exintro &explaintext &exsentences=1&exlimit=max&gsrsearch=Albert&callback=?`; $http.jsonp($scope.searchUrl) .success( function (data) { var results = data.query.pages; angular.forEach(results, function (v, k) { $scope.results.push({ title: v.title, body: v.extract, page: page + v.pageid }) }) }); }); Following is the code: var count = 0; function cc(card) { // Only change code below this line switch (card) { case 2: case 3: case 4: case 5: case 6: count += 1; break; case 7: case 8: case 9: count += 0; break; case 10: case 'J': case 'Q': case 'K': case 'A': count -= 1; } if (count <= 0) { console.log (count + " Hold"); } else console.log (count + " Bet"); //return count; // Only change code above this line } // Add/remove calls to test your function. // Note: Only the last will display cc(2); cc(3); cc(4); cc(5); cc(6); I am not getting any output. Any ideas? Hi guys, i dont know what the problem is here, my s function stream keeps returning undefined when the returned value is defined.. my Code: $(document).ready(function() { var channels = ["ESL_SC2", "OgamingSC2", "cretetion", "freecodecamp", "storbeck", "habathcx", "RobotCaleb", "noobs2ninjas"]; function stream(channel) { var streamResult; var stream_link = "" + channel; $.getJSON(stream_link, function(result) { streamResult = result.stream; }); return streamResult; }//end function stream alert(stream("ESL_SC2")); function channelInfo(stream, channel) { //var status = stream(channel); if(status === null) { var channel_status = "offline"; }else { var channel_status = "online"; }//end else var link = "" + channel; $.ajax({ url: link, dataType: "jsonp", success: function(json) { $(".table").append("<tr><td><img src = '" + json.logo + "' width = '320'></td>"); $(".table").append("<td>" + json.display_name + "</td>"); $(".table").append("<td>" + channel_status + "</td></tr>"); }//end success });//end $.ajax }//end channelInfo for(i = 0; i < channels.length; i++) { channelInfo(stream, channels[i]); }//end for });///end document codepen: any help please $.getJSONis an asynhronous call. That means, that the callback function given to it resolves late, and the code doesn't wait for it with running. so streamResultis not YET set when you return. 
When you want to encapsulate that stuff in a function still, you can use something like: function stream(channel, cb) { var streamResult; var stream_link = "" + channel; $.getJSON(stream_link, function(result) { cb(result.stream); }); } And use it as: stream("ESL_SC2", function(stream) { // here you have the stream }); section { height: 100vh; width: 100%; } pawelrokosz sends brownie points to @alpox :sparkles: :thumbsup: :sparkles: purpose50 sends brownie points to @alpox :sparkles: :thumbsup: :sparkles: purpose50 sends brownie points to @benwebdev :sparkles: :thumbsup: :sparkles: purpose50 sends brownie points to @sorinr :sparkles: :thumbsup: :sparkles: makzin sends brownie points to @tylermoeller :sparkles: :thumbsup: :sparkles: <div class="col-md-3"> <button class="btn btn-block"><a href=""target="_blank">Minds</a></button> </div> <div class="col-md-3"> <button class="btn btn-block"><a href="" target="_blank">Twitter</a></button> </div> <div class="col-md-3"> <button class="btn btn-block"><a href="" target="_blank">Github</a></button> </div> <div class="col-md-3"> <button class="btn btn-block"><a href="" target="_blank">StumbleUpon</a></button></div> </div> </div> plz got my buttons looking good but do they work no div classcontainer-fluid div classrow div classcol-md-3 button classbtn btn-blocka hrefhttpswwwmindscompielotarget_blankmindsabutton div div classcol-md-3 button classbtn btn-blocka hrefhttpstwittercompaul_standley target_blanktwitterabutton div div classcol-md-3 button classbtn btn-blocka hrefhttpsgithubcom target_blankgithubabutton div div classcol-md-3 button classbtn btn-blocka hrefhttpwwwstumbleuponcomstumblerpaulstandley1972 target_blankstumbleuponabuttondiv div div <button>element, without any type attribute, is used for submitting a <form>. If you want to style your hyperlinks like buttons, use the btnclass with your <a> elements instead. @emamador With this code: for(var i = 0; i < aiColors.length; i++) { playAiColInt(i); } You are still calling a function that uses setTimeout(), so the for loop continues to completion before the setTimeout is finished. ssgriffen sends brownie points to @tylermoeller :sparkles: :thumbsup: :sparkles: pielo2 sends brownie points to @tylermoeller :sparkles: :thumbsup: :sparkles: uaefame sends brownie points to @mot01 :sparkles: :thumbsup: :sparkles: uaefame sends brownie points to @mot01 :sparkles: :thumbsup: :sparkles: img-responsiveclass to your images - or use CSS and use max-width: 100% c0d0er2 sends brownie points to @tylermoeller :sparkles: :thumbsup: :sparkles: ash1108 sends brownie points to @ankit-prgmr :sparkles: :thumbsup: :sparkles: Can anybody please help me on Use Bracket Notation to Find the First Character in a String in JavaScript? Here is my code: var firstLetterOfFirstName = ""; var firstName = "Ada"; firstLetterOfFirstName = firstName[0]; // Setup var firstLetterOfLastName = ""; var lastName = "Lovelace"; // Only change code below this line var firstLetterOfLastName = ""; var lastName = "Frank"; firstLetteOfFirstName = firstName[0]; Here is my link: c0d0er2 sends brownie points to @igoramidzic :sparkles: :thumbsup: :sparkles: ./img/... your code can move from one server to another igoramidzic sends brownie points to @tylermoeller :sparkles: :thumbsup: :sparkles: /* ---------------------------------------------------| | \ | | \ | | \ | |____________________________\__| */ transform: rotate(45deg);. You'll have to adjust the position to line it up correctly though. 
Another way is with SVG: <div class="div-with-diagonal-line"> <svg width="100" height="100"> <path d="M 0 0 L 100 100" stroke="red" stroke- </svg> </div> .div-with-diagonal-line { background-color: #eee; width: 100px; height: 100px; } <ul clas="nav navbar-nav"> finkbeca sends brownie points to @tyler :sparkles: :thumbsup: :sparkles: jinnd319 sends brownie points to @tylermoeller :sparkles: :thumbsup: :sparkles: Here is my JS: $(document).ready(function() { $(".text-primary").addClass("animated bounce"); }); And HTML: <h1 class="text-primary">Jean-Christophe Victor</h1> mc00t sends brownie points to @larrygold :sparkles: :thumbsup: :sparkles: brycemcdonald86 sends brownie points to @zovaaa :sparkles: :thumbsup: :sparkles: event.preventDefault()right above your search()call. brycemcdonald86 sends brownie points to @tylermoeller :sparkles: :thumbsup: :sparkles:
https://gitter.im/FreeCodeCamp/HelpFrontEnd/archives/2016/12/29
CC-MAIN-2021-10
en
refinedweb
Usage Signature: interface SparkChartElement<K, D extends oj.SparkChartElement.Item|any> Generic Parameters Typescript Import Format //To typecheck the element APIs, import as below. import {SparkChartElement} from "ojs/ojchart"; //For the transpiled javascript to load the element's module, import as below import "ojs/ojchart"; For additional information visit: Note:. itemTemplate The itemTemplateslot is used to specify the template for creating each item of the spark chart when a DataProvider has been specified with the data attribute. The slot content must be a <template> element. When the template is executed for each item, it will have access to the spark chart's binding context and the following properties: - $current - an object that contains information for the current item. (See oj.ojSparkChart.ItemTemplateContext or the table below for a list of properties available on $current) - alias - if as attribute was specified, the value will be used to provide an application-named alias for $current. The content of the template should only be one <oj-spark-chart-item> element. See the oj-spark-chart-item doc for more details. Properties of $current: tooltipTemplate The tooltipTemplateslot is used to specify custom tooltip content. This slot takes precedence over the tooltip.renderer property if specified. When the template is executed, the component's binding context is extended with the following properties: - $current - an object that contains information for the spark chart. (See oj.ojSparkChart.TooltipContext or the table below for a list of properties available on $current) Properties of $current: Attributes (nullable) animation-duration :number - The duration of the animations in milliseconds. The default value comes from the CSS and varies based on theme. Names animation-on-data-change :"auto"|"none" - Defines the animation that is applied on data changes. - Default Value: "none" Supported Values: Names animation-on-display :"auto"|"none" - Defines the animation that is shown on initial display. - Default Value: "none" Supported Values: Names area-color :string - The color of the area in area or lineWithArea spark chart. - Default Value: "" Names area-svg-class-name :string - The CSS style class to apply if the type is area or lineWithArea. The style class and inline style will override any other styling specified through the properties. For tooltips and hover interactivity, it's recommended to also pass a representative color to the color attribute. - Default Value: "" Names area-svg-style :CSSStyleDeclaration - The inline style to apply if the type is area or lineWithArea. The style class and inline style will override any other styling specified through the properties. For tooltips and hover interactivity, it's recommended to also pass a representative color to the color attribute. Only SVG CSS style properties are supported. - Default Value: {} Names as :string - An alias for the $current context variable when referenced inside itemTemplate when using a DataProvider. - Deprecated: - Default Value: '' Names bar-gap-ratio :number - Specifies the width of the bar gap as a ratio of the item width. The valid value is a number from 0 to 1. - Default Value: 0.25 Names baseline-scaling :"zero"|"min" - Defines whether the axis baseline starts at the minimum value of the data or at zero. - Default Value: "min" Supported Values: Names color :string - The color of the data items. The default value varies based on theme. 
data :(DataProvider.<K, D>|null) - The DataProvider for the spark chart. It should provide rows where each row corresponds to a single spark chart item. The DataProvider can either have an arbitrary data shape, in which case an element must be specified in the itemTemplate slot, or it can have oj.ojSparkChart.Item as its data shape, in which case no template is required.
- Default Value: null

first-color :string - The color of the first data item.
- Default Value: ""

high-color :string - The color of the data item with the greatest value.
- Default Value: ""

items :(Array.<oj.ojSparkChart.Item>|Array.<number>|Promise.<Array.<oj.ojSparkChart.Item>>|Promise.<Array.<number>>|null) - An array of objects (see oj.ojSparkChart.Item) that define the data for the spark chart. Also accepts a Promise for deferred data rendering.
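A minimal usage sketch follows. The attribute values, the view-model names, and the use of $current.data are illustrative assumptions rather than something this excerpt documents, so check the full JET docs before relying on them. The first form binds a plain array of numbers to items; the second uses a DataProvider with the itemTemplate slot described above.

<!-- items bound directly to an array of numbers -->
<oj-spark-chart type="line" items="[[ [5, 8, 2, 7, 0, 9] ]]"></oj-spark-chart>

<!-- arbitrary data shape: one oj-spark-chart-item per row via the itemTemplate slot -->
<oj-spark-chart type="bar" data="[[productSalesDataProvider]]">
  <template slot="itemTemplate">
    <oj-spark-chart-item value="[[$current.data.value]]"></oj-spark-chart-item>
  </template>
</oj-spark-chart>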
https://www.oracle.com/webfolder/technetwork/jet/jsdocs/oj.ojSparkChart.html
CC-MAIN-2021-10
en
refinedweb
This page explains how to use Stackdriver, and Prometheus and Grafana, for logging and monitoring. Refer to Logging and Monitoring Overview for a summary of the configuration options available.

Using Stackdriver

The following sections explain how to use Stackdriver with GKE On-Prem clusters. Two values from your installation are relevant here:

- GCP region: the region where you want to store Stackdriver logs. It is a good idea to choose a region that is near your on-prem data center. You provided this value during installation.
- cluster_name: Cluster name.

Accessing logging data

You can access logs via the Logs Viewer in Cloud Console. For example, to access a container's logs:

- Open the Logs Viewer in Cloud Console for your project.
- Find logs for a container by:
  - Clicking on the top-left log catalog drop-down box and selecting Kubernetes Container.
  - Selecting the cluster name, then the namespace, and then a container from the hierarchy.

Accessing metrics data

You can choose from over 3000 metrics by using Metrics Explorer. To access Metrics Explorer, do the following:

- In the Google Cloud Console, select Monitoring.
- Select Resources > Metrics Explorer.

Accessing Stackdriver metadata

Metadata is used indirectly, via metrics, when you filter for metrics in Stackdriver.

Prometheus and Grafana

The following sections explain how to use Prometheus and Grafana with GKE On-Prem clusters. To delete the monitoring-sample monitoring object, enter the following command:

kubectl --kubeconfig [USER_CLUSTER_KUBECONFIG] -n kube-system delete monitoring monitoring-sample
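Before digging into the UI, it can help to confirm that the logging and monitoring pods are actually running in the user cluster. This is only a quick sanity check rather than a procedure from this page, and the pod names vary by version; it reuses the same [USER_CLUSTER_KUBECONFIG] placeholder as the delete command above.

kubectl --kubeconfig [USER_CLUSTER_KUBECONFIG] -n kube-system get pods | grep -E 'stackdriver|prometheus|grafana'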
https://cloud.google.com/anthos/clusters/docs/on-prem/1.1/how-to/administration/logging-and-monitoring
CC-MAIN-2021-10
en
refinedweb
React-native mobile product showcase This simple guide will show you how to adapt the Flotiq Mobile Expo application source code to work as a product showcase app. You will build a mobile app that will let your users: - browse through the list of products, - read product details, - search through the product list. The app will be synchronized with your Flotiq account, so you can use the CMS to add and update products and it will compile for Android and iOS phones, out of the box. The code changes required in this guide are minimal, but it might take some time to setup the working environment, both for Android and iOS. Prerequisites We encourage you to download the Flotiq mobile expo application from your Google Play or Apple App Store and connect it with your Flotiq account. This way you will understand how the application works and what you can expect. The article assumes: - you already registered a free Flotiq account - you know how to retrieve your API keys. Here are the remaining essentials: Fork the application repo Go to Flotiq Mobile Expo on GitHub and fork our repo. You will be making some changes to the code and it will be easier to keep track of it later on. Don't forget to give us a star if you find it useful! Setup your workspace - Install XCode on your Mac or - Install Android Studio, for example through JetBrains Toolbox. Once installed - launch it and install an emulator with a recent Android Virtual Device - Clone the git repository you just forked or use ours: git clone - Install node dependencies in your project directory: npm install - Start the iOS emulator npx react-native run-ios - Or start the Android emulator npx react-native run-android This should bring up the emulator and launch Flotiq app. The screen you will see allows you to connect with your Flotiq account, but we will do this through a simple change in the source code. If you have any issues - consult the README file in the application repo. Code updates Here are the steps needed to connect the app to your Flotiq account and simplify it, so it only displays the products. Authenticate with your Flotiq API key The code in the repository uses a login screen to authenticate with your API key. We won't need that for our Product Showcase application, but we still need to authenticate with the Flotiq API. - Login to the Flotiq dashboard - Create a scoped API key for the Product and Media content types - Copy the key. - Now save it in your React code, by adding the following line to the App.jsfile: import FlotiqNavigator from './app/navigation/FlotiqNavigator/FlotiqNavigator'; import contentTypesReducer from './app/store/reducers/contentTypes'; import authReducer from './app/store/reducers/auth'; // Add this line after imports: AsyncStorage.setItem('flotiqApiKey', "<< YOUR FLOTIQ READ-ONLY API KEY HERE >>"); enableScreens(); Once you save the file - the application should automatically reload in the emulator and the login screen should be skipped. You should now see the application's home screen: Simplify navigation For our Product Showcase app we would like to skip to the product list immediately, instead of showing the default Home screen and Content Type browser screen. To achieve that - you will need to update how the navigation is structured. 
Open the StackNavigator.js file and make the necessary adjustments: - Remove the {{HomeStackScreen()}}line in the RootStackNavigatorcomponent, - Remove the entire Stack.Screencalled ContentTypesScreenin the ContentTypesStackScreenconstant, - Make the following adjustments in ContentTypeObjectsScreen.js - comment out the first line add the following constants: //const { contentTypeName, partOfTitleProps, withReachTextProps, refetchData, contentTypeLabel } = props.route.params; const contentTypeName = 'product' const partOfTitleProps = ['name'] const withReachTextProps = ['description'] const refetchData = true Now, to properly hide the splash screen - add the following import statement: import SplashScreen from 'react-native-splash-screen'; and add the following useEffect() before the first one appearing in the file: useEffect(() => { if (!isLoading) { SplashScreen.hide(); } }, [isLoading]); Finally, in the contentTypeObjectsScreenOptions method - replace the screenTitle const with a static one: const screenTitle = "Products" Here's the full list of changes that have to be made to simplify the original app, in case you missed something. Effects That's it! You should now see the product list immediately after the app has loaded: Now, you can go and play with it or publish it straight to the App stores. The original application has already been approved by Apple and Google stores, so it should be a quick and easy task to get your app approved too! Some ideas you can try: - add product images to the list, - modify the product detail screen, - remove add / edit content buttons. Have fun, and tell us what you built! Discussion (1) Let us know in the comments if you found this useful, we would love to see your apps in the stores, too :-)
https://practicaldev-herokuapp-com.global.ssl.fastly.net/flotiq/product-showcase-mobile-app-in-react-52hh
CC-MAIN-2021-10
en
refinedweb
Python - Remove duplicates from a String

While working with strings, there are many instances where it is needed to remove duplicates from a given string. In Python, there are many ways to achieve this.

Method 1: Remove duplicate characters without order

In the below example, a function called MyFunction is created which takes a string as an argument and converts it into a set called MySet. As elements of a set are unordered and duplicates are not allowed, all duplicate characters will be removed, but in an arbitrary order. After that, the join method is used to combine all elements of MySet with an empty string.

def MyFunction(str):
    MySet = set(str)
    NewString = "".join(MySet)
    return NewString

MyString = "Hello Python"
print(MyFunction(MyString))

The output of the above code will be (the character order may vary between runs):

HtPy nloeh

Method 2: Remove duplicate characters with order

In this example, the built-in module called collections is imported in the current script to use one of its classes, OrderedDict. The fromkeys() method of OrderedDict is used to remove duplicate characters from the given string while preserving their original order.

import collections as ct

def MyFunction(str):
    NewString = "".join(ct.OrderedDict.fromkeys(str))
    return NewString

MyString = "Hello Python"
print(MyFunction(MyString))

The output of the above code will be:

Helo Pythn
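A third approach, not shown on the page and included here only as an illustrative sketch, preserves order without importing anything by tracking the characters already seen (in CPython 3.7+ a plain dict.fromkeys() call achieves the same thing):

def MyFunction(text):
    seen = set()
    result = []
    for ch in text:
        # keep only the first occurrence of each character
        if ch not in seen:
            seen.add(ch)
            result.append(ch)
    return "".join(result)

print(MyFunction("Hello Python"))

The output of the above code will be:

Helo Pythn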
https://www.alphacodingskills.com/python/pages/python-remove-duplicate-characters-from-a-string.php
CC-MAIN-2021-10
en
refinedweb
“binary tree search” Code Answer's

binary tree search
cpp by Rid09 on Jun 07 2020 Donate 6

/* This is just the searching function, you need to write the required code. Thank you. */
void searchNode(Node *root, int data)
{
    if (root == NULL) {
        cout << "Tree is empty\n";
        return;
    }
    queue<Node*> q;
    q.push(root);
    while (!q.empty()) {
        Node *temp = q.front();
        q.pop();
        if (temp->data == data) {
            cout << "Node found\n";
            return;
        }
        if (temp->left != NULL)
            q.push(temp->left);
        if (temp->right != NULL)
            q.push(temp->right);
    }
    cout << "Node not found\n";
}

binary search tree
whatever by Frightened Ferret on Dec 09 2020 Donate 1

# Driver Code
arr = [ 2, 3, 4, 10, 40 ]
x = 10

# Function call
result = binarySearch(arr, 0, len(arr)-1, x)

if result != -1:
    print ("Element is present at index % d" % result)
else:
    print ("Element is not present in array")

C++ answers related to “binary tree search”
- binary search function in c++
- bst traversal code in data structure with c++
- check for bst
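The second answer's driver code calls a binarySearch function that was cut off from the page. A standard recursive implementation matching that call signature, shown here as a sketch rather than the exact snippet the answer referred to, would be:

def binarySearch(arr, low, high, x):
    # Search the sorted list arr[low..high] for x; return its index or -1.
    if high < low:
        return -1
    mid = (low + high) // 2
    if arr[mid] == x:
        return mid
    elif arr[mid] > x:
        return binarySearch(arr, low, mid - 1, x)
    else:
        return binarySearch(arr, mid + 1, high, x)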
https://www.codegrepper.com/code-examples/cpp/binary+tree+search
CC-MAIN-2021-10
en
refinedweb
The interval tree being integrated in needs unit tests integrated as well.

Created attachment 66465 [details] Patch

This patch won't build until bug 45060 and bug 45160 land, but it should give reviewers an idea of how the tree has been tested.

Attachment 66465 [details] did not build on chromium.

Comment on attachment 66465 [details] Patch

View in context:

> WebKit/chromium/tests/PODIntervalTreeTest.cpp:109
> +namespace {
nit: add new line after "namespace {" to match the new line above the closing bracket

> WebKit/chromium/tests/PODIntervalTreeTest.cpp:150
> +namespace {
nit: add new line below this line

> WebKit/chromium/tests/PODIntervalTreeTest.cpp:199
> + int len = nextRandom(maximumValue);
nit: len -> length

> WebKit/chromium/tests/PODIntervalTreeTest.cpp:210
> + int idx = nextRandom(addedElements.size());
nit: idx -> index

R=me w/ those nits fixed

Committed r66809:

might have broken Leopard Intel Debug (Tests)
https://bugs.webkit.org/show_bug.cgi?id=45161
CC-MAIN-2021-10
en
refinedweb
hashids.scala

A Scala port of the hashids.js library to generate short hashes from one or many numbers. Ported from hashids.java by fanweixiao.

- Hashids is initialized with an alphabet, a salt and a minimum hash length
- It's possible to hash single and multiple long numbers
- Hashes are unique across the salt value
- Hashes are decryptable to a single or multiple numbers respectively
- Hashes don't contain English curse words
- Supports positive long numbers
- The primary purpose of hashids is to obfuscate ids
- Do not use hashids for security purposes or compression

The goal of the port

Besides the goals of the original library, this Scala port is written without mutable state. Also you get clear exceptions in the following cases:

- IllegalArgumentException when the alphabet you provided contains duplicates
- IllegalArgumentException if the alphabet contains spaces
- IllegalArgumentException if the alphabet is less than 16 chars long
- IllegalArgumentException when calling encodeHex with a non-HEX string
- IllegalStateException when calling decode with a hash produced with a different salt

Usage

Cross-built for Scala 2.11, 2.12 and 2.13

libraryDependencies += "com.github.ancane" %% "hashids-scala" % "1.4"

import org.hashids.Hashids, Hashids._

Encode (hash)

You should provide your own unique salt to get hashes different from other hashids. Do not use the salt from the examples.

val hashids = Hashids("this is my salt")
val hash = hashids.encode(12345L)
> "NkK9"

Decode (unhash)

During decryption, the same salt must be used to get the original numbers back:

val hashids = Hashids("this is my salt")
val numbers = hashids.decode("NkK9")
> List(12345L): Seq[Long]

License

MIT License. See the LICENSE file.
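The README only shows a single-number round trip. Hashing several numbers at once works the same way; the varargs-style calls below follow the pattern of the other hashids ports and should be treated as an assumption rather than something this page documents:

import org.hashids.Hashids

val hashids = Hashids("this is my salt")

// several longs in, one short hash out
val hash = hashids.encode(683L, 94108L, 123L)

// decoding with the same salt returns the original sequence
val numbers: Seq[Long] = hashids.decode(hash)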
https://index.scala-lang.org/ancane/hashids.scala/hashids-scala/1.4?target=_2.13
CC-MAIN-2021-10
en
refinedweb
- Type: Improvement
- Status: Open
- Priority: Major
- Resolution: Unresolved
- Affects Version/s: 2.0.11
- Component/s: Core Interceptors
- Labels: None

Currently, there doesn't seem to be a method of validating an indexed property (i.e. a collection) directly or indirectly within an action. For instance, given an action OrderAction with a property order of type Order, where order contains a collection trades of type Trade, the following is not possible:

@Validation
public class OrderAction {
    private Order order = new Order();

    @Validations(requiredFields = {
        @RequiredFieldValidator(message = "", key = "errors.required", fieldName = "order.trades[].field1"),
        @RequiredFieldValidator(message = "", key = "errors.required", fieldName = "order.trades[].field2"),
    })
    public String execute() throws Exception {
        ....
    }

The closest thing to support for something like the above is using the VisitorFieldValidator validator, although it has the following disadvantages:

1. It must be placed on the target object (invasive and not context sensitive)
2. It does not work with an indirect collection (as in the example above)

Something like this should be possible (even the Struts 1 validator supports indexedListProperty).

On a related topic, it should be possible to do the following to display the error in the resulting view:

<s:fielderror><s:param>order.trades[2].field1</s:param></s:fielderror>
https://issues.apache.org/jira/browse/WW-2656
CC-MAIN-2021-10
en
refinedweb
Unix-style Fortune teller text display on LCD Dependencies: 4DGL-uLCD-SE SDFileSystem mbed main.cpp - Committer: - alexcrepory - Date: - 2015-03-09 - Revision: - 1:4d5e6b8edd00 - Parent: - 0:672a66c015ca - Child: - 2:7507d0c0e509 File content as of revision 1:4d5e6b8edd00: #include "mbed.h" #include "SDFileSystem.h" #include "uLCD_4DGL.h" SDFileSystem sd(p11, p12, p13, p14, "sd"); //sd card DigitalIn pb(p21); //pushbutton uLCD_4DGL uLCD(p9,p10,p8); // serial tx, serial rx, reset pin int main() { char buffer[300]; //buffer to store the quotation float rando=0; //variable responsible to receive the random value uLCD.cls(); //clear the screen printf("Hello\n"); //check the connection with the computer mkdir("/sd/mydir", 0777); //create a folder called mydir //Create the file with the quotations FILE *fp = fopen("/sd/mydir/sdtest.txt", "w"); if(fp == NULL) { error("Could not open file for write\n"); } /* the fprintf calls that write the quotation lines to the file were lost from this listing */ fclose(fp); printf("Press the button\n"); //lets you know when the file is created while (true){ if (pb == 1){ //open the file to be read FILE *ft = fopen("/sd/mydir/sdtest.txt", "r+"); if(ft == NULL) { error("Could not open file for read\n"); } rando = rand()%10+1; //random value for(int i=0; i<rando; i++){ //copy to buffer the quotation of the random value fgets(buffer, 300, ft); } uLCD.cls(); uLCD.printf("%s\n", buffer); //prints the string on the screen printf("%s\n", buffer); fclose(ft); wait(0.2); } } }
https://os.mbed.com/users/alexcrepory/code/4180Lab4/file/4d5e6b8edd00/main.cpp/
CC-MAIN-2021-10
en
refinedweb
SDL does work and can be integrated into a React Native application. Please follow the React Native Getting Started guide for how to create a new React Native application if you need one. To install SDL into your React Native app, you will need to follow the React Native Native Module's guide to integrate the SDL library into your application using React Native's Native Modules feature. You must make sure you have Native Modules installed as a dependency in order to use 3rd party APIs in a React Native application. If this is not done your app will not work with SmartDeviceLink. Native API methods are not exposed to JavaScript automatically, this must be done manually by you. Then see the SDL Installation Guide for more information on installing SDL's native library. This guide is not meant to walk you through how to make a React Native app but help you integrate SDL into an existing application. We will show you a basic example of how to communicate between your app's JavaScript code and SDL's native Obj-C code. For more advanced features, please refer to the React Native documentation linked above. Native API methods are not exposed automatically to JavaScript. This means you must expose methods you wish to use from SDL to your React Native app. You must implement the RCTBridgeModule protocol into a bridge class (see below for an example). Please follow SmartDeviceLink Integration Basics guide for the basic setup of a native SDL ProxyManager class that your bridge code will communicate with. This is the necessary starting point in order to continue with this example. Also set up a simple UI with buttons and some text on the SDL side. To create a native module you must implement the RCTBridgeModule protocol. Update your ProxyManager to include RCTBridgeModule. #import <React/RCTBridgeModule.h> @interface ProxyManager : NSObject <RCTBridgeModule> <#Proxy Manager code#> @end An RCT_EXPORT_MODULE() macro must be added to the implementation file to expose the class to React Native. @implementation ProxyManager RCT_EXPORT_MODULE(); <#Proxy Manager code#> @end Before you move forward, you must add #import "React/RCTBridgeModule.h" to your Bridging Header. When creating a Swift application and importing Objective-C code, Xcode should ask if it should create this header file for you. You can create this file manually as well. You must include this bridging header for your React Native app to work. @objc(ProxyManager) class ProxyManager: NSObject { <#Proxy Manager Code#> } Next, to expose the above Swift class to React Native, you must create an Objective-C file and wrap the Swift class name in a RCT_EXTERN_MODULE in order to use the Swift class in a React Native app. #import "React/RCTBridgeModule.h" @interface RCT_EXTERN_MODULE(ProxyManager, NSObject) @end Inside the ProxyManger class, post a notification for a particular event you wish to execute. The 'Event Emitter' class, which you will see later in the documentation, will observe this event notification and will call the React Native listener that you will set up later in the documentation below. Inside the ProxyManager add a soft button to your SDL HMI. Inside the soft button handler, post the notification and pass along a reference to the sdlManager in order to update your React Native UI through the bridge. 
SDLSoftButtonObject *softButton = [[SDLSoftButtonObject alloc] initWithName:@"Button" state:[[SDLSoftButtonState alloc] initWithStateName:@"State 1" text:@"Data" artwork:nil] handler:^(SDLOnButtonPress * _Nullable buttonPress, SDLOnButtonEvent * _Nullable buttonEvent) { if (buttonPress == nil) { return; } NSDictionary *userInfo = @{@"sdlManager": self.sdlManager}; [[NSNotificationCenter defaultCenter] postNotificationName:<#Notification Name#> object:nil userInfo:managers]; }]; self.sdlManager.screenManager.softButtonObjects = @[softButton]; let softButton = SDLSoftButtonObject(name: "Button", state: SDLSoftButtonState(stateName: "State", text: "Data", artwork: nil), handler: { (buttonPress, butonEvent) in guard buttonPress == nil else { return } let userInfo = ["sdlManager": self.sdlManager] NotificationCenter.default.post(name: NSNotification.Name(rawValue: <#Notification Name#>), object: nil, userInfo: managers) }) self.sdlManager.screenManager.softButtonObjects = [softButton]; Create the class that will be the listener for the notification you created above. This class will be sending and receiving messages from your JavaScript code (React Native). The required supportedEvents method returns an array of supported event names. Sending an event name that is not included in the array will result in an error. An "event" is sending a message from native code to React Native code. #import <React/RCTEventEmitter.h> #import <React/RCTBridgeModule.h> #import <Foundation/Foundation.h> NS_ASSUME_NONNULL_BEGIN @interface SDLEventEmitter : RCTEventEmitter @end NS_ASSUME_NONNULL_END #import "SDLEventEmitter.h" #import "ProxyManager.h" #import <React/RCTConvert.h> #import <SmartDeviceLink/SmartDeviceLink.h> @implementation SDLEventEmitter RCT_EXPORT_MODULE() - (instancetype)init { self = [super init]; // Subscribe to event notifications sent from ProxyManager [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(getDoActionNotification:) name:<#Notification Name#> object:nil]; return self; } // Required Method defining known action names - (NSArray<NSString *> *)supportedEvents { return @[@"DoAction"]; } // Run this code when the subscribed event notification is received - (void)getDoActionNotification:(NSNotification *)notification { if(self.sdlManager == nil) { self.sdlManager = notification.userInfo[@"sdlManager"]; } // Send the event to your React Native code with a dictionary of information [self sendEventWithName:@"DoAction" body:@{@"type": @"actionType"}]; } @end @objc(SDLEventEmitter) class SDLEventEmitter: RCTEventEmitter { override init() { // Subscribe to event notifications sent from ProxyManager NotificationCenter.default.addObserver(self, selector: #selector(doAction(_:)), name: Notification.Name(rawValue: "<#Notification Name#>", object: nil) super.init() } // Required Method defining known action names override func supportedEvents() -> [String]! { return ["DoAction"] } // Run this code when the subscribed event notification is received @objc func doAction(_ notification: Notification) { if self.sdlManger == nil { self.sdlManager = notification.userInfo["sdlManager"] } // Send the event to your React Native code with a dictionary of information sendEvent(withName: "DoAction", body: ["type": "actionType"]) } } The above example will call into your JavaScript code with an event type DoAction. Inside your React Native (JavaScript) code, create a NativeEventEmitter object within your EventEmitter module and add a listener for the event. 
import { NativeEventEmitter, NativeModules } from 'react-native'; const { SDLEventEmitter } = NativeModules; const testEventEmitter = new NativeEventEmitter(SDLEventEmitter); // Build a listener to listen for events const testData = testEventEmitter.addListener( 'DoAction', () => SDLEventEmitter.eventCall({ "data": { "low": "77", "high": "87", "currentTemp": "82", "rain": "50%" } } ) ) The last step is to wrap any native code methods you wish to expose to your JavaScript code inside RCT_EXPORT_METHOD for Objective-C and RCT_EXTERN_METHOD for Swift. We've seen above how native code can send notifications to your JavaScript code, now we will see how your JavaScript code can send notifications into your native SmartDeviceLink code. Inside the SDLEventEmitter.m file add the following method: RCT_EXPORT_METHOD(eventCall:(NSDictionary *)dict) { [self.sdlManager.screenManager beginUpdates]; self.sdlManager.screenManager.textField1 = [NSString stringWithFormat:@"Low: %@ ºF", [RCTConvert NSString:dict[@"data"][@"low"]]]; self.sdlManager.screenManager.textField2 = [NSString stringWithFormat:@"High: %@ ºF", [RCTConvert NSString:dict[@"data"][@"high"]]]; [self.sdlManager.screenManager endUpdatesWithCompletionHandler:^(NSError * _Nullable error) { if (error != nil) { <#Error#> } else { <#Success#> } }]; } If you're making a React Native application and using native Swift code, you will need to create the Objective-C bridger for the SDLEventEmitter class you created above. Wrap the method(s) you wish to expose in a RCT_EXTERN_METHOD macro inside your wrapper class. This wrapper will allow the JavaScript code to talk with your native code. Make sure you add #import "React/RCTEventEmitter.h" to the apps bridging header. #import "React/RCTBridgeModule.h" #import "React/RCTEventEmitter.h" @interface RCT_EXTERN_MODULE(SDLEventEmitter, RCTEventEmitter) RCT_EXTERN_METHOD(eventCall:(eventCall: (id)dict)) @end Add the following method to SDLEventEmitter.swift: @objc func eventCall(_ dict: NSDictionary) { self.sdlManager.screenManager.beginUpdates() let data = dict["data"]! as! NSDictionary self.sdlManager.screenManager.textField1 = "Low: \(data["low"]!) °F")" self.sdlManager.screenManager.textField2 = "High: \(data["high"]!) °F")" self.sdlManager.screenManager.endUpdates() } By now you should have a basic React Native application that can send a message from the Native side to the React Native layer. If done correctly the application should update the SDL UI when clicking the soft button on the head unit. The above documentation walked you through how to send a message to React Native and receive a message containing data back.View on GitHub.com
https://smartdevicelink.com/en/guides/iOS/frequently-asked-questions/react-native/
CC-MAIN-2021-10
en
refinedweb
[ ] Andrew Gaul closed JCLOUDS-578. ------------------------------- Resolution: Won't Fix > Custom HTTP Headers in Rackspace SwiftObjects are ignored > ---------------------------------------------------------- > > Key: JCLOUDS-578 > URL: > Project: jclouds > Issue Type: Bug > Components: jclouds-blobstore > Affects Versions: 1.7.1 > Reporter: Daren Klamer > Priority: Minor > Labels: rackspace, swift > > Rackspace allows users to set custom HTTP Headers on their files that are being served. This is especially useful for web fonts, as some browsers refuse to use the fonts unless the following header is set (more info [here|]): > {{Access-Control-Allow-Origin: *}} > Upon getting a SwiftObject from my Rackspace CloudFilesClient object and calling {{getAllHeaders().put( "Access-Control-Allow-Origin", "*" )}} I could see that the header was being lost and not sent with the underlying HTTP request. > As a workaround here I patched [{{org.jclouds.openstack.swift.binders.BindSwiftObjectMetadataToRequest}}|]. Instead of simply binding the blob to the request and returning it (lines 86 + 87), I changed it to bind the blob, then added all headers from the blob to the request. > {noformat} > Blob blob = object2Blob.apply( object ); > request = mdBinder.bindToRequest( request, blob ); > return ( R ) request.toBuilder().replaceHeaders( blob.getAllHeaders() ).build(); > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
http://mail-archives.us.apache.org/mod_mbox/jclouds-notifications/201510.mbox/%3CJIRA.12717456.1401415566000.55753.1445831068023@Atlassian.JIRA%3E
CC-MAIN-2021-10
en
refinedweb
Set Up a Dev Environment Who This Guide Is For We recommend this guide for users who: Write or maintain a large number (10+) of custom scripts regularly Collaborate on a team of 3 developers or more Want to implement standard software development practices (and tools) alongside their scripts, for example: Version control (Git) Configuration as code (YAML) Automated testing (Spock) Continuous integration Want to keep their scripts in a centralized place that isn’t simply a script root directory on their server Want all the benefits of working with IntelliJ IDEA, for example: Efficient script-writing and less log hunting Quick access to the Atlassian APIs Code auto-completion, syntax checks, and other IDE features This is a quick-start guide for users who could benefit from connecting ScriptRunner to tools such as Git, IntelliJ, and Maven, without mastering them. The goal is to give useful tools to the average scripter without holding back more experienced developers. This guide is NOT a detailed guide on how to use tools like Git, IntelliJ, and Maven. If you are looking for those detailed instructions, you will have to consult each product’s own documentation. Requirements The software and hardware requirements for this guide follow: Software Hardware Your memory needs vary based on the Atlassian host application that you’re working with. Here are Atlassian’s recommendations: Jira - 2GB of RAM Confluence - 6GB of RAM Bitbucket - 3GB of RAM Experienced Atlassian plugin developers recommend 16GB of RAM on your development computer if possible, particularly as you run more and more things to test and debug your application (IntelliJ, the Atlassian application, multiple web browsers, etc.). The ScriptRunner Samples Project This project contains script plugins for the ScriptRunner Suite (Jira, Confluence, and Bitbucket Server). The following tasks lead you through working with the Scriptrunner Samples project to connect the tools. Import the Project into IntelliJ Once the project has been imported, IntelliJ downloads dependencies. This takes a while, but let it finish before moving on to the next task. Set a JDK for the Project Java Development Kit 8 should be configured as the SDK for the project. There’s a chance the SDK may already be configured, but if not, you can find instructions by clicking the links below. Configure a Global SDK. Configure your Project SDK. Configure your Module SDK. Each screenshot represents approximately how your configuration should look. Build and Run Jira/Confluence/Bitbucket with the Sample Plugin If you have fulfilled the above requirements and tasks, you should now be able to build the sample plugins. Inside a terminal, you can start each application with its respective command: Those commands should start a locally running instance of the application at:<application>and you should see something like this in your terminal: Login using the following credentials: Username: admin Password: admin Complete the application’s setup screens, if prompted. Test to make sure everything has started properly by following these steps: Navigate to the Script Console, and then switch to the File tab. Start typing the name of the sample script that’s installed in each plugin ( ScratchScript.groovy), and then click the suggestion. Click Run You should see the string returned by that script in the Results tab. 
Script Roots Information The base pom adds the following script roots: <module>/src/main/resources <module>/src/test/resources <module>/src/test/groovy Any scripts inside these directories can be run from the Script Console (or any other ScriptRunner extension point such as event listeners) without specifying a full path (like you did with the ScratchScript.groovy file above). Advanced IntelliJ IDEA Configurations Read on for some advanced configuration options that will make your scripting experience even better. Create Debug Configuration in IDEA A debugger can be helpful for complex scripts. Follow these steps to create a run configuration for starting a debugger: Select Run > Edit Configurations in the top menu bar. Press the + button, and then select Remote. Set a name for your debug configuration (e.g. "Jira"). Set the debug port to 5005. Click on the Logs tab, and then press the + button. Enter an Alias (e.g. Jira logs). Navigate to the location of your logs by using the Browse button. The log files should be at the following locations: Jira - scriptrunner-samples/jira/target/jira/home/log/atlassian-jira.log Bitbucket - scriptrunner-samples/bitbucket/target/bitbucket/home/log/atlassian-bitbucket.log Confluence - The log file is not be present in the target directory, but it is written to the Console tab in IntelliJ when you run confluence:debug. The targetdirectory is only present if the application has been previously built. If you’ve followed all of the tasks in this guide, you completed this in the Build and Run Jira/Confluence/Bitbucket with the Sample Plugin section. Click OK. The debug configuration should look approximately like this: - Click Apply, and then click OK. Since you already started the application in the previous steps, click Run > Debug Jira (or whichever debugger you created) to start debugging your application. That action starts the debugger, and a window like this one should show up in a lower area of IDEA: Debug a Groovy Script You can use the debugger you just created with a Groovy script. In IDEA, open ScratchScript.groovy. Each application has its own so make sure you open the correct one. Set a Breakpoint on the line the string begins on. Execute your script via the Script Console. The debugger should stop the execution of the script at the location of your breakpoint. Use the inspector to: Step through the code Mid-execution, look at the values of variables in your script Mid-execution, evaluate a Groovy code fragment Click the green run button to resume the execution. Connecting IntelliJ IDEA with the Atlassian Source Code One of the biggest benefits of writing your code in IDEA is that you can access the JavaDoc directly in the IDE instead of needing to go to the API’s documentation website. If you have purchased a license from an Atlassian product, you will have access to the Source Download page. Using Jira as an example, follow these steps to create a new Groovy script to see the connection working: Create a new file called UsersCount.groovy inside the src/main/resources/script root. Use the following code in the file: import com.atlassian.jira.component.ComponentAccessor import com.atlassian.jira.user.util.UserManager def userManager = ComponentAccessor.getUserManager() as UserManager def message = "My instance contains ${userManager.totalUserCount} user(s)." log.warn(message) Run UsersCount.groov using the Script Console. In the Logs tab, you can see a message stating "My instance contains X users." 
You should see the same message in the Jira Logs tab of your IntelliJ debugger. Browse the Java API The UserManager class can do many different things. Follow one of these steps to explore its capabilities. In IntelliJ, type userManager., and you will see the properties and methods available to be used. In, IntelliJ, navigate to the class file itself by holding down CTRL (or CMD in OSX) and clicking on the class name. Try this for the UserManagerclass located at the end of line 4. You will see something like this: The decompiler already gives you a lot of information about the class, but you can go even further and see the entire implementation of it. If you click the Choose Sources… link, and then select the folder that contains the application’s source (which you downloaded earlier), the source for the class is shown. Then, the IDE will provide better parameter names and inline JavaDoc. You may have to do this multiple times, once for each dependency ( jira-software, jira-servicedesketc). In each case, just point to the root of the downloaded sources, select everything, and let IDEA configure it. Advanced Plugin Configuration There are a number of settings in the plugin’s pom.xml file that you may want to change to suit your needs while writing your scripts. Customize Product Versions The project’s parent pom sets a lot of default values for the versions of the libraries to be used when writing your scripts. Those default values can be changed if you are going to write scripts for a different version of the Atlassian applications. You can change them by editing the <properties> section of your pom.xml. For example, to set the Jira version to 7.13.11, your properties block might look like: <properties> <jira.version>7.13.11</jira.version> </properties> Use <confluence.version> for Confluence, <bitbucket.version> for Bitbucket (in their respective pom.xml). Adding Additional Applications Adding additional applications is done inside the <applications> block. For example, if you are writing a plugin for Jira, you may require Jira Software, or Jira Service Management. Those two have been added to jira/pom.xml for you, but for others you will need to add them. To make sure they get installed, uncomment out the application(s) you would like. An example of this code is shown below: <applications> <!-- Include Jira Software features --> <!-- <application> <applicationKey>jira-software</applicationKey> <version>${jira.software.version}</version> </application> --> <!-- Include Jira Service Desk features --> <!-- <application> <applicationKey>jira-servicedesk</applicationKey> <version>${jira.servicedesk.version}</version> </application> --> </applications> If you are going to install Jira Service Management, make sure its version is compatible with your Jira version. Changing the Default HTTP Port The base pom sets all applications to run on port 8080 for consistency, rather than their defaults. That can be changed by adding a <httpPort> entry to the configuration block of your AMPS plugin (jira-maven-plugin, confluence-maven-plugin, bitbucket-maven-plugin). An example of this code is shown below: <plugin> <groupId>com.atlassian.maven.plugins</groupId> <artifactId>jira-maven-plugin</artifactId> <configuration> <!-- Other code here... --> <httpPort>2990</httpPort> <!-- Other code here... 
--> </configuration> </plugin> Go Further The next time you want to create and debug a new script, you will just need to add a new Groovy script to your IDEA project and run the <application>:debug Maven goal to test it. As your script library grows, you may find you need even more out of your local development environment. Development Lifecycle Do you find copying and pasting scripts between IntelliJ and the ScriptRunner web interface tedious? There are two better ways: the Script Editor and a script plugin. Using the Script Editor, you can edit and create files directly from ScriptRunner’s UI. Check out the ScriptRunner Script Editor documentation for more information. The whole environment you just set up is actually a full blown Atlassin plugin development. Take a look at the documentation on Creating a Script Plugin for more information on how to use that environment to develop and package your scripts for deployment in test and production instances. Execute Tests For automated testing in ScriptRunner in the project that you just set up, you can add tests under <module>/src/test/resources. You can then run them with our built-in script located under Administration > Built-In Scripts > Test Runner. See Test Your Code documentation for more information. Use an External Tool for Running Scripts Against Jira/Confluence/Bitbucket One handy tool for debugging is adding an external tool in IntelliJ to run an arbitrary Groovy script against ScriptRunner. We will use Jira as an example, but the same can be done for Confluence and Bitbucket. Navigate to IntelliJ Preferences > Tools > External Tools. Click +and fill in the following fields: Name: Run in Jira Program: curl Arguments: -u admin:admin --header "X-Atlassian-token: no-check" -X POST --data "scriptFile=$FilePathRelativeToSourcepath$" Make sure your application’s base URL and port are correct. Working directory: $ProjectFileDir$ The fields should look like this: Click OK Invoke the test by running Tools > External Tools > Run in Jira in the top menu bar of IntelliJ. This will POSTthe body of the script file you’re working with to your locally running Jira server and execute it, just as if you’d copy-and-pasted the code into the Script Console and clicked Run. Useful links See the following links for more information: Apache Groovy documentation See the Jira API Reference See the Bitbucket API Reference See the Atlassian Shared Application Layer API Reference See the Atlassian Answers questions tagged as ScriptRunner-related
https://docs.adaptavist.com/sr4c/latest/best-practices/write-code/set-up-a-dev-environment
CC-MAIN-2021-10
en
refinedweb
Encrypt and Decrypt Files using Python Want to share your content on python-bloggers? click here. In this article we will discuss how to encrypt and decrypt files using Python. Table of Contents - Introduction - Creating a key - Encrypting a file - Decrypting a file - Complete Object-Oriented Programming Example - Conclusion Introduction In the evolving world of data and information transfer, security of the file contents remains one of the greatest concerns for companies. Some information can be password protected (emails, logins), while other information being transferred via email or FTP is inefficient to protect with just a shared keyword. This is where file encryption plays a big role and provides the security and convenience sought by parties engaged in file transfers. So what is encryption? It is a process of converting information into some form of a code to hide its true content. The only way to access the file information then is to decrypt it. The process of encryption/decryption is called cryptography. Let's see how we can encrypt and decrypt some of our files using Python. We will follow symmetric encryption, which means using the same key to encrypt and decrypt the files. To continue following this tutorial we will need the following Python library: cryptography. If you don't have it installed, please open "Command Prompt" (on Windows) and install it using the following code: pip install cryptography And we will also need a sample file we will be working with. Below is the sample .csv file with some data on students' grades: Creating a Key In our example we will be using symmetric encryption provided by Fernet: from cryptography.fernet import Fernet Fernet is authenticated cryptography which doesn't allow reading and/or modifying the file without a "key". Now, let's create the key and save it in the same folder as our data file: key = Fernet.generate_key() with open('mykey.key', 'wb') as mykey: mykey.write(key) If you check the directory where your Python code is located, you should see the mykey.key file. You can open it with any text editor (in my case it shows up in the local directory because I use VS Code). The file should contain one line which is a string of some order of characters. For me it is "VlD8h2tEiJkQpKKnDNKnu8ya2fpIBMOo5oc7JKNasvk=". If you read the key back in from mykey.key and print it, you will see the following output: VlD8h2tEiJkQpKKnDNKnu8ya2fpIBMOo5oc7JKNasvk= The encryption key is now stored locally as the key variable. Encrypting a File Now that we have the file to encrypt and the encryption key, we will write some code to utilize these and produce the encrypted file: f = Fernet(key) with open('grades.csv', 'rb') as original_file: original = original_file.read() encrypted = f.encrypt(original) with open('enc_grades.csv', 'wb') as encrypted_file: encrypted_file.write(encrypted) Let's discuss what we did here: - We initialize the Fernet object and store it in a local variable f - Next, we read our original data (grades.csv file) into original - Then we encrypt the data using the Fernet object and store it as encrypted - And finally, we write it into a new .csv file called "enc_grades.csv" You can take a look at the encrypted file here: Decrypting a File After you encrypted the file and, for example, successfully transferred the file to another location, you will want to access it. Now, that data is in the encrypted format. The next step is to decrypt it back to the original content. The process we will follow now is the reverse of the encryption in the previous part.
Exactly the same process, but now we will go from encrypted file to decrypted file: f = Fernet(key) with open('enc_grades.csv', 'rb') as encrypted_file: encrypted = encrypted_file.read() decrypted = f.decrypt(encrypted) with open('dec_grades.csv', 'wb') as decrypted_file: decrypted_file.write(decrypted) Let's discuss what we did here: - We initialize the Fernet object and store it in a local variable f - Next, we read our encrypted data (enc_grades.csv file) into encrypted - Then we decrypt the data using the Fernet object and store it as decrypted - And finally, we write it into a new .csv file called "dec_grades.csv" You can take a look at the decrypted file here: Comparing "dec_grades.csv" with the original "grades.csv", you will see that in fact these two have identical contents. Our encryption/decryption process was successful. Complete Object-Oriented Programming Example This is a bonus part where I organized everything in a more structured format: class Encryptor(): def key_create(self): key = Fernet.generate_key() return key def key_write(self, key, key_name): with open(key_name, 'wb') as mykey: mykey.write(key) def key_load(self, key_name): with open(key_name, 'rb') as mykey: key = mykey.read() return key def file_encrypt(self, key, original_file, encrypted_file): f = Fernet(key) with open(original_file, 'rb') as file: original = file.read() encrypted = f.encrypt(original) with open(encrypted_file, 'wb') as file: file.write(encrypted) def file_decrypt(self, key, encrypted_file, decrypted_file): f = Fernet(key) with open(encrypted_file, 'rb') as file: encrypted = file.read() decrypted = f.decrypt(encrypted) with open(decrypted_file, 'wb') as file: file.write(decrypted) And this is an example of encryption/decryption using the above class: encryptor = Encryptor() mykey = encryptor.key_create() encryptor.key_write(mykey, 'mykey.key') loaded_key = encryptor.key_load('mykey.key') encryptor.file_encrypt(loaded_key, 'grades.csv', 'enc_grades.csv') encryptor.file_decrypt(loaded_key, 'enc_grades.csv', 'dec_grades.csv') Conclusion This article introduces basic symmetric file encryption and decryption using Python. We have discussed some parts of the cryptography library as well as created a full process example. Feel free to leave comments below if you have any questions or have suggestions for some edits and check out more of my Python Programming articles. The post Encrypt and Decrypt Files using Python appeared first on PyShark. Want to share your content on python-bloggers? click here.
https://python-bloggers.com/2020/09/encrypt-and-decrypt-files-using-python/
CC-MAIN-2021-10
en
refinedweb
read binary marching cubes file More... #include <vtkMCubesReader.h> read binary marching cubes file vtkMCubesReader is a source object that reads binary marching cubes files. (Marching cubes is an isosurfacing technique that generates many triangles.) The binary format is supported by W. Lorensen's marching cubes program (and the vtkSliceCubes object). The format repeats point coordinates, so this object will merge the points with a vtkLocator object. You can choose to supply the vtkLocator or use the default. Definition at line 64 of file vtkMCubesReader.h. Definition at line 67 of file vtkMCubesReader.h. Construct object with FlipNormals turned off and Normals set to true. Specify file name of marching cubes file. Set / get the file name of the marching cubes limits file. Specify a header size if one exists. The header is skipped and not used at this time. Specify whether to flip normals in opposite direction. Flipping ONLY changes the direction of the normal vector. Contrast this with flipping in vtkPolyDataNormals which flips both the normal and the cell point order. Specify whether to read normals. These methods should be used instead of the SwapBytes methods. They indicate the byte ordering of the file you are trying to read in. These methods will then either swap or not swap the bytes depending on the byte ordering of the machine it is being run on. For example, reading in a BigEndian file on a BigEndian machine will result in no swapping. Trying to read the same file on a LittleEndian machine will result in swapping. As a quick note most UNIX machines are BigEndian while PC's and VAX tend to be LittleEndian. So if the file you are reading in was generated on a VAX or PC, SetDataByteOrderToLittleEndian otherwise SetDataByteOrderToBigEndian. Turn on/off byte swapping. Set / get a spatial locator for merging points. By default, an instance of vtkMergePoints is used. Create default locator. Used to create one when none is specified. This is called by the superclass. This is the method you should override. Reimplemented from vtkPolyDataAlgorithm. Definition at line 175 of file vtkMCubesReader.h. Definition at line 176 of file vtkMCubesReader.h. Definition at line 177 of file vtkMCubesReader.h. Definition at line 178 of file vtkMCubesReader.h. Definition at line 179 of file vtkMCubesReader.h. Definition at line 180 of file vtkMCubesReader.h. Definition at line 181 of file vtkMCubesReader.h.
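Pulling together the methods described above, a rough C++ usage sketch follows (the file name and the downstream pipeline wiring are illustrative, not part of the class documentation):

#include <vtkMCubesReader.h>
#include <vtkPolyDataMapper.h>
#include <vtkActor.h>
#include <vtkNew.h>

int main()
{
  vtkNew<vtkMCubesReader> reader;
  reader->SetFileName("isosurface.tri");      // binary marching cubes file (name is made up)
  reader->SetDataByteOrderToLittleEndian();   // match the byte order of the machine that wrote the file
  reader->FlipNormalsOn();                    // optional: only reverses the normal vectors
  reader->Update();                           // read the file; repeated points are merged via the locator

  vtkNew<vtkPolyDataMapper> mapper;           // hand the resulting polydata to a standard VTK pipeline
  mapper->SetInputConnection(reader->GetOutputPort());
  vtkNew<vtkActor> actor;
  actor->SetMapper(mapper);
  return 0;
}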
https://vtk.org/doc/nightly/html/classvtkMCubesReader.html
CC-MAIN-2021-10
en
refinedweb
As programmers, we often come across a concept called type inference. To begin with, let me clarify that type inference is not something unique to Scala; there are many other languages like Haskell, Rust and C# that have this language feature. Going by the bookish definition, "Type inference refers to the automatic detection of the data type of an expression in a programming language". Speaking in layman terms, it means the language is intelligent enough to automatically deduce the type of an expression, e.g. String, Int, Decimal etc. Having learned what type inference is, the next inevitable question is "Why?" The sole purpose of having type inference is to help the programmer avoid verbose typing but still maintain the compile-time type safety of a statically typed language. So, speaking simply, type inference is the amalgamation of the best of two worlds, that is, static and dynamic typing. Having answered the "Why?", the next to follow is "What?" The type system is a language component that is responsible for type checking. Scala is a statically typed language, so there is always a defined set of types, and anything that doesn't fall inside that set is classified as an invalid type and an appropriate error is thrown at compile time. Another way of answering this is that computers are not intelligent enough to rectify human mistakes, and certain things are better handled by the compiler rather than relying on programmers to set them right. Tons of bugs are born due to these improper types. The question that follows is "How does it fit?" and "How does it make a difference?" The type system exists to ensure type safety, and the levels of strictness are what differentiate languages and runtimes. The ability to infer types automatically makes many programming tasks easier, leaving the programmer free to omit type annotations while still permitting type checking. The final question before we dive into the Scala type system details is "Can we classify languages on the basis of their type systems?" The answer is YES. But this simple yes will still make you feel dizzy because the range of classification types is far too broad. Let me run you through the types: - Dynamic type checking - Static type checking - Inferred vs Manifest - Nominal vs Structural - Dependent typing - Gradual typing - Latent typing - Sub-structural typing - Uniqueness typing - Strong and weak typing Even introducing each of them would be beyond the scope of this article, but feel free to explore them. The point of interest here lies in the classification category of Scala. Scala is classified as a statically typed language with type inference. There is a strong relationship between functional programming and type inference. Global Type Inference vs Local Type Inference In global type inference, the Hindley-Milner algorithm is often used to deduce the types. The Hindley-Milner algorithm is also referred to as global type inference. It reads the source code as a whole and deduces the types. Scala's type system works in a slightly different manner: Scala deduces types using local type inference. Scala follows a combination of sub-typing and local type inference. Let me elaborate the above with an example: def factorial(a: Int) = if (a <= 1) 1 else a * factorial(a - 1) The correct-looking code snippet will give you the following error (shown as a "Scala type error" screenshot in the original post). The above snippet computes the factorial value based on the number passed in.
If we notice the error, the compiler is not able to deduce the type of the recursive function. Surprisingly, the same (almost same) code can execute in Haskell without any errors. let factorial 0 = 1; factorial n = n * factorial (n - 1) The answer to it is Haskell's global type inference. In Scala, we have to annotate the types wherever local type inference does not help. In order for the above snippet to work in Scala, notice that the type Int is explicitly mentioned. def factorial(a: Int): Int = if (a <= 1) 1 else a * factorial(a - 1) The question that comes up is "Why does Scala use local type inference over global type inference?" For languages that are multi-paradigm, it is really hard to do global or Hindley-Milner style type inference since it restricts implementing OOP features such as inheritance and method overloading. Languages like Haskell still do it, but Scala has decided to take a different trade-off. Scala's Type System and Sub Typing Figure 1 (a diagram of the Scala type hierarchy, not reproduced here). A type system is made up of predefined components or types, and this forms the foundation of how they are inferred. If we start digging further into the Scala source code, we would find that it all points to the Any class. It is worth noting that types are not regular classes, although they seem to be. Sub-typing is not supported by the Hindley-Milner algorithm, but it is essential in a multi-paradigm world. This is also another reason why Scala does not use the HM algorithm. Let us try to understand subtyping with the help of an example: when we are constructing a heterogeneous list, sub-typing converts the lower type into a higher type wherever necessary. A simple example would be converting an Int to a Double. If it cannot fit, it goes to the top level, i.e. the Any type. All of these conversions can be translated to the type system hierarchy. scala> List(10, 'a') res0: List[Int] = List(10, 97) scala> List(20.2,10) res1: List[Double] = List(20.2, 10.0) scala> List("Hello", 10, true) res2: List[Any] = List(Hello, 10, true) After having an understanding of the Scala type system and subtyping, let us understand when to use them and, most importantly, when not to use them. It is good to use type inference when it saves programmer time and also where type information does not really matter. Situations could be inside of a function or a loop where the information about types is obvious. And one should definitely avoid using it when type information is important, i.e. it should not leave the programmer who reads the code guessing about types. And with guessing come mistakes, with mistakes comes bad code, and with bad code comes frustration; with frustration comes the end of all. During the course of the blog, we have aptly discussed that a type system is a syntactic method for automatically checking the absence of certain erroneous behaviours by classifying program phrases according to the kinds of values they compute. Happy Reading!
https://blog.knoldus.com/2018/04/09/back2basics-introduction-to-scala-type-system/
CC-MAIN-2018-30
en
refinedweb
How: Application workflow is pretty simple. - You enter search request. At screenshot you can see that search request was i7 skylake - Press at button Request - Application send request to google. - Application parses respond from the first page from google and shows urls and titles of urls. The program itself can be downloaded from here Below I also provide hear of that program: Snippet using System; using System.IO; using System.Net; using System.Text; using System.Windows.Forms; using NScrape; using HtmlAgilityPack; using System.Text.RegularExpressions; namespace WebRequesting { public partial class Form1 : Form { HtmlAgilityPack.HtmlDocument htmlSnippet = new HtmlAgilityPack.HtmlDocument(); public Form1() { InitializeComponent(); } private void btn1_Click(object sender, EventArgs e) { lstTitles.Items.Clear(); lstUrls.Items.Clear(); StringBuilder bufferForHtml = new StringBuilder(); byte[] encodedBytes = new byte[8192]; var urlForSearch = "" + txtSearch.Text.Trim(); var request = (HttpWebRequest)System.Net.WebRequest.Create(urlForSearch); var response = (HttpWebResponse)request.GetResponse(); using (Stream responseFromGoogle = response.GetResponseStream()) { var enc = response.GetEncoding(); int count = 0; do { count = responseFromGoogle.Read(encodedBytes, 0, encodedBytes.Length); if (count != 0) { var tempString = enc.GetString(encodedBytes, 0, count); bufferForHtml.Append(tempString); } } while (count > 0); } string sbb = bufferForHtml.ToString(); var processedHtml = new HtmlAgilityPack.HtmlDocument { OptionOutputAsXml = true }; processedHtml.LoadHtml(sbb); var doc = processedHtml.DocumentNode; foreach (var link in doc.SelectNodes("//a[@href]")) { string hrefValue = link.GetAttributeValue("href", string.Empty); if (!hrefValue.ToUpper().Contains("GOOGLE") && hrefValue.Contains("/url?q=") && hrefValue.ToUpper().Contains("HTTP")) { int index = hrefValue.IndexOf("&"); if (index > 0) { hrefValue = hrefValue.Substring(0, index); lstTitles.Items.Add(hrefValue.Replace("/url?q=", string.Empty)); string output = Regex.Replace(link.InnerText, ""\\.?", string.Empty); lstUrls.Items.Add(output); } } } } } } If you like C#, you'll see, that program in general sends request, receives response, decodes result, and then parses url, and those, which follow certain criteria are added to listbox. Alex said That is a great code, however Google may block you for automated searches. How to overcome this limitation? And how would this code solve captcha? docotor said Yes, I understand that google can block and actually will block. In order to prevent blocking I as usually apply following options: 1. Use timer, in order to send requests not often. 2. With help of Webproxy class make connection via proxies.
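To illustrate the two mitigations mentioned in that reply, here is a rough C# sketch (the proxy address and the delay are made-up values, and this is not part of the original program):

// pace the automated searches and route them through a proxy
var request = (HttpWebRequest)System.Net.WebRequest.Create(urlForSearch);
request.Proxy = new System.Net.WebProxy("http://myproxy.example:8080"); // illustrative proxy address
// ... send the request and parse the response exactly as in the program above ...
System.Threading.Thread.Sleep(TimeSpan.FromSeconds(30)); // wait between requests so Google is less likely to block you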
http://blog.zaletskyy.com/how-to-parse-google-search-results
CC-MAIN-2018-30
en
refinedweb
itemFocusOutHandler with editable data grid selected item issueDazMMT Oct 8, 2010 5:37 AM Hi All, I’m relatively new to Flex and have an issue that is causing a lot of frustration. I’m sure that the solution is string me in the face but I just can’t see the wood for the trees. OK, I’m trying to achieve a data grid that is populated with data based on a one-to-many relationship between people and courses with a join to show people that do not have course assigned and then assign these by date if required. This is then action by itemFocusOutHandler where the database is then updated with the data grid selected item. The issue: When you move away from the updated cell the event is trigger and the database is updated with the new selected row and not the one moved from. I was expecting the updated to trigger before the selected item is change but this does not seem to be the case. Example Course added to John on the 24th On tab away to Jason the selected item changes and Jason is added to the database with no entry. Alsom If I then go back to John from Jason the database will be updated with the data for John Database view: Where Person equals Any help you are willing to provide will be very much appreciated. And the code (I have tried to simplify this where possible): <?xml version="1.0" encoding="utf-8"?> <s:Application xmlns:fx="" xmlns:s="library://ns.adobe.com/flex/spark" xmlns:mx="library://ns.adobe.com/flex/mx" minWidth="955" minHeight="600" xmlns: <fx:Script> <![CDATA[ import mx.controls.DateField; import mx.controls.Text; import mx.data.ChangeObject; import mx.events.CalendarLayoutChangeEvent; import mx.events.CollectionEvent; import mx.events.DataGridEvent; import mx.events.FlexEvent; import mx.events.ListEvent; protected function dC1_changeHandler(event:CalendarLayoutChangeEvent):void { var dforQ:String = DateField.dateToString(dC1.selectedDate, "DD/MM/YYYY" ); var SD189:String = dforQ; gateCbyDResult.token = csr.gateCbyD(SD189); } protected function dataGrid_itemFocusOutHandler(event:DataGridEvent):void { var dateSQLDate:String = DateField.dateToString(dC1.selectedDate, "DD/MM/YYYY" ); var entrydate:String = dateSQLDate; saveNewResult.token = csr.saveNew(dataGrid.selectedItem.course,entrydate,dataGrid.selectedItem.empid); } ]]> </fx:Script> <fx:Declarations> <csr:Csr <s:CallResponder <s:CallResponder </fx:Declarations> <mx:DataGrid <mx:columns> <mx:DataGridColumn <mx:DataGridColumn <mx:DataGridColumn </mx:columns> </mx:DataGrid> <mx:DateChooser </s:Application> Thanks again... 1. Re: itemFocusOutHandler with editable data grid selected item issueFlex harUI Oct 8, 2010 11:33 AM (in response to DazMMT) ItemEditEnd event? 2. Re: itemFocusOutHandler with editable data grid selected item issueDazMMT Oct 9, 2010 2:09 AM (in response to Flex harUI) Yes. see line - itemEditEnd="dataGrid_itemFocusOutHandler(event)" Thanks, Darren. 3. Re: itemFocusOutHandler with editable data grid selected item issueFlex harUI Oct 9, 2010 7:07 PM (in response to DazMMT) If your subject says itemFocusOut handler, it implies you are using a focusOut event. I didn't read the code that carefully. Verify that the selectedItem hasn't already changed. Maybe it has. You should be able to use the information in the DataGridEvent to access the data item that was edited. 4. 
Re: itemFocusOutHandler with editable data grid selected item issueDazMMT Oct 12, 2010 10:00 AM (in response to Flex harUI) Hello all, “Flex harUI” was correct in that the focus had moved on from the item as I had initially suspected but thought that this was a error in the way that flex handles the code. I still do but I will save that for another discussion. After playing around with this I came up with many solutions. The one that I will adopt is to add a save button (bC1) and the loop through the dataGrid. I show below a simplified version of my code. It may help if you experience a similar issue. protected function bC1(event:MouseEvent):void { for (var i:int=0; i<gateCbyDResult.lastResult.length; i++) { gateCbyDResult.lastResult.getItemAt(i); var dateSQLDate:String = DateField.dateToString(dC1.selectedDate, "DD/MM/YYYY" ); var entrydate:String = dateSQLDate; var course:String = getRegByDayResult.lastResult.getItemAt(i).course; var empid:String = getRegByDayResult.lastResult.getItemAt(i).empid; saveNewRegResult.token = register.saveNewReg(course,entrydate,empid); } } To add additional functionality like edit and delete you would just need to add an if statement to check the data against the original. I hope it helps, Darren.
https://forums.adobe.com/thread/735650
CC-MAIN-2018-30
en
refinedweb
LINQ and ADO.NET LINQ to DataSet DataSet and DataTable store a lot of data in memory, but are limited in their query functionality. LINQ to DataSet provides rich query functionality for them. DataSet = in-memory relational representation of data. Primary disconnected data object used by many ASP.NET apps. DataSet contains DataTable and DataRelation objects. DataSet Schema Before working with data need to define schema. Can do automatically, programmatically or via XML schema definition, e.g. for code DataSet companyData = new DataSet("CompanyList"); DataTable company = companyData.Tables.Add("company"); company.Columns.Add("Id", typeof(Guid)); company.PrimaryKey = new DataColumn[] { company.Columns["Id"] }; DataTable employee = companyData.Tables.Add("employee"); employee.Columns.Add("Id", typeof(Guid)); employee.PrimaryKey = new DataColumn[] { employee.Columns["Id"] }; companyData.Relations.Add("Company_Employee", company.Columns["Id"], employee.Columns["Id"]); Populating DataSet Can write code to add rows: DataTable company = companyData.Tables["Company"]; company.Rows.Add(Guid.NewGuid(), "Northwind Traders"); Can use DataAdapter control to fill DataTable. The DbDataAdapter class (derived from DataAdapter) retrieves and updates DataTable from a data store. Provider-specific versions of DbDataAdapter exist for SQL Server, Oracle and XML. DbDataAdapter.SelectCommand defines how data is to be retrieved. DataAdapter.Fill moves data from data store to DataTable. DbConnection conn = new SqlConnection(pubs.ConnectionString); SqlCommand cmd = (SqlCommand) conn.CreateCommand(); cmd.CommandType = CommandType.Text; cmd.CommandText = "SELECT pub_id, pub_name FROM Publishers"; SqlDataAdapter da = new SqlDataAdapter(cmd); DataSet pubsDataSet = new DataSet("Pubs"); da.Fill(pubsDataSet, "publishers"); Saving Changes DataAdapter.Update method retrieves updates from DataTable and executes appropriate InsertCommand, UpdateCommand or DeleteCommand to send changes to data store on row-by-row basis. Update method examines RowState property for each row; if not Unchanged then changes sent to database. For Update method to work the select, insert, update and delete commands must be assigned to DbDataAdapter. Can create these commands using DbDataAdapter configuration wizard (which starts when DbDataAdapter dropped onto webpage). Can also populate DbDataAdapter commands via DbCommandBuilder which creates insert, update and delete commands from a select statement. Saving Changes In Batches Controlled by UpdateBatchSize property of DbDataAdapter. When 0 the DbDataAdapter will use the largest possible batch size. Otherwise set it to the number of changes to be sent to database in one batch (default is 1). (A short sketch appears at the end of this section, below the Typed Data Sets notes.) Typed Data Sets DataSet based on strongly typed objects. Allow programming against actual table and field schemas and not relying on strings, i.e. DataTable cTable = salesData.Tables["Company"] vs DataTable cTable = vendorData.Company; Typed DataSet inherits from DataSet. Provide property for each table. Do same for each field in table. Can use XSD to generate typed DataSet class. Can use DataSetEditor to graphically create and modify XSD file.
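Tying the batch-saving notes above to code, a rough sketch follows (it assumes the adapter's insert/update/delete commands have already been assigned as described under Saving Changes; the batch size is an arbitrary example value):

// assumes da.InsertCommand, da.UpdateCommand and da.DeleteCommand are already set up
da.UpdateBatchSize = 100;               // 0 = largest possible batch, 1 = row-by-row (the default)
da.Update(pubsDataSet, "publishers");   // changed rows are grouped into batches of up to 100 and sent together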
Querying With LINQ To DataSet DataTable exposed as IEnumerable collections => LINQ statements like other objects: DataTable employees = MyDataProvider.GetEmployeeData(); var query = from employee in employees.AsEnumerable() where employee.Field<Decimal>("salary") > 20 select employee; decimal avgSal = employees.AsEnumerable().Average(employee => employee.Field<Decimal>("Salary")); Use AsEnumerable() to provide something for LINQ to work against. Querying With Typed Data Sets Do not need to use Field method, instead query against Tablename.Fieldname construct: SqlDataAdapter adp = new SqlDataAdapter("select * from publishers;", pbsCnn.ConnectionString); adp.Fill(pubs, "publishers"); var pbsQuery = from p in pubs.publishers where p.country == "USA" select p; Cross-table LINQ to DataSets First load DataSet with two tables. Assign DataTables to variables and use these in LINQ query. Results of query pushed to anonymous type. Comparing Data Compare data in one or more tables using following operators: - Distinct - distinct DataRows in collection - Union - joins two DataTables together - Intersect - DataRows that appear in both DataTables - Except - DataRows that appear in the first DataTable but not in the second. LINQ to SQL Works directly with SQL Server database. Build object-relational (O/R) map that connects .NET classes to database elements. Once the map is built, code against database as if coding against objects. Note, Entity Framework provides similar functionality but is not limited to SQL Server. Part of ADO.NET so can use other ADO.NET components like transactions, existing objects written against ADO.NET, stored procedures, etc. Mapping Objects to Relational Data Visual Studio provides two automated code generators. Can also manually code own O/R map for full control over mapping. Using Designer Tool Use Visual Studio LINQ to SQL designer. Provides design surface on which to build classes. Access designer via Add New Item dialogue. Generates DBML (Database Markup Language) files - contain XML metadata, a layout file for the designer and a code-behind file which contains the objects used to access the database. Build map by dragging entities from Visual Studio Server Explorer onto design surface. Command Line Tool Use SqlMetal command line tool to generate DBML files and O/R code. Useful for large databases where not practical to use design surface. Simplest option is to point tool at database file / provide connection string. Via Code Editor Granular control, but a lot of work. Create class file and use the System.Data.Linq.Mapping namespace. Link class to table via Table attribute: [Table(Name="Author")] public class Author { Create properties on class with Column attribute applied: private string _authorId; [Column(IsPrimaryKey=true, Storage="_authorId", Name="au_id")] public string Id { get { return _authorId; } set { _authorId = value; } } Finally create class to expose tables. Inherit from DataContext object. Acts as go-between for database and your objects. public class PubsDb : DataContext { public Table<Author> Authors; Querying Data Much like any other LINQ query. PubsDatabaseModelDataContext pubs = new PubsDatabaseModelDataContext(); var authQuery = from auth in pubs.authors where auth.state == "CA" select auth; Inserting, Updating and Deleting Simply make changes to object instances and save. Once complete, call SubmitChanges method of DataContext to persist.
PubsDatabaseModelDataContext pubs = new PubsDatabaseModelDataContext(); author auth = new author(); auth.au_id = "000"; pubs.authors.InsertOnSubmit(auth); pubs.SubmitChanges(); LINQ to Entities Works in similar way to LINQ to SQL. Define model to represent application domain, then a map between model and actual data source. Creating Entity Model Use the ADO.NET Entity Data Model template from Add New Item dialogue. Generates EDMX file which is XML file that can be edited in designer together with code behind file containing actual entity objects. Visual Studio uses Data Model Wizard to create model (either from scratch or existing database). Can use Visual Studio tools to edit model and related mapping. Tools can also do following; generate database, validate model against data source, map to stored procedures, update model from database. LINQ to Entities Queries Generate new instance of entity model and write queries against data it represents. Entity model inherits from System.Data.Objects.ObjectContext using (PubsModel.pubsEntities pubs = new PubsModel.pubsEntities()) { var authQuery = from a in pubs.Authors where a.State == "CA" select a; }
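To round out these notes, saving changes in LINQ to Entities follows the same pattern as LINQ to SQL but goes through the ObjectContext; a rough sketch against the same generated Pubs model (the property values are illustrative, and the exact generated helper names depend on the model):

using (PubsModel.pubsEntities pubs = new PubsModel.pubsEntities())
{
    Author newAuthor = new Author();     // entity class generated from the model
    newAuthor.au_id = "000-00-0000";     // illustrative values only
    newAuthor.au_lname = "Smith";
    pubs.AddToAuthors(newAuthor);        // generated AddTo<EntitySet> helper on the ObjectContext
    pubs.SaveChanges();                  // pushes inserts/updates/deletes back to the database
}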
https://www.gsys.biz/resources/70-515/linq-to-ado
CC-MAIN-2018-30
en
refinedweb
Here's the source code: import java.applet.*; import java.awt.*; public class Ingredients2 extends Applet { private TextField t; private double price = 7.00; public void init() { this.setLayout(new GridLayout(11,1)); this.add( new Label("What do you want on your pizza?", Label.CENTER)); this.add(new Checkbox("Pepperoni")); this.add(new Checkbox("Olives")); this.add(new Checkbox("Onions")); this.add(new Checkbox("Sausage")); this.add(new Checkbox("Peppers")); this.add(new Checkbox("Extra Cheese")); this.add(new Checkbox("Ham")); this.add(new Checkbox("Pineapple")); this.add(new Checkbox("Anchovies")); this.t = new TextField("$" + String.valueOf(price)); // so people can't change the price of the pizza this.t.setEditable(false); this.add(this.t); } /* I've removed the code to handle events since it isn't relevant to this example */ } Grid layouts are very easy to use. This applet is just three lines different from the previous version, and one of those is a change in the name of the class that wasn't really necessary.
http://www.cafeaulait.org/course/week8/14.html
CC-MAIN-2020-29
en
refinedweb
Accessing and managing users¶ Users are an indispensible part of your web GIS. As the number of users grow, you can see value in automating your management tasks such as provisioning licenses, privileges, creating and removing user accounts etc. The gis module provides you with User and UserManager classes to respresent users as objects and help you accomplish the most common tasks. In this guide, we will learn about: - About your account - Properties of a Userobject - Searching for user accounts - Creating new user accounts - Deleting user accounts As you might have seen the pattern with ContentManager and Item objects, the UserManager object is a resource manager that gives you access to User objects. You access a UserManager object not by instantiating that class through its constructor, but by accessing the users property of your GIS object. This is the typical pattern of usage throughout the gis module. from arcgis.gis import GIS gis = GIS("portal url", "username", "password") You can access your user account by accessing me property as shown below: me = gis.users.me me Similar to Item objects, when using the Jupyter notebook IDE, you can visualize User objects in rich HTML representation with thumbnails and attribute information. me.access 'public' You can find out when an account was last active and determine if an account was abandoned and remove it if necessary. import time # convert Unix epoch time to local time created_time = time.localtime(me.created/1000) print("Created: {}/{}/{}".format(created_time[0], created_time[1], created_time[2])) last_accessed = time.localtime(me.lastLogin/1000) print("Last active: {}/{}/{}".format(last_accessed[0], last_accessed[1], last_accessed[2])) Created: 2016/11/11 Last active: 2016/12/12 Let us print some more information about this account print(me.description, " ", me.email, " ", me.firstName, " ", me.lastName, " ", me.fullName) print(me.level, " ", me.mfaEnabled, " ", me.provider, " ", me.userType) None amani@esri.com arcgis python arcgis python 2 False arcgis arcgisonly You can determine how much storage is being used by this account quota = me.storageQuota used = me.storageUsage pc_usage = round((used / quota)*100, 2) print("Usage: " + str(pc_usage) + "%") Usage: 0.12% You can determine the groups the user is a member of: user_groups = me.groups print("Member of " + str(len(user_groups)) + " groups") # groups are returned as a dictionary. Lets print the first dict as a sample user_groups[0] Member of 3 groups {'access': 'public', 'capabilities': [], 'created': 1479768353725, 'description': None, 'id': '90aa28e7a3a0467da2ec4d508d019775', 'isFav': False, 'isInvitationOnly': False, 'isReadOnly': False, 'isViewOnly': False, 'modified': 1479768353725, 'owner': 'arcgis_python_api', 'phone': None, 'provider': None, 'providerGroupName': None, 'snippet': None, 'sortField': 'avgRating', 'sortOrder': 'desc', 'tags': ['arcgis_python_api', 'automation', 'dino_tests'], 'thumbnail': None, 'title': 'group1', 'userMembership': {'applications': 0, 'memberType': 'owner', 'username': 'arcgis_python_api'}} Searching for user accounts¶ The search() method of UserManager class helps you search for users of the org. The query parameter in the search() method accepts standard ArcGIS REST API queries and behaves similar to the search method on ContentManager and GroupManager classes. To illustrate this better, let us search ArcGIS Online as there are many more users available there. 
# anonymous connection to ArcGIS Online ago_gis = GIS() # search the users whose email address ends with esri.com esri_public_accounts = ago_gis.users.search(query='email = @esri.com') len(esri_public_accounts) 95 Each element in the list returned is a User object that you can query. # lets filter out Esri curator accounts from this list curator_accounts = [acc for acc in esri_public_accounts if acc.username.startswith('Esri_Curator')] curator_accounts [<User username:Esri_Curator_Basemaps>, <User username:Esri_Curator_Boundaries>, <User username:Esri_Curator_Demographic>, <User username:Esri_Curator_EarthObs>, <User username:Esri_Curator_Historical>, <User username:Esri_Curator_Imagery>, <User username:Esri_Curator_Landscape>, <User username:Esri_Curator_Transport>, <User username:Esri_Curator_Urban>] curator_accounts[0] Once you know a user's username, you can access that object using the get() method. Let us access the Esri curator account for historical maps esri_hist_maps = ago_gis.users.get(username='Esri_Curator_Historical') esri_hist_maps Creating new user accounts¶ You can add new users to the org using either the signup() or the create() methods available on the UserManager class. The signup() method is limited in scope as it can be used only for adding built-in accounts to an ArcGIS Enterprise instance and not for an org that is hosted on ArcGIS Online. However, you can call the create() method. Note, you can disable self-signup in your ArcGIS Enterprise, which would render the signup() method unusable. You need admin privileges to call the create() method. This method is very powerful in an instance of ArcGIS Enterprise, as it allows you to create new accounts from either the arcgis built-in credential store or your enterprise's credential store. For an ArcGIS Online Organization, you can only create users that will use the built-in credential store. For the case of accounts from a built-in credential store, you would provide a password when the account is created. The user can change it at any time once they login. For accounts from your enterprise's credential store, you can ignore the password parameter and your users will authenticate through that credential store. In addition to role that can be set, a level can be used to allocate accounts based on the privileges that members need. The level determines which privileges are available to the member. A user_type determines the privileges that can be granted to a member. It affects the applications a user can use and actions they can perform in the organization. Learn more about the different values that user_type parameter can take here. Let us log in to an ArcGIS Enterprise and create some users: # let us create a built-in account with username: demo_user1 with org_user privilege demo_user1 = gis.users.create(username = 'demo_user1', password = '0286eb9ac01f', firstname = 'demo', lastname = 'user', email = 'python@esri.com', description = 'Demonstrating how to create users using ArcGIS Python API', role = 'org_user', level = 2, user_type = 'creatorUT', provider = 'arcgis') demo_user1 Note that we specified arcgis as the provider argument. If you were creating accounts from your enterprise credential store, you would specify this value as enterprise and use the idpUsername parameter to specify the username of the user in that credential store. To learn more about this configuration, refer to this help topic on setting up enterprise logins. Note, the role parameter was specified as org_user.
This takes us to the next section on Role and RoleManager objects. ArcGIS provides a security concept called roles which defines the privileges a user has within an organization. By default, your org has 3 roles - org_user, org_publisher and org_admin. You can refer to this topic on organizational roles to learn about these three roles and their privileges. In summary, a user role can be an active user of the org, create items, join groups and share content. A publisher role has all of user privileges and can create hosted content and perform analysis. An administrator role has all possible privileges. Depending on the size of your org and the security needs, you can customize this and create any number of roles with fine grained privileges. For reference on custom roles in an org, refer to this doc To know about the role of a User object, you can query the role property: demo_user1_role = demo_user1.role print(type(demo_user1_role)) print(demo_user1_role) <class 'str'> org_user Since this user was created with a built in role specified as a string, we get back a string with value org_user. Managing user roles¶ Let us create a new role that can only publish tile layers. This role should have none of admin privileges and can have only some of user privileges, namely creating new items and joining groups. Creating new roles¶ To create a new role, call the create() on RoleManager class. As with any resource manager, you should access it through the roles property on a UserManager object. You should access the UserManager object in turn through the users property of your GIS object. # create a tiles publisher role privilege_list = ['portal:publisher:publishTiles', 'portal:user:createItem', 'portal:user:joinGroup'] tiles_pub_role = gis.users.roles.create(name = 'tiles_publisher', description = 'User that can publish tile layers', privileges = privilege_list) tiles_pub_role <Role name: tiles_publisher, description: User that can publish tile layers> # inspect the privileges of this role tiles_pub_role.privileges ['portal:publisher:publishTiles', 'portal:user:createItem', 'portal:user:joinGroup'] Note: the privileges parameter was provided a list of strings specifying each individual privilege. Refer to the api ref doc on the privileges parameter to know about the finite list of strings you can use. tiles_pub_user = gis.users.create(username='tiles_publisher', password = 'b0cb0c9f63e', firstname = 'tiles', lastname = 'publisher', email = 'python@esri.com', description = 'custom role, can only publish tile layers', role = 'org_user') #org_user as thats the closest. tiles_pub_user Querying the privileges property of a User object returns a list of strings with fine grained privileges. When creating a Role object, you can pick and choose from this or refer to the api ref doc. 
tiles_pub_user.privileges ['features:user:edit', 'portal:user:createGroup', 'portal:user:createItem', 'portal:user:joinGroup', 'portal:user:joinNonOrgGroup', 'portal:user:shareGroupToOrg', 'portal:user:shareGroupToPublic', 'portal:user:shareToGroup', 'portal:user:shareToOrg', 'portal:user:shareToPublic', 'premium:user:demographics', 'premium:user:elevation', 'premium:user:geocode', 'premium:user:geoenrichment', 'premium:user:networkanalysis', 'premium:user:spatialanalysis'] Let us update this user's privileges tiles_pub_user.update_role(role = tiles_pub_role) True # query the privileges to confirm tiles_pub_user.privileges ['portal:publisher:publishTiles', 'portal:user:createItem', 'portal:user:joinGroup'] Querying the roleId property of a User returns you the custom roles' ID. You can use this to search for that role to know more details or create another user with the same role: tiles_pub_user.roleId 'rYzfnni7g5AvFsRz' searched_role = gis.users.roles.get_role(tiles_pub_user.roleId) searched_role.description 'User that can publish tile layers' Listing all the custom roles in an org¶ When migrating users from one org to another or even to duplicate an org on new infrastructure, you would go through the process of cloning the users and their roles. For this, you can get the list of roles using the all() method on the RolesManager resource object: gis.users.roles.all(max_roles=50) [<Role name: Viewer, description: Viewer>, <Role name: tiles_publisher, description: User that can publish tile layers>, <Role name: role1, description: role1>, <Role name: role1, description: role1>, <Role name: role1, description: role1>, <Role name: role1, description: role1>] Deleting user accounts¶ You can delete user accounts by calling the delete() method on a User object from an account that has administrator privileges. However, deleting raises important questions such as what happens to the content owned by that user? Further, ArcGIS does not allow you to delete users until you have dealt with that users' items and groups. Thus as an administrator, it becomes useful to list and view the content owned by any user in your org. Accessing user content¶ Once you have a User object, you can view the folders and items owned by the user by querying the folders property and calling the items() method. 
# let us access an account named publisher1 publisher1 = gis.users.get('publisher1') publisher1 #list all folders as dictionaries publisher1_folder_list = publisher1.folders publisher1_folder_list [{'created': 1479773023422, 'id': '318b56a2280d49d0894c0b7ac66e01fd', 'title': 'f1_english', 'username': 'publisher1'}, {'created': 1479773023854, 'id': 'b9680c43ee184e1faa5aa9990359ac6a', 'title': 'f2_敏感性增加', 'username': 'publisher1'}, {'created': 1479773024225, 'id': '1bafef1c4bf54a108d1b85f0f5ae7d3e', 'title': 'f3_Kompatibilität', 'username': 'publisher1'}] # list all items belonging to this user publisher1_item_list_rootfolder = publisher1.items() print("Total number of items in root folder: " + str(len(publisher1_item_list_rootfolder))) #access the first item for a sample publisher1_item_list_rootfolder[0] Total number of items in root folder: 33 # list all items in the first folder publisher1.items(folder = publisher1_folder_list[0]) [<Item title:"set1_major_cities" type:Locator Package owner:publisher1>, <Item title:"set1_shifting_opportunity" type:Image owner:publisher1>, <Item title:"set1_DSHS_Regions" type:KML owner:publisher1>, <Item title:"set1_Counties" type:KML owner:publisher1>, <Item title:"set1_Counties" type:KML owner:publisher1>, <Item title:"set1_shifting_opportunity" type:Image owner:publisher1>, <Item title:"set1_shifting_opportunity" type:Image owner:publisher1>] Thus using a GIS object created with an account that has admin privileges, you were able to query the contents of another user without knowing that user's password or logging in as that user. Reassigning user content¶ As an administrator, you have the privileges to list and view other users' content. When the time comes to delete a user account, you can filter these items and choose to preserve some of them and delete the rest. Let us delete the tiles_pub_user account we created earlier in this guide. # list the items owned by the user tiles_pub_user_items = tiles_pub_user.items() tiles_pub_user_items [<Item title:"ocean_tiles" type:Map Service owner:tiles_publisher>, <Item title:"ocean_tiles2" type:Map Service owner:tiles_publisher>, <Item title:"Transport_tiles" type:Map Service owner:tiles_publisher>, <Item title:"income_by_county" type:Map Service owner:tiles_publisher>, <Item title:"counties_by_population" type:Map Service owner:tiles_publisher>, <Item title:"ocean_tiles3" type:Map Service owner:tiles_publisher>] You can reassign specific items to another user by calling the reassign_to() method on that Item object. Let us reassign the tile layer named Transport_tiles to publisher1 account from earlier. We can get rid of the redundant ocean_tiles items and reassign the rest, to the account arcgis_python_api. Since this user does not have privilege to create groups, we do not have to worry about that. We can then delete this user safely. # reassign Transport_tiles to publisher1 transport_tiles_item = tiles_pub_user_items[2] transport_tiles_item # the reassign_to() method accepts user name as a string. We can also specify a destination folder name transport_tiles_item.reassign_to(target_owner = 'publisher1', target_folder= 'f1_english') True # now let us get rid of redundant ocean tiles items tiles_pub_user_items[1].delete() True tiles_pub_user_items[-1].delete() # an index of -1 in a list refers to the last item True Now we are left with a few more items which should all go to user arcgis_python_api. 
We can either call reassign_to() method of the User object or call the delete() method of the User object and pass this information to the reassign_to parameter. Let's do that: tiles_pub_user.delete(reassign_to='arcgis_python_api') True Thus, we have successfully deleted a user after taking care of that user's content.
https://developers.arcgis.com/python/guide/accessing-and-managing-users/
CC-MAIN-2020-29
en
refinedweb
Plugin Interface¶ DNF plugin can be any Python class fulfilling the following criteria: - it derives from dnf.Plugin, - it is made available in a Python module stored in one of the Conf.pluginpath, - provides its own name and __init__(). When DNF CLI runs it loads the plugins found in the paths during the CLI's initialization. - class dnf.Plugin¶ The base class all DNF plugins must derive from. name¶ The plugin must set this class variable to a string identifying the plugin. The string can only contain alphanumeric characters and underscores. - static read_config(conf)¶ Read plugin's configuration into a ConfigParser compatible instance. conf is a Conf instance used to look up the plugin configuration directory. __init__(base, cli)¶ The plugin must override this. Called immediately after all the plugins are loaded. base is an instance of dnf.Base. cli is an instance of dnf.cli.Cli but can also be None in case DNF is running without a CLI (e.g. from an extension). config()¶ This hook is called immediately after the CLI/extension is finished configuring DNF. The plugin can use this to tweak the global configuration or the repository configuration. resolved()¶ This hook is called immediately after the CLI has finished resolving a transaction. The plugin can use this to inspect the resolved but not yet executed Base.transaction. sack()¶ This hook is called immediately after Base.sack is initialized with data from all the enabled repos. pre_transaction()¶ This hook is called just before transaction execution. This means after a successful transaction test. RPMDB is locked during that time. register_command(command_class)¶ A class decorator for automatic command registration. Example of a plugin that provides a hello-world dnf command (the file must be placed in one of the pluginpath directories): import dnf @dnf.plugin.register_command class HelloWorldCommand(dnf.cli.Command): aliases = ('hello-world',) summary = 'The example command' def run(self): print('Hello world!') To run the command: $ dnf hello-world Hello world! You may want to see the comparison with yum plugin hook API.
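Putting the members above together, a minimal plugin skeleton might look like the following sketch (the plugin name 'example' and the logging calls are placeholders; only the dnf.Plugin members documented above are used):
import dnf
import logging

logger = logging.getLogger('dnf.plugin.example')


class Example(dnf.Plugin):

    name = 'example'  # required class variable, alphanumerics and underscores only

    def __init__(self, base, cli):
        # call the parent constructor, then keep references you need later
        super(Example, self).__init__(base, cli)
        self.base = base
        self.cli = cli

    def config(self):
        # runs after DNF is configured; a good place to read the plugin's own config
        parser = self.read_config(self.base.conf)
        logger.debug('example plugin config sections: %s', parser.sections())

    def sack(self):
        # runs once the sack is filled from all enabled repos
        logger.debug('example plugin: sack is ready')

    def resolved(self):
        # runs after depsolving; the transaction is resolved but not executed yet
        logger.debug('example plugin: transaction resolved')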
https://dnf.readthedocs.io/en/latest/api_plugins.html
CC-MAIN-2020-29
en
refinedweb
I'm happy to announce the 7th developer snapshot of the JBoss JCA project. The full release notes are here. ShrinkWrap support This release adds support for deploying ShrinkWrap archives through the embedded configuration. This will allow you to quickly build resource adapters for your test cases without having physical representation on disk. ShrinkWrap is very easy to use: import org.jboss.shrinkwrap.api.Archives; import org.jboss.shrinkwrap.api.spec.JavaArchive; import org.jboss.shrinkwrap.api.spec.ResourceAdapterArchive; try { // build and deploy the ResourceAdapterArchive here ... } catch (Throwable t) { log.error(t.getMessage(), t); fail(t.getMessage()); } finally { embedded.undeploy(raa); } } You can see a complete example of using ShrinkWrap in our org.jboss.jca.test.embedded.unit.ShrinkWrapTestCase test case. Deployment verifier This release also adds a deployment verifier that verifies specification requirements for the resource adapter classes. The verifier will output which requirements haven't been implemented. This will give resource adapter developers a possibility to verify their implementation against the specification to a higher degree. The verifier can of course be configured to serve your needs. We will continue to add new rules in future releases as well as provide an XML representation of the reports for tool processing. Feel free to drop by our forum to help out with these tasks. The Road Ahead Since EE6 has been released this month, we will focus on getting closer to a full implementation. For Those About to Rock, We Salute You ! [WebSite] [Download] [JIRA] [Forum]
https://in.relation.to/2009/12/30/jboss-jca-100-alpha-7-is-out/
CC-MAIN-2020-29
en
refinedweb
Today I want to show how we can use “Enterprise Architect” and the LieberLieber “Amuse” Plug-In to integrate “ASP.NET Mvc Web Application” into a flowchart. Here you can see the final solution. [mediaplayer src=’/wp-content/uploads/AmuseMvcExample.wmv’ autoLoad=1 autoPlay=1] To download the video click here. If you haven’t seen my previous posts please check out “LieberLieber AMUSE – Using Statemachines to Build Winforms flows” and “LieberLieber AMUSE – Using Statemachines and ASP.NET WebServices”. Ok, first of all have a look at the Asp.net Mvc Solution. As you can see I added an AuthenticationController (provides the controller methods), AuthenticationService (provides the authentication logic) and AuthenticationViewModel (provides the view model for the web form) to the solution. The authentication workflow looks similar like in my first example. First we want to show the login form (ValidateUser.aspx) which provides username and password. If the user enters the wrong username and password we want to show him the access denied form (AccessDenied.aspx). If the authentication succeeds we want to show the user the succeed form (LoginSuccessful.aspx). In “Enterprise Architect” I created 4 “Packages” and loaded the necessary source files to have a good overview of the Asp.net Mvc solution. 1. “AmuseMvc” which holds the “StateMachine”. 2. “Controllers” which contains the Asp.net Mvc controllers. 3. “Models” which contains the Asp.net Mvc view models and authentication service. 4. “Views” which contains the Asp.net Mvc views. If you want to use your “Asp.net Mvc Web Application” in Amuse you have to build it and load it under “External References” at the “State Machine” diagram. 1. “Add-Ins” – “Amuse” – “External Reference”. 2. Now select your Asp.net Mvc library. In our example “AmuseMvcExample.dll”. 3. Then import all three “Types” into “Enterprise Architect”: a. “AmuseMvcExample.Controllers. AuthenticationController” b. “AmuseMvcExample.Models. AuthenticationViewModel” c. “AmuseMvcExample.Models. AuthenticationService” But hold on, one important reference is missing. Maybe you remember my last post “LieberLieber AMUSE – Using Statemachines and ASP.NET WebServices” where we used the “ASP.NET Development Server” and had to use a batch file to start the “ASP.NET Web Development Server”? To avoid this and make it more confortable I created a “WebDevServer” class which provides the necessary functionality (for example: Start, Stop and RedirectToAction). You can find this in the “AmuseMvc.dll” So add also “AmuseMvc. WebDevServer” to the “StateMachine” diagram. After we loaded all the external references, your “Project Browser” should look like in the following screenshot: Ok now I declared the attributes. Name: “ValidUser” Type: “bool” Scope: “Public” Initial: “false” Name: “MyAuthenticationViewModel” Type: “AuthenticationViewModel” Scope: “Public” Name: “MyAuthenticationService” Type: “AuthenticationService” Scope: “Public” Initial: “new AuthenticationService()” Name: “MyAuthenticationController” Type: “AuthenticationController” Scope: “Public” Initial: “new AuthenticationController ()” Name: “MyWebDevServer” Type: “WebDevServer” Scope: “Public” Initial: “new WebDevServer(“C:\YourProjectPath“)” –> The WebDevServer needs the path to the “Asp.net Mvc Web Application” (without last backslash)! 
Now we create the operations: Name: “StartWebDevServer” Return Type: “void” Behavior – Initial Code: “MyWebDevServer.Start();” Name: “StopWebDevServer” Return Type: “void” Behavior – Initial Code: “MyWebDevServer.Stop();” Name: “OpenLoginForm” Return Type: “void” Behavior – Initial Code: “MyAuthenticationViewModel = (AuthenticationViewModel) MyWebDevServer.GetViewModelFromAction(“Authentication”, “ValidateUser”, typeof (AuthenticationViewModel));” –> This operation redirects to the action and gets the entered values back. Name: “ValidateUser” Return Type: “void” Behavior – Initial Code: “ValidUser = MyAuthenticationService.ValidateUser( MyAuthenticationViewModel.UserName,MyAuthenticationViewModel.Password);” –> This operation validates against our “AuthenticationService” and sets the “ValidUser” attribute. Name: “OpenAccessDeniedForm” Return Type: “void” Behavior – Initial Code: “MyWebDevServer.RedirectToAction(“Authentication”,”AccessDenied”);” –> This operation opens the access denied form and we don’t need any return value. Name: “OpenLoginSuccessfulForm” Return Type: “void” Behavior – Initial Code: “MyWebDevServer.RedirectToAction(“Authentication”,”LoginSuccessful”);” –> This operation opens the login successful form and we don’t need any return value. So now we can call the operations on the states: Before we can start the flowchart we have to do two more things. First import the namespaces in the “AmuseMvc” class. using AmuseMvcExample.Controllers; using AmuseMvcExample.Models; using AmuseMvc; And second in the “Asp.net Mvc Web Application”, we have to add the “AmuseMvcWeb.dll” which provides the “AmuseHandler”. You have to set up this handler in the web.config. Add following line to the <httpHandlers> section: <add verb="*" path="*.amusemvc" type="AmuseMvcWeb.AmuseHandler"/> And also don’t forget to add following line into the “Global.asax.cs” – “RegisterRoutes” method: public class MvcApplication : System.Web.HttpApplication { public static void RegisterRoutes(RouteCollection routes) { routes.IgnoreRoute("{resource}.axd/{*pathInfo}"); routes.IgnoreRoute("{resource}.amusemvc/{*pathInfo}"); The library also contains the “AmuseAttribute” for the controller classes. So if you want to use the controller in Amuse please mark the controller class with [Amuse]: In our example, on the “AuthenticationController”. [Amuse] public class AuthenticationController : Controller { private readonly IAuthenticationService _authenticationService; public AuthenticationController() { _authenticationService = new AuthenticationService(); } After that we are ready to go and you can start the flowchart! If you want to try this example please make sure that you change the path for the WebDevServer to the Asp.net Mvc Application first!! Attribute: MyWebDevServer = new WebDevServer(“YOURPATH”); To pass the validation use: Username:amuse Password:mvc Here is the source code. Greate Sample. When will you come to VIE?
https://blog.lieberlieber.com/2010/07/22/lieberlieber-amuse-using-statemachines-and-asp-net-mvc/?replytocom=447
CC-MAIN-2020-29
en
refinedweb
Roslyn Primer – Part I: Anatomy of a Compiler Anthony D. So, you’ve heard that VB (and C#) are open source now and you want to dive in and contribute. If you haven’t spent your life building compilers, you probably don’t know where to start. No worries, I’ll walk you through it. This post is the first of a series of blog posts focused on the Roslyn codebase. They’re intended as a primer for prototyping language features proposed on the VB Language Design repo; and contributing compiler and IDE features, and bug fixes on the Roslyn repo, both on GitHub. Despite the topic, these posts are written from the perspective of someone who’s never taken a course in compilers (I haven’t). Phases of compilation At a high level, here’s what happens: - Scanning (also called lexing) - Parsing - Semantic Analysis (also called Binding) - Lowering - Emit Some phases overlap and infringe on others a bit but that’s basically what the compiler is doing. Compiling is a lot like reading By analogy, when you read this blog post you look at a series of characters. You decide that some runs of letters form words, some is punctuation, some is whitespace. That’s what the scanner does. Then you decide that some punctuation groups things into a parenthetical, or a quotation, or terminates a sentence. Some dots are decimal points in numbers or abbreviations or initialisms. That’s what the parser does. Then you import your massive vocabulary of what words mean and look at all the words and decide what those words refer to and in combination what the sentences mean. Occasionally, you find a word with multiple meanings (overloaded terms) and you look at some amount of context to decide which of the multiple meanings is intended (like overload resolution). All of that assignment of meaning is semantic analysis. Lowering and emit don’t really have natural language equivalents other than perhaps translating from one language to another (think of it like translating an article from modern English to simplified English to another very primitive language). But you’re way smarter than a compiler Of course, you don’t do all of this one phase at a time. You don’t read a sentence in three passes because you can usually pick out words and sentences and their meaning all at once. But the compiler isn’t as smart as a human, so it does these things in phases to keep the problems simple. Every now and then, I get a bug report where someone says, “the compiler decided I meant that but obviously I meant this other thing because that doesn’t make any sense”. The compiler doesn’t know something doesn’t make sense until phase 3. And once it knows that, it can’t go back to phase 1 or 2 to correct itself (unlike you and me). “Compiling” HelloWorld Let’s go back to programming languages and look at what the compiler does to compile a simple program. The simple program just consists of the statement Call Console.WriteLine(“Hello, World!”) Scanning The Scanner runs over all the text in the files and breaks down everything into tokens: - Keyword – Call - Identifier – Console - Dot - Identifier – WriteLine - Left Parenthesis - String – “Hello, World!” - Right Parenthesis These tokens are just like words and punctuation in natural languages. Whitespace isn’t usually important since it just separates tokens. But in VB, some whitespaces, like newlines, are significant and interpreted as an “EndOfStatement” token. Parsing The Parser then looks at the list of tokens and sees how those tokens go together: - Parse a statement. - Look at the first token. 
Found a Call keyword. That starts a Call statement. Parse a Call statement. - A Call statement starts with the Call keyword and then an expression. Parse an expression. - Look at the next token. Found an identifier “Console”. That’s a name expression. - This might be part of a bigger expression. Look for things that could go after an identifier to make an even bigger expression. - Found a dot. An identifier followed by a dot is the beginning of a member access expression. Look for a name. Found another identifier “WriteLine”. This is a member access that says “Console.WriteLine”. - Still could be part of a bigger expression (maybe there are more dots after this?). Look for another continuing token. - Found a left parenthesis. You can’t just have a left parenthesis after an expression – this must be an invocation expression. - An invocation looks like an expression followed by an argument list. An argument list is a list of expressions (it’s more complicated than this but ignore that) separated by commas. Parse expressions and commas until you hit a right parenthesis. - Found a string literal expression. The argument list has one argument. The parse produces a tree that looks like this: - CallStatement - CallKeyword - InvocationExpression - MemberAccessExpression - IdentifierName - IdentifierToken - DotToken - IdentifierName - IdentifierToken - ArgumentList - OpenParenthesisToken - SimpleArgument - StringLiteralExpression - StringToken - CloseParenthesisToken - EndOfFileToken That’s a source file! Semantic Analysis To be clear, the compiler still has no idea (unlike you and I) that Console.WriteLine is a shared method on the Console class in the System namespace and that it has an overload that takes one string parameter and returns nothing. After all, anyone could make a class called Console. Maybe there isn’t a method called WriteLine. Maybe WriteLine is a type. That’s a dumb name for a type but the compiler doesn’t know that. If it is a type, then the program doesn’t make any sense. Piecing all of that together is semantic analysis. The Binder looks at the references provided to the compiler: the namespaces, types, type members in those references, the project-level imports, and the imports in your source file. And then it starts figuring out what’s what. - What does Console mean? - Is there something called Console in scope? - Checking the containing block: No. - Checking the containing method: No. - Checking the containing type: No. - Checking the containing type’s base types: No. - Checking the containing type’s containing type or namespace: No. - Checking the containing namespace’s containing namespaces: No. - Are there import statements? No. - Are there project-level imports? Yes. - Check each namespace imported one by one. - Found one and only one? Yes. - Console is a type. This must be a shared member. - Look for shared member named WriteLine in [mscorlib]System.Console type. - Found 19 of them. They’re all methods. - Bind all the argument expressions. - One argument is a string literal. String literal has content “Hello, World!” and type of [mscorlib]System.String. - Based on number and types of the arguments, it checks how many of the 19 methods could take one string argument. In VB, the answer is 14. 
But there are rules that decide which ones are better and it turns out that the one that actually takes a string is better than the one that takes object, or the one that takes a string but passing an empty ParamArray argument list, or performing an implicit narrowing conversion to any of the numeric types, Boolean, or the intrinsic conversion from string to Char or Char array. The compiler has determined that the program is an invocation of the shared void [mscorlib]System.Console::WriteLine(string) method. Passing the string literal “Hello, World”. Lowering What lowering does is take high-level language constructs that only exist in VB and translate them to lower-level constructs that the CLR/JIT compiler understands. Here are some examples of things that don’t exist at the Intermediate Language (IL) level: - Loops: IL only has goto—called “br” for branching—and conditional goto—br.true for branch when true and br.false for branch when false. - Variable scope: All variables are “in scope” for the entire method. - Using blocks: IL only has try/catch/finally so the compiler lowers a using block into a try/catch/finally block that initializes a variable and disposes of it in the finally block. - Lambda expressions: The compiler first translates lambdas into ordinary methods. If they capture any local variables, the compiler has to translate those variables into fields of an object behind the scenes. - Iterator methods: The compiler translates an Iterator method with Yield statements inside to a giant state machine, which is essentially just a giant Select Case that says, “last time you called me I was at step 1 so skip to step 2 this time”. Even though IL has a much simpler set of instructions than a higher-level language like VB everything you can write in a VB program is ultimately composed of simple instructions. In the same way that the greatest works of English literature still use just 26 letters. All of the simplicity, safety, and expressiveness of a higher-level language is what makes VB so powerful. This example of a simple call to a Shared method isn’t very complex. IL already understands method calls and string literals so there isn’t really any lowering to be done. Emit Emit is simple. Once the compiler digests your program into simple operations the CLR understands, it writes out these operations (usually to disk) into a binary file in a well-specified format. Wrapping up In this post, we looked at what a compiler does abstractly and how that process compares to how a human being might read a page of text. In the next post, we’ll dive into how the Visual Basic compiler specifically is organized.
https://devblogs.microsoft.com/vbteam/roslyn-primer-part-i-anatomy-of-a-compiler/
CC-MAIN-2020-29
en
refinedweb
Subject: Re: [boost] [BCP] Script for global renaming Boost Namespace From: Bjørn Roald (bjorn_at_[hidden]) Date: 2009-05-30 14:11:11 Hi, Interesting, I did something similar as a C++ patch to bcp some years ago. I think there is interest for such a feature, especially if you are willing to maintain it on regular basis. I had a quick look at your code, it locks like you do similar stuff to what I did. I have not tested it. If we do expect all possible users to have Python installed, it may be nice to use python for this. But how do you propose to integrate with bcp? There should be some discussions on the list if you are interested. Google "bcp replace_namespace". -- Bjørn Roald On Saturday 30 May 2009 10:55:25 am Artyom wrote: > Hello, > > Some updates: > ------------- > > 1. Added copyright - Boost License. > 2. Some more regression tests passed, some code cleanup > > Questions: > ---------- > > 1. How can I submit it as an addon to BCP utility? > 2. Have anybody tested it, I'd like to see if there any problems, > especially on Windows platform -- I hadn't tested it there. > > > Thanks > Artyom > > P.S.: The source is there > > --- On Mon, 5/25/09, Artyom <artyomtnk_at_[hidden]> wrote: > > From: Artyom <artyomtnk_at_[hidden]> > > Subject: [BCP] Script for global renaming Boost Namespace > > To: boost_at_[hidden] > > Date: Monday, May 25, 2009, 11:14 AM > > Hello, > > > > Today Boost does not provide any backward binary > > compatibility. This > > makes big problems in shipping 3rd part libraries that > > depend on Boost > > because library user must use the same version of Boost as > > that 3rd > > part library was compiled with. > > > > The problem is become even more critical for ELF platforms > > (UNIXes) > > where all symbols are exported by default. > > > > I had written a small Python script that switches boost > > namespace > > to other, allowing 3rd part project include it without > > collisions > > with primary Boost namespace. > > > > The script passes over the source tree of boost and changes > > each > > include path from <boost/foo/bar.hpp> too > > <newnamespace/foo/bar.hpp> > > and renames all macros and identifiers from > > some_BOOST_something to > > some_NEWNAMESPACE_something and some_boost_something to > > some_newnamespace_something. > > > > It does not touch comments (copyright) and strings unless > > the string > > is in form "boost/.*" which is usually some reference for > > include. > > > > I've run this script on the 1.39 version of boost and > > successfully build full boost release and run some of > > regression tests > > like Boost.Asio, Boost.Regex, Boost.Function and others, > > > > The source code is available at: > > > > > > You run it as: > > > > ./rename /path/to/boost/source > > new_namespace_name > > > > It renames all macros and namespaces to new namespace, and > > renames > > main include directory to new_namespace_na,e. > > > > Few points: > > > > 1. It is only alpha version script, I just want to see > > feedback and > > proposals > > 2. I do not update build scripts. I assume that each > > library that > > tryes to import its own version of Boost > > would provide its own > > build system. > > 3. The build scripts should be updated differently because > > they have > > different syntax and grammar. The > > required changes are to > > fix different build defines > > > > Please, give feedback proposal. If there someone who is > > familiar > > with Boost.Build systems can actually help, this would be > > very good. 
> > > > Today there is a big problem with working with different > > versions of > > Boost. It should be solved. > > > > I think this script may be a valuable addition to > > Boost.BCP utility > > that allows extracting a subset of boost for integration in > > 3rd part > > tools and libraries. > > > > Thanks, > > Artyom > > > > > > > > _______________________________________________ > Unsubscribe & other changes: > Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk
https://lists.boost.org/Archives/boost/2009/05/152054.php
CC-MAIN-2020-29
en
refinedweb
Terminologies & fields of research - Burst detection: An unexpectedly large number of events occurring within some certain temporal or spatial region is called a burst, suggesting unusual behaviors or activities. - Time Series Regression: [ref] Time. - Time Series Classification: [ref] Time series classification deals with classifying the data points over the time based on its’ behavior. There can be data sets which behave in an abnormal manner when comparing with other data sets. Identifying unusual and anomalous time series is becoming increasingly common for organizations - Anomaly Detection: - A part in the same time series. - Finding one or more time series which are different from others. - Some abnormal points in the same time series. - Applied for both univariate and multivariate time series. Find the windows of time series Suppose we have data like in below, we wanna find the common length interval of all groups. # find the biggest gap df['date'].diff().max() # 4 biggest gaps df['date'].diff().sort_values().iloc[-5:] # starting of each window (the gap used to separate windows is '1D') w_starts = df.reset_index()[~(df['date'].diff() < pd.to_timedelta('1D'))].index # ending of each window w_ends = (w_starts[1:] - 1).append(pd.Index([df.shape[0]-1])) # count the number of windows len(w_starts) # the biggest/average window size (in points) (w_ends - w_starts).max() (w_ends - w_starts).values.mean() # the biggest window size (in time range) pd.Timedelta((df.iloc[w_ends]['date'] - df.iloc[w_starts]['date']).max(), unit='ns') If you wanna add a window column to the original dataframe, df_tmp = df.copy() w_idx = 0 for i in range(w_starts.shape[0]): df_tmp.loc[w_starts[i]:(w_ends[i]+1), 'window'] = w_idx w_idx += 1 df_tmp.window = df_tmp.window.astype(int) # convert dtype to int64 There are other cases need to be considered, The gaps are not regular If we choose the gaps (to determine the windows) too small, there are some windows have only 1 point like in this case. Find the gap’s threshold automatically, from sklearn.cluster import MeanShift def find_gap_auto(df): X = df['date'].diff().unique() X = X[~np.isnat(X)] # remove 'NaT' X.sort() X = X.reshape(-1,1) clustering = MeanShift().fit(X) labels = clustering.labels_ cluster_min = labels[0] gap = pd.to_timedelta((X[labels!=cluster_min].min() + X[labels==cluster_min].max())/2) return gap •Notes with this notation aren't good enough. They are being updated.
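As a quick usage sketch tying the two pieces above together (same df as before), the automatically estimated gap can replace the hard-coded '1D' threshold:
# estimate the gap threshold instead of hard-coding '1D'
gap = find_gap_auto(df)

# reuse it to find the window boundaries
w_starts = df.reset_index()[~(df['date'].diff() < gap)].index
w_ends = (w_starts[1:] - 1).append(pd.Index([df.shape[0] - 1]))
print(len(w_starts), "windows, gap threshold:", gap)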
https://dinhanhthi.com/time-series-tips
CC-MAIN-2020-29
en
refinedweb
: Other Open Source Projects Reading text in a table from word document Jaikar Tulluri Greenhorn Posts: 2 posted 10 years ago Hi, I am trying to read a word doc that contains a table using Java . The requirement is to create rows in a Database Table with the information extracted from the table in the word document. For example, my word doc has a table like: Heading Description Heading1 summary related to heading1 Heading2 summary related to heading2 ..... And my database contains a table named SUMMARY with the following columns: ID DateAdded FileName Heading1 Heading2 ..... (as many heading columns as the number of rows in my word document table) So I need to read the document using java, get the description text for each heading and then store that text as a BLOB in the respective table column. I am currently checking the possibility of using POI for this requirement. I wrote an example application to store entire word doc as a BLOB to mysql database an dread it back. But I am not sure how to read rows of the table one by one. Any help is greatly appreciated. Thanks, JaiKar Ulf Dittmer Rancher Posts: 43016 76 posted 10 years ago A while ago I dabbled around with reading tables from Word documents with POI. I'm no longer working on that, but attached is some code i wrote for that purposes; it should point you in the right direction. import java.io.*; import java.util.*; import org.apache.poi.hwpf.HWPFDocument; import org.apache.poi.hwpf.model.*; import org.apache.poi.hwpf.usermodel.*; import org.apache.poi.poifs.filesystem.POIFSFileSystem; public class TableTest { public static void main (String[] args) throws Exception { String fileName = "table.doc"; if (args.length > 0) { fileName = args[0]; } InputStream fis = new FileInputStream(fileName); POIFSFileSystem fs = new POIFSFileSystem(fis); HWPFDocument doc = new HWPFDocument(fs); Range range = doc.getRange(); for (int i=0; i<range.numParagraphs(); i++) { Paragraph par = range.getParagraph(i); System.out.println("paragraph "+(i+1)); System.out.println("is in table: "+par.isInTable()); System.out.println("is table row end: "+par.isTableRowEnd()); System.out.println(par.text()); } Paragraph tablePar = range.getParagraph(0); if (tablePar.isInTable()) { Table table = range.getTable(tablePar); for (int rowIdx=0; rowIdx<table.numRows(); rowIdx++) { TableRow row = table.getRow(rowIdx); System.out.println("row "+(rowIdx+1)+", is table header: "+row.isTableHeader()); for (int colIdx=0; colIdx<row.numCells(); colIdx++) { TableCell cell = row.getCell(colIdx); System.out.println("column "+(colIdx+1)+", text="+cell.getParagraph(0).text()); } } } } } Jaikar Tulluri Greenhorn Posts: 2 posted 10 years ago Hi Ulf, thank you very much for the response. I tested your example and its successfully retrieving text from table. I also wrote a sample java class that reads text from a Table in a word document. I was able to read the text in each cell of the table. But the problem I am facing is, if the cell has some text with bullets, while reading the cells I am not getting the bullets. the text() function of Cell class is returning only the plain text. So I was wondering if anyone ever tried reading the bullets along with the text using POI. I understand that POI has limited functionality. So I would like to know if it is possible or not to read bullets. 
If my cell has some info like below: when I say cell.text(), the result is: Thanks, Karuna Ulf Dittmer Rancher Posts: 43016 76 posted 10 years ago Paragraph extends Range - you could inspect all the character runs of the range and see where that gets you. HWPFDocument also has a getListTables method that may lead to something useful.
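To follow up on that suggestion, here is a rough sketch of what inspecting the character runs could look like, building on the earlier table example (this only dumps run text and bold state; it is a starting point for digging, not a complete bullet-detection solution):
// inside the loop over table cells from the earlier example
// (assumes the same imports plus org.apache.poi.hwpf.usermodel.CharacterRun)
Paragraph par = cell.getParagraph(0);
for (int r = 0; r < par.numCharacterRuns(); r++) {
    CharacterRun run = par.getCharacterRun(r);
    System.out.println("run " + r + ": '" + run.text() + "', bold=" + run.isBold());
}
// list/bullet numbering is stored separately from the cell text;
// doc.getListTables() is the place to start looking for it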
https://www.coderanch.com/t/473792/open-source/Reading-text-table-word-document
CC-MAIN-2020-29
en
refinedweb
24801/how-can-define-multidimensional-array-in-python-using-ctype Here's one quick-and-dirty method: >>> A = ((ctypes.c_float * 10) * 10) >>> a = A() >>> a[5][5] 0.0
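Building on that, a small usage sketch (the inner * 10 builds one row, the outer * 10 stacks ten rows into a 10x10 grid):
import ctypes

FloatRow = ctypes.c_float * 10   # one row of 10 floats
FloatGrid = FloatRow * 10        # 10 rows -> 10x10 grid

grid = FloatGrid()
grid[2][3] = 1.5                 # set a single element

# convert to plain Python lists if you need to inspect or print it
as_lists = [list(row) for row in grid]
print(as_lists[2][3])            # 1.5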
https://www.edureka.co/community/24801/how-can-define-multidimensional-array-in-python-using-ctype?show=24803
CC-MAIN-2020-29
en
refinedweb
The wrapper is not the problem, I have since discovered that this works in python 3.3 but not in 2.7. It is a problem with pywin32 I think that the wrapper is not translating the data correctly to a variant to send to the activex control. I've tried sending it data in different ways, each time I get the same error: Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Python27\lib\site-packages\win32com\gen_py\98E1A88B-EC78-4F8E- 9BC4-569EEBB08753x0x1x0.py", line 73, in WritePipe return self._oleobj_.InvokeTypes(5, LCID, 1, (19, 0), ((12, 1),),buffer) pywintypes.com_error: (-2147352567, 'Exception occurred.', (0, None, None, None, 0, -2147024809), None) This is how I've sent the data: Calling Writepipe with ctypes in python(2.7.5): import win32com.client import pythoncom from ctypes import * outByteBuffer1 = c_byte * 12 outByteBuffer = outByteBuffer1(0,0,0,0,9,85,206,172,6,0,0,0) errorCode = usbApp.WritePipe(outByteBuffer); I've also tried calling it normally: import win32com.client import pythoncom x = outByteBuffer1(0,0,0,0,9,85,206,172,6,0,0,0) errorCode = usbApp.WritePipe(x); Here is the python wrapper function definition def WritePipe(self, buffer=defaultNamedNotOptArg): 'method WritePipe' return self._oleobj_.InvokeTypes(5, LCID, 1, (19, 0), ((12, 1),),buffer) This is how you would call it in C#: byte[] outByteBuffer2 = new byte[] {0,0,0,0,9,85,206,6,0,0,0}; errorCode = device.WritePipe(outByteBuffer); Here is the C# wrapper: [DispId(5)] uint WritePipe(object buffer);
https://sourceforge.net/p/pywin32/bugs/656/
CC-MAIN-2017-22
en
refinedweb
Add or Edit Custom Control Dialog Box and Add or Edit User Control Dialog Box Applies To: Windows Server 2008 R2 Use the Add Custom Control and Edit Custom Control dialog boxes to specify or edit the tag prefix/namespace mapping for a custom control that will be used in multiple pages in an application. Use the Add User Control and Edit User Control dialog boxes to configure user controls, which are containers into which you can put markup and Web server controls. For more information about user and custom controls, see ASP.NET User Controls Overview. UI Element List See Also Other Resources Configuring Pages and Controls in IIS 7 (online)
https://technet.microsoft.com/en-us/library/cc770398.aspx
CC-MAIN-2017-22
en
refinedweb
I am trying to sort out a simple list of students mark with a simple java program however I am getting Exception in thread "main" java.lang.ClassCastException: Student cannot be cast to java.lang.Comparable public class Student { public String name; public int mark; public Student(String name, int mark){ this.name=name; this.mark=mark; } public int compareTo(Student o){ return this.mark-o.mark; } public String toString(){ String s = "Name: "+name+"\nMark: "+mark+"\n"; return s; } public static void main(String[] args) { Student Class[] = new Student[9]; Class[0] = new Student("Henry",100); Class[1] = new Student("Alex", 10); Class[2] = new Student("Danielle",100); Class[3] = new Student("Luke",10); Class[4] = new Student("Bob",59); Class[5] = new Student("Greg",76); Class[6] = new Student("Cass",43); Class[7] = new Student("Leg",12); Class[8] = new Student("Bobe",13); Arrays.sort(Class); for(int i = 0;i<Class.length;i++){ System.out.println(Class[i]); Your Student class must implement the Comparable interface in order to use Arrays#sort passing Student[] array. The fact that your class currently have a compareTo method doesn't mean it implements this interface, you have to declare this: public class Student implements Comparable<Student> { //class definition... } Make your Student class implement Comparable<Student>. The compareTo() method doesn't work on it's own while sorting. Also, Class doesn't look like a very good variable name. How about using students? Also, I see an issue in your compareTo method: public int compareTo(Student o){ return this.mark-o.mark; } Never compare on the result of subtraction of 2 integers, or longs. The result might overflow. Rather use Integer.compare(int, int) method. Also, get rid of public fields. Make them private, and provide public getters to access them.
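Putting both answers together, a corrected version of the class could look like this sketch (field names kept from the question; getters and Integer.compare added as suggested):
import java.util.Arrays;

public class Student implements Comparable<Student> {
    private final String name;
    private final int mark;

    public Student(String name, int mark) {
        this.name = name;
        this.mark = mark;
    }

    public String getName() { return name; }
    public int getMark()    { return mark; }

    @Override
    public int compareTo(Student o) {
        // avoids the overflow risk of this.mark - o.mark
        return Integer.compare(this.mark, o.mark);
    }

    @Override
    public String toString() {
        return "Name: " + name + "\nMark: " + mark + "\n";
    }

    public static void main(String[] args) {
        Student[] students = {
            new Student("Henry", 100), new Student("Alex", 10),
            new Student("Danielle", 100), new Student("Luke", 10)
        };
        Arrays.sort(students);          // works now: Student is Comparable
        for (Student s : students) {
            System.out.println(s);
        }
    }
}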
http://ebanshi.cc/questions/3808237/java-compareto-method-issues
CC-MAIN-2017-22
en
refinedweb
The ready-set-go call has just resonated through the hallways at Microsoft, and we are now officially working on the next release of AX. What will the next release of AX look like? What features will it contain? What architectural changes will we make? What tools should we support? These are some of the many questions we will be working on during the upcoming months while we are defining the scope for the next release. I work in the Developer and Partner Tools team. This means I get to influence decisions like: “Should MorphX move into Visual Studio?”, “How many layers should AX 6.0 have?”, “Should X++ support eventing?”, “Do we need an Entity concept in the AOT?”, “How do we make our unit test cases available to partners?” – just to name a few. These are the questions that can get me out of bed in the morning – for a lot of good reasons, but primarily because I know my job makes a big difference to a lot of people. If you want to join me shaping the future of AX at Microsoft Development Center Copenhagen, please visit: Any one aware of AX 6.0 architecture? Whether standard AX client will be available or evrything will be web based? The only thing I would like to see that I am not involved (I do enjoy our meeting with Microsoft Product Product Planners, Programe Managers & UE Team) in already is.. using all the power of Microsoft SQL 2000/2005/2008. * Being able to control all the index options from the AOS * Being able to write more complex X++ SQL * Being able to use the fine tuning tools from MS SQL with AX 1) Object Id’s should be replaced with namespaces. GUIDS are a thing of the past! Maybe a central namespace registration service for partners would help avoiding conflicts. 2) Please do something about the report designer and runtitme. It hasn’t been updated for ages! 3) One should be able to retreive related table fields without coding explicit joins. The runtime should derive the joins from the relationships between the tables. This is something all ORM frameworks do. 4) Integrate the editor with the debugger. 5) Start decoupling and revisiting your designs. Everything in AX is bound together with "super-glue"! You have overused inheritance and failed to adhere to most of the modern object oriented design principles and best practices. 6) You should stop writing COBOL-like data access code. Its killing performance! I took a look at implementation of the new packing slip invoicing feature. It’s full of loops, in-memory joins and lookups. You should really join at the database level and minimize the roundtrips between application and database server. Nice girl. Why do you call her 6.0?.. (no offence, just kidding 🙂 I guess, moving application objects to SQL database from file DB could be also a nice thing: performance, reliability, transactions, no need for file server, etc. Well. Reporting is one gr8 tool on which DAX has been missing out lots with each new version. It seems like nothing is being done to upgrade the existing reporting architecture…..would definitely like to see the ballistic designing skills of .Net being incorporated into pre-historic DAX Reports. Kudos to the future of ERPing!!!! Sturla’s recommendation to replace Id’s with GUIDs would also greatly simplify the use of source control (no more "team server" handing out object IDs!) and distributed development. Great Post! I would really be interested in learning about the more features which would be built into AX 6.0. 
Yes, Eventing is very much required where we can subscribe to events raised in .NET assemblies, And also Morphx (AOT integration) with Visual Studio IDE would be awesome. Thanks for the update and keep posting! What about dropping the Id’s (tables, fields, indexes,…) as we know them and use GUID instead? That would make it more easy for partners to develop and deploy Dynamics Ax solutions. The entity concept is something Dynamics Ax has been needing from day one. That is if you could use them as form data sources like we use tables today. That would make it possible to create good business logic layer between the tables and user interface. I sometimes see MorphX (and the tight bondage of forms and tables) as one of the strongest part of the system today, but also the biggest enemy. Regards, Sturla. With complete ignorance of AX internals, I write… I’d certainly like to see MorphX look and feel more like Visual Studio, but actually move it to VS? Not unless there is some good technical reason for doing so. I’d hate to see MorphX compromised just to shoe-horn it into VS. How many layers should AX 6.0 have? How about an undefined number of layers between the Microsoft layers and CUS, and the ability to order those layers as you choose? Should AX support eventing? YES! YES! YES! Coming from the .NET and COM world this is absolutely the thing I miss most. This would reduce overlayering considerably because you wouldn’t need to place your own code in existing methods. You could have your own class with handlers for events raised by forms, reports, tables, etc. You know what would be REALLY cool? If .NET apps using the Business Connector could handle events triggered on the AOS. I’d love to be able to write a Windows Service in .NET that would sit and watch for things like insert, update, and delete events on certain tables. twh PingBack from
https://blogs.msdn.microsoft.com/mfp/2008/04/15/i-am-working-on-ax6-0-what-are-you-working-on/
CC-MAIN-2017-22
en
refinedweb
I have a music related model with Artist and song Title fields, and I'd like to add a column that provides a link to search Amazon's digital music store using the artist and title from the given row of a django-tables2 table. Here is what I have, but I'm not sure how to add the Amazon column and pass the Artist and Title fields into the Amazon URL.

models.py:

    class Artist(models.Model):
        name = models.CharField(max_length=100)

    class Track(models.Model):
        artist = models.ForeignKey(Artist, blank=True, null=True,
                                   on_delete=models.SET_NULL, verbose_name="Artist")
        title = models.CharField(max_length=100, verbose_name="Title")

    class amazonColumn(tables.Column):
        def render(self, value):
            return mark_safe('{{artist}}-{{title}}', value)  # not sure how to pass artist and title records

    class TrackTable(tables.Table):
        amazon = amazonColumn()

        class Meta:
            model = Track
            attrs = {"class": "paleblue"}
            fields = ('artist', 'title', 'amazon')

I would use format_html. Furthermore, you'll need to add record as a parameter to the render function, which allows you to access the row's other attributes:

    class AmazonColumn(tables.Column):
        amazon_url = '<a href="{artist}-{title}">Amazon</a>'

        def render(self, record):
            return format_html(self.amazon_url, artist=record.artist.name, title=record.title)

You might have to set empty_values=() where you instantiate the AmazonColumn in your table.
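To spell out that last point: django-tables2 only calls a column's render() when the cell value is not considered empty, and a column that isn't backed by a model field has no value at all, so declaring empty_values=() forces rendering. A small sketch reusing the AmazonColumn from above (hypothetical module layout, not tested against a specific django-tables2 version):

    import django_tables2 as tables

    class TrackTable(tables.Table):
        # empty_values=() makes django-tables2 call AmazonColumn.render()
        # even though Track has no "amazon" attribute.
        amazon = AmazonColumn(empty_values=())

        class Meta:
            model = Track
            attrs = {"class": "paleblue"}
            fields = ('artist', 'title', 'amazon')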
https://codedump.io/share/7pCUMdaULXsE/1/django-tables2-external-link-generation-w-custom-parameters
CC-MAIN-2017-22
en
refinedweb
I want to display the most recent blog posts or their headlines from my BlogEngine blog on my website's homepage (which isn't built using BlogEngine.NET). This link: explains how to display content outside of WordPress on your website. What is the best way to display blog content on your website's homepage when using BlogEngine?

It depends on your other site. If it is ASP.NET, you might be able to directly reference BlogEngine and loop through the posts, similar to what is described in that article. If it is PHP or something else, you might need to expose your posts via a web service. As a last resort, you can always screen scrape your BE posts, which is ugly but works across any platform.

My website is .asp on IIS 6 with .NET support. I noticed that the syndication.axd output has all of the XML I need to display the most recent posts. Is there a way to read the syndication output? I would like to do something like the following (which obviously doesn't work):

    <%
    'Load XML
    set xml = Server.CreateObject("Microsoft.XMLDOM")
    xml.async = false
    xml.load(Server.MapPath("blogengine/syndication.axd"))

    'Load XSL
    set xsl = Server.CreateObject("Microsoft.XMLDOM")
    xsl.async = false
    xsl.load(Server.MapPath("blog_links.xsl"))

    'Transform file
    'Response.Write(xml.transformNode(xsl))
    %>

Thanks in advance.

There's already some code in BE that reads RSS feeds. One that comes to mind is the BlogRoll control (in the App_Code\Controls\Blogroll.cs file). And if you're using ASP.NET 3.5, it includes some new built-in namespaces to consume (and publish) RSS. Here's one article demonstrating the new capabilities.

Thanks for all of your help.
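For anyone taking the ASP.NET 3.5 route mentioned above, the built-in System.ServiceModel.Syndication classes can read the feed that syndication.axd produces. A rough sketch (the feed URL and the number of posts are placeholders, and error handling is omitted):

    using System;
    using System.Linq;
    using System.ServiceModel.Syndication;
    using System.Xml;

    public class RecentPosts
    {
        public static void PrintLatest()
        {
            // Load the BlogEngine RSS/Atom feed over HTTP and list the latest headlines.
            using (XmlReader reader = XmlReader.Create("http://example.com/blog/syndication.axd"))
            {
                SyndicationFeed feed = SyndicationFeed.Load(reader);
                foreach (SyndicationItem item in feed.Items.Take(5))
                {
                    Console.WriteLine("{0} - {1}", item.Title.Text, item.Links.First().Uri);
                }
            }
        }
    }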
http://blogengine.codeplex.com/discussions/72488
CC-MAIN-2017-22
en
refinedweb
Introduction to Big Data in HPC, Hadoop and HDFS Part Two Research Field Key Technologies Jülich Supercomputing Centre Supercomputing & Big Data Dr. Ing. Morris Riedel Adjunct Associated Professor, University of Iceland Jülich Supercomputing Centre, Germany Head of Research Group High Productivity Data Processing Cy-Tera/LinkSCEEM HPC Administrator Workshop, Nicosia, The Cyprus Institute, 19th January to 21st January 2015 2 Overall Course Outline 2/ 81 3 Overall Course Technologies for Large-scale distributed Big Data Management 3/ 81 4 Part Two 4/ 81 5 5/ 81 6 Emerging Big Data Analytics vs. Traditional Data Analysis Data Analytics are powerful techniques to work on large data Data Analysis is the in-depth interpretation of research data Both are complementary techniques for understanding datasets Data analytics may point to interesting events for data analysis Data Analysis supports the search for causality Describing exactly WHY something is happening Understanding causality is hard and time-consuming Searching it often leads us down the wrong paths Data Big Data Analytics focussed on correlation Not focussed on causality enough THAT it is happening Discover novel patterns/events and WHAT is happening more quickly Using correlations for invaluable insights often data speaks for itself 6/ 81 7 Google Flu Analytics Hype vs. (Scientific) Reality 2009 H1N1 Virus made headlines Nature paper from Google employees Explains how Google is able to predict flus Not only national scale, but down to regions even Possible via logged big data search queries [1] Jeremy Ginsburg et al., Detecting influenza epidemics using search engine query data, Nature 457, The Parable of Google Flu Large errors in flu prediction were avoidable and offer lessons for the use of big data (1) Transparency and Replicability impossible (2) Study the Algorithm since they keep changing (3) It's Not Just About Size of the Data ~1998-today [1] David Lazer, Ryan Kennedy, Gary King, and Alessandro Vespignani, The Parable of Google Flu: Traps in Big Data Analysis, Science Vol (343), 7/ 81 8 Modern Data Mining Applications Properties opportunity to exploit parallelism Require handling of immense amounts of data quickly Provide data that is extremely regular and can be independently processed Many modern data mining applications require computing on compute nodes (i.e. processors/cores) that operate independently from each other Independent means there is little or even no communication between tasks Examples from the Web Ranking of Web pages by importance (includes iterated matrix-vector multiplication) Searches in social networking sites (includes search in graph with hundreds of nodes/billions of edges) Major difference to typical HPC workload Not simulation of physical phenomena 8/ 81 9 Hiding the complexity of computing & data Many users of parallel data processing machines are not technically savvy and don't need to know system details Scientific domain-scientists (e.g. biology) need to focus on their science & data Scientists from statistics/machine-learning need to focus on their algorithms & data Non-technical users raise the following requirements for a data processing machinery : [3] Science Progress The data processing machinery needs to be easy to program The machinery must hide the complexities of computing (e.g. different networks, various operating systems, etc.)
It needs to take care of the complexities of parallelization (e.g. scheduling, task distribution, synchronization, etc.) 9/ 81 10 Data Processing Machinery Available Today Specialized Parallel Computers (aka Supercomputers) Interconnent between nodes is expensive (i.e. Infiniband/Myrinet) Interconnect not always needed in data analysis (independence in datasets) Programming is relatively difficult/specific, parallel programming is profession of its own Large compute clusters dominate data-intensive computing Offer cheaper parallelism (no expensive interconnect & switches between nodes) Compute nodes (i.e. processors/cores) interconnected with usual ethernet cables Provide large collections of commodity hardware (e.g. normal processors/cores) 10 / 81 11 Possible Failures during Operation Rule of thumb: The bigger the cluster is, the more frequent failures happen Reasons for failures Loss of a single node within a rack (e.g. hard disk crash, memory errors) Loss of an entire rack (e.g. network issues) Operating software errors/bugs Consequences of failures Long-running compute tasks (e.g. hours/days) need to be restarted Already written data may become inconsistent and needs to be removed Access to unique datasets maybe hindered or even unavailable 11 / 81 12 Two Key Requirements for Big Data Analytics Taking into account the ever-increasing amounts of big data Think: Big Data not always denoted by volume, there is velocity, variety, 1. Fault-tolerant and scalable data analytics processing approach Data analytics computations must be divided in small task for easy restart Restart of one data analytics task has no affect on other active tasks E.g. Hadoop implementation of the map-reduce paradigm A data analytics processing programming model is required that is easy to use and simple to program with fault-tolerance already within its design 2. Reliable scalable big data storage method Data is (almost always) accessible even if failures in nodes/disks occur Enable the access of large quantities of data with good performance E.g. Hadoop Distributed File System (HDFS) implementation A specialized distributed file system is required that assumes failures as default 12 / 81 13 Part Two Questions 13 / 81 14 14 / 81 15 Motivation: Increasing complexities in traditional HPC Different HPC Programming elements (barriers, mutexes, shared-/distributed memory, etc.) Task distribution issues (scheduling, synchronization, inter-process-communication, etc.) Complex heterogenous architectures (UMA, NUMA, hybrid, various network topologies, etc.) Data/Functional parallelism approaches (SMPD, MPMD, domain decomposition, ghosts/halo, etc. ) [4] Introduction to High Performance Computing for Scientists and Engineers [5] Parallel Computing Tutorial More recently, increasing complexity for scientists working with GPGPU solutions (e.g. CUDA, etc.) 15 / 81 16 Inspired by Traditional Model in Computer Science Break big tasks in many sub-tasks and aggregate/combine results Divide & Conquer Problem 1 partition (1) Partition the whole problem space (2) Combine the partly solutions of each partition to a whole solution of the problem Worker Worker Worker Result 2 combine P1 P2 P3 16 / 81 17 Origins of the Map-Reduce Programming Model Origin: Invented via the proprietary Google technology by Google technologists Drivers: Applications and data mining approaches around the Web Foundations go back to functional programming (e.g. 
LISP) [6] MapReduce: Simplified Dataset on Large Clusters, 2004 Established open source community Apache Hadoop in production (mostly business) [7] Apache Hadoop Open Source Implementation of the map-reduce programming model Based on Java programming language Broadly used also by commercial vendors within added-value software Foundation for many higher-level algorithms, frameworks, and approaches 17 / 81 18 Map-Reduce Programming Model Enables many common calculations easily on large-scale data Efficiently performed on computing clusters (but security critics exists) Offers system that is tolerant of hardware failures in computation Simple Programming Model Users need to provide two functions Map & Reduce with key/value pairs Tunings are possible with numerous configurations & combine operation Key to the understanding: The Map-Reduce Run-Time Three phases not just map-reduce Takes care or the partitioning of input data and the communication Manages parallel execution and performs sort/shuffle/grouping Coordinates/schedules all tasks that either run Map and Reduce tasks Handles faults/errors in execution and re-submit tasks Experience from Practice: Talk to your users what they want to do with map-reduce exactly algorithm implemented, developments? 18 / 81 19 Understanding Map-[Sort/Shuffle/Group]-Reduce done by the framework! Modified from [8] Mining of Massive Datasets 19 / 81 20 Key-Value Data Structures Two key functions to program by user: map and reduce Third phase sort/shuffle works with keys and sorts/groups them Input keys and values (k1,v1) are drawn from a different domain than the output keys and values (k2,v2) Intermediate keys and values (k2,v2) are from the same domain as the output keys and values map (k1,v1) list(k2,v2) reduce (k2,list(v2)) list(v2) 20 / 81 21 Key-Value Data Structures Programming Effort map (k1,v1) list(k2,v2) reduce (k2,list(v2)) list(v2) // counting words example map(string key, String value): // key: document name // value: document contents for each word w in value: EmitIntermediate(w, "1"); // the framework performs sort/shuffle // with the specified keys reduce(string key, Iterator values): // key: a word // values: a list of counts int result = 0; for each v in values: result += ParseInt(v); Emit(AsString(result)); Goal: Counting the number of each word appearing in a document (or text-stream more general) Key-Value pairs are implemented as Strings in this text-processing example for each function and as Iterator over a list Map (docname, doctext) list (wordkey, 1), Reduce (wordkey, list (1, )) list (numbercounted) [6] MapReduce: Simplified Dataset on Large Clusters, / 81 22 Map-Reduce Example: WordCount 22 / 81 23 Map-Reduce on FutureGrid/FutureSystems Transition Process FutureGrid/FutureSystems Affects straightforward use, but elements remain Contact if interested UoIceland Teaching Project Apply for an account Upload of SSH is necessary Close to real production environment Batch system (Torque) for scheduling myhadoop Torque Is a set of scripts that configure and instantiate Hadoop as a batch job myhadoop is currently installed on four different systems Alamo, Hotel, India, Sierra [17] FutureGrid/FutureSystems UoIceland Teaching Project [18] FutureGrid/FutureSystems Submit WordCount Example 23 / 81 24 Further Map-Reduce example: URL Access Very similar to the WordCount example is the following one: Count of URL Access Frequency ( How often people use a page ) Google (and other Web organizations) store billions of logs ( information ) Users 
independently click on pages or follow links nice parallelization Map function here processes logs of Web page requests (URL, 1) Reduce function adds togeter all values for same URL (URL, N times) [6] MapReduce: Simplified Dataset on Large Clusters Many examples and applications are oriented towards processing large quantities of Web data text Examples are typically not scientific datasets or contents of traditional business warehouse databases 24 / 81 25 Communication in Map-Reduce Algorithms Greatest cost in the communication of a map-reduce algorithm Algorithms often trade communication cost against degree of parallelism and use principle of data locality (use less network) Taking data locality into account Data locality means that network bandwidth is conserved by taking advantage of the approach that the input data (managed by DFS) is stored on (or very near, e.g. same network switch) the local disks of the machines that make up the computing clusters Modified from [6] MapReduce: Simplified Dataset on Large Clusters 25 / 81 26 Map-Reduce Optimizations Local Combiner The Map-Reduce programming model is based on Mapper, Combiner, Partitioner, and Reducer functionality supported by powerful shuffle/sort/aggregation of keys by the framework Mapper functionality is applied to input data and computes intermediate results in a distributed fashion Map Phase (local) Combiner functionality is applied in-memory to Map outputs and performs local aggregation of its results Partitioner determines to which reducer intermediate data is shuffled (cf. Computer Vision) Reduce Phase Reduce functionality is applied to intermediate input data from the Map-Phase and aggregates it for results modified from [14] Computer Vision using Map-Reduce 26 / 81 27 Not every problem can be solved with Map-Reduce Map-Reduce is not a solution to every parallelizable problem Only specific algorithms benefit from the map-reduce approach No communication-intensive parallel tasks (e.g. PDE solver) with MPI Applications that require often updates of existing datasets (writes) Implementations often have severe security limitations (distributed) Example: Amazon Online retail sales Requires a lot of updates on their product databases on each user action (a problem for the underlying file system optimization) Processes that involve little calculation but still change the database Employs thousands of computing nodes and offers them ( Amazon Cloud ) (maybe they using map-reduce for certain analytics: buying patterns) [8] Mining of Massive Datasets 27 / 81 28 Map-Reduce Limitation: Missing some form of State What if values depend on previously computed values? Map-Reduce runs map & reduce tasks in parallel and finishes Problematic when result of one Map-Reduce run is influencing another Map-Reduce run iteration state? 
ST sample data loaded from local file system according to partition file training samples are support vectors of former layer REDUCE REDUCE REDUCE REDUCE REDUCE Trained classifier MAP MAP MAP STOP Iterations with decrease of map & reduce tasks time [15] Study on Parallel SVM Based on MapReduce 28 / 81 29 The Need for Iterative Map-Reduce Map-Reduce runs map and reduce tasks and finishes Many parallel algorithms are iterative in nature Example from many application fields such as data clustering, dimension reduction, link analysis, or machine learning Iterative Map-Reduce enables algorithms with same Map-Reduce tasks ideas, but added is a loop to perform iterations until conditions are satisfied The transfer of states from one iteration to another iteration is specifically supported in this approach modified from [12] Z. Sun et al. 29 / 81 30 Iterative Map-Reduce with Twister MapReduce jobs are controlled by the Client node Via a multi-step process Configuration phase: The Client assigns Map-Reduce methods to the job prepares KeyValue pairs prepares static data for Map-Reduce tasks (through the partition file) if required Running phase: The Client between iterations receives results collected by the combine method exits gracefully when the job is done (check condition) Message communication between jobs Realized with message brokers, i.e. NaradaBrokering or ActiveMQ [13] Twister Web page 30 / 81 31 Overview: Iterative Map-Reduce with Twister ST = state over iterations! [13] Twister Web page 31 / 81 32 Distribute Small State with Distributed Cache Mappers need to read state from a single file (vs. data chunks) Example: Distributed spell-check application Every Mapper reads same copy of the dictionary before processing docs Dictionary ( state ) is small (~ MBs), but all nodes need to reach it Solution: Hadoop provides DistributedCache Optimal to contain small files needed for initialization (or shared libraries even) State (or input data= needed on all nodes of the cluster Simple use with Java DistributedCache Class Method AddCacheFile() add names of files which should be sent to all nodes on the system (need to be in HDFS) by the framework ST [16] Hadoop Distributed Cache = state of data per iteration or small data required by every node 32 / 81 33 Hadoop 2.x vs. Hadoop 1.x Releases Apache Hadoop 1.x had several limitations, but is still used a lot Apache Hadoop 2.x consisted of significant improvements New Scheduler Hadoop YARN with lots of configuation options (YARN = Yet Another Resource Negotiator: Map-reduce used beyond its idea) HDFS Improvements Use multiple independent Namenodes/Namespaces as federated system (addressing single point of failure ) Map-Reduce Improvements JobTracker functions are seperated into two new components (addressing single point of failure ) New ResourceManager manages the global assignment of compute resources to applications New ApplicationMaster manages the application scheduling and coordination [11] Hadoop 2.x different scheduling policies can be configured 33 / 81 34 Apache Hadoop Key Configuration Overview core-site.xml <configuration> <property> <name>fs.default.name</name> <value>hdfs://localhost:9000</value> </property> </configuration> hdfs-site.xml <configuration> <property> <name>dfs.replication</name> <value>1</value> </property> </configuration> mapred-site.xml <configuration> <property> <name>mapred.job.tracker</name> <value>hdfs://localhost:9001</value> </property> </configuration> NameNode JobTracker E.g. NameNode, Default status page: E.g. 
JobTracker, Default status page: core-site.xml contains core properties of Hadoop (fs.default.name is just one example) hdfs-site.xml contains properties directly related to HDFS (e.g. dfs.replication is just one example) mapred-site.xml contains properties related to the map-reduce programming environment 34 / 81 35 Hadoop Selected Administration & Configuration (1) Release Download Check Webpage Well maintained, often new versions [7] Apache Hadoop JobTracker, e.g. max. of map / reduce jobs per node 35 / 81 36 Hadoop Selected Administration & Configuration (2) conf/mapred-site.xml Version 2: JobTracker split : resource management and job scheduling/monitoring conf/yarn-site.xml Yarn for resource management 36 / 81 37 Hadoop Usage Examples 37 / 81 38 Apache Hadoop Architecture Elements NameNode (and Secondary NameNode) Service that has all information about the HDFS file system ( data nodes ) JobTracker (point of failure no secondary instance!) Service that farms out map-reduce tasks to specific nodes in the cluster TaskTracker (close to DataNodes, offering job slots to submit to) Entity in a node in the cluster that accepts/performs map-reduce tasks Standard Apache Hadoop Deployment (Data nodes & TaskTrackers) Secondary NameNode NameNode DataNode Part of the HDFS filesystem Responds to requests from the NameNode for filesystem operations compute nodes with data storage JobTracker 38 / 81 39 Reliability & Fault-Tolerance in Map-Reduce (1) 1. Users use the client application to submit jobs to the JobTracker 2. A JobTracker interacts with the NameNode to get data location 3. The JobTracker locates TaskTracker nodes with available slots at or near the data ( data locality principle ) 4. The JobTracker submits the work to the chosen TaskTracker nodes Map-Reduce Jobs Big Data required for job Standard Apache Hadoop Deployment (Data nodes & TaskTrackers) 4 TaskTracker 3 Secondary NameNode NameNode 2 1 compute nodes with data storage JobTracker 39 / 81 40 Reliability & Fault-Tolerance in Map-Reduce (2) 5. The TaskTracker nodes are monitored (ensure reliability ) Fault tolerance : If they do not submit heartbeat signals often enough, they are deemed to have failed & the work is given to different TaskTracker 6. The TaskTracker notifies the JobTracker when a task fails The JobTracker decides next action: it may resubmit the job elsewhere or it may mark that specific record as something to avoid The Jobtracker may may even blacklist the TaskTracker as unreliable Standard Apache Hadoop Deployment (Data nodes & TaskTrackers) TaskTracker 6 TaskTracker 6 Secondary NameNode NameNode 7. When the work is completed, the JobTracker updates its status 8. Client applications can poll the JobTracker for information 5 8 compute nodes with data storage JobTracker 7 40 / 81 41 Cluster Setups with Hadoop-On-Demand (HOD) Hadoop On Demand (HOD) is a specific Hadoop distribution for provisioning virtual Hadoop cluster deployments over a large physical cluster that is managed by a scheduler (i.e. Torque). When to use? 
A given physical cluster exists with nodes managed by scheduling system Semi-Automatic Deployment approach HOD provisions and maintains Hadoop Map-Reduce and HDFS instances through interaction with several HOD components on given physical nodes Performs cluster node allocation Starts Hadoop Map/Reduce and HDFS daemons on allocated nodes Makes it easy for administrators to quickly setup and use Hadoop Includes automatic configurations Generates configuration files for the Hadoop daemons and client 41 / 81 42 HOD Deployment on Different Cluster Node Types Submit nodes Users use the HOD client on these nodes to allocate cluster nodes and use the Hadoop client to submit Map-Reduce jobs [27] HOD Architecture Compute nodes The resource manager runs HOD components on these nodes to provision the Hadoop daemons that enable Map-Reduce jobs allocated compute nodes hod components 3 Resource Manager 2 Specific node The usage of HOD is optimized for users that do not want to know the low-level technical details 5 hod client hadoop client 1 compute nodes submit node 4 42 / 81 43 HOD Detailed Architecture Elements Basic System Architecture of HOD includes: A Resource Manager & Scheduling system (i.e. Torque) Hadoop Map/Reduce and HDFS daemons need to run Various HOD components (HOD RingMaster, HOD Rings) HOD RingMaster Starts as a process of the compute nodes (mother superior, in Torque) Uses a resource manager interface (pbsdsh, in Torque) Runs the HodRing as distributed tasks on the allocated compute nodes HOD Rings Communicate with the HOD RingMaster to get Hadoop commands (e.g. new map-reduce jobs) and run them accordingly Once the Hadoop commands are started they register with the RingMaster, giving information about the daemons of the HOD Rings Torque Map-Reduce Jobs hod ringmaster Since map-reduce version 2 HOD is deprected and YARN is the scheduler to be used instead hodring 43 / 81 44 Hadoop Adoptions In Industry [19] IBM Smart Data Innovation Lab [2] [1] [2] [2] [2] [2] [1] [2] class label Closed Source Algorithms in Business Solutions (e.g. also IBM SPSS) Classification Uptake of Hadoop in many different business environments, SMEs, etc. 44 / 81 45 Map-Reduce Deployment Models Map-Reduce deployments are particularly well suited for cloud computing deployments Deployments need still some useful map-reduce codes (cf. to MPI/OpenMP w/o their codes) Options to move data to strong computing power... or move compute tasks close to data EUROPE? ICELAND? High Trust? Data Privacy Low Trust? 
On-premise full custom Map-Reduce Appliance Map-Reduce Hosting Map-Reduce As-A-Service Bare-metal Virtualized [23] Inspired by a study on Hadoop by Accenture Clouds 45 / 81 46 Data Analytics in the view of Open Data Science Experience in investigation of available parallel data mining algorithms Clustering++ Algorithm A Implementation Regression++ closed/old source, also after asking paper authors Algorithm Extension A Implementation Classification++ implementations available Parallelization of Algorithm Extension A A implementations rare and/or not stable MLlib Stable open source algorithms are still rather rare (Map-reduce, MPI/OpenMP, and GPGPUs) 46 / 81 47 Data Analytics with SVM Algorithm Availability Tool Platform Approach Parallel Support Vector Machine Apache Mahout Java; Apache Hadoop 1.0 (map-reduce); HTC No strategy for implementation (Website), serial SVM in code Apache Spark/MLlib Apache Spark; HTC Only linear SVM; no multi-class implementation Twister/ParallelSVM Java; Apache Hadoop 1.0 (map-reduce); Twister (iterations), HTC Much dependencies on other software: Hadoop, Messaging, etc. Version 0.9 development Scikit-Learn Python; HPC/HTC Multi-class Implementations of SVM, but not fully parallelized pisvm C code; Message Passing Interface (MPI); HPC Simple multi-class parallel SVM implementation outdated (~2011) GPU accelerated LIBSVM CUDA language Multi-class parallel SVM, relatively hard to program, no std. (CUDA) psvm C code; Message Passing Interface (MPI); HPC Unstable beta, SVM implementation outdated (~2011) 47 / 81 48 Lessons Learned Hadoop & Map-Reduce Be careful of investing time & efforts Frameworks keep significantly fast changing, no peer-reviews as in HPC (e.g. Hadoop 1.0 Hadoop 2.0, new move of community to Spark) Map-reduce basically standard, but not as stable as established MPI or OpenMP Hadoop 2.0 improvements with YARN to work in HPC scheduling environments Consider and observe developments around Apache Spark Solutions on-top-of Hadoop keep changing Many different frameworks are available on top of Hadoop Often business-driven developments (e.g. to be used in recommender systems) Data Analytics with Mahout have only a limited number of algorithms (E.g. Decision trees, collaborative filtering, no SVMs, no artifical neural networks) Data Analytics with Twister works, but limited algorithms (E.g. SVM v.0.9 works, but old development/research version, unmaintained) 48 / 81 49 Part Two Questions 49 / 81 50 50 / 81 51 Distributed File Systems vs. Parallel File Systems Distributed File Systems Clients, servers, and storage devices are geographically dispersed among machines of a distributed system (often appear as single system ) Manage access to files from multiple processes But generally treat concurrent access as an unusual event E.g. Hadoop Distributed File System (HDFS) implementation Parallel File Systems Deal with many problematic questions arising during parallel programming E.g. How can hundreds or thousands of processes access the same file concurrently and efficiently? E.g. How should file pointers work? E.g. Can the UNIX sequential consistency semantics be preserved? E.g. How should file blocks be cached and buffered? 
51 / 81 52 Hadoop Distributed File System (HDFS) A specialized filesystem designed for reliability and scalability Designed to run on commodity hardware Many similarities with existing distributed file systems Takes advantage of file and data replication concept Differences to traditional distributed file systems are significant Designed to be highly fault-tolerant improves dependability! Enables use with applications that have extremely large data sets Provides high throughput access to application data HDFS relaxes a few POSIX requirements Enables streaming access to file system data Origin HDFS was originally built as infrastructure for the Apache Nutch web search engine project [9] The Hadoop Distributed File System 52 / 81 53 HDFS Key Feature: File Replication Concept File replication is a useful redundancy for improving availability and performance Ideal: Replicas reside on failure-independent machines Availability of one replica should not depend on availability of others Requires ability to place replica on particular machine Failure-independent machines hard to find, but in system design easier Replication should be hidden from users But replicas must be distinguishable at lower level Different DataNodes are not visible to end-users Replication control at higher level Degree of replication must be adjustable (e.g. Hadoop configuration files) Modified from [10] Virtual Workshop 53 / 81 54 HDFS Master/NameNode The master/name node is replicated A directory for the file system as a whole knows where to find the copies All participants using the DFS know where the directory copies are The Master/Name node keeps metadata (e.g. node knows about the blocks of files) E.g. horizontal partitioning Modified from [10] Virtual Workshop 54 / 81 55 HDFS File Operations Different optimized operations on different nodes NameNode Determines the mapping of pure data blocks to DataNodes ( metadata) Executes file system namespace operations E.g. opening, closing, and renaming files and directories DataNode Serving read and write requests from HDFS filesystem clients Performs block creation, deletion, and replication (upon instruction from the NameNode) [10] Virtual Workshop 55 / 81 56 HDFS File/Block(s) Distribution Example [9] The Hadoop Distributed File System 56 / 81 57 Working with HDFS HDFS can be deployed in conjunction with another filesystem But HDFS files are not directly accessible via the normal file system Normal User Commands Create a directory named /foodir e.g. bin/hadoop dfs -mkdir /foodir View the contents of a file named /foodir/myfile.txt e.g. bin/hadoop dfs -cat /foodir/myfile.txt Administrator Commands Generate a list of DataNodes e.g. bin/hadoop dfsadmin -report Decommission DataNode datanodename (e.g. maintenance/check reasons) e.g. bin/hadoop dfsadmin decommission datanodename [9] The Hadoop Distributed File System 57 / 81
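Example shell session with the commands above (a sketch: it assumes a Hadoop 1.x installation with bin/hadoop on the PATH and a running HDFS, and myfile.txt is a placeholder local file):

    # create a directory and copy a local file into it
    bin/hadoop dfs -mkdir /foodir
    bin/hadoop dfs -put myfile.txt /foodir/myfile.txt

    # view the contents of the file
    bin/hadoop dfs -cat /foodir/myfile.txt

    # administrator command: report on all DataNodes
    bin/hadoop dfsadmin -report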
http://docplayer.net/1729693-Introduction-to-big-data-in-hpc-hadoop-and-hdfs-part-two.html
CC-MAIN-2017-22
en
refinedweb
(It's a blog.) December 28, 2013 at 5:06 pm filed under Coding Tagged FP, lisp, macros, racket Man, just when I thought I’d started to understand macros, I stumble across Racket. Don’t get me wrong. I’m still enjoying my foray into Racket. But if, for instance, you started out trying to understand Lisp macros via On Lisp, you may be in for some trouble. Yes, as far as I can tell, the usual syntax like `(foo ,bar ,@baz) will work. But if you begin to read the section of the Racket Guide about macros, it becomes clear that there’s much, much more to the picture. This is really just me thinking out loud as I work this out. I’m more sure of some things than others, and I’ll try to make clear which is which, but consider this a blanket caveat. I think I agree with the folks who’ve suggested that while it’s easy to find trivial or complex Racket macro examples, it’s hard to find examples of moderate complexity. The nice thing is that Fear of Macros exists. I began reading my way through it today. Macros in Racket seems to be built around a relatively simple idea: make syntax first class. I am not certain, but I am pretty sure that syntax like backticks, et al, are not functions. At a minimum they aren’t atoms, right? You can’t map backticks over a list of atoms, or funcall or apply it. ,, @, and backticks are all reader directives (the R in REPL), which is distinct from the environment (the E or eval in REPL). They exist at a more fundamental level to enable Lisp’s clever syntax. I don’t think there’s anything wrong with that, necessarily, but it’s interesting to contemplate Racket’s assertion that syntax should be a first-class datatype, beyond just a list. So that’s why you end up with concepts like transformers. A macro written in this way isn’t mucking around with the reader directly. (Maybe that’s how it’s implemented, though.) Rather, you’re working in a different namespace, or the moral equivalent, at compile time. A macro then becomes a function which receives a syntax object, which the programmer can manipulate via a number of other primitives. It gets a little confusing here, though, because a syntax object isn’t just a glorified AST or what have you. Functions like syntax->datum will recurse through a syntax object representing (+ 1 (* 2 3)) and yield just that, whereas others may only recurse one level, providing a syntax object for (e.g.) + and *. AIUI, these syntax objects know something about the scope in which they were introduced. And this is important because if you’re just operating at a textual substitution level, you can run into problems with scope. Is it a bit like closures? I think so, with the caveat that this would be during the compilation phase, before “real” code is executed. What do I mean by “real” code? Well, I put that in quotes because there’s not much of a difference in reality. I think it has to do with phases. So during the compile phase, you can perform computation. You could write a macro my-+ which performed addition at compile time, right? Or a real example of computation might be to take a declaration like (struct person (id name phone)) and declare accessors make-person, person-id, and so on. This is how any Lisp macro works, so I’m not singling out Racket as special. So if you consider that compile time is itself just another phase in program evaluation, issues of lexical scope and such take on a new meaning. Also, yeah, far as I can tell, in Racket there are interesting phases or times in addition to compile time. 
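[Reconstructed REPL session: the transcript originally embedded here is missing, so the expressions and abbreviated output below are an approximation based on the description that follows.]

    > #'(foo bar baz)
    #<syntax (foo bar baz)>
    > (syntax->datum #'(foo bar baz))
    '(foo bar baz)
    > (syntax-e #'(foo bar baz))
    '(#<syntax foo> #<syntax bar> #<syntax baz>)
    > (free-identifier=? (car (syntax-e #'(foo bar baz))) #'foo)
    #t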
In fact I think you can arbitrarily nest phases. I wanted to say that this is the moral equivalent of nesting backticks, but I don’t think that’s accurate enough to help. To put it grossly, backticks are a way to require another layer of evaluation. Put another way, you put backticks around something to delay its evaluation by one application of eval, and a comma to remove a “layer” of delay. Digression: sometimes I try to think about this like when you have nested quotes in prose: “Alice said ‘drink this,’ so I did,” Bob said. You have three layers of nesting here: story-level (“Bob said”), Bob-level (“Alice said”), and Alice-level (“drink this”). English isn’t quite as regimented as code, but it does have rules for quotes. End digression. (You know, as if this whole thing isn’t a digression.) I believe backticks, et al, are quite distinct from “phases” in Racket. Backticks are (again) a syntactic construct for consumption by the reader. The environment doesn’t come into play because eval doesn’t care where your lists came from; they’re “just” lists. This is advantageous in terms of simplicity, I’d expect. Conversely, a syntax object might actually know what symbols, et al, it’s referring to. There’re a pile of Racket functions oriented around this concept. You can play with this at the REPL: The third expression at the prompt is interesting only because syntax-e parses a syntax objects into its constituent parts. The subsequent expression extracts foo as a syntax object and then compares it to another syntax object representing the same foo. All right, I went through all that to flesh out my own mental model for why you might want a richer datatype to represent syntax, rather than “just” lists with reader directives. The way this ends up working in Racket is that a macro receives a syntax object rather than a list of its arguments. The macro is free to manipulate that object (e.g. using syntax-e or syntax->datum). The transformer returns something which is evaluated. With that in mind, this surprised me a little bit: eval-ing a syntax object is equivalent to eval-ing a datum. Okay. I think it’s only functionally equivalent, because of how I set up the example. Specifically, the first example is “portable” to other scopes, and it’ll be able to resolve foo whether foo exists in the current context. The second example will evaluate foo in the current context, possibly failing. To expand on that: say foo didn’t exist at runtime. Or it evaluates to something different at runtime vs compile-time. In the former case, the sharp-quoted version would still work (assuming that it was a proper macro) whereas the datum version would error out. In the latter case, the computations would each evaluate to something different. Interestingly, in the latter case, that’s actually probably what you want! That’s because they’re distinct times, and you wouldn’t want a run-time binding to trample a compile-time binding, right? Hmm. In the next post I’ll think a little more about the interesting pieces this enables, how it changes how I look at macros in Racket, and maybe I’ll even try to understand them. I don’t think, in aggregate, that it’s really all that different in terms of manipulating lists. The extra pieces come from the syntax object, and what happens when your functions and constructs know something about what they’re manipulating. You can have richer affordances and such. So that’ll be interesting to contemplate.
http://incrediblevehicle.com/2013/12/28/macros-in-racket-what/
CC-MAIN-2017-22
en
refinedweb
1 1 Home Health Medicare Refinement Changes Effective 1/1/2008 HFMA: Southern California chapter, March program Paul Giles, Catholic Healthcare West 2 2 Timeline First revision since October, 2000 Proposed rule published on April 27, 2007 Sixty-day comment period closes June 26, 2007 Final rule August 2007 Effective date 1-1-08, but 2-step approach Those episodes beginning in 2007 but ending in 2008 Those episodes beginning in 2008 3 3 PPS Reform Rule 2008 rate update PPS case-mix adjuster replaced PPS structural reforms Case-mix creep adjustment 4 4 No Changes in Services Within Episodes Services include same required Skilled Nursing, Physical Therapy, Occupational Therapy, Speech-language pathology, Medical Social Services, Home Health Aide, and non-routine supplies 60-day Episodes 5 5 Rebase National Rate Use freestanding 2003 cost reports Hospital-based reports considered "skewed" Change in labor portion from 76.775% to 77.082% 3.0% inflation increase for FY 2008, but 2.75% decrease for case-mix creep adjustment Continue to use hospital pre-floor and pre-reclassified hospital wage index Rural and urban wage indexes 6 6 Rebase National Rate For those episodes beginning in 2007 but ending in 2008 Rate = $2,337.06 (current = $2,339.00) Current rules apply Episodes beginning and ending in 2008 Rate = $2,270.62; new refinement rules apply All rates 2% less where HHAs do not report quality data 7 7 Existing HH PPS – Average Case Mix Original design, case mix average = 1.0 Using 2003 data, analysis determined new average is 1.233, increase of 23.3% CMS suggests upward trend toward coding behavior changes 8 8 Case Mix Creep CMS explains Case Mix Creep as a natural increase in coding the acuity level of patients due to behavior changes in provider types They estimate an 8.7% creep increase since PPS started Final rule establishes a 2.75% rate reduction for each of the next 3 years and a fourth year of 2.71% Over 5 years this is a cut of $6.2B, Nationally 9 9 Episode Payments Same basic payment structure for Episodes Adjustments for LUPA PEP and Outlier Adjustments SCIC adjustments are eliminated 10 10 Case-mix System Projects patient resource use based on patient characteristics Patient characteristics / acuity level come from OASIS scoring 11 11 Past Model OASIS data elements (24 questions) organized into three dimensions: Clinical severity Functional severity Service utilization 4C x 5F x 4S = 80 HHRGs Model explained 34% of variation in resource use at the time 12 12 Research to Improve Performance Later episodes use more resources Testing additional clinical, functional and demographic variables Exploring effect of co-morbidities Testing new therapy thresholds Alternatives to account for non-routine medical supplies LUPA adjustments 13 13 2008 Changes Account for later episodes Expanded diagnosis codes Changes to MO items Three graduated therapy thresholds Four separate regression models Changes to episode reimbursement adjustments 14 14 PEP Adjustment Review PEPs = 3% of all episodes Discharge and return (55%) Transfer to another agency (42%) Move to managed care (3%) No change to current policy Didn't look at medical necessity of admission to second agency 15 15 LUPA Review 13% of all episodes Incidence has changed little Initial and only episode LUPAs require longer visits Proposing increase of $92.63 for LUPA episodes that occur as the only episode or the initial
episode during a sequence of adjacent episodes Amount will be wage adjusted 16 16 LUPA Payment Example 17 17 LUPA Payment Example 18 18 SCIC Review SCICs declining (3.7% to 2.1%) SCICs had negative margins Eliminating SCICs has little impact on total payments (0.5%) Effective 1/1/2008 SCIC adjustments eliminated 19 19 Outlier Payment Review Outliers = 13% of all episodes and payments Change to Fixed Dollar Loss Ratio=0.89, from 0.67 Loss Sharing Ratio = 0.80 Outlier target = 5% of all payments Fewer episodes will qualify for outlier payments 20 20 Specific OASIS Changes…M0110 Episode Timing (NEW) 21 21 Analysis of Later Episodes Early = 1 st or 2 nd episode Later = 3 rd or later Later have higher resource use and different relationship between clinical conditions and resource use New OASIS item to identify later episodes (MO110) Default will be “Early” 22 22 Diagnosis Codes 4 diagnosis groups in earlier model (diabetes, orthopedic, neurological, and burns and trauma) Additional code groups in new model 23 23 Expanded Diagnosis Codes (Table 2b) Blindness Blood disorders Cancer Diabetes Dysphagia Gait abnormality Gastrointestinal Heart disease Hypertension Neurological Orthopedic Psychiatric Pulmonary Skin 24 24 New OASIS Form for ICD-9 25 25 Changes …M0230/M0240 /M0246 M0246 expands and replaces M0245 Consists of 4 columns Column 1 -description of diagnoses Column 2 -ICD9 codes for M0230 – primary and up to 5 M0240 all other Column 3 –optionally used if a V code is used in column 2 in place of a case-mix code. Column 4 –optionally used if a V code is used in column 2 in place of a case-mix diagnoses that requires multiple codes 26 26 M0230/M0240 /M0246 Edits Extensive edits on V codes, secondary codes, etiology underlying codes and manifestation codes 27 27 Case-mix Model Variables Exclude MO175 and MO610 MO470, MO520 and MO800 added Delete MO245 and replace it Include scores for infected surgical wounds, abscesses, chronic ulcers and gangrene Points assigned for some secondary diagnoses Points assigned for some combinations of conditions in same episode 28 28 OASIS Case-mix Items Clinical MO230 and MO240 Primary and secondary diagnosis MO250 Therapies MO390 Vision MO420 Pain MO450 and 460 Pressure ulcers MO470 (New) and MO476 Stasis ulcers 29 29 Clinical, cont. 
MO488 Surgical wounds MO490 Dyspnea MO520 Urinary incontinence/catheter (New) MO530 Bowel incontinence MO550 Ostomy MO800 Injectable drugs (New) 30 30 OASIS Functional Items MO650 or 660 Dressing MO670 Bathing MO680 Toileting MO690 Transferring MO700 Ambulation 31 31 Addition of Therapy Thresholds 10 visit threshold artificial One peak at 5-7 visits (pre-PPS) and two peaks (post-PPS) below 10 and 10-13 visits New thresholds based on data analysis and policy considerations MO175 no longer used 32 32 New Therapy Thresholds 6, 14 and 20 visits Reduce undesirable emphasis on a single threshold Restore primacy of clinical considerations for rehabilitation patients 33 33 Gradations Between Thresholds Marginal cost of 7 th therapy visit = $36 One dollar decrease for each additional visit Therapy visits grouped into small aggregates 34 34 New OASIS Scoring for Case Mix Determination Four equation model Early episodes: 1 st and 2 nd episodes Late episodes: 3 or more adjacent episodes 0-13 Therapy Visits 14 or more Therapy Visits 5 Grouping steps within equations to determine case mix OASIS questions segregated into dimensions also called domains: Clinical, Functional and Service 35 35 OASIS Scoring – Diagnosis Codes If 250.00 were other diagnosis, equation 1 = 2 points but equation 2 = 4 points Up to 6 point scores may be accumulated for M0230, M0240 & M0246 between Primary and Other diagnosis codes Optional coding should be inserted in M0246 where V codes are used in column 2 First time V codes accepted as case mix codes: V55.0, V55.5, V55.6 36 36 OASIS Scoring – Diagnosis Codes Table 2B Codes, pg 8 37 37 New OASIS Scoring for Case Mix Determination Case-Mix points will vary depending upon equation to use, 51 elements Table 2A, Case Mix Scores, pg 3 38 38 OASIS Scoring – Functional Dimension 39 39 OASIS Scoring For Case Mix 40 40 Determining Case-mix Weights Each severity level represents a different number of therapy visits Indicator variables allow 4 equation model to be combined into single regression Lowest group = $1,276.66 Add amounts for additional levels from Table 4 41 41 The New HHRGs Same HHRG form (CxFxSx) but new groupings 153 groups vs. 
80 currently Past groups are not comparable to new New HIPPS codes for billing 42 42 Summary of Case Mix Groups 43 43 Case Mix Weights Past Range: 0.5265 – 2.8113 New Range: 0.5549 – 3.3724 44 44 Non-Routine Medical Supply (NRS) Add-Ons 6 Set Severity Levels based upon total points Points gathered from OASIS answers All episodes will have NRS payment add-on except LUPAs no matter if supplies are provided or not 0 points will result in add-on payment of $14.12 (minimum) Set payment range $14.12 - $551.00 Payment is not wage-adjusted 45 45 OASIS Scoring For NRS Case Mix Scores 42 elements for selected skin conditions 7 elements for other clinical factors See Table 10B ICD-9 diagnoses codes for non-routine medical supplies Sum of points from the 49 elements will determine NRS severity level 46 46 OASIS Scoring For NRS Case Mix Scores Table 9 47 47 Example in ICD-9 Coding Example patient in CBSA 42060, early episode and projected 005 therapy visits Will fall into grouping #1 for point scores Assuming all dimensions have minimum scores Primary Cancer diagnosis of 149.00 in M0230 will score 4 points HHRG level would be C1F1S1 Payment w/o NRS add-on would be $1,497.70 48 48 Example in ICD-9 Coding Continuing example, if patient had other diagnosis of blood disorder 284.00 Recording this other diagnosis in M0240 or M0246 results in 2 additional points This pushes HHRG level to C2F1S1 Payment now w/o NRS add-on would be $1,885.29, $387.59 higher 49 49 Example in ICD-9 Coding Continuing example, if patient had a 2nd other diagnosis of low vision 369.25 Recording this 2nd other diagnosis in M0240 or M0246 results in 3 additional points This pushes HHRG level to C3F1S1 Payment now w/o NRS add-on would be $2,315.82, $430.53 higher The two other diagnoses included have increased reimbursement for the episode by $818.12 or nearly 55% 50 50 New Rate Sheet Example 51 51 Reimbursement Comparison Example 52 52 Up and Down Coding CMS announced that all up and down coding will occur automatically for the following: Early vs. Later episodes – the Medicare claims system will know the episode count based upon claims and episode dates paid. This will affect payment based on equation, grouping step and LUPA add-on M0826, number of therapy visits – Never change HIPPS code due to difference in actual # of therapy visits provided vs. the M0826 answer, claims system will adjust automatically 53 53 Billing HIPPS Codes New system of codes No longer validity flag First position is episode grouping step Positions 2-4: severity levels Position 5 is non-routine supply severity level 5th position is letter when supplies are billed and a number when supplies are not billed 1836 different codes for Home Health 54 54 Treatment Authorization Code 18 digit code Associated with key dates Also codes to provide logic for up and down coding RAP / Claim will reject if not correct 55 55 Current Issues Incorrect LUPA add-on payments made with episodes beginning in 2007 and ending within 2008 Claims rejecting when HIPPS code does not match code on RAP CMS updated ICD-9 codes as late as 1/28/08 Info vendor issues 56 56 Summary Major change to case-mix system Success dependent upon knowledge of changes Many will see decreased reimbursement
http://slideplayer.com/slide/3815128/
CC-MAIN-2017-22
en
refinedweb
Help on building to Windows 10

Hi, guys. I'm a developer at a company that is porting some code from ActionScript and Java to C++, and we recently chose Qt. I've already installed Qt Creator on some machines: on Mac and Windows 7 it's working OK, very nice. But on some Windows 10 machines, even with a fresh OS install, I can't even run the simplest console application:

    #include <QCoreApplication>

    int main(int argc, char *argv[])
    {
        QCoreApplication a(argc, argv);
        return a.exec();
    }

This error always happens:

    Starting C:\Users\carlo\OneDrive\Documentos\build-untitled2-Desktop_Qt_5_9_0_MinGW_32bit-Debug\debug\untitled2.exe...
    The program has unexpectedly finished.
    C:\Users\carlo\OneDrive\Documentos\build-untitled2-Desktop_Qt_5_9_0_MinGW_32bit-Debug\debug\untitled2.exe crashed.

I looked at some forums, and many people think it is caused by the absence of some DLL file. Well, I'd appreciate some help. Thanks a lot.

Hi, just a quick guess, but try placing your program somewhere else than in the OneDrive folder.

Thanks for the reply. On this machine I'm using the OneDrive folder, but the same occurs on installations on 2 other Windows 10 PCs. In both cases, the projects are located in "normal" folders.

Ok, another quick guess: have you tried building in Release instead of Debug?

I've just tried Release and the machine crashed (froze). I tried the same code on another machine. The console window doesn't present the QCoreApplication standard message. Nothing happens, in both release and debug. When I close the console, it returns this error:

    C:\Users\cranoya\Documents\build-untitled2-Desktop_Qt_5_9_0_MinGW_32bit-Release\release\untitled2.exe exited with code -1073741510

I've tried again at the crashed PC and the release build worked. The console window shows the message "Press <RETURN> to close this window..." But the debug build still does not work...

Are you trying just to run the app you created, or are you trying to develop on all those machines? If going for just running it, have you gone through the windeployqt command?

Hi Xyrer. I'm just trying a very simple thing: I install Qt Creator, create a new project (console application), press the "build and run" button, and this happens. I did it many times (including reinstalling Qt) and the same happens on two of my PCs. With my Windows 7 PC at home I'm working fine (I'm working with Qt just now). With my partner's Mac it's ok too. I don't know if it's a coincidence, but these two Windows 10 PCs are giving me this headache... Thanks for replying.

- JKSH Moderators
@carlos-ranoya said in Help on building to Windows 10:
I install Qt Creator, create a new project (console application), press the "build and run" button, and this happens.
What do you see if you click "Start Debugging" instead of "Build and Run" (using a Debug build)? Also, which version of Qt did you install on each PC? Are they all Qt 5.9.0 for MinGW 32-bit?
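For reference, the windeployqt step Xyrer mentions copies the Qt runtime DLLs next to a built executable so it can run on machines without a Qt installation. A typical invocation looks roughly like this (the path is an example only; run it from a Qt command prompt so the tool and the Qt libraries are on the PATH):

    windeployqt --release C:\path\to\build\release\untitled2.exe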
https://forum.qt.io/topic/78998/help-on-buildind-to-windows-10
CC-MAIN-2017-22
en
refinedweb
import "github.com/spf13/hugo/create" Package create provides functions to create new content. FindArchetype takes a given kind/archetype of content and returns an output path for that archetype. If no archetype is found, an empty string is returned. NewContent creates a new content file in the content directory based upon the given kind, which is used to lookup an archetype. Package create imports 12 packages (graph) and is imported by 76 packages. Updated 2017-03-25. Refresh now. Tools for package owners.
https://godoc.org/github.com/spf13/hugo/create
CC-MAIN-2017-22
en
refinedweb
Sponsored Post

Creating One Browser Extension For All Browsers: Edge, Chrome, Firefox, Opera, Brave And Vivaldi

By David Rousset, April 5th, 2017

In today’s article, we’ll create a JavaScript extension that works in all major modern browsers, using the very same code base. Indeed, the Chrome extension model based on HTML, CSS and JavaScript is now available almost everywhere, and there is even a Browser Extension Community Group.

Note: We won’t cover Safari in this article because it doesn’t support the same extension model as the others.

Basics

I won’t cover the basics of extension development, because plenty of good resources are already available from each vendor:

Google
Microsoft (also, see the great overview video “Building Extensions for Microsoft Edge”)
Mozilla (also, see the wiki)
Opera
Brave

So, if you’ve never built an extension before or don’t know how it works, have a quick look at those resources. Don’t worry: Building one is simple and straightforward.

Our Extension

Let’s build a proof of concept: an extension that uses artificial intelligence (AI) and computer vision to help the blind analyze images on a web page. We’ll see that, with a few lines of code, we can create some powerful features in the browser. In my case, I’m concerned with accessibility on the web, and I’ve already spent some time thinking about how to make a breakout game accessible using web audio and SVG, for instance. Still, I’ve been looking for something that would help blind people in a more general way. I was recently inspired while listening to a great talk by Chris Heilmann in Lisbon: “Pixels and Hidden Meaning in Pixels.”

Indeed, using today’s AI algorithms in the cloud, as well as text-to-speech technologies, exposed in the browser with the Web Speech API or using a remote cloud service, we can very easily build an extension that analyzes web page images with missing or improperly filled alt text properties.

My little proof of concept simply extracts images from a web page (the one in the active tab) and displays the thumbnails in a list. When you click on one of the images, the extension queries the Computer Vision API to get some descriptive text for the image and then uses either the Web Speech API or the Bing Speech API to share it with the visitor. The video below demonstrates it in Edge, Chrome, Firefox, Opera and Brave. You’ll notice that, even when the Computer Vision API is analyzing some CGI images, it’s very accurate! I’m really impressed by the progress the industry has made on this in recent months.

I’m using these services:

Computer Vision API, Microsoft Cognitive Services
This is free to use (with a quota). You’ll need to generate a free key; replace the TODO section in the code with your key to make this extension work on your machine. To get an idea of what this API can do, play around with it.

Bing Text to Speech API, Microsoft Cognitive Services
This is also free to use (with a quota, too). You’ll need to generate a free key again. We’ll also use a small library that I wrote recently to call this API from JavaScript.
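As a rough sketch of where those two keys end up, here is the kind of configuration you would put in dareangel.dashboard.js. The constant names COMPUTERVISIONKEY and BINGSPEECHKEY are taken from the snippets shown later in this article; the exact layout of the file in the repository may differ slightly.

// dareangel.dashboard.js: replace the TODO placeholder with your own key.
// An empty Computer Vision key disables the image analysis (the extension will
// only list the images), and an empty Bing Speech key simply makes the extension
// fall back to the browser's built-in Web Speech API.
var COMPUTERVISIONKEY = "TODO-paste-your-computer-vision-key-here";
var BINGSPEECHKEY = ""; // optional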
If you don’t have a Bing key, the extension will always fall back to the Web Speech API, which is supported by all recent browsers. But feel free to try other similar services:

Visual Recognition, IBM Watson
Cloud Vision API, Google

You can find the code for this small browser extension on my GitHub page. Feel free to modify the code for other products you want to test.

Tip To Make Your Code Compatible With All Browsers

Most of the code and tutorials you’ll find use the namespace chrome.xxx for the Extension API (chrome.tabs, for instance). But, as I’ve said, the Extension API model is currently being standardized to browser.xxx, and some browsers are defining their own namespaces in the meantime (for example, Edge is using msBrowser). Fortunately, most of the API remains the same behind the namespace. So, it’s very simple to create a little trick to support all browsers and namespace definitions, thanks to the beauty of JavaScript:

window.browser = (function () {
  return window.msBrowser ||
         window.browser ||
         window.chrome;
})();

And voilà! Of course, you’ll also need to use the subset of the API supported by all browsers. For instance:

Microsoft Edge has a list of supported APIs.
Mozilla Firefox shares its current Chrome incompatibilities.
Opera maintains its own list of extension APIs supported by its browser.

Extension Architecture

Let’s review together the architecture of this extension. If you’re new to browser extensions, this should help you to understand the flow. Let’s start with the manifest file.

This manifest file and its associated JSON are the minimum you’ll need to load an extension in all browsers, if we’re not considering the code of the extension itself, of course. Please check the source in my GitHub account, and start from here to be sure that your extension is compatible with all browsers. For instance, you must specify an author property to load it in Edge; otherwise, it will throw an error. You’ll also need to use the same structure for the icons. The default_title property is also important because it’s used by screen readers in some browsers.

Here are links to the documentation to help you build a manifest file that is compatible everywhere: Chrome, Edge and Firefox.

The sample extension used in this article is mainly based on the concept of the content script. This is a script living in the context of the page that we’d like to inspect. Because it has access to the DOM, it will help us to retrieve the images contained in the web page. If you’d like to know more about what a content script is, Opera, Mozilla and Google have documentation on it.

Our content script is simple:

console.log("Dare Angel content script started");

browser.runtime.onMessage.addListener(function (request, sender, sendResponse) {
    if (request.command == "requestImages") {
        var images = document.getElementsByTagName('img');
        var imagesList = [];
        for (var i = 0; i < images.length; i++) {
            // Skip small images: tracking pixels, icons and other low-resolution junk
            if (images[i].width > 64 && images[i].height > 64) {
                imagesList.push({ url: images[i].src, alt: images[i].alt });
            }
        }
        sendResponse(JSON.stringify(imagesList));
    }
});

This first logs into the console to let you check that the extension has properly loaded. Check it via your browser’s developer tools, accessible from F12, Control + Shift + I or ⌘ + ⌥ + I.
It then waits for a message from the UI page with a requestImages command to get all of the images available in the current DOM, and then it returns a list of their URLs if they’re bigger than 64 × 64 pixels (to avoid all of the pixel-tracking junk and low-resolution images).

The popup UI page we’re using is very simple and will display the list of images returned by the content script inside a flexbox container. It loads the start.js script, which immediately creates an instance of dareangel.dashboard.js to send a message to the content script to get the URLs of the images in the currently visible tab.

Here’s the code that lives in the UI page, requesting the URLs from the content script:

browser.tabs.query({ active: true, currentWindow: true }, (tabs) => {
    browser.tabs.sendMessage(tabs[0].id, { command: "requestImages" }, (response) => {
        this._imagesList = JSON.parse(response);
        this._imagesList.forEach((element) => {
            var newImageHTMLElement = document.createElement("img");
            newImageHTMLElement.src = element.url;
            newImageHTMLElement.alt = element.alt;
            newImageHTMLElement.tabIndex = this._tabIndex;
            this._tabIndex++;
            newImageHTMLElement.addEventListener("focus", (event) => {
                if (COMPUTERVISIONKEY !== "") {
                    this.analyzeThisImage(event.target.src);
                }
                else {
                    var warningMsg = document.createElement("div");
                    warningMsg.innerHTML = "Please generate a Computer Vision key in the other tab.";
                    this._targetDiv.insertBefore(warningMsg, this._targetDiv.firstChild);
                    // Opens the page where you can generate a free Computer Vision key
                    browser.tabs.create({ active: false, url: "" });
                }
            });
            this._targetDiv.appendChild(newImageHTMLElement);
        });
    });
});

We’re creating image elements. Each image will trigger an event if it has focus, querying the Computer Vision API for review. This is done by this simple XHR call:
Here, then, is the global workflow of this little extension: 52(View large version53) Loading The Extension In Each Browser Link Let’s review quickly how to install the extension in each browser. Prerequisites Link Download or clone my small extension54 from GitHub somewhere to your hard drive. Also, modify dareangel.dashboard.js to add at least a Computer Vision API key. Otherwise, the extension will only be able to display the images extracted from the web page. Microsoft Edge Link First, you’ll need at least a Windows 10 Anniversary Update (OS Build 14393+) to have support for extensions in Edge. Then, open Edge and type about:flags in the address bar. Check the “Enable extension developer features.” 55 Click on “…” in the Edge’s navigation bar and then “Extensions” and then “Load extension,” and select the folder where you’ve cloned my GitHub repository. You’ll get this: 56 Click on this freshly loaded extension, and enable “Show button next to the address bar.” 57 Note the “Reload extension” button, which is useful while you’re developing your extension. You won’t be forced to remove or reinstall it during the development process; just click the button to refresh the extension. Navigate to BabylonJS626158, and click on the Dare Angel (DA) button to follow the same demo as shown in the video. Google Chrome, Opera, Vivaldi Link In Chrome, navigate to chrome://extensions. In Opera, navigate to opera://extensions. And in Vivaldi, navigate to vivaldi://extensions. Then, enable “Developer mode.” Click on “Load unpacked extension,” and choose the folder where you’ve extracted my extension. 59(View large version60) Navigate to BabylonJS626158, and open the extension to check that it works fine. Mozilla Firefox Link You’ve got two options here. The first is to temporarily load your extension, which is as easy as it is in Edge and Chrome. Open Firefox, navigate to about:debugging and click “Load Temporary Add-on.” Then, navigate to the folder of the extension, and select the manifest.json file. That’s it! Now go to BabylonJS626158 to test the extension. 63(View large version64) The only problem with this solution is that every time you close the browser, you’ll have to reload the extension. The second option would be to use the XPI packaging. You can learn more about this in “Extension Packaging65” on the Mozilla Developer Network. Brave Link The public version of Brave doesn’t have a “developer mode” embedded in it to let you load an unsigned extension. You’ll need to build your own version of it by following the steps in “Loading Chrome Extensions in Brave66.” As explained in that article, once you’ve cloned Brave, you’ll need to open the extensions.js file in a text editor. Locate the lines below, and insert the registration code for your extension. In my case, I’ve just added the two last lines: // Manually install the braveExtension and torrentExtension extensionInfo.setState(config.braveExtensionId, extensionStates.REGISTERED) loadExtension(config.braveExtensionId, getExtensionsPath('brave'), generateBraveManifest(), 'component') extensionInfo.setState('DareAngel', extensionStates.REGISTERED) loadExtension('DareAngel', getExtensionsPath('DareAngel/')) view raw Copy the extension to the app/extensions folder. Open two command prompts in the browser-laptop folder. In the first one, launch npm run watch, and wait for webpack to finish building Brave’s Electron app. It should say, “webpack: bundle is now VALID.” Otherwise, you’ll run into some issues. 
67(View large version68) Then, in the second command prompt, launch npm start, which will launch our slightly custom version of Brave. In Brave, navigate to about:extensions, and you should see the extension displayed and loaded in the address bar. 69(View large version70) Debugging The Extension In Each Browser Link Tip for all browsers: Using console.log(), simply log some data from the flow of your extension. Most of the time, using the browser’s developer tools, you’ll be able to click on the JavaScript file that has logged it to open it and debug it. Microsoft Edge Link To debug the client script part, living in the context of the page, you just need to open F12. Then, click on the “Debugger” tab and find your extension’s folder. Open the script file that you’d like to debug — dareangel.client.js, in my case — and debug your code as usual, setting up breakpoints, etc. 71(View large version72) If your extension creates a separate tab to do its job (like the Page Analyzer73, which our Vorlon.js74 team published in the store), simply press F12 on that tab to debug it. 75(View large version76) If you’d like to debug the popup page, you’ll first need to get the ID of your extension. To do that, simply go into the property of the extension and you’ll find an ID property: 77 Then, you’ll need to type in the address bar something like ms-browser-extension://ID_of_your_extension/yourpage.html. In our case, it would be ms-browser-extension://DareAngel_vdbyzyarbfgh8/dashboard.html. Then, simply use F12 on this page: 78(View large version79) Google Chrome, Opera, Vivaldi, Brave Link Because Chrome and Opera rely on the same Blink code base, they share the same debugging process. Even though Brave and Vivaldi are forks of Chromium, they also share the same debugging process most of the time. To debug the client script part, open the browser’s developer tools on the page that you’d like to debug (pressing F12, Control + Shift + I or ⌘ + ⌥ + I, depending on the browser or platform you’re using). Then, click on the “Content scripts” tab and find your extension’s folder. Open the script file that you’d like to debug, and debug your code just as you would do with any JavaScript code. 80(View large version81) To debug a tab that your extension would create, it’s exactly the same as with Edge: Simply use the developer tools. 82(View large version83) For Chrome and Opera, to debug the popup page, right-click on the button of your extension next to the address bar and choose “Inspect popup,” or open the HTML pane of the popup and right-click inside it to “Inspect.” Vivaldi only supports right-click and then “Inspect” inside the HTML pane once opened. 84(View large version85) For Brave, it’s the same process as with Edge. You first need to find the GUID associated with your extension in about:extensions: 86 And then, in a separate tab, open the page you’d like to debug like — in my case, chrome-extension://bodaahkboijjjodkbmmddgjldpifcjap/dashboard.html — and open developer tools. 87(View large version88) For the layout, you have a bit of help using Shift + F8, which will let you inspect the complete frame of Brave. And you’ll discover that Brave is an Electron app using React! Note, for instance, the data-reactroot attribute. 89(View large version90) Note: I had to slightly modify the CSS of the extension for Brave because it currently displays popups with a transparent background by default, and I also had some issues with the height of my images collection. I’ve limited it to four elements in Brave. 
Mozilla Firefox Link Mozilla has really great documentation on debugging web extensions91. For the client script part, it’s the same as in Edge, Chrome, Opera and Brave. Simply open the developer tools in the tab you’d like to debug, and you’ll find a moz-extension://guid section with your code to debug: 92(View large version93) If you need to debug a tab that your extension would create (like Vorlon.js’ Page Analyzer extension), simply use the developer tools: 94(View large version95) Finally, debugging a popup is a bit more complex but is well explained in the “Debugging Popups96” section of the documentation. 97(View large version98) Publishing Your Extension In Each Store Link Each vendor has detailed documentation on the process to follow to publish your extension in its store. They all take similar approaches. You need to package the extension in a particular file format — most of the time, a ZIP-like container. Then, you have to submit it in a dedicated portal, choose a pricing model and wait for the review process to complete. If accepted, your extension will be downloadable in the browser itself by any user who visits the extensions store. Here are the various processes: Google: “Publish in the Chrome Web Store99” Mozilla: “Publishing your WebExtension100” Opera: “Publishing Guidelines101” Microsoft: “Packaging Microsoft Edge Extensions102” Please note that submitting a Microsoft Edge extension to the Windows Store is currently a restricted capability. Reach out to the Microsoft Edge team103 with your request to be a part of the Windows Store, and they’ll consider you for a future update. I’ve tried to share as much of what I’ve learned from working on our Vorlon.js Page Analyzer extension104 and this little proof of concept. Some developers remember the pain of working through various implementations to build their extension — whether it meant using different build directories, or working with slightly different extension APIs, or following totally different approaches, such as Firefox’s XUL extensions or Internet Explorer’s BHOs and ActiveX. It’s awesome to see that, today, using our regular JavaScript, CSS and HTML skills, we can build great extensions using the very same code base and across all browsers! Feel free to ping me on Twitter105 for any feedback. (ms, vf, r
http://www.webhostingreviewsbynerds.com/creating-one-browser-extension-for-all-browsers-edge-chrome-firefox-opera-brave-and-vivaldi/
CC-MAIN-2017-22
en
refinedweb