text: string (lengths 20 to 1.01M)
url: string (lengths 14 to 1.25k)
dump: string (lengths 9 to 15)
lang: 4 classes
source: 4 classes
    package org.netbeans.modules.html.editor.folding;

    import org.netbeans.editor.CodeFoldingSideBar;
    import org.netbeans.editor.SideBarFactory;

    /**
     * HTML Code Folding Side Bar Factory, responsible for creating CodeFoldingSideBar.
     * Plugged via layer.xml.
     *
     * @author Martin Roskanin, Marek Fukala
     */
    public class HTMLCodeFoldingSideBarFactory implements SideBarFactory {

        public HTMLCodeFoldingSideBarFactory() {
        }

        public javax.swing.JComponent createSideBar(javax.swing.text.JTextComponent target) {
            return new CodeFoldingSideBar(target);
        }
    }
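The Javadoc above says the factory is "plugged via layer.xml". For reference, this is a sketch of the kind of XML layer registration that typically wires a SideBarFactory to the text/html MIME type; the folder layout and instance-file name below are an assumption for illustration, not copied from the actual NetBeans module:

    <folder name="Editors">
        <folder name="text">
            <folder name="html">
                <folder name="SideBar">
                    <file name="org-netbeans-modules-html-editor-folding-HTMLCodeFoldingSideBarFactory.instance"/>
                </folder>
            </folder>
        </folder>
    </folder>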
http://kickjava.com/src/org/netbeans/modules/html/editor/folding/HTMLCodeFoldingSideBarFactory.java.htm
CC-MAIN-2017-04
en
refinedweb
Description

I have a simple POJO service with the following method:

    package org.tempuri.test;

    import org.tempuri.test.data.arrays.ArrayOfanyType;

    public class TypeTest {
        public ArrayOfanyType retArrayAnyType1D(ArrayOfanyType inArrayAnyType1D) { ... }
    }

The ArrayOfanyType is declared like this:

    public class ArrayOfanyType {
        private Object[] anyType;

        public Object[] getAnyType() {
            return this.anyType;
        }

        public void setAnyType(Object[] anyType) {
            this.anyType = anyType;
        }
    }

I deploy this POJO on an Axis2 1.4 runtime running on Tomcat. Then I generate a client stub using the following command:

    wsdl2java -ap -o ./generated -s -u -uw -uri

I use the stub to invoke the service, passing an OMElement in the Object array:

    OMFactory factory = OMAbstractFactory.getOMFactory();
    OMNamespace ns = factory.createOMNamespace("", "article");
    OMElement articleElement = factory.createOMElement("Article", ns);
    ArrayOfanyType input = new ArrayOfanyType();
    input.setAnyType(new OMElement[] { articleElement });
    stub.retArrayAnyType1D(input);

While serializing the ArrayOfanyType ADBBean I get an "Unknown type can not serialize" exception:

    Caused by: javax.xml.stream.XMLStreamException: Unknow type can not serialize
        at org.apache.axis2.databinding.utils.ConverterUtil.serializeAnyType(ConverterUtil.java:1449)
        at org.tempuri.test.data.arrays.xsd.ArrayOfanyType.serialize(ArrayOfanyType.java:241)
        at org.tempuri.test.data.arrays.xsd.ArrayOfanyType.serialize(ArrayOfanyType.java:160)
        at org.tempuri.test.RetArrayAnyType1D.serialize(RetArrayAnyType1D.java:203)
        at org.tempuri.test.RetArrayAnyType1D.serialize(RetArrayAnyType1D.java:123)
        at org.tempuri.test.RetArrayAnyType1D$1.serialize(RetArrayAnyType1D.java:111)
        ...

I did not have this problem on Axis2 1.3, so I guess something has been changed in ConverterUtil.

Issue Links: is duplicated by AXIS2-4439 "ADBException: Any type element type has not been given" - Resolved

Hi Detelin, do you have a patch for this issue? Thanks, Nandana.

Hi Nandana, no I don't have a patch, but I think that if I could bring Amila's attention to this, he could explain why this is not working and provide a possible workaround. My understanding so far is that in Axis2 1.3 the anyType was serialized and deserialized as OMElement, but now in Axis2 1.4 that is not so. I guess that decision was taken due to this jira: And since the sample above was taken from one of the integration test classes, namely org.apache.axis2.rpc.complex.ComplexDataTypesComplexDataTypesSOAP11Test#testretArrayAnyType1D(), I checked that method again in Axis2 trunk and saw that it has been commented out by Amila, with the following comment: "svn commit change the adb any type handling. currently it uses an om element to support the anytype. but this is wrong. so I have change it to use the object to represent the anytype and the serialize it accordingly. although this a current Axis2 1.3 bug, I put this commit only to trunk since this is a bit large change." So my question is whether there is a way to send an OMElement in an Object array in the current version?

When generating code with the wsdl2java tool, it takes its input only from the wsdl file. So if there is an element of type anyType, it generates an Object-typed parameter for it. xsd:anyType is the parent type of all XmlSchema types, just as java.lang.Object is for Java objects, e.g.
    <complexType>
        <sequence>
            <element name="testValue" type="xsd:anyType" minOccurs="0" nillable="true"/>
        </sequence>
    </complexType>

Here you can give any standard Java class so that it serializes it with the type, e.g.

    <testValue xsi:type="xsd:int">5</testValue>

so at the other end it can create the proper objects depending on the type. But if you have a schema like this

    <complexType>
        <sequence>
            <any/>
        </sequence>
    </complexType>

then ADB generates an OMElement for the corresponding parameter, and hence you can parse an OMElement. Here you can assume that the wsdl2java tool is something which tries to generate code so that the generated code serializes Java objects and parses XML streams according to the schema given in the wsdl. It does not have an idea about the implementation logic of the service.

This is not an issue; ADB serializes anyType correctly.

From the XML schema specification: "2.5.4." Since on the other hand xsi:type is not mandatory, this issue is relevant, so I'm reopening it. To summarize: while the support for Java types is a real improvement over 1.3, not supporting OMElements and considering xsi:type as mandatory is a clear regression with respect to 1.3.

Detelin, can you describe how exactly anyType was handled in 1.3? And also, let's say we get something like this

    <anything>
        <element1>test</element1>
        <element2>test</element2>
    </anything>

then how is <element1>test</element1><element2>test</element2> going to be represented as an OMElement? In my interpretation anyType is like java.lang.Object. All classes extend from the Object class. When you use anyType for an element it is like using the Object type for a Java field. At runtime the element can have any type, but the type should be a defined one. And hence you should have a value for xsi:type. I agree with you that what the schema says is ambiguous. This is the only interpretation I can come up with to write a possible implementation. In this way I could interoperate the anyType with the MSFT wsdl, where with OMElement it was not possible.

>?
If there is an xsi:type attribute, then ADB should use the mapped Java type to represent the element.

> And also lets say we get something like this
> <anything>
> <element1>test</element1>
> <element2>test</element2>
> </anything>
>
> then how <element1>test</element1><element2>test</element2> going to represent as an OMElment?

The OMElement should actually represent the "anything" element. Alternatively it could be represented using some other object that stores a node list (the child nodes) and a list of attributes. That is why I asked how this was represented (as an OMElement) in Axis2 1.3.

> In my interpretation anyType is like java.lang.Object. All classes are extend from Object class.

In Java, classes are derived from java.lang.Object by extension. On the other hand, schema types are derived from anyType by restriction. They are therefore not comparable.

> when you use anyType for an element it is like using Object type for an java field. At runtime the element can have any type but the type should be a defined one.

The XML schema specs clearly say that an element declared with anyType can have any content, and this content is not necessarily described by an existing type. What you are describing here is not anyType, but <xsd:any/> (for which ADB actually uses OMElement, while this should be represented as a Java object).

> And hence you should have a value to xsi:type.
> I Agree with you that what schema says is ambiguous.

I don't pretend that the schema specs are ambiguous.
They are very clear and they make sense if one avoids comparing the schema type system (which works by restriction and extension) with the Java type system (which only works by extension).

> This is the only interpretation I can come up with to write a possible
> implementation. In this way I could interoperate the anyType with the MSFT wsdl where with OMElement it was not possible.

As I said above, supporting xsi:type correctly is a real improvement, but on the other hand, considering xsi:type as mandatory is a regression. IMHO the correct approach is as follows (a sketch of the serialization side of this appears below, after the discussion):

- The property storing the anyType element should be of type Object.
- During deserialization:
  - If xsi:type is present, map the content to a Java object.
  - Otherwise, map it to an OMElement.
- During serialization:
  - If the property refers to an OMElement, serialize this element and don't add xsi:type.
  - Otherwise, map the Java object to a schema type and serialize it with an xsi:type attribute.

First of all, if you take the root cause of this problem, you are trying to solve it in the wrong place.

    public class ArrayOfanyType {
        private Object[] anyType;

        public Object[] getAnyType() {
            return this.anyType;
        }

        public void setAnyType(Object[] anyType) {
            this.anyType = anyType;
        }
    }

At runtime, what type of objects can this.anyType array have? I don't think it is OMElement; it is some type of Java objects, e.g. String, Integer or any other class. So isn't it correct to generate those classes at the stub rather than an OMElement? So the correct solution is that the POJO should check the type and serialise it properly. Then a question comes: what happens for user-defined classes (e.g. Student, Vehicle)? For those things I think there must be a way to specify them in services.xml, and they should show in the wsdl as a separate complex type. Then any client generating code for that can create a data binding class for it and map it using the given xsi:type in the response. If someone has a method signature like this,

    public void setAnyType(OMElement[] anyType) {
        this.anyType = anyType;
    }

then the generated wsdl should have xs:any to represent this and the stub client generates accordingly. IMHO the correct place to fix this issue is at the POJO level. Taking your suggestion to add OMElement support as well to xsd:anyType: can you please first check how this is done in jaxbri (i.e. -d jaxbri)? I mean, what is the type it generates for an xs:anyType element, and what happens when getting a response with the xsi:type and without it?

JAXB does it the right way. Here is the relevant part from the specs: "A schema author defines an element to be of type xs:anyType to defer constraining an element to a particular type to the xml document author. Through the use of xsi:type attribute or element substitution, an xml document author provides constraints for an element defined as xs:anyType. The JAXB unmarshaller is able to unmarshal a schema defined xsd:anyType element that has been constrained within the xml document to an easy to access JAXB mapped class. However, when the xml document does not constrain the xs:anyType element, JAXB unmarshals the unconstrained content to an element node instance of a supported DOM API."

This only talks about the unmarshaller. Let's say we have a schema element like this,

    <element name="test" type="xs:anyType"/>

and we get an xml like this for it,

    <test>test string</test>

how is it going to build a DOM element from "test string"? So I think it is better to try out a real sample and see how this is handled.
In other words, you should be able to do what Yordanov has tried to do with jaxbri. If jaxbri supports this, I am OK to add this support to ADB. Anyway, I don't believe this is the correct fix for the reported issue.

For

    <test>test string</test>

JAXB creates an Element representing "test", and for

    <test xsi:type="xs:string">test string</test>

JAXB creates a String with the value "test string".

I did a test with this POJO class with jaxbri data binding:

    public Object getOMElement(String name) {
        OMFactory omFactory = OMAbstractFactory.getOMFactory();
        OMNamespace omNamespace = omFactory.createOMNamespace("", "test");
        OMElement omElement = omFactory.createOMElement("test", omNamespace);
        omElement.setText("test element");
        return omElement;
    }

But the returned DOM element actually represents this xml:

    <ns:return>
        <test:test xmlns:test="">test element</test:test>
    </ns:return>

So it gives the extra wrapper element. I am not sure whether this is part of the xml schema or the jaxb spec. But it is OK with me to add this to ADB (returning an OMElement with a wrapper element), at least as a workaround for situations where we don't get xsi:type.

The aar contains also the sources of the service.
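For illustration only, here is a minimal sketch (not the actual ADB code, and not part of the attached aar) of the serialization half of the approach proposed above: represent the anyType value as java.lang.Object, write an OMElement through as-is without xsi:type, and delegate plain Java values to a converter that emits xsi:type. The ConverterUtil.serializeAnyType(Object, XMLStreamWriter) signature is assumed from the stack trace above.

    import javax.xml.stream.XMLStreamException;
    import javax.xml.stream.XMLStreamWriter;

    import org.apache.axiom.om.OMElement;
    import org.apache.axis2.databinding.utils.ConverterUtil;

    public final class AnyTypeSerializationSketch {

        // Dispatch on the runtime type of the anyType value.
        static void serializeAnyTypeValue(Object value, XMLStreamWriter writer)
                throws XMLStreamException {
            if (value instanceof OMElement) {
                // An unconstrained element: write it out verbatim, no xsi:type.
                ((OMElement) value).serialize(writer);
            } else {
                // A plain Java value: let the converter map it to a schema
                // simple type and emit it together with an xsi:type attribute.
                ConverterUtil.serializeAnyType(value, writer);
            }
        }
    }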
https://issues.apache.org/jira/browse/AXIS2-3797?attachmentSortBy=dateTime
CC-MAIN-2017-04
en
refinedweb
java.lang.Object
    org.springframework.context.support.ApplicationObjectSupport
        org.springframework.web.context.support.WebApplicationObjectSupport
            org.springframework.web.servlet.view.ContentNegotiatingViewResolver

public class ContentNegotiatingViewResolver

Implementation of ViewResolver that resolves a view based on the request file name or Accept header. The requested media type is determined using the following criteria:

- If the setFavorPathExtension(boolean) property is true, the mediaTypes property is inspected for a media type matching the file extension of the request path.
- If the setFavorParameter(boolean) property is true, the mediaTypes property is inspected for a media type matching a request parameter. The default name of the parameter is format and it can be configured using the parameterName property.
- If there is no match in the mediaTypes property and if the Java Activation Framework (JAF) is both enabled and present on the class path, FileTypeMap.getContentType(String) is used instead.
- If ignoreAcceptHeader is false, the request Accept header is used.

See also: View

public ContentNegotiatingViewResolver()

public void setOrder(int order)
public int getOrder()

(Media type mappings are keyed by file extension or parameter value, as indicated by the favorPathExtension and favorParameter properties.)

public void setDefaultViews(List<View> defaultViews)
    Sets the default views to use when a more specific view cannot be obtained from the ViewResolver chain.

public void setDefaultContentType(MediaType defaultContentType)
    This content type will be used when neither file extension, parameter, nor Accept header defines a content type, either through being disabled or empty.

public void setUseJaf(boolean useJaf)
    Default is true, i.e. the Java Activation Framework is used (if available).

protected List<MediaType> getMediaTypes(HttpServletRequest request)
    Determines the list of MediaType for the given HttpServletRequest. The default implementation invokes getMediaTypeFromFilename(String) if the favorPathExtension property is true. If the property is false, or when a media type cannot be determined from the request path, this method will inspect the Accept header of the request. This method can be overridden to provide a different algorithm.
    request - the current servlet request

protected MediaType getMediaTypeFromFilename(String filename)
    Determines the MediaType for the given filename. The default implementation will check the media types property first for a defined mapping. If not present, and if the Java Activation Framework can be found on the classpath, it will call FileTypeMap.getContentType(String). This method can be overridden to provide a different algorithm.
    filename - the current request file name (i.e. hotels.html)

protected MediaType getMediaTypeFromParameter(String parameterValue)
    Determines the MediaType for the given parameter value. The default implementation will check the media types property for a defined mapping. This method can be overridden to provide a different algorithm.
    parameterValue - the parameter value (i.e.)
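A minimal configuration sketch built from the properties described above; it is not taken from the Javadoc itself, and the setter names setMediaTypes (with a Map<String, String> argument) and setIgnoreAcceptHeader are assumptions that do not appear literally in the text above:

    import java.util.HashMap;
    import java.util.Map;

    import org.springframework.http.MediaType;
    import org.springframework.web.servlet.view.ContentNegotiatingViewResolver;

    public class ViewResolverConfigSketch {

        public static ContentNegotiatingViewResolver contentNegotiatingViewResolver() {
            ContentNegotiatingViewResolver resolver = new ContentNegotiatingViewResolver();

            // Map path extensions / "format" parameter values to media types.
            Map<String, String> mediaTypes = new HashMap<String, String>();
            mediaTypes.put("html", "text/html");
            mediaTypes.put("json", "application/json");
            resolver.setMediaTypes(mediaTypes);

            resolver.setFavorPathExtension(true);   // check the file extension first
            resolver.setFavorParameter(false);      // ignore the "format" request parameter
            resolver.setIgnoreAcceptHeader(false);  // fall back to the Accept header
            resolver.setUseJaf(false);              // do not consult the Java Activation Framework
            resolver.setDefaultContentType(MediaType.TEXT_HTML); // last-resort content type
            resolver.setOrder(1);                   // position in the ViewResolver chain
            return resolver;
        }
    }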
http://docs.spring.io/spring/docs/3.1.0.M1/javadoc-api/org/springframework/web/servlet/view/ContentNegotiatingViewResolver.html
CC-MAIN-2017-04
en
refinedweb
1. CNGrid Software Progress — Zhiwei Xu, Institute of Computing Technology, Software Team, Chinese Academy of Sciences, China National Grid. EU Grid At Asia Workshop, June 23, 2005, Beijing

2. Contents: CNGrid Software Objectives; Approach and Roadmap; Status; Software Development and Applications; Research Focus and Techniques; Ideas for EU cooperation

3. CNGrid Software Objectives: Support applications in four areas. Connect distributed resources into single system images: eliminate silos. Mask resource heterogeneity and distribution. Automate common requirements. Reduce lifecycle cost of distributed applications, thus enabling sharing and cooperation.

4. CNGrid Software: Distributed Resources and Services; App Scope of CNGrid Software; Science Research, Manufacturing, Resources and Environment, Services Sector

5. Connectivity, Transparency, Automation [diagram: heterogeneous resources — SPARC, Oracle, Solaris, IA-32, MySql, Linux, Power4, WebSphere, AIX, VLIW, GIS, HP-UX, AMD, SQL, Windows, MatLab, PDESolver, Simulator, Analyzer, Data Miner — unified into a Single System Image for Application Grids; CNGrid Software provides automated common supports]

6-7. [architecture diagram: Web Browser, C/S Client and other clients access Grid Portals (Web style, C/S style, other); GOS API and Utilities; Vega GOS constructs and services (Resource Info, Resource Mgmt, Jobs, User, Monitoring, Accounting, Data); GOS Kernel on Apache, OMII, GT4; Effective and Virtual Resources over Physical Resources (HPC, Storage, Database, Software, Files); GT Services and Web Services; Grid Security (Grishield), GriDaEn; CNGrid Software OGSA Platform Layer]

8. GOS Constructs [diagram: GSML Page clients, Grips (Grip1-Grip4), Grid Operating System (GOS Kernel, Core, Libraries, Utilities), Beijing/Shanghai/Xian nodes with Grid Servers (GS) and Grid Routers (GR), Agora 1 and Agora 2, Mapper/Composer performing composing and mapping, Dawning and Dagger servers, Effective/Virtual/Physical resources]

9. Grishield: CNGrid Security — End-to-End, from user log-on to physical resource execution; details are hidden from user/developer; based on WS-Security; cert-based authentication, token-based authorization & AC, signature. [diagram: Web Portal/Server, uCert, uid/pass, Grip Container, Agora, pCert, physical services, user/resource AA tokens (uTK)]

10. GridDaEn: Grid Data Engine — system-level service of GOS developed by NUDT; provides uniform data operations over a global namespace. [diagram: Browser, Grid Portal Engine, Grip Container, Agora Service, distributed DRB Services, Grid Application, uCert/user cert, uTK]

11. GridDaEn: Grid Data Engine — Global logical view: utilizes a uniform three-level naming scheme that shields users from low-level heterogeneous storage resources; provides a global logical view of data resources in multiple domains for users. Uniform access: provides a set of uniform APIs and SDKs to access and manage geographically distributed data resources.
Federated services: a distributed structure — distributed DRB (Data Request Broker) and distributed MDIS (Metadata Information Server); several DRBs combined to provide federated services; distributed data replication and caching.

12. Grid Data Engine

13. Vega GOS

14. Vega GOS and OGSA V1.0 — Vega is an implementation of (part of) OGSA. Vega would like to contribute to OGSA after implementation and testing (running codes); loose coupling; partner with other groups. Focus on 4 key issues and aim at minimal common requirements: Naming, Process/States, VO, Programming. Vega complements existing grid projects: focus on implementation architecture, not protocols/services; use a computer systems approach, not middleware or network; utilize existing software at the Vega GOS kernel level (Apache; OMII, GT4; commercial) and as services at the Vega GOS application level.

15. Naming in OGSA and Vega GOS — Vega matches the OGSA 3-level naming convention: OGSA Human-Oriented / Abstract / Address corresponds to Vega (EVP) Effective / Virtual / Physical, as the primary way for virtualization. An OGSA Naming specification must include: precise definitions and axioms; syntax and semantics (rough consensus); who provides, uses, and maintains such names; scoping and name/address space; lifecycle and exception handling; mapping, resolution, binding; provision for resources.

16. Layered Resource Naming And Mapping [diagram: physical resources (PRes1, P2-P4) behind Router1/Router2 and service containers A/B; virtual resources (VRes1, V2-V4) and effective resources (ERes1, E2, E3) mapped via PT entries such as PT(V1E1), PT(V2E1), PT(V3E2), PT(V2E2), PT(V4E3); Agora1/Agora2 in the top layer overlap the router bottom layer; naming schemes vres://router_name:res_v_name and eres://agora_name:res_e_name]

17. VO in OGSA and Vega GOS — There is no precise definition of VO in OGSA. Agora is a concrete example of a VO (community). Agora has a precise definition, and it holds subjects, objects, and context/policies information, plus Agora-related system services. Agora is persistent and static. The application programmer knows the agora concept, but the agora does not appear in application code.

18. Inner Structure of Agora [diagram: Tomcat+Axis hosting the Agora access control mechanism and authorization engine; Resource Mgmt. and User Mgmt. clients, interfaces and services; user login and resource authorization flows; role/proxy, user name, profile, ERes mapping, VRes, PT; AAA client and Authorization Authority Service with AC policy management]
Resource selection.

19. Process/States in OGSA and Vega GOS — There is no process concept in OGSA 1.0. A Grip is a distributed process in the grid environment: a runtime construct representing a subject (a grid user running a grid application) to access and utilize objects (grid resources and services). Classification of states: session related; application logic specific; grid system related; resource related; service specific.

20. Grip [sequence diagram: a grip is created with uid/pass via the Agora Service; proxy and profile are bound; the ERes name is resolved to a VRes name, token and PT; the physical service is invoked through the network of resource routers; results are returned, cached, and the grip is closed — covering authentication, resource selection, resource authorization, resource locating, service invocation and return]

21. Core and Kernel Put It Together [diagram: Web and other clients, Grips (user, app logic, address space, states), Agora (policies: security and selection), physical services, system services, resource services, UI and utility tools]. Common supports, not per-service or per-application code. Follow the E2E and KISS principles: loose coupling; hide details, reduce coding; try to minimize abstractions. 4 abstractions: User, (Effective) Service, Grip, Agora. 5 API functions.

22. GSML: Grid Service Markup Language — Main constructs of the language: pipes are software components consuming various resources (including services). At runtime, pipes are independent, concurrent, event-driven processes (or threads). The only way to interact with pipes is sending events to or intercepting events from them. A new programming language for end users: XML-based, descriptive rather than imperative; event-driven model; component-based design; focus on interaction.

23. GSML Software Suite: A WYSIWYG Composer [screenshot: edit area, event properties, resource repositories]

24. GSML: Demo Applications — resource information monitor, e-learning, collaboration, digital library

25. GSML: A Simple Example [code screenshot: wsdlLocation, portName, StockQuoteSoap, ...]

26. Aviation and Space Simulation Computing

27. Biological Computing - Genome Sequence Tracing

28. Geological Computing - Underground Water Evaluation

29. CNGrid Software Roadmap [timeline: preview with sample apps, alpha, beta with CNGrid apps, CNGrid deployment on OMII and GT; CI6016 & GCC 2005 exhibit]

30. Suggestions for EU Cooperation — Infrastructure projects: connect China National Grid to EU grids; CNGrid Software to connect resources and applications. Research projects: net-centric OS architecture; key OS abstractions and constructs (Naming/Virtualization, VO, Grip); exception handling; optimization; programming environment: language and tools, debugging.
http://slideplayer.com/slide/739459/
CC-MAIN-2017-04
en
refinedweb
This module is unmaintained. Maybe someday...

DOMForm is a Python module for web scraping and web testing. It knows how to evaluate embedded JavaScript code in response to appropriate events. DOMForm supports both the ClientForm 0.1.x HTML form interface and the HTML DOM level 2 interface (note that ATM the DOM is written to an out-of-date version of the specification, and has some hacks to get it to work with "DOM as deployed"). The ClientForm interface makes it easy to parse HTML forms, fill them in and return them to the server. The DOM interface makes it easy to get at other parts of the document, and makes JavaScript support possible. The ability to switch back and forth between the two interfaces allows simpler code than would result from using either interface alone.

DOMForm is partly derived from several third-party libraries. JavaScript support currently depends on Mozilla's GPLed spidermonkey JavaScript interpreter (which is available separately from Mozilla itself), and a Python interface to spidermonkey.

This package allows you to use web pages containing JavaScript code, have that code automatically executed at appropriate times, and have the results reflected both in an HTML DOM tree and in a higher-level browser-like object model (only the ClientForm part of this browser interface is implemented so far). Of course, automatic execution of much code depends on the use of either the browser-like interface or equivalent DOM methods: otherwise, the code can't know when the JavaScript should be executed. XXX lots of stuff not implemented yet: eg., javascript: URLs (easy to do, though).

It's easy to switch between the ClientForm API and the DOM, thus making it hard to get stuck in a position where further progress requires disproportionate coding effort:

    from urllib2 import urlopen
    from DOMForm import ParseResponse

    response = urlopen("")
    window = ParseResponse(response)
    window.document  # HTML DOM Level 2 HTMLDocument interface
    forms = window._htmlforms  # list of objects supporting ClientForm.HTMLForm i/face
    form = forms[0]
    assert form.name == "some_form"
    domform = form.node  # level 2 HTML DOM HTMLFormElement interface
    control = form.find_control("some_control")  # ClientForm.Control i/face
    domcontrol = control.node  # corresponding level 2 HTML DOM HTMLElement i/face
    doc.some_form._htmlform  # back to the ClientForm.HTMLForm interface again
    doc.some_form.some_control._control  # ClientForm.Control interface again
    response = urlopen(form.click())  # domform.submit() also works

Note that the level 2 HTML DOM interface is currently based on an old version of the specification, with some imperfect changes to provide some support for XHTML.

To interpret JavaScript, you need to pass the interpret argument to ParseResponse or ParseFile:

    window = ParseResponse(response, interpret=["javascript"])

The HTML DOM should allow you to get at anything you need to know. Still, since the DOM does some normalisation and is only created after the original HTML has been fed through HTMLTidy, you may sometimes need or want access to the original HTML. ClientCookie's SeekableProcessor is one way of doing that:

    from ClientCookie import build_opener, SeekableProcessor

    opener = build_opener(SeekableProcessor)
    response = opener.open("")
    window = ParseResponse(response)
    html = response.read()
    response.seek(0)  # carry on using response object as if it hadn't been .read()

Or you can store the html somewhere, then use ParseFile instead of ParseResponse.
If you want the HTML after the JavaScript has been interpreted, use:

    from xml.dom.ext import XHtmlPrint

    XHtmlPrint(doc, fileobj)

XHtmlPrettyPrint makes nicer output. Both functions will print any DOM node, not just an HTMLDocument.

There's some more documentation in the docstrings.

Thanks to Andrew Clover for advice and code on DOM 'liveness', all the PyXML contributors, and Gisle Aas, for the HTML::Form Perl code from which ClientForm was originally derived.

Most of the bugs are in JavaScript support (which is very dodgy) and the DOM implementation. The ClientForm work-alike stuff is relatively stable (but see the entities and select_default bugs listed below).

- except * feature to be fixed. There are a few print statements scattered about, as a result of this. Note that code listed with JavaScript error messages can be the WRONG CODE! Don't take it seriously.
- decorate_DOM(window) after this happens, to regenerate the HTMLForm and all its Controls, and rebind them to the DOM. I probably won't fix this (I'm guessing it won't cause problems).
- The Window class is still just stubs. This will be fixed, gradually. ATM, you can likely quite easily derive your own Window class with stubs that suit your application, and pass it to one of the Parse* functions through the window_class argument.
- javascript: scheme URLs, external JavaScript loading, etc. aren't implemented yet (but they're easy to add).
- innerHTML isn't implemented. Thanks to my hacks (for live-ness, IE compatibility, bug fixes, changes to match the newest DOM standard etc.), it's probably quite buggy, too.
- sgmlop.
- RADIO controls. This should be fixed soon.
- onclick - executed. You just have to fire your own events:

    from DOMForm import fireHTMLEvent, fireMouseEvent

    # Say we've got a DOM node, domnode, representing a button, and we want to
    # simulate clicking it.
    fireHTMLEvent(domnode, "focus")
    fireMouseEvent(domnode, "click")
    fireHTMLEvent(domnode, "blur")
    # Of course, this is missing events like mouseover, which would be fired
    # by a browser, but we probably don't even need the focus or blur either.

For installation instructions, see the INSTALL file included in the distribution. Python 2.3 and PyXML 0.8.3 are required (earlier versions may work, but are untested). Currently mxTidy is required (I may switch to uTidylib at some point). The spidermonkey Python module is required if you want JavaScript interpretation.

Development release. This is the first alpha release: there are many known bugs, and interfaces will change.

Good question. I wanted something smaller, not dependent on any browser, and also liked the idea of an easy-to-understand implementation of the browser object model in pure Python.

2.3 (earlier versions may work, but are untested).

The BSD license (included in distribution). Note that spidermonkey and its Python interface are under the GPL.

Why does _htmlforms begin with an underscore? Because attributes that start with an underscore ("_") are not exposed to JavaScript by the spidermonkey module.

The ClientCookie package makes it easy to get seek()able response objects, which is convenient for debugging. See also here for a few relevant tips. Also see General FAQs.

I prefer questions and comments to be sent to the mailing list rather than direct to me.

John J. Lee, May 2006.
http://wwwsearch.sourceforge.net/old/DOMForm/
CC-MAIN-2017-04
en
refinedweb
In this article I describe a way to create an HTML helper method using ASP.NET MVC3 to make a select element that can be dynamically enabled or disabled without using the "disabled" attribute.

What is the purpose of writing this article? For your understanding I will describe why it isn't feasible to use the "disabled" attribute in ASP.NET MVC. Disabling the HTML element with the "disabled" attribute makes the HTTP POST request ignore that value altogether and not send it back to the server, which would be required in certain cases. Other possibilities to make the element unselectable would be to intercept certain mouse events, but I couldn't make that work. A possible solution to the problem is to hide the element with another element that is invisible to the user, making the original select element uncontrollable by the user. For this I used a span element that I positioned with the "position:relative" CSS value, and encapsulated all the logic and template generation code in an HTML helper method so it can be used with ease anytime it is needed.

For the purpose of this article I created a demo application that will demonstrate a hypothetical usage of the solution presented. The application will be a very simple one containing a single view page with a dropdown list of genders, a checkbox that toggles the status of the dropdown, and a submit button. Our page will use a view model class called PersonViewModel that will contain a property for the selected gender and a list of the possible genders.

    public class PersonViewModel
    {
        [Required]
        public String Gender { get; set; }

        public IEnumerable<SelectListItem> GendersList
        {
            get { ... }
            set { ... }
        }
        ...
    }

The selection of a concrete gender is required to submit the form to the server, therefore the user has to select one of the two values from the dropdown list. By unchecking the checkbox the dropdown list gets disabled, but only visually, and disabling it doesn't prevent the selected value from being sent to the server. Hence, the purpose of this article is fulfilled. To design the above form we can use the following HTML, JavaScript, and Razor statements:

    @using (Html.BeginForm())
    {
        Select gender:
        @Html.DisableableDropDownListFor(x => x.Gender, Model.GendersList, "Please select",
            new { id = "cbGender", style = "color: green;width:120px;" })
        @Html.ValidationMessageFor(x => x.Gender)
        <input type="submit" value="Submit" />
    }

    Enable/disable gender dropdown list
    <input type="checkbox" id="cb_Toggle" checked="true" />

    <script type="text/javascript">
        $("#cb_Toggle").click(function () {
            var $Gender = $("#cbGender");
            if ($(this).attr("checked"))
                $Gender.enableDropDownList();
            else
                $Gender.disableDropDownList();
        });
    </script>

As you can see, we call the DisableableDropDownListFor helper method to generate the markup for the dropdown list, and we have fairly simple JavaScript code that makes the actual enabling and disabling possible. All the user of this method needs to know is the ID of the generated element on which to invoke the two JavaScript methods: enableDropDownList() and disableDropDownList(). In case the ID is omitted, the generated ID in the markup will be the name of the property for which the dropdown list is created. In the case of this view model that would be Gender, but because we defined it explicitly (to cbGender), this won't be taken into account.
If a valid value is selected from the dropdown list and the form is submitted, the invoked action method simply returns some JavaScript code that pops up an alert message with the selected value.

The HTML helper methods are defined in a separate class library called MvcUtilities; the HtmlControlHelper class contains two variants that can be used for generating the markup for a disableable dropdown list. The first variant uses a string parameter to define the name of the property of the view model class that holds the selected value; the other variant uses an expression parameter to define the property. To avoid code repetition I created the DropDownList method that gets invoked by each of the variants:

    static IHtmlString DropDownList(
        String name,
        Object htmlAttributes,
        Func<IDictionary<String, Object>, String> generator)

This is the main method that creates the markup for a disableable dropdown list. Before we get down to business, let's review the markup generated by the invocation of this method for the current Gender parameter defined by our view model class:

    <select id="ddl_cbGender" class="valid" style="color: green; width: 120px;" name="Gender" data- ...>
        <option>Please select</option>
        <option value="male" selected="true">male</option>
        <option value="female">female</option>
    </select>

As you can see, the actual dropdown list is encapsulated in a span element, and this element contains another span element having the ID span_Hider that serves the purpose of hiding our actual dropdown list. The actual dropdown list will have an ID prepended with "ddl_" (ddl_cbGender in this case), while the outer span element will have the id provided by the user, or otherwise the name of the property for which the dropdown list is generated. The hiding of the element is accomplished by the JavaScript code that gets emitted by this helper method:

    <script type="text/javascript">
        $(document).ready(function () {
            $.fn.extend({
                disableDropDownList: function () {
                    var $ddl = $(this);
                    $ddl.find('#span_Hider').css('z-index', '1');
                },
                enableDropDownList: function () {
                    var $ddl = $(this);
                    $ddl.find('#span_Hider').css('z-index', '-1');
                }
            });
        });
    </script>

With this code we create two global methods using jQuery that toggle the status of the span_Hider element by increasing or decreasing its z-index.

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
https://www.codeproject.com/tips/343344/creating-a-disableable-dropdownlist-in-asp-net-mvc?pageflow=fixedwidth
CC-MAIN-2017-04
en
refinedweb
Using Display and Editor Templates in ASP.NET MVC

Today, we discuss how Display and Editor Templates keep your custom MVC form fields consistent in your web application.

If you've worked with forms before in ASP.NET MVC, you know that if you have a duplicate input and display field in a model across multiple screens, most users perform the old "copy and paste" of HTML from one view to another. If you need to display an image or a model with various types, how would you display, or even edit, that model? Regardless of whether you use one field or one model, ASP.NET MVC provides various techniques for displaying and editing data types.

Overview

The techniques I refer to are the Display and Editor helpers. They include:

- DisplayFor() - Display HTML based on a specific type
- DisplayForModel() - Display HTML for a specific model type
- EditorFor() - Allow editing HTML based on a specific type
- EditorForModel() - Allow editing HTML based on a specific model type

The entire idea of using these helper methods is to keep a consistent interface throughout your entire application. Let's start with an example for our Display methods. If we have the following model for our View,

    public class ExampleViewModel
    {
        public string FirstName { get; set; }
        public string LastName { get; set; }
        public DateTime LastLogon { get; set; }
    }

and in our View, we use a display helper like this,

    @Html.DisplayFor(e => e.LastLogon)

MVC would look in \Views\Shared\DisplayTemplates\<type>.cshtml and load that into your View. Yes, it can get that granular. If we did have a DateTime.cshtml in that directory, it would probably look like this:

    Views\Shared\DisplayTemplates\DateTime.cshtml

    @String.Format("{0:D}", Model)
    @* Thursday, 10 April 2008 *@

Whatever type you need for displaying (or editing) data, these HtmlHelpers act just like an

    @Html.Partial("/Views/Shared/DisplayTemplates/DateTime.cshtml", Model.LastLogon)

only it's based more on the data type as opposed to just calling a partial view.

For EditorFor(), you would create an EditorTemplates folder under the Shared folder (just like the DisplayTemplates) and place a DateTime.cshtml file in there which would look like this:

    Views\Shared\EditorTemplates\DateTime.cshtml

    @Html.TextBox("", ViewData.TemplateInfo.FormattedModelValue,
        new { @class = "text-box single-line", type = "datetime" })

Now, every time you write EditorFor() for a DateTime field, this template is what will render out to the View. The same is true for DisplayFor(). They're made for specific data types.

DisplayForModel() and EditorForModel()

The same is true with DisplayForModel() and EditorForModel(). Let's continue with our ExampleViewModel from above. We can place an ExampleViewModel.cshtml in the DisplayTemplates folder that looks like this:

    Views\Shared\DisplayTemplates\ExampleViewModel.cshtml

    @model DanylkoWeb.ViewModel.ExampleViewModel
    <div class="row">
        @Html.LabelFor(e => e.FirstName)
        @Html.DisplayFor(e => e.FirstName)
    </div>
    <div class="row">
        @Html.LabelFor(e => e.LastName)
        @Html.DisplayFor(e => e.LastName)
    </div>
    <div class="row">
        @Html.LabelFor(e => e.LastLogon)
        @Html.DisplayFor(e => e.LastLogon)
    </div>

If you want to make them editable, replace each DisplayFor() with EditorFor() and you're done.

Bootstrap Pagination

Everyone is using Bootstrap nowadays and they have a way to style your paging as shown on their site. There is a way to make a custom pager using DisplayTemplates. One thing you need to pull this off is the MvcPaging library, which can be found in the NuGet repository.
Open the Package Manager Console in Visual Studio (View/Other Windows/Package Manager Console) and type:

    Install-Package MvcPaging

Once that's installed, you can use paging on any of your Views.

    @Html.Pager(Model.Posts.PageSize, Model.Posts.PageNumber, Model.Posts.TotalItemCount)
        .Options(o => o.DisplayTemplate("BootstrapPagination"))

Notice at the end of the chain, we have an Options method that can define a DisplayTemplate for us.

"So what does the BootstrapPagination.cshtml file look like?" I'm glad you asked.

    Views\Shared\DisplayTemplates\BootstrapPagination.cshtml

    @model PaginationModel

    <div class="text-center">
        <ul class="pagination pagination-sm">
            @foreach (var link in Model.PaginationLinks)
            {
                @BuildLink(link)
            }
        </ul>
    </div>

    @helper BuildLink(PaginationLink link)
    {
        var liBuilder = new TagBuilder("li");
        if (link.IsCurrent)
        {
            liBuilder.MergeAttribute("class", "active");
        }
        if (!link.Active)
        {
            liBuilder.MergeAttribute("class", "disabled");
        }

        var aBuilder = new TagBuilder("a");
        aBuilder.MergeAttribute("href", link.Url ?? "#");

        // Ajax support
        if (Model.AjaxOptions != null)
        {
            foreach (var ajaxOption in Model.AjaxOptions.ToUnobtrusiveHtmlAttributes())
            {
                aBuilder.MergeAttribute(ajaxOption.Key, ajaxOption.Value.ToString(), true);
            }
        }

        aBuilder.SetInnerText(link.DisplayText);
        liBuilder.InnerHtml = aBuilder.ToString();
        @Html.Raw(liBuilder.ToString())
    }

Of course, we should place this BuildLink into an HtmlHelper, but I decided to include it in the View for quick demo purposes. Now you can customize your paging control to any way you like using HTML and CSS.

Conclusion

In this post, we covered the basics of how to setup DisplayFor(), DisplayForModel(), EditorFor(), and EditorForModel() using the DisplayTemplates and EditorTemplates folders and took it one step further by using the MvcPaging library to create a custom Bootstrap pagination display type. The best way to keep your interface consistent in your application is to provide each complex type or model with familiar groupings in your Views. Display and Editor Helpers provide a quick and easy way to make your application more modular using specific Views for your data types.

Did you find a better way to create slicker DisplayFor() or EditorFor() Helpers? Post your comments below.
https://www.danylkoweb.com/Blog/using-display-and-editor-templates-in-aspnet-mvc-CR
CC-MAIN-2017-04
en
refinedweb
Hi everyone i have a question about how to use a class so i make this script:

    >>> class className:
            def createName(self, name):
                self.name = name
            def displayName(self):
                return self.name
            def saying(self):
                print 'Hello %s' % self.name

    >>> first = className
    >>> first.createName('X')

I learn this from a tutorial but i have an error:

    Traceback (most recent call last):
      File "<pyshell#11>", line 1, in <module>
        first.createName('X')
    TypeError: unbound method createName() must be called with className instance as first argument (got str instance instead)

the tutorial i watched is using 2.6 and i'm using 2.7

also what is the difference between return and print in def and why i need to use (self,name) instead of only (self)

Thank you very much
https://www.gamedev.net/topic/650158-how-to-use-classand-what-is-return/
CC-MAIN-2017-04
en
refinedweb
The Future of XML 273 An anonymous reader writes "How will you use XML in years to come? The wheels of progress turn slowly, but turn they do. The outline of XML's future is becoming clear. The exact timeline is a tad uncertain, but where XML is going isn't. XML's future lies with the Web, and more specifically with Web publishing. 'Word processors, spreadsheets, games, diagramming tools, and more are all migrating into the browser. This trend will only accelerate in the coming year as local storage in Web browsers makes it increasingly possible to work offline. But XML is still firmly grounded in Web 1.0 publishing, and that's still very important.'" "How will you use XML in years to come?" (Score:5, Insightful) Re:"How will you use XML in years to come?" (Score:5, Interesting) JSON/YAML is/are better (not considering, of course, the variety and maturity of available tools; but then, perhaps, you don't always need most of what is out there in XML tools, either) for lots of things (mostly, the kinds of things TFA notes XML wasn't designed for and often isn't the best choice for),things that aren't marked-up text. Where you actually want an extensible language for text-centric markup, rather than a structured format for interchange of something that isn't marked-up text, XML seems to be a pretty good choice. Of course, for some reason, that seems to be a minority of the uses of XML. Re: (Score:2, Interesting) Re:"How will you use XML in years to come?" (Score:4, Insightful) JSON is inflicting Javascript on everyone. There are other programming languages out there. Also, XML can painlessly create meta-documents made up of other people's XML documents. Re:"How will you use XML in years to come?" (Score:5, Informative) No, it really doesn't, but if "JavaScript" in the name bothers you, you might feel better with YAML. And there are JSON and/or YAML libraries for quite a lot of them. So what? Re:"How will you use XML in years to come?" (Score:5, Insightful) No, it wouldn't because JSON is bare bones data. It's simply nested hash tables, arrays and strings. XML does much more than that. XML can represent a lot of information in a simple, easy-to-understand format. JSON strips it out for speed & efficiency. Which sort of gets into the point I did want to make but was too impatient to explain: JSON is good where JSON is best, and XML is good where XML is best. I dislike the one-uber-alles arguments because it's ignoring other situations and their needs. Would you like to live in a world of S-expressions [wikipedia.org]? The LISP people would point out there are libraries to read/write S-expressions, so why use JSON? The answer of course is that we want more than simply nesting lists of strings. We want our markup languages to fit our requirements, not the other way around. And saying "JSON for Everything", which the original poster did was... silly. My problems with JSON are: JSON is great for AJAX where XML is clunky and a little bit slower (my own speed tests hasn't shown there's a huge hit, but it is significant). XML is great for document-type data like formatted documents or electronic data interchange between heavy-weight processes. My point was that the original poster's JSON is everything was narrow-minded, and that XML answers a very specific need. There are tonnes of mark-up languages out there, and I think XML is a great machine-based language. I hate it when humans have to write XML to configure something though. That really ticks me off. 
But that's the point: there should not be one mark-up language to rule them all. A mark-up language for every purpose. Re:"How will you use XML in years to come?" (Score:4, Insightful) One file (format) will not rule them all. XML is good if you want to design a communication protocol between your software, and some other unknown program. JSON is much lighter. Far less kilobits needed to transfer the same information so when performance is important and you control everything then use JSON. When it comes to humans editing config files I find traditional ini files, or Writing more complex, relational data to disk? Sqlite often solves the problem quickly. Re:"How will you use XML in years to come?" (Score:5, Funny) If you're giving me a choice... why yes, please! Where can I get one of these worlds you're talking about? Re:"How will you use XML in years to come?" (Score:4, Interesting). Re: (Score:3, Interesting) You forgot XSLT.. XSLT is a nice backwards chaining theorem prover, very similar to Prolog. I like it and use it a lot - currently for me it venerates SQL, Hibernate mappings, C# code and Velocity macros from a single source XML document. But there's nothing magic about it, and if we didn't have XSLT it would be very easy to do the same sort of thing in LISP or Prolog, or (slightly more awkwardly) in conventional programming languages. Re: (Score:2) Re:"How will you use XML in years to come?" (Score:5, Insightful) Pop quiz. Here's an excerpt of GML from that page you linked to. Do the contents of this node represent: "Obviously it's two numbers, they're coordinates" you may say. But such things are not "obvious" to an XML parser. If you're an XML parser the answer is (1): it's a simple text string. So to get to the real data you have to parse that text string again to split on a comma, and to turn the two resulting text strings into numbers. Note this is a completely separate parser and is completely outside the XML data model, so all your fancy schema validation, xpath, etc. are useless to access data at this level. - the text string "100,200" - the number 100200 (with a customary comma for nice formatting) - the number 100.2 (hey, that's the way that the crazy Europeans do it) - a tuple of two numbers: 100 and 200 Why all this pain? Because XML simply has no way to say "this is a list of things" or "this is a number." Sure, you can approximate such things. You could write something like: But the fact remains that even though you may intuitively understand this to be two coordinates when you look at it (and at least you can select the coordinates individually with xpath in this example, but they're still strings, not numbers) to XML this is still nothing but a tree of nodes. Did you catch that? A tree of nodes. You're taking a concept which is logically a pair of integers, and encoding it in a format that's representing it in a tree of nodes. Specifically, that tree looks something like this: elementNode name=gml:coordinates \-> textNode, text="\n " * \-> elementNode name=gml:coordinateX \-> textNode text="100" \-> textNode, text="\n " * \-> elementNode name=gml:coordinateY \-> textNode, text="200" \-> textNode, text="\n" * (*: yep, it keeps all that whitespace that you only intended for formatting. XML is a text markup language, so all text anywhere in the document is significant to the parser.) So let's recap. 
Using XML, we've taken a structure which is logically just a pair of integers and encoded it as a tree of 7 nodes, three of which are meaningless whitespace that was only put there for formatting, and even after all this XML has no clue that what we're dealing with is a pair of integers. Now let's try this example in JSON: JSON knows two things that your fancy shmancy XML parser will never know: that 100 and 200 are numbers, and that they are two elements of an array (which might be more appropriately thought of as a "tuple" in this context). It's smart enough to know that the whitespace is not significant, it doesn't build this complex and meaningless node tree; it just lets you express, directly and succinctly, the data you are trying to encode. That's because JSON is a data format, and XML is a marked up text format. But we're suffering from the fact that no one realized this ten years ago, and compensated for the parity mismatch by layering mountains of horribly complex software on top of XML instead of just using something that is actually good at data interchange. Re:"How will you use XML in years to come?" (Score:4, Insightful) The only difference here is that XML separates these 3 (markup, validation, transformation) operations, since you might find situations where you don't need all of them. Re: (Score:3, Insightful) Also, this isn't just a matter of validation. It's a matter of actually being able to access the structure of the data you're trying to encode. OK, so let's Re: (Score:3, Interesting) Yes in fact I did. That's what I was referring to when I talked about the "mountains of horribly complex software" on top of XML. [0.5k of RDF that expresses 100, 200 as integer coordinates] Simple enough. Thank you for expressing so succinctly exactly why I am so depressed. How did you XML people come to have such low standards? How can you call "simple enough" a fragment of code that ta Re:"How will you use XML in years to come?" (Score:4, Insightful) Re: (Score:3, Insightful) XML Schema may let the other end validate it, but it doesn't let the other end make sense of it. The other end can only make sense of it if they've got code written to handle the kind of data it contains: which is true, really, of any data format. Re: (Score:3, Interesting) "You might feel better..." -> "No, it wouldn't..."? WTF is that supposed to mean? How is taht even a response to what precedes it? "JSON is..." -> "XML does much more than that." Again, this is incoherent. XML is simply tree-structure Re: (Score:3, Insightful) On the browser? If you want to use AJAX-like technology, JavaScript is still the only viable and portable option as the programming language for the client side. Re: (Score:2, Insightful) Beyond that scope comparing these two unrelated "things" is irrelevant. The tools and libraries available for XML go well beyond JSON's scope. DOM [w3.org], RSS & ATOM [intertwingly.net], OASIS, Xpath, XSLT, eXist DB [sourceforge.net] are just few examples of tools and libraries surrounding XML. XML is designed to le Re: (Score:3, Interesting) While I understand your pain, XML is still a very nice *markup* language, for marking up documents and simple content trees. Can you imagine HTML / XHTML implemented as JSON? I doubt that. The fault with people here lies in XML abuse, namely SOAP-like XML API-s and using XML for everything, where binary formats, or more compact and simpler formats, like JSON, do better. 
WARNING: GNAA (Score:3, Informative) You know the saying (Score:5, Funny) I believe that this was what you were looking for. (Score:2) [schlockmercenary.com] I don't understand... (Score:5, Insightful) I'm a programmer, just like the rest of you here, so I'm quite used to having to write a parser here or there, or fixing an issue or two in an ant script. The thing that puzzles me, is why it's used so much on the web. XML is bulky, and when designed badly it can be far too complex; this all adds to bandwidth and processing on the client (think AJAX), so I'm not seeing why anyone would want to use it. Formats like JSON are just as usable, and not to mention more lightweight. Where's the gain? Re: (Score:2) Re:I don't understand... (Score:5, Insightful) 1. Looks a lot like HTML. "Oh, it has angle brackets, I know this!" 2. Inertia. 3. Has features that make it a good choice for business: schemas and validation, transforms, namespaces, a type system. 4. Inertia. There just isn't that much need to switch. Modern parsers/hardware make the slowness argument moot, and everyone knows how to work with it. As an interchange format with javascript (and other dynamically typed languages) it is sub-optimal for a number of reasons, and so an alternative, JSON has developed which fills that particular niche. But when I sit down to right yet another line of business app, my default format is going to be XML, and will be for the foreseeable future. Re:I don't understand... (Score:4, Funny) For the majority of applications that use it, it's overboard. You mean like this? [thedailywtf.com] Re:I don't understand... (Score:5, Insightful) XML gives you a parsable standard on two levels; generic XML syntax and specific to your protocol via schemas. It's verbose enough to allow by-hand manual editing while the syntax will catch any errors save semantic errors you'll likely have. It's also a little more versatile as far as the syntax goes. Yes, there are less verbose parsing syntaxes out there, but you always seem to lose something when it comes to manual viewing or editing. Plus, as far as writing parsers, why burn the time when there are so many tools for XML out there? It's a design choice I suppose like every other one; i.e. what are you losing/gaining by DIYing? Personally, I love XML and regret that it hasn't taken off more. Especially in the area of network protocols. People have been trying to shove everything into an HTML pipe, when XML over the much underrated BEEP is a far more versatile. There are costs, though as you've already mentioned. Re: (Score:3, Informative) Oh please. Its bad enough having this bloated standard in data files , but please don't start quadrupaling the amount of bits that need to be sent down a pipe to send the same amount of data just so it can be XML. XML is an extremely poor format to use for any kind of streamed data because you have to read a large chunk of it to find suitable boundaries to process. Not good for efficiency or code simplicity. And if you say "so what" to that then you've obviously Re:I don't understand... (Score:5, Interesting) However, the modern programming age is all about sacrificing performance for convenience (this is why virtually no one is using C or C++ to make web apps, and almost everyone is using a significantly poorer performing language like Python or Ruby). 
We've got powerful computers with tons of RAM and hard drive space, and high-speed internet connections that can transmit vast amounts of data in mere seconds; why waste (valuable programmer) time and energy over-optimizing everything? Instead, developers choose the option that will make their lives easier. XML is widely known, easily understood, and is human readable. I can send an XML document, without any schema or documentation, to another developer and they'll be able to "grok it". There's also a ton of tools out there for working with XML; if someone sends me a random XML document, I can see it syntax colored in Eclipse or my browser. If someone sends me an XML schema, I can use JAXB to generate Java classes to interact with it. If I need to reformat/convert ANY XML document, I can just whip up an XSLT for it and I'm done. So yes, other formats offer some benefits. But XML's universality (which does require a bit of bulkiness) makes it a great choice for most types of data one would like to markup and/or transmit. P.S. JSON is just as usable? Try writing a schema to validate it Same deal with transformations: if you want to alter your JSON data in a consistent way, you have to again write custom code every time. Sure XSLT has a learning curve, but once you master it you can accomplish in a few lines of code what any other language would need tens or even hundreds of lines to do. Re:I don't understand... (Score:5, Insightful) Because it's a standard that everyone (even reluctantly) can agree on. Because there are well-debugged libraries for reading, writing and manipulating it. Because (as a last resort) text is easy to manipulate with scripting languages like perl and python. Because if verbosity is a problem, text compresses very well. Re: (Score:3, Informative) Re:I don't understand... (Score:5, Interesting) I mean, yeah, when I was a kid, we all worked in hand-optimized C and assembler, and tried to pack useful information into each bit of storage, but systems were a lot smaller and a lot more expensive back then. These days, I write perl or python scripts that spit out forty bytes of XML to encode a single boolean flag, and it doesn't even faze me. Welcome to the 21st century. Re: (Score:2, Insightful) so programmers don't have to see or care how much overhead is involved Which is how we got to the point where, Dr. Dewar and Dr. Schonberg [af.mil]: And you're saying overhead doesn't matter? Re:I don't understand... (Score:4, Insightful) XML is, in many cases (including mine), the path of least resistance. It's not particularly fast or efficient, but it's simple and quick and I don't have to spend hours documenting my formats for the dozens of other people in the company who have to use my data. Many of whom are probably not programmers by Dewar and Schonberg's definition, but who still do valuable work for the company. Re: (Score:2) How do you know if what you've done actually gets the job done? Any monkey can type away randomly and get something done, but it's usually not the job that actually needs doing. For that, you need the skills academic work teaches. You missed the point of studying sorting algorithms. They are taught not so that you can reimplement a quicksort later in life, they are taught because they are a great no-frills case study of the basic concepts you need to get a job done while knowing that you got t Re:I don't understand... 
(Score:5, Insightful) -Easily validated -Easily parsed -Easily compressed (in transit or stored) -Human readable in case of emergency -Easily extendable Re: (Score:2, Insightful) Which just means that it has lots of redundancy. Or, as one might call it, bloat. Re:I don't understand... (Score:5, Insightful) Which just means that it has lots of redundancy. Or, as one might call it, bloat.. Re: (Score:2) Which just means that it has lots of redundancy. Or, as one might call it, bloat. Test question: Which is quicker?. or... 3. Spending a few minutes writing code to send your internal data structure to a library that will serialize it into YAML and then NOT running the YAML through a generic compression routine (since YAML has far less bloat and therefore far less need for compression). I think I'll go for option 3. Re: (Score:3, Interesting) >>). A while back (before XML parsers were common) I built a kinda cool system whereby a mainfr Re: (Score:3, Insightful) -it doesn't affect transit time when compressed -it minimally takes more cpu to gunzip a stream, but the same could be said of translating ANY binary format (unless you're sending direct memory dumps, which is dangerous) -it's never really in memory as the entire point is to serialize/deserialize Re: (Score:2) Re: (Score:3, Insightful) Our biggest usage is in our customer data feeds. These feeds are often 1GB+ when compressed. Since switching to an XML format from a tab-delimited format, we've been able to gi Re: (Score:3, Insightful) Let's say you need to store data, and a database is not an option. What format shall you store it in? 1 & 2 are untried, untested, and it is not possible to fi Re: (Score:2) But it isn't and doesn't... (Score:2) Re: (Score:2) Because XML is a standard. Almost all languages have a standards compliant XML parser that you can easily use. Why invent a new format and a parser, when you can use an existing standard that has most of the issues already sorted out? You don't have to spend time working out if a bug is caused by your parser or something else. XML handles things like character escaping, unicode, etc gracefully whereas a format you design may not unless you spend a lot of time on it. Formats like JSON are just as usab Re: (Score:2) Why? You can perform XSL transformations on the server and return plain HTML. Why XML is so popular (Score:3, Interesting) I have a lot of experience consulting with various organizations - some Fortune 500, some nonprofit, some educational - about their software selection process. I've watched many times as a vendor gives a presentation to my employer or client talking about how wonderous it is that their software saves all its data in XML so y Re: (Score:3, Insightful) Maybe another comparison would help: QWERTY vs. Dvorak. The one "everyone" knows and uses - and, incidentally, design keyboard shortcuts according to; I'm looking at you, Vim - was designed to avoid jams in mechanical keyboards [wikipedia.org] way back in the ass-end of time, while the other was designed to be efficient on electronic hardware. A "Dvorak solution" for XML would have to solve some fundamental problem while keeping all the good attributes (no pun intended) of the original. IMO, that would mean more readable c Re: (Score:2) It can't be processed with the likes of awk and sed. Just because you can't use tools made for processing text in Unix line-based format, doesn't mean there aren't tools for this purpose. 
You can even find tools inspired on awk for XML processing, like xmlgawk [sourceforge.net] (also here [vrweb.de]). However... I agree with you that XML is not the answer for everything. For instance, I just hate XML configuration files, exactly because you can't reliably grep, sed, awk, ex, them. Editing XML with vi is not the nicest task either. For config files I usually like INI-style files, for Re: (Score:2) For some config files, XML is the easiest way to go. I wrote an app that stores the entire hierarchy of the GUI's frames, panels, and values as nested nodes of XML. The app then looks at those XML nodes and recreates itself accordingly when loading the config. Using python's xml.dom.minidom [python.org] makes it easier to work with. I agree that in most cases it is overkill, if you know exactly which values need to go where, python's config parser is much easier and the resulting files are smaller. Regarding CSV fi Why is XML so popular (Score:2, Insightful) Re: (Score:2, Informative) Re:Why is XML so popular (Score:4, Funny) Funny, that. I've heard LISPers say "XML looks quite like LISP, only uglier." Re: (Score:3, Informative) Why not store it as a tree in a format computers can parse efficiently? Invent binary format with parent and child offsets and binary tags for the names and values. It's smaller in memory and faster. Better basically. You don't need to parse them if machines are going to read them. And decent human programmers can read them with a debugger or from a hexdump in a file, or write a tool to dump them as a human friendly ASCII during develop Re: (Score:2) I like this way much more than coming up with something new because it means I'd be able to keep my XML generating shell scripts, and just filter the output through a text to binary converter. Re: (Score:2, Insightful) Much Ado About Nothing... (Score:4, Interesting) They've been saying that for years, and frankly it won't happen. A vast amount of users relish the control that having software stored and run locally provides. Of course there will always be exceptions as web based e-mail has shown us. As far as the future of XML... I can't seem to find anything in this article that states anything more than the obvious, it's on the same path it's been on for quite some time. FTA: Is that news to anyone? My understanding of XML is that it's intended use is to provide information, about the information. The thing with XML (Score:3, Interesting) I had someone call me up to design them a simple web app. But he wanted it coded in XML because he thought that was the technology he wanted. His Access database was not web frendly enough. I did correct him a little to put him in check and atleast gave him the right buzz words to use to the next guy. I think XML is dead simple to use if used correctly. I do like it much better that ini files. That is about all I use it for now. Easy to use config files that others have to use. Too many 'this stuff sucks' moments (Score:5, Interesting) I first heard about XML years ago when it was new, and just the concept sucked to me. A markup language based on the ugly and unwieldy syntax of SGML (from which HTML derives)? We don't need more SGML-alikes, we need fewer, was my thought. This stuff sucks. Then a while later I actually had to use XML. I read up on its design and features and thought, OK well at least the cool bit is that it has DTDs to describe the specifics of a domain of XML. 
But then I found out that DTDs are incomplete to the extreme, unable to properly specify large parts of what one should be able to specify with it. And on top of that, DTDs don't even use XML syntax - what the hell? This stuff sucks. I then found that there were several competing specifications for XML-based replacements for the DTD syntax, and none were well-accepted enough to be considered the standard. So I realized that there was going to be no way to avoid fragmentation and incompatibility in XML schemas. This stuff sucks. I spent some time reading through supposedly 'human readable' XML documents, and writing some. Both reading and writing XML is incredibly nonsuccinct, error-prone, and time consuming. This stuff sucks. Finally I had to write some code to read in XML documents and operate on them. I searched around for freely available software libraries that would take care of parsing the XML documents for me. I had to read up on the 'SAX' and 'DOM' models of XML parsing. Both are ridiculously primitive and difficult to work with. This stuff sucks. Of course I found the most widely distributed, and probably widely used, free XML parser (using the SAX style), expat. It is not re-entrant, because XML syntax is so ridiculously and overly complex that people don't even bother to write re-entrant parsers for it. So you have to dedicate a thread to keeping the stack state for the parser, or read the whole document in one big buffer and pass it to the parser. XML is so unwieldy and stupid that even the best freely available implementations of parsers are lame. This stuff sucks. Then I got bitten by numerous bugs that occurred because XML has such weak syntax; you can't easily limit the size of elements in a document, for example, either in the DTD (or XML schema replacement) or expat. You just gotta accept that the parser could blow up your program if someone feeds it bad data, because the parser writers couldn't be bothered to put any kind of controls in on this, probably because they were 'thinking XML style', which basically means, not thinking much at all. This stuff sucks. Finally, my application had poor performance because XML is so slow and bloated to read in as a wire protocol. This stuff sucks. XML sucks in so many different ways, it's amazing. In fact I cannot think of a single thing that XML does well, or a single aspect of it that couldn't have been better planned from the beginning. I blame the creators of XML, who obviously didn't really have much of a clue. In summary - XML sucks, and I refuse to use it, and actively fight against it every opportunity I get. Re: (Score:3, Insightful) Re: (Score:2) As long as you guys want to fit the bill for supporting that shoddy format, go right ahead! interoperability is overrated. Re: (Score:2) An example of nonstandard constraints you have to put on your parser - DTD doesn't al Re: (Score:2) <foo>{100 megabytes of the letter 'a'}</foo> And the second was supposed to be: <{100 MB of 'a'}>hello</foo> Re:Too many 'this stuff sucks' moments (Score:4, Insightful) Too bad I used up all my mod points earlier...this post deserves a +1 Insightful. I was just a neophyte developer when XML first surfaced in buzzword bingo, but it was the beginning of my realization of how to recognize a "Kool-aid" technology: if the people who espouse a technology can not give you a simple explanation of what it is and why it's good, they are probably "drinking the "Kool-aid". 
Unfortunately, I also have since discovered the unsettling corollary: you will have it forced down your throat anyway. Re: (Score:2) Re: (Score:2) Use Lisp and s-expressions. Re: (Score:2) Not "free", but believe it or not, Then I got bitten by numerous bugs that occurred because XML has such weak syntax Based on the exhibited behavior, I suspect virtually all programs that parse XML use SelectSingleNode() (or comparable). And there we have a problem, in that XML itself doesn't require node uniquenes Re: (Score:3, Interesting) While I share your disdain (and I agree with everyone of your points), the question is this - What other *standard* way do we have to describe a format that has *moderate to high* level of complexity. JSON is great when I don't need to apply any constraints on the data. I would gladly choose it (along wit Re: (Score:2) Re: (Score:2, Insightful) To pick just a few of your actual points... Why on earth would you use a separate thread. SAX callbacks allow you ample opportunity to maintain whatever state you need and DOM parsers cache the entire thing into a hierarchy that you can navigate to avoid having to maintain any state of your own. Granted, the us Re: (Score:2) Re: (Score:3, Interesting) Actually, you are demonstrating some cluelessness here. Size bloat is only a small part of why XML massively sucks as a wire protocol compared to functionally equivalent universal representations such as ASN. Re: (Score:2) There's so many more readable formats like json. Or just using byte offsets. Hell we could being using pipe delimited data. the creators of XML made it for document markup (Score:2) XML was never intended as a data storage format. It was intended as a document markup format. The fact that people started immediately using it for arbitrary data came as a surprise to the people who created it. Re: (Score:2) However, I believe that XML isn't even good for marking up textual documents and data. It would be faster, smaller, and less error prone for computers if it were an intelligently defined binary format. It would be easier for humans to read and write as a non-SGML-based format. I think the correct thing that XML should have been is a format which has both a bi A Buzzword's Life (Score:5, Funny) Probably a long, healthy life in a big house on the top of buzzword hill, funded by many glowing articles in magazines like InformationWeek and CIO, and 'research papers' by Gartner. Sitting on the porch yelling, "Get off my lawn!" to upstarts like SOA, AJAX, and VOIP. Hanging out watching tube with cousin HTML and poor cousin SGML. Trying to keep JSON and YAML from breaking in and swiping his stuff. Then fading into that same retirement community that housed such oldsters as EDI, VMS, SNA, CICS, RISC, etc. We're stuck with XML for a loooong time (Score:3, Interesting) All these things are why people use it. All these things are why people abuse it. All these things are why we won't be able to get rid of it soon. TFA has nothing to say about the future of XML but the tools to use XML. XQuery and XML databases. Whoopity do. The threshold for getting posted on YAML (Score:3, Informative) If only there was a standardized format that combined these advantages, without all that XML bloat. There is! Try YAML [yaml.org]. XML's big win is supposed to be its semantics: it tells you not only what data you have, but what sort of data it is. This allows you to create all sorts of dreamy scenarios about computers being able to understand each other and act super intelligently. 
In reality, it leads to massively bloated XML specifications and protracted fights over what's the best way to describe one's data, but not to any of the magic. As my all time favorite Slashdot sig said: "XML is like violence: if it doesn't solve your problem, you aren't using enough of it." JSON is S-expressions done wrong (Score:2) JSON is almost exactly equivalent to LISP S-expressions. Unfortunately, JSON has major security problems due to a Javascript design error. In LISP, there's the "reader", which takes in a string, parses it, and generates a tree structure (or graph; there's a way to express cross-links), and just returns the data structure. Then lISP has "eval", which takes a data structure created by the reader and runs it. Javascript combines both functions into one, called "eval", which takes a string, parses it, and Based on the fact that it's text... (Score:2) It's not rocket science - MS were using it in MediaPlayer long before EkksEmmEll came along... it was called "sticking your crap in angle brackets and parsing it" - HTML is a subset of SGML and I'm pretty sure that it (in its XHTML form) will be around for a while yet. How does that die out? Just because you give it a name and rant about standards in some poxy white paper/media blag doesn't mean it's going to die and go away... XML tables (Score:2) We once had to port live data from Texas to Oregon from giant tables repeatedly, not too well built. So we looked to send XML, enforcing a DTD/schema on the sender teams. We ended up writing the encoders because we used an early and crude compression scheme: We took the source table and counted the number of duplicate sets per column, then returned sorted data in order of highest duplicates to lowest. Then, we encoded in XML using a column, then row order. Scanning dow Graveyard (Score:2) Why not S-expressions? (Score:4, Interesting) For example: <tag1> <tag2> <tag3/> </tag2> <tag1> becomes: (tag1 (tag2 (tag3) ) ) Re: (Score:3, Informative) Sure, you can build a different text representation for XML as sexps. But if it represents the same thing, it doesn't much matter. Imagine that you do so, and you can write a function P that takes xml into sexps and a function Q that takes it back. If Q(P(xml-stuff)) == xml-stuff and P(Q(sexps)) == sexps, then they both do the same thing and you can effectively use either syntax. So you use the syntax you want and convert when you need to. Of course, if either equality doesn't work, then one syntax Concise XML (Score:2) Don't get blindsided by big stuff you can't see (Score:4, Informative) WHATWG's HTML 5 and JSON will have no effect on these other uses. It's just that nobody in hangouts like this sees it. For example, the entire international banking industry runs on XML Schemas. Here's one such standard: IFX. Look at a few links: [csc.com] , [ifxforum.org] , [ifxforum.org] But there are other XML standards in use in banking. The petroleum industry is a heavy user of XML. Example: Well Information Transfer Standard Markup Language WITSML ( and others). The list goes on and on, literally, in major, world-wide industry after industry. XML has become like SQL -- it was new, it still has plenty of stuff going on and smart people are working on it, but a new generation of programmers has graduated from high school, and reacts against it. But it's pure folly to think it's going to go away in favor of JSON or tag soup markup. 
So yes, suceess in Facebook applications can make a few grad students drop out of school to market their "stuff," and Google can throw spitballs at Microsoft with a free spreadsheet written in Javascript, but when you right down to it, do you really think the banking industry, the petroleum industry, and countless others are going to roll over tomorrow and start hacking JSON? Errrm, folks, what's the big fat hairy deal? (Score:5, Informative) And for those of you out there who haven't yet noticed: XML sucks because data structure serialisation sucks. It allways will. You can't cut open, unravel and string out an n-dimensional net of relations into a 1-dimensional string of bits and bytes without it sucking in one way or the other. It's a, if not THE classic hard problem in IT. Get over it. It's with XML that we've finally agreed upon in which way it's supposed to suck. Halle-flippin'-luja! XML is the unified successor to the late sixties way of badly delimited literals, indifference between variables and values and flatfile constructs of obscure standards nobody wants. And which are so arcane by todays standards that they are beyond useless (Check out AICC if you don't know what I mean). Crappy PLs and config schemas from the dawn of computing. That's all there is to XML: a universal n-to-1 serialisation standard. Nothing more and nothing less. Calm down. And as for the headline: Of-f*cking-course it's here to stay. What do you want to change about it (much less 'enhance'). Do you want to start color-coding your data? Talking about the future of XML is allmost like talking about the future of the wheel ("Scientist ask: Will it ever get any rounder?"). Give me a break. I'm glad we got it and I'm actually - for once - gratefull to the academic IT community doing something usefull and pushing it. It's universal, can be handled by any class and style of data processing and when things get rough it's even human readable. What more do you want? Now if only someone could come up with a replacement for SQL and enforce universal utf-8 everywhere we could finally leave the 1960s behind us and shed the last pieces of vintage computing we have to deal with on a daily basis. Thats what discussions like these should actually be about. Re: (Score:2) Just out of curiosity, have you ever had to work with EDI? Because you sound like someone who probably got burnt by something like that in the past MOD PARENT UP! Re:Errrm, folks, what's the big (Score:2, Insightful) Cheers, Qbertino. This is the best explanation of XML's raison d'etre I have ever heard. I think what people might hate most is DTDs. That makes sense. Even their creator says they suck. There are many ways around them... Lisp can be one big full-service XML processor. Easily. With happy ending and no need for the DOM or SAX. The bottom line is, XML is nothing (literally) until you spec YourML. And most people don't have a need for that! So it seems useless to them. If you are writing markup languages for Re: (Score:2) There are XSD alternatives, and also nice tools and editors to handle XSDs: then you're fine. Also, having taken a look at the mainstream C++ APIs for XML, that would make most anyone hate it. It isn't bad in Java or As always (Score:2) Basically like any tool use where it makes most sense, avoid using it in other cases. 
XML is a fad, STEP is the future (Score:5, Interesting) Example: #10=ORGANIZATION('O0001','LKSoft','company'); #11=PRODUCT_DEFINITION_CONTEXT('part definition',#12,'manufacturing'); #12=APPLICATION_CONTEXT('mechanical design'); #13=APPLICATION_PROTOCOL_DEFINITION('','automotive_design',2003,#12); #14=PRODUCT_DEFINITION('0',$,#15,#11); #15=PRODUCT_DEFINITION_FORMATION('1',$,#16); #16=PRODUCT('A0001','Test Part 1','',(#18)); #17=PRODUCT_RELATED_PRODUCT_CATEGORY('part',$,(#16)); #18=PRODUCT_CONTEXT('',#12,''); #19=APPLIED_ORGANIZATION_ASSIGNMENT(#10,#20,(#16)); #20=ORGANIZATION_ROLE('id owner'); Re: (Score:2) Strange. I don't know why, but this STEP reminds me of BASIC. :-) Is this supposed to be a step forward? Wikipedia page for ISO STEP mentions that many consider replacing it with XML [wikipedia.org], or rather creating XML schemas to represent the information STEP does (I didn't find Wikipedia's external reference for this though). ...programs can process and present results of STEP incrementally instead of requiring closing tags... It's not true that XML cannot be rendered incrementally. This Mozilla FAQ [mozilla.org] points out that versions before Firefox 3/Gecko 1.9 don't support it, which makes me believe that Firefox 3 does suppo Make working with XML suck less... (Score:5, Interesting) XML does suck if you stick with some of the W3C standards and common tools. Suggestions to make it less painful: W3C Schema is painful; it forces object-oriented design concepts onto a hierarchical data model. Consider RELAX NG [relaxng.org] (an Oasis-approved standard) instead; it's delightful in comparison. Use the verbose XML syntax when communicating with the less technical - if you've seen XML before, it's pretty easy to comprehend: Switch to the compact syntax when you're among geeks: There's validation support on major platforms, and even a tool (Trang [thaiopensource.com]) to convert between verbose/compact formats, and output to DTD and W3C Schemas. And, if you need to specify data types, it borrows the one technology W3C Schema got right: the Datatypes library [w3.org]. The W3C DOM attempts to be a universal API, which means it must conform to the lowest common denominator in the programming languages it targets. Consider the NodeList [w3.org] interface: While similar to the native list/collection/array interfaces most languages provide, it's not an exact match. So, DOM implementers create an object that doesn't work quite like any other collection on the platform. In Java, this means writing: Instead of: Dynamic languages allow an even more concise syntax. Consider this Ruby builder code to build a trivial XML document: I thought about writing the W3C DOM equivalent of the above, but I'm not feeling masochistic tonight. Sorry. The alternatives depend on your programming language, but plenty of choices exist for DOM-style traversal/manipulation. In-memory object models of large XML document can consume a lot of resources, but often, you only need part of the data. Consider using an XMLPull [xmlpull.org] or StAX [codehaus.org] parser instead. Pull means you control the document traversal, only descending into (and fully parsing) sections of the XML that are of interest. SAX [saxproject.org] based parsers have equivalent capabilities, but the programming model is uncomfortable for many developers. Even better, some Pull processors are wicked fast, even when using them to construct a DOM. 
In Winter 2006, I benchmarked an XML-heavy application, and found WoodStox [codehaus.org] to be an order of magnitude faster at constructing thousands of small DOM4J documents XML in the frontend ... WTF???? (Score:3, Interesting) I've been working with XML ever since it first came out and the whole XML on the front-end is a fad that comes and goes periodically. The pros of XML Cons of XML The pros and cons mean that the best place to use XML is for interoperability between systems/applications developed by different teams/vendors where not much data is sent around and processing is not time sensitive. This does cover some front-end applications where the data can be generated by a program done by one vendor and read by a program done by a different vendor. It does, however, not cover files which are meant to be written and read by the same application. The second best place is to quickly add support for a tree structured storage format for data to an application (for example, for a config file), since you can just pick-up one of the XML libraries out there and half your file format problems will be solved (you still have to figure out and develop the "what to put in there" and "where to put it" part, but need not worry about most of the mechanics of generating, parsing and validating the file) Re:XML needs to be easier to read (Score:5, Interesting) At the game studio where I work, all our newest tools are written in C#, and use XML as a data source (typically indirectly though serialized objects). Heavyweight objects (textures, models, audio) are naturally stored in a binary format, which is optimized for the task at hand. The XML-based formats are essentially our game data's source files, and tends to function in a metadata-type capacity. As a simple example, our audio scripts store a lot of parameters about how to play a sound (pitch and volume variations, choosing among multiple variants, category and volume data, etc), and this metadata simply references external binary audio files, typically stored in a standard format like Ogg Vorbis or ADPCM compressed wave files. This metadata is compiled into a binary run-time version using a proprietary format designed to allow us to easily filter versions. These binary formats are then packed into larger containers for simpler management. Since I work on an MMO, we have to think about versioning our binary data, which tends to be challenging. XML is a great format for us, being so widely supported, since we use both native parsing libraries as well as a lightweight custom parser for our C++ tools (or if we need to support in-game loading for the in-house version of the game). It's easy to look into a file format to see what might be going wrong using just a text editor, and with I don't know what the argument about not knowing what every tag means, like in HTML. The entire point of XML is to be extensible, meaning that it's the client application that determines what the tags ultimately mean. And using SweetXML, btw, misses one of the great benefits of using XML, which is that's it's a standard for which you're likely never going to have to write parsing libraries. It's fine if you want to go that route, but just be aware that you may not have the choice of libraries that you would have by using standard XML. XML does tend to suffer from the "golden hammer" syndrome. 
Honestly, I'm not a huge fan of it's verbosity or general readability either, but if you take it for what it is, and use it sensibly, it's just another nifty tool you as a programmer can make good use of. After all, wouldn't you rather be working on more important parts of your project than fiddling with a text parser?
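The pull-parsing comparison made a few comments above (XMLPull/StAX versus SAX and DOM) is easier to see with a concrete sketch. The following is a minimal, illustrative StAX example only; it assumes a hypothetical orders.xml containing <order id="..."> elements and is not code from this thread:

import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamReader;
import java.io.FileInputStream;

public class PullExample {
    public static void main(String[] args) throws Exception {
        XMLInputFactory factory = XMLInputFactory.newInstance();
        XMLStreamReader reader =
            factory.createXMLStreamReader(new FileInputStream("orders.xml"));
        while (reader.hasNext()) {
            // The caller asks for the next event, instead of being called back as with SAX.
            if (reader.next() == XMLStreamConstants.START_ELEMENT
                    && "order".equals(reader.getLocalName())) {
                System.out.println("order id = " + reader.getAttributeValue(null, "id"));
            }
        }
        reader.close();
    }
}

Because the application drives the traversal, it can simply skip past subtrees it does not care about, which is the point the "pull" commenters are making.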
https://developers.slashdot.org/story/08/02/07/2141221/the-future-of-xml
CC-MAIN-2017-04
en
refinedweb
If you think you’ve found a bug in Python, what’s next? I'll guide you through the process of submitting a patch, so you can avoid its pitfalls and find the shortest route to becoming a Python contributor! This is the final post in a three part series. In Night of the Living Thread I fixed a bug in Python's threading implementation, so that threads wouldn't become zombies after a fork. In Dawn of the Thread I battled zombie threads in Python 2.6. Now, in the horrifying conclusion, I return to the original bugfix and submit it to the core Python team. Humanity's last hope is that we can get a patch accepted and stop the zombie threads...before it's too late. The action starts when I open a bug in the Python bug tracker. The challenge is to make a demonstration of the bug. I need to convince the world that I'm not crazy: the dead really are rising and walking the earth! Luckily I have a short script from Night of the Living Thread that shows the zombification process clearly. Next I have to fix the bug and submit a patch. I'm confused here, since the bug is in Python 2.7 and 3.3: do I submit fixes for both versions? The right thing to do is clone the Python source: hg clone I fix the bug at the tip of the default branch. The Lifecycle of a Patch doc in the Python Developer's Guide tells me to make a patch with hg diff. I attach it to the bug report by hitting the "Choose File" button and then "Submit Changes." After this, the Python Developer's Guide is no more use. The abomination I am about to encounter isn't described in any guide: The Python bug tracker is a version of Roundup, hacked to pieces and sewn together with a code review tool called Rietveld. The resulting botched nightmare is covered in scabs, stitches, and suppurating wounds. It's a revolting Frankenstein's monster. (And I thought this was only a zombie movie.) When I upload a patch to the bug tracker, Roundup, it is digested and spit out into the code review tool, Rietveld. It shows up like this, so a Python core developer can critique my bugfix. Charles-François Natali is my reviewer. He suggests a cleaner bugfix which you can read about in my earlier post, and shows me how to improve my unittest. Tragically, a week passes before I know he's reviewed my patch. I keep visiting the issue in Roundup expecting to see comments there, but I'm not looking where I should be: there's a little blue link in Roundup that says "review", which leads to Rietveld. That's where I should go to see feedback. Precious time is lost as hordes of zombie threads continue to ravage the landscape. Even worse, my Gmail account thinks Rietveld's notifications are spam. It turns out that the bug tracker was once breached by spammers and used to send spam in the past, so Gmail is quick to characterize all messages from bugs.python.org as spam. I override Gmail's spam filter with a new filter: Once I make the changes Charles-François suggests, I try to re-upload my patch. Clicking "Add Another Patch Set" in Rietveld doesn't work: it shows a page with a TypeError and a traceback. So I follow the instructions to upload a patch using the upload.py script from the command line and that throws an exception, too. I can't even cry out for help: hitting "reply" to add a comment in Rietveld fails. I tremble in fear. Just when humanity's doom seems inevitable, I find a way out: It turns out I must upload my new patch as an additional attachment to the issue in Roundup. Then Roundup, after some delay, applies it to the code review in Rietveld. 
Finally, I can address Charles-François's objections, and he accepts my patch! Roundup informs me when he applies my changes to the 2.7, 3.3, and default branches. As the darkness lifts I reflect on how contributing to Python has benefited me, despite the horror. For one thing, I learned a few things about Python. I learned that every module in the standard library imports its dependencies like this example from threading.py:

from time import time as _time, sleep as _sleep

When you execute a statement like from threading import *, Python only imports names that don't begin with an underscore. So renaming imported items is a good way to control which names a module exports by default, an alternative to the __all__ list. The code-review process also taught me about addCleanup(), which is sometimes a nicer way to clean up after a test than either tearDown or a try/finally block. And I learned that concurrency bugs are easier to reproduce in Python 2 with sys.setcheckinterval(0) and in Python 3 with sys.setswitchinterval(1e-6). But the main benefit of contributing to Python is the satisfaction and pride I gain: Python is my favorite language. I love it, and I saved it from zombies. Heroism is its own reward.
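A minimal sketch of the addCleanup() idea mentioned above - a generic unittest example, not code from the actual threading patch:

import os
import tempfile
import unittest

class ExampleTest(unittest.TestCase):
    def test_writes_file(self):
        # addCleanup registers the teardown right next to the setup,
        # and the cleanups run (in reverse order) even if the assertions fail.
        fd, path = tempfile.mkstemp()
        self.addCleanup(os.remove, path)
        self.addCleanup(os.close, fd)
        os.write(fd, b"data")
        self.assertTrue(os.path.exists(path))

if __name__ == "__main__":
    unittest.main()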
https://dzone.com/articles/day-thread
CC-MAIN-2017-04
en
refinedweb
For a given list, I would like my output to have the line "Deleting node with value ..." for each node. My destructor function works for a 2-element list, but for a 3-element list it deletes a certain node more than once, and for a list of any size greater than 3, I get an infinite loop. I try tracing through the code, but I am not sure what is going on. Any suggestions? Thanks.

#include <iostream>
#include <cassert>
#include "lists.h"
using namespace std;

ListNode::ListNode (int k)
{
    myValue = k;
    myNext = 0;
}

ListNode::ListNode (int k, ListNode* ptr)
{
    myValue = k;
    myNext = ptr;
}

ListNode::~ListNode ()
{
    cout << "Deleting node with value " << myValue << endl;
    for (ListNode* p = this; p != 0; ) {
        p = p->myNext;
        delete p;
    }
}
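For reference, one common way such a node destructor is restructured so that each node is deleted exactly once is sketched below. This is an illustrative suggestion, not part of the original post, and it relies on recursion, so it assumes the list is short enough that the chain of nested destructor calls is acceptable:

ListNode::~ListNode ()
{
    cout << "Deleting node with value " << myValue << endl;
    // Deleting myNext invokes the next node's destructor, which in turn
    // deletes its successor, so the whole tail is freed exactly once.
    delete myNext;
    myNext = 0;
}

The original loop runs inside every node's destructor and also deletes successors explicitly, which is why later nodes end up deleted more than once.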
https://www.daniweb.com/programming/software-development/threads/104689/destructor-function-for-list-structure
CC-MAIN-2017-04
en
refinedweb
; Ui::CalculatorForm ui; ui.setupUi(widget); widget->show(); return app.exec(); } In this case, the Ui::CalculatorForm is an interface description object from the ui_calculatorform.h file that sets up all the dialog's widgets and the connections between its signals and slots. In this approach, we subclass a Qt widget and set up the user interface from within the constructor. Components used in this way expose the widgets and layouts used in the form to the Qt widget subclass, and provide a standard system for making signal and slot connections between the user interface and other objects in your application. The generated Ui::CalculatorForm structure is a member of the class. This approach is used in the Calculator Form example. To ensure that we can use the user interface, we need to include the header file that uic generates before referring to Ui::CalculatorForm: #include "ui_calculatorform.h" This means that the .pro file must be updated to include calculatorform.h: HEADERS = calculatorform.h The subclass is defined in the following way: class CalculatorForm : public QWidget { Q_OBJECT public: CalculatorForm(QWidget *parent = 0); private slots: void on_inputSpinBox1_valueChanged(int value); void on_inputSpinBox2_valueChanged(int value); private: Ui::CalculatorForm ui; }; The important feature of the class is the private ui object which provides the code for setting up and managing the user interface. The constructor for the subclass constructs and configures all the widgets and layouts for the dialog just by calling the ui object's setupUi() function. Once this has been done, it is possible to modify the user interface as needed. CalculatorForm::CalculatorForm(QWidget *parent) : QWidget(parent) { ui.setupUi(this); } We can connect signals and slots in user interface widgets in the usual way by adding the on_<object name> - prefix. For more information, see widgets-and-dialogs-with-auto-connect. The advantages of this approach are its simple use of inheritance to provide a QWidget-based interface, and its encapsulation of the user interface widget variables within the ui data member. We can use this method to define a number of user interfaces within the same widget, each of which is contained within its own namespace, and overlay (or compose) them. This approach can be used to create individual tabs from existing forms, for example. together with a standard QWidget-based class. This approach makes all the user interface components defined in the form directly accessible within the scope of the subclass, and enables signal and slot connections to be made in the usual way with the connect() function. This approach is used in the Multiple Inheritance example. We need to include the header file that uic generates from the calculatorform.ui file,: CalculatorForm(QWidget *parent = 0); private slots: void on_inputSpinBox1_valueChanged(int value); void on_inputSpinBox2_valueChanged(int value); }; We inherit Ui::CalculatorForm privately to ensure that the user interface objects are private in our subclass. We can also inherit it with the public or protected keywords in the same way that we could have made ui public or protected in the previous case. The constructor for the subclass performs many of the same tasks as the constructor used in the A resource file containing a UI file is required to process forms at run time. Also, the application needs to be configured to use the QtUiTools module. 
This is done by including the following declaration in a qmake project file, ensuring that the application is compiled and linked appropriately.. The QUiLoader::load() function constructs the form widget using the user interface description contained in the file. The QtUiTools module classes can be included using the following directive: #include <QtUiTools> The QUiLoader::load() function is invoked as shown in this code from the Text Finder example: The signals and slots connections defined for compile time or run time forms can either be set up manually or automatically, using QMetaObject's ability to make connections between signals and suitably-named slots. Generally, in a QDialog, if we want to process the information entered by the user before accepting it, we need to connect the clicked() signal from the OK button to a custom slot in our dialog. We will first show an example of the dialog in which the slot is connected by hand then compare it with a dialog that uses automatic connection. A Dialog Without Auto-Connect We define the dialog in the same way as before, but now include a slot in addition to the constructor: class ImageDialog : public QDialog, private Ui::ImageDialog { Q_OBJECT public: ImageDialog(QWidget *parent = 0); private slots: void checkValues(); }; The checkValues() slot will be used to validate the values provided by the user. In the dialog's constructor we set up the widgets as before, and connect the Cancel button's clicked() signal to the dialog's reject() slot. We also disable the autoDefault property in both buttons to ensure that the dialog does not interfere with the way that the line edit handles return key events: ImageDialog::ImageDialog(QWidget *parent) : QDialog(parent) { setupUi(this); okButton->setAutoDefault(false); cancelButton->setAutoDefault(false); ... connect(okButton, SIGNAL(clicked()), this, SLOT(checkValues())); } We connect the OK button's clicked() signal to the dialog's checkValues() slot which we implement as follows: void ImageDialog::checkValues() { if (nameLineEdit->text().isEmpty()) (void) QMessageBox::information(this, tr("No Image Name"), tr("Please supply a name for the image."), QMessageBox::Cancel); else accept(); } This custom slot does the minimum necessary to ensure that the data entered by the user is valid - it only accepts the input if a name was given for the image. Widgets and Dialogs with Auto-Connect Although it is easy to implement a custom slot in the dialog and connect it in the constructor, we could instead use QMetaObject's auto-connection facilities to connect the OK button's clicked() signal to a slot in our subclass. uic automatically generates code in the dialog's setupUi() function to do this, so we only need to declare and implement a slot with a name that follows a standard convention: void on_<object name>_<signal name>(<signal parameters>); Using this convention, we can define and implement a slot that responds to mouse clicks on the OK button: class ImageDialog : public QDialog, private Ui::ImageDialog { Q_OBJECT public: ImageDialog(QWidget *parent = 0); private slots: void on_okButton_clicked(); }; Another example of automatic signal and slot connection would be the Text Finder with its on_findButton_clicked() slot. 
We use QMetaObject's system to enable signal and slot connections: QMetaObject::connectSlotsByName(this); This enables us to implement the slot, as shown below: void TextFinder::on_findButton_clicked() { QString searchString = ui_lineEdit->text(); QTextDocument *document = ui_textEdit->document(); bool found = false;"), "Sorry, the word cannot be found."); } } } Automatic connection of signals and slots provides both a standard naming convention and an explicit interface for widget designers to work to. By providing source code that implements a given interface, user interface designers can check that their designs actually work without having to write code.
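For the run-time form processing described earlier, a minimal sketch of how QUiLoader::load() is typically invoked may also help. The resource path, the helper function name, and passing the caller's widget as parent are assumptions here, not text taken verbatim from the Text Finder example:

#include <QtUiTools>
#include <QFile>
#include <QWidget>

QWidget *loadTextFinderForm(QWidget *parent)
{
    QUiLoader loader;
    QFile file(":/forms/textfinder.ui");   // assumed resource path
    file.open(QFile::ReadOnly);
    QWidget *formWidget = loader.load(&file, parent);
    file.close();
    return formWidget;
}

The returned widget can then be placed in a layout, and its child widgets looked up by name, just as with a compile-time form.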
http://doc.qt.io/qt-5/designer-using-a-ui-file.html
CC-MAIN-2017-04
en
refinedweb
The Samba-Bugzilla – Bug 8463 Buffer-overflow in dirsort plugin when directory contents change at wrong time. Last modified: 2017-01-03 07:14:25 UTC

Created attachment 6901 [details] Prevent buffer overflow when directory contents change

The dirsort vfs plugin opens the directory and reads all entries to count them and figure out how much data to allocate; it then uses rewinddir() and reads the entries again, this time copying them into the allocated buffer. The problem is that the second time through you're not guaranteed to get the same list of entries - if a new file/directory was created in the mean time then readdir() will return that new entry too and the code will attempt to write more into the buffer than it allocated space for. The following little test demonstrates this behaviour:

-------------------------------------------------------------
#include <stdio.h>
#include <dirent.h>
#include <unistd.h>
#include <sys/stat.h>

#define DIR_PATH "/tmp/rewinddir_test"
#define NEW_FILE (DIR_PATH "/foobar")

int main()
{
    DIR *dir;
    int cnt;

    /* set up test directory */
    mkdir(DIR_PATH, 0755);
    dir = opendir(DIR_PATH);

    /* first read of directory */
    cnt = 0;
    while (readdir(dir))
        cnt++;
    printf("first pass: num-files=%d\n", cnt);

    /* create new file and rewind */
    fclose(fopen(NEW_FILE, "a"));
    rewinddir(dir);

    /* second read of directory */
    cnt = 0;
    while (readdir(dir))
        cnt++;
    printf("second pass: num-files=%d\n", cnt);

    /* clean up */
    closedir(dir);
    unlink(NEW_FILE);
    rmdir(DIR_PATH);
    return 0;
}
-------------------------------------------------------------

The attached patch fixes this by breaking out of the loop if we would write too much into the buffer.

Fixed by commit cdcb6319127883d724508da3f6140a1e2aca75af
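The general shape of that fix looks roughly like the following sketch; the function and variable names are illustrative only and are not the actual Samba patch:

#include <dirent.h>

/* Copy at most max_entries entries into 'table'; returns how many were copied. */
static int copy_entries(DIR *dir, struct dirent *table, int max_entries)
{
        struct dirent *entry;
        int i = 0;

        rewinddir(dir);
        while ((entry = readdir(dir)) != NULL) {
                if (i >= max_entries)   /* directory grew since the counting pass */
                        break;
                table[i++] = *entry;
        }
        return i;
}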
https://bugzilla.samba.org/show_bug.cgi?id=8463
CC-MAIN-2017-04
en
refinedweb
To scale applications it becomes necessary to separate thread creation and management from the rest of the application. Using thread pools is an approach typically employed in large scale systems. This article provides a quick overview of the executor framework in Java and provides examples of how to work with thread pools.

Significance of Thread Pools
Thread pools use worker threads to minimize thread creation overhead. Thread pools reduce the memory management overhead, which is important for large scale applications. Thread pools also allow applications to degrade gracefully.

Executor Service Objects
Java supports the executor framework, which provides the enablers for thread creation and management. Listed are some of the Java objects of the executor service and their key purpose.

Executor
The Executor interface provides a way to decouple tasks from how a task will run, which thread to use, etc. Executor is normally used instead of creating threads directly.

ExecutorService
The ExecutorService interface represents an asynchronous execution mechanism which is capable of executing tasks in the background.

ThreadPoolExecutor
ExecutorService is an interface and ThreadPoolExecutor is one of the concrete implementations of ExecutorService. It executes submitted tasks using one of several pooled threads. ThreadPoolExecutor provides many adjustable parameters and hooks. For ease of programming it is recommended to use the static factory methods provided by Executors.

Executors
Provides factory and utility methods for executors. They provide static methods like newFixedThreadPool() and newCachedThreadPool() which are much easier to use.

Approaches for Thread Pools
Java supports several approaches for handling thread pools. These include:

Fixed thread pool
This approach reuses a fixed number of threads. At any point at most "n" threads would be active. If additional tasks are submitted when all threads are active, the tasks are queued. Threads in the pool exist until the pool is explicitly shut down.

Cached thread pool
This approach creates new threads as needed, but will reuse previously constructed threads when they are available. If no thread is available for a task, a new thread is created and added to the pool. Threads that are not used for 60 seconds are terminated and removed from the cache.

Single thread pool
This approach uses a single worker thread. Tasks submitted are executed sequentially. This can be assumed equivalent to a fixed thread pool of size "1". The primary difference is that a fixed thread pool can be reconfigured to use additional threads but a single thread pool is not re-configurable.

Example of Thread Pool usage
This example shows usage of fixed, cached and single thread pools.
package com.sourcetricks.threadpool; import java.util.concurrent.ExecutorService; import java.util.concurrent.Executors; import java.util.concurrent.LinkedBlockingQueue; import java.util.concurrent.ThreadPoolExecutor; import java.util.concurrent.TimeUnit; public class ThreadPool { class MyThread extends Thread { private int id; public MyThread(int id) { this.id = id; } public void run() { System.out.println("Starting thread " + id); doSomeWork(); System.out.println("Completed thread " + id); } private void doSomeWork() { try { Thread.sleep(5000); } catch (InterruptedException e) { e.printStackTrace(); } } } // Not using thread pool private void doWithoutThreadPool() { for ( int i = 0; i < 20; i++ ) { MyThread thread = new MyThread(i); thread.start(); } } // Using fixed thread pool private void doWithFixedThreadPool1() throws InterruptedException { ExecutorService executor = Executors.newFixedThreadPool(5); for ( int i = 0; i < 20; i++ ) { MyThread thread = new MyThread(i); executor.execute(thread); } System.out.println("Active thread count = " + ((ThreadPoolExecutor)executor).getActiveCount()); // Don't accept new work executor.shutdown(); // Wait for 30 secs for the threads to complete executor.awaitTermination(30, TimeUnit.SECONDS); } // Using fixed thread pool. Directly uses the ThreadPoolEexcutor // which provides finer control private void doWithFixedThreadPool2() throws InterruptedException { int core = 5; int max = 10; int keepalive = 5000; ExecutorService executor = new ThreadPoolExecutor( core, max, keepalive, TimeUnit.MILLISECONDS, new LinkedBlockingQueue<Runnable>()); for ( int i = 0; i < 20; i++ ) { MyThread thread = new MyThread(i); executor.execute(thread); } System.out.println("Active thread count = " + ((ThreadPoolExecutor)executor).getActiveCount()); executor.shutdown(); executor.awaitTermination(30, TimeUnit.SECONDS); } // Using cached thread pool private void doWithCachedThreadPool() throws InterruptedException { ExecutorService executor = Executors.newCachedThreadPool(); for ( int i = 0; i < 20; i++ ) { MyThread thread = new MyThread(i); executor.execute(thread); } System.out.println("Active thread count = " + ((ThreadPoolExecutor)executor).getActiveCount()); } // Using single thread pool private void doWithSingleThreadPool() throws InterruptedException { ExecutorService executor = Executors.newSingleThreadExecutor(); // Executor returned is not reconfigurable for ( int i = 0; i < 20; i++ ) { MyThread thread = new MyThread(i); executor.execute(thread); } } public static void main(String[] args) { ThreadPool threadPool = new ThreadPool(); try { threadPool.doWithCachedThreadPool(); } catch (InterruptedException e) { e.printStackTrace(); } } }
http://www.sourcetricks.com/2014/04/thread-pools-in-java.html
CC-MAIN-2017-04
en
refinedweb
SERIALIZE A hashtable that contains only String Type

Check this post out for clues!

Just as a note - a bit off topic - if your Hashtable only contains strings, and only will ever contain strings, you should consider using the .NET StringDictionary class: same usage as a normal Hashtable, but type safe, so it'll save you having to cast results or call ToString() all the time! You'll want:

using System.Collections.Specialized;

at the top of your code to use it!
http://www.nullskull.com/q/10019340/how-to-assign-a-string-to-hash-table.aspx
CC-MAIN-2014-10
en
refinedweb
Andi,I wrote the following description of the core_pattern pipe feature. Does thisseem okay?- name (or a pathname relative to the root directory, /), and must immediately follow the '|' character. * The process created to run the program runs as user and group root. * Arguments can be supplied to the program, delimited by white space (up to a total line length of 128 bytes).Cheers,MichaelAndi Kleen wrote:> Michael Kerrisk wrote:>> On Tue, Apr 15, 2008 at 11:09 PM, Michael Kerrisk>> <mtk.manpages@googlemail.com> wrote:>>> Hi Andi,>>>>>> In 2.6.19 you added the pipiing syntax>>> () to core_pattern. Petr pointed out>>> that this is not yet documented in core(5), so I set to testing it.>>>>>> The change log has the text:>>>>>> The core dump proces will run with the privileges and in the name space>>> of the process that caused the core dump.> > My memory is fuzzy but something might have changed this afterwards> (there were some semantics changes afterwards by other people) I think> my original version ran as non root.> > Anyways the reference as usual is the code, not the change log.> >>> This appears not to be true (as tested on 2.6.25-rc8). Instead the>>> pipe program is run as root. I'm not sure what "in the name space of>>> the process that caused the core dump" means > > namespace is a concept from plan9. It basically means the tree> of mounts the current process has access to. On 99+% of the systems> that is only a single global tree, but there is support for processes> creating their own name space using clone CLONE_NEWNS and then> mount/umount/mount --bind etc. Linux VFS had this support for> some time.> > The whole thing is very obscure but perhaps some more> coverage in the man pages would be not bad. It seems to move> slowly out of obscurity now with all the new container work.> > There is some scattered information in Documentation/*. You'll need> someone else to explain you all the finer details though.> > Also there are lots of different mounts now since a few 2.6 kernels --> to be honest I don't understand what they are all good for.> > > -- I wondered if it might>>> mean that the current working directory of the program would be the>>> same as that of the process that caused the core dump. However that>>> is not so: the current directory for the pipe program is the root>>> directory.> > Basically with a different namespace the paths can change completely,> which can in theory have some unpleasant effects on the core dumper> script. I skimped this by just always using the same as the process.> > -Andi> > -- Michael KerriskLinux man-pages maintainer; to report a man-pages bug? Look here:
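For context, a pipe-style core_pattern of the kind being documented is set up along these lines; the helper path is a made-up example, while %p and %e are the PID and executable-name specifiers from core(5):

echo '|/usr/local/sbin/core-collector %p %e' > /proc/sys/kernel/core_pattern

When a process dumps core, the kernel then runs the named program as root and feeds it the core image on standard input, with the expanded specifiers passed as arguments.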
https://lkml.org/lkml/2008/4/18/193
CC-MAIN-2014-10
en
refinedweb
Why doesn't JAXB find my subclass?

People often get confused about why their sub-classes are not used by JAXB when they read an XML document that uses @xsi:type into Java objects. This question was asked in the forum (I don't think this is the first time but I can't find a reference.)

The first thing you should do, and this applies not only to this issue but all the other unmarshalling related issues, is to register a ValidationEventHandler. Whenever JAXB finds an error in a document, it reports a problem to this interface, then it will try to recover from the error. The default ValidationEventHandler is the one that just ignores all the errors, and while this is useful when you just want to read your XML, this makes the trouble-shooting difficult. If you implement this interface and print out the fields of each ValidationEvent (or just use a debugger to sniff around), you will see what "problems" JAXB is hitting.

Now, let's get back to why we are having this problem. When JAXB unmarshals a document, it does this with a set of Java classes that are known to JAXBContext. Such a set of classes is statically known to a JAXBContext when it's created. This is rather different from Java serialization, where it just "finds" a class dynamically as you de-serialize a stream. Java serialization can do this because a class name is included in the object stream. In XML, all you have is a namespace URI and a local name pair that identifies a Java class indirectly. The only way for JAXB to know what Java class to instantiate is by knowing in advance what XML type names map to what Java classes. So that's why JAXB can't just locate the "right" class.

So, when you hit a problem where JAXB doesn't use the right class for @xsi:type, it almost always means that the JAXBContext does not include the right set of classes. An error message from the JAXB runtime should verify this, as it will say something like "I found an XML type 'foobar' but I don't know any Java class that matches it".

JAXBContext can be created in two slightly different ways, although underneath they do pretty much the same thing.

- You can give it a list of java.lang.Classes. JAXB then analyzes those classes, and if those classes statically refer to other classes, they will also be added to "the set". This process will be repeated until all statically reachable classes are accounted for.
- You can give it a set of package names. JAXBContext locates ObjectFactory classes for those packages, and then does the same process outlined above. Since ObjectFactory is a factory class and it tends to have a reference to all the classes in that package in the form of factory methods (if it's generated by XJC), this effectively adds all the classes in a package in one shot.

This "statically reachable" portion deserves more explanation. If Foo has a field whose type is Bar, Bar is statically reachable, because a program looking at Foo can discover Bar through reflection. Unfortunately, inheritance is not statically reachable --- if Zot extends Foo, a program that looks at Foo cannot find the Zot class, because you can't list such classes.

This is why JAXB fails to locate your sub-classes. Because Java doesn't let us do so. Therefore, to fix this problem, you only need to change the way the JAXBContext is created. The simplest way, if you are using XJC, is to create a JAXBContext from a list of packages.
Otherwise, you need to list the sub-classes explicitly in the JAXBContext.newInstance invocation, like this: JAXBContext.newInstance(Foo.class, Zot.class). If your use of JAXB is substantial and you can't list those class names manually, maybe you can write a little annotation processor that generates a list of class names (if you are interested in writing it, let's talk, so that we can host it in the jaxb project for others to benefit!)

Comment by simond - 2012-11-21 14:23: Kohsuke, many thanks for the ValidationEventHandler tip. Made my day a lot easier.
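To make the two tips from the post concrete, here is a minimal sketch (my own illustration, not from the original post) that registers a ValidationEventHandler and passes the subclass explicitly to JAXBContext.newInstance. It assumes Foo and Zot are JAXB-annotated classes and that input.xml is a hypothetical document that uses @xsi:type to select Zot:

import java.io.File;
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Unmarshaller;
import javax.xml.bind.ValidationEvent;
import javax.xml.bind.ValidationEventHandler;

public class SubclassUnmarshalDemo {
    public static void main(String[] args) throws Exception {
        // List the subclass Zot explicitly; it is not statically reachable from Foo.
        JAXBContext context = JAXBContext.newInstance(Foo.class, Zot.class);

        Unmarshaller unmarshaller = context.createUnmarshaller();
        // Report every problem JAXB encounters instead of silently ignoring it.
        unmarshaller.setEventHandler(new ValidationEventHandler() {
            public boolean handleEvent(ValidationEvent event) {
                System.err.println("JAXB event: severity=" + event.getSeverity()
                        + ", message=" + event.getMessage()
                        + ", location=" + event.getLocator());
                return true; // keep unmarshalling, but now the problem is visible
            }
        });

        Object result = unmarshaller.unmarshal(new File("input.xml"));
        // If the document used xsi:type pointing to Zot, this prints the subclass.
        System.out.println(result.getClass());
    }
}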
https://weblogs.java.net/blog/kohsuke/archive/2006/04/why_doesnt_jaxb.html
CC-MAIN-2014-10
en
refinedweb
check my code please? Rene Rad Greenhorn Joined: Feb 10, 2010 Posts: 15 posted Mar 29, 2010 21:22:02 0 Hey everyone, I'm writing a method to shuffle a deck and am unsure if it's executing properly. I'm not sure how to check if it is shuffling 1000 times. (using constant public static final int TIMES_TO_SHUFFLE = 1000) My instructions are in the javadoc before the code. I tried using the debugger but I kept looping in the block and am unsure how to properly use it. Here is my code... import java.util.Random; import java.util.ArrayList; public class Deck { /** The number of times to shuffle */ public static final int TIMES_TO_SHUFFLE = 1000; private ArrayList<Card> deck; // a deck of cards /** * Constructor for objects of class Deck */ public Deck() { deck = new ArrayList<Card>(); loadDeck(); } /** * Load a deck with all the cards */ public void loadDeck() { deck.add(new Card("Ace","Spades",11)); deck.add(new Card("Ace","Hearts",11)); deck.add(new Card("Ace","Clubs",11)); deck.add(new Card("Ace","Diamonds",11)); deck.add(new Card("King","Spades",10)); deck.add(new Card("King","Hearts",10)); deck.add(new Card("King","Clubs",10)); deck.add(new Card("King","Diamonds",10)); deck.add(new Card("Queen","Spades",10)); deck.add(new Card("Queen","Hearts",10)); deck.add(new Card("Queen","Clubs",10)); deck.add(new Card("Queen","Diamonds",10)); deck.add(new Card("Jack","Spades",10)); deck.add(new Card("Jack","Hearts",10)); deck.add(new Card("Jack","Clubs",10)); deck.add(new Card("Jack","Diamonds",10)); deck.add(new Card("10","Spades",10)); deck.add(new Card("10","Hearts",10)); deck.add(new Card("10","Clubs",10)); deck.add(new Card("10","Diamonds",10)); deck.add(new Card("9","Spades",9)); deck.add(new Card("9","Hearts",9)); deck.add(new Card("9","Clubs",9)); deck.add(new Card("9","Diamonds",9)); deck.add(new Card("8","Spades",8)); deck.add(new Card("8","Hearts",8)); deck.add(new Card("8","Clubs",8)); deck.add(new Card("8","Diamonds",8)); deck.add(new Card("7","Spades",7)); deck.add(new Card("7","Hearts",7)); deck.add(new Card("7","Clubs",7)); deck.add(new Card("7","Diamonds",7)); deck.add(new Card("6","Spades",6)); deck.add(new Card("6","Hearts",6)); deck.add(new Card("6","Clubs",6)); deck.add(new Card("6","Diamonds",6)); deck.add(new Card("5","Spades",5)); deck.add(new Card("5","Hearts",5)); deck.add(new Card("5","Clubs",5)); deck.add(new Card("5","Diamonds",5)); deck.add(new Card("4","Spades",4)); deck.add(new Card("4","Hearts",4)); deck.add(new Card("4","Clubs",4)); deck.add(new Card("4","Diamonds",4)); deck.add(new Card("3","Spades",3)); deck.add(new Card("3","Hearts",3)); deck.add(new Card("3","Clubs",3)); deck.add(new Card("3","Diamonds",3)); deck.add(new Card("2","Spades",2)); deck.add(new Card("2","Hearts",2)); deck.add(new Card("2","Clubs",2)); deck.add(new Card("2","Diamonds",2)); } /** * Add a single card to the deck. * @param a Card object */ public void addCard(Card cardToAdd) { deck.add(cardToAdd); } /** * Shuffle the deck. This involves selecting random pairs of * cards and swapping them, the number of times to swap determined * by the constant TIMES_TO_SHUFFLE. Java provides a shuffle method * as part of the Collections interface, however for this assignment * you must write your own. 
*/ //public void shuffle() public void shuffle(){ int i = 0; while(i <= TIMES_TO_SHUFFLE) { ArrayList copy = new ArrayList(); for (Object object : deck) copy.add(object); Random generator = new Random(); ArrayList result = new ArrayList(); do{ int index = (int) (generator.nextDouble() * (double) copy.size()); result.add(copy.remove(index)); } while (copy.size() > 0); deck = result; i++; } } /** * Display the entire contents of the deck. Not used in the * game but useful for debugging. */ public void showDeck() { for (Card x : deck) { System.out.println("Type: "+ x.getDescription() + " Suit: " + x.getSuit()+ " Value: " + x.getValue() ); } } /** * Remove the top card (the first card) from the deck. * @return the Card object removed or null if there is nothing in the deck. */ public Card takeCard() { // return card or return null; } }

here is the card class. public class Card { // instance variables private int value; private String suit; private String description; public Card(String description, String suit, int value){ this.description = description; this.suit = suit; this.value = value; } public String getDescription(){ return description; } public String getSuit(){ return suit; } public int getValue(){ return value; } }

Christophe Verré Sheriff Joined: Nov 24, 2005 Posts: 14686 16 I like... posted Mar 29, 2010 22:21:49 0 What I can understand from the javadoc is : 1. Take two cards (you need two random indexes) 2. Swap them (no need to use another array) 3. Repeat TIMES_TO_SHUFFLE times If you follow these simple steps, the method should be easier to write. [My Blog] All roads lead to JavaRanch

Bert Wilkinson Ranch Hand Joined: Oct 28, 2009 Posts: 33 posted Mar 29, 2010 23:04:25 0 It looks like your instructions for the assignment are in the javadoc.... So, with that in mind, your shuffle method looks like it will do some shuffling, but it is not doing what the javadoc says it should. You are looping through the entire deck TIMES_TO_SHUFFLE times vice doing that many "swaps" within the deck. Hopefully that's clear and gives you some direction on where you need to tweak your code. After you think you are close, you can put a println statement in there to compare the deck before and after a "shuffle" is called to convince yourself something is happening.

David Newton Author Rancher Joined: Sep 29, 2008 Posts: 12617 I like... posted Mar 30, 2010 05:20:10 0 (Consider using loop(s) in loadDeck rather than specifying each card manually, which is rather error-prone.) Note also that "shuffling" in code is a different operation than shuffling a physical deck of cards--shuffling 1000 times is not likely to give you a deck that's (substantially?) more random than shuffling once.

I agree. Here's the link:
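For reference, a minimal sketch of the swap-based shuffle that Christophe describes above (pick two random indexes, swap them in place, repeat TIMES_TO_SHUFFLE times) could look like this. It assumes the same deck field, Card class and TIMES_TO_SHUFFLE constant from the Deck class in the original post, which already imports java.util.Random:

// A sketch of a swap-based shuffle; replaces the copy/result lists entirely.
public void shuffle() {
    Random generator = new Random();
    for (int i = 0; i < TIMES_TO_SHUFFLE; i++) {
        int first = generator.nextInt(deck.size());   // first random position
        int second = generator.nextInt(deck.size());  // second random position
        Card temp = deck.get(first);                  // swap the two cards in place
        deck.set(first, deck.get(second));
        deck.set(second, temp);
    }
}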
http://www.coderanch.com/t/489396/java/java/check-code
CC-MAIN-2014-10
en
refinedweb
Java 5 introduced thread pools in the form of the Executor framework, which allows a Java programmer to decouple the submission of a task from its execution. If you are doing server-side programming in Java, thread pools are an important concept for maintaining the scalability, robustness and stability of the system. For those who are not familiar with the concept, here is a one-liner: a thread pool in Java is a pool of worker threads that are ready to perform any task given to them, usually in the form of an implementation of the Runnable or Callable interface. Since Java supports multithreading in the language itself, multiple threads can run concurrently and perform work in parallel.

What is a thread pool in Java and why do we need it?

As said above, a thread pool is a pool of already created worker threads, ready to do the job. A thread pool is one of the essential facilities any multi-threaded server-side Java application requires. One example of using a thread pool is a web server that processes client requests. If you are familiar with socket programming, you know that ServerSocket.accept() is a blocking method that blocks until a socket connection is made. If only one thread is used to process client requests, that limits how many clients can access the server concurrently. To support a large number of clients, you may decide to use the one-thread-per-request paradigm, in which each request is processed by a separate thread, but this requires a thread to be created when the request arrives. Since thread creation is a time-consuming operation, it delays request processing. It also limits the number of clients based on how many threads per JVM are allowed, which is obviously a limited number. A thread pool solves this problem for you: it creates the threads and manages them. Instead of creating threads and discarding them once a task is done, a thread pool reuses them in the form of worker threads. Since the threads are usually created and pooled when the application starts, your server can start processing requests immediately, which can further improve the server's response time. Apart from this, there are several other benefits of using thread pools in Java applications, which we will see in a subsequent section. In short, we need thread pools to better manage threads and to decouple task submission from execution.

Java Thread Pool – Executor Framework in Java 5

Java 5 introduced several useful features like enums, generics and variable arguments, as well as several concurrency collections and utilities like ConcurrentHashMap and BlockingQueue. It also introduced a full-featured, built-in thread pool framework commonly known as the Executor framework. The core of this framework is the Executor interface, which defines an abstraction of task execution with the method execute(Runnable task), and ExecutorService, which extends Executor to add life-cycle and thread pool management facilities like shutting the pool down. The Executor framework also provides a static utility class called Executors (similar to Collections) with several static factory methods for creating various types of thread pool implementations, e.g. a fixed-size thread pool, a cached thread pool and a scheduled thread pool. The Runnable and Callable interfaces are used to represent the tasks executed by the worker threads managed in these pools.
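To make the Runnable/Callable distinction concrete, here is a small sketch (my own illustration, not from the original article): a Runnable returns nothing, while a Callable returns a value that can be read later through the Future returned by submit():

import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class CallableExample {
    public static void main(String[] args) throws Exception {
        ExecutorService service = Executors.newFixedThreadPool(2);
        // submit a Callable; the pool returns a Future representing the pending result
        Future<Integer> future = service.submit(new Callable<Integer>() {
            public Integer call() {
                return 6 * 7; // any computation that produces a result
            }
        });
        System.out.println("Result: " + future.get()); // get() blocks until the task is done
        service.shutdown();
    }
}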
An interesting point about the Executor framework is that it is based on the producer-consumer design pattern, where application threads produce tasks and worker threads consume (execute) those tasks. It therefore also suffers from the usual producer-consumer limitation: if tasks are produced substantially faster than they are consumed, you may run out of memory because of queued tasks, of course only if your queue is unbounded.

How to create a fixed-size thread pool using the Executor framework in Java?

Creating a fixed-size thread pool with the Java 5 Executor framework is pretty easy because of the static factory methods provided by the Executors class. All you need to do is define the task you want to execute concurrently and then submit it to the ExecutorService. From then on, the thread pool takes care of how the task is executed; it can be run by any free worker thread, and if you are interested in the result you can query the Future object returned by the submit() method. The Executor framework also provides different kinds of thread pools, e.g. a single-thread executor, which creates just one worker thread, or a cached thread pool, which creates worker threads as and when necessary. You can also check the Java documentation of the Executor framework for complete details of the services provided by this API. Java Concurrency in Practice also has a couple of chapters dedicated to effective use of the Java 5 Executor framework, which is worth reading for any senior Java developer.

Example of Thread Pool in Java

Here is an example of a thread pool in Java which uses the Java 5 Executor framework to create a fixed thread pool with 10 worker threads. It then creates tasks and submits them to the thread pool for execution:

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ThreadPoolExample {
    public static void main(String args[]) {
        ExecutorService service = Executors.newFixedThreadPool(10);
        for (int i = 0; i < 100; i++) {
            service.submit(new Task(i));
        }
        service.shutdown(); // let queued tasks finish, then stop the worker threads
    }
}

final class Task implements Runnable {
    private int taskId;

    public Task(int id) {
        this.taskId = id;
    }

    @Override
    public void run() {
        System.out.println("Task ID : " + this.taskId + " performed by "
                + Thread.currentThread().getName());
    }
}

Output:
Task ID : 0 performed by pool-1-thread-1
Task ID : 3 performed by pool-1-thread-4
Task ID : 2 performed by pool-1-thread-3
Task ID : 1 performed by pool-1-thread-2
Task ID : 5 performed by pool-1-thread-6
Task ID : 4 performed by pool-1-thread-5

If you look at the output of this example you will see that different threads from the thread pool are executing the tasks.

Benefits of Thread Pool in Java

A thread pool offers several benefits to a Java application, the biggest of them being the separation of task submission from task execution, which results in a more loosely coupled and flexible design than the tightly coupled create-and-execute pattern. Here are some more benefits of using a thread pool in Java:
- Use of a thread pool reduces response time by avoiding thread creation during request or task processing.
- Use of a thread pool allows you to change your execution policy as needed; you can go from a single thread to multiple threads by just replacing the ExecutorService implementation.
- A thread pool increases the stability of the system by creating a configured number of threads, decided based on system load and available resources.
- A thread pool frees the application developer from thread management and allows them to focus on business logic.

That's all on thread pools in Java 5.
We have seen what a thread pool is in Java, what the Executor framework in Java 5 is, how to create a thread pool in Java, and some benefits of using a thread pool in a Java application. No doubt knowledge of thread pools is essential for a server-side core Java developer, and I suggest reading Java Threads and Java Concurrency in Practice to learn more about concurrency and thread pools.

Recommended books in this article:
- Java Concurrency in Practice by Brian Goetz, Doug Lea, Joshua Bloch and team
- Java Threads by Scott Oaks and Henry Wong
- Effective Java by Joshua Bloch

I usually use Executors.newScheduledThreadPool to execute scheduled tasks.

Hey Javin, thanks a lot for sharing this post. Your tutorial is very easy to understand and also very useful.
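As a footnote to the scheduled-pool comment above, scheduling a recurring task with Executors.newScheduledThreadPool might look like this (an illustrative sketch, not part of the original article):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class ScheduledTaskExample {
    public static void main(String[] args) {
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        // run the task every 10 seconds, after an initial delay of 5 seconds;
        // the scheduler keeps running until the JVM is stopped or shutdown() is called
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                System.out.println("periodic task on " + Thread.currentThread().getName());
            }
        }, 5, 10, TimeUnit.SECONDS);
    }
}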
http://www.javacodegeeks.com/2013/07/how-to-create-thread-pools-using-java-5-executor-framework.html/comment-page-1/
CC-MAIN-2014-10
en
refinedweb
Interface for interaction between a graphics document and a user. More... #include <RDocumentInterface.h> Interface for interaction between a graphics document and a user. Typically one document interface exists for every document that is open in an MDI application. The document interface owns and links the various scenes, views and the currently active action. A document interface can own multiple graphics scenes, each of which can have multiple views attached to it. The views forward all user events (mouse moves, mouse clicks, etc.) to the document interface for processing. The document interface dispatches the events to the currently active action object. Adds a listener for coordinate events. This can for example be a document specific widget that displays the current coordinate, e.g. rulers. Adds the given entity to the preview of all scenes / view. Adds a box to the preview that represents a zoom box displayed while drawing a window to magnify an area. Applies the given operation to the document. The operation might for example do something with the current selection. Auto zooms in the view that currently has the focus. After calling this function, all exports go into the preview of the scene instead of the scene itself. Resets the document to its original, empty state. Clears cached variables to ensure they are re-initialized before the next use. Clears the preview of all scenes. Notifies all property listeners that no properties are relevant at this point. This can for example clear the property editor and other property listeners. Deletes all actions that have been terminated. De-select all entities, for convenience. Deselects the given entities and updates the scenes accordingly. Deselects the given entity and updates the scenes accordingly. After calling this function, all exports go into the scene again and not the preview anymore. The event is also used to determine the maximum distance from the cursor to the entity in the view in which the event originated. \par Non-Scriptable: This function is not available in script environments. Gets the current snap object. Helper function for mouseReleaseEvent. Triggers an appropriate higher level event for mouse clicks for the given action. The event type depends on the action's current ClickMode. Highlights the given reference point. Imports the given file if there is a file importer registered for that file type. Makes sure that the current preview survives one mouse move. Locks the position of the relative zero point. Forwards the given mouse double click event to the current action. Forwards the given mouse move event to the current action. Forwards the given mouse press event to the current action. Forwards the given mouse release event to the current action. Notifies all coordinate listeners that the coordinate has changed to position. Triggers an objectChangeEvent for every object in the given set. Forwards the given gesture to the current action. Forwards the given gesture to the current action. Helper function for mouseMoveEvent. Triggers an appropriate preview event for the given action and the current click mode the action is in. Previews the given operation by applying the operation to a temporary document that is linked to the (read only) document. Forwards the given event to the current action to signal that a property value has been changed. Transaction based redo. Regenerates all scenes attached to this document interface by exporting the document into them. 
Regenerates the given part of all scenes attached to this document interface by exporting the given list of entities into them. This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. Regenerates all views. Registers a scene with this document interface. Repaints all views. Selects all and updates the scenes / views accordingly. Selects the given entities and updates the scenes accordingly. Sets the click mode of the current action to the given mode. Sets the current action. This action will receive all events until it finishes. Sets the current block that is in use for all views attached to this document interface. Sets the current block based on the given block name. Sets the current layer based on the given layer name. Sets the current UCS (user coordinate system) that is in use for all views attached to this document interface. Sets the current view based on the given view name. Force cursor to be shown. Used for e.g. snap to intersection manual where we want to show the cursor eventhough we are in entity picking mode. Sets the action that is active if no other action is active. Sets the current snap object. The document interface takes ownership of the object. Sets the current snap restriction object. The document interface takes ownership of the object. Notifies all property listeners that the properties of the given entity should be shown. Uses the current snap to snap the given position to a grid point, end point, etc. Forwards the given gesture to the current action. Forwards the given tablet event to the current action. Called immediately after the user has activated a new UCS to be used as current UCS. Transaction based undo. Unlocks the position of the relative zero point. Unregisters a scene from this document interface. Marks all entities with any kind of caching as dirty, so they are regenerated next time regenerate is called. Forwards the given mouse wheel event to the current action. Zooms in at the view that currently has the focus. Zooms out at the view that currently has the focus. Zooms to the previously visible viewport. Zooms to the given region..
http://www.qcad.org/doc/qcad/latest/developer/class_r_document_interface.html
CC-MAIN-2014-10
en
refinedweb
Search: Search took 0.02 seconds. - 15 Dec 2010 6:41 AM - Replies - 6 - Views - 2,473 We ran into the same thing when upgrading to 2.2.1 - assertion that was not there before. What are the dangers of having more than one TreeGridCellRenderer? (Two seemed to work fine with our code... - 14 Dec 2009 8:06 AM - Replies - 1 - Views - 1,088 Hello. I have created a DataView that uses a XTemplate. This works great for most of my fields, but some of my fields contain a colon (e.g. namespace:foobar). The XTemplate won't work with that... - 9 Nov 2007 11:40 AM - Replies - 9 - Views - 1,588 Hey pay him the $10! ;) - 19 Oct 2007 6:09 AM - Replies - 7 - Views - 4,440 How are you referencing the grid in the viewport? Making that change (items : [myGrid]) solved the problem for me. If that doesn't help sorry I'm no help. - 18 Oct 2007 8:25 AM - Replies - 7 - Views - 4,440 I struggled with this similar issue for hours, and finally figured it out. Within the viewpoint region, you need to refer to the EditorGridPanel itself. For example, do this: { ... Results 1 to 5 of 5
http://www.sencha.com/forum/search.php?s=d92978ca324adaa7322cd9658175ca99&searchid=4728597
CC-MAIN-2014-10
en
refinedweb
Up to [DragonFly] / src / lib / libc / stdio Request diff between arbitrary revisions Keyword substitution: kv Default branch: MAIN Remove leading zeroes.> Add the DragonFly cvs id and perform general cleanups on cvs/rcs/sccs ids. Most ids have been removed from !lint sections and moved into comment sections. import from FreeBSD RELENG_4 1.7.2.6
http://www.dragonflybsd.org/cvsweb/src/lib/libc/stdio/fopen.3?r1=1.3
CC-MAIN-2014-10
en
refinedweb
In the following example we will discuss arrays in Java. An array is a collection of data of the same datatype. It can hold primitive values (such as int or boolean) or object references (such as Integer, Boolean or String objects), but all elements of a single array must be of the same declared type. An array stores multiple values of that type in memory at a fixed size. Arrays exist in Java, C++, PHP and many other programming languages. We can use an array in a program as follows.

There are many syntaxes for the declaration of an array.

Advantages of arrays:

Disadvantages of arrays:

public class ArrayExample {
    public static void main(String[] args) {
        int num[] = new int[7];
        for (int i = 0; i < 7; i++) {
            num[i] = i + 1;
        }
        for (int i = 0; i < 7; i++) {
            System.out.println("array[" + i + "] = " + num[i]);
        }
        System.out.println("Length of Array = " + num.length);
    }
}
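For illustration, here are common ways to declare and initialize arrays in Java, together with the main advantage and disadvantage in comments (a generic sketch, not necessarily the tutorial's original lists):

public class ArrayDeclarations {
    public static void main(String[] args) {
        int[] numbers = new int[5];               // fixed size, elements default to 0
        int values[] = {1, 2, 3, 4, 5};           // declaration with an initializer list
        String[] names = {"Java", "C++", "PHP"};  // array of object references
        int[][] matrix = new int[3][4];           // two-dimensional array: 3 rows, 4 columns

        // advantage: direct index access and a known length
        System.out.println(values[2] + ", " + names.length + ", " + matrix.length + ", " + numbers.length);
        // disadvantage: the size is fixed; growing requires creating a new array
    }
}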
http://www.roseindia.net/java/beginners/array-in-java.shtml
CC-MAIN-2014-10
en
refinedweb
The jPDL graphical process designer plugin is also included in JBossTools, JBoss Developer Studio and the SOA Platform. The enterprise bundle is packaged according to the J2EE 1.4 specifications so that it is deployable on any application server. Depending on the functionalities that you use, the library lib/jbpm-jpdl.jar has some dependencies on other third party libraries such as e.g. hibernate, dom4j and others. We have done great efforts to require only those dependent libraries that you actually use. The dependencies are further documented in a later chapter; see also the section called “The identity component”. The job executor is a component for monitoring and executing jobs in a standard Java environment. Jobs are used for timers and asynchronous messages. In an enterprise environment, JMS and the EJB Timer Service can be used for that purpose. Conversely, the job executor can be used in an environment where neither JMS nor EJB are available. The job executor component is packaged in the core jbpm-jpdl library, but it needs to be deployed in one of the following ways: either register the JobExecutorLauncher servlet context listener in the web app deployment descriptor to start/stop the job executor during creation/destruction of the servlet context, or start up a separate JVM and start the job executor in there programmatically.

This chapter takes you through the first steps of getting JBoss jBPM and provides the initial pointers to get up and running in no time. To get the latest jBPM 3 release, go to the jBPM jPDL 3 package on Sourceforge.net and download the latest installer. The jBPM installer creates a runtime installation and it can also download and install the eclipse designer and a JBoss server. You can use jBPM also without an application server, but all of these components are preconfigured to interoperate out-of-the-box to get you started with jBPM quickly. To launch the installer, open a command line, go to the directory where you downloaded it, and type: java -jar jbpm-installer-{version}.jar Step through the instructions. Any supported version of JBoss and the exact version of eclipse can optionally be downloaded by the installer. When installing jBPM into JBoss, this will create a jbpm directory in a server configuration's deploy directory. All jBPM files are centralized inside this deploy/jbpm directory. No other files of your JBoss installation will be touched. You can use your own eclipse (if it is version 3.4+) or you can use the eclipse that the installer downloaded. To install the graphical process designer in eclipse, just use the eclipse update mechanism (Help --> Software Updates --> ...) and point it to the file designer/jbpm-jpdl-designer-site.zip. The jBPM Community Page provides all the details about where to find forums, wiki, issue tracker, downloads, mailing lists and the source repositories.

This tutorial will show you basic process constructs in jPDL and the usage of the API for managing the runtime executions. The format of this tutorial is explaining a set of examples. The examples focus on a particular topic and contain extensive comments. The examples can also be found in the jBPM download package in the directory src/java.examples. The best way to learn is to create a project and experiment by creating variations on the examples given. To get started first, download and install jBPM. jBPM includes a graphical designer tool for authoring the XML that is shown in the examples.
You can find download instructions for the graphical designer in ???. You don't need the graphical designer tool to complete this tutorial.: public void testHelloWorldProcess() { // This method shows a process definition and one execution // of the process definition. The process definition has // 3 nodes: an unnamed start-state, a state 's' and an // end-state named 'end'. // The next line parses a piece of xml text into a // ProcessDefinition. A ProcessDefinition is the formal // description of a process represented as a java object. ProcessDefinition processDefinition = ProcessDefinition.parseXmlString( "<process-definition>" + " <start-state>" + " <transition to='s' />" + " </start-state>" + " <state name='s'>" + " <transition to='end' />" + " </state>" + " <end-state" + "</process-definition>" ); // The next line creates one execution of the process definition. // After construction, the process execution has one main path // of execution (=the root token) that is positioned in the // start-state. ProcessInstance processInstance = new ProcessInstance(processDefinition); // After construction, the process execution has one main path // of execution (=the root token). Token token = processInstance.getRootToken(); // Also after construction, the main path of execution is positioned // in the start-state of the process definition. assertSame(processDefinition.getStartState(), token.getNode()); // Let's start the process execution, leaving the start-state // over its default transition. token.signal(); // The signal method will block until the process execution // enters a wait state. // The process execution will have entered the first wait state // in state 's'. So the main path of execution is now // positioned in state 's' assertSame(processDefinition.getNode("s"), token.getNode()); // Let's send another signal. This will resume execution by // leaving the state 's' over its default transition. token.signal(); // Now the signal method returned because the process instance // has arrived in the end-state. assertSame(processDefinition.getNode("end"), token.getNode()); } One of the basic features of jBPM is the ability to persist executions of processes in the database when they are in a wait state. The next example will show you how to store a process instance in the jBPM database. The example also suggests a context in which this might occur. Separate methods are created for different pieces of user code. E.g. an piece of user code in a webapplication starts a process and persists the execution in the database. Later, a message driven bean loads the process instance from the database and resumes its execution. More about the jBPM persistence can be found in Chapter 6, Persistence. public class HelloWorldDbTest extends TestCase { static JbpmConfiguration jbpmConfiguration = null; static { // An example configuration file such as this can be found in // 'src/config.files'. Typically the configuration information is in the // resource file 'jbpm.cfg.xml', but here we pass in the configuration // information as an XML string. // First we create a JbpmConfiguration statically. One JbpmConfiguration // can be used for all threads in the system, that is why we can safely // make it static. jbpmConfiguration = JbpmConfiguration.parseXmlString( "<jbpm-configuration>" + // A jbpm-context mechanism separates the jbpm core // engine from the services that jbpm uses from // the environment. 
" <jbpm-context>" + " <service name='persistence' " + " factory='org.jbpm.persistence.db.DbPersistenceServiceFactory' />" + " </jbpm-context>" + // Also all the resource files that are used by jbpm are // referenced from the jbpm.cfg.xml " <string name='resource.hibernate.cfg.xml' " + " value='hibernate.cfg.xml' />" + " .varmapping' " + " value='org/jbpm/context/exe/jbpm.varmapping.xml' />" + "</jbpm-configuration>" ); } public void setUp() { jbpmConfiguration.createSchema(); } public void tearDown() { jbpmConfiguration.dropSchema(); } public void testSimplePersistence() { // Between the 3 method calls below, all data is passed via the // database. Here, in this unit test, these 3 methods are executed // right after each other because we want to test a complete process // scenario. But in reality, these methods represent different // requests to a server. // Since we start with a clean, empty in-memory database, we have to // deploy the process first. In reality, this is done once by the // process developer. deployProcessDefinition(); // Suppose we want to start a process instance (=process execution) // when a user submits a form in a web application... processInstanceIsCreatedWhenUserSubmitsWebappForm(); // Then, later, upon the arrival of an asynchronous message the // execution must continue. theProcessInstanceContinuesWhenAnAsyncMessageIsReceived(); } public void deployProcessDefinition() { // This test shows a process definition and one execution // of the process definition. The process definition has // 3 nodes: an unnamed start-state, a state 's' and an // end-state named 'end'. ProcessDefinition processDefinition = ProcessDefinition.parseXmlString( "<process-definition" + " <start-state" + " <transition to='s' />" + " </start-state>" + " <state name='s'>" + " <transition to='end' />" + " </state>" + " <end-state" + "</process-definition>" ); // Lookup the pojo persistence context-builder that is configured above JbpmContext jbpmContext = jbpmConfiguration.createJbpmContext(); try { // Deploy the process definition in the database jbpmContext.deployProcessDefinition(processDefinition); } finally { // Tear down the pojo persistence context. // This includes flush the SQL for inserting the process definition // to the database. jbpmContext.close(); } } public void processInstanceIsCreatedWhenUserSubmitsWebappForm() { // The code in this method could be inside a struts-action // or a JSF managed bean. // Lookup the pojo persistence context-builder that is configured above JbpmContext jbpmContext = jbpmConfiguration.createJbpmContext(); try { GraphSession graphSession = jbpmContext.getGraphSession(); ProcessDefinition processDefinition = graphSession.findLatestProcessDefinition("hello world"); // With the processDefinition that we retrieved from the database, we // can create an execution of the process definition just like in the // hello world example (which was without persistence). ProcessInstance processInstance = new ProcessInstance(processDefinition); Token token = processInstance.getRootToken(); assertEquals("start", token.getNode().getName()); // Let's start the process execution token.signal(); // Now the process is in the state 's'. assertEquals("s", token.getNode().getName()); // Now the processInstance is saved in the database. So the // current state of the execution of the process is stored in the // database. 
jbpmContext.save(processInstance); // The method below will get the process instance back out // of the database and resume execution by providing another // external signal. } finally { // Tear down the pojo persistence context. jbpmContext.close(); } } public void theProcessInstanceContinuesWhenAnAsyncMessageIsReceived() { // The code in this method could be the content of a message driven bean. // Lookup the pojo persistence context-builder that is configured above JbpmContext jbpmContext = jbpmConfiguration.createJbpmContext(); try { GraphSession graphSession = jbpmContext.getGraphSession(); // First, we need to get the process instance back out of the database. // There are several options to know what process instance we are dealing // with here. The easiest in this simple test case is just to look for // the full list of process instances. That should give us only one // result. So let's look up the process definition. ProcessDefinition processDefinition = graphSession.findLatestProcessDefinition("hello world"); // Now, we search for all process instances of this process definition. List processInstances = graphSession.findProcessInstances(processDefinition.getId()); // Because we know that in the context of this unit test, there is // only one execution. In real life, the processInstanceId can be // extracted from the content of the message that arrived or from // the user making a choice. ProcessInstance processInstance = (ProcessInstance) processInstances.get(0); // Now we can continue the execution. Note that the processInstance // delegates signals to the main path of execution (=the root token). processInstance.signal(); // After this signal, we know the process execution should have // arrived in the end-state. assertTrue(processInstance.hasEnded()); // Now we can update the state of the execution in the database jbpmContext.save(processInstance); } finally { // Tear down the pojo persistence context. jbpmContext.close(); } } } The process variables contain the context information during process executions. The process variables are similar to a java.util.Map that maps variable names to values, which are java objects. The process variables are persisted as a part of the process instance. To keep things simple, in this example we only show the API to work with variables, without persistence. More information about variables can be found in Chapter 10, Context // This example also starts from the hello world process. // This time even without modification. ProcessDefinition processDefinition = ProcessDefinition.parseXmlString( "<process-definition>" + " <start-state>" + " <transition to='s' />" + " </start-state>" + " <state name='s'>" + " <transition to='end' />" + " </state>" + " <end-state" + "</process-definition>" ); ProcessInstance processInstance = new ProcessInstance(processDefinition); // Fetch the context instance from the process instance // for working with the process variables. ContextInstance contextInstance = processInstance.getContextInstance(); // Before the process has left the start-state, // we are going to set some process variables in the // context of the process instance. contextInstance.setVariable("amount", new Integer(500)); contextInstance.setVariable("reason", "i met my deadline"); // From now on, these variables are associated with the // process instance. The process variables are now accessible // by user code via the API shown here, but also in the actions // and node implementations. 
The process variables are also // stored into the database as a part of the process instance. processInstance.signal(); // The variables are accessible via the contextInstance. assertEquals(new Integer(500), contextInstance.getVariable("amount")); assertEquals("i met my deadline", contextInstance.getVariable("reason")); In the next example we'll show how you can assign a task to a user. Because of the separation between the jBPM workflow engine and the organisational model, an expression language for calculating actors would always be too limited. Therefore, you have to specify an implementation of AssignmentHandler for including the calculation of actors for tasks. public void testTaskAssignment() { // The process shown below is based on the hello world process. // The state node is replaced by a task-node. The task-node // is a node in JPDL that represents a wait state and generates // task(s) to be completed before the process can continue to // execute. ProcessDefinition processDefinition = ProcessDefinition.parseXmlString( "<process-definition" + " <start-state>" + " <transition name='baby cries' to='t' />" + " </start-state>" + " <task-node" + " <task name='change nappy'>" + " <assignment class='org.jbpm.tutorial.taskmgmt.NappyAssignmentHandler' />" + " </task>" + " <transition to='end' />" + " </task-node>" + " <end-state" + "</process-definition>" ); // Create an execution of the process definition. ProcessInstance processInstance = new ProcessInstance(processDefinition); Token token = processInstance.getRootToken(); // Let's start the process execution, leaving the start-state // over its default transition. token.signal(); // The signal method will block until the process execution // enters a wait state. In this case, that is the task-node. assertSame(processDefinition.getNode("t"), token.getNode()); // When execution arrived in the task-node, a task 'change nappy' // was created and the NappyAssignmentHandler was called to determine // to whom the task should be assigned. The NappyAssignmentHandler // returned 'papa'. // In a real environment, the tasks would be fetched from the // database with the methods in the org.jbpm.db.TaskMgmtSession. // Since we don't want to include the persistence complexity in // this example, we just take the first task-instance of this // process instance (we know there is only one in this test // scenario). TaskInstance taskInstance = (TaskInstance) processInstance .getTaskMgmtInstance() .getTaskInstances() .iterator().next(); // Now, we check if the taskInstance was actually assigned to 'papa'. assertEquals("papa", taskInstance.getActorId() ); // Now we suppose that 'papa' has done his duties and mark the task // as done. taskInstance.end(); // Since this was the last (only) task to do, the completion of this // task triggered the continuation of the process instance execution. assertSame(processDefinition.getNode("end"), token.getNode()); } Actions are a mechanism to bind your custom java code into a jBPM process. Actions can be associated with its own nodes (if they are relevant in the graphical representation of the process). Or actions can be placed on events like e.g. taking a transition, leaving a node or entering a node. In that case, the actions are not part of the graphical representation, but they are executed when execution fires the events in a runtime process execution. We'll start with a look at the action implementation that we are going to use in our example : MyActionHandler. 
This action handler implementation does not do really spectacular things... it just sets the boolean variable isExecuted to true. The variable isExecuted is static so it can be accessed from within the action handler as well as from the action to verify it's value. More information about actions can be found in the section called “Actions” // MyActionHandler represents a class that could execute // some user code during the execution of a jBPM process. public class MyActionHandler implements ActionHandler { // Before each test (in the setUp), the isExecuted member // will be set to false. public static boolean isExecuted = false; // The action will set the isExecuted to true so the // unit test will be able to show when the action // is being executed. public void execute(ExecutionContext executionContext) { isExecuted = true; } } As mentioned before, before each test, we'll set the static field MyActionHandler.isExecuted to false; // Each test will start with setting the static isExecuted // member of MyActionHandler to false. public void setUp() { MyActionHandler.isExecuted = false; } We'll start with an action on a transition. public void testTransitionAction() { // The next process is a variant of the hello world process. // We have added an action on the transition from state 's' // to the end-state. The purpose of this test is to show // how easy it is to integrate java code in a jBPM process. ProcessDefinition processDefinition = ProcessDefinition.parseXmlString( "<process-definition>" + " <start-state>" + " <transition to='s' />" + " </start-state>" + " <state name='s'>" + " <transition to='end'>" + " <action class='org.jbpm.tutorial.action.MyActionHandler' />" + " </transition>" + " </state>" + " <end-state" + "</process-definition>" ); // Let's start a new execution for the process definition. ProcessInstance processInstance = new ProcessInstance(processDefinition); // The next signal will cause the execution to leave the start // state and enter the state 's' processInstance.signal(); // Here we show that MyActionHandler was not yet executed. assertFalse(MyActionHandler.isExecuted); // ... and that the main path of execution is positioned in // the state 's' assertSame(processDefinition.getNode("s"), processInstance.getRootToken().getNode()); // The next signal will trigger the execution of the root // token. The token will take the transition with the // action and the action will be executed during the // call to the signal method. processInstance.signal(); // Here we can see that MyActionHandler was executed during // the call to the signal method. assertTrue(MyActionHandler.isExecuted); } The next example shows the same action, but now the actions are placed on the enter-node and leave-node events respectively. Note that a node has more than one event type in contrast to a transition, which has only one event. Therefore actions placed on a node should be put in an event element. 
ProcessDefinition processDefinition = ProcessDefinition.parseXmlString( "<process-definition>" + " <start-state>" + " <transition to='s' />" + " </start-state>" + " <state name='s'>" + " <event type='node-enter'>" + " <action class='org.jbpm.tutorial.action.MyActionHandler' />" + " </event>" + " <event type='node-leave'>" + " <action class='org.jbpm.tutorial.action.MyActionHandler' />" + " </event>" + " <transition to='end'/>" + " </state>" + " <end-state" + "</process-definition>" ); ProcessInstance processInstance = new ProcessInstance(processDefinition); assertFalse(MyActionHandler.isExecuted); // The next signal will cause the execution to leave the start // state and enter the state 's'. So the state 's' is entered // and hence the action is executed. processInstance.signal(); assertTrue(MyActionHandler.isExecuted); // Let's reset the MyActionHandler.isExecuted MyActionHandler.isExecuted = false; // The next signal will trigger execution to leave the // state 's'. So the action will be executed again. processInstance.signal(); // Voila. assertTrue(MyActionHandler.isExecuted); Table of Contents jP. lib/jbpm-jpdl.jar is the library with the core jpdl functionality. lib/jbpm-identity.jar is the (optional) library containing an identity component as described in the section called “The identity component”. All the libraries on which jPDL might have a dependency, are located in the lib directory. The actual version of those libraries might depend on the JBoss server that you've selected in the installer.. The installer deploys jBPM into JBoss. This section walks you through the deployed components. When jBPM is installed on a server configuration in JBoss, only the jbpm directory is added to the deploy directory and all components will be deployed under that directory. No other files of JBoss are changed or added outside that directory. The enterprise bundle is a J2EE web application that contains the jBPM enterprise beans and the JSF based console. The enterprise beans are located in \deploy\jbpm\jbpm-enterprise-bundle.ear\jbpm-enterprise-beans.jar and the JSF web application is located at \deploy\jbpm\jbpm-enterprise-bundle.ear\jsf-console.war If you want to see debug logs in the suite server, update file jboss-{version}/server/default/config/log4j.xml Look for <!-- ============================== --> <!-- Append messages to the console --> <!-- ============================== --> <appender name="CONSOLE" class="org.apache.log4j.ConsoleAppender"> <errorHandler class="org.jboss.logging.util.OnlyOnceErrorHandler"/> <param name="Target" value="System.out"/> <param name="Threshold" value="INFO"/> And in param Threshold, change INFO to DEBUG. Then you'll get debug logs of all the components. To limit the number of debug logs, look a bit further down that file until you see 'Limit categories'. You might want to add tresholds there for specific packages like e.g. <category name="org.hibernate"> <priority value="INFO"/> </category> <category name="org.jboss"> <priority value="INFO"/> </category> First of all, in case you're just starting to develop a new process, it is much easier to use plain JUnit tests and run the process in memory like explained in Chapter 3, Tutorial. 
But if you want to run the process in the console and debug it there here are the 2 steps that you need to do: 1) in jboss-{version}/server/bin/run.bat, somewhere at the end, there is a line like this: rem set JAVA_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=y %JAVA_OPTS% For backup reasons, just start by making a copy of that line, then remove the first ' rem' and change suspend=y to suspend=n. Then you get something like rem set JAVA_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=y %JAVA_OPTS% set JAVA_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,address=8787,server=y,suspend=n %JAVA_OPTS% 2) In your IDE debug by connecting to a remote Java application on localhost on port 8787. Then you can start adding break points and run through the processes with the console until the breakpoint is hit. For more info about configuring logging of optimistic locking failures, see the section called “Logging of optimistic concurrency exceptions” Table the section called When running in a cluster, jBPM synchronizes on the database. By default with optimistic locking. This means that each operation is performed in a transaction. And if at the end a collision is detected, then the transaction is rolled back and has to be handled. E.g. by a retry. So optimistic locking exceptions are usually part of the normal operation. Therefor, by default, the org.hibernate.StateObjectStateExceptions the that hibernate throws in that case are not logged with error and a stack trace, but instead a simple info message 'optimistic locking failed' is displayed. Hibernate itself will log the StateObjectStateException including a stack trace. If you want to get rid of these stack traces, put the level of org.hibernate.event.def.AbstractFlushingEventListener to FATAL. If you use log4j following line of configuration can be used for that: log4j.logger.org.hibernate.event.def.AbstractFlushingEventListener=FATAL If you want to enable logging of the jBPM stack traces, add the following line to your jbpm.cfg.xml: <boolean name="jbpm.hide.stale.object.exceptions" value="false" /> .. Table of Contents In most scenarios, jBPM is used to maintain execution of processes that span a long time. In this context, "a long time" means spanning several transactions. The main purpose of persistence is to store process executions during wait states. So think of the process executions as state machines. In one transaction, we want to move the process execution state machine from one state to the next. A process definition can be represented in 3 different forms : as xml, as java objects and as records in the jBPM database. Executional (=runtime) information and logging information can be represented in 2 forms : as java objects and as records in the jBPM database. For more information about the xml representation of process definitions and process archives, see Chapter 17, jBPM Process Definition Language (JPDL). More information on how to deploy a process archive to the database can be found in the section called “Deploying a process archive” The persistence API is an integrated with the configuration framework by exposing some convenience persistence methods on the JbpmContext. 
Persistence API operations can therefore be called inside a jBPM context block like this: JbpmContext jbpmContext = jbpmConfiguration.createJbpmContext(); try { // Invoke persistence operations here } finally { jbpmContext.close(); } In what follows, we suppose that the configuration includes a persistence service similar to this one (as in the example configuration file src/config.files/jbpm.cfg.xml): <jbpm-configuration> <jbpm-context> <service name='persistence' factory='org.jbpm.persistence.db.DbPersistenceServiceFactory' /> ... </jbpm-context> ... </jbpm-configuration> The three most common persistence operations are: First deploying a process definition. Typically, this will be done directly from the graphical process designer or from the deployprocess ant task. But here you can see how this is done programmatically: JbpmContext jbpmContext = jbpmConfiguration.createJbpmContext(); try { ProcessDefinition processDefinition = ...; jbpmContext.deployProcessDefinition(processDefinition); } finally { jbpmContext.close(); } For the creation of a new process execution, we need to specify of which process definition this execution will be an instance. The most common way to specify this is to refer to the name of the process and let jBPM find the latest version of that process in the database: JbpmContext jbpmContext = jbpmConfiguration.createJbpmContext(); try { String processName = ...; ProcessInstance processInstance = jbpmContext.newProcessInstance(processName); } finally { jbpmContext.close(); } For continuing a process execution, we need to fetch the process instance, the token or the taskInstance from the database, invoke some methods on the POJO jBPM objects and afterwards save the updates made to the processInstance into the database again. JbpmContext jbpmContext = jbpmConfiguration.createJbpmContext(); try { long processInstanceId = ...; ProcessInstance processInstance = jbpmContext.loadProcessInstance(processInstanceId); processInstance.signal(); jbpmContext.save(processInstance); } finally { jbpmContext.close(); } Note that if you use the xxxForUpdate methods in the JbpmContext, an explicit invocation of the jbpmContext.save is not necessary any more because it will then occur automatically during the close of the jbpmContext. E.g. suppose we want to inform jBPM about a taskInstance that has been completed. Note that task instance completion can trigger execution to continue so the processInstance related to the taskInstance must be saved. The most convenient way to do this is to use the loadTaskInstanceForUpdate method: JbpmContext jbpmContext = jbpmConfiguration.createJbpmContext(); try { long taskInstanceId = ...; TaskInstance taskInstance = jbpmContext.loadTaskInstanceForUpdate(taskInstanceId); taskInstance.end(); } finally { jbpmContext.close(); } Just as background information, the next part is an explanation of how jBPM manages the persistence and uses hibernate. The JbpmConfiguration maintains a set of ServiceFactorys. The service factories are configured in the jbpm.cfg.xml as shown above and instantiated lazy. The DbPersistenceServiceFactory is only instantiated the first time when it is needed. After that, service factories are maintained in the JbpmConfiguration. A DbPersistenceServiceFactory manages a hibernate SessionFactory. But also the hibernate session factory is created lazy when requested the first time. During the invocation of jbpmConfiguration.createJbpmContext(), only the JbpmContext is created. 
No further persistence related initializations are done at that time. The JbpmContext manages a DbPersistenceService, which is instantiated upon first request. The DbPersistenceService manages the hibernate session. Also the hibernate session inside the DbPersistenceService is created lazy. As a result, a hibernate session will be only be opened when the first operation is invoked that requires persistence and not earlier. The most common scenario for managed transactions is when using jBPM in a JEE application server like JBoss. The most common scenario is the following: A stateless session facade in front of jBPM is a good practice. The easiest way on how to bind the jbpm transaction to the container transaction is to make sure that the hibernate configuration used by jbpm refers to an xa-datasource. So jbpm will have its own hibernate session, there will only be 1 jdbc connection and 1 transaction. The transaction attribute of the jbpm session facade methods should be 'required' The the most important configuration property to specify in the hibernate.cfg.xml that is used by jbpm is: hibernate.connection.datasource= --datasource JNDI name-- like e.g. java:/JbpmDS More information on how to configure jdbc connections in hibernate, see the hibernate reference manual, section 'Hibernate provided JDBC connections' For more information on how to configure xa datasources in jboss, see the jboss application server guide, section 'Configuring JDBC DataSources' In some scenarios, you already have a hibernate session and you want to combine all the persistence work from jBPM into that hibernate session. Then the first thing to do is make sure that the hibernate configuration is aware of all the jBPM mapping files. You should make sure that all the hibernate mapping files that are referenced in the file src/config.files/hibernate.cfg.xml are provided in the used hibernate configuration. Then, you can inject a hibernate session into the jBPM context as is shown in the following API snippet: JbpmContext jbpmContext = jbpmConfiguration.createJbpmContext(); try { jbpmContext.setSession(SessionFactory.getCurrentSession()); // your jBPM operations on jbpmContext } finally { jbpmContext.close(); } That will pass in the current hibernate session used by the container to the jBPM context. No hibernate transaction is initiated when a session is injected in the context. So this can be used with the default configurations. The hibernate session that is passed in, will not be closed in the jbpmContext.close() method. This is in line with the overall philosophy of programmatic injection which is explained in the next section. The configuration of jBPM provides the necessary information for jBPM to create a hibernate session factory, hibernate session, jdbc connections, jbpm required services,... But all of these resources can also be provided to jBPM programmatically. Just inject them in the jbpmContext. Injected resources always are taken before creating resources from the jbpm configuration information. The main philosophy is that the API-user remains responsible for all the things that the user injects programmatically in the jbpmContext. On the other hand, all items that are opened by jBPM, will be closed by jBPM. There is one exception. That is when fetching a connection that was created by hibernate. When calling jbpmContext.getConnection(), this transfers responsibility for closing the connection from jBPM to the API user. 
JbpmContext jbpmContext = jbpmConfiguration.createJbpmContext(); try { // to inject resources in the jbpmContext before they are used, you can use jbpmContext.setConnection(connection); // or jbpmContext.setSession(session); // or jbpmContext.setSessionFactory(sessionFactory); } finally { jbpmContext.close(); } The DbPersistenceService maintains a lazy initialized hibernate session. All database access is done through this hibernate session. All queries and updates done by jBPM are exposed by the XxxSession classes like e.g. GraphSession, SchedulerSession, LoggingSession,... These session classes refer to the hibernate queries and all use the same hibernate session underneath. The XxxxSession classes are accessible via the JbpmContext as well. The DbPersistenceServiceFactory itself has 3 more configuration properties: isTransactionEnabled, sessionFactoryJndiName and dataSourceJndiName. To specify any of these properties in the jbpm.cfg.xml, you need to specify the service factory as a bean in the factory element like this: IMPORTANT: don't mix the short and long notation for configuring the factories. See also the section called “Customizing factories”. If the factory is just a new instance of a class, you can use the factory attribute to refer to the factory class name. But if properties in a factory must be configured, the long notation must be used and factory and bean must be combined as nested elements. Like this: <jbpm-context> <service name="persistence"> <factory> <bean class="org.jbpm.persistence.db.DbPersistenceServiceFactory"> <field name="isTransactionEnabled"><false /></field> <field name="sessionFactoryJndiName"> <string value="java:/myHibSessFactJndiName" /> </field> <field name="dataSourceJndiName"> <string value="java:/myDataSourceJndiName" /> </field> </bean> </factory> </service> ... </jbpm-context> By default, the DbPersistenceServiceFactory will use the resource hibernate.cfg.xml in the root of the classpath to create the hibernate session factory. Note that the hibernate configuration file resource is mapped in the property 'jbpm.hibernate.cfg.xml' and can be customized in the jbpm.cfg.xml. This is the default configuration: <jbpm-configuration> ... <!-- configuration resource files pointing to default configuration files in jbpm-{version}.jar --> <string name='resource.hibernate.cfg.xml' value='hibernate.cfg.xml' /> <!-- <string name='resource.hibernate.properties' value='hibernate.properties' /> --> ... </jbpm-configuration> When the property resource.hibernate.properties is specified, the properties in that resource file will overwrite all the properties in the hibernate.cfg.xml. Instead of updating the hibernate.cfg.xml to point to your DB, the hibernate.properties can be used to handle jbpm upgrades conveniently: The hibernate.cfg.xml can then be copied without having to reapply the changes. Please refer to the hibernate documentation: If you want to configure jBPM with JBossCache, have a look at the jBPM configuration wiki page For more information about configuring a cache provider in hibernate, take a look at the hibernate documentation, section 'Second level cache' The hibernate.cfg.xml that ships with jBPM includes the following line: <property name="hibernate.cache.provider_class">org.hibernate.cache.HashtableCacheProvider</property> This is done to get people up and running as fast as possible without having to worrie about classpaths. Note that hibernate contains a warning that states not to use the HashtableCacheProvider in production. 
To use ehcache instead of the HashtableCacheProvider, simply remove that line and put ehcache.jar on the classpath. Note that you might have to search for the right ehcache library version that is compatible with your environmment. Previous incompatibilities between a jboss version and a perticular ehcache version were the reason to change the default to HashtableCacheProvider. By default, jBPM will delegate transaction to hibernate and use the session per transaction pattern. jBPM will begin a hibernate transaction when a hibernate session is opened. This will happen the first time when a persistent operation is invoked on the jbpmContext. The transaction will be committed right before the hibernate session is closed. That will happen inside the jbpmContext.close(). Use jbpmContext.setRollbackOnly() to mark a transaction for rollback. In that case, the transaction will be rolled back right before the session is closed inside of the jbpmContext.close(). To prohibit jBPM from invoking any of the transaction methods on the hibernate API, set the isTransactionEnabled property to false as explained in the section called “The DbPersistenceServiceFactory” above. The most common scenario for managed transactions is when using jBPM in a JEE application server like JBoss. The most common scenario to bind your transactions to JTA is the following: <jbpm-context> <service name="persistence"> <factory> <bean class="org.jbpm.persistence.db.DbPersistenceServiceFactory"> <field name="isTransactionEnabled"><false /></field> <field name="isCurrentSessionEnabled"><true /></field> <field name="sessionFactoryJndiName"> <string value="java:/myHibSessFactJndiName" /> </field> </bean> </factory> </service> ... </jbpm-context> Then you should specify in your hibernate session factory to use a datasource and bind hibernate to the transaction manager. Make sure that you bind the datasource to an XA datasource in case you're using more then 1 resource. For more information about binding hibernate to your transaction manager, please, refer to paragraph 'Transaction strategy configuration' in the hibernate documentation. <hibernate-configuration> <session-factory> <!-- hibernate dialect --> <property name="hibernate.dialect">org.hibernate.dialect.HSQLDialect</property> <!-- DataSource properties (begin) --> <property name="hibernate.connection.datasource">java:/JbpmDS</property> <!--">java:comp/UserTransaction</property> ... </session-factory> </hibernate-configuration> Then make sure that you have configured hibernate to use an XA datasource. These configurations allow for the enterprise beans to use CMT and still allow the web console to use BMT. That is why the property 'jta.UserTransaction' is also specified. All the HQL queries that jBPM uses are centralized in one configuration file. That resource file is referenced in the hibernate.cfg.xml configuration file like this: <hibernate-configuration> ... <!-- hql queries and type defs --> <mapping resource="org/jbpm/db/hibernate.queries.hbm.xml" /> ... </hibernate-configuration> To customize one or more of those queries, take a copy of the original file and put your customized version somewhere on the classpath. Then update the reference 'org/jbpm/db/hibernate.queries.hbm.xml' in the hibernate.cfg.xml to point to your customized version. jBPM runs on any database that is supported by hibernate. The example configuration files in jBPM ( src/config.files) specify the use of the hypersonic in-memory database. That database is ideal during development and for testing. 
The hypersonic in-memory database keeps all its data in memory and doesn't store it on disk. Make sure that the database isolation level that you configure for your JDBC connection is at least READ_COMMITTED. Almost all features run OK even with READ_UNCOMMITTED (isolation level 0 and the only isolation level supported by HSQLDB), but race conditions might occur in the job executor and when synchronizing multiple tokens. Following is an indicative list of things to do when changing jBPM to use a different database: The jbpm.db subproject contains a number of drivers, instructions and scripts to help you get started on your database of choice. Please refer to the readme.html in the root of the jbpm.db project for more information. While jBPM is capable of generating DDL scripts for all databases, these schemas are not always optimized. So you might want to have your DBA review the generated DDL to optimize the column types and the use of indexes. In development you might be interested in the following hibernate configuration: if you set the hibernate configuration property 'hibernate.hbm2ddl.auto' to 'create-drop' (e.g. in the hibernate.cfg.xml), the schema will be automatically created in the database the first time it is used in an application. When the application closes down, the schema will be dropped. The schema generation can also be invoked programmatically with jbpmConfiguration.createSchema() and jbpmConfiguration.dropSchema(). In your project, you might use hibernate for your persistence. Combining your persistent classes with the jBPM persistent classes is optional. There are two major benefits when combining your hibernate persistence with jBPM's hibernate persistence: First, session, connection and transaction management become easier. By combining jBPM and your persistence into one hibernate session factory, there is one hibernate session and one jdbc connection that handles both your persistence and jBPM's. So the jBPM updates are automatically in the same transaction as the updates to your own domain model. This can eliminate the need for using a transaction manager. Secondly, this enables you to drop your hibernate-persistable objects into the process variables without any further hassle. The easiest way to integrate your persistent classes with the jBPM persistent classes is by creating one central hibernate.cfg.xml. You can take the jBPM hibernate.cfg.xml as a starting point and add references to your own hibernate mapping files in there. To customize any of the jBPM hibernate mapping files, you can proceed as follows: take a copy of the original mapping file (the sources can be found in src/jbpm-jpdl-sources.jar), apply your changes, and make sure your hibernate configuration references the customized copy instead of the original. jBPM uses hibernate's second level cache for keeping the process definitions in memory after loading them once. The process definition classes and collections are configured in the jBPM hibernate mapping files with the cache element like this: <cache usage="nonstrict-read-write"/> Since process definitions (should) never change, it is ok to keep them in the second level cache. See also the section called “Changing deployed process definitions”. The second level cache is an important aspect of the JBoss jBPM implementation. If it weren't for this cache, JBoss jBPM could have a serious drawback in comparison to the other techniques to implement a BPM engine. The default caching strategy is set to nonstrict-read-write. During runtime execution of processes, the process definitions are static. This way, we get the maximum caching during runtime execution of processes.
In theory, caching strategy read-only would be even better for runtime execution. But in that case, deploying new process definitions would not be possible as that operation is not read-only. Table of Contents SwitchingBPM jPDL installer. Download and install as described in the section called “Downloading and installing jBPM”. We will assume that this installation was done to a location on your machine named ${jbpm-jpdl-home}. You will find the DB subproject of jBPM in the ${jbpm-jpdl-home}/db. After installing the of your choice database, you will have to run the database creation scripts to create the jBPM tables. Note that in the hsqldb inside jboss this is done automatically during installation. Whatever database that you use, make sure that the isolation level of the configured JDBC connection is at least READ_COMMITTED, as explained in the section called “Isolation level of the JDBC connection” scripts for your database, you should look int the directory ${jbpm-jpdl-home}/db.-jpdl-home}/db.', 'user', 'sample.user@sample.domain', 'user'); insert into JBPM_ID_USER (ID_,CLASS_, NAME_, EMAIL_, PASSWORD_) values ('2', 'U', 'manager', 'sample.manager@sample.domain', 'manager'); insert into JBPM_ID_USER (ID_,CLASS_, NAME_, EMAIL_, PASSWORD_) values ('3', 'U', 'shipper', 'sample.shipper@sample.domain', 'shipper'); insert into JBPM_ID_USER (ID_,CLASS_, NAME_, EMAIL_, PASSWORD_) values ('4', 'U', 'admin', 'sample.admin@sample.domain', 'admin'); Before we can really use our newly created database with the JBoss jBPM default webapp we will have to do some updates to the JBoss jBPM configuration. The location of the jbpm server configuration is ${jboss-home}/server/default/deploy/jbpm. First we create a new datasource in JBoss that binds to our database. In the default installation, this is the done in the file jbpm-hsqldb-ds.xml. That hypersonic database configuration file can be removed and should be replaced by the a file that ends with -ds.xml like e.g. jbpm-postgres-ds.xml <> Of course it is possible that you have to change some of the values in this file to accommodate for your particular situation. You then simply save this file in the ${jboss-home}/server/default/deploy/jbpm folder. Congratulations, you just created a new DataSource for your JBoss jBPM server. Well, almost... To make things really work you will have to copy the correct JDBC driver to the ${jboss.home}/server/default The last thing we have to do to make everything run is to update the hibernate configuration file hibernate.cfg.xml. That file is located in directory ${jboss.home}/server/default/deploy/jbpm-service.sar.. For database upgrades, please refer to the release.notes.html in the root of your installation directory.. Table of Contents The present chapter describes the facilities offered by jBPM to leverage the Java EE infrastructure. CommandServiceBean is a stateless session bean that executes jBPM commands by calling it's execute method within a separate jBPM context. The environment entries and resources available for customization are summarized in the table below. CommandListenerBean is a message-driven bean that listens on the CommandQueue for command messages. This bean delegates command execution to the CommandServiceBean. The body of the message must be a Java object that implements the org.jbpm.Command interface. The message properties, if any, are ignored. If the message does not match the expected format, it is forwarded to the DeadLetterQueue. No further processing is done on the message. 
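To illustrate the expected message format, the sketch below shows a hypothetical command and how a client could send it as the body of an object message. The queue and connection factory JNDI names, as well as the exact signature of the command interface, are assumptions here; check the environment references and the javadocs of your jBPM version before relying on them.

// hypothetical command; the execute(JbpmContext) signature is an assumption
public class StartProcessCommand implements org.jbpm.Command, java.io.Serializable {

  private final String processName;

  public StartProcessCommand(String processName) {
    this.processName = processName;
  }

  public Object execute(JbpmContext jbpmContext) throws Exception {
    // runs inside the jBPM context opened by the CommandServiceBean
    return jbpmContext.newProcessInstance(processName);
  }
}

// client side: wrap the command in an ObjectMessage and send it to the command queue
javax.naming.InitialContext ctx = new javax.naming.InitialContext();
javax.jms.Queue commandQueue = (javax.jms.Queue) ctx.lookup("java:comp/env/jms/CommandQueue");
javax.jms.ConnectionFactory factory =
    (javax.jms.ConnectionFactory) ctx.lookup("java:comp/env/jms/JbpmConnectionFactory");
javax.jms.Connection connection = factory.createConnection();
try {
  javax.jms.Session session = connection.createSession(false, javax.jms.Session.AUTO_ACKNOWLEDGE);
  javax.jms.ObjectMessage message = session.createObjectMessage(new StartProcessCommand("hire"));
  session.createProducer(commandQueue).send(message);
} finally {
  connection.close();
}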
If the destination reference is absent, the message is rejected. In case the received message specifies a replyTo destination, the result of the command execution is wrapped into an object message and sent there. The command connection factory environment reference indicates the resource manager that supplies JMS connections. Conversely, JobListenerBean is a message-driven bean that listens on the JbpmJobQueue for job messages to support asynchronous continuations. The message must have a property called jobId of type long which references a pending Job in the database. The message body, if any, is ignored. This bean extends the CommandListenerBean and inherits its environment entries and resource references available for customization. The TimerEntityBean interacts with the EJB timer service to schedule jBPM timers. Upon expiration, execution of the timer is actually delegated to the command service bean. The timer entity bean requires access to the jBPM data source for reading timer data. The EJB deployment descriptor does not provide a way to define how an entity bean maps to a database. This is left off to the container provider. In JBoss AS, the jbosscmp-jdbc.xml descriptor defines the data source JNDI name and the relational mapping data (table and column names, among others). Note that the JBoss CMP descriptor uses a global JNDI name ( java:JbpmDS), as opposed to a resource manager reference ( java:comp/env/jdbc/JbpmDataSource). Earlier versions of jBPM used a stateless session bean called TimerServiceBean to interact with the EJB timer service. The session approach had to be abandoned because there is an unavoidable bottleneck at the cancelation methods. Because session beans have no identity, the timer service is forced to iterate through all the timers for finding the ones it has to cancel. The bean is still around for backwards compatibility. It works under the same environment as the TimerEntityBean, so migration is easy. jbpm.cfg.xml includes the following configuration items: <jbpm-context> <service name="persistence" factory="org.jbpm.persistence.jta.JtaDbPersistenceServiceFactory" /> <service name="message" factory="org.jbpm.msg.jms.JmsMessageServiceFactory" /> <service name="scheduler" factory="org.jbpm.scheduler.ejbtimer.EntitySchedulerServiceFactory" /> </jbpm-context> JtaDbPersistenceServiceFactory enables jBPM to participate in JTA transactions. If an existing transaction is underway, the JTA persistence service clings to it; otherwise it starts a new transaction. The jBPM enterprise beans are configured to delegate transaction management to the container. However, if you create a JbpmContext in an environment where no transaction is active (say, in a web application), one will be started automatically. The JTA persistence service factory has the configurable fields described below. isCurrentSessionEnabled: if true, jBPM will use the "current" Hibernate session associated with the ongoing JTA transaction. This is the default setting. See the Hibernate guide, section 2.5 Contextual sessions for a description of the behavior. You can take advantage of the contextual session mechanism to use the same session used by jBPM in other parts of your application through a call to SessionFactory.getCurrentSession(). On the other hand, you might want to supply your own Hibernate session to jBPM. To do so, set isCurrentSessionEnabledto falseand inject the session via the JbpmContext.setSession(session)method. 
This will also ensure that jBPM uses the same Hibernate session as other parts of your application. Note, the Hibernate session can be injected into a stateless session bean via a persistence context, for example. isTransactionEnabled: a truevalue for this field means jBPM will begin a transaction through Hibernate's transaction API (section 11.2. Database transaction demarcation of the Hibernate manual shows the API) upon JbpmConfiguration.createJbpmContext(), commit the transaction and close the Hibernate session upon JbpmContext.close(). This is NOT the desired behaviour when jBPM is deployed as an ear, hence isTransactionEnabledis set to falseby default. JmsMessageServiceFactory leverages the reliable communication infrastructure exposed through JMS interfaces to deliver asynchronous continuation messages to the JobListenerBean. The JMS message service factory exposes the following configurable fields. connectionFactoryJndiName: the JNDI name of the JMS connection factory. Defaults to java:comp/env/jms/JbpmConnectionFactory. destinationJndiName: the JNDI name of the JMS destination where job messages are sent. Must match the destination where JobListenerBeanreceives messages. Defaults to java:comp/env/jms/JobQueue. isCommitEnabled: tells whether the message service should create a transacted session and either commit or rollback on close. Messages produced by the JMS message service are never meant to be received before the database transaction commits. The J2EE tutorial states "when you create a session in an enterprise bean, the container ignores the arguments you specify, because it manages all transactional properties for enterprise beans". Unfortunately the tutorial fails to indicate that said behavior is not prescriptive. JBoss ignores the transactedargument if the connection factory supports XA, since the overall JTA transaction controls the session. Otherwise, transactedproduces a locally transacted session. In Weblogic, JMS transacted sessions are agnostic to JTA transactions even if the connection factory is XA enabled. With isCommitEnabledset to false(the default), the message service creates a nontransacted, auto-acknowledging session. Such a session works with containers that either disregard the creation arguments or do not bind transacted sessions to JTA. Conversely, setting isCommitEnabledto truecauses the message service to create a transacted session and commit or rollback according to the TxService.isRollbackOnlymethod. EntitySchedulerServiceFactory builds on the transactional notification service for timed events provided by the EJB container to schedule business process timers. The EJB scheduler service factory has the configurable field described below. timerEntityHomeJndiName: the name of the TimerEntityBean's local home interface in the JNDI initial context. Defaults to java:comp/env/ejb/TimerEntityBean. hibernate.cfg.xml includes the following configuration items that may be modified to support other databases or application servers. 
<!-- sql dialect --> <property name="hibernate.dialect">org.hibernate.dialect.HSQLDialect</property> <property name="hibernate.cache.provider_class"> org.hibernate.cache.HashtableCacheProvider </property> <!-- DataSource properties (begin) --> <property name="hibernate.connection.datasource">java:comp/env/jdbc/JbpmDataSource</property> <!-- DataSource properties (end) --> <!-- JTA transaction properties (begin) --> <property name="hibernate.transaction.factory_class"> org.hibernate.transaction.JTATransactionFactory </property> <property name="hibernate.transaction.manager_lookup_class"> org.hibernate.transaction.JBossTransactionManagerLookup <) --> You may replace the hibernate.dialect with one that corresponds to your database management system. The Hibernate reference guide enumerates the available database dialects in section 3.4.1 SQL dialects. HashtableCacheProvider can be replaced with other supported cache providers. Refer to section 19.2 The second level cache of the Hibernate manual for a list of the supported cache providers. The JBossTransactionManagerLookup may be replaced with a strategy appropriate to applications servers other than JBoss. See section 3.8.1 Transaction strategy configuration to find the lookup class that corresponds to each application server. Note that the JNDI name used in hibernate.connection.datasource is, in fact, a resource manager reference, portable across application servers. Said reference is meant to be bound to an actual data source in the target application server at deployment time. In the included jboss.xml descriptor, the reference is bound to java:JbpmDS. Out of the box, jBPM is configured to use the JTATransactionFactory. If an existing transaction is underway, the JTA transaction factory uses it; otherwise it creates a new transaction. The jBPM enterprise beans are configured to delegate transaction management to the container. However, if you use the jBPM APIs in a context where no transaction is active (say, in a web application), one will be started automatically. If your own EJBs use container-managed transactions and you want to prevent unintended transaction creations, you can switch to the CMTTransactionFactory. With that setting, Hibernate will always look for an existing transaction and will report a problem if none is found. Client components written directly against the jBPM APIs that wish to leverage the enterprise services must ensure that their deployment descriptors have the appropriate environment references in place. The descriptor below can be regarded as typical for a client session bean. <session> <ejb-name>MyClientBean</ejb-name> <home>org.example.RemoteClientHome</home> <remote>org.example.RemoteClient</remote> <local-home>org.example.LocalClientHome</local-home> <local>org.example.LocalClient</local> <ejb-class>org.example.ClientBean</ejb-class> <session-type>Stateless</session-type> <transaction-type>Container</transaction-type> .ConnnectionFactory</res-type> <res-auth>Container</res-auth> </resource-ref> <message-destination-ref> <message-destination-ref-name>jms/JobQueue</message-destination-ref-name> <message-destination-type>javax.jms.Queue</message-destination-type> <message-destination-usage>Produces</message-destination-usage> </message-destination-ref> </session> Provided the target application server was JBoss, the above environment references could be bound to resources in the target operational environment as follows. Note that the JNDI names match the values used by the jBPM enterprise beans. 
<session> <ejb-name>MyClientBean</ejb-name> <jndi-name>ejb/MyClientBean</jndi-name> <local-jndi-name>java:ejb/MyClientBean</local-jndi-name> > </session> In case the client component is a web application, as opposed to an enterprise bean, the deployment descriptor would look like this: <web-app> <servlet> <servlet-name>MyClientServlet</servlet-name> <servlet-class>org.example.ClientServlet</servlet-class> </servlet> <servlet-mapping> <servlet-name>MyClientServlet</servlet-name> <url-pattern>/client/servlet</url-pattern> </servlet-mapping> -link>TimerEntityBean</ejb-link> <.ConnectionFactory</res-type> <res-auth>Container</res-auth> </resource-ref> <message-destination-ref> <message-destination-ref-name>jms/JobQueue</message-destination-ref-name> <message-destination-type>javax.jms.Queue</message-destination-type> <message-destination-usage>Produces</message-destination-usage> <message-destination-link>JobQueue</message-destination-link> </message-destination-ref> </web-app> The above environment references could be bound to resources in the target operational environment as follows, if the target application server was JBoss. <jboss-web> > </jboss-web> Table of Contents A EL expressions or beanshell scripts that return a boolean. At runtime the decision node will FIRST loop over its leaving transitions THAT HAVE a condition specified. It will evaluate those transitions first in the order as specified in the xml. The first transition for which the conditions resolves to 'true' will be taken. If all transitions with a condition resolve to false, the default transition (the first in the XML) is taken. Another approach is to use an expression that returns the name of the transition to take. With the 'expression' attribute, you can specify an expression on the decision that has to resolve to one of the leaving transitions of the decision node. Next aproach is the 'handler' element on the decision, that element can be used to specify an implementation of the DecisionHandler interface can be specified on the decision node. the section called ) of the 'interview' process is created. If no explicit version is specified, the latest version of the sub process as known when deploying the 'hire' process is used. To make jBPM instantiate a specific version the optional version attribute can be specified. To postpone binding the specified or latest version until actually creating the sub process, the optional binding attribute should be set to the section called “Graph execution” and ???, Chapter 13,... Table of Contents Context: java.lang.String java.lang.Boolean java.lang.Character java.lang.Float java.lang.Double java.lang.Long java.lang.Byte java.lang.Short java.lang.Integer java.util.Date byte[] java.io.Serializable classes that are persistable with hibernate the section called the section called “Other. Table of Contents The the section called : actor-idof the task element in the process: pooled-actor-idsof the task element in the process setting the actorId property of the taskInstance to null. the section called the section called the section called “Exception handling”. As on nodes, timers can be specified on tasks. See the section called ); Table of Contents This> <transition name='time-out-transition' to='...' /> </state> A timer that is specified on a node, is not executed after the node is left. Both the transition and the action are optional. When a timer is executed, the following events occur in sequence : timer. 
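Coming back to the handler element on decisions described earlier: a DecisionHandler is simply a class whose decide method returns the name of the leaving transition to take. A minimal sketch (the decide(ExecutionContext) signature is the commonly documented contract, so verify it against the javadocs of your version; the variable and transition names are invented):

public class BudgetDecisionHandler implements org.jbpm.graph.node.DecisionHandler {

  public String decide(org.jbpm.graph.exe.ExecutionContext executionContext) throws Exception {
    // read a process variable and pick one of the leaving transitions by name
    Number amount = (Number) executionContext.getContextInstance().getVariable("amount");
    return (amount != null && amount.doubleValue() > 1000.0) ? "escalate" : "approve";
  }
}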
j job executor is the component that resumes process executions asynchronously. It waits for job messages to arrive over an asynchronous messaging system and executes them. The two job messages used for asynchronous continuations are ExecuteNodeJob and ExecuteActionJob. These job messages are produced by the process execution. During process execution, for each node or action that has to be executed asynchronously, a Job (POJO) will be dispatched to the MessageService. The message service is associated with the JbpmContext and it just collects all the messages that have to be sent. The messages will be sent as part of JbpmContext.close(). That method cascades the close() invocation to all of the associated services. The actual services can be configured in jbpm.cfg.xml. One of the services, DbMessageService, is configured by default and will notify the job executor that new job messages are available. The graph execution mechanism uses the interfaces MessageServiceFactory and MessageService to send messages. This is to make the asynchronous messaging service configurable (also in jbpm.cfg.xml). In Java EE environments, the DbMessageService can be replaced with the JmsMessageService to leverage the application server's capabilities. Here's how the job executor works in a nutshell: Jobs are records in the database. Jobs are objects and can be executed, too. Both timers and async messages are jobs. For async messages, the dueDate is simply set to the current time when they are inserted. The job executor must execute the jobs. This is done in 2 phases: 1) a job executor thread must acquire a job and 2) the thread that acquired the job must execute it. Acquiring a job and executing the job are done in 2 separate transactions. A thread acquires a job by putting its name into the owner field of the job. Each thread has a unique name based on ip-address and sequence number. Hibernate's optimistic locking is enabled on Job-objects. So if 2 threads try to acquire a job concurrently, one of them will get a StaleObjectException and rollback. Only the first one will succeed. The thread that succeeds in acquiring a job is now responsible for executing it in a separate transaction. A thread could die between acquisition and execution of a job. To clean-up after those situations, there is one lock-monitor thread per job executor that checks the lock times. Jobs that are locked for more then 30 mins (by default) will be unlocked so that they can be executed by another job. The required isolation level should be set to REPEATABLE_READ for hibernate's optimistic locking to work correctly. That isolation level will guarantee that update JBPM_JOB job set job.version = 2 job.lockOwner = '192.168.1.3:2' where job.version = 1 will only result in 1 row updated in exactly 1 of the competing transactions. Non-Repeatable Reads means that the following anomaly can happen: A transaction re-reads data it has previously read and finds that data has been modified by another transaction, one that has been committed since the transaction's previous read. Non-Repeatable reads are a problem for optimistic locking and therefore isolation level READ_COMMITTED is not enough cause it allows for Non-Repeatable reads to occur. So REPEATABLE_READ is required if you configure more than one job executor thread. When using jBPM's built-in asynchronous messaging, job messages will be sent by persisting them to the database. This message persisting can be done in the same transaction/JDBC connection as the jBPM process updates. 
The job messages will be stored in the JBPM_JOB table. Table of Contents This chapter describes the business calendar of jBPM. The business calendar knows about business hours and is used in the calculation of due dates for tasks and timers. The business calendar is able to calculate a due date by adding a duration to or subtracting it from a base date. If the base date is omitted, the 'current' date is used. As mentioned, the due date is composed of a duration and a base date. If this base date is omitted, the duration is relative to the date (and time) at the moment of calculating the due date. The format is: duedate ::= [<basedate> +/-] <duration> A duration is specified in absolute or in business hours. Let's look at the syntax: duration ::= <quantity> [business] <unit> Where <quantity> is a piece of text that is parsable with Double.parseDouble(quantity). <unit> is one of {second, seconds, minute, minutes, hour, hours, day, days, week, weeks, month, months, year, years}. Adding the optional indication business means that only business hours should be taken into account for this duration. Without the indication business, the duration will be interpreted as an absolute time period. A base date is specified as an expression. Let's look at the syntax: basedate ::= <EL> Where <EL> is any Java Expression Language expression that resolves to a Java Date or Calendar object. Referencing a variable of another object type, even a String in a date format like '2036-02-12', will throw a JbpmException. NOTE: This basedate is supported on the duedate attributes of a plain timer, on the reminder of a task and the timer within a task. It is not supported on the repeat attributes of these elements. The file org/jbpm/calendar/jbpm.business.calendar.properties specifies what business hours are. The configuration file can be customized and a modified copy can be placed in the root of the classpath.
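Before looking at the default specification below, note that the same calendar can also be used directly from Java code, for instance to compute a due date by hand. A sketch, assuming the BusinessCalendar and Duration classes from the org.jbpm.calendar package keep the add(Date, Duration) shape:

BusinessCalendar businessCalendar = new BusinessCalendar();
// a due date two business days from now, skipping evenings, weekends and configured holidays
Date dueDate = businessCalendar.add(new Date(), new Duration("2 business days"));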
This is the example business hour specification that is shipped by default in jbpm.business.calendar.properties:

hour.format=HH:mm
#weekday ::= [<daypart> [& <daypart>]*]
#daypart ::= <start-hour>-<to-hour>
#start-hour and to-hour must be in the hour.format
#dayparts have to be ordered
weekday.monday= 9:00-12:00 & 12:30-17:00
weekday.tuesday= 9:00-12:00 & 12:30-17:00
weekday.wednesday= 9:00-12:00 & 12:30-17:00
weekday.thursday= 9:00-12:00 & 12:30-17:00
weekday.friday= 9:00-12:00 & 12:30-17:00
weekday.saturday=
weekday.sunday=
day.format=dd/MM/yyyy
# holiday syntax: <holiday>
# holiday period syntax: <start-day>-<end-day>
# below are the belgian official holidays
holiday.1= 01/01/2005 # nieuwjaar
holiday.2= 27/3/2005 # pasen
holiday.3= 28/3/2005 # paasmaandag
holiday.4= 1/5/2005 # feest van de arbeid
holiday.5= 5/5/2005 # hemelvaart
holiday.6= 15/5/2005 # pinksteren
holiday.7= 16/5/2005 # pinkstermaandag
holiday.8= 21/7/2005 # my birthday
holiday.9= 15/8/2005 # moederkesdag
holiday.10= 1/11/2005 # allerheiligen
holiday.11= 11/11/2005 # wapenstilstand
holiday.12= 25/12/2005 # kerstmis
business.day.expressed.in.hours= 8
business.week.expressed.in.hours= 40
business.month.expressed.in.business.days= 21
business.year.expressed.in.business.days= 220

Table of Contents This chapter describes jPDL's mail support (see also the section called “Specifying mail recipients”). Mails can be defined in templates, and in the process you can overwrite properties of the templates like this: <mail template='sillystatement' actors="#{president}" /> More about templates can be found in the section on mail templates. The default value for the From address used in jPDL mails is jbpm@noreply. The from address of mails can be configured in the jBPM configuration file jbpm.cfg.xml with key 'jbpm.mail.from.address' like this: <jbpm-configuration> ... <string name='jbpm.mail.from.address' value='jbpm@yourcompany.com' /> </jbpm-configuration> The mail class used to send the mails can be customized with the 'jbpm.mail.class.name' configuration string in the jbpm.cfg.xml like this: <jbpm-configuration> ... <string name='jbpm.mail.class.name' value='com.your.specific.CustomMail' /> </jbpm-configuration> The customized mail class will be read during parsing, and actions will be configured in the process that reference the configured (or the default) mail class name. So if you change the property, all the processes that were already deployed will still refer to the old mail class name, but they can be easily updated with one simple update statement to the jbpm database. If you need a mail server that is easy to install, check out JBossMail Server or Apache James. Table of Contents The purpose of logging is to keep track of the history of a process execution. As the runtime data of a process execution changes, all the deltas are stored in the logs. Process logging, which is covered in this chapter, is not to be confused with software logging. Software logging traces the execution of a software program (usually for debugging purposes). Process logging traces the execution of process instances. There are various use cases for process logging information. The most obvious is consulting the process history by participants of a process execution. Another use case is Business Activity Monitoring (BAM). BAM will query or analyse the logs of process executions to find useful statistical information about the business process. E.g. how much time is spent on average in each step of the process? Where are the bottlenecks in the process? ...
This information is key to implement real business process management in an organisation. Real business process management is about how an organisation manages their processes, how these are supported by information technology *and* how these two improve the other in an iterative process. Next use case is the undo functionality. Process logs can be used to implement the undo. Since the logs contain the delta's of the runtime information, the logs can be played in reverse order to bring the process back into a previous state. Logs are produced by jBPM modules while they are running process executions. But also users can insert process logs. A log entry is a java object that inherits from org.jbpm.logging.log.ProcessLog. Process log entries are added to the LoggingInstance. The LoggingInstance is an optional extension of the ProcessInstance. Various kinds of logs are generated by jBPM : graph execution logs, context logs and task management logs. For more information about the specific data contained in those logs, we refer to the javadocs. A good starting point is the class org.jbpm.logging.log.ProcessLog since from that class you can navigate down the inheritance tree. The LoggingInstance will collect all the log entries. When the ProcessInstance is saved, all the logs in the LoggingInstance will be flushed to the database. The logs-field of a ProcessInstance is not mapped with hibernate to avoid that logs are retrieved from the database in each transactions. Each ProcessLog is made in the context of a path of execution ( Token) and hence, the ProcessLog refers to that token. The Token also serves as an index-sequence generator for the index of the ProcessLog in the Token. This will be important for log retrieval. That way, logs that are produced in subsequent transactions will have sequential sequence numbers. (wow, that a lot of seq's in there :-s ). The API method for adding process logs is the following. public class LoggingInstance extends ModuleInstance { ... public void addLog(ProcessLog processLog) {...} ... } The UML diagram for logging information looks like this: A CompositeLog is a special kind of log entry. It serves as a parent log for a number of child logs, thereby creating the means for a hierarchical structure in the logs. The API for inserting a log is the following. public class LoggingInstance extends ModuleInstance { ... public void startCompositeLog(CompositeLog compositeLog) {...} public void endCompositeLog() {...} ... } The CompositeLogs should always be called in a try-finally-block to make sure that the hierarchical structure of logs is consistent. For example: startCompositeLog(new MyCompositeLog()); try { ... } finally { endCompositeLog(); } For deployments where logs are not important, it suffices to remove the logging line in the jbpm-context section of the jbpm.cfg.xml configuration file: <service name='logging' factory='org.jbpm.logging.db.DbLoggingServiceFactory' /> In case you want to filter the logs, you need to write a custom implementation of the LoggingService that is a subclass of DbLoggingService. Also you need to create a custom logging ServiceFactory and specify that one in the factory attribute. As said before, logs cannot be retrieved from the database by navigating the LoggingInstance to its logs. Instead, logs of a process instance should always be queried from the database. The LoggingSession has 2 methods that serve this purpose. The first method retrieves all the logs for a process instance. These logs will be grouped by token in a Map. 
The map will associate a List of ProcessLogs with every Token in the process instance. The list will contain the ProcessLogs in the same ordered as they were created. public class LoggingSession { ... public Map findLogsByProcessInstance(long processInstanceId) {...} ... } The second method retrieves the logs for a specific Token. The returned list will contain the ProcessLogs in the same ordered as they were created. public class LoggingSession { public List findLogsByToken(long tokenId) {...} ... } Sometimes you may want to apply data warehousing techniques to the jbpm process logs. Data warehousing means that you create a separate database containing the process logs to be used for various purposes. There may be many reasons why you want to create a data warehouse with the process log information. Sometimes it might be to offload heavy queryies from the 'live' production database. In other situations it might be to do some extensive analysis. Data warehousing even might be done on a modified database schema which is optimized for its purpose. In this section, we only want to propose the technique of warehousing in the context of jBPM. The purposes are too diverse, preventing a generic solution to be included in jBPM that could cover all those requirements. Table of Contents JPDL specifies an xml schema and the mechanism to package all the process definition related files into a process archive. A process archive is a zip file. The central file in the process archive is processdefinition.xml. The main information in that file is the process graph. The processdefinition.xml also contains information about actions and tasks. A process archive can also contain other process related files such as classes, ui-forms for tasks, ... Deploying process archives can be done in 3 ways: with the process designer tool, with an ant task or programatically. Deploying a process archive with the designer tool is supported in the starters-kit. Right click on the process archive folder to find the "Deploy process archive" option. The starters-kit server contains the jBPM webapp, which has a servlet to upload process archives called ProcessUploadServlet. This servlet is capable of uploading process archives and deploying them to the default jBPM instance configured. Deploying a process archive with an ant task can be done as follows: <target name="deploy.par"> <taskdef name="deploypar" classname="org.jbpm.ant.DeployProcessTask"> <classpath --make sure the jbpm-[version].jar is in this classpath--/> </taskdef> <deploypar par="build/myprocess.par" /> </target> To deploy more process archives at once, use the nested fileset elements. The file attribute itself is optional. Other attributes of the ant task are: Process archives can also be deployed programmatically with the class org.jbpm.jpdl.par.ProcessArchiveDeployer What happens when we have a process definition deployed, many executions are not yet finished and we have a new version of the process definition that we want to deploy ? Process instances always execute to the process definition that they are started in. But jBPM allows for multiple process definitions of the same name to coexist in the database. So typically, a process instance is started in the latest version available at that time and it will keep on executing in that same process definition for its complete lifetime. When a newer version is deployed, newly created instances will be started in the newest version, while older process instances keep on executing in the older process defintions. 
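A short sketch of how this versioning behaves through the API is shown below. The tiny process XML is only a placeholder, and parseXmlString/findLatestProcessDefinition are used here as commonly documented helper methods; double-check the exact names in the javadocs of your version.

JbpmContext jbpmContext = jbpmConfiguration.createJbpmContext();
try {
  // deploying a definition with an existing name creates a new, higher version
  ProcessDefinition newVersion = ProcessDefinition.parseXmlString(
      "<process-definition name='hire'>" +
      "  <start-state><transition to='end' /></start-state>" +
      "  <end-state name='end' />" +
      "</process-definition>");
  jbpmContext.deployProcessDefinition(newVersion);

  // instances created from now on are bound to the latest version...
  ProcessInstance processInstance = jbpmContext.newProcessInstance("hire");

  // ...while instances started against older versions keep running in those versions
  ProcessDefinition latest = jbpmContext.getGraphSession().findLatestProcessDefinition("hire");
} finally {
  jbpmContext.close();
}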
If the process includes references to Java classes, the java classes can be made available to the jBPM runtime environment in 2 ways : by making sure these classes are visible to the jBPM classloader. This usually means that you can put your delegation classes in a .jar file next to the jbpm-[version].jar. In that case, all the process definitions will see that same class file. The java classes can also be included in the process archive. When you include your delegation classes in the process archive (and they are not visible to the jbpm classloader), jBPM will also version these classes inside the process definition. More information about process classloading can be found in the section called “Delegation” When a process archive gets deployed, it creates a process definition in the jBPM database. Process definitions can be versioned on the basis of the process definition name. When a named process archive gets deployed, the deployer will assign a version number. To assign this number, the deployer will look up the highest version number for process definitions with the same name and adds 1. Unnamed process definitions will always have version number -1. Changing process definitions after they are deployed into the jBPM database has many potential pitfalls. Therefor, this is highly discouraged. Actually, there is a whole variety of possible changes that can be made to a process definition. Some of those process definitions are harmless, but some other changes have implications far beyond the expected and desirable. So please consider migrating process instances to a new definition over this approach. In case you would consider it, these are the points to take into consideration: Use hibernate's update: You can just load a process definition, change it and save it with the hibernate session. The hibernate session can be accessed with the method JbpmContext.getSession(). The second level cache: A process definition would need to be removed from the second level cache after you've updated an existing process definition. See also the section called “Second level cache” An alternative approach to changing process definitions might be to convert the executions to a new process definition. Please take into account that this is not trivial due to the long-lived nature of business processes. Currently, this is an experimental area so for which there are not yet much out-of-the-box support. As you know there is a clear distinction between process definition data, process instance data (the runtime data) and the logging data. With this approach, you create a separate new process definition in the jBPM database (by e.g. deploying a new version of the same process). Then the runtime information is converted to the new process definition. This might involve a translation cause tokens in the old process might be pointing to nodes that have been removed in the new version. So only new data is created in the database. But one execution of a process is spread over two process instance objects. This might become a bit tricky for the tools and statistics calculations. When resources permit us, we are going to add support for this in the future. E.g. a pointer could be added from one process instance to it's predecessor. Delegation is the mechanism used to include the users' custom code in the execution of processes. The jBPM class loader is the class loader that loads the jBPM classes. Meaning, the classloader that has the library jbpm-3.x.jar in its classpath. 
To make classes visible to the jBPM classloader, put them in a jar file and put the jar file besides the jbpm-3.x.jar. E.g. in the WEB-INF/lib folder in the case of webapplications. Delegation classes are loaded with the process class loader of their respective process definition. The process class loader is a class loader that has the jBPM classloader as a parent. The process class loader adds all the classes of one particular process definition. You can add classes to a process definition by putting them in the /classes folder in the process archive. Note that this is only useful when you want to version the classes that you add to the process definition. If versioning is not necessary, it is much more efficient to make the classes available to the jBPM class loader. If the resource name doesn't start with a slash, resources are also loaded from the /classes directory in the process archive. If you want to load resources outside of the classes directory, start with a double slash ( // ). For example to load resource data.xml wich is located next to the processdefinition.xml on the root of the process archive file, you can do clazz.getResource("//data.xml") or classLoader.getResourceAsStream("//data.xml") or any of those variants. Delegation classes contain user code that is called from within the execution of a process. The most common example is an action. In the case of action, an implementation of the interface ActionHandler can be called on an event in the process. Delegations are specified in the processdefinition.xml. 3 pieces of data can be supplied when specifying a delegation : Next is a description of all the configuration types: This is the default configuration type. The config-type field will first instantiate an object of the delegation class and then set values in the fields of the object as specified in the configuration. The configuration is xml, where the elementnames have to correspond with the field names of the class. The content text of the element is put in the corresponding field. If necessary and possible, the content text of the element is converted to the field type. Supported type conversions: java.lang.Stringthis can be indicated by specifying a type attribute with the fully qualified type name. For example, following snippet will inject an ArrayList of Strings into field 'numbers': <numbers> <element>one</element> <element>two</element> <element>three</element> </numbers> The text in the elements can be converted to any object that has a String constructor. To use another type then String, specify the element-type in the field element ('numbers' in this case). Here's another example of a map: <numbers> <entry><key>one</key><value>1</value></entry> <entry><key>two</key><value>2</value></entry> <entry><key>three</key><value>3</value></entry> </numbers> keyand one element value. The key and element are both parsed using the conversion rules recursively. Just the same as with collections, a conversion to java.lang.Stringis assumed if no typeattribute is specified. For example in the following class... public class MyAction implements ActionHandler { // access specifiers can be private, default, protected or public private String city; Integer rounds; ... } ...this is a valid configuration: ... <action class="org.test.MyAction"> <city>Atlanta</city> <rounds>5</rounds> </action> ... Same as config-type field but then the properties are set via setter methods, rather then directly on the fields. The same conversions are applied. 
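Continuing the MyAction example above: with the bean config-type the same configuration would be injected through setters, and since MyAction is an ActionHandler it also needs an execute method. A sketch (the execute(ExecutionContext) signature is the usual ActionHandler contract; the variable names are illustrative):

public class MyAction implements ActionHandler {

  private String city;
  private Integer rounds;

  // with config-type="bean" the parser calls these setters instead of setting the fields directly
  public void setCity(String city) {
    this.city = city;
  }

  public void setRounds(Integer rounds) {
    this.rounds = rounds;
  }

  public void execute(ExecutionContext executionContext) throws Exception {
    // store the configured values as process variables, just to show the handler doing something
    executionContext.getContextInstance().setVariable("city", city);
    executionContext.getContextInstance().setVariable("rounds", rounds);
  }
}

The next config-type, constructor, works differently.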
This instantiator will take the complete contents of the delegation xml element and passes this as text in the delegation class constructor. For some of the delegations, there is support for a JSP/JSF EL like expression language. In actions, assignments and decision conditions, you can write an expression like e.g. expression="#{myVar.handler[assignments].assign}" The basics of this expression language can be found in the J2EE tutorial. The jPDL expression language is similar to the JSF expression language. Meaning that jPDL EL is based on JSP EL, but it uses #{...} notation and that it includes support for method binding. Depending on the context, the process variables or task instance variables can be used as starting variables along with the following implicit objects: This feature becomes really powerfull in a JBoss SEAM environment. Because of the integration between jBPM and JBoss SEAM, all of your backed beans, EJB's and other one-kind-of-stuff becomes available right inside of your process definition. Thanks Gavin ! Absolutely awsome ! :-) The jPDL schema is the schema used in the file processdefinition.xml in the process archive. When parsing a jPDL XML document, jBPM will validate your document against the jPDL schema when two conditions are met: first, the schema has to be referenced in the XML document like this <process-definition ... </process-definition> And second, the xerces parser has to be on the classpath. The jPDL schema can be found in ${jbpm.home}/src/java.jbpm/org/jbpm/jpdl/xml/jpdl-3.2.xsd or at. Table of Contents Security features of jBPM are still in alpha stage. This chapter documents the pluggable authentication and authorization. And what parts of the framework are finished and what parts not yet. On the framework part, we still need to define a set of permissions that are verified by the jbpm engine while a process is being executed. Currently you can check your own permissions, but there is not yet a jbpm default set of permissions. Only one default authentication implementation is finished. Other authentication implementations are envisioned, but not yet implemented. Authorization is optional, and there is no authorization implementation yet. Also for authorization, there are a number of authorization implementations envisioned, but they are not yet worked out. But for both authentication and authorization, the framework is there to plug in your own authentication and authorization mechanism. Authentication is the process of knowing on who's behalf the code is running. In case of jBPM this information should be made available from the environment to jBPM. Cause jBPM is always executed in a specific environment like a webapp, an EJB, a swing application or some other environment, it is always the surrounding environment that should perform authentication. In a few situations, jBPM needs to know who is running the code. E.g. to add authentication information in the process logs to know who did what and when. Another example is calculation of an actor based on the current authenticated actor. In each situation where jBPM needs to know who is running the code, the central method org.jbpm.security.Authentication.getAuthenticatedActorId() is called. That method will delegate to an implementation of org.jbpm.security.authenticator.Authenticator. By specifying an implementation of the authenticator, you can configure how jBPM retrieves the currently authenticated actor from the environment. 
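A custom authenticator is typically a very small class. The sketch below reads the actor id from a thread local that the surrounding application (for example a servlet filter) would have to populate; the single getAuthenticatedActorId() method is assumed to be the whole Authenticator contract, so check the interface in your jBPM version:

public class MyWebAppAuthenticator implements org.jbpm.security.authenticator.Authenticator {

  // populated by the surrounding application, e.g. a servlet filter (hypothetical)
  private static final ThreadLocal currentActorId = new ThreadLocal();

  public static void setCurrentActorId(String actorId) {
    currentActorId.set(actorId);
  }

  public String getAuthenticatedActorId() {
    return (String) currentActorId.get();
  }
}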
The default authenticator is org.jbpm.security.authenticator.JbpmDefaultAuthenticator. That implementation will maintain a ThreadLocal stack of authenticated actorId's. Authenticated blocks can be marked with the methods JbpmDefaultAuthenticator.pushAuthenticatedActorId(String) and JbpmDefaultAuthenticator.popAuthenticatedActorId(). Be sure to always put these demarcations in a try-finally block. For the push and pop methods of this authenticator implementation, there are convenience methods supplied on the base Authentication class. The reason that the JbpmDefaultAuthenticator maintains a stack of actorIds instead of just one actorId is simple: it allows the jBPM code to distinct between code that is executed on behalf of the user and code that is executed on behalf of the jbpm engine. See the javadocs for more information. Authorization is validating if an authenticated user is allowed to perform a secured operation. The jBPM engine and user code can verify if a user is allowed to perform a given operation with the API method org.jbpm.security.Authorization.checkPermission(Permission). The Authorization class will also delegate that call to a configurable implementation. The interface for pluggin in different authorization strategies is org.jbpm.security.authorizer.Authorizer. In the package org.jbpm.security.authorizer there are some examples that show intentions of authorizer implementations. Most are not fully implemented and none of them are tested. Also still todo is the definition of a set of jBPM permissions and the verification of those permissions by the jBPM engine. An example could be verifying that the current authenticated user has sufficient privileges to end a task by calling Authorization.checkPermission(new TaskPermission("end", Long.toString(id))) in the TaskInstance.end() method. Table of Contents Since>"); ... The.
http://docs.jboss.com/jbpm/v3.2/userguide/html_single/
CC-MAIN-2014-10
en
refinedweb
>>: 4. Re: How can i use switchyard camel with activemq?alex liu May 6, 2012 11:20 PM (in response to alex liu) After trying to deloy activemq component using switchyard CDI, i got failed. 1.I put the ActiveMQComponentFactory.java to project switchyard-quickstart-camel-service, 2.write a sample route class: public interface MyTestDSL { public void sendMessage(String input); } @Route(MyTestDSL.class) public class MyTestDSLBuilder extends RouteBuilder{ @Override public void configure() { from("switchyard://MyTestDSL") .log("Message received in Java DSL Route") .log("${body}") .split(body(String.class).tokenize("\n")) .filter(body(String.class).startsWith("sally:")) .to("activemq://TestQueue"); } } 3.generate jar using maven,start jboss as7,then deploy this jar 4.finally,i got those errors bellow: 10:41:09,751 ERROR [org.jboss.msc.service.fail] (MSC service thread 1-4) MSC00001: Failed to start service jboss.deployment.unit."switchyard-quickstart-camel-service.jar".SwitchYardService: org.jboss.msc.service.StartException in service jboss.deployment.unit."switchyard-quickstart-camel-service.jar".SwitchYardService: org.switchyard.exception.SwitchYardException: Failed to create route route2 at: >>> Split[{tokenize(bodyAs[java.lang.String], )} -> [Filter[{bodyAs[java.lang.String] startsWith sally:} -> [To[activemq://alexQueue]]]]] <<< in route: Route[[From[switchyard://MyTestDSL?namespace=urn%3Aswitchyar... because of Failed to resolve endpoint: activemq://alexQueue due to: No component found with scheme: activemq absolutly,component "activemq" is failed to register. 5. Re: How can i use switchyard camel with activemq?Keith Babo May 7, 2012 6:56 AM (in response to alex liu) This is because the ActiveMQ Camel component has not been installed as a module in your AS7 instance. Dan is cooking up a quickstart that will likely help you a lot. My suggestion would be to wait a bit on the ActiveMQ front and take a look at what he produces. 6. Re: How can i use switchyard camel with activemq?Daniel Bevenius May 7, 2012 9:23 AM (in response to Keith Babo) Hi, attached you'll find a zip file containing a AS7 module for activemq-camel which can be unzipped to your AS7 installation: unzip camel-activemq-module.zip -d /path/to/as7/home We've added a quickstart but we have not pushed it upstream yet as we need to descide where to put the Camel modules first. But if might be helpful to have as a reference and perhaps start but verifying that you can get it working and then make changes to implement your specific use case. The quickstart uses a XML DSL as opposed to the JavaDSL that your example is using, but it should be able to find the ActiveMQComponent now. I'll post back when the quickstart and the modules are available upstream. Lets us know if you run into any issues. Regard, /Daniel - camel-activemq-module.zip 3.5 MB 7. Re: How can i use switchyard camel with activemq?Charles Moulliard Oct 17, 2013 4:29 PM (in response to Daniel Bevenius) Should be interesting to move this example to the quickstart github repo of jboss-switchyard repository 8. Re: How can i use switchyard camel with activemq?Keith Babo Oct 17, 2013 4:37 PM (in response to Charles Moulliard) There are some examples demonstrating ActiveMQ integration through JCA: Some background info on configuring ActiveMQ through JCA: SwitchYard JCA component and ActiveMQ Resource Adapter hth, keith
https://community.jboss.org/message/734038?tstart=0
CC-MAIN-2014-10
en
refinedweb
! This is part of the Shape Hierarchy so it should help you get started... public class Shape { } public class Shape2d extends Shape { } public class Shape3d extends Shape { } Thank you! but how about the 2d-shapes circle, square and triangle also the 3-shapes sphere, cube and tetrahedron? Do i have to put them in classes??? where should i put them? I'm still confused. Help me please... Thank you!!! You should search the web for some more information on polymorphism to help you. The other 2d and 3d shapes are subclasses of the Shape superclass, but you should be able to figure out how to implement them with my above code. The code above can be stated as: Shape is a superclass Shape2d is a subclass of Shape Shape3d is a subclass of Shape Whenever you see the word subclass it means that you are going to have to extend the functionality of a superclass. Hence the word "extends." Does any of this make sense to you? Oh Yeah! got it! I'll try my best to figure it out. Anyway, thanks for the tip! Its easiest if you break things down line by line. For example... "You must have your superclass shape and 2 subclasses two-dimensional shape and three-dimensional shape." //superclass shape public class Shape { } //subclass 2d shape public class Shape2d extends Shape { } //subclass 3d shape public class Shape3d extends Shape { } "Under two-dimensional shape, you have other subclasses, circle, square, and triangle." so now circle, square, and triangle are all subclasses of 2d shape so public class Circle extends Shape2d { } etc.. Understand it now? ahh! I see!!! i understand it now! So all of the 2d shapes extends shape2d and all 3d shapes extends shape3d?? Am i right??? Thank you very much! You're good!!! Yes, that is correct. Ok Thank you! I'll try figuring out whats next on this...hehe I still don"t get it! can somebody please help again??? I don't know what to put inside the classes!!! please help me! tNx!!! Your goal is to define a datatype Shape that can answer the question "Get Area?". You need to define the method getArea in each of the subclasses and declare the abstract method in the base class Shape. By declaring the abstract method in Shape, you're saying to the rest of the program that "a Shape is something that can answer the question 'Get Area'." Anything that is a Shape must be able to answer that question. Therefore, you need to define the method in your subclasses. Also, you need to define the shape-specific information that each type of shape would like to know about itself. Which part are you stuck on? I don't know. I don't know where to start at the first place. It's kind of new to me because our teacher haven't discussed it yet to us. Can you help me??? Start by creating all the classes like I showed in my previous posts. Next add all the methods to the classes. Next implement all the methods. Like this? public class Shape { } public class 2dshape extends Shape { } public class Circle extends 2dshape { } public class Square extends 2dshape { } public class Triangle extends 2dshape { } public class 3dshape extends Shape { } public class Sphere extends 3dshape { } public class Cube extends 3dshape { } public class Tetrahedron extends 3dshape { } So what's next? Thats a start but Java doesn't allow you to start a class with a number so thats why I named mine Shape2d and Shape3d, so just rename them. Now start adding all the constructors and methods to the classes. OK! Can somebody please help out again? I really don't know what to do. Reread the thread? People already told you. 
what have you coded so far?
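For reference, one possible way to flesh out the hierarchy discussed above, with the abstract getArea method declared in the base class (each public class goes in its own file; the area formulas are the standard ones):
public abstract class Shape {
    // every Shape must be able to answer "Get Area?"
    public abstract double getArea();
}

public abstract class Shape2d extends Shape {
}

public class Circle extends Shape2d {
    private final double radius;
    public Circle(double radius) {
        this.radius = radius;
    }
    public double getArea() {
        return Math.PI * radius * radius;
    }
}

public class Square extends Shape2d {
    private final double side;
    public Square(double side) {
        this.side = side;
    }
    public double getArea() {
        return side * side;
    }
}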
http://www.daniweb.com/software-development/java/threads/174062/polymorphism-in-javashape-hierarchy
CC-MAIN-2014-10
en
refinedweb
14 February 2011 18:12 [Source: ICIS news] LONDON (ICIS)--INEOS' 2010 fourth-quarter earnings before interest, tax, depreciation and amortisation (EBITDA) fell 8.2% to €270m ($365m) on squeezed European cracker margins, a port strike in France and a lightning strike in the US, INEOS said on Monday. The Switzerland-headquartered company said it made an €82m refinery inventory holding gain in the quarter. The historical cost (HC) earnings were well down on the €402m reported for the third quarter of 2010. Combined refining replacement cost (RC) EBITDA and chemicals historical cost EBITDA for the fourth quarter were €188m from €267m in the fourth quarter of 2009 and €406m in the third quarter of 2010, INEOS said. INEOS reports RC/HC EBITDA to measure compliance with its debt covenants. The events resulted in lost EBITDA of about €36m from the Refining and the Olefins & Polymers (O&P) segments. INEOS said its chemical intermediates businesses reported earnings up 29.2% at €186m. The acrylonitrile and phenol markets had been tight and the oxide business benefitted from strong derivatives demand. O&P Europe earnings were €5m from €54m in the equivalent period of 2009 although demand for olefins and polymers had been good. O&P North America earnings were €33m from €68m. Refining earnings were €46m on an historical cost basis compared with €28m. However, refining margins remained weak in the quarter and on a replacement cost basis the business reported a €36m loss compared with a €1m profit in the fourth quarter of 2009. PetroChina is currently considering a €1bn investment for a 50% share of the INEOS refining business. Full year 2010 RC/HC EBITDA were €1.56bn from €985m in 2009, the company said. It did not report sales figures for the fourth quarter or the full year. ($1 = €0.74) For more on INEOS visit ICIS company intelligence
http://www.icis.com/Articles/2011/02/14/9435277/ineos-q4-2010-earnings-fall-8.2-to-270m-on-strike-weather.html
CC-MAIN-2014-10
en
refinedweb
Timeline 08/25/08: - 23:34 Changeset [39599] by - Abstracted client connection in NotificationsClient object within … - 23:01 Ticket #16387 (git-core update to 1.6.0.1) created by - another version bump from 1.6.0 to 1.6.0.1 - 22:24 Changeset [39598] by - science/glue: add dependency on py25-pyrxp - 20:38 Changeset [39597] by - curl-ca-bundle: try to download the certdata file from the MacPorts mirror … - 20:35 Ticket #16386 (curl-ca-bundle 7.18.2_0 - checksums fail) closed by - fixed: Fixed in r39596. Updated to certdata.txt 1.49. Thanks for reporting the … - 20:35 Changeset [39596] by - curl-ca-bundle: update to 1.49; closes #16386 - 19:11 Changeset [39595] by - removed unecessary spaces - 19:08 Changeset [39594] by - Patch for gnu sed - 17:56 DarwinPorts edited by - un-linkify CamelCase names (diff) - 17:28 DarwinPorts edited by - do not spell out the bad domain (diff) - 17:02 Ticket #16386 (curl-ca-bundle 7.18.2_0 - checksums fail) created by - When I try to install curl with the ssl variant, the checksum check will … - 16:16 ram edited by - (diff) - 16:12 Ticket #15929 (bzrtools depend on py25-baz) closed by - fixed: bzrtools has been updated to 1.6.0, in r39590, which no longer includes … - 16:12 Changeset [39593] by - python/py25-baz: drop maintainership - 16:12 Changeset [39592] by - devel/bzr-gtk: update to 0.95.0 - 16:12 Changeset [39591] by - devel/bzr-rebase: update to 0.4 - 16:12 Changeset [39590] by - devel/bzrtools: update to 1.6.0 - 16:12 Changeset [39589] by - devel/bzr: update to 1.6 - 14:33 Changeset [39588] by - New port: textproc/aspell-dict-ta - 14:30 Changeset [39587] by - New port: textproc/aspell-dict-he - 14:30 Changeset [39586] by - New port: textproc/aspell-dict-el - 14:29 Changeset [39585] by - New port: textproc/aspell-dict-bn - 14:10 DarwinPorts created by - Add history about the project and the DarwinPorts name - 13:31 Ticket #16385 (Performance: No rule to make target Performance/dependencies, needed by ...) created by - When trying to install Performance 0.2.5 (or 0.2.4) on Mac OS X 10.5.4 … - 13:19 Changeset [39584] by - New port: textproc/aspell-dict-pt_BR - 13:19 Changeset [39583] by - New port: textproc/aspell-dict-pt_PT - 13:19 Changeset [39582] by - New port: textproc/aspell-dict-hu - 13:19 Changeset [39581] by - New port: textproc/aspell-dict-is - 13:14 Changeset [39580] by - New port: textproc/aspell-dict-fi - 12:58 Ticket #16384 (gnustep portgroup reinplace didn't change anything in ...) created by - Using the patch from #15514 on Mac OS X 10.5.4 Intel with Xcode 3.1 I see … - 12:55 Changeset [39579] by - Total number of ports parsed: 4996 Ports successfully parsed: 4996 … - 12:30 Changeset [39578] by - New port: textproc/aspell-dict-nb - 12:24 Changeset [39577] by - gnustep-make: change dist_subdir so those who had the old distfile pre … - 12:06 Changeset [39576] by - New port: textproc/aspell-dict-pl - 12:00 Milestone MacPorts 1.6 completed - - 11:59 Changeset [39575] by - ImageMagick: update to 6.4.3-4 All 699 tests behaved as expected (33 … - 10:41 Changeset [39574] by - net/gajim-devel: Use applications_dir variable. 
- 10:18 Changeset [39573] by - port1.0/portconfigure.tcl: Return an error if an invalid value was given … - 10:10 Changeset [39572] by - Revert r39535, which I accidentally committed on a release tag - 02:27 Changeset [39571] by - version 14.1.5; make port-set optimization level take precedence (remove … - 00:53 Changeset [39570] by - Total number of ports parsed: 4994 Ports successfully parsed: 4994 … 08/24/08: - 22:53 Changeset [39569] by - Removed IPC server code from MPNotifications files. - 22:36 Changeset [39568] by - Added IPCAdditions category to MPNotifications class. This is a cleaner … - 22:10 Changeset [39567] by - winetricks: update to 20080823: * DirectX 9: new version June 2008 * … - 22:08 Changeset [39566] by - Added MPHelperToolIPCTester.m file and corresponding target for additional … - 22:08 Ticket #16382 (Performance 0.2.5 is available) closed by - fixed: Updated Performance to 0.2.5 in r39565. - 22:08 Changeset [39565] by - Performance: update to 0.2.5; closes #16382 - 22:07 Ticket #16383 (gnustep portgroup should use destroot.violate_mtree yes) created by - When using the gnustep layout, the gnustep portgroup should indicate that … - 22:06 Changeset [39564] by - gcc-dp-* was renamed to gcc-mp-* quite some time ago - 21:18 Changeset [39563] by - winetricks: put the MacPorts distfiles mirror first, since upstream does … - 21:11 Changeset [39562] by - HelperTool<->Framework IPC works on first build of Test bundle but not on … - 21:08 Ticket #16382 (Performance 0.2.5 is available) created by - The Performance port is at version 0.2.4 but version 0.2.5 is available so … - 17:52 Ticket #16381 (incorrect directory for local openmotif port package) created by - installing openmotif (or a package that relies on it), gives the following … - 15:59 Ticket #15371 (xdoclet-1.2.3 depends on port ant but should be apache-ant) closed by - fixed: Fixed in r39561. Thanks for the report. - 15:58 Changeset [39561] by - Fixes #15371 by updating the dependency to port:apache-ant. Verified … - 14:37 Changeset [39560] by - Implemented server side Framework<-->HelperTool IPC methods in … - 13:39 Ticket #16380 (sbcl on Mac OS X 10.4 PowerPC: make.sh: No such file or directory) created by - Where are the versions of sbcl and slime for OS X 10.4 PPC - OS X 10.4.11 … - 12:59 Changeset [39559] by - python/py25-matplotlib-basemap: update to 0.99.1 - 12:53 Changeset [39558] by - Total number of ports parsed: 4994 Ports successfully parsed: 4994 … - 10:50 Ticket #16379 (New port cilk-5.4.6) created by - […] - 10:46 Changeset [39557] by - version 4.4-20080822 - 07:31 Changeset [39556] by - net/gajim-devel: Remove hardcoded /opt/local. (reinplaces in Portfile) - 07:23 Changeset [39555] by - net/gajim-devel: Fix installation. - 07:22 Changeset [39554] by - net/gajim-devel: Remove unnecessary command. - 06:48 Changeset [39553] by - lang/ruby: revert r39392. - 03:09 Changeset [39552] by - Updated lxml to 2.1.1 - 02:49 Ticket #15750 (build libxml2 with two-level namespace) closed by - fixed: Committed in r39551. - 02:47 Changeset [39551] by - textproc/libxml2: Add libtool dependency. Fixes #15750. - 00:53 Changeset [39550] by - Total number of ports parsed: 4994 Ports successfully parsed: 4994 … 08/23/08: - 22:22 Changeset [39549] by - ruby/rb-cocoa: change install destination of some resources: - docs -> … - 19:44 Changeset [39548] by - Fix a lint warning. 
Update description to remove error in describing … - 16:11 Changeset [39547] by - python/py25-dateutil: update to 1.4.1 - 16:11 Changeset [39546] by - graphics/plotutils: update to 2.5.1 - 15:58 Changeset [39545] by - science/geos2: disable livecheck - 15:41 Ticket #16371 (libpng-1.2.30 and/or gimp2-2.4.5 - Gimp fails to open certain PNG files) closed by - fixed - 15:34 Ticket #14707 (NEW: kmymoney) closed by - fixed: Committed revision r39544. I made some minor changes and called it … - 15:31 Changeset [39544] by - Added new port. Satisfies ticket #14707. - 15:27 Ticket #16378 (boost-1.36) created by - Attached is an updated port file for boost 1.36 (came out earlier this … - 12:54 Changeset [39543] by - Total number of ports parsed: 4993 Ports successfully parsed: 4993 … - 12:11 Changeset [39542] by - dports/databases: New port, Tcl bindings for sqlite3. - 11:23 Ticket #15589 (Port Submission: aqbanking3) closed by - fixed: Committed revision r39541. I fixed a few issues such as the fetch path. … - 11:18 Changeset [39541] by - Added new port. Satisfies ticket #15589. - 10:37 Changeset [39540] by - Minor text and line placement changes. - 04:49 Changeset [39539] by - editors/jed: Whitespace changes for port lint - 04:26 Ticket #15750 (build libxml2 with two-level namespace) reopened by - I just installed libxml2 with a fresh MP installation. The port libxml2 … - 04:23 Ticket #16369 (jed 0.99-18: should use slang2) closed by - fixed: Committed in r39538 with a slight modification: * ${prefix} should be … - 04:21 Changeset [39538] by - editors/jed: Depend on slang2 instead of slang, closes #16369. - 03:04 Changeset [39537] by - net/gajim: Add livecheck. - 02:44 Changeset [39536] by - office/taskjuggler: Add livecheck. - 01:48 Changeset [39535] by - port1.0/portconfigure.tcl: Return an error if an invalid value was given … - 01:15 Changeset [39534] by - devel/libuninameslist: Make port lint happy - 01:13 Changeset [39533] by - devel/libuninameslist: Remove invalid configure.compiler setting - 00:53 Changeset [39532] by - Total number of ports parsed: 4991 Ports successfully parsed: 4991 … - 00:14 Changeset [39531] by - lang/sdcc: Remove invalid configure.compiler setting (prevented use of … - 00:12 Changeset [39530] by - lang/sdcc: Whitespace only - 00:08 Ticket #16338 (UPDATE: sdcc-2.8.0) closed by - fixed: Committed in r39529. - 00:08 Changeset [39529] by - lang/sdcc: Update to version 2.8.0, closes #16338 … Note: See TracTimeline for information about the timeline view.
https://trac.macports.org/timeline?from=2008-08-25T10%3A18%3A20-0700&precision=second
CC-MAIN-2014-10
en
refinedweb
beengone: a script-friendly way to check computer idle time I spent too long figuring this out, but I'm quite certain there are at least 3 people who can put the result to good use. I don't know who they are yet, but they'll show up. Eventually. I wanted a way to detect whether I was at my computer or not when a script finished, and — if I'd been gone a certain amount of time — send me a text message or push notification instead of displaying a Growl popup. I had everything worked out except for the actual detection. I was just going to use AppleScript and check for the ScreenSaverEngine process, but that proved not to be failsafe. I did some digging, only to find that OS X doesn't seem to expose that information directly anywhere. Now, if you're using Growl, you can easily add push notifications only on idle using Pushover, Boxcar or Prowl. Pushover even has an API that you can integrate in non-Growl scripts. When using the API, though, you can't detect idle the way the Growl plugin does. For applications where I want a configurable time limit and need to trigger something like another task or anything other than push notifications, I needed a means to detect idle time on my own. Enter I/O Kit. I learned this technique for detecting the idle status from a post by Jean-David Gadina. I hacked together my solution from his examples, and most of the credit for the final product really goes to him. The result is a little CLI called beengone. You simply run it from a script with a number that represents the amount of time (in minutes) that you want to test for. If there hasn't been any keyboard or mouse input within the number of minutes you specify, it prints out "true" and a 0 exit code. If there has, then you haven't "been gone" and it outputs "false" and exits with a non-zero status. It's designed to be used in scripts, where you can just call it with: beengone 5 and it will tell you whether there has been any activity in the last 5 minutes. Capture the output on STDOUT or use the exit code to handle logic in your script. If you want to check and make sure it's working, run: sleep 5 && beengone .1 Then, don't touch anything for five seconds. If it's working, it will output "true" on STDOUT. If not, you'll see "false" and there may be something providing input to your computer in the background which is preventing idle. I haven't seen WOL or anything else affect it yet, though. Here's a quick example in Ruby of how I would call it and determine an action to take based on the results:
require 'terminal-notifier'

# growl: whether to show a banner when the user is present
def notify(msg, growl = true)
  $stderr.puts msg
  away = %x{/usr/local/bin/beengone 5}.strip
  if away == "false"
    # user is present, display a Notification Center banner
    TerminalNotifier.notify(msg, :title => "Your script name") if growl
  else
    # Machine is idle, send an SMS using <>
    $stderr.puts "Sending... " + %x{~/scripts/voicesms.rb -m "#{msg}"}
  end
end

### perform tasks and notify on completion
notify("The script is complete")
(Of course, you could get more elegant with how the CLI is called, and I'll probably eventually write a wrapper using the Process module.) beengone is a simple tool and a simple solution which seems to be fairly foolproof for my needs. Feel free to download the binary below, and if you're curious how it works, please refer to Jean-David's original post. Hope this helps some other folks, too. beengone v1.0 A command-line tool to check if the user has been AFK for a given number of minutes Updated Sat Feb 09 2013.
http://brettterpstra.com/2013/02/10/beengone-a-script-friendly-way-to-check-computer-idle-time/
CC-MAIN-2014-10
en
refinedweb
Checks whether an entry, given its DN, is in the scope of a certain base DN. #include "slapi-plugin.h" int slapi_sdn_scope_test( const Slapi_DN *dn, const Slapi_DN *base, int scope ); This function returns non-zero if dn matches the scoping criteria given by base and scope. This function carries out a simple test to check whether the DN passed in the dn parameter is actually in scope of the base DN according to the values passed into the scope and base parameters.
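A minimal usage sketch, assuming the usual SLAPI DN constructors (slapi_sdn_new_dn_byval, slapi_sdn_free) and the standard LDAP scope constant LDAP_SCOPE_SUBTREE are available; the DN strings are placeholders:
#include "slapi-plugin.h"

/* Returns non-zero when the entry DN falls under the base DN with subtree scope. */
int entry_in_subtree(void)
{
    Slapi_DN *base = slapi_sdn_new_dn_byval("ou=people,dc=example,dc=com");
    Slapi_DN *dn = slapi_sdn_new_dn_byval("uid=jdoe,ou=people,dc=example,dc=com");
    int in_scope = slapi_sdn_scope_test(dn, base, LDAP_SCOPE_SUBTREE);
    slapi_sdn_free(&dn);
    slapi_sdn_free(&base);
    return in_scope;
}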
http://docs.oracle.com/cd/E19424-01/820-4810/aaini/index.html
CC-MAIN-2014-10
en
refinedweb
public class UnexpectedException extends RemoteException An UnexpectedException is thrown if the client of a remote method call receives, as a result of the call, a checked exception that is not among the checked exception types declared in the throws clause of the method in the remote interface. UnexpectedException(String s) Constructs an UnexpectedException with the specified detail message. s - the detail message public UnexpectedException(String s, Exception ex) Constructs an UnexpectedException with the specified detail message and nested exception.
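A short illustration of when this exception surfaces on the client side; the Inventory remote interface, its reserve method, and the registry name are hypothetical:
import java.rmi.Naming;
import java.rmi.UnexpectedException;

public class InventoryClient {
    public static void main(String[] args) throws Exception {
        // Inventory is a hypothetical remote interface whose throws clause
        // does not declare every checked exception the server might raise
        Inventory inventory = (Inventory) Naming.lookup("rmi://localhost/Inventory");
        try {
            inventory.reserve("item-42");
        } catch (UnexpectedException e) {
            // the undeclared checked exception is available as the nested 'detail'
            System.err.println("Server threw an undeclared checked exception: " + e.detail);
        }
    }
}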
http://docs.oracle.com/javase/7/docs/api/java/rmi/UnexpectedException.html
CC-MAIN-2014-10
en
refinedweb
23 March 2011 22:48 [Source: ICIS news] HOUSTON (ICIS)--The producer was not available for comment. However, the nomination was reported by multiple sources, and was said to have likely been prompted by an imbalance in supply and demand. The settlement nomination - if it held up - would result in an April BD settlement of $1.25/lb. The March contract settled at $1.04/lb. BD settlements are usually at the lowest nomination from the four major producers. However, on four occasions since September 2010, the BD settlement has split, with three producers settling at a common price, while the fourth producer settled at a different value. A BD buyer said the large nomination for April indicated the strength of market sentiment toward higher prices. US spot prices for BD were 105-115 cents/lb CIF (cost, insurance and freight) basis, as assessed by ICIS. US BD producers include ExxonMobil, INEOS, LyondellBasell, Shell and TPC
http://www.icis.com/Articles/2011/03/23/9446619/us-bd-producer-seeks-20-hike-for-april-contract.html
CC-MAIN-2014-10
en
refinedweb
Introduction Introduction  ... Hibernate and Struts. Hibernate and Struts are popular open source tools. You can download Hibernate from Introduction to Struts 2 Introduction Struts - Struts strutsTutorial; import...////////// package strutsTutorial; import Struts - Struts ******************************************************************** package strutsTutorial; import... strutsTutorial; import javax.servlet.http.HttpServletRequest; import What is Struts Framework? introduction to the Struts framework such as history, features and technology of Struts.. Logic Tags: An Introduction Struts Logic Tags: An Introduction Struts logic tags are conditional tags...; matching strings and substrings... Logic tags available in the Struts Framework JSF Introduction - An Introduction to JSF Technology JSF Introduction - An Introduction to JSF Technology... Introduction section introduces you with cool JSF technology. ... already existing technologies like JSP, Servlets, Struts etc... If you have Detailed introduction to Struts 2 Architecture Detailed introduction to Struts 2 Architecture Struts 2 Framework Architecture In the previous section we learned... components of Struts 2 framework. How Struts 2 Framework works? Suppose you Struts validation not work properly - Struts Struts validation not work properly hi... i have a problem with my struts validation framework. i using struts 1.0... i have 2 page which...) { this.address = address; } } my struts-config.xml Introduction to Ajax. Introduction to Ajax. Ajax : Asynchronous JavaScript and XML Ajax is not a technology, It is a collection of technologies. It is used for creation a fast... in struts. Ajax validation in struts2 Introduction To Application Introduction To Application The shopping cart application allows customer to view and brows catalogs of the products which is being sail online... for the readers who wants to know how a shopping cart application can be written in struts Struts Logic Tags Struts Logic Tags Struts Logic Tags examples. Introduction to Struts Logic Struts logic tags are conditional tags that replaces Introduction to Type conversion in Struts Introduction to Type conversion in Struts Type conversion is a mechanism... this powerful and important facility. In struts the type conversion is an mechanism.... Struts by default provides the type conversion of primitives data, a simple Introduction Applet Introduction Applet is java program that can be embedded into HTML pages. Java applets runs on the java enables web browsers such as mozila and internet explorer Introduction to Action interface Introduction To Struts Action Interface The Action interface contains the a single method execute(). The business logic of the action is executed within this method. This method is implemented by the derived class. For example Struts Reference are: Building a simple Struts application Introduction to the MVC design pattern... Struts Reference Welcome to the Jakarta Online Reference page, you will find everything you need to know to quick start your Struts Struts 2.2.1 - Struts 2.2.1 Tutorial Struts Framework An introduction to the Struts Framework This article is discussing about the high-class web application development framework, which is Struts. This article will give you detailed introduction to the Struts Framework. 
Struts Introduction to PreResultListener Introduction To PreResultListener PreResultListener is an interface...;/html> login.jsp <%@ taglib prefix="s" uri="/struts...="s" uri="/struts-tags" %><HTML> <HEAD> < Struts Tutorial articles given belows: Introduction to the Apache Struts MVC Architecture...Struts Tutorial Struts is an open source web application framework deployed...) design paradigm, Struts framework helps developers create well-architected Web Struts Links - Links to Many Struts Resources form. Introduction to Jakarta Struts 1.1 ( Slides in PDF form) Jakarts Struts... Struts Links - Links to Many Struts Resources Jakarta Struts Tutorials One of the Best Jakarta Struts available on the web. Struts Struts Tutorials Introduction to Struts ? Expression Language Struts is a robust and powerful framework... Struts Tutorials Struts Tutorials - Jakarta Struts Tutorial This complete reference of Jakarta Struts shows you how to develop DynaActionForm To build and deploy the application go to Struts\strutstutorial directory...; In this tutorial you will learn how to create Struts DynaActionForm. We will recreate our address form with Struts DynaActionForm struts struts in industry, struts 1 and struts 2. which is the best? which is useful as a professuional Have a look at the following link: Struts Tutorials Struts Forward Action Example to Struts\Strutstutorial directory and type ant on the command prompt... Struts Forward Action Example  ... about Struts ForwardAction (org.apache.struts.actions.ForwardAction AN INTRODUCTION TO JSTL on JSTL, the author gives a brief introduction to JSTL and shows why and how...).... which may soon replace Struts. We unzip the jwsdp1 javascript introduction for programmers javascript introduction for programmers A brief Introduction of JavaScript(web scripting language) for Java Programmers.3.8 Tutorials and Examples Introduction to Struts 2 Framework - Video Tutorial Struts 2 video tutorial...Struts 2.3.8 is another best release with performance improvements and new... and examples of Struts 2.3.8. Struts 2.3.8 is "General Articles to be an introduction to either Struts or JSR 168. It assumes you have some... Struts Articles Building on Struts for Java 5 Users Struts is undoubtedly the most successful Java web Introduction to HTML Introduction to HTML  ...;This is another paragraph. This will give you<br>an introduction of HTML... paragraph. This is another paragraph. This will give youan introduction of HTML . Struts in Action is a comprehensive introduction to the Struts framework... provides an introduction to Struts and evaluates the case for using it. It tries... is indespensable. It starts with a brief introduction to Struts in the front of the book Struts 2.1.8 Features Struts 2.1.8 Features In this section we will learn the new features and enhancements of Struts 2.1.8. Struts is one of the most used MVC framework by Java Developers INTERNATIONALIZATION introduction we shall see how to implement i18n in a Simple JSP file of Struts. g... STRUTS INTERNATIONALIZATION -------------------------------- by Farihah... to implement Internationalization (abbreviated as I18N) in Str 1 Tutorial and example programs and struts2 Introduction to the Apache Struts This lesson is an introduction to the Struts and its architecture... 
to the Struts Controller This lesson is an introduction to Controller part introduction to information systems development introduction to information systems development can someone sent plz codes for college management system developed using IDE eclipse Roseindia Introduction to Struts 2 Introduction to Struts 2 Framework...Struts 2 Framework is used to develop enterprise web application.... Struts 2 Framework encourages Model-View-Controller based architecture struts the checkbox.i want code in struts Struts Book - Popular Struts Books Software Foundation. Struts in Action is a comprehensive introduction to the Struts... introduction to the Struts framework that is complemented by practical case studies... Struts Book - Popular Struts Books Programming Jakarta Struts LookupDispatchAction Example ; To build and deploy the application go to Struts\Strutstutorial directory... Struts LookupDispatchAction Example Struts LookupDispatch Action Struts LookupDispatchAction Example the Example To build and deploy the application go to Struts\Strutstutorial directory... Struts LookupDispatchAction Example Struts LookupDispatch Action Struts Dispatch Action Example Struts Dispatch Action Example Struts Dispatch Action... with the struts framework. The org.apache.struts.actions.DispatchAction
http://www.roseindia.net/tutorialhelp/comment/93874
CC-MAIN-2014-10
en
refinedweb
Threaded View Sencha Cmd V3 Beta 3.0.0.230 Now Available Sencha Cmd V3 Beta 3.0.0.230 Now Available The latest Sencha Cmd beta build is now available - 3.0.0.230. The highlights are improved UTF-8/multi-charset handling and correcting the overwriting issues with "sencha app upgrade". To use the new charset directive in your JS files, place this as the first line: Code: //@charset ISO-8859-1 The "charset" directive is used to describe the encoding of an input JS). Enjoy! For download links, see Bugs Fixes Cmd (1) - TOUCH-3562 - Sencha generate app corrupts binary template files - SDKTOOLS-178 - Clarification on compiler debug directive Don Griffin - SDKTOOLS-185 - Concatenate process does not properly encode French characters - SDKTOOLS-186 - Running sencha app build on CI server will hang trying to connect to device - SDKTOOLS-187 - Obfuscated third-party libraries using Ext JS require symbol metadata - SDKTOOLS-188 - [BUG] Sencha Cmd 3.0.0.181 app upgrade - SDKTOOLS-193 - Concatenate encodes app-all.js using ANSI instead of UTF8 - SDKTOOLS-195 - The sencha theme build command does not accept simple theme name argument - SDKTOOLS-198 - Compiler does not detect instantiation point auto-dependencies - SDKTOOLS-199 - Using 'upgrade' command overwrites files - SDKTOOLS-205 - Overeager matching for -namespace - SDKTOOLS-206 - Issue with exclude -all? - SDKTOOLS-207 - Ant clean target not working - SDKTOOLS-211 - JSB parse options get reset during compilation - SDKTOOLS-47 - Fetching Components via xtype Ext JS Development Team Lead Check the docs. Learn how to (properly) report a framework issue and a Sencha Cmd issue "Use the source, Luke!"-->
http://www.sencha.com/forum/showthread.php?247193-Sencha-Cmd-V3-Beta-3.0.0.230-Now-Available&p=904153&mode=threaded
CC-MAIN-2014-10
en
refinedweb
The Templated BLAS Wrapper Class. More... #include <Teuchos_BLAS.hpp> The Templated BLAS Wrapper Class. The Teuchos::BLAS class provides functionality similar to the BLAS (Basic Linear Algebra Subprograms). The BLAS provide portable, high- performance implementations of kernels such as dense std::vector multiplication, dot products, dense matrix-stdvector multiplication and dense matrix-matrix multiplication. The standard BLAS interface is Fortran-specific. Unfortunately, the interface between C++ and Fortran is not standard across all computer platforms. The Teuchos_BLAS class provides C++ bindings for the BLAS kernels in order to insulate the rest of Petra from the details of C++ to Fortran translation. In addition to giving access the standard BLAS functionality. Teuchos::BLAS also provide functionality for any <ScalarType> class that defines the +, - * and / operators. Teuchos::BLAS is a single memory image interface only. This is appropriate since the standard BLAS are only specified for serial execution (or shared memory parallel). These templates are specialized to use the Fortran BLAS routines for scalar types float and double. -.
http://trilinos.sandia.gov/packages/docs/r10.0/packages/teuchos/browser/doc/html/classTeuchos_1_1BLAS.html
CC-MAIN-2014-10
en
refinedweb
AntiPHPatterns Design Patterns are independent solutions to problems that occur over and over again. Usually these are considered at a high level without much detail. They are independent of core-implementation things like the exact language or data base. And at the opposite end of things we find AntiPatterns which appear to be solutions, but are actually counterproductive, ineffective or subtly dangerous. In 2010 Stefan Priebsch coined the phrase AntiPHPatterns, to describe some of these high-level pseudo-solutions that appear in PHP. His excellent description was presented in London at the UK PHP conference. His collection fell into four general categories: Constantitis, Globalomania, Singletonitis, and Godclass. In this article I'll touch on Stefan's excellent analysis, and then I will add a collection of my own AntiPHP examples, showing how novice programmers often over-think or under-think specific problems. I have seen these things in my work over the years, and on the occasions when I have had the opportunity to refactor, the view in hindsight has helped me to understand why some newer ways of looking at problems can help create a library of effective teaching examples. Constantitis In PHP, the define() function creates a constant. Constants are immutable and global, and they can be defined anywhere in the code base. As a result, it is entirely possible that two programmers working on the same project may define a constant with the same name, thus causing a collision that results in a run-time failure. The presence of many global constants is a code smell. Among other things, it may imply a large measure of configuration over convention, and with many possible configurations, things can become confusing. In addition, if you want to change the settings that depend on the constants, you have to change the constants in the script. Yecch! Globalomania In PHP, the global keyword creates a global variable, one that appears suddenly and unexpectedly in every namespace and scope. This is slightly worse than a global constant, since it can be changed, and a change in one area of the script may have unwanted effects in other areas of the script. Singletonitis The singleton design pattern tries to solve the problem of having multiple instances of the same class, when only one instance is needed (eg: a data base connection). But in practice, singletons are often used as if they were global variable containers. Singletons may share same the code smell with other globals (constants and variables). Singletonitis has the same cure as constantitis and globalomania, which is dependency injection. Godclass It is axiomatic in software development that encapsulation is good, both for code and data. A well-written class should do one thing perfectly (or as close to perfectly as possible). If you try to describe the purpose of a class and you use any conjunctions, your class is probably defective because it is doing too much work in too many different ways. When you find that an entire web page can be rendered by calling upon a single class, you've found Godclass. The couplings and dependencies of the classes that extend Godclass are not readily visible, making it hard to know what changes will have adverse effects. The cure for Godclass is isolation of data and functionality, so that objects know "all about" themselves, but treat other objects as if they are "black box" vending machines. In other words, "minimize class responsibilities." So much for antiPHPatterns. Now let's look at AntiPHPractices. 
AntiPHPractices AntiPatterns are the high-level concepts gone awry. More plentiful and finer grained blunders and code smells are called AntiPractices. If it seems like this article has a negative, "don't do that" tone, it's because the article is intended to steer you away from risky or dangerous practices! If you find these things in your own programming, get rid of them as soon as you get a chance. PHP is an old programming language. There are a lot of things that were designed into the language in the 1990's. That was before hacking, spam, viruses, and other security problems were even beginning to be understood. As a result, many of the code smells relate to things that seemed convenient at the time, but turned out to be dangerous or wasteful in retrospect. Unfortunately, bad code examples do not come with expiration dates, and some of them (well, millions of them) are still published on the internet. They are easy to find and they get copied over and over by novice programmers, perpetuating the bad practices. If you find a code sample on the internet ask yourself, "When was this written?" Also, ask yourself, "Do I completely understand what the author as thinking and how it relates to my problem?" If you're not 100% sure about both of those answers, skip over the example and go look for one that makes more sense to you! These coding horrors are in no particular order, and they may not always cause scripts to fail, data bases to be destroyed, money to be lost, etc. But they have that potential. 1. Installing the Code Without Understanding the Code For some reason, people feel that they don't need to learn PHP before they start using it! Unfortunately, the internet is littered with simply terrible examples of PHP code written long ago by novice programmers who did not understand the principles of computer science or security, and who never took the time to clean up after themselves. And a particularly frequent offender, DreamWeaver, generates some of the worst PHP code ever written. Copying code from internet resources or DreamWeaver is a sure way to get yourself in trouble. So don't do that. Instead stick to quality learning resources from dependable authors. Like everything else in life, knowing what you're doing with PHP is really helpful. Want a good test for understanding the code? Try to say or write a one-sentence explanation for every line of code in your script. If you can do that, you understand the code. If you find yourself unsure or at a loss for words, stop what you're doing and look up the functions and classes on php.net. A moment invested in learning can save you from days of debugging. Further guidance on this point comes to us from Alexander Pope in his elegant Essay on Criticism. 2. Not Having a Backup Copy Really?! You made a change to a deployed production system?! 3. Not Having a Test Data Set Really?! You are about to make a change to a deployed production system?! You might want to rethink that! 4. Programming Without Error_Reporting(E_ALL) If you do not tell PHP to show you all the errors, you will miss some of the errors! PHP will suppress the Notice messages unless you ask PHP to tell you the Notice messages. Always ask any programming language to tell you all of the errors! 5. Suppressing Messages with @ Whenever there is a reason to suppress a warning message, there is an even greater reason to find out the cause of the message and eliminate it! 
If I see the @ prepended to a function call, I expect to see the comments explaining why that is a good practice. In almost every case it's not a good practice, but was an expedient way for a sloppy programmer to avoid a step in the debugging process. It's significant to note that if you use @ to suppress messages and a Fatal Error occurs in the function, the script will fail silently. There will be no message or log, and no indication of where the error occurred. Imagine trying to debug something like that! 6. Register Globals This takes the danger of Globalmania and sprays it randomly and universally into your script. Disable Register Globals! If you're at a current level of PHP, it should be disabled already. Maybe you want to use phpinfo() to verify this. 7. Sloppy Syntax and Formatting This is a programmatic shibboleth that screams, "I don't know what I'm doing!" PHP allows you to write ugly code, very ugly code, and PHP will still interpret it, and if the code does not fail for a parse error, PHP will try to run it. This is unfortunate, because it leads novice PHP programmers to think that coding standards do not matter, that they can use silly or meaningless variable names, that alignment and readability of the code isn't important. Nothing could be further from the truth. Neatness counts, and the reason for neatness and tidy organization is so that humans can make sense of your script. Those humans include you, if you revisit the programming after a day or two away from the task. Adopt a coding standard and adhere to it rigidly. You probably want to adhere to PSR-1 and PSR-2. With self-discipline comes self-confidence. 7a. Misusing Quotes and Apostrophes Quote marks and apostrophes have special meanings in most programming languages and PHP is no exception, providing the novice with an almost endless number of ways to create parse errors with fiddly punctuation. What do you need to know, when and how to use or avoid them? See this article about Quotes in PHP Programing 8. Poorly Chosen Variable Names What is the likely meaning of a variable named $x? It's hard to guess! If it's expected to be today's date, why not use $today instead? And if you're retrieving rows from several different queries, beware of using $row. It's just too easy to get confused. Instead, strive for meaningful variable names that clearly identify your thought processes. And in related matters, choose data base column names that you can readily understand and that do not conflict with SQL functions or reserved words. 9. Compound Statements We've all seen them... Someone wants to look smart, so they nest a collection of function calls into a single-line statement. The problem with doing this is pretty simple. If you get an unexpected return value, you have to take the code apart to find out what screwed up. Why not just start with the code on separate lines and save yourself a debugging cycle? PHP does not run any faster when there are compound statements in the script! PHP has to compile the script before it can be executed, and the compiled output is exactly the same, whether the statements were written legibly or all jammed up together into an unreadable mess. PHP does not care, so take the easy approach -- the one humans are most likely to understand. These two blocks of code do the same work. The signature of compound statements are adjacent closing parentheses. 
// NORMALIZE, CAPITALIZE, ESCAPE THE COLOR $rgb = $mysqli->real_escape_string(ucfirst(strtolower(trim($color)))); // A CLEANER WAY TO NORMALIZE, CAPITALIZE, ESCAPE THE COLOR $rgb = trim($color); // REMOVE WHITESPACE $rgb = strtolower($rgb); // MAKE ALL LOWER CASE $rgb = ucfirst($rgb); // CAPITALIZE FIRST LETTER $rgb = $mysqli->real_escape_string($rgb); 9.a Compound Arguments These are closely related to compound statements, and they create a similar problem, to wit, you cannot readily see what data was passed in the argument string. Consider this example: $res= $mysqli->query($sql2 . $start . ', ' . $limit); What does the query string contain? Who knows? You would have to print out three variables to begin to figure it out. A better ways to write the code would look something like this: $sql = $sql2 . $start . ', ' . $limit; print_r($sql); $res = $mysqli->query($sql); 10. Using REGEX to Do Things That PHP Should Do for You I came across this statement in a client's code recently: $regexp="/^[a-z0-9]+([_\\.-][a-z0-9]+)*@([a-z0-9]+([\.-][a-z0-9]+)*)+\\.[a-z]{2,}$/i"; Maybe that made sense (to validate an email address) many years ago, but we've had PHP filter_var() since PHP 5.2. Regular expressions are very hard to get right! And they are even harder to get right if they're strung together in a long, uncommented string. Experienced programmers have a joke that goes, "I had 99 problems, so I used a Regular Expression. Now I have 100 problems!" 11. Writing Code to Do Things That PHP Should Do for You You wouldn't believe how many novice programmers do not understand the way date() and strtotime() work together. Instead of using the built-in functionality, they try to write their own date computation algorithms, and the resulting code is almost always overly complicated and usually wrong in the edge cases, like leap year, daylight savings time, changing time zones, etc. This is not the only place that PHP built-ins get ignored, but it's fairly common. For another example, see the list of array functions for a collection of things that novices overlook or misunderstand. 12. Using the Wrong Tools "Please bring me a wrench," said the carpenter. "What kind of wrench" said the apprentice, "Pipe wrench, lock wrench, spanner wrench, monkey wrench, Stillsons wrench...? "Doesn't matter -- I'm just going to use it to pound nails." File systems and data bases are different kinds of storage, and understanding the differences is an important part of application development. Have you ever seen anyone store image files in a data base BLOB? You won't see them doing it for very long, because as the image collection grows the data base will become slower and slower, impossible to back up, and generally an impediment to application deployment. If you start down this path, expect to find yourself refactoring (or maybe someone else will be refactoring because you will get fired). Image files (and by extension, sound or multimedia files) belong in the server file system. The associated data base information should contain the URL of the image, not the image file. The flip side of this misstep is seen in applications where some well-intentioned but misinformed designer created a flat text file or an XML document so he could store his files without using a data base. Usually this happens when the designer is trying to avoid learning what a data base does. The problem with this approach is not performance, but functionality. XML is by its nature hierarchical. A data base is relational. 
The first time you have to find the relationships in an XML document, you will wish you had learned about data bases instead! 13. Failure to Put Comments Into the Code It doesn't always have to be a doc-block, but there should be something to tell people what you're thinking and what you're trying to do! And a doc-block never hurts. It can provide tips and code hinting in some IDEs. 14. Using the ?> Close-PHP Tag This tag is almost invariably followed by an invisible whitespace end-of-line character, which is browser output. It can and will cause more trouble than it's worth. Especially because it is invisible! Unless you absolutely require the close PHP tag, omit it. Really, PHP knows when it's finished! Even if DreamWeaver doesn't! 15. Using the Global Keyword This is really only needed when the application designer hasn't given enough thought to the organization of the data and classes. It's a signature of disorganized thinking. Avoid global variables. Pass parameters and return values instead. Avoid using the $_REQUEST variable. The reason for this is simple: you do not know where the request data came from! Was it from your HTML form? Or did a malicious person simply type the request variables into the URL? If you choose $_POST and $_GET you at least know the request method. It does not protect you from hack attacks, but it makes them a little harder to perpetrate. Another reason to avoid $_REQUEST is because of the request_order directive. This (positively stupid) configuration variable can cause the same script to produce different outputs if the directive is changed. Why did PHP introduce this stumbling block at 5.3? Your guess is as good as mine! Like the use of register_globals, any programming that injects unnecessary variables into the symbol table has a bad code smell. The wise programmer avoids variable proliferation. 18. Writing Your Own Version of Extract() You may have seen something like this: $u = $_POST["username"]; $p = $_POST["password"]; This accomplishes nothing. It just copies one variable to another, creating two versions of the same data. Perhaps the programmer forgot about filter_var()? It also introduces a subtle psychological element of trust that is completely misplaced. $u = $_POST['username'] only copies the contents of $_POST['username'] into the variable named $u. Whatever was present, good or bad, in the original POST request variable is now "conveniently" addressable inside the shorthand $u. The act of including the unfiltered attack vector in a more convenient variable name is one big part of this anti-practice, because it seems to encourage the use of shorthand $u instead of the longhand and more explicit $_POST['username']. It seems to me that $_POST is a highly recognizable external variable, and therefore readily identified as a tainted data source. Not so, for $u, and it requires us to read the code and follow $u back to its origin before we can know whether to trust $u. So if you're going to create a shorthand notation, filter and sanitize the data before assigning it to the shorthand variable. This anti-practice is an example of failure to sanitize external data, as shown below. 19. Failure to Filter/Sanitize External Data Here is the classic screwup: $sql = 'DELETE FROM myTable WHERE id = ' . $_GET['id']; Do you have any idea how many rows will be deleted? No, you do not. Maybe you think it is only one row? But there is no LIMIT clause. Want to know how common this blunder is? 
In November, 2013, Michelangelo Van Dam gave a presentation to the Washington DC PHP Users Group in which he searched GitHub for the intersection of mysql_query() and $_GET. This is what we saw. What if $_GET['id'] came from a URL that said: url.php?id=0+OR+1+=+1 The resulting query would say DELETE FROM myTable WHERE id = 0 OR 1 = 1 and the OR condition would apply to every row of myTable. Bang! You're Fired! This anti-practice has been an object of ridicule for years. It's as dangerous as driving on the wrong side of the road. Yet for some odd reason, there are still programmers who have not heard of the issue. See 19.a Relying on JavaScript to Filter/Sanitize External Data A variant on failing to filter external data is relying on client-side technologies to do the work that must be done on the server-side (see the relationship here). JavaScript and jQuery are used to make a nice experience for your human client, but they provide no protection at all against an attack because the attackers will simply bypass the client-side niceties and will post toxic data directly into your application. This is easily accomplished with cURL or fsockopen() and the attack can even be tailored to look like it's coming from a browser that was referred by the HTML form on your web site. All external data is, by definition, tainted, full stop. You must filter it with server-side tools, after it has been received on the server. 20. Failure to Escape External Data The query says, "INSERT INTO myTable (name) VALUES ({$_POST['name']}" and that should work, right? Sure, until you find someone named O'Reilly. Then you have a stray apostrophe in your query string, and that's a recipe for failure. Apostrophes have a special meaning in query strings, unless you escape them. 21. Double Escape of External Data Consider the PHP environment is set up with Magic Quotes, and the programmer, not sure about this, uses addslashes() or similar functions to escape the data. As a result the data base is cluttered with double backslashes. A common symptom of this misstep is code that uses stripslashes() on the fields that are drawn from the data base. 22. Failure to Escape Client Output Envision a script reads an input field and stores it in the data base, then later it echoes the data out to a client browser. What could possibly go wrong? Well, the input field could have been poisoned with toxic JavaScript that will cause the client browser to begin an action that is harmful to the client computer and eventually to the client human. When you see echo statements without htmlspecialchars(), you have to wonder... 23. Uninitialized Variables shows what can go wrong if you don't know what a variable contains. Unfortunately PHP suppresses Notice level messages by default. So novice programmers think it is OK to use uninitialized variables to mean FALSE or zero or an empty string, because that level of offense only rises to the level of a PHP Notice. You can get away with this right up until the time that your manager wants to make a small change to your script and she chooses the variable name you were assuming would be empty or set with a value you expected. If you raise the error_reporting() level to show the Notice messages, PHP will tell you about this pitfall. 24. Failure to Test the Return Values from PHP Functions You wouldn't believe how many times the same programming error occurs, over and over, because something like this gets copied and perpetrated again and again. 
$res = mysql_query($sql); while ($row = mysql_fetch_assoc($res)) { ... What's wrong here is the assumption that the return value in $res is a query result resource. It might not be! It might be FALSE. If the program does not test for this condition, the script will eventually fail silently or with data damage. MySQL is not a black box. It can and will fail for reasons that are outside of the PHP programmers' control. If your script cannot control something, it must test for the uncontrollable condition (in this case, query failure) and handle the conditions as they arise. 25. Wasting Time Not Knowing What Happened Script failed, no output, what's wrong? Well, the first thing to do is start creating output! Use var_dump() to print out the inputs your script is receiving. And to print out the intermediate variables. And to visualize the data before the output phase. 25.a Wasting Other People's Time Not Knowing What Happened It's amazing how many questions are posted here at EE with a code example and the question, "what's wrong?" The code is useless; we already know it doesn't work. What we really need to see instead is the data. If your data takes the form of a return object (for example, from an API) and you post the output of print_r() or var_dump() we can read it, but it's unwieldy when we want to write some code that works with the data. PHP has a perfect tool for solving this problem. It's a built-in function, var_export(). Look it up, learn it now, test it out, and please use it when you want to show a colleague your test data. And since PHP is still a "living language" with some things that are only half-baked, you may also want to read this note. Var_export() may need help to work with StdClass. 26. Wasting Time on Meaningless Optimization Not limited to PHP by any means, but PHP programmers seem particularly interested in minutiae like whether it is faster to iterate over an array or an object. The answer is, "it doesn't matter." These kinds of pursuits are like milking a mouse. No matter how much effort you put into the process, you will not get much. 27. Overlooking Meaningful Optimization Data transfers via disk I/O operations are several orders of magnitude slower than in-memory data transfers, and when there is a performance problem in a web application it is usually found in the data base.* You need to know how your database works and how to optimize the data base and script file structure. Here are some topical questions to ask yourself. Are your scripts using something like MySQL_Fetch_Array()? Perhaps you copied that from one of the many bad PHP examples that litter the internet? Change that function to a variant of Fetch_Assoc() or better yet, Fetch_Object(). The Fetch_Array() variant retrieves twice as much data as is needed, making it the least efficient way to retrieve your data. Do you have a SELECT * query? If so, eliminate the * and put in the names of the columns you want to retrieve. SELECT * retrieves all of the columns, even the ones you do not need, making it the least efficient way to retrieve data. Do you have a SELECT query that is expected to retrieve one row (such as a user-id lookup or a single inventory item)? If so, be sure you have LIMIT 1 in the query. Failure to LIMIT the query will result in a table scan, making it the least efficient way to find the information. Does your code loop around to add values as it retrieves the results set from the MySQL server? Maybe you should be using server side aggregations with a GROUP BY clause. 
Are your data base tables appropriately indexed? You may want to have an index on every column used in WHERE, ORDER, GROUP, JOIN and HAVING clauses. Are your queries sargable? Have you used EXPLAIN SELECT to optimize queries that touch more than one table? If you recognize some of these issues in your queries, or more importantly, don't understand some of them, you might want to read this excellent article by our EE colleague, gr8gonzo. Entire books are devoted to MySQL and optimization, especially at large scale, is a highly technical and detailed endeavor. But every programmer should have an understanding of the basics. * Or sometimes in the API calls to other applications with slow data base queries! 28. Skipping Best Practices You really want to just get it done fast, right? See The cartoon is a joke, but the best practices are not. They are the way experienced programmers have learned to get the best and fastest results. 29. Manipulative Programming (Added in 2016) Undoubtedly you've been in a conversation with a technical manager who has no sense of marketing, and something like this has come up: "How can we force the users to ___ (fill in the blank)?" The answer is not found in technical details; it's found in human nature and common sense. If your clients feel like they are being coerced, they will go away, and that is that. One obvious manifestation of this anti-practice is demanding that clients fill out a registration form before they can see free information, or refusing to accept forms until they are complete, or making a web site that fails if it cannot set HTTP cookies. Don't do things that drive customers away! Instead, be as open and informative and inviting as you can be. You can bet your competitors are trying to be nice to your clients. Don't lose your customers over a design flaw. Summary Knowing what to do is important, and so is knowing what not to do. Do you have some examples of wrongheaded designs and practices? Please share them in the article comments below. If we're smart about things, we can learn to avoid the AntiPHPractices, choosing best practices instead.!
https://www.experts-exchange.com/articles/12293/AntiPHPatterns-and-AntiPHPractices.html
CC-MAIN-2020-10
en
refinedweb
Hadoop HDFS. Namenode: - Manages the file system namespace. - Regulates clients' access to files. - It also executes file system operations such as renaming, closing, and opening files and directories. Datanode. Block.
https://www.mumbai-academics.com/2018/06/hadoop-hadoop-file-system-overview-hdfc.html
CC-MAIN-2020-10
en
refinedweb
I started exploring my Google Location History data while on a train journey. My previous phone had some GPS issues, which led to my location being shown in Arizona, USA! Surprisingly (or not?) it even gave a proof of that! All this was really cool to look at, but I really wanted to dive in and get some more insights into my travelling patterns throughout the years. Like most data science problems, data pre-processing was definitely the pain point. The data was in a JSON format where the meaning of different attributes wasn't very clear.

Data Extraction

{'timestampMs': '1541235389345', 'latitudeE7': 286648226, 'longitudeE7': 773296344, 'accuracy': 22, 'activity': [{'timestampMs': '1541235388609', 'activity': [{'type': 'ON_FOOT', 'confidence': 52}, {'type': 'WALKING', 'confidence': 52}, {'type': 'UNKNOWN', 'confidence': 21}, {'type': 'STILL', 'confidence': 7}, {'type': 'RUNNING', 'confidence': 6}, {'type': 'IN_VEHICLE', 'confidence': 5}, {'type': 'ON_BICYCLE', 'confidence': 5}, {'type': 'IN_ROAD_VEHICLE', 'confidence': 5}, {'type': 'IN_RAIL_VEHICLE', 'confidence': 5}, {'type': 'IN_TWO_WHEELER_VEHICLE', 'confidence': 3}, {'type': 'IN_FOUR_WHEELER_VEHICLE', 'confidence': 3}]}]}, {'timestampMs': '1541235268590', 'latitudeE7': 286648329, 'longitudeE7': 773296322, 'accuracy': 23, 'activity': [{'timestampMs': '1541235298515', 'activity': [{'type': 'TILTING', 'confidence': 100}]}]

After researching a bit, I stumbled upon this article, which cleared a lot of things up. However, there are some questions that still remain unanswered:
- What does the activity type tilting mean?
- I assumed confidence to be the probability of each task. However, often they do not add up to 100. If they do not represent probabilities, what do they represent?
- What is the difference between the activity type walking and on foot?
- How can Google possibly predict activity type between IN_TWO_WHEELER_VEHICLE vs IN_FOUR_WHEELER_VEHICLE?!

If anyone has been able to figure it out, please let me know in the comments. Edit: There has been some discussion about these topics on this thread. A paper on Human Activity Recognition using smartphone data can be found here.

Assumptions

As I continued to structure my pre-processing pipeline, I realized I would have to make some assumptions to take into account all the attributes of the data.
- The GPS is always on (a strong assumption which is later taken care of).
- The confidence value is the probability of the activity type. This assumption helps us take into account various possible activity types for a given instance without under-representing or over-representing any particular activity type.
- Each log has two types of timestamps: (i) corresponding to the position latitude and longitude, and (ii) corresponding to the activity. Since the difference between the two timestamps was usually very small (< 30 seconds), I safely used the timestamp corresponding to latitude and longitude for our analysis.

Data Cleaning

Remember I told you how my GPS was showing Arizona, USA as the location? I did not want those data points to significantly skew the results. Using the longitudinal boundaries of India, I kept only the data points that fall within India.

def remove_wrong_data(data):
    degrees_to_radians = np.pi/180.0
    data_new = list()
    for index in range(len(data)):
        longitude = data[index]['longitudeE7']/float(1e7)
        if longitude > 68 and longitude < 93:
            data_new.append(data[index])
    return data_new

Cities for each data-point

I wanted to get the corresponding city for each given latitude and longitude.
A simple Google search got me the coordinates of the major cities I've lived in, i.e. Delhi, Goa, Trivandrum and Bangalore.

def get_city(latitude):
    latitude = int(latitude)
    if latitude == 15:
        return 'Goa'
    elif latitude in [12,13]:
        return 'Bangalore'
    elif latitude == 8:
        return 'Trivandrum'
    elif latitude > 27.5 and latitude < 29:
        return 'Delhi'
    else:
        return 'Other'

data_low['city'] = data.latitude.apply(lambda x:get_city(x))

Distance

Logs consist of latitude and longitude. To calculate the distance travelled between logs, one has to convert these values to formats that can be used for distance-related calculations.

from geopy.distance import vincenty
coord_1 = (latitude_1, longitude_1)
coord_2 = (latitude_2, longitude_2)
distance = vincenty(coord_1, coord_2)

Normalized Distance

Each log consists of an activity. Each activity consists of one or more activity types along with the confidence (called probability). To take into account the confidence of the measurement, I devised a new metric called normalized distance, which is simply distance * confidence. Now comes the interesting part! Before I dive into the insights, let me just brief you on some of the data attributes:
- accuracy: Estimation of how accurate the data is. An accuracy of less than 800 is generally considered high. We have therefore dropped the data points with accuracy greater than 1000.
- day: Represents the day of the month
- day_of_week: Represents the day of the week
- month: Represents the month
- year: Represents the year
- distance: Total distance travelled
- city: City corresponding to that data point

Outlier Detection

There are 1158736 data points in total. 99% of the points cover a distance less than 1 mile. The remaining 1% are anomalies generated due to poor reception/flight mode. To avoid the 1% of data causing significant changes in our observations, we'll split the data into two based on the normalized distance. This also ensures that we remove the points which do not obey assumption #1 that we made during our analysis.

data_low = data[data.normalized_distance < 1]
data_large = data[data.normalized_distance > 1]

Distance travelled with respect to the city

The data for 2018 correctly reflects that the majority of the time was spent in Bangalore and Trivandrum. I wondered how the distance travelled in Delhi (my hometown) came out to be more than in Goa, where I did my graduation. Then it hit me: I did not have a mobile internet connection for the majority of my college life :).

Travel Patterns in Bangalore and Trivandrum

In June 2018, I completed my internship at my previous organization (at Trivandrum) and joined Nineleaps (at Bangalore). I wanted to know how my habits changed on transitioning from one city to another. I was particularly interested in observing my patterns for two reasons:
- Since I always had mobile internet while residing in these cities, I expected the data to be an accurate representation of reality.
- I've spent roughly the same amount of time in the two cities, hence the data will not be biased towards any particular city.

Here is what I observed:
- Multiple friends and family members visiting Bangalore in the month of October resulted in a huge spike in distance travelled in vehicles.
- Initially, I was exploring Trivandrum. However, as my focus shifted to securing a full-time data science opportunity, the distance travelled drastically reduced from January to February to March.
- Vehicle usage is much higher in Bangalore between 20:00–00:00. I guess I am leaving office later in Bangalore.
- I was walking a lot more in Trivandrum!
The difference in walking distance from 10:00–20:00 shows how I was living a healthier lifestyle by taking a walk after every hour or two in office. There is a lot more (like this and this) to do with your Location history. You can also explore your Twitter/Facebook/Chrome data. Some handy tips when trying to explore your dataset: - Spend a significant amount of time pre-processing your data. It’s painful but worth it. - When working with large volumes of data, pre-processing can be computationally heavy. Instead of re-running the Jupyter Cells every time, dump the post-preprocessed data into a pickle file and simply import the data when you start again. - Initially, you might miserably fail (like me) in finding any patterns. Make a list of your observations and keep exploring the dataset from different angles. If you ever reach a point wondering whether any patterns are even present, ask yourself three questions: (i) Do I have a thorough understanding of the various attributes of data? (ii) Is there anything I can do to improve my pre-processing step? (iii) Have I explored the relationship between all the attributes using all possible visualization / statistical tools? To get started you can use my Jupyter Notebook here. If you have any questions/suggestions feel free to post on comments. You can connect with me on LinkedIn or email me at k.mathur68@gmail.com.
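As a footnote to the tip above about caching the pre-processed data, here is a minimal sketch of the pickle round trip (the file name is arbitrary):

import pickle

# Dump the data once after the expensive pre-processing step...
with open('location_history_clean.pkl', 'wb') as f:
    pickle.dump(data, f)

# ...and load it straight back the next time the notebook starts.
with open('location_history_clean.pkl', 'rb') as f:
    data = pickle.load(f)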
https://www.tefter.io/bookmarks/57689/readable
CC-MAIN-2020-10
en
refinedweb
Thanks! I realized that and already changed my post to read "almost satisfying" instead of "not fully satisfying."

jahboater wrote: ↑ Mon Jul 15, 2019 7:11 pm
The results for this C code are correct and as expected.

ejolson wrote: ↑ Mon Jul 15, 2019 6:53 pm
One obtains a different but still not fully satisfying result with the C program. I wonder if Richard's BBC Basic does the same.

Code:
#include <stdio.h>
#include <math.h>
int main(){
    for(double x=1e3;x<=1e32;x*=10){
        double y=floor(x);
        printf("floor(%g)=%f %s\n", x,y,y==x?"equal":"not equal");
    }
    return 0;
}

floor(x) returns the largest integral value not greater than x. Therefore if x is an integer, floor(x) will return x. And in your example x is always an integer.
I think you need for the example: long n = lrint(x)
If the rounded value of x cannot be stored in a long, it will report a domain error (and raise the FE_INVALID exception).
I wonder what happens if one moves along exact powers of two.
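To follow up on that closing thought, here is a small untested sketch that steps along exact powers of two instead of powers of ten. Since every power of two in this range is exactly representable as a double, floor(x) should compare equal to x on every iteration:

#include <stdio.h>
#include <math.h>
int main(void){
    for(double x=8.0;x<=1e32;x*=2.0){
        double y=floor(x);
        printf("floor(%g)=%f %s\n", x,y,y==x?"equal":"not equal");
    }
    return 0;
}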
https://www.raspberrypi.org/forums/viewtopic.php?p=1501679
CC-MAIN-2020-10
en
refinedweb
Idiomatic Python: functions versus classes Brett In Python, everything is an object. Compared to Java which forces you to code everything in an object-oriented programming style but still has the concept of primitive types which are not objects on their own (although Java 5 added autoboxing to help hide this discrepancy), Python has no primitive types which aren’t objects but provides the concept of functions to support procedural programming (Python also has minor support for functional programming but that’s another conversation). While this dichotomy in Python of supporting both object-oriented and procedural programming allows using the right approach for the needs of the problem, people coming from other languages that are only procedural or object-oriented tends to lead to people missing the opportunity to harness this dichotomy that Python provides. The hope of this post is to show people coming from object-oriented-heavy languages like Java and C# that using functions when it makes sense is an idiomatic use of Python. The biggest question you need to ask yourself when working with Python is whether a class is even appropriate for the problem (Jack Diederich gave a talk at PyCon US 2012 on this exact topic)? If you take the time to think over the problem you’re trying to solve you may find the answer is “no” on a somewhat regular basis. All of this begs the question of what exactly is object-oriented programming for? For many this is a question that they have never stopped to think about because they came from a language that was entirely object-oriented to begin with or a language that just completely lacked it. The naïve answer is that object-oriented programming provides a way to organize code that puts methods with the data they work with. But really that’s just a kind of namespace, and namespaces are not unique to object-oriented programming. The truly important part of object-oriented programming is dispatching/messaging. This is what inheritance is built on top of and what lets an object choose which method to use at run-time. The classic ability of object-oriented programming to affect semantics by overriding a method in a subclass comes into play and shows its usefulness in this regard. When you want to change semantics of a method that another method uses, essentially injecting your differing semantics into the middle of executing code without having to copy-and-paste the surrounding code to make such a change is where object-oriented programming shines. But is the ability to override methods always important? Consider the heapq module in Python’s standard library. While a heap could be viewed as a classic object-oriented programming problem by providing a heap object, it might not always make sense to dictate upfront that someone use a special class just to have a heap. In the case of heapq, you pass in a mutable sequence that is to be the heap into functions instead of creating a heap class to begin with. This becomes important in terms of API flexibility as it means that while your code may want to use a heap, you can have users of your code just give you a mutable sequence and you can turn it into a heap as you deem necessary. So in this instance, it’s much better to have your code which implements a heap be functions that can work with any mutable sequence instead of a class that someone must start out using. 
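To make the contrast concrete, here is a small sketch of the function-based approach; any ordinary list can serve as the heap, with no dedicated heap class required (the variable names are made up):

import heapq

readings = [7, 2, 9, 4, 1]          # an ordinary mutable sequence
heapq.heapify(readings)             # rearrange it in place to satisfy the heap invariant
heapq.heappush(readings, 3)         # push a new value onto the heap
smallest = heapq.heappop(readings)  # pop the smallest value, which is 1 here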
Essentially, unless you see a need for users of your code to customize some bit of functionality that is in the middle of a call chain, don’t bother with using a class. Namespacing in Python is very rich, so you don’t need object-oriented programming to organize data with the code that works with it (simply create a module that groups code that operates on similar objects together). And if you expect users to only change semantics of your code at the beginning or end of a computation, then you still don’t need object-oriented programming as that’s just wrapping a function call to manipulate data before/after calling your function. In the end, you really only need to object-oriented programming to provide a mechanism for which the semantics of how something is computed or accessed at a fundamental level that permeates through code implicitly. To help make this distinction more clear, think about heapsorting again. This is a straight-forward concept and it’s easy to write a function that takes a sequence and performs a heapsort which requires just comparing objects as less-than or equal to each other. Since the algorithm is already object-agnostic, the sorting function itself doesn’t need to be a method on the sequence object. But since comparing two objects can need to vary between objects and is embedded within the heapsort algorithm, the comparison operation should be a method that can be easily overridden. While you could take a comparison function as an argument to the sorting function, that comparison function would need to know how to handle all possible objects in the sequence, while having the comparison code attached to the object means an object just needs to worry about itself in the comparison. And so in this instance it makes sense for comparing to be an object-oriented feature while the sorting of a sequence isn’t. And even when it does make sense to define a class, don’t overdo it. For instance, do not overdo the use of staticmethod. While it might be tempting to put all related code on a class even when it doesn’t require access to the instance or class itself, the use of staticmethod should be relegated to only things which a subclass may want to override. If you have a staticmethod which is private or is not called by other methods on the class then you should simply make it a function and pass the instance into the function. It works just as well and clearly separates what is tightly coupled to the class’ implementation and what isn’t.
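A rough sketch of that last point, using invented names: the helper needs nothing from the instance, so it reads just as well as a module-level function that the method calls.

class Invoice:
    def __init__(self, lines):
        self.lines = lines

    def total(self):
        # Calls a plain function rather than a private staticmethod.
        return sum(_line_price(line) for line in self.lines)

def _line_price(line):
    # Needs no access to the instance or the class, so it lives at module level.
    return line.quantity * line.unit_price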
https://devblogs.microsoft.com/python/idiomatic-python-functions-versus-classes/
CC-MAIN-2020-10
en
refinedweb
CareKit

CareKit™ is an open source software framework for creating apps that help people better understand and manage their health. The framework provides modules that can be used out of the box, or extended and customized for more targeted use cases. It is composed of three SPM packages which can each be imported separately.

CareKit: This is the best place to start building your app. CareKit provides view controllers that tie CareKitUI and CareKitStore together. The view controllers leverage Combine to provide synchronization between the store and the views.

CareKitUI: Provides the views used across the framework. The views are subclasses of UIView that are open and extensible. Properties within the views are public allowing for full control over the content.

CareKitStore: Provides a Core Data solution for storing patient data. It also provides the ability to use a custom store, such as a third party database or API.

Table of Contents
- Requirements
- Getting Started
- CareKit
- CareKitUI
- CareKitStore
- Integrating with ResearchKit
- Getting Help
- License

Requirements

The primary CareKit framework codebase supports iOS and requires Xcode 11.0 or newer. The CareKit framework has a Base SDK version of 13.0.

Getting Started

Installation (Option One): SPM

CareKit can be installed via SPM. Create a new Xcode project and navigate to File > Swift Packages > Add Package Dependency. Enter the url and tap Next. Choose the master branch, and on the next screen, check off the packages as needed. To add localized strings to your project, add the strings file to your project: English Strings

Installation (Option Two): Embedded Framework

Download the project source code and drag in CareKit.xcodeproj, CareKitUI.xcodeproj, and CareKitStore.xcodeproj as needed. Then, embed the framework as a dynamic framework in your app, by adding it to the Embedded Binaries section of the General pane for your target as shown in the figure below.

OCKCatalog App

The included catalog app demonstrates the different modules that are available in CareKit: OCKCatalog

OCKSampleApp

The included sample app demonstrates a fully constructed CareKit app: OCKSample

CareKit

CareKit is the overarching package that provides view controllers to tie CareKitUI and CareKitStore together. When importing CareKit, CareKitUI and CareKitStore will be imported under the hood.

List view controllers

CareKit offers full screen view controllers for convenience. The view controllers query for and display data from a store, and stay synchronized with the data.

OCKDailyTasksPageViewController: Displays tasks for each day with a calendar to page through dates.

OCKContactsListViewController: Displays a list of contacts in the store.

Synchronized View Controllers

For each card in CareKitUI there is a corresponding view controller in CareKit. The view controllers are self contained modules that can be placed anywhere by using standard view controller containment. The view controller for each card leverages Combine to provide synchronization between the view and the store. This is done with the help of an OCKSynchronizedStoreManager, which wraps a store. When the store is modified, the store manager will publish notifications of the change. View controllers listen for this notification, and update the views accordingly.
To create a synchronized view controller: // Create a store to hold your data. let store = OCKStore(named: "my-store", type: .onDisk) // Create a store manager to handle synchronization. Use this to instantiate all synchronized view controllers. let storeManager = OCKSynchronizedStoreManager(wrapping: store) // Create a view controller that queries for and displays data. The view will update automatically whenever the data in the store changes. let viewController = OCKSimpleTaskViewController(taskID: "doxylamine", eventQuery: OCKEventQuery(for: Date()), storeManager: storeManager) All synchronized view controllers have a controller and a view synchronizer. The controller is responsible for performing business logic. The view synchronizer defines how to instantiate the view to display, and how to update the view when the data in the store changes. Controllers and view synchronizers are customizable, and can be injected into a view controller to perform custom behavior: // Define a custom view synchronizer. class CustomSimpleTaskViewSynchronizer: OCKSimpleTaskViewSynchronizer { override func makeView() -> OCKSimpleTaskView { let view = super.makeView() // Customize the view when it is instantiated here... return view } override func updateView(_ view: OCKSimpleTaskView, context: OCKSynchronizationContext<OCKTaskEvents?>) { super.updateView(view, context: context) // Update the view when the data changes in the store here... } } // Define a custom controller. class CustomSimpleTaskController: OCKSimpleTaskController { // Override functions here to provide custom business logic for the view controller... } // Instantiate the view controller with the custom classes, then fetch and observe data in the store. let viewController = OCKSimpleTaskViewController(controller: CustomSimpleTaskController(storeManager: storeManager), viewSynchronizer: CustomSimpleTaskViewSynchronizer()) viewController.controller.fetchAndObserveEvents(forTaskID: "Doxylamine", eventQuery: OCKEventQuery(for: Date())) Custom Synchronized View ControllersCustom Synchronized View Controllers CareKit supports creating a custom view that can be paired with a synchronized view controller to synchronize the custom view and the data in the store. In the sample code below, notice that the custom view conforms to OCKTaskDisplayable. This allows the view to notify the view controller when to perform certain business logic. This also means that when creating a custom view, you only need to call methods on OCKTaskDisplayable from the view and the view controller will automatically react: // Define a custom view that displays an event for a task. class TaskButton: UIButton, OCKTaskDisplayable { weak var delegate: OCKTaskViewDelegate? override init(frame: CGRect) { super.init(frame: frame) addTarget(self, action: #selector(didTap(_:)), for: .touchUpInside) } @objc func didTap(_ sender: UIButton) { sender.isSelected.toggle() // Notify the view controller to mark the event for the task as completed. delegate?.taskView(self, didCompleteEvent: sender.isSelected, at: IndexPath(row: 0, section: 0), sender: sender) } } // Define a controller for the view to perform custom business logic. class TaskButtonController: OCKTaskController { // This function gets called as a result of the delegate call in the view. func setEvent(atIndexPath indexPath: IndexPath, isComplete: Bool, completion: ((Result<OCKAnyOutcome, Error>) -> Void)?) { // Perform custom behavior here... } } // Define a view synchronizer for the custom view. 
class TaskButtonViewSynchronizer: OCKTaskViewSynchronizerProtocol { typealias View = TaskButton // Instantiate the custom view. func makeView() -> TaskButton { return TaskButton(frame: CGRect(x: 0, y: 0, width: 200, height: 60)) } // Update the custom view when the data in the store changes. func updateView(_ view: TaskButton, context: OCKSynchronizationContext<OCKTaskEvents?>) { let event = context.viewModel?.firstEvent view.titleLabel?.text = event?.task.title view.isSelected = event?.outcome != nil } } // Finally, define a view controller that ties everything together. class TaskButtonViewController: OCKTaskViewController<TaskButtonController, TaskButtonViewSynchronizer> {} // Instantiate the view controller with the custom classes, then fetch and observe data in the store. let viewController = TaskButtonViewController(controller: TaskButtonController(storeManager: storeManager), viewSynchronizer: TaskButtonViewSynchronizer()) viewController.controller.fetchAndObserveEvents(forTaskID: "Doxylamine", eventQuery: OCKEventQuery(for: Date())) SwiftUISwiftUI CareKit controllers are compatible with SwiftUI, and can help take care of synchronization with the store. Start by defining a SwiftUI view: struct ContentView: View { // Observe the view model in the controller. @ObservedObject var controller: OCKSimpleTaskController // Define an event for convenience. var event: OCKAnyEvent? { controller.objectWillChange.value?.firstEvent } var body: some View { VStack(alignment: .center, spacing: 16) { Text(event?.task.title ?? "") Button(action: { let isComplete = self.event?.outcome != nil self.controller.setEvent(atIndexPath: IndexPath(row: 0, section: 0), isComplete: !isComplete, completion: nil) }) { self.event?.outcome != nil ? Text("Mark as Completed") : Text("Completed") } } } } Next, create a controller and instantiate the view: let controller = OCKSimpleTaskController(storeManager: manager) controller.fetchAndObserveEvents(forTaskID: "doxylamine", eventQuery: OCKEventQuery(for: Date())) let contentView = ContentView(controller: controller) CareKitUICareKitUI CareKitUI provides cards to represent tasks, charts, and contacts. There are multiple provided styles for each category of card. All cards are built in a similar pattern, making it easy to recognize and customize the properties of each. They contain a headerView at the top that displays labels and icons. The contents of the card is placed inside a vertical contentStackView, allowing easy placement of custom views into a card without having to worry about breaking existing constraints. For creating a card from scratch, see the OCKCardable protocol. Conforming to this protocol allows for styling a custom card to match the styling used across the framework. TasksTasks Below are the available task card styles: As an example, the instructions task card can be instantiated and customized like so: let taskView = OCKInstructionsTaskView() taskView.headerView.titleLabel.text = "Doxylamine" taskView.headerView.detailLabel.text = "7:30 AM to 8:30 AM" taskView.instructionsLabel.text = "Take the tablet with a full glass of water." 
taskView.completionButton.isSelected = false taskView.completionButton.label.text = "Mark as Completed" ChartsCharts Below are the available chart card styles: As an example, the bar chart can be instantiated and customized like so: let chartView = OCKCartesianChartView(type: .bar) chartView.headerView.titleLabel.text = "Doxylamine" chartView.graphView.dataSeries = [ OCKDataSeries(values: [0, 1, 1, 2, 3, 3, 2], title: "Doxylamine") ] ContactsContacts Below are the available contact card styles: As an example, the simple contact card can be instantiated and customized like so: let contactView = OCKSimpleContactView() contactView.headerView.titleLabel.text = "Lexi Torres" contactView.headerView.detailLabel.text = "Family Practice" StylingStyling To easily provide custom styling or branding across the framework, see the OCKStylable protocol. All stylable views derive their appearance from a list of injected constants. This list of constants can be customized for quick and easy styling. For example, to customize the separator color in a view and all of it's descendents: // Define your custom separator color. struct CustomColors: OCKColorStyler { var separator: UIColor { .black } } // Define a custom struct to hold your custom color. struct CustomStyle: OCKStyler { var color: OCKColorStyler { CustomColors() } } // Apply the custom style to your view. let view = OCKSimpleTaskView() view.customStyle = CustomStyle() Note that each view in CareKitUI is by default styled with OCKStyle. Setting a custom style on a view will propagate the custom style down to any subviews that do not already have a custom style set. The style propagation rules can be visualized in this diagram demonstrating three separate view hierarchies: CareKitStoreCareKitStore The CareKitStore package defines the OCKStoreProtocol that CareKit uses to talk to data stores, and a concrete implementation that leverages CoreData, called OCKStore. It also contains definitions of most of the core structures and data types that CareKit relies on, such OCKAnyTask, OCKTaskQuery, and OCKSchedule. StoreStore The OCKStore class is an append-only, versioned store packaged with CareKit. It is implemented on top of CoreData and provides fast, secure, on-device storage. OCKStore was designed to integrate with CareKit's synchronized view controllers, but can be used in isolation as well. import CareKitStore let store = OCKStore(named: "my-store", type: .onDisk) let breakfastSchedule = OCKSchedule.dailyAtTime(hour: 8, minutes: 0, start: Date(), end: nil, text: "Breakfast") let task = OCKTask(id: "doxylamine", title: "Doxylamine", carePlanID: nil, schedule: breakfastSchedule) store.addTask(task) { result in switch result { case .failure(let error): print("Error: \(error)") case .success: print("Successfully saved a new task!") } } The most important feature of OCKStore is that it is a versioned store with a notion of time. When querying the store using a date range, the result returned will be for the state of the store during the interval specified. // On January 1st let task = OCKTask(id: "doxylamine", title: "Take 1 tablet of Doxylamine", carePlanID: nil, schedule: breakfastSchedule) store.addTask(task) // On January 10th let task = OCKTask(id: "doxylamine", title: "Take 2 tablets of Doxylamine", carePlanID: nil, schedule: breakfastSchedule) store.updateTask(task) // On some future date... let earlyQuery = OCKTaskQuery(dateInterval: /* Jan 1st - 5th */) store.fetchTasks(query: earlyQuery, callbackQueue: .main) { result in let title = try! 
result.get().first?.title // Take 1 Tablet of Doxylamine } let laterQuery = OCKTaskQuery(dateInterval: /* Jan 12th - 17th */) store.fetchTasks(query: laterQuery, callbackQueue: .main) { result in let title = try! result.get().first?.title // Take 2 Tablets of Doxylamine } // Queries return the newest version of the task during the query interval! let midQuery = OCKTaskQuery(dateInterval: /* Jan 5th - 15th */) store.fetchTasks(query: laterQuery, callbackQueue: .main) { result in let title = try! result.get().first?.title // Take 2 Tablets of Doxylamine } SchemaSchema CareKitStore defines six high level entities as illustrated in this diagram: Patient: A patient represents the user of the app. Care Plan: A patient may have zero or more care plans. A care plan organizes the contacts and tasks associated with a specific treatment. For example, a patient may have one care plan for heart disease and a second for obesity. Contact: A care plan may have zero or more associated contacts. Contacts might include doctors, nurses, insurance providers, or family. Task: A care plan may have zero or more tasks. A task represents some activity that the patient is supposed to perform. Examples may include taking a medication, exercising, journaling, or checking in with their doctor. Schedule: Each task must have a schedule. The schedule defines occurrences of a task, and may optionally specify target or goal values, such as how much of a medication to take. Outcome: Each occurrence of a task may or may not have an associated outcome. The absence of an outcome indicates no progress was made on that occurrence of the task. Outcome Value: Each outcome may have zero or more values associated with it. A value might represent how much medication was taken, or a plurality of outcome values could represent the answers to a survey. It is important to note that tasks, contacts, and care plans can exist without a parent entity. Many CareKit apps are targeted to well defined use cases, and it can often be expedient to simply create tasks and contacts without defining a patient or care plan. SchedulingScheduling The scheduling tools provided in CareKit allow very precise and customizable scheduling of tasks. Each instance of an OCKSchedule is created by composing one or more OCKScheduleElements, which each define a single repeating interval. Static convenience methods exist to help with common use cases. let breakfastSchedule = OCKSchedule.dailyAtTime(hour: 8, minutes: 0, start: Date(), end: nil, text: "Breakfast") let everySaturdayAtNoon = OCKSchedule.weeklyAtTime(weekday: 7, hours: 12, minutes: 0, start: Date(), end: nil) Highly precise, complicated schedules can be created by combining schedule elements or other schedules. // Combining elements to create a complex schedule let elementA = OCKScheduleElement(start: today, end: nextWeek, interval: DateComponents(hour: 36)) let elementB = OCKScheduleElement(start: lastWeek, end: nil, interval: DateComponents(day: 2)) let complexSchedule = OCKSchedule(composing: [elementA, elementB]) // Combing two schedules into a composed schedule let dailySchedule = OCKSchedule.dailyAtTime(hour: 8, minutes: 0, start: tomorrow, end: nextYear, text: nil) let crazySchedule = OCKSchedule(composing: [dailySchedule, complexSchedule]) Schedules have a number of other useful properties that can be set, including target values, durations, and textual descriptions. 
let element = OCKScheduleElement( start: today, // The date and time this schedule will begin end: nextYear, // The date and time this schedule will end interval: DateComponents(day: 3), // Occurs every 3 days text: "Before bed", // "Before bed" will be show instead of clock time targetValues: [OCKOutcomeValue(10, units: "mL")], // Specifies what counts as "complete" duration: Duration = .hours(2) // The window of time to complete the task ) text: By default, CareKit view controllers will prompt users to perform tasks using clock time (e.g. "8:00PM"). If you provide a textproperty, then the text will be used to prompt the user instead ("Before bed"). duration: If you provide a duration, CareKit will prompt the user to perform the scheduled task within a window (e.g. "8:00 - 10:00 PM"). The duration can also be set to .allDayif you do not wish to specify any time in particular. targetValues: Target values are used by CareKit to determine if a user has completed a specific task or not. See OCKAdherenceAggregatorfor more details. Custom Stores and TypesCustom Stores and Types The OCKStore class provided with CareKit is a fast, secure, on-device store and will serve most use cases well. That said, we recognize it may not fully meet the needs of all our developers, so CareKit also allows you to write your own store. For example, you could write a wrapper around a web server, or even a simple JSON file. Any class that conforms to the OCKStoreProtocol can be used in place of the default store. Writing a CareKit store adapter requires defining the entities that will live in your store and implementing asynchronous Create, Read, Update, and Delete methods for each. Stores are free to define their own types, as long as those types conform to a certain protocol. For example, if you are writing a store that can hold tasks, you might do it like this. import CareKitStore struct MyTask: OCKAnyTask & Equatable & Identifiable { // MARK: OCKAnyTask let id: String let title: String let schedule: String /* ... */ // MARK: Custom Properties let difficulty: DifficultyRating /* ... */ } struct MyTaskQuery: OCKAnyTaskQuery { // MARK: OCKAnyTaskQuery let ids: [String] let carePlanIDs: [String] /* ... */ // MARK: Custom Properties let difficult: DifficultyRating? } class MyStore: OCKStoreProtocol { typealias Task = MyTask typealias TaskQuery = MyTaskQuery /* ... */ // MARK: Task CRUD Methods func fetchTasks(query: TaskQuery, callbackQueue: DispatchQueue, completion: @escaping OCKResultClosure<[Task]>) { /* ... */ } func addTasks(_ tasks: [Task], callbackQueue: DispatchQueue, completion: OCKResultClosure<[Task]>?) { /* ... */ } func updateTasks(_ tasks: [Task], callbackQueue: DispatchQueue, completion: OCKResultClosure<[Task]>?) { /* ... */ } func deleteTasks(_ tasks: [Task], callbackQueue: DispatchQueue, completion: OCKResultClosure<[Task]>?) { /* ... */ } /* ... */ } Using the four basic CRUD methods you supply, CareKit is able to use protocol extensions to imbue your store with extra functionality. For example, a store that implements the four CRUD methods for tasks automatically receives the following methods. func fetchTask(withID id: String, callbackQueue: DispatchQueue, completion: @escaping OCKResultClosure<Task>) func addTask(_ task: Task, callbackQueue: DispatchQueue, completion: OCKResultClosure<Task>?) func updateTask(_ task: Task, callbackQueue: DispatchQueue, completion: OCKResultClosure<Task>?) func deleteTask(_ task: Task, callbackQueue: DispatchQueue, completion: OCKResultClosure<Task>?) 
Methods provided via protocol extensions employ naive implementations. As the developer, you are free to provide your own implementations that leverage the capabilities of your underlying data store to achieve greater performance or efficiency. If you are considering implementing your own store, read over the protocol notes and documentation carefully. Integrating with ResearchKitIntegrating with ResearchKit CareKit and ResearchKit are sister frameworks and are designed to integrate well with one another. When integrating a ResearchKit into your CareKit app, there are a series of steps you will need to follow. - Subclass an existing task view controller - Override the method that is called when the task is completed - Present a ResearchKit survey and wait for the user to complete it - Get the survey result and save it to CareKit's store Here is an example demonstrating how to prompt the user to rate their pain on a scale of 1-10. Keep in mind as you're reading the code below that CareKit and ResearchKit both use the term "task", but that they are distinct. // 1. Subclass a task view controller to customize the control flow and present a ResearchKit survey! class SurveyViewController: OCKInstructionsTaskViewController, ORKTaskViewControllerDelegate { // 2. This method is called when the use taps the button! override func taskView(_ taskView: UIView & OCKTaskDisplayable, didCompleteEvent isComplete: Bool, at indexPath: IndexPath, sender: Any?) { // 2a. If the task was marked incomplete, fall back on the super class's default behavior or deleting the outcome. if !isComplete { super.taskView(taskView, didCompleteEvent: isComplete, at: indexPath, sender: sender) return } // 2b. If the user attempted to mark the task complete, display a ResearchKit survey. let answerFormat = ORKAnswerFormat.scale(withMaximumValue: 10, minimumValue: 1, defaultValue: 5, step: 1, vertical: false, maximumValueDescription: "Very painful", minimumValueDescription: "No pain") let painStep = ORKQuestionStep(identifier: "pain", title: "Pain Survey", question: "Rate your pain", answer: answerFormat) let surveyTask = ORKOrderedTask(identifier: "survey", steps: [painStep]) let surveyViewController = ORKTaskViewController(task: surveyTask, taskRun: nil) surveyViewController.delegate = self // 3a. Present the survey to the user present(surveyViewController, animated: true, completion: nil) } // 3b. This method will be called when the user completes the survey. func taskViewController(_ taskViewController: ORKTaskViewController, didFinishWith reason: ORKTaskViewControllerFinishReason, error: Error?) { taskViewController.dismiss(animated: true, completion: nil) guard reason == .completed else { taskView.completionButton.isSelected = false return } // 4a. Retrieve the result from the ResearchKit survey let survey = taskViewController.result.results!.first(where: { $0.identifier == "pain" }) as! ORKStepResult let painResult = survey.results!.first as! ORKScaleQuestionResult let answer = Int(truncating: painResult.scaleAnswer!) // 4b. Save the result into CareKit's store controller.appendOutcomeValue(withType: answer, at: IndexPath(item: 0, section: 0), completion: nil) } } Once you have defined this view controller, you can add it into your app as you would any other CareKit view controller! 
let todaysSurveyCard = SurveyViewController( taskID: "survey", eventQuery: OCKEventQuery(for: Date()), storeManager: storeManager) present(surveyCard, animated: true, completion: nil) You may also decide that you want the view to update to display the result of your survey instead of the default values used by the superclass. To change that, you can implement your own view synchronizer. class SurveyViewSynchronizer: OCKInstructionsTaskViewSynchronizer { // Customize the initial state of the view override func makeView() -> OCKInstructionsTaskView { let instructionsView = super.makeView() instructionsView.completionButton.label.text = "Start Survey" return instructionsView } // Customize how the view updates override func updateView(_ view: OCKInstructionsTaskView, context: OCKSynchronizationContext<OCKTaskEvents?>) { super.updateView(view, context: context) // Check if an answer exists or not and set the detail label accordingly if let answer = context.viewModel?.firstEvent?.outcome?.values.first?.integerValue { view.headerView.detailLabel.text = "Pain Rating: \(answer)" } else { view.headerView.detailLabel.text = "Rate your pain on a scale of 1 to 10" } } } Now, when you create an instance of your SurveyViewController, you can pass in your custom view synchronizer to change how the view updates. let surveyCard = SurveyViewController( viewSynchronizer: SurveyViewSynchronizer(), taskID: "survey", eventQuery: OCKEventQuery(date: Date()), storeManager: storeManager) present(surveyCard, animated: true, completion: nil) Getting HelpGetting Help GitHub is our primary forum for CareKit. Feel free to open up issues about questions, problems, or ideas. LicenseLicense This project is made available under the terms of a BSD license. See the LICENSE file.
http://www.alexruperez.com/entries/5223-carekit-apple-carekit
CC-MAIN-2020-10
en
refinedweb
Create Your First React Electron Desktop App With TypeScript.TypeScript can help improve your app quality by informing you of type errors in your code, so it’s a good idea to start integrating this into your development flow if you haven’t already. In a previous piece, I went over the steps of creating desktop application software using Electron. This piece will start off by cloning the repo and extending it to support TypeScript so that we get type-checking capabilities while developing our desktop app. TypeScript can help improve your app quality by informing you of type errors in your code, so it’s a good idea to start integrating this into your development flow if you haven’t already. With that said, this is not a continuation tutorial, but we will be using the repo to extend it so that users like you and I can start taking advantage of TypeScript features when developing desktop apps. And without further ado, let’s get started! (Note: If you want to have a copy of the resulting repo that we will be building, visit this link) The first thing we are going to do is to clone the repo. After it’s done, go into the directory and install the dependencies using the CLI: npm install Once it’s done installing the app, lets make sure that we have a working project by starting it up in dev mode: npm start If it was successful, you should see this window: That started up our live hot reloadable web server for our React app. Now go ahead and run Electron: npm run electron If that was successful, you should then see this window: Great! Now that we know we have a working app, let’s continue with installing TypeScript into the project: npm i -D typescript (Note: -D is just an alias for --save-dev) We’re going to install ESLint next. You might be wondering why I’m even bothering with ESLint since it is mainly in concern with linting JavaScript. The team behind TSLint made an announcement earlier this year announcing their plans moving forward and decided that TSLint will become deprecated in favor of ESLint. As a result, tools were eventually developed onward that allow developers to use ESLint and TypeScript together. @typescript-eslint/parser is a parser that turns our source code into an Abstract Syntax Tree (AST) that enables ESLint to be used with TypeScript by utilizing the TypeScript compiler. You can read about it on GitHub for more information. We will also need to install @typescript-eslint/eslint-plugin. I’m going to list the packages that I regularly use in my React projects. You don’t have to install all of them, but eslint and the bottom five of this list are what you’ll most definitely want to use in your projects: So let’s go ahead and install eslint and all of the others: npm install -D eslint eslint-config-airbnb eslint-config-prettier eslint-plugin-import eslint-plugin-jsx-a11y eslint-plugin-prettier eslint-plugin-react eslint-plugin-react-hooks @typescript-eslint/parser @typescript-eslint/eslint-plugin Let’s also not forget about typescript itself: npm install -D typescript Next, we’re going to create a .eslintrc.js file in our root directory. 
Here's my .eslintrc.js: module.exports = { parser: '@typescript-eslint/parser', parserOptions: { project: './tsconfig.json', ecmaFeatures: { jsx: true, }, }, env: { browser: true, jest: true, }, extends: [ 'airbnb', 'prettier', 'prettier/react', 'prettier/@typescript-eslint', 'plugin:@typescript-eslint/recommended', 'plugin:prettier/recommended', ], plugins: ['@typescript-eslint', 'react-hooks', 'prettier'], rules: { '@typescript-eslint/explicit-function-return-type': 'off', '@typescript-eslint/indent': 'off', '@typescript-eslint/explicit-member-accessibility': 'off', '@typescript-eslint/member-delimiter-style': 'off', '@typescript-eslint/no-use-before-define': 'off', '@typescript-eslint/no-explicit-any': 'off', '@typescript-eslint/camelcase': 'off', 'arrow-parens': [2, 'always'], 'arrow-body-style': 0, 'consistent-return': 0, 'css-modules/no-unused-class': 'off', camelcase: 0, 'class-methods-use-this': 0, 'comma-dangle': 0, 'dot-notation': 0, eqeqeq: 0, 'flowtype/no-types-missing-file-annotation': 0, 'func-names': 'off', 'import/prefer-default-export': 0, 'import/no-extraneous-dependencies': 'off', 'import/newline-after-import': 'off', 'import/first': 'off', 'import/no-extensions': 'off', 'import/extensions': 'off', 'import/no-unresolved': 'off', 'import/no-useless-path-segments': 0, 'import/no-absolute-path': 'off', 'jsx-a11y/html-has-lang': 0, 'jsx-a11y/alt-text': 0, 'jsx-a11y/anchor-is-valid': 'off', 'jsx-a11y/click-events-have-key-events': 'off', 'jsx-a11y/href-no-hash': 0, 'jsx-a11y/no-static-element-interactions': 0, 'jsx-a11y/no-noninteractive-element-interactions': 0, 'jsx-a11y/no-autofocus': 0, 'jsx-a11y/label-has-associated-control': 0, 'jsx-a11y/label-has-for': 0, 'jsx-quotes': ['error', 'prefer-double'], 'jsx-a11y/media-has-caption': 0, 'jsx-a11y/anchor-has-content': 0, 'linebreak-style': 0, 'max-len': 0, 'no-alert': 0, 'no-case-declarations': 0, 'no-underscore-dangle': 'off', 'no-useless-escape': 'off', 'no-trailing-spaces': 0, 'no-multi-assign': 'off', 'no-nested-ternary': 'off', 'no-lonely-if': 'off', 'no-plusplus': 'off', 'no-loop-func': 'off', 'no-unused-expressions': 0, 'no-unused-vars': 1, 'no-confusing-arrow': 0, 'no-use-before-define': 0, 'no-console': 0, 'no-return-assign': 0, 'no-restricted-properties': 0, 'no-param-reassign': 0, 'no-shadow': 0, 'no-prototype-builtins': 0, 'no-multiple-empty-lines': 0, 'no-else-return': 0, 'object-curly-spacing': ['error', 'always'], 'object-property-newline': 0, 'one-var': 0, 'one-var-declaration-per-line': 0, 'prettier/prettier': 0, 'padded-blocks': 0, 'prefer-template': 0, 'prefer-destructuring': 0, quotes: 2, 'react-hooks/exhaustive-deps': 'warn', 'react-hooks/rules-of-hooks': 'error', 'react/no-multi-comp': 0, 'react/jsx-wrap-multilines': 0, 'react/default-props-match-prop-types': 'off', 'react/no-find-dom-node': 'off', 'react/destructuring-assignment': 'off', 'react/jsx-no-bind': 'off', 'react/jsx-filename-extension': [ 'error', { extensions: ['.js', '.jsx', '.ts', '.tsx'], }, ], 'react/react-in-jsx-scope': 0, 'react/prop-types': 0, 'react/forbid-prop-types': 0, 'react/no-children-prop': 0, 'react/no-array-index-key': 0, 'react/prefer-stateless-function': 'off', 'react/sort-comp': 0, 'react/no-unescaped-entities': 0, 'react/jsx-no-bind': 0, 'react/no-unused-state': 1, 'react/no-unused-prop-types': 0, 'react/jsx-pascal-case': 0, 'react/no-danger': 0, 'react/require-default-props': 0, 'react/jsx-curly-spacing': 0, 'react/jsx-max-props-per-line': 1, 'space-in-parens': ['error', 'never'], 'spaced-comment': 0, 'space-infix-ops': 0, 
'space-unary-ops': 0, 'space-before-function-paren': 0, }, settings: { 'import/resolver': { node: { moduleDirectory: ['node_modules', 'src'], }, }, }, } eslintrc.js Now when we implement TypeScript into an Electron project, it gets a little tricky. TypeScript is a typed superset of JavaScript that compiles code to plain JavaScript, which is what we want. But there might actually be an issue on this when building apps in Electron that we might not have been aware of at first glance, especially if we just started using Electron. The problem is that there are actually two types of processes that run in Electron. One is called the main process and the other is the renderer process. When Electron creates web pages, they’re created as renderer processes (which are essentially living in a browser environment). Electron can create and run multiple renderer processes at the same time, but ultimately there can only be one main process. Since renderer processes are web pages, they’re blocked from calling native GUI APIs because it would be a huge security concern to allow them to manage GUI resources. Electron enables a one-way communication tunnel between the renderer and the main process by utilizing ipcMain, ipcRenderer, or remote. Because of this restriction, we must split the directories in such a way that we develop code for the main process separately apart from the renderer process, so that we have TypeScript compile them separately. This is so we don’t create problems in the software from compiling together their code. Let’s look at our directory structure and see what we’ve got: It looks like we have start.js, which is the main process, living in the same directory as the code of the renderer process ( App.js, index.js, index.css, etc). So we have to separate them to something like this: Note: I renamed the files in the screenshot so that they are TypeScript files. This is a good start. However, when we configure the typescript config file we have to specify a glob that TypeScript will use to include in all the files that it matches in the compilation, including where to output them to. We’re still stuck at the previous issue, so what we’re going to do is to make the current root directory to be the parent directory, which will hold the mainand renderer process code. We’re also going to make both of them be independent repos so that we can gain the benefits of npm installing packages that only need to exposed to a specific process and vice versa. This will help give us an easier time debugging in the future from having our directories more abstracted and organized. What we’re going to do is to move everything except the main directory to the renderer directory. The reason we do this is because this project was bootstrapped by create-react-app, which is essentially already an environment inside a renderer process: Now that we’ve got the renderer repo out of the way, let's make the main process into its own repo: # step into the main directory cd main # initialize npm npm init Just press Enter through everything. Now open up the package.json and you should see a nearly empty package.json file: { "name": "main", "version": "1.0.0", "description": "", "main": "index.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" }, "author": "", "license": "ISC" } What we’re going to need to change here is the "main" part, not because it's not a TypeScript file, but because this is the Electron file we are going to be putting in our output directory when we run the build command later. 
When we build our app, we're going to initiate it inside the renderer directory so we need a clearer name: { "name": "main", "version": "1.0.0", "description": "", "main": "./src/electron.js", "scripts": { "test": "echo \"Error: no test specified\" && exit 1" }, "author": "", "license": "ISC" } You might be confused as to why we didn’t write the name ending with a TypeScript extension like .ts. This is because we're going to create a electron.ts file which we will make TypeScript transpile it to .js in the same directory. When we run the build command in the renderer directory later, we are going to programmatically copy this file and send it to the renderer's output directory, which will be build. In order to get TypeScript to compile this file, we’re going to install TypeScript in the main repo: npm install -D typescript Then we’re going to create a tsconfig.json in its root directory: { "compilerOptions": { "target": "es5", "lib": ["dom", "dom.iterable", "esnext"], "] } We’re going to treat this as a typical repo for developing as we don’t want any unnecessary confusions going back and forth switching in between, so we’ll create a src directory and move the start.ts file right into it. This start.ts file will be the electron.ts file that will be compiled right into electron.js. Also, don’t forget to install electron: npm install electron && npm install -D @types/electron electron-is-dev In addition, we’re going to install the nodemon package so that we acquire auto restartcapabilities when we combine it with electron-reload (electron-reload is used to restart the main process when we make changes to it): npm install --save-dev nodemon electron-reload Next, we’re going to add the start command to the scripts section: { "name": "main", "version": "1.0.0", "description": "", "main": "./src/electron.js", "scripts": { "start": "cross-env NODE_ENV=dev nodemon --exec \"electron src/electron.js\" && tsc ./src/electron.ts -w" }, "author": "", "license": "ISC", "dependencies": { "electron": "^6.0.12" }, "devDependencies": { "@types/electron": "^1.6.10", "concurrently": "^5.0.0", "cross-env": "^6.0.3", "electron-is-dev": "^1.1.0", "electron-reload": "^1.5.0", "nodemon": "^1.19.3", "typescript": "^3.6.4" } } And this is our electron.ts file: import { app, BrowserWindow } from 'electron' import * as path from 'path' import * as isDev from 'electron-is-dev' import 'electron-reload' let mainWindow function createWindow() { mainWindow = new BrowserWindow({ width: 800, height: 600, webPreferences: { nodeIntegration: true, }, }) mainWindow.loadURL( isDev ? '' : `{path.join(__dirname, '../build/index.html')}`, ) mainWindow.on('closed', () => { mainWindow = null }) } app.on('ready', createWindow) app.on('window-all-closed', () => { if (process.platform !== 'darwin') { app.quit() } }) app.on('activate', () => { if (mainWindow === null) { createWindow() } }) Great! Now when we run npm start, our main process should run successfully, in addition to automatically re-compiling electron.ts to electron.js on changes: Let's move back into the renderer directory because there are a couple of things we still need to do. # move back out to the parent directory cd .. 
# move into the renderer directory
cd renderer

Note: If you're missing a tsconfig.json file, create it:

{
  "compilerOptions": {
    "allowJs": true,
    "allowSyntheticDefaultImports": true,
    "esModuleInterop": true,
    "forceConsistentCasingInFileNames": true,
    "isolatedModules": true,
    "jsx": "preserve",
    "lib": ["dom", "dom.iterable", "esnext"],
    "module": "esnext",
    "moduleResolution": "node",
    "noEmit": true,
    "skipLibCheck": true,
    "strict": false,
    "target": "es5",
    "resolveJsonModule": true
  },
  "include": ["src"]
}

If all goes well, we should now have two working processes! Go into your renderer process directory and run npm start, where you should see a working and running server as expected: And finally, open up another terminal, go into your main process directory and run npm start as well. It should be working as well: Hurray! We finally did it! We can now start almost developing! Wait, what? Yes, that's right. We're not completely done yet. Have you noticed that when you make changes to the main process code, Electron is not reloading? We're going to need the wait-on package to call the shots on when to execute the electron.js file. This perfectly solves our problem since it waits until HTTP requests return a 200 code and then it will continue to execute the script when the app is ready to continue. We're also going to use concurrently so that we can run our commands at the same time since they can be run individually:

{
  "name": "main",
  "version": "1.0.0",
  "description": "",
  "main": "./src/electron.js",
  "scripts": {
    "start": "concurrently \"tsc ./src/electron.ts -w\" \"cross-env NODE_ENV=dev nodemon --exec \"\"wait-on && electron src/electron.js\"\""
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "electron": "^6.0.12"
  },
  "devDependencies": {
    "@types/electron": "^1.6.10",
    "concurrently": "^5.0.0",
    "cross-env": "^6.0.3",
    "electron-is-dev": "^1.1.0",
    "electron-reload": "^1.5.0",
    "nodemon": "^1.19.3",
    "typescript": "^3.6.4",
    "wait-on": "^3.3.0"
  }
}

Once you reach this point, you can then begin developing your app code however you like. Remember, you're able to develop the main process separately from your renderer process, but they will be packaged together when you package them with electron-builder.

Conclusion

And that concludes this piece. I hope you found this to be valuable and that it helped you gain a little more understanding of how you can integrate TypeScript into other projects. Look for more in the future!

The official reactjs.org website contains an excellent introductory tutorial. The tutorial snippets are written in JavaScript and I am trying to convert these to TypeScript. I have managed to get the code working but have a question about using interfaces. What should the correct "function signature" be for the onClick callback? Is there a way to replace the 'any' keyword in the IProps_Square interface with an explicit function signature?
Any help or suggestions would be really appreciated, many thanks Russell index.html <!DOCTYPE html> <html lang="en"> <body> <div id="reactjs-tutorial"></div> </body> </html> index.tsx import * as React from 'react'; import * as ReactDOM from 'react-dom'; interface IProps_Square { message: string, onClick: any, } class Square extends React.Component < IProps_Square > { render() { return ( <button onClick={this.props.onClick}> {this.props.message} </button> ); } } class Game extends React.Component { render() { return ( <Square message = { 'click this' } onClick = { () => alert('hello') } /> ); } } ReactDOM.render( <Game />, document.getElementById('reactjs-tutorial') );
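A possible answer, sketched under the assumption that the project uses the standard @types/react typings: since the Square button handler takes no arguments and returns nothing, the any can be replaced with an explicit function type, either a plain () => void or React's typed mouse-event handler if you also want access to the click event.

interface IProps_Square {
  message: string;
  // Option 1: the callback takes no arguments and returns nothing
  onClick: () => void;
}

interface IProps_SquareWithEvent {
  message: string;
  // Option 2: use React's typed event, which also exposes the click details
  onClick: (event: React.MouseEvent<HTMLButtonElement>) => void;
}

Either version keeps the existing <button onClick={this.props.onClick}> usage compiling, because the arrow function passed from Game, () => alert('hello'), is assignable to both signatures.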
https://morioh.com/p/8403a7c5e10a
CC-MAIN-2020-10
en
refinedweb
In this Angular 8 CRUD tutorial, we will learn how to implement crud To-do Application in Angular 8. Befor get started with angular 8 crud operation. You must have installed Angular project. So let’s get start: Create Angular 8 In this tutorial Angular 8 CRUD tutorial, we will learn how to implement CRUD To-do Application in Angular 8. Befor get started with angular 8 crud operation. You must have installed Angular project. So let’s get start:Create Angular 8 Component What is component? Components are a logical piece of code for Angular JS application. We will create three component. Type the following command to generate Angular 8 Components. We will perform create, read, update operations. So we will create three components. To create it: ng g component todo ng g component todoAdd ng g component todoEdit Open src/app/app.module.ts you will see all the three components imported and declared in declarations section by Angular 8 itself. Now We need to create route for the above created component. So when you navigate the route you will be see the html of your component. To create new route open the “app-routing.module.ts” file under the “your project directory > src > app”. Import all three above created component to the routing module. Add the below line in the top of the “app-routing.module.ts” file. import { TodoComponent } from './todo/todo.component'; import { TodoAddComponent } from './todo-add/todo-add.component'; import { TodoEditComponent } from './todo-edit/todo-edit.component'; You will see the route variable. Changed with this Install bootstrap 4 CSS FramworkInstall bootstrap 4 CSS Framwork const routes: Routes = [ { path: '', component: TodoComponent, data: { title: 'List of todos' } }, { path: 'todo/add', component: TodoAddComponent, data: { title: 'Add todo' } }, { path: 'todo/edit/:id', component: TodoEditComponent, data: { title: 'Edit todo' } }, ]; Next, install the Bootstrap 4 CSS Framework using the following command. npm install bootstrap --save Now, add it inside the angular.json file. "styles": [ "src/styles.css", "./node_modules/bootstrap/dist/css/bootstrap.min.css" ] So, now we can use the Bootstrap 4 classes in our project.Configure the Angular 8 Form Validation We will use ReactiveFormsModule for Angular 8 Form Validation. Now, import the ReactiveFormsModule inside the app.module.ts file. Modify Angular 8 entry pageModify Angular 8 entry page import { ReactiveFormsModule } from '@angular/forms'; imports: [ ReactiveFormsModule ], Open src/app/app.component.html and modify this HTML page to fit the CRUD page. 
Replace with the following html <router-outlet></router-outlet> <div class="container mt-5"> <h2>Todos <a class="float-right" [routerLink]="['/todo/add']"> <button type="button" class="btn btn-primary">Add</button> </a> </h2> <table class="table table-bordered mt-5"> <thead> <tr> <th>ID</th> <th>Title</th> <th>Action</th> </tr> </thead> <tbody> <tr * <td width="20">{{item.id}}</td> <td>{{item.title}}</td> <td width="250"> <button type="button" class="btn btn-danger mr-1" (click)="deleteTodo(item.id, i)">Delete</button> <a [routerLink]="['/todo/edit/', item.id]"> <button type="button" class="btn btn-primary">Edit</button> </a> </td> </tr> </tbody> </table> </div> Open src/app/todo.component.ts and put the below code import { Component, OnInit } from '@angular/core'; import { ApiService } from '../api.service'; import { Todo } from '../todo'; @Component({ selector: 'app-todo', templateUrl: './todo.component.html', styleUrls: ['./todo.component.css'] }) export class TodoComponent implements OnInit { data: Todo[] = []; constructor(private api: ApiService) { } ngOnInit() { this.api.getTodos() .subscribe(res => { this.data = res; }, err => { console.log(err); }); } deleteTodo(id, index) { this.api.deleteTodo(id) .subscribe(res => { this.data.splice(index,1); }, (err) => { console.log(err); } ); } } Open src/app/todo-add.component.html and put the below html <div class="row mt-5"> <div class="col-md-6 mx-auto"> <h2 class="text-center">Add Todo</h2> <div class="card mt-3"> <div class="card-body"> <form [formGroup]="todoForm" (ngSubmit)="addTodo()"> <div class="form-group"> <label class="col-md-4">Title </label> <input type="text" class="form-control" formControlName="title" /> </div> <div class="form-group"> <button type="submit" class="btn btn-primary col-md-4" [disabled]="todoForm.invalid">Add</button> <a [routerLink]="['/']"> <button type="submit" class="btn btn-primary col-md-4 ml-1">Back</button> </a> </div> </form> </div> </div> </div> </div> Open src/app/todo-add.component.ts and put the below code import { Component, OnInit } from '@angular/core'; import {FormBuilder, FormGroup, Validators} from "@angular/forms"; import { ApiService } from '../api.service'; import {Router} from "@angular/router"; @Component({ selector: 'app-todo-add', templateUrl: './todo-add.component.html', styleUrls: ['./todo-add.component.css'] }) export class TodoAddComponent implements OnInit { todoForm: FormGroup; constructor(private formBuilder: FormBuilder, private router: Router, private api: ApiService) { } ngOnInit() { this.todoForm = this.formBuilder.group({ title: ['', Validators.compose([Validators.required])], }); } addTodo() { const payload = { title: this.todoForm.controls.title.value, }; this.api.addTodo(payload) .subscribe(res => { let id = res['_id']; this.router.navigate(['/']); }, (err) => { console.log(err); }); } } Open src/app/todo-edit.component.html and put the below html <div class="row mt-5"> <div class="col-md-6 mx-auto"> <h2 class="text-center">Update Todo</h2> <div class="card mt-3"> <div class="card-body"> <form [formGroup]="todoForm" (ngSubmit)="updateTodo(todoForm.value)"> <div class="form-group"> <label class="col-md-4">Title </label> <input type="text" class="form-control" formControlName="title" /> </div> <div class="form-group"> <button type="submit" class="btn btn-primary col-md-4" [disabled]="todoForm.invalid">Update</button> <a [routerLink]="['/']"> <button type="submit" class="btn btn-primary col-md-4 ml-1">Back</button> </a> </div> </form> </div> </div> </div> </div> Open 
src/app/todo-edit.component.ts and put the below code Configure the HttpClientModuleConfigure the HttpClientModule import { Component, OnInit } from '@angular/core'; import {FormBuilder, FormGroup, Validators, NgForm} from "@angular/forms"; import { ApiService } from '../api.service'; import { ActivatedRoute, Router } from '@angular/router'; import { Todo } from '../todo'; @Component({ selector: 'app-todo-edit', templateUrl: './todo-edit.component.html', styleUrls: ['./todo-edit.component.css'] }) export class TodoEditComponent implements OnInit { todoForm: FormGroup; id:number= null; constructor( private formBuilder: FormBuilder, private activeAouter: ActivatedRoute, private router: Router, private api: ApiService ) { } ngOnInit() { this.getDetail(this.activeAouter.snapshot.params['id']); this.todoForm = this.formBuilder.group({ title: ['', Validators.compose([Validators.required])], }); } getDetail(id) { this.api.getTodo(id) .subscribe(data => { this.id = data.id; this.todoForm.setValue({ title: data.title }); console.log(data); }); } updateTodo(form:NgForm) { this.api.updateTodo(this.id, form) .subscribe(res => { this.router.navigate(['/']); }, (err) => { console.log(err); } ); } } We need HttpClientModule to access RESTful API. So before creating a service, first, Open src/app/app.module.ts then add this import. import { HttpClientModule } from '@angular/common/http'; Add HttpClientModule to imports array under @NgModule Create Angular 8 Service for Accessing RESTful APICreate Angular 8 Service for Accessing RESTful API imports: [ HttpClientModule ] Generate an Angular 8 service for Accessing RESTful API by typing this command. ng g service api Next, open and edit src/app/api.service.ts then add the below function. getTodos (): Observable<Todo[]> { return this.http.get<Todo[]>(apiUrl, httpOptions) .pipe( tap(heroes => console.log('fetched todos')), catchError(this.handleError('getTodos', [])) ); } getTodo(id: number): Observable<Todo> { const url = `${apiUrl}?id=${id}`; return this.http.get<Todo>(url).pipe( tap(_ => console.log(`fetched todo id=${id}`)), catchError(this.handleError<Todo>(`getTodo id=${id}`)) ); } addTodo (todo): Observable<Todo> { return this.http.post<Todo>(`${apiUrl}/create.php`, todo, httpOptions).pipe( tap((todo: Todo) => console.log(`added todo w/ id=${todo.id}`)), catchError(this.handleError<Todo>('addTodo')) ); } updateTodo (id, todo): Observable<any> { const url = `${apiUrl}/update.php?id=${id}`; return this.http.put(url, todo, httpOptions).pipe( tap(_ => console.log(`updated todo id=${id}`)), catchError(this.handleError<any>('updateTodo')) ); } deleteTodo (id): Observable<Todo> { const url = `${apiUrl}/delete.php?id=${id}`; return this.http.delete<Todo>(url, httpOptions).pipe( tap(_ => console.log(`deleted todo id=${id}`)), catchError(this.handleError<Todo>('deletetodo')) ); } private handleError<T> (operation = 'operation', result?: T) { return (error: any): Observable<T> => { // TODO: send the error to remote logging infrastructure console.error(error); // log to console instead // Let the app keep running by returning an empty result. return of(result as T); }; } Run the below command, will run the Angular 8 Web Application and open the Application in browser by it self ng serve -o In this Angular Tutorial, you'll learn Angular from scratch and go from beginner to advanced in Angular. In this Angular crash course you will learn from scratch. We will assume that you are a complete beginner and by the end of the course you will be at advanced level. 
This course contain Real-World examples and Hands On practicals.Complete Angular Course: Go From Zero To Hero Welcome to this course "Complete Angular Crash Course: Learn Angular from Scratch and Go from. What you'll learn In this JavaScript Handbook tutorial, you'll learn all you need to know about JavaScript JavaScript 😆: What do they mean? They are all referring to a standard, called ECMAScript. ECMAScript is the standard upon which JavaScript is based, and it’s often abbreviated to ES. Beside JavaScript, other languages implement(ed) ECMAScript, including:. The current ECMAScript version is ES2018. It was released in June 2018. Historically, JavaScript editions have been standardized during the summer, so we can expect ECMAScript 2019 to be released in summer 2019, but this is just speculation. TC39 is the committee that evolves JavaScript. The members of TC39 are companies involved in JavaScript and browser vendors, including Mozilla, Google, Facebook, Apple, Microsoft, Intel, PayPal, SalesForce and others. Every standard version proposal must go through various stages, which are explained here.The: letand const I’ll cover each of them in a dedicated section here in this guide. So let’s get started.. _this_scope The this scope with arrow functions is inherited from the context. With regular functions, this always refers to the nearest function, while with arrow functions this problem is removed, and you won't need to write var that = this ever again.: Here is an example of a generator which explains how it all works. function *calculator(input) { var doubleThat = 2 * (yield (input / 2)) var another = yield (doubleThat) return (input * doubleThat * another) } We initialize it with const calc = calculator(10) Then we start the iterator on our generator: calc.next() This first iteration starts the iterator. The code returns this object: { done: false value: 5 } What happens is: the code runs the function, with input = 10as it was passed in the generator constructor. It runs until it reaches the yield, and returns the content of yield: input / 2 = 5. So we get { done: true value: 14000 } As the iteration is done (no more yield keywords found), we just return (input * doubleThat * another) which amounts to 10 * 14 * 100. let. Classes have a special method called constructor which is called when a class is initialized via new. The parent class can be referenced using super(). A getter for a property can be declared as class Person { get fullName() { return `${this.firstName} ${this.lastName}` } } Setters are written in the same way: class Person { set age(years) { this.theAge = years } } Before ES2015, there were at least 3 major competing module standards, which fragmented the community: ES2015 standardized these into a common format. Importing is done via the import ... from ... construct: import * from 'mymodule' import React from 'react' import { React, Component } from 'react' import React as MyLibrary from 'react' You can write modules and export anything to other modules using the export keyword: export var foo = 2 export function bar() { /* ... */ } = `Hey thisstring is awesome!` Compare how we used to do multiline strings pre-ES2015: var str = 'One\n' + 'Two\n' + 'Three' Functions now support default parameters: const foo = function(index = 0, testing = true) { /* ... */ } foo() You can expand an array, an object or a string using the spread operator .... Let’s start with an array example. Given the following:. In ES2015 Object Literals gained superpowers. 
Instead of doing const something = 'y' const x = { something: something } you can do const something = 'y' const x = { something } A prototype can be specified with const anObject = { y: 'y' } const x = { __proto__: anObject } const anObject = { y: 'y', test: () => 'zoo' } const x = { __proto__: anObject, test() { return super.test() + 'x' } } x.test() //zoox const x = { ['a' + '_' + 'b']: 'z' } x.a_b //z:') } The exponentiation operator ** is the equivalent of Math.pow(), but brought into the language instead of being a library function. Math.pow(4, 2) == 4 ** 2 This feature is a nice addition for Math intensive JavaScript applications. The ** operator is standardized across many languages including Python, Ruby, MATLAB, Lua, Perl and many others. ECMAScript 2017, edition 8 of the ECMA-262 Standard (also commonly called ES2017 or ES8), was finalized in June 2017. Compared to ES6, ES8 is a tiny release for JavaScript, but still it introduces very useful features: The purpose of string padding is to add characters to a string, so it reaches a specific length. ES2017 introduces two String methods: padStart() and padEnd(). padStart(targetLength [, padString]) padEnd(targetLength [, padString]) Sample usage: This method returns an array containing all the object own property values. Usage: const person = { name: 'Fred', age: 87 } Object.values(person) // ['Fred', 87] Object.values() also works with arrays: const people = ['Fred', 'Tony'] Object.values(people) // ['Fred', 'Tony']']] This method returns all own (non-inherited) property descriptors of an object. Any object in JavaScript has a set of properties, and each of these properties has a descriptor. A descriptor is a set of attributes of a property, and it’s composed by a subset of the following: Object.getOwnPropertyDescriptors(obj) accepts an object, and returns an object with the set of descriptors. ES2015 gave us Object.assign(), which copies all enumerable own properties from one or more objects, and returns a new object. However there is a problem with that, because it does not correctly copy properties with non-default attributes. If an object for example just has, as it was not copied over. The same limitation goes for shallow cloning objects with Object.create(). This feature allows to have trailing commas in function declarations, and in functions calls: const doSomething = (var1, var2,) => { //... }doSomething('test2', 'test2',) This change will encourage developers to stop the ugly “comma at the start of the line” habit.. It’s a higher level abstraction over of their own, and syntax complexity. They were good primitives around which a better syntax could be exposed to the developers: enter async functions. Code making use of asynchronous After I did something //after 3s) })? ES6 introduced the concept of a rest element when working with array destructuring: const numbers = [1, 2, 3, 4, 5] [first, second, ...others] = numbers and spread elements: <pre class="gk gl gm gn go jv jw cm"><span id="9561" class="hx hy ef at jp b fa jx jy r jz">const numbers = [1, 2, 3, 4, 5] const sum = (a, b, c, d, e) => a + b + c + d + e const sum = sum(...numbers)</span></pre ES2018 introduces } The new construct for-await-of allows you to use an async iterable object as the loop iteration: for await (const line of readLines(filePath)) { console.log(line) } Since this uses await, you can use it only inside async functions, like a normal await (see async/await)')) RegExp lookbehind assertions: match a string depending on what precedes it. 
This is a lookahead:aheads use the ?= symbol. They were already available. Lookbehinds, a new feature, ('[email protected]') //✅ /^\p{ASCII}+$/u.test('ABC('🙃🙃') /. In ES2018 a capturing group can be assigned to a name, rather than just being assigned a slot in the resulting array: const re = /(?<year>\d{4})-(?<month>\d{2})-(?<day>\d{2})/ const result = re.exec('2015-01-02')// result.groups.year === '2015'; // result.groups.month === '01'; // result.groups.day === '02'; 's’flag for regular expressions The s flag, short for single line, causes the . to match new line characters as well. Without it, the dot matches regular characters but not the new line: Coding StyleCoding Style /hi.welcome/.test('hi\nwelcome') // false /hi.welcome/s.test('hi\nwelcome') // true. Even if you prefer a set of styles, when working on a project you should use that project’s style. An Open Source project on GitHub might follow a set of rules, another project you work on with a team might follow an entirely different one. We always use the latest ES version. Use Babel if old browser support is necessary. var. Default to const, and only use letif you reassign the variable. thisworks. Declare them as const, and use implicit returns if possible. Feel free to use nested functions to hide helper functions to the rest of the code. const test = (a, b) => a + bconst another = a => a + 2 -. if if (condition) { statements }if (condition) { statements } else { statements }if (condition) { statements } else if (condition) { statements } else { statements } for: Always initialize the length in the initialization to cache it, don’t insert it in the condition. Avoid using for in except with used in conjunction with .hasOwnProperty(). Prefer for of: for (initialization; condition; update) { statements } while while (condition) { statements } do do { statements } while (condition); switch switch (expression) { case expression: statements default: statements } try try { statements } catch (variable) { statements }try { statements } catch (variable) { statements } finally { statements } (; before & after a binary operation ( +, -, /, *, &&..); inside the for statement, after each ;to separate each part of the statement; after each ,. 'instead of double quotes ". Double quotes are a standard in HTML attributes, so using single quotes helps remove problems when dealing with HTML strings. Use template literals when appropriate instead of variable interpolation. Now we’ll take a deep dive into the building blocks of JavaScript: unicode, semicolons, white space, case sensitivity, comments, literals, identifiers and reserved words JavaScript is written in Unicode. This means you can use Emojis as variable names. 😃 😧 😲 But more importantly, you can write identifiers in any language, for example Japanese or Chinese, with some rules. variable. We define as literal, an object. It can start with a letter, the dollar sign $ or an underscore _, and it can contain digits. Using Unicode, a letter can be any allowed char, for example an emoji 😄. Test test TEST _test Test1 $test The dollar sign is commonly used to reference DOM elements. You can’t use as identifiers any of the following words because they are reserved by the language. 
VariablesVariables break do instanceof typeof case else new var catch finally return void continue for switch while debugger function this with default if throw delete in try class enum extends super const export import implements let private public interface package protected static yield A variable is a literal assigned to an identifier, so you can reference and use it later in the program. We’ll learn how to declare one with JavaScript. referenced as “untyped”. A variable must be declared before you can use it. There are 3 ways to do it: using var, let or const. Those 3 ways differ in how you can interact with the variable later on. var Until ES2015, var was the only construct available for defining variables. var a = 0 If you forget to add var you will be assigning a value to an undeclared variable, and the results could into a function with the same name_seems an obscure term, just read _let color = 'red'_as let the color be red and all has much more sense. Defining let outside of any function - contrary to var - does not create a global variable. const Variables declared with var or let can be changed later on in the program, and reassigned. A. TypesTypes Why? Because we should always use the simplest construct available to avoid making errors down the road. You might sometimes read that JS is untyped, but that’s incorrect. It’s true that you can assign all sorts of different types to a variable, but JavaScript has types. In particular, it provides primitive types, and object types. Primitive types are And two special types: Let’s see them in detail in the next sections. Internally, JavaScript has just one type for numbers: every number is a float. A numeric literal is a number represented in the source code, amd depending on how it’s written, it can be an integer literal or a floating point literal. Integers: 10 5354576767321 0xCC //hex Floats: 3.14 .1234 5.2e4 //5.2 * 10^4 with ${something}` JavaScript defines two reserved words for booleans: true and false. Many comparision operations == === < > (and so on) return either one or the other. if, while statements and other control structures use booleans to determine the flow of the program. They don’t just accept true or false, but also accept truthy and falsy values. Falsy values, values interpreted as false, are 0 -0 NaN undefined null '' //empty string All the rest is considered a truthy value. null is a special value that indicates the absence of a value. It’s a common concept in other languages as well, can be known as nil or None in Python for example.. Under this category go all expressions that evaluate to a number: 1 / 2 i++ i -= 2 i * 2 Expressions that evaluate to a string: 'A ' + 'string' Logical expressionsLogical expressions [] //array literal {} //object literal [1,2,3] {a: 1, b: 2} {a: {b: 1}} Logical expressions make use of logical operators and resolve to a boolean value: a && b a || b !a new //create an instance of a constructor super //calls the parent constructor ...obj //expression using the spread operator object.property //reference a property (or method) of an object object[property] object['property'] new object() new a(1) new MyRectangle('name', 2, {a: 4}) function() {} function(a, b) { return a * b } (a, b) => a * b a => a * 2 () => { return 2 } The syntax for calling a function or method Prototypal InheritancePrototypal Inheritance a.x(2) window.resize() JavaScript is quite unique in the popular programming languages landscape because of its use of prototypal inheritance. Let’s find out what that means. 
While most object-oriented languages use a class-based inheritance model, JavaScript is based on the prototype inheritance model. What does this mean? const list = new Array() The prototype is Array. You can verify this by checking the Object.getPrototypeOf() and the Object.prototype.isPrototypeOf() methods: const car = {}const list = []Object.getPrototypeOf(car) === Object.prototypeObject.prototype.isPrototypeOf(car)Object.getPrototypeOf(list) === Array.prototypeArray is still the same, and you can access an object prototype in the usual way.(). Normally methods are defined on the instance, not on the class. Static methods are executed on the class instead: class Person { static genericHello() { return 'Hello' } }Person.genericHello() //Hello JavaScript does not have a built-in way to define private or protected methods. There are workarounds, but I won’t describe them here. You can add methods prefixed with get or set to create a getter and setter, which are two different pieces of code that are execute: ExceptionsExceptions class Person { constructor(name) { this._name = name } set name(value) { this._name = value } } When the code runs into an unexpected problem, the idiomatic JavaScript way to handle this situation is through. JavaScript semicolons are optional. I personally like to avoid using semicolons in my code, but many people prefer them. Semicol JavaScript parser will automatically add a semicolon when, during the parsing of the source code, it finds these particular situations: }, closing the current block returnstatement on its own line breakstatement on its own line throwstatement on its own line continuestatement on its own line a piece of code: (1 + 2).toString() prints "3". const a = 1 const b = 2 const c = a + b (a + b).toString() Instead, the above). Pick some rules: returnstatements. If you return something, add it on the same line as the return (same for break, throw, continue) on multiple lines` Not just that. You can interpolate variables using the ${} syntax: const multilineString = `A string allow, in particular: Let’s dive into each of these in detail. Pre-ES6, to create a string spanned() Template literals provide an easy way to interpolate variables and expressions into strings. You do so by using the ${...} syntax: const var = 'test' const string = `something ${var}` //something test Inside the ${} you can add anything, even expressions: const string = `something ${1 + 2 + 3}` const string2 = `something ${foo() ? 'x' : 'y' }` Tagged templates is one feature that might sound less useful at first for you, but it’s actually used by lots of popular libraries around, like Styled Components, the GraphQL client/server library, so it’s essential to understand how it works. In Styled Components template tags are used to define CSS strings: const Button = styled.button` font-size: 1.5em; background-color: black; color: white; `; In Apollo template tags are used to define a GraphQL query schema: const query = gql` query { ... } ` The styled.button and gql template tags highlighted in those examples are just functions: function gql(literals, ...expressions) { } This function returns a string, which can be the result of any kind of computation. literals is an array containing the template literal content tokenized by the expressions interpolations. expressions contains all the interpolations. If we take the example above: const string = `something ${1 + 2 + 3}` literals is an array with two items. 
The first is something, the string until the first interpolation, and the second is an empty string, the space between the end of the first interpolation (we only have one) and the end of the string. expressions in this case is an array with a single item, 6. A more complex example is: const string = `something another ${'x'} new line ${1 + 2 + 3} test` In this case literals is an array where the first item is: `something another `: JavaScript FunctionsJavaScript Functions function interpolate(literals, ...expressions) { let string = `` for (const [i, val] of expressions.entries()) { string += literals[i] + val } string += literals[literals.length - 1] return string } as a regular function Functions can be assigned to variables (this is called a function expression): const dosomething = function(foo) { // do something } Named function expressions are similar, but play nicer with the stack call trace, which is useful when an error occurs - it holds the name of the function: const dosomething = function dosomething(foo) { // do something } ES6/ES2015 introduced arrow functions, which are especially nice to use when working with inline functions, as parameters or callbacks: const dosomething = foo => { //do something } Arrow functions have an important difference from the other function definitions above, we’ll see which one later as it’s an advanced topic. A function can have one or more parameters. const dosomething = () => { //do something }const dosomethingElse = foo => { //do something }const dosomethingElseAgain = (foo, bar) => { //do something } Starting with ES6/ES2015, functions can have default values for the parameters: const dosomething = (foo = 1, bar = 'hey') => { //do something } This allows you to call a function without filling all the parameters: dosomething(3) dosomething() ES2018 introduced trailing commas for parameters, a feature that helps reducing bugs due to missing commas when moving around parameters (e.g. moving the last in the middle): const dosomething = (foo = 1, bar = 'hey') => { //do something }dosomething(2, 'ho!') You can wrap all your arguments in an array, and use the spread operator when calling the function: const dosomething = (foo = 1, bar = 'hey') => { //do something } const args = [2, 'ho!'] dosomething(...args) With many parameters, remembering the order can be difficult. Using objects, destructuring allows to keep the parameter names: Return valuesReturn values const dosomething = ({ foo = 1, bar = 'hey' }) => { //do something console.log(foo) // 2 console.log(bar) // 'ho!' } const args = { foo: 2, bar: 'ho!' } dosomething(args) Every function returns a value, which by default is undefined. Any function is terminated when its lines of code end, or when the execution flow finds a return keyword. When JavaScript encounters this keyword it exits the function execution and gives control back to its caller. If you pass a value, that value is returned as the result of the function: const dosomething = () => { return 'test' } const result = dosomething() // result === 'test' You can only return one value. To simulate returning multiple values, you can return an object literal, or an array, and use a destructuring assignment when calling the function. Using arrays: Using objects: Functions can be defined inside other functions: const dosomething = () => { const dosomethingelse = () => {} dosomethingelse() return 'test' } The nested function is scoped to the outside function, and cannot be called from the outside. 
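Going back for a moment to returning multiple values, here is a minimal sketch of the two forms mentioned above, destructuring an array and an object returned from a function (the names and values are just illustrative):

const getCoordinates = () => [3, 7]
const [x, y] = getCoordinates() // x === 3, y === 7

const getPerson = () => ({ firstName: 'Ada', age: 36 })
const { firstName, age } = getPerson() // firstName === 'Ada', age === 36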
When used as object properties, functions are called methods: const car = { brand: 'Ford', model: 'Fiesta', start: function() { console.log(`Started`) } }car.start() "this"in arrow functions There’s an important behavior of Arrow Functions vs regular Functions when used as object methods. Consider this example: const car = { brand: 'Ford', model: 'Fiesta', start: function() { console.log(`Started ${this.brand} ${this.model}`) }, stop: () => { console.log(`Stopped ${this.brand} ${this.model}`) } } The stop() method does not work as you would expect.). An IIFE is a function that’s immediately executed right after its declaration: ;(function dosomething() { console.log('executed') })() You can assign the result to a variable: const something = (function dosomething() { return 'something' })() They are very handy, as you don’t need to separately call the function after its definition. JavaScript before executing your code reorders it according to some rules. Functions in particular are moved at the top of their scope. This is why it’s legal to write dosomething() function dosomething() { console.log('did something') } Internally, JavaScript moves the function before its call, along with all the other functions found in the same scope: function dosomething() { console.log('did something') } dosomething() Now, if you use named function expressions, since you’re using variables something different happens. The variable declaration is hoisted, but not the value, so not the function. dosomething() const dosomething = function dosomething() { console.log('did something') } Not going to work: This is because what happens internally is: const dosomething dos. Arrow functions allow you to have an implicit return: values are returned without having to use the return keyword. It works when there is a on-line statement in the function body: const myFunction = () => 'test' myFunction() //'test' Another example, returning an object (remember to wrap the curly brackets in parentheses to avoid it being considered the wrapping function body brackets): const myFunction = () => ({value: 'test'}) myFunction() //{value: 'test'} thisworks as well, when instantiating an object. It: ClosuresClosures const link = document.querySelector('#link') link.addEventListener('click', () => { // this === window })const link = document.querySelector('#link') link.addEventListener('click', function() { // this === link }) Here’s a gentle introduction to the topic of closures, which are key to understanding how JavaScript functions work. If you’ve ever written a function in JavaScript, you’ve already made use of closures. It’s a key topic to understand, which has implications for the things you can do. When a function is run, it’s executed with the scope that was in place when it was defined, and not with the state that’s in place when it is executed. The scope basically is the set of variables which are visible. A function remembers its Lexical Scope, and it’s able to access variables that were defined in the parent scope. In short, a function has an entire baggage of variables it can access. Let me immediately give an example to clarify this. const bark = dog => { const say = `${dog} barked!` ;(() => console.log(say))() }bark(`Roger`) This logs to the console Roger barked!, as expected. What if you want to return the action instead: const prepareBark = dog => { const say = `${dog} barked!` return () => console.log(say) }const bark = prepareBark(`Roger`)bark() This snippet also logs to the console Roger barked!. 
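The same mechanism lets a returned function keep private, mutable state between calls. A tiny sketch, a counter factory, where each counter closes over its own count variable:

const makeCounter = () => {
  let count = 0
  return () => {
    count = count + 1
    return count
  }
}

const counter = makeCounter()
counter() // 1
counter() // 2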
Let’s make one last example, which reuses prepareBark for two different dogs: const prepareBark = dog => { const say = `${dog} barked!` return () => { console.log(say) } }const rogerBark = prepareBark(`Roger`) const sydBark = prepareBark(`Syd`)rogerBark() sydBark() This prints Roger barked! Syd barked! As you can see, the state of the variable say is linked to the function that's returned from prepareBark(). Also notice that we redefine a new say variable the second time we call prepareBark(), but that does not affect the state of the first prepareBark() scope. This is how a closure works: the function that’s returned keeps the original state in its scope.Arrays JavaScript arrays over time got more and more features, and sometimes it’s tricky to know when to use some construct vs another. This section aims to explain what you should use, as of 2018. const a = [] const a = [1, 2, 3] const a = Array.of(1, 2, 3) const a = Array(6).fill(1) //init an array of 6 items of value 1 Don’t use the old syntax (just use it for typed arrays) const a = new Array() //never use const a = new Array(1, 2, 3) //never use const l = a.length every a.every(f) Iterates a until f() returns false some a.some(f) Iterates a until f() returns true const b = a.map(f) Iterates a and builds a new array with the result of executing f() on each a element const b = a.filter(f) Iterates a and builds a new array with elements of a that returned true when executing f() on each a element ES6 a.forEach(f) Iterates f on a without a way to stop Example: for..offor..of a.forEach(v => { console.log(v) }) ES6 forfor for (let v of a) { console.log(v) } for (let i = 0; i < a.length; i += 1) { //a[i] } Iterates a, can be stopped using return or break and an iteration can be skipped using continue ES6 Getting the iterator from an array returns an iterator of values const a = [1, 2, 3] let it = a[Symbol.iterator]()console.log(it.next().value) //1 console.log(it.next().value) //2 console console.log(it.next().value) //1 console.log(it.next().value) //2 .next() returns undefined when the array ends. You can also detect if the iteration ended by looking at it.next() which returns a value, done pair. done is always false until the last element, which returns true. a.push(4) a.unshift(0) a.unshift(-2, -1) From the end a.pop() From the beginning a.shift() At a random position a.splice(0, 2) // get the first 2 items a.splice(3, 2) // get the 2 items starting from index 3 Do not use remove() as it leaves behind undefined values. a.splice(2, 3, 2, 'a', 'b') //removes 3 items starting from //index 2, and adds 2 items, // still starting from index 2 const a = [1, 2] const b = [3, 4] a.concat(b) // 1, 2, 3, 4 a.indexOf() Returns the index of the first matching item found, or -1 if not found a.lastIndexOf() Returns the index of the last matching item found, or -1 if not foundES }) a.includes(value) Returns true if a contains value. a.includes(value, i) Returns true if a contains value after the position i. Sort the arraySort the array a.slice()() a.toString() Returns a string representation of an array a.join() Returns a string concatenation of the array elements. 
Pass a parameter to add a custom separator: a.join(', ') const b = Array.from(a) const b = Array.of(...a) const b = Array.from(a, x => x % 2 == 0) LoopsLoops const a = [1, 2, 3, 4] a.copyWithin(0, 2) // [3, 4, 3, 4] const b = [1, 2, 3, 4, 5] b.copyWithin(0, 2) // [3, 4, 5, 4, 5] //0 is where to start copying into, // 2 is where to start copying from const c = [1, 2, 3, 4, 5] c.copyWithin(0, 2, 4) // [3, 4, 3, 4, 5] //4 is an end index JavaScript provides many ways to iterate through loops. This section explains all the various loop possibilities in modern JavaScript with a small example and the main properties. for const list = ['a', 'b', 'c'] for (let i = 0; i < list.length; i++) { console.log(list[i]) //value console.log(i) //index } forloop using break forloop using continue Introduced in ES5. Given an array, you can iterate over its properties using list.forEach(): const list = ['a', 'b', 'c'] list.forEach((item, index) => { console.log(item) //value console.log(index) //index })//index is optional list.forEach(item => console.log(item)) Unfortunately you cannot break out of this loop. do...while const list = ['a', 'b', 'c'] let i = 0 do { const list = ['a', 'b', 'c'] let i = 0 while Iterates all the enumerable properties of an object, giving the property names. for (let property in object) { console.log(property) //property name console.log(object[property]) //property value } for...of ES2015 introduced the for...of loop, which combines the conciseness of forEach with the ability to break: //iterate over the value for ...invs for...of The difference with for...in is: for...ofiterates over the property values for...initerates the property names: This style of event handlers is very rarely used today, due to its constrains, but it was the only way in the early days of JavaScript: <a href="site.com" onclick="dosomething();">A link</a>() }) Here’s a list of the most common events you will likely handle. load is fired on window and the body element when the page has finished loading.) keydown fires when a keyboard button is pressed (and any time the key repeats while the button stays pressed). keyup is fired when the key is released.: The Event LoopThe Event Loop let cached = null window.addEventListener('scroll', event => { if (!cached) { setTimeout(() => { //you can access the original event at `cached` cached = null }, 100) } cached = event }) The Event Loop is one of the most important aspects to understand about JavaScript.., filesystem.Queuing function execution(). At this point the call stack looks like this: Here is the execution order for all the functions in our program: Why is this happening?: baz, before bar bar That’s a big difference between Promises (and Async/await, which is built on promises) and plain old asynchronous functions through setTimeout() or other platform APIs. JavaScript is synchronous by default, and is single threaded. This means that code cannot create new threads and run in parallel. Let’s learn what asynchronous code means and how it looks.. Promises are one way to deal with asynchronous code, without writing too many callbacks in your code. Although being around since the function continues its execution while the promise does itsconst, andnew Promise((resolve, reject) => { reject('Error') }) .catch((err) => { console.error(err) }) If inside the catch() you raise an error, you can append a second catch() to handle it, and so on. 
new Promise((resolve, reject) => { throw new Error('Error') }) .catch((err) => { throw new Error('Error') }) .catch((err) => { console.error(err) }): Async and AwaitAsync and Await const first = new Promise((resolve, reject) => { setTimeout(resolve, 500, 'first') }) const second = new Promise((resolve, reject) => { setTimeout(resolve, 100, 'second') })Promise.race([first, second]).then((result) => { console.log(result) // second }) Now we’ll discover the modern approach to asynchronous functions in JavaScript. JavaScript evolved in a very short time from callbacks to Promises,: 0 1 2 3 4 but actually what happens is this: 5 5 5 5 5: 0 1 2 3 4: TimersTimers const operations = []for (var i = 0; i < 5; i++) { operations.push(((j) => { return () => console.log(j) })(i)) }for (const operation of operations) { operation() } When writing JavaScript code, you might want to delay the execution of a function. We’ll discuss how to use setTimeout and setInterval to schedule functions in the future. setTimeout() When writing JavaScript code, you might want to delay the execution of a function. This is the job of setTimeout. You specify a callback function to execute later, and a value expressing how execute(() => { if (App.somethingIWait === 'arrived') { clearInterval(interval) return } // available in Node.js, through the Timers module. Node.js also provides setImmediate(), which is equivalent to using setTimeout(() => {}, 0), mostly used to work with the Node.js Event Loop.! You cannot bind a value to an arrow function, like you do with normal functions. It’s simply not possible due to the way they work. this is lexically bound, which means its value is derived from the context where they are defined.. In event handlers callbacks, this refers to the HTML element that received the event: document.querySelector('#button').addEventListener('click', function(e) { console.log(this) //HTMLElement } You can bind it using Strict ModeStrict Mode document.querySelector('#button').addEventListener( 'click', function(e) { console.log(this) //Window if global, or your context }.bind(this) ) test.testing = true //true test.testing //undefined Strict mode fails in all those cases: ;(() => { 'use strict' true.false = ''( //TypeError: Cannot create property 'false' on boolean 'true' 1 ).name = 'xxx' //TypeError: Cannot create property 'name' on number '1' 'test'.testing = true //TypeError: Cannot create property 'testing' on string 'test' })() In sloppy mode, if you try to delete a property that you cannot delete, JavaScript simply returns false, while in Strict Mode, it raises a TypeError: delete Object.prototype( //false () => { 'use strict' delete Object.prototype //TypeError: Cannot delete property 'prototype' of function Object() { [native code] } } )() with Strict Mode disables the with keyword, to remove some edge cases and allow more optimization at the compiler level.() { /* */ }())(() => { /* */ }()) There is some weirder syntax that you can use to create an IIFE, but it’s very rarely used in the real world, and it relies on using any unary operator: ;-(function() { /* */ })() +(function() { /* */ })()~(function() { /* */ })()!(function() { /* */ })() (does not work with arrow functions) An IIFE can also be named regular functions (not arrow functions). This does not change the fact that the function does not “leak” to the global scope, and it cannot be invoked again after its execution: ;(function doSomething() { /* */ })(). 
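A common practical use of an IIFE is to pair it with a closure so that state stays private and only a small surface is exposed, a pattern often called the module pattern. A minimal sketch (the names are illustrative):

const counterModule = (function() {
  let count = 0 // private: never leaks to the global scope
  return {
    increment: () => ++count,
    current: () => count
  }
})()

counterModule.increment() // 1
counterModule.current() // 1
// counterModule.count is undefined, the variable is only reachable through the closure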
Addition (+) const three = 1 + 2 const four = three + 1 The + operator also serves as string concatenation if you use strings, so pay attention: const three = 1 + 2 three + 1 // 4 'three' + 1 // three1 Subtraction (-) const two = 4 - 2 Division (/)). 1 / 0 //Infinity -1 / 0 //-Infinity Remainder (%) The remainder is a very useful calculation in many use cases: const result = 20 % 5 //result === 0 const result = 20 % 7 //result === 6 A reminder by zero is always NaN, a special value that means "Not a Number": 1 % 0 //NaN -1 % 0 //NaN Multiplication (*) 1 * 2 //2 -1 * 2 //-2 Exponentiation (**) Raise the first operand to the power second operand Unary operatorsUnary operators 1 ** 2 //1 2 ** 1 //2 2 ** 2 //4 2 ** 8 //256 8 ** 2 //64 Increment (++) Increment a number. This is a unary operator, and if put before the number, it returns the value incremented. If put after the number, it returns the original value, then increments it. let x = 0 x++ //0 x //1 ++x //2 Decrement ( — ) Works like the increment operator, except it decrements the value. let x = 0 x-- //0 x //-1 --x //-2 Unary negation (-) Return the negation of the operand let x = 2 -x //-2 x //2 Unary plus (+) If the operand is not a number, it tries to convert it. Otherwise if the operand is already a number, it does nothing. Assignment shortcutsAssignment shortcuts let x = 2 +x //2x = '2' +x //2x = '2a' +x //NaN a += 5 //a === 5 a -= 2 //a === 3 a *= 2 //a === 6 a /= 2 //a === 3 a %= 2 //a === 1: The Math ObjectThe Math Object const a = 1 * 2 + 5 / 2 % 2 const a = 1 * 2 + 5 / 2 % 2 const a = 2 + 2.5 % 2 const a = 2 + 0.5 const a = 2.5 The Math object contains lots of utilities math-related. Let’s have a look at them all here. All those functions are static. Math cannot be instantiated. Returns the absolute value of a number Math.abs(2.5) //2.5 Math.abs(-2.5) //2.5 Returns the arccosine of the operand The operand must be between -1 and 1 Math.acos(0.8) //0.6435011087932843 Returns the arcsine of the operand The operand must be between -1 and 1 Math.asin(0.8) //0.9272952180016123 Returns the arctangent of the operand Math.atan(30) //1.5374753309166493 Returns the arctangent of the quotient of its arguments. 
Math.atan2(30, 20) //0.982793723247329 Rounds a number up Math.ceil(2.5) //3 Math.ceil(2) //2 Math.ceil(2.1) //3 Math.ceil(2.99999) //3 Return the cosine of an angle expressed in radiants Math.cos(0) //1 Math.cos(Math.PI) //-1 Return the value of Math.E multiplied per the exponent that’s passed as argument Math.exp(1) //2.718281828459045 Math.exp(2) //7.38905609893065 Math.exp(5) //148.4131591025766 Rounds a number down Math.ceil(2.5) //2 Math.ceil(2) //2 Math.ceil(2.1) //2 Math.ceil(2.99999) //2 Return the base e (natural) logarithm of a number Math.log(10) //2.302585092994046 Math.log(Math.E) //1 Return the highest number in the set of numbers passed Math.max(1,2,3,4,5) //5 Math.max(1) //1 Return the smallest number in the set of numbers passed Math.max(1,2,3,4,5) //1 Math.max(1) //1 Return the first argument raised to the second argument Math.pow(1, 2) //1 Math.pow(2, 1) //2 Math.pow(2, 2) //4 Math.pow(2, 4) //16 Returns a pseudorandom number between 0.0 and 1.0 Math.random() //0.9318168241227056 Math.random() //0.35268950194094395 Rounds a number to the nearest integer Math.round(1.2) //1 Math.round(1.6) //2 Calculates the sin of an angle expressed in radiants Math.sin(0) //0 Math.sin(Math.PI) //1.2246467991473532e-16) Return the square root of the argument Math.sqrt(4) //2 Math.sqrt(16) //4 Math.sqrt(5) //2.23606797749979 Calculates the tangent of an angle expressed in radiants ES ModulesES Modules Math.tan(0) //0 Math.tan(Math.PI) //-1.2246467991473532e-16 ES Modules is the ECMAScript standard for working with modules. While Node.js has been using the CommonJS standard for a long time, the browser never had a module system. syntax to import a module is: import package from 'module-name' while CommonJS uses const package = require('module-name') A module is a JavaScript file that exports one or more value special type="module" attribute: <script type="module" src="index.js"></script> check an ES Modules example on Modules are fetched using CORS. This means that if you reference scripts from other domains, they must have a valid CORS header that allows cross-site loading (like Access-Control-Allow-Origin: *) exports.b = 2 exports.c = 3 and import them individually using the destructuring assignment: const { a, b, c } = require('./uppercase.js') or just export one value using: //file.js module.exports = value and import it using GlossaryGlossary const value = require('./file.js') To end with, a guide to a few terms used in frontend development that might be alien to you. In JavaScript a block is delimited curly braces ( {}). An if statement contains a block, a for loop contains a block. With Function Scoping, any variable defined in a block is visible and accessible from inside the whole block, but not outside of it. A callback is a function that’s invoked when something happens. A click event associated to an element has a callback function that’s invoked when the user clicks the element. A fetch request has a callback that’s called when the resource is downloaded.. With Function Scoping, any variable defined in a function is visible and accessible from inside the whole function. A variable is immutable when its value cannot change after it’s created. A mutable variable can be changed. The same applies to objects and arrays. Lexical Scoping is a particular kind of scoping where variables of a parent function are made available to inner functions as well. The scope of an inner function also includes the scope of a parent function. 
A polyfill is a way to provide new functionality available in modern JavaScript or a modern browser API to older browsers. A polyfill is a particular kind of shim. A function that has no side effects (does not modify external resources), and its output is only determined by the arguments. You could call this function 1M times, and given the same set of arguments, the output will always be the same. JavaScript with var and let declaration allows you to reassign a variable indefinitely. With constdeclarations you effectively declare an immutable value for strings, integers, booleans, and an object that cannot be reassigned (but you can still modify it through its methods). Scope is the set of variables that’s visible to a part of the program. Scoping is the set of rules that’s defined in a programming language to determine the value of a variable. A shim is a little wrapper around a functionality, or API. It’s generally used to abstract something, pre-fill parameters or add a polyfill for browsers that do not support some functionality. You can consider it like a compatibility layer. A side effect is when a function interacts with some other function or object outside it. Interaction with the network or the file system, or with the UI, are all side effects. State usually comes into play when talking about Components. A component can be stateful if it manages its own data, or stateless if it doesn’t. A stateful component, function or class manages its own state (data). It could store an array, a counter or anything else. A stateless component, function or class is also called dumb because it’s incapable of having its own data to make decisions, so its output or presentation is entirely based on its arguments. This implies that pure functions are stateless.! Originally published by Flavio Copes at
https://morioh.com/p/9d5d5c475b95
CC-MAIN-2020-10
en
refinedweb
A Python package that provides functionality to interface with the Confluent Schema Registry Project description version number: 0.0.1 author: Matthijs van der Kroon Overview A Python package that provides Avro serialisation and deserialisation compatible with the Confluent Schema Registry. WARNING: python2.7 not supported Installation / Usage To install use pip: $ pip install primed_avro Or clone the repo: $ git clone $ python setup.py install Contributing TBD Example from primed_avro.writer import Writer from primed_avro.registry import ConfluentSchemaRegistryClient csr = ConfluentSchemaRegistryClient(url="") schemaMeta = csr.get_schema(subject=topic) writer = Writer(schema=schemaMeta.schema)
https://pypi.org/project/primed-avro/0.0.1b1/
CC-MAIN-2020-10
en
refinedweb
For each of your deployments, Deployment Manager creates pre-defined environment variables that contain information inferred from your deployment. Use these environment variables in your Python or Jinja2 templates to get information about your project or deployment. Available environment variables The following environment variables are automatically set by Deployment Manager. They are replaced everywhere you use them in your templates. For example, use the project_number variable to add the project number to the name of a service account. Using an environment variable Use the following syntax to add an environment variable to your templates: {{ env["deployment"] }} # Jinja context.env["deployment"] # Python In your template, use the variables as in these examples: Jinja - type: compute.v1.instance name: vm-{{ env["deployment"] }} properties: machineType: zones/us-central1-a/machineTypes/f1-micro serviceAccounts: - email: {{ env['project_number'] }}-compute@developer.gserviceaccount.com scopes: - ... Python def GenerateConfig(context): resources = [] resources.append ({ 'name': 'vm-' + context.env["deployment"], 'type': 'compute.v1.instance', 'properties': { 'serviceAccounts': [{ 'email': context.env['project_number'] + '-compute@developer.gserviceaccount.com', 'scopes': [...] }] } ...}] return {'resources': resources} What's next - Add a template permanently to your project as a composite type. - Host templates externally to share with others. - Add schemas to ensure users interact with your templates correctly.
https://cloud.google.com/deployment-manager/docs/configuration/templates/use-environment-variables?hl=no
CC-MAIN-2020-10
en
refinedweb
When a user clicks on a button or link on a Web page, there can be a delay between posting to the server and the next action that happens on the screen. The problem with this delay is that the user may not know that they already clicked on the button and they might t hit the button again. It’s important to give immediate feedback to the user so they know that the application is doing something. This article presents a few different methods of providing the user with immediate feedback prior to the post-back. You’ll learn to disable the button and change the message on that button. You’ll see how to pop-up a message over the rest of the screen. You’ll also learn how to gray out the complete background of the page. And finally, you’ll see how to use spinning glyphs from Font Awesome to provide feedback to the user that something is happening. A Sample Input Screen Figure 1 shows a sample screen where the user fills in some data and then clicks a button to post that data to the server. I’m only going to use a single field for this screen, but the technique presented works on any input screen. When the user clicks on a button that posts data to the server, your job is to give the user some immediate feedback that something has happened so that they don’t click on the button (or any other button) again until the process has completed. There are many ways that you can provide feedback to the user that you’ve begun to process their request. Here’s a list of just some of the things you can do. - Redirect the user to another page with a message that their request is being processed. - Disable all the buttons on the screen so they can’t click anything else. - Hide all of the buttons on the screen so they can’t click any of them. - Change the text on the button they just clicked on. - Disable the button they just clicked on. - Pop-up a message over the screen. - Gray out the page they’re on. - Any combination of the above items. In this article, you’ll learn how to do a few of these items. To create the screen shown in Figure 1, use Visual Studio to create a new MVC application. Create a folder under \Views called \ProgressSamples. Create a new view in this folder called ProgressSample.cshtml. Create a new controller called ProgressSamplesController that can call this new page. The code to create this view is shown in Listing 1. A Class to Hold the Input For the sample screen, you’ll be entering a music genre such as Rock, Country, Jazz, etc. That means you need a class to use as a model for the input screen. The class shown in the following code snippet called MusicGenre will be used in this article for data binding on the form. public class MusicGenre { public MusicGenre() { GenreId = 0; Genre = string.Empty; } public int GenreId { get; set; } public string Genre { get; set; } } The Controller In the controller, you need two methods (as shown in Listing 2): one displays the screen and one handles posting data from that screen. The first method, ProgressSample, is very simple in that it creates an instance of the MusicGenre class, passes that to the ActionResult returned from this method, and then is passed on to the view. The second method handles the post-back from the page. To simulate a long-running process, just add a call to the Thread.Sleep() method and pass in 3000 to simulate a three-second operation. Return the model back from this method so the data stays in place on the page when this operation is complete. Change the Button When Clicked The first example is very simple. 
You change the text of the button and disable it (Figure 2) so that the user doesn’t click on the button again. These two steps are accomplished by writing a very simple JavaScript function, as shown in the following code snippet. <script> function DisplayProgressMessage(ctl, msg) { $(ctl).prop("disabled", true); $(ctl).text(msg); return true; } </script> Pass a reference to the button that was clicked to the DisplayProgressMessage function, and a message to change the text of the button. Use a jQuery selector to set the button’s disabled property to true. This causes the button to become disabled. Use the text method to set the text of the button to the message passed in. After creating this function, modify the submit button to call the DisplayProgressMessage function. Add an onclick event procedure to the submit button, as shown in the code snippet below. Pass a reference to the button itself using the keyword this, and the text Saving… which is displayed on the button. <button type="submit" id="submitButton" class="btn btn-primary" onclick="return DisplayProgressMessage(this, 'Saving...');"> Save </button> Add Pop-Up Message Let’s now enhance this sample by adding a pop-up message (Figure 3) in addition to changing the text of the button. Create a <div> with a <label> in it, and within the label, place the text you wish to display. Add a CSS class called .submit-progress to style the pop-up message. Set this <div> as hidden using the Bootstrap class Hidden so it won’t show up until you want it to display. Here’s the HTML you’ll use for this pop-up message: <div class="submit-progress hidden"> <label>Please wait while Saving Data...</label> </div Create the submit-progress style using a fixed position on the screen. Set the top and left to 50% to place this <div> in the middle of the page. Set some padding, width, and margins that are appropriate for the message you’re displaying. Select a background and foreground color for this message. Finally, set a border radius and a drop-shadow so the pop-up looks like it’s sitting on top of the rest of the page. To make this pop-up appear when clicking on the button, update the DisplayProgressMessage function, as shown in the following code snippet: <script> function DisplayProgressMessage(ctl, msg) { $(ctl).prop("disabled", true).text(msg); $(".submit-progress").removeClass("hidden"); return true; } </script> Notice that I changed this function a little. I chained together the setting of the disabled property and the setting of the text on the button. This isn’t absolutely necessary, but it’s a little more efficient as the selector only needs to be called one time. Next, remove the hidden class from the <div> tag to have the pop-up message appear on the page. Gray Out the Background To provide even more feedback to the user when they click on the button, you might "gray out" the whole Web page (Figure 4). This is accomplished by applying a background color of lightgray and an opacity of 50% to the <body> element. Create a style named .submit-progress-bg to your page that you can apply to the <body> element using jQuery. 
<style> .submit-progress-bg { background-color: lightgray; opacity: .5; } </style> Change the DisplayProgressMessage to add this class to the <body> tag when the button is clicked, as shown in the code snippet below: function DisplayProgressMessage(ctl, msg) { $(ctl).prop("disabled", true).text(msg); $(".submit-progress").removeClass("hidden"); $("body").addClass("submit-progress-bg"); return true; } Add a Font Awesome Spinner Another way to inform a user that something’s happening is to add some animation. Luckily, you don’t need to build any animation yourself; you can use Font Awesome () for this purpose. Add Font Awesome to your project using the NuGet Package Manager within Visual Studio. Font Awesome has many nice glyphs to which you can add a "spin" effect (Figure 5). Add an <i> tag to the submit progress pop-up <div> that you added before and set the CSS class to "fa fa-2x fa-spinner fa-spin". The first class name "fa" simply identifies that you wish to use the Font Awesome fonts. The second class name "fa-2x" says you want it to be two times the normal size of the glyph. The third class name "fa-spinner" is the actual glyph to use, which, as shown in Figure 5, has different-sized white circles arranged in a circle. The fourth class name "fa-spin" causes the glyph to spin continuously. Adding that <i> tag with those classes set to spin causes your pop-up message to display that glyph next to your text. <div class="submit-progress hidden"> <i class="fa fa-2x fa-spinner fa-spin"></i> <label>Please wait while Saving Data...</label> </div> Of course, you do need to change the .submit-progress style a little bit in order to fit both the glyph and the text within the message area and to make them look good side-by-side. You can add the following styles just below the other .submit-progress you created before to override the original styles. This makes the glyph appear in the right place in your message. You also add an additional style for the <i> tag to put a little bit of space between the glyph and the text. .submit-progress { padding-top: 2em; width: 23em; margin-left: -11.5em; } .submit-progress i { margin-right: 0.5em; } If you were to run the page right now without making any changes, you’ll either not see the glyph at all, or you’ll see it but it won’t be animated. The problem is that when the JavaScript function is running, it runs on a single execution context and then immediately returns to the browser, which then executes the form post to the server. The animated glyph needs to have its own execution context in which to run. To accomplish this, you need to use the JavaScript setTimeout function around the code that un-hides the <div> tag with your spinning glyphs. You can set any amount of time that you want on the setTimeout, but usually just a single micro-second will do, as shown in the code below. function DisplayProgressMessage(ctl, msg) { $(ctl).prop("disabled", true).text(msg); $("body").addClass("submit-progress-bg"); // Wrap in setTimeout so the UI // can update the spinners setTimeout(function () { $(".submit-progress").removeClass("hidden"); }, 1); return true; } Summary In this article, you learned how to provide some feedback to your users when they’re about to call a long operation on the server. It’s important to provide the feedback so that they don’t try to click on the same button again, or try to navigate somewhere else while you’re finishing a process. 
I’m sure that you can expand upon the ideas presented in this article and apply it to your specific use cases. Remember to run any animations in a different execution context using the setTimeout function. Sample Code You can download the sample code for this article by visiting my website at. Select PDSA Articles, then select "CODE Magazine—Progress Messages" from the drop-down list.
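For quick reference, the .submit-progress rule described earlier (fixed position, top and left of 50%, padding, width, and margins sized to the message, a background and foreground color, a border radius, and a drop-shadow) might look something like the sketch below; the specific values here are assumptions rather than the article’s originals.

    <style>
        .submit-progress {
            position: fixed;          /* fixed position on the screen */
            top: 50%;                 /* centered, per the description */
            left: 50%;
            padding: 1em;             /* assumed value */
            width: 20em;              /* assumed value */
            margin: -2em 0 0 -10em;   /* assumed offsets to center the box */
            background-color: #444;   /* assumed colors */
            color: #fff;
            border-radius: 0.5em;
            box-shadow: 0 0 1em rgba(0, 0, 0, 0.5);  /* drop-shadow */
        }
    </style>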
https://www.codemag.com/Article/1507051/Display-a-Progress-Message-on-an-MVC-Page
CC-MAIN-2020-10
en
refinedweb
On the Visual Studio team, we’re laser focused on the developer’s inner loop of writing, debugging, and testing code. It’s the most important part of making developers productive. Visual Studio 2019 is packed with tools to make your workflow more efficient. In this article, we’ll cover our favorite new features in Visual Studio navigation, debugging, code fixes and refactorings, code cleanup, and much more! Shell and UX The first thing you might notice in Visual Studio 2019 is the more purple shell color. This helps distinguish Visual Studio versions when you use side-by-side installations. You also don’t want to miss the improved Visual Studio search experience and solution filters. Side-by-Side Installation The Visual Studio installer enables you to install different versions of Visual Studio side-by-side on a single computer. If your team uses an earlier version of Visual Studio and you want to try out all the latest features in Visual Studio 2019, you can do that with side-by-side installation. Visual Studio Search We improved search efficiency and effectiveness in Visual Studio 2019. You can access Visual Studio search with (Ctrl+Q), which places your cursor in the search box in the top center of the Visual Studio shell. More parts of Visual Studio are now searchable including improved accuracy for menus, commands, options, and installable components. We’ve also added code search so you can easily find types and members with C# and Visual Basic, as well as file search for all languages. Figure 1 shows the Git commands now indexed in the search. Solution Filters How many times have you been stuck waiting for a large solution to load with many projects when you only wanted to work in a small subset of those projects? To improve performance when opening large solutions, Visual Studio 2019 introduced solution filters. Solution filters let you open a solution with only selective projects loaded. Loading a subset of projects in a solution decreases solution load, build, and test run time, and enables more focused review. This helps you get to code faster by opening a solution without loading all or. You can open a solution without loading any of its projects directly from the command line, as well as the Open Project dialog, as shown in Figure 2. - Choose File > Open > Project/Solution from the menu bar. - In the Open Project dialog, select the solution, and then select the Do not load projects checkbox. - Choose Open to open the solution with all of its projects unloaded. - In Solution Explorer, select the projects you want to load (press Ctrl while clicking to select more than one project), and then right-click on the project and choose Reload Project, as shown in Figure 3. Visual Studio remembers which projects were loaded the next time you open the solution. Performance Improvements Improving performance is always a top request from customers. In Visual Studio 2019, we reduced the time it takes for several operations. Faster Visual Studio startup Faster branch switching in Visual Studio Faster debug stepping Faster installation updates Faster Startup The new Start window is much faster in Visual Studio 2019 and has been designed to present you with several options to get you to code quickly. In addition, starting with Visual Studio 2019 version 16.1, Visual Studio blocks synchronously autoloaded extensions to improve startup and solution load times. This enables you to get to your code faster. 
Faster Branch Switching When working with Git, part of the usual workflow is to create and work on code branches. Visual Studio no longer completely unloads and reloads the solution during branch switches (unless many projects update as part of the branch switching operation). To avoid context switching between Visual Studio and the Git command line, Visual Studio 2019 now provides an integrated branch switching experience that enables you to "stash" any uncommitted changes during the branch switch operation. You no longer need to go outside of Visual Studio to stash your changes before switching branches. Faster Debugger Stepping Because a large part of the development cycle includes stepping through and debugging code, we’ve worked to bring several improvements to the debugger performance. Stepping through your code is over 50% faster in Visual Studio 2019 versus 2017. The Watch, Autos, and Locals windows are 70% faster. Moreover, because most debugger-related windows (i.e., Watch window, Call Stack window, etc.) are now asynchronous, you can interact with one window in Visual Studio while waiting for information to load in another. Stepping through your code is over 50% faster in Visual Studio 2019 versus 2017. Faster Installation of Visual Studio Updates With the introduction of background downloads for updates in Visual Studio 2019, you can continue working on your code for a longer time while the update downloads in the background. At the end of the download, once the update is ready for installation, you get a notification to let you know that you’re good to go. Using this approach, the installation time for Visual Studio 2019 updates has decreased significantly. Tooling Improvements and Navigation Tooling and navigation improvements include new syntax classification colors, additional Go to commands, and read/write filters for Find All References. New Classification Colors The Visual Studio editor is now just a little more colorful for C# and Visual Basic. Keywords, user methods, local variables, parameter names, and overloaded operators all get new colors. You can customize the colors for each new syntax classification in Tools > Options > Environment > Fonts and Colors by scrolling to User Members. Find Code Using "Go to" Commands Visual Studio's Go to commands perform a focused search of your code to help you quickly find specified items. You can go to a specific line, type, symbol, file, and member from a simple, unified interface. Type (Ctrl+T) to launch the Go to window display at the top right of your code editor. You can see the Go to tool in Figure 4. As you type in the text box, the results appear in a drop-down list. Use down arrows to preview a file or go to an element by selecting it in the list, as shown in Figure 5. You can also filter the searches by types, symbols, line, files, members, and recent files. To narrow your search to a specific type of code element, you can either specify a prefix in the search box or select one of the five filter icons shown in Table 1: You can also view project files (.csproj and .vbproj) with Go to navigation and search the contents for file references. Speaking of project files, you can easily edit SDK-style project files from the Solution Explorer with a simple double-click! Find References in Your Code You can use the Find All References command to find references to code elements throughout your codebase. The Find All References command is available in the context (right-click) menu of the element you’re interested in. 
Or, if you’re a keyboard user, place your cursor in the element and press Shift + F12. The results appear in a tool window named <element> references, where element is the name of the item you’re searching for. In Visual Studio 2019 for C# or Visual Basic, the Find References window has a Kind column where it lists what type of reference it found. You can use this column to filter by reference type by clicking on the filter icon that appears when hovering over the column header. You can filter references by several categories including Read, Write, Reference, Name, Namespace, and Type, as shown in Figure 6. Code Fixes and Refactorings Several hints are built into Visual Studio in the form of code fixes and refactorings. These appear as lightbulbs and screwdrivers next to your code or in the margin. The hints can resolve warnings and errors as well as provide suggestions. Suggestions can help you better follow the code style that your team prefers or discover new features, such as new C# syntax. You can check out the most popular refactorings that are built into Visual Studio at. We’ve added dozens of new code fixes and refactorings in Visual Studio 2019! You can open these by typing (Ctrl+.) or by clicking on the lightbulb or screwdriver icons. Here are a few of our favorites: - Sync namespace and folder name, as shown in Figure 7 - Convert foreach to LINQ, as shown in Figure 8 - Add multiple missing references, as shown in Figure 9 - Pull members up to base, as shown in Figure 10 And many more, like these: Invert conditional expressions Extract interface to same file Wrap/indent/align parameters/arguments Regex language support and completion Remove unused expression values and parameters Use expression/block body for lambda Move type to namespace Split/merge nested if statement Write Better Code Faster with Roslyn Analyzers Code fixes and refactorings in Visual Studio are all powered by analyzers. An analyzer is the tool that does static analysis on your code and reports diagnostics and errors. The .NET Compiler Platform ("Roslyn") analyzers review your code for style, quality and maintainability, design, and other issues, but it doesn’t need to stop with the built-in tools. You can create your own analyzers with the open-source Roslyn APIs. Do you have a common scenario or guidance that’s special to your codebase? You can create a diagnostic and code fix for it to share with your team or anyone who depends on your library. For an example tutorial, visit. The .NET Compiler Platform ("Roslyn") analyzers don’t need to stop with the built-in tools. You can create your own analyzers with the open-source Roslyn APIs. Many users in the community have already created their own Roslyn analyzer packages that are available to download on NuGet. As always, only install packages from providers you trust. The Roslyn team has several analyzer packages that we recommend, but they don’t ship as part of the default tools because they give more verbose feedback than is necessary in the default experience. If you’d like additional code style guidance and rules, download our recommended analyzer packages at. Define Code Style with EditorConfig Now that you’re familiar with code fixes and analyzers, we can talk about code style configuration. Code style is important because consistency makes code easier to maintain and read. Enforcing consistent code style is especially important when developer teams and their code bases grow. 
Visual Studio enables you to configure analyzers to apply your preferred code style rules and customize the severity at which they appear in the editor. You can easily change your code style to prefer explicit type instead of var (or vice versa) and display any violation of this rule as a suggestion, warning, or error in the editor. A warning to use explicit type instead of var is shown in Figure 11 as a green squiggle. The same code style rule is shown as a suggestion (three gray dots) in Figure 12.

Where do you configure all these code styles? You can use the code style pages in Tools > Options or the more versatile EditorConfig file. With the EditorConfig rules and syntax, you can enable or disable individual .NET coding conventions, and configure the degree to which you want each rule enforced, via a severity level. There are three supported .NET coding convention categories:

- Language conventions: Rules for C# or Visual Basic language preferences (Example: var versus explicit type preference)
- Formatting conventions: Rules for layout and structure of code (Example: rules around Allman braces, or preferring a space between a method call name and parenthesis)
- Naming conventions: Rules for naming code elements (Example: you can specify that an async method must end in Async)

Each category has variations in syntax, but instead of writing these yourself, you can use the default .NET EditorConfig item template or generate your own EditorConfig from your pre-existing Tools > Options settings.

    csharp_style_var_for_built_in_types = false:suggestion
    csharp_space_between_method_call_name_and_opening_parenthesis = false
    dotnet_naming_style.pascal_case_style.capitalization = pascal_case

To add an EditorConfig file to a project or a solution, right-click on the project or solution name within the Solution Explorer. Select Add New Item or press (Ctrl+Shift+A). In the Add New Item dialog, search for EditorConfig. Select the Default EditorConfig template to add an EditorConfig file prepopulated with two core EditorConfig options for indent style and size. Or, select the .NET EditorConfig template to add an EditorConfig file prepopulated with default options. An .editorconfig file appears in Solution Explorer, and it opens in the editor, as seen in Figure 13.

You can also add an EditorConfig file based on the code style settings you’ve chosen in the Visual Studio Options dialog. The options dialog is available at Tools > Options > Text Editor > [C# or Basic] > Code Style > General. Click Generate .editorconfig file from settings to automatically generate a coding style .editorconfig file based on the settings on this Options page.

If rule violations are found, they’re reported in the code editor (as a squiggle under the offending code) and in the Error List window. The document health indicator gives you a sneak peek of rule violations without having the Error List open. Figure 14 shows the File Health Indicator.

Apply Code Styles

For C# code files, Visual Studio 2019 has a Code Cleanup button at the bottom of the editor that applies code styles from an EditorConfig file or from the Code Style options page. If an EditorConfig file exists for the project, those are the settings that take precedence. Figure 15 shows the Code Cleanup button in action. First, configure which code styles you want to apply (in one of two profiles) in the Configure Code Cleanup dialog box (shown in Figure 16). To open this dialog box, click the expander arrow next to the code cleanup broom icon and then choose Configure Code Cleanup.
After you've configured Code Cleanup, you can either click on the broom icon or press (Ctrl+K, Ctrl+E) to run Code Cleanup. You can also run Code Cleanup across your entire project or solution. Right-click on the project or solution name in Solution Explorer, select Analyze and Code Cleanup, and then select Run Code Cleanup, as seen in Figure 17. You can also generate an EditorConfig based off the code styles used in an existing codebase. This and much more is offered by IntelliCode. IntelliCode Visual Studio IntelliCode enhances software development using machine learning and artificial intelligence. IntelliCode delivers context-aware code completions and guides developers to adhere to the patterns and styles of their team. Context-Aware Code Completions IntelliCode provides AI-assisted IntelliSense suggestions that appear at the top of the completion list with a star icon next to them, as shown in Figure 18. The completion list suggests the most likely correct API for a developer to use rather than presenting a simple alphabetical list of members. To provide this dynamic list, IntelliCode uses the developer's current code context as well as patterns based on thousands of highly rated, open-source C# projects on GitHub. The results form a model that predicts the most likely and most relevant API calls. IntelliCode uses the developer's current code context as well as patterns based on thousands of highly rated, open-source C# projects on GitHub.)., as shown in Figure 19. IntelliCode can also provide AI-assisted IntelliSense recommendations based on your own code. You can create a custom IntelliCode model to get AI-assisted IntelliSense recommendations based on your C# codebase. An IntelliCode model is an encapsulation of a set of rules that enable prediction of some useful information based on inputs. IntelliCode creates custom models using the same learning process as for the IntelliCode base models, except that they’re trained on your own code. The model trained on your code is private and only available to you and those with whom you choose to share it. The more code you provide to illustrate your patterns of usage, the more capable the custom model will be of offering good recommendations. Create a Custom Model To get useful predictions, a codebase should represent the common usage patterns for the APIs, objects, and methods that you use. The larger the variety of common usages that a codebase illustrates, the more useful the resulting model is in predicting those usages. To train a model, follow these steps: Open the project or solution in Visual Studio. Enable custom models in Tools > Options > IntelliCode > General > C# custom models. Open the IntelliCode page by choosing View > Other Windows > IntelliCode Model Management. Choose Create new model, as shown in Figure 20. After you've trained a model, the Share model button appears. Click the button to copy the sharing link. From there, you can share the link with your collaborators. Code Style Inference EditorConfig files help to keep your code consistent by defining code styles and formats. These conventions enable Visual Studio to offer automatic style and format fixes to clean up your document. For C# developers, IntelliCode can infer your code style and formatting conventions to dynamically create an EditorConfig file. You can add an IntelliCode-generated EditorConfig file at the project or solution level in Visual Studio (or to a solution folder). 
First, enable EditorConfig inference in Tools > Options > IntelliCode > General > EditorConfig inference. Then, add a prepopulated EditorConfig file, right-click on the desired location in Solution Explorer and choose Add > New EditorConfig (IntelliCode), as shown in Figure 21. After you add the file in this way, IntelliCode automatically populates it with code style conventions that it infers from your codebase. No more long discussions with your team about the best convention to use! Once generated, this file will help you maintain consistency in your team’s codebase. Debugging Visual Studio 2019 has added and improved upon features that enhance your productivity while debugging. The data breakpoint was a debugging feature exclusive to C++ that’s now compatible with .NET Core applications (version 3.0 or higher) in Visual Studio 2019. Data breakpoints enable you to halt your code when a specific object’s property changes in memory. Access these breakpoints by right-clicking an object’s property in the Autos, Locals, or Watch windows and selecting "Break When Value Changes" in the context menu. You can now search for specific values in your Autos, Locals, and Watch windows across multiple languages (excluding Xamarin, Unity, and SQL), as shown in Figure 22. Performing a search saves you the hassle of constantly scrolling and expanding items you want to inspect. Besides the new additions in 2019 and improved overall performance, there are plenty of other existing debugging features available in Visual Studio that can improve your debugging experience. There are plenty of ways to expedite your stepping experience, including Run to Click, a green arrow glyph that lets you fast-forward your code’s execution to a specified line (as shown in Figure 23) and Step into Specific, a context menu option which lets you step inside a nested function call. The next time you consider writing a print statement to log information, try using a TracePoint instead, which enables you to print to the output window without modifying your code. TracePoints can be set by creating a normal breakpoint at the specified line of code, selecting the gear glyph after hovering over it, and selecting the Actions option. You can see this in Figure 24. For users debugging asynchronous or multithreaded applications, check out the Parallel Stacks, Tasks, and Threads windows (all accessible via Debug > Windows in the top menu while debugging) to break down and analyze the code in an efficient, understandable manner. Test Explorer The Test Explorer had a major UI update in Visual Studio 2019 version 16.2 to provide better handling of large test sets, easier filtering, more discoverable commands, tabbed playlist views, and the addition of customizable columns that let you fine tune what test information is displayed. You can see this new window in Figure 25. Easily view the total number of failing tests at a glance and filter by outcome with the summary buttons at the top of the Test Explorer, as shown in Figure 26. You can customize what information shows for your tests by selecting which columns are visible, as shown in Figure 27. You can display the Duration column when you’re interested in identifying slow performing tests or you can use the Message column for comparing results. This table layout mimics the Error List table in its customizability. The columns can also filter using the filter icon that appears when hovering over the column header. 
Additionally, you now can specify what displays in each tier of the test hierarchy, as shown in Figure 28. The default tiers are Project, Namespace, and then Class, but you can also select any combination of groupings including State or Duration groupings. Playlists can display in multiple tabs and are much easier to create and discard as needed. Live Unit Testing also gets its own tab that displays all tests currently included in Live Unit Testing so you can easily keep track of Live Unit Testing results, separate from the manually run test results. Live Unit Testing is a Visual Studio Enterprise feature that automatically runs any impacted unit tests in the background and presents the results and code coverage live in Visual Studio in real time. Live Share Live Share enables you to collaboratively edit and debug with others in real time, regardless of what programming languages you're using or app types you're building. It enables you to instantly and securely share your current project, and then, as needed, share debugging sessions, terminal instances, localhost Web apps, voice calls, and more! Additionally, unlike traditional pair programming, Visual Studio Live Share enables developers to work together while retaining their personal editor preferences (e.g., theme or keybindings), as well as having their own cursor. This enables you to seamlessly transition between following one another and being able to explore ideas/tasks on your own. In practice, this ability to work together and independently provides a collaboration experience that’s potentially more natural for many common use cases. Click Live Share within Visual Studio to start your collaboration session and automatically copy an invite link to your clipboard, as shown in Figure 29. Send the link over email, Teams, etc. to those you want to invite. Opening the link in a browser enables your guest to join the collaboration session that shares the contents of the folder, project, or solution that you opened. Note that, given the level of access Live Share sessions that you can provide to guests, you should only share with people you trust and think through the implications of what you are sharing. That's it! Here are a few things to try out once a guest has joined you: - Move around to different files in the project independently and make some edits. - Follow the guest and observe as they scroll, make edits, and navigate to different files. - Start up a co-debugging session with them. - Share a server so you can check out something like a Web app running on their computer. - Share a terminal and run some commands Resources This article is a peek at recent improvements to Visual Studio 2019. More content, examples, and comprehensive docs are located on docs.microsoft.com. Here are some direct links that you might find useful: To learn more on the most popular refactorings built-in to Visual Studio 2019, visit. Guidance on creating your own Roslyn analyzers can be found at. Get extra guidance and code style rules from your editor by installing the recommended analyzer packages at. To learn more tips and tricks on Visual Studio Productivity checkout our guide at.
https://www.codemag.com/Article/1911022/Be-More-Productive-in-Visual-Studio-2019
CC-MAIN-2020-10
en
refinedweb
Node:Hidden assignments, Next:Postfix and prefix ++ and --, Previous:Hidden operators and values, Up:Hidden operators and values

Hidden assignments

Assignment expressions have values too -- their values are the value of the assignment. For example, the value of the expression c = 5 is 5. The fact that assignment statements have values can be used to make C code more elegant. An assignment expression can itself be assigned to a variable. For example, the expression c = 0 can be assigned to the variable b:

    b = (c = 0);

or simply:

    b = c = 0;

These equivalent statements set b and c to the value 0, provided b and c are of the same type. They are equivalent to the more usual:

    b = 0;
    c = 0;

Note: Don't confuse this technique with a logical test for equality. In the above example, both b and c are set to 0. Consider the following, superficially similar, test for equality, however:

    b = (c == 0);

In this case, b will only be assigned a zero value (FALSE) if c does not equal 0. If c does equal 0, then b will be assigned a non-zero value for TRUE, probably 1. (See Comparisons and logic, for more information.)

Any number of these assignments can be strung together:

    a = (b = (c = (d = (e = 5))));

or simply:

    a = b = c = d = e = 5;

This elegant syntax compresses five lines of code into a single line. There are other uses for treating assignment expressions as values. Thanks to C's flexible syntax, they can be used anywhere a value can be used. Consider how an assignment expression might be used as a parameter to a function. The following statement gets a character from standard input and passes it to a function called process_character.

    process_character (input_char = getchar());

This is a perfectly valid statement in C, because the hidden assignment statement passes the value it assigns on to process_character. The assignment is carried out first and then the process_character function is called, so this is merely a more compact way of writing the following statements.

    input_char = getchar();
    process_character (input_char);

All the same remarks apply about the specialized assignment operators +=, *=, /=, and so on. The following example makes use of a hidden assignment in a while loop to print out all values from 0.2 to 20.0 in steps of 0.2.

    #include <stdio.h>

    /* To shorten example, not using argp */

    int main ()
    {
      double my_dbl = 0;

      while ((my_dbl += 0.2) < 20.0)
        printf ("%lf ", my_dbl);

      printf ("\n");
      return 0;
    }
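As a small supplement to the note above about confusing = with ==, the following short program shows how the two superficially similar statements behave when c starts at 0; the expected output appears in the comments.

    #include <stdio.h>

    int main ()
    {
      int b, c;

      /* Hidden assignment: both b and c become 0. */
      b = (c = 0);
      printf ("%d %d\n", b, c);   /* prints: 0 0 */

      /* Test for equality: b becomes 1 (TRUE) because c equals 0. */
      b = (c == 0);
      printf ("%d %d\n", b, c);   /* prints: 1 0 */

      return 0;
    }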
http://crasseux.com/books/ctutorial/Hidden-assignments.html
CC-MAIN-2017-43
en
refinedweb
2. Web Service (ASMX) which gets an Object with two members (Name and LastName).

Left by Bryan Corazza on Mar 09, 2007 10:36 AM
# re: Failed to Serialize the Message Part
Hey, I am also getting the same problem you described; can you help?

Then probably you will get to know what is the correct format you need to send. Well, now I have built a sample: 1. Read Section 2 carefully to overcome this error. Below is the current issue I am facing (tagged c#, asp.net, json, asp.net-web-api). What does the value object look like that…

In our example we got "… If I add an array of a custom class (FieldError) it then doesn't serialize that to XML properly. That's the first thing I can think of with that error… Stephen W.

Now I spent about an hour looking at the messages that failed and I realized the problem really had 'nothing' to do with BizTalk: it was a serialization issue.

Just FYI, I think that should read:

    using (Database db = new Database())
    {
        List …

Better to use it in the DbContext constructor:

    public DbContext() // dbcontext constructor
        : base("name=ConnectionStringNameFromWebConfig")
    {
        this.Configuration.LazyLoadingEnabled = false;
        this.Configuration.ProxyCreationEnabled = false;
    }

Asp.Net Web API Error: The 'ObjectContent`1' type failed… It is much more reliable and maintainable to use Models in which you have control of what the data looks like and not the database. Use AutoMapper… AutoMapper and ValueInjecter are 2 notable ones. –Sonic Soul, Jun 23 '15 at 15:53

"Please ensure that the message part stream is created properly." The reason for the first error message is due to a wrongly named IBaseMessage partName.

DOWNLOAD SAMPLE: read the readme.txt file inside to configure it.
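Pulling the working advice in the comments above together (turn off proxy creation and lazy loading, and return plain models instead of Entity Framework entities), a minimal sketch might look like the following. The User, UserModel, Users, and UsersController names are illustrative only, echoing the two-member object (Name and LastName) mentioned at the top of the page; the connection string name is taken from the quoted comment.

    using System.Collections.Generic;
    using System.Data.Entity;
    using System.Linq;

    public class User                 // illustrative entity
    {
        public int Id { get; set; }
        public string Name { get; set; }
        public string LastName { get; set; }
    }

    public class UserModel            // plain model ("DTO") returned to the caller
    {
        public string Name { get; set; }
        public string LastName { get; set; }
    }

    public class Database : DbContext
    {
        public Database()
            : base("name=ConnectionStringNameFromWebConfig")
        {
            // Keep EF from handing the serializer dynamic proxy types.
            this.Configuration.ProxyCreationEnabled = false;
            this.Configuration.LazyLoadingEnabled = false;
        }

        public DbSet<User> Users { get; set; }
    }

    public class UsersController : System.Web.Http.ApiController
    {
        // Project entities into the plain model before the response is serialized.
        public IEnumerable<UserModel> GetUsers()
        {
            using (var db = new Database())
            {
                return db.Users
                         .Select(u => new UserModel { Name = u.Name, LastName = u.LastName })
                         .ToList();
            }
        }
    }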
http://juicecoms.com/failed-to/failed-to-serialize-the-message-part-request-into-the-type.html
CC-MAIN-2017-43
en
refinedweb
Release Notes¶ ndn-cxx version 0.5.1¶ Release date: January 25, 2017 Note This is the last release of the library that supports NDN Certificate format version 1 and the existing implementations of validators. The upcoming 0.6.0 release will include multiple breaking changes of the security framework. Changes since version 0.5.0: New features:¶ - Add version 2 of the security framework (introduced in security::v2 namespace) - NDN Certificate Format Version 2.0 (Issue #3103) - New Public Information Base (PIB) and Trusted Program Module (TPM) framework to manage public/private keys and NDN Certificate version 2.0 (Issue #2948, Issue #3202) - New KeyChain implementation (Issue #2926) - New Validator implementation (Issue #3289, Issue #1872) - New security-supporting utilities: trust anchor container and certificate cache - Creation of Command Interests delegated to CommandInterestSigner class, while the new KeyChain only signs Interests (Issue #3912) - Enable validator to fetch certificates directly from the signed/command interest sender (Issue #3921) - Add UP and DOWN kinds to FaceEventNotification (issue:3794) - Add support for NIC-associated permanent faces in FaceUri (Issue #3522) - Add support for CongestionMark and Ack NDNLPv2 fields (Issue #3797, Issue #3931) - Add StrategyChoice equality operators and formatted output (Issue #3903) Improvements and bug fixes¶ - Ensure that NACK callback is called for matching Interests, regardless of their nonce (Issue #3908) - Optimize name::Component::compare() implementation (Issue #3807) - Fix memory leak in ndn-cxx:Regex (Issue #3673) - Correct NDNLPv2 rules for whether an unknown field can be ignored (Issue #3884) - Ensure that port numbers in FaceUri are 16 bits wide - Correct ValidityPeriod::isValid check (Issue #2868) - Fix encoding of type-specific TLV (Issue #3914) - Rename previously incorrectly named EcdsaKeyParams to EcKeyParams (Issue #3135) - Various documentation improvements, including ndn-cxx code style updates (Issue #3795, Issue #3857) Deprecated¶ - Old security framework. All old security framework classes are moved to ndn::security::v1 namespace in this release and will be removed in the next release. - v1::KeyChain, use v2::KeyChain instead - v1::Validator interface and all implementations of this interface (ValidatorRegex, ValidatorConfig, ValidatorNull). Use v2::Validator and the corresponding implementations of ValidationPolicy interfaces (will be introduced before 0.6.0 release). - v1::SecPublicInfo and its implementation (SecPublicInfoSqlite), SecTpm and its implementations (SecTpmFile, SecTpmOsx). These classes are internal implementation and not intended to be used without v1::KeyChain. v2::KeyChain internally uses the newly introduced Pib and Tpm interfaces with their corresponding implementations. - v1::Certificate, v1::IdentityCertificate, v1::CertificateExtension, v1::CertificateSubjectDescription, use v2::Certificate and AdditionalDescription - v1::SecuredBag, use v2::SafeBag instead - will.
http://named-data.net/doc/ndn-cxx/0.5.1/RELEASE_NOTES.html
CC-MAIN-2017-43
en
refinedweb
This post is a followup of to initiate a discussion whether whitebox def macros should be included in an upcoming SIP proposal on macros. Please read the blog post for context. Whitebox macros are similar to blackbox def macros with the distinction that the result type of whitebox def macros can be refined at each call-site. The ability to refine the result types opens up many applications including To give an example of how blackbox and whitebox macros differ, imagine that we wish to implement a macro to convert case classes into tuples. import scala.macros._ object CaseClass { def toTuple[T](e: T): Product = macro { ??? } case class User(name: String, age: Int) // if blackbox: expected (String, Int), got Product // if whitebox: OK val user: (String, Int) = CaseClass.toTuple(User("Jane", 30)) } As you can see from this example, whitebox macros are more powerful than blackbox def macros. A whitebox macro that declares its result type as Any can have it’s result type refined to any precise type in the Scala typing lattice. This powerful capability opens up questions. For example, do implicit whitebox def macros always need to be expanded in order be disqualified as a candidate during implicit search? Any Quoting Eugene Burmako from SIP-29 on inline/meta, which contains a detailed analysis on “Loosing whiteboxity” The main motivation for getting rid of whitebox expansion is simplification - both of the macro expansion pipeline and the typechecker. Currently, they are inseparably intertwined, complicating both compiler evolution and tool support. The main motivation for getting rid of whitebox expansion is simplification - both of the macro expansion pipeline and the typechecker. Currently, they are inseparably intertwined, complicating both compiler evolution and tool support. Note, however, that the portable design of macros v3 (presented in) should in theory make it possible to infer the correct result types for whitebox macros in IDEs such as IntelliJ. Quoting the minutes from the Scala Center Advisory Board:. Adriaan Moors, the Scala compiler team lead at Lightbend agreed with Martin, and mentioned a current collaboration with Miles Sabin to improve scalac so that Shapeless and other libraries can rely less on macros and other nonstandard techniques What do you think, should whitebox def macros be included in the macros v3 SIP proposal? In particular, please try to answer the following questions Thanks a lot to Ólafur, Eugene and the Scala Center in general for setting up such a thorough and transparent process. Here are my personal thoughts, as an extensive user of macros: The first use-case for whitebox macros that comes to mind is of course quasiquotes, because we often want what is quoted to influence the typing of the resulting expression. This is invaluable when one wants to design type-safe quasiquote-based interfaces. For example, see the Contextual library. Haskell has similar capabilities thanks to Template Haskell. This extends the point above, but it goes much further. We have been working on Squid, an experimental type-safe metaprogramming framework that makes use of quasiquotes as its primary code manipulation tool. Squid quasiquotes are statically-typed and hygienic. For example { import Math.pow; code"pow(0.5,3)" } has type Code[Double] and is equivalent to code"_root_.Math.pow(0.5,3)". 
{ import Math.pow; code"pow(0.5,3)" } Code[Double] code"_root_.Math.pow(0.5,3)" (You can read more about Squid Code quasiquotes in our upcoming Scala Symposium paper: Type-Safe, Hygienic, and Reusable Quasiquotes.) Code The main reasons for using whitebox quasiquote macros here are: to enable pattern matching: we have an alternative code{pow(0.5,3)} syntax that could be a blackbox, but it doesn’t work in patterns (while the quasiquoted form works); making patterns more flexible might be a way to solve this particular point; code{pow(0.5,3)} to enable type-parametric matching: one can write things like pgrm.rewrite{ case code"Some[$t]($x).get" => x }. This works thanks to some type trickery, namely it generates a local module t that has a type member t.Typ, and types the pattern code using that type, extracting an x variable of type Code[t.Typ]. This is somewhat similar to the type providers pattern. The rewrite call itself is also a macro that, among other things, makes sure that rewritings are type-preserving. pgrm.rewrite{ case code"Some[$t]($x).get" => x } t t.Typ x Code[t.Typ] rewrite to enable extending Scala’s type system: we have alternative ir quotation mechanism that is contextual in the sense that quoted term types have an additional context parameter. This (contravariant) type parameter expresses the term’s context dependencies/requirements. Term val q = ir"(?x:Int).toDouble" introduces a free variable x and thus has type IR[Double,{val x:Int}] where the second type argument expresses the context requirement. (IR stands for Intermediate Representation.) Expression code"(x:Int) => $q + 1" had type IR[Int => Double,{}] because the free variable x in q was captured (this is determined statically). That term can then be safely be ran (using its .run method, which requires an implicit proving that the context is empty C =:= {}). Thus we “piggyback” on Scala’s type checker in a modular way to provide our own user-friendly safety checking that would be very hard to express using vanilla Scala. ir val q = ir"(?x:Int).toDouble" IR[Double,{val x:Int}] code"(x:Int) => $q + 1" IR[Int => Double,{}] q .run C =:= {} As you have guessed, this relies on invoking the compiler from within the quasiquote macro. I understand that this is technically tricky and makes type-checking “inseparably intertwined” with macro expansion, but on the other hand that’s also an enormous advantage. If it’s possible to sanitize the interface between macros and type-checkers, that would give Scala a very unique capability that puts it in a league of its own in terms of expressivity –– basically, the capability to have an extensible type system. Could Squid’s quasiquotes be made a compiler plugin? Probably, though I’m not knowledgeable enough to answer with certainty, and I suspect it would be very hard to integrate these changes right into the different versions of Scala’s type checker. As an aside, in Squid we also came up with the “object algebra interface” way to make language constructs expressed in the quasiquotes independent from the actual intermediate representation of code used. This seems similar to the way the new macros are intended to work –– the main difference being that we support only expressions (not class/method definitions). Dynamic I think the usage of the Dynamic trait becomes extremely limited (from a type-safe programming point of view) if we don’t have a way to refine the types of the generated code based on the strings that are passed to its methods selectDynamic & co. 
(doing so is apparently even known as the “poor man’s type system”). selectDynamic If that is possible to do in a sane way, I could not recommend going with that possibility enough! Thank you for your detailed response @LPTK In Squid, do you rely on fundep materialization? There may be a design space between blackbox and whitebox def macros that supports refined result types but not fundep materialization. I suspect it would be very hard to integrate these changes right into the different versions of Scala’s type checker. I suspect it would be very hard to integrate these changes right into the different versions of Scala’s type checker. I suspect so too, we face the same challenges designing a macro system that works reliably across different compilers The Dynamic trait The Dynamic trait That is a good observation. I am not sure how common this technique is. I have contacted the author of scalikejdbc to share how they use selectDynamic with whitebox def macros. Also, not sure how may impact this. the capability to have an extensible type system. the capability to have an extensible type system. Note that this may not necessarily be a desirable capability. Some whitebox def macros are so powerful they can be used to turn Scala into another language! I’d like a way for whitebox macros authors to be able (although not neccesarily obliged) to separate the part of the macro that computes the return type from the part that computes the expanded term. Let’s call the first part “signature macros”. For implicit macros, this would lend itself to more efficient typechecking. Even for non-implicit macros, an IDEs could be more efficient if they could just run the “signature macro”. I think that this separation also will help to shine a light on whether the full Scala language is the right language for signature macros, or if a more restrictive language could express a broad set of use cases of whitebox macros. I suppose the contract would be that if the signature macro returned a type and no errors, the corresponding term expansion macro would be required to succeed and to conform to the computed return type. Obviously a naive implementation of the signature macro is to just run the term macro and typecheck it, as per the status quo. I think we should aim higher than that, though! Ryan Culpepper recently suggested essentially the same thing that you call “signature macros” two weeks ago! …Really glad to hear this suggestion; means that at least a subset of us are thinking along the same lines cc/ @olafurpg Not currently. We had a prototype system that perhaps did something like that (not sure): it was a system for statically generating evidence that structural types did not contain certain names or were disjoint in terms of field names. For example, you could write def foo[A,B](implicit dis: A <> B) meaning that A and B are structural types that share no field names. You could then call foo[{val x:Int},{val y:Double}] but not foo[{val x:Int},{val x:Double}]. When extendind an abstract context C as in C{val x:Int}, the contextual quasiquote macro would look for an evidence that C <> {def x} to ensure soundness in the face of name clashes. However, instead of porting that old prototype to the current system, we’re probably going to move to a more modular solution, which shouldn’t need any implicit macros. 
def foo[A,B](implicit dis: A <> B) A B foo[{val x:Int},{val y:Double}] foo[{val x:Int},{val x:Double}] C C{val x:Int} C <> {def x} There is one particularly nasty thing that a Squid implicit macro currently does: it looks inside the current scope to see if it can find some type representation evidence. This allows us to use an extracted type t implicitly as in case ir"Some[$t]($x) => ... implicitly[t.Typ] ... instead of having to write case ir"Some[$t]($x) => implicit val t_ = t; ... implicitly[t.Typ] .... I understand this is probably asking macros for too much, and I think we could do without it (though it may degrade the user experience a little). case ir"Some[$t]($x) => ... implicitly[t.Typ] ... case ir"Some[$t]($x) => implicit val t_ = t; ... implicitly[t.Typ] ... About Dynamic, one of the things I’ve used it for was to automatically redirect method calls to some wrapped object (cf. composition vs inheritance style). Yeah, it’s a judgement call. IMHO Scala is already a language that lets you define a myriad different sub-languages thanks to its flexible syntax and expressive type system. I think that’s one thing many people like about the language (cf., for example, the vast ecosystem of SQL/data analytic libraries that define their own custom syntaxes and semantics). Sounds like the most natural way to do it would be to just have type macros. Then whitebox macros are just blackbox macros with a return type that is a macro invocation. def myWhitebox[A](a: A, str: String): MyReturn[A, str.type] = macro ... type MyReturn[A, S <: String with Singleton] = macro ... It’s a nice separation of concerns. But I’m afraid there are a lot of whitebox macros in the wild where both code generation and type refinement are very much intertwined, because they’re semantically inseparable. In the case of Squid, what I’d do is to parametrize the current macro to either just compute a type or do the full code generation; but that would mean a lot of computation would be duplicated (I would have to parse, transform, typecheck and analyse the quasiquote string in both type signature and code-gen macro invocations), and batch compile times would be strictly worse. To add onto what @LPTK wrote, I’d speculate that there are very few whitebox macros for which the signature macro could be easily separated from the term macro without a lot of code duplication and/or redundant work. An alternative approach may be to conflate the signature macro and the term macro. The macro expansion could return a tuple of (List[c.Type], c.Tree) where the list of types must contain exactly as many types as there are method type arguments. For example, suppose that I want to implement the CaseClass.toTuple[T] method from above. (List[c.Type], c.Tree) CaseClass.toTuple[T] object CaseClass { def toTuple[C, T](cls: C): T = macro CaseClassMacros.impl[C, T] } class CaseClassMacros(val c: Context) { import c.universe._ def impl[C: c.WeakTypeTag, T](cls: c.Expr[C]): (List[c.Type], c.Tree) = { ... val tree = q"""...""" val tType: c.Type = ??? val resultTypes = List(weakTypeOf[C], tType) (resultTypes, tree) } } The typechecking of the returned tree could be deferred until after the compiler has verified that result types are valid. There would be no need to re-expand the macro using the result types since the type T is a functional dependency of C. While this is less conceptually elegant than having independent signature and term macros, I think that it would be more practical for macro authors. 
When thinking about macros I have found it useful to consider two dimensions: First dimension: What is the expressive power of the macro language? Second dimension. When should this power be available? Scala with whitebox macros is currently at the extreme point (3, 3) of the matrix. This is IMO is a very problematic point to be on. Having the full power of the underlying language at your disposal means your editor can (1) crash, (2) become unresponsive, or (3) pose a security risk, just because some part of your program is accessing a bad macro in a library. That’s not hypothetical. I still remember the very helpful(?) Play schema validation macro that caused all IDEs to freeze. Scala with blackbox macros is at (3, 2). This is slightly better as only building but not editing is affected by bad macros and you can do a better job of isolating and diagnosing problems. But it still would make desirable tools such as a compile server highly problematic because of security concerns. If we take other languages as comparisons they tend to be more conservative. Template Haskell lets you do lots of stuff, but it is its own language. I believe that was a smart decision of the Haskell designers. Meta OCaml is blackbox only and does not have any sort of inspection, so it’s essentially compile-time staging and nothing else. So, if Scala continued to have whitebox macros it would indeed be far more powerful than any other language. Is that good or bad? Depends on where you come from and what you want to do, for sure. But I will be firmly in the “it would be very bad” camp. In the future, I want to concentrate on making Scala a better language, with better tooling, as opposed to a more powerful toolbox in which people can write their own language . There’s nothing wrong with toolboxes, but it’s not a primary goal of Scala as I see it. Given this dilemma, maybe there’s no single solution that satisfies all concerns. That was the original motivation of the inline/meta proposal in SIP 29: Have only inlining available as a standard part of the language. Inlining does a core part of macro expansion (arguably, the hardest part to implement correctly). Then build on that using meta blocks that are enabled by a special compiler mode or a compiler plugin. If we have only blackbox macros the plugin can be a standard one which simply runs after typer. With whitebox macros the “plugin” would in fact have to replace the typer, which is much more problematic. I believe it would in effect mean we define a separate language, similar to Template Haskell. That’s possible, but I believe we need then to be upfront about this. One thing to add to my previous comment: Some form of type macros (or, as @retronym calls them, signature macros) might be a good replacement for unfettered whitebox macros. Dotty’s inline essentially does two things: inline In the type language, we already have beta-reduction. If type F[X] = G[X] then F[String] is known to be the same as G[String]. If we add some form of condiional, we might already have enough to express what we want, and we would stay in the same envelope of expressive power. F[String] G[String] To get into the same ballpark in terms of expressiveness, I think you’ll also need some form of recursion purely at the type level, which is not currently possible: type Fix[A[_]] = A[Fix[A]] illegal cyclic reference: alias [A <: [_$2] => Any] => A[Fix[A]] of type Fix refers back to the type itself Wouldn’t supporting this potentially break the type system pretty badly? 
A minor nitpick: Actually, MetaOCaml is not related to macros. It’s essentially for generating and compiling code at runtime (traditional multi-stage programming) –– though it’s true that the approach was ported to compile-time with systems such as MacroML, or more recently modular macros. @LPTK Yes, we’d have to add some form of recursion to type definitions, with the usual complications to ensure termination. You are right about Meta OCaml. I meant OCaml Macros: For implicit macros, this would lend itself to more efficient typechecking. For implicit macros, this would lend itself to more efficient typechecking. Indeed. But, furthermore we have by now decided that every implicit def needs to come with a declared return type. This restriction is necessary to avoid puzzling implicit failures due to cyclic references. So, it seems whatever is decided for whitebox macros, implicit definitions in the future cannot be whitebox macros. We use whitebox macros to compile db queries and return query result as typed rows, i.e. db query string also serves as a class definition. For example: scala> tresql"emp[ename = ‘CLARK’] {ename, hiredate}".map(row => row.ename + " hired " + row.hiredate) foreach println select ename, hiredate from emp where ename = 'CLARK’ CLARK hired 1981-06-09 Is there a way to achieve this without whitebox macros? We use whitebox macros to do symbolic computation (using a Java library called Symja) at compile-time. As we have no idea what the final function/formula is going to look like, we cannot define a fixed return type. I’d also be interested if there’s a way to do this without whitebox macros.
https://contributors.scala-lang.org/t/whitebox-def-macros/1210
CC-MAIN-2017-43
en
refinedweb
I'm supposed to create a program that displays the contents of a file (that the user inputs), and then displays each line with a number in front and a colon after it. So, lines 1 and 2 would print out:

1 lineFromFile :
2 lineFromFile :

But it keeps printing out 1 in front of each line, so I think I didn't write the count part correctly. Can someone help? Here's what I've got.

import java.util.Scanner;   //Needed for Scanner class
import java.io.*;           //Needed for file and IOException

public class orderOfFileLines
{
   public static void main (String[] args) throws IOException
   {
      int number;   //Loop control variable

      //Create a Scanner object for keyboard input.
      Scanner keyboard = new Scanner(System.in);

      //Get the file name.
      System.out.print("Enter the name of a file.");
      String filename = keyboard.nextLine();

      //Open the file.
      File file = new File(filename);
      Scanner inputFile = new Scanner(file);

      //Read the lines from the file until no more are left.
      while (inputFile.hasNext())
      {
         //Read the next line.
         String line = inputFile.nextLine();

         for (number = 1; number <= 1; number++)
         {
            //Display the lines with number and ":".
            System.out.println(number + line + ":");
         }
      }

      //Close the file.
      inputFile.close();
   }//end main method
}//end class
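One way to fix the count (a sketch reusing the variable names from the post): declare the counter once before the while loop and increment it after each printed line, instead of running a for loop that always restarts at 1.

      int number = 1;   //Start counting at 1
      while (inputFile.hasNext())
      {
         String line = inputFile.nextLine();
         //Display the line number, then the line, then a colon.
         System.out.println(number + " " + line + " :");
         number++;      //Advance once per line read
      }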
https://www.daniweb.com/programming/software-development/threads/124283/don-t-think-i-m-getting-count-correct
CC-MAIN-2017-43
en
refinedweb
***********Updated on 4th October 2017*********** Secure Socket Layer (SSL) and its successor Transport Layer Security (TLS) are protocols which use cryptographic algorithms to secure the communication between 2 entities. It is just a secure layer running on top of HTTP. Overview of SSL Protocol Stack Several versions of SSL have been released after its advent in 1995 (SSL 2.0 by Netscape communications, SSL 1.0 was never released). Here is the list: - SSL 1.0, 2.0 and 3.0 - TLS 1.0 (or SSL 3.1, released in 1999) - TLS 1.1 (or SSL 3.2, released in 2006) - TLS 1.2 (or SSL 3.3, released in 2008) SSL was changed to TLS when it was handed over to IETF for standardizing the security protocol layer in 1999. After making few changes to SSL 3.0, IETF released TLS 1.0. TLS 1.0 is being used by several web servers and browsers till date. What I have never understood, is there have been newer versions released after this, with the latest being TLS 1.2 released in 2008. On Windows the support for SSL/TLS protocols is tied to the SCHANNEL component. So, if a specific OS version doesn’t support a SSL/TLS version, this means it remains unsupported. Below table should give you a good understanding of what protocols are supported on Windows OS. TLS 1.1 & TLS 1.2 are enabled by default on post Windows 8.1 releases. Prior to that they were disabled by default. So the administrators have to enable the settings manually via the registry. Refer this article on how to enable this protocols via registry: On the client side, you can check this in the browser settings. If you are using IE on any of the supported Windows OS listed above, then in IE, browse to Tools -> Internet Options -> Advanced. Under the Security section, you would see the list of SSL protocols supported by IE. IE supports only those security protocol versions, which is supported by the underlying SCHANNEL component of the OS. TLS settings in IE on Windows 10 Chrome supports whatever IE supports. If you intend to check the support in Firefox, then enter the text "about:config" in the browser address bar and then enter TLS in the search bar as shown below. TLS Settings on Firefox v47 The settings security.tls.version.max specifies the maximum supported protocol version and security.tls.version.min specifies the minimum supported protocol version . They can take any of the below 4 values: - 0 - SSL 3.0 - 1 - TLS 1.0 (This is the current default for the minimum required version.) - 2 - TLS 1.1 - 3 - TLS 1.2 (This is the current default for the maximum supported version.) Refer this Mozilla KB for more info:.* Nice clear information !!! helpful Excellent!! This was very helpful for me Hello thanks Much, About TLS 1.2 in windows Seven7, blogs.msdn.com/…/support-for-ssl-tls-protocols-on-windows.aspx, Says you have to Turn On "TLS 1.2", so Where Is The Instruction Setup, if you please ? ? ? Only In internet explorer , and, Or ,What, where, howTo, more, etc. plzz Hi Calvin, Sorry for the late reply. I was out on vacation, so couldn't reply to you earlier. I didn't include the instruction setup because there is a KB article on the same. Here is the link: support.microsoft.com/…/245030 Basically if you want to disable or enable TLS/SSL on the server side, it has to be done via registry.> Great article, but I'd appreciate it if you'd put descriptions (altText), under your images so that users reading your information using screen readers have an idea of what you're showing us. thanks for the feedback Cron. I've included them now. Let me know if this helps. 
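For reference, the SCHANNEL registry entries that the linked article walks through take roughly this shape (a sketch for TLS 1.2; create the analogous TLS 1.1 keys the same way, and reboot afterwards so SCHANNEL picks up the change):

Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
"Enabled"=dword:00000001
"DisabledByDefault"=dword:00000000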
have a 32 Bit Windows 2008 machine, how do I disable the Ciphers on that server and make it not vulnerable to the BEAST. Kashif, There is already a fix for the Beast. Read this article: blogs.msdn.com/…/fixing-the-beast.aspx Why TLS 1.1 and 1.2 is not turned on by default? @Ray – Well the reason is that not many existing client browsers don't support TLS 1.1 and later. Even though they are the latest versions of TLS, the problem has been with the adoption of these versions. If they are enabled, there will be additional time spent in re-negotiating these parameters with the server or client. Hence they are always kept disabled. i m using windows 8. when i open browger its show ssl connection errors. how can i solve @Arif – Could you provide a snapshot or provide more details on what is being done? — What site are you browsing? — What browser are you using? — What is the actual error? I have a case where the Win7 clients' SSL3 and TLS1.0 box is checked on IE8. When they try and connect to a Win2008R2 instance of IIS7.5 it refuses to go secure. If the client additionally checks the SSL2.0 checkbox, then the client goes secure with the server. How can that be? The server is default config, so SSL2.0 is turned off by default. Why Windows Server 2008 (non R2!) do not get new schannel which supports TLS 1.1/1.2 and new ciphers ? It is still in Mainstream Support till 2015. support.microsoft.com/…/default.aspx Hello, I am not sure why it was not added as it would have been really great to add support for latest TLS versions. The only problem I see was the major over-hauling required as the support for SSL is inbuilt into Windows. However you could provide your feedback here: Hello Kaushal, I have a few android browers accessing my application which is hosted on iis 7.5 & browsers are throwing a error message saying: =============================================== Your connection to is encrypted with 256 bit encrption. The connection uses TLS 1.0 The connection is encrypted using AES_256_CBC…..with sha1 for msg authencation and rsa as the key exchange mechanism. The server does not support the TLS renegotiation extension ================================================ Need your suggestion on this..do i need to change the TLS to 1.2 in registry? please help. Could you verify if the necessary updates are installed on the server? Check this: support.microsoft.com/…/977377. Please read carefully the instructions on how to fix this. This will affect certain functionality. You may also consider posting this problem on Windows Security Forums to get a quicker and better response: social.technet.microsoft.com/…/home how to fix internet explorer 10 "this page can't be display" make sure tls and ssl protocols are enabled. but in internet option tls 1.0 and ssl 3.0 box is alreadly selected. @Deepak, that is a very vague issue description, Have you checked if the issue happens from other clients while accessing the same server? I had a problem, where TLS states were Grayed out in Win 7 64-bit OS. Sorry I am facing a problem when checking for a particular website its showing the page cannot be displayed. When moved to IE options , TLS states were Grayed out, How should I proceed further. Hello Santhosh. Looks like these settings are managed for your computer via group policy or you may want to consider launching IE as an administrator to give it a second try. hi Kumar, how if i enabling the TLS on Windows Server 2008 R2, is any error in the communication, whereas i enabling both SSL and TLS on IIS Service. 
can you explain a little bit how SSL and TLS work in encryption before 3-way TCP hanshake key. many thanks and this is a good topic. How can I enable TLS 1.1, 1.2 for my clients which are using windows application to connect to wcf service? I think browser specific settings to enable TLS 1.1, 1.2 are not applicable in thi case. Hello. I am on Windows 7 using wamp and opens SSL. How do i enable TLS 1.1 and 1.2? @DJ Danni OpenSSL has its own implementation for SSL/TLS. They don't rely on SCHANNEL. For OpenSSL I would suggest you to check their documentation on how to enable it. The recent version of OpenSSL does support TLS 1.1 & TLS 1.2. Here is my OpenSSL Version OpenSSL/0.9.8k openssl OpenSSL support enabled OpenSSL Library Version OpenSSL 0.9.8k 25 Mar 2009 OpenSSL Header Version OpenSSL 0.9.8r 8 Feb 2011 So is it possabole to use TLS 1.2 on that version? If so how do i update? @DJ Danni. From what I read, it is supported on 1.01 and higher versions..…/openssl-1.0.1-notes.html Not mean to discourage people, but with technology progressing all the time, this article needs update or clearly marked so. Hi, can any one help me on my below query. A client has had a security assessment conducted of the web servers in their environment. They want the servers to be configured to disable SSL version 2, and to only accept SSL ciphers greater than, or equal to, 128 bits. The web servers in the environment consist of Apache 2.2 on Red Hat Enterprise Linux 6, IIS 6 on Windows Server 2003, and IIS 7.5 on Windows Server 2008 R2. Please answer the following questions: 1. How do you test the servers to determine which SSL versions and ciphers are currently supported / accepted? Please describe the process. 2. What changes are needed for each of the web servers / operating systems to meet the client's requirements? Please be specific. what happens if you create tls 1.0, 1.1 and 1.2 keys in gpo for a mix win2k3, win2k8, win2k8r2, win2k12 environment? the keys for 1.1 and 1.2 would not enable features unsupported on win2k3, however would this cause issues if those keys now existed for gpo or is there a better method similar to enabling IE tls via gpo? (for server/apps/programs like iis) Hi We are using IE9 and TLS 1.0 is enable by default, if i will enable TLS 1.2 IT it will impact the other sites…? @Prashant – It wont! Hi Kaushal, Thanks for your prompt replied.. 🙂 TLS 1.2 what exactly its enable and how it is different from TLS 1.0..? As just want to check the effects before implementing the changes. There are lots of difference. TLS 1.2 is the latest version while TLS 1.0 is almost obsolete.. Most of the vulnerabilities that we encounter today are related to 1.0 There re several posts describing he differences. you could go through them. I have a question. With PCI standards dictating now that TLSv1 needs to be disabled on web servers in order to be compliant. Yet many users are still sitting on IE 8-10 which has support for TLSv1.1 and 1.2 disabled by default. Many users simply won't be able to use these PCI compliant sites. Does Microsoft plan to enable 1.1 and 1.2 in IE 8-10 in a patch? It has nothing to do with IE. IE supports whatever the underlying OS supports. Until Windows Server 2012 R2/Windows 8.1 TLS 1.1/1.2 was disabled by default. It is debatable to say whether to enable it or not. However a simple solution is that we can have the registry key switched to enable the support for these protocols. It has everything to do with IE. Other than IE8-10 every single other browser has TLSv1.1. 
ans 1.2 enabled by default. IE 8-10 has that support as well but has it turned off by default. And this has nothing to do with IIS, most web servers are running on apache, tomcat or other linux variants. As it stands now, in June 2016 when every single web site on the net that has to be PCI compliant will need to disable TLSv1. Then you will have a significant number of people who are still running IE 8-10 who will not be able to access these sites by default. @Joe As I said it depends on the underlying Operating System for most of the Microsoft products. IE supports, whatever the underlying OS supports. IE 8 on windows XP supports only TLS 1.0 as XP has no support for the later versions of TLS. The component is called SCHANNEL. We have either Client & Server specific registry keys. We need to understand why this decision was made. When IE came out with support for TLS 1.1 and TLS 1.2, it was the only browser which supported these protocols. As you mentioned most of the web servers run on Apache and they didn't support these protocols until 2010/2011 I believe. When Windows 8.1 was launched IT industry was slowly shifting to newer versions. FINALLY IT HAPPENED. So the decision was made to be keep it as disabled by default. Chrome and other major browsers started support for it around the same time (2011). Instead of waiting for a patch its a simple setting that can be enforced. One can write a script or if the machines are part of a domain, then push it via Group Policy. I'm not quite certain why we are discussing web servers. Hi All, Couple of website is hosted in system (Windows 2008 R2) in IIS7.0 environment and now working with SSL V3.0. I want to move to TLS1.2. I have setup the the TLS1.2 in registry and disable my existing SSLv2.0,3.0, TLS1.0 & TLS1.1. Also, enable the TLS1.2 in IE and disdable all other type of SSL & TLS. Now, when I browse the app, i am getting error. 1. Do I need to chnage anything in website configuration for TLS support? 2. The services are currently hosted using a Verisign SSL certificate. Can we use same certificate for TLS? @ Ambarish Correction: Windows Server 2008 R2 has IIS 7.5. Could you let me know the registry that you have added? You can email me at kaushalp@microsoft.com. There is no configuration in IIS to enable/disable a specific SSL protocol version. Ensure the browser you are using to communicate has TLS 1.2 enabled as well. Secondly, SSL Certificates are not dependent on the SSL protocol version. "Ensure the browser you are using to communicate has TLS 1.2 enabled as well." My point exactly. When every PCI compliant web server switches over to TLSv1.2 in June of next year the entire web will be filled with people running IE 8-10 saying the same thing. It's ridiculous. Only IE 8-10 has support for that protocol off by default. And it's doubly ridiculous that it is still that way if IE had support for those protocols before the other browsers. I might add as well that while IE 8-10 only use TLSv1 by default that it is a broken protocol and not secure at all anymore. As if the world needed another reason to get off of IE altogether. Yes it is absurd that it is switched off by default on earlier OS's. I am not quite certain of the future updates that will be provided. Rather waiting as I suggested earlier push these settings off to machines via group policies. You could also submit your feedback to the IE dev team here: connect.microsoft.com/IE So, we just went under a Scan and now it's complaining that we need to disable TLS 1.0. 
That throws a couple of quick wrenches. 1st, the back end app talks over TLS 1.0 to SQL Server so we need to keep the client side enabled. Second, if I disable the server side, I can't RDP into the server. I can use weaker RDP encryption, but that's not good either. How are we getting past this aging TLS v1.0? On top of that, there are still a lot of browsers that can't use 1.1 or 1.2. I'm more concerned about RDP. Oh, and I do NOT use RDP over the internet, VPN tunnel only, so I'm guessing you'll say it's ok to use the weaker encryption. Thanks. @Kevin. Every single browser currently supports TLS 1.1 & 1.2. The ones that don't support them are IE running on Windows XP/Windows server 2003. The Reason being both the OS don't have native support for TLS 1.1 & TLS 1.2. If you are using Windows Vista o higher, you shouldn't have any issues. At some point you need to make the transition, it is better you do it now than at a later stage. I'm not an RDP expert, but check if the RDP client supports TLS 1.1/1.2. Very helpful! Thanks! Hi, i'm tying to make a secure connection from a .net 4.5 context with WebRequest over TLS 1.1 / 1.2 for Windows Server 2008 sp2. All goes fine when i run the application from windows 8/8.1 host, but on Windows Server 2008 is fails with: "The underlying connection was closed: An unexpected error occurred on a send." I believe it is due to "Now let’s come to the point, on Windows the support for SSL/TLS protocols is tied to the SCHANNEL component" and "All the windows components/applications abide by this rule and can support only those protocols which are supported at the OS level." If this is true for .net 4.5, could you point me to a reference document confirming the implementation? Hello, I am trying to make some changes to pass the PCI Compliance Scan. For starters, I was able to disable TLS 1.0 on the Exchange server but now many users outside of the network are having trouble connecting to the server via Outlook, would you happen to know what would be the best way to get them to connect? Also, I am running a terminal server on Windows Server 2008 (not R2) and I see on the top of the page that the server does not support TLS 1.1 and 1.2. Is there a way around that? I am failing because of TLS 1.0 being enabled. Thanks, Saul @Saul Windows Server 2008 doesn't support TLS 1.1 & 1.2. The best solution for you is to upgrade to 2008 R2 or higher. i am currently running windows vista and internet explorer 9, and cannot see support for tls 1.1 or tls 1.2 when i go control panel-advanced internet options. what gives? would like to keep current os and browser. you said vist or higher should have no problems. @Kaushal – Very useful article – thanks. In your response of the 8th May, you say "If you are using Windows Vista o higher, you shouldn't have any issues." but the table above says that Vista doesn't support TLS1.1/1.2. Can you say which is correct? (Ron's message of the 22nd suggests the table is correct). Thanks. This is a good but very old article. Since some people are still commenting on this article it is worth mentioning a few things. Some things are no longer true. For example all major browsers such as Chrome and Firefox have had support for TLS1.1 and TLS 1.2 for quite some time now. @Kaushal, when you enable TLS1.1 and TLS1.2 in Schannel through the registry, it is not automatically also enabled in IE settings. While it is true that IE is dependent on whatever your version of Schannel.dll supports, these are two separate settings. 
For viewing/changing settings on Microsoft servers without editing the registry manually you can use the free tool IISCrypto that makes the necessary registry settings for you. When disabling older protocols on the server side, such as SSL2.0, SSL3.0 please be aware that many older devices may not support TLS, for example Windows XP, older versions of android and windows mobile etc. I find it quite shocking that if I set up the browsers on my windows 8.1 client to insist on secure connections (ie no support – not even fall-back – for RC4, and no support for anything earlier than TLS1.2, I can no longer access the MSDN website. That doesn't much matter for pages like this one, but it's a bit of an issue for, for example, downloads. Hi Kushal, Can I expect help in resolving this problem : stackoverflow.com/…/ssl-protocol-tweaking-at-operating-system-level-by-editing-registry-on-windows. Thanks in advance! Hi Kushal, Its really a good article and gave me a lot of information about SSL and TLS. I have a problem regarding TLS 1.2. One of the webservice hosted on a customer server is having TLS 1.2, to consume that webservice I was told to use TLS 1.2 in my application hosted in different server. How this can be done. I am using Windows 2008 R2 server. I have enabled tls 1.2 in my server and tried to initiate the requests to consume the customer server's web service. But it is erroring out "could not establish secure channel for SSL/TLS with authority" I had confirmation that MS are developing a hotfix for Windows 2008R2 to support TLS v1.1 and v1.2. Testing is still underway (90 day test cycles) so it could be soon released. Tracked down a browser not able to access a web page to the SSL cert used on the web server using TLS 1.0. Similar web server apps, when SSL uses TLS 1.2 can be accessed by browser with no problem. Question: who/how decides what TLS version to use with SSL cert? Is it baked into the SSL cert at the time the SSL cert is created? It's not in the app/web server, because the app/web server code is the same in both cases — only the SSL certs are different. @Aleks I have covered your answer in this post here: blogs.msdn.com/…/ssl-handshake-and-https-bindings-on-iis.aspx To answer your question, both client and server engage into a dialog (SSL Handshake) to decide the SSL protocol version. Hi Kaushal, I have a server 2003 machine and I'm researching what I need to do to get it to pass an SSL test (e.g. ssllabs.com). I see a lot of information about how server 2003 does not support TLS 1.1 and 1.2, but I'm also seeing a ton of information online regarding workarounds by editing the registry or various other things. e.g.: portal.chicagonettech.com/…/maximizing-ssl-security-for-windows-server-2003-ssl-tls.aspx That article appears to show only registry changes which appear to result in a grade of B from ssllabs.com, and if schannel doesn't support TLS 1.1 to begin with I don't see how any registry tweaks are going to change that. I'm not clear – is there any way to get server 2003 to disable the older insecure protocols and ciphers and only allow the current ones like TLS 1.1 and 1.2? Is the only option to upgrade the OS to a newer version? Thanks Since PCI DSS 3.1 will force retire TLS 1.0 from all e-commerce websites by end of June 2016, what will be Microsoft position regarding IE 9 on Vista? 
Will they upgrade SCHANNEL on Vista to add TLS 1.1/1.2 support or urge users to drop IE 9 and use Firefox or Chrome instead?…/Migrating_from_SSL_Early_TLS_Information%20Supplement_v1.pdf Hi Kaushal, QQ is TLS1.0 Client & Server side enabled by default on Windows 2003 and 2008? I notice that the key Client with the Dword "Enabled" (with value ffffffff) is not present by default. On Windows 2008R2 and 2012 I was able to find documentation that explicitly say is enabled by default, but for 2003 I found a bunch of kbs and notes on How to disable protocols. Thanks in advanced! sorry for the delayed response folks. Been away from blogging for quite some time. @Steve Windows Server 2003 and Windows Server 2008 do not have native support for TLS 1.1 and TLS 1.2. Even if you add a registry key it is of no use as the protocol itself is not recognized by the OS. There is also a reason for this. Both the protocols were proposed around 2006 and the industry started adopting this around 2010. The only solution for you would be to upgrade latest version which is Windows Server 2012 R2. @Valérie as I mentioned above TLS 1.1 and TLS 1.2 are not supported on Windows Vista and I don't see any investments being made for them to be made available on Vista. However, I am not on the product team which makes such decisions. I think it would be better if you could post a query for this on UserVoice: windows.uservoice.com/…/265757-windows-feature-suggestions @Fercha Yes, TLS 1.0 is enabled by default for client and server side on Windows Server 2003 and Windows Server 2008. You may not see any keyword in the registry for this as they are built-in. I am using IISCrypto at the last few years and until now all works fine with my sites Today I was asked by the PCI company to Disable TLS 1.1 AND TLS 1.0 and to enable TLS 1.2 From the moment that I enable only TLS 1.2 nobody from any browser and any pc can login to my site and the error msg is : Runtime Error Description: An exception occurred while processing your request. Additionally, another exception occurred while executing the custom error page for the first exception. The request has been terminated. Few months ago I did the same at my other sites and all works fine. I will really appreciate your help to help me understand what I am doing wrong and what should be done to solve this issue ASAP Btw- my server is windows server 2008R2 with the last updates. is this what the end user sees if they are trying to access a LTS1.2 encrypted portal with a browser that does not support TLS1.2? Hi, Our application wants to support TLS 1.2. I have configured in registry and also updated my configuration files but I have the IIS version as 7. Our OS is Windows 2008 R2 Enterprise edition. When I tried to execute my application, it is still taking TLSv1.0 as default. Do I need to upgrade my IIS manager to 7.5? Also am getting error if I tried to upgrade as Microsoft .net framework version 4.0 or greater is required to install 7.5 IIS express. Helping this issue would be highly appreciated. Thanks, Rameez @Rameez Raja IIS version is tied to Windows version. You are actually running IIS 7.5 as it is Windows Server 2008 R2. There are 2 things to note, there is a client stack and server stack. IIS consume the server stack configuration. In this blog when we modify the IE settings it is changing the settings only for IE and not IIS. You will have to modify the Even if you enable TLS1.2 it depends on the client connecting to the server. 
Basically you have to do this: Support article: support.microsoft.com/245030 In order to disable or enable TLS/SSL on the server side, registry has to be modified.> Any one please tell me how can i enable TLS 1.2 in excel vba to use it for a service call which uses TLS 1.2. Hi Kaushal, I have TLS 1.0 , 1.1 , TLS 1.2 enabled on both Client and Server and both have the same cipher order. So which protocol is agreed by both before exchange of actual messages ? I have read somewhere they use the highest possible protocol. Is this right ? Thanks, Mrunal How do POSReady OSes line up here? “It is just a secure layer running on top of HTTP.” No, it’s a layer on top of TCP. Any one please tell me how can i enable TLS 1.2 in excel vba to use it for a soap service call which uses TLS 1.2. Hi, I have a Windows Server 2003. IIS 6 is running on that and TLS 1.0 is available. I have a classic ASP based web application hosted on that server. It has a payment module that connects to PayPoint payment API via https query string from the server side. I checked the paypoint url in the ssllabs and it supports all transport protocols. The application stopped working since March 2016. I am not able to figure out the error. I initially thought it could be because of the TLS 1.2 version because most of the industry is heading towards that direction and thought that the issue is because the server is Windows 2003. However, the test payment link from PayPoint worked in FireFox browser and it showed that it used TLS 1.2 for the transaction. For a moment I lost my mind.. Does FireFox has the complete implementation of TLS 1.2 build within? So, if the browser or the client application is going to support TLS 1.2 does it really matter whether the server supports TLS 1.2 or not, windows server 2003 in this case? On the receiving end of the transport connectivity, PayPoint in this case, is it possible for the PayPoint software to put restrictions on the inbound connections to use only TLS 1.2. The ssllabs says the server supports all protocols. I am sorry for so many questions, but I no very little about this level of connectivity. Please provide any thoughts. Thanks Pandu An associated question to my previous question: Can a browser or a client application fully support or implement a Transport Layer Protocol (either directly or with the help of .NET framework, in the case of applications) even if the underlying Operating System does not support that version of protocol? Thanks Pandu Mozilla just announced plans to enable TLS 1.3 by default in Firefox 52 (currently scheduled for March, 2017). Can you talk about Microsoft’s plans for enabling TLS 1.3 in Schannel (to allow those of us running IIS web sites to use it)? Thanks. Hi Kaushal, Why my IE11 shows encrypted in connection properties..? Supose showing TLS something, right? Firefox now supports TLS 1.3 with version 49 it will be default on after version 49 0 – SSL 3.0 1 – TLS 1.0 (This is the current default for the minimum required version.) 2 – TLS 1.1 3 – TLS 1.2 (This is the current default setting in version 49) 4 – TLS 1.3 (This is the capable setting for version 49+) We need to provide https support to Windows XP clients with IE 6.0. We are using wininet API for internet communication. On XP SP3 after enabling TLS 1.0 https connection is established. But this doesn’t work on XP SP2 and lower. Is https connection possible on XP SP2 and lower? How can we find weather the IE and OS support Https connection or not? Please suggest solution. 
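On the .NET-framework question a little above: for SCHANNEL-based clients the operating system sets the ceiling (the framework cannot negotiate a version the OS does not implement), but within that ceiling the application chooses what to offer, and .NET 4.5 offers only SSL 3.0/TLS 1.0 by default. A minimal opt-in sketch (System.Net; the URL is a placeholder):

using System.Net;

// Opt the process into TLS 1.2 (with TLS 1.1 as a fallback) before any request is made.
ServicePointManager.SecurityProtocol =
    SecurityProtocolType.Tls12 | SecurityProtocolType.Tls11;

var request = (HttpWebRequest)WebRequest.Create("https://example.com/");
using (var response = (HttpWebResponse)request.GetResponse())
{
    // read the response as usual
}

Browsers such as Firefox ship their own TLS stack, which is why they can negotiate TLS 1.2 on operating systems whose SCHANNEL cannot.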
Hello, I want to know what are all the dependencies needed for upgrading TLS v1.0 to v1.2 for various web servers. Nice 1! Like a breath of fresh air. U turned on the light bulbs for me mate. Thanks Hi Kaushal, Due to PCI Compliance we had to upgrade the security certificate from SSL to TLS1.2 on the server. My web service resides on server(Windows 2008 R2) and am trying to connect it from the test client console app. I can access the web service and get the results through my client app which is in my local machine, which was already built successfully when the security certificate on server was Tls1.0. In my client console app, when I try to ADD the same ‘service’ or ‘web’ reference, am not able to add the service url to the client test application. I get the following error. There was an error downloading ‘’. The underlying connection was closed: An unexpected error occurred on a send. Unable to read data from the transport connection: An existing connection was forcibly closed Note: Both Test Console Application and Web Service are built on .Net 4.5 Any help is appreciated. Thank You..:) Does anyone know a way to get server 2008 r2 to be able to do tls 1.2 to paypal on the classic asp server side posts? I haev set the registry and in the code I have set option 9 to 128, etc. I still get channel error. Same code on my windows 10 pro IIS server works just fine. Can anyone give me some insight on how to update Openssl on a Windows Server 2012 R2? We are being impacted by the PCI industry changes and have not been able to get much support from our host provider. It seems we are running 0.9.8c and need to upgrade to 1.1.0c. Any thoughts or insight that you can share will be greatly appreciated. I tried to look on the consulting boards for a Openssl or TLS expert, but couldn’t find anyone in my query. Been struggling to fix this for 4 weeks now…please hlep. Thanks! Hi Kaushal, Nice article. I want to know by default in Internet Explorer TLS 1.0, TLS 1.1, TLS 1.2 is used/enabled. If i unchecked this all TLS then my website is not displaying giving error like : “This page can’t be displayed, Turn on SSL 3.0, TLS 1.0, TLS 1.1 and TLS 1.2 in Advanced settings and try connecting to website name” Can you please suggest why this is happening and other websites are running without TLS checked. If you can provide any idea then it will be great ASAP. Thanks is there any script to get this output of this ( security tab) from multiple servers at once. Windows Server 2008 SP2 now supports TLS 1.1 and TLS 1.2 It was pending for quite some time now. Thanks! I updated the article. Do you have any experience or are you aware of any issues for UWP apps? We have an UWP app running in Windows 10 that stopped working after the service we were connecting to disabled TLS1.0. We tried several things and we are not sure if the issue is on the app or on the network. But it seems there are no options for enforcing TLS1.2 on the WEB APIs for UWP, I am not familiar with UWP as such. But can provide pointers. What namespace are you using to initiate the TLS connection? The update made to this page today has broken the formatting in the table Also, does Google Chrome use it’s own built-in support for TLS 1.1 and 1.2, and if so, would this work on XP clients, or does it use the system libraries for this and so wouldn’t? Thanks Google chrome seems to use the SSL client stack of internet explorer for this and hence doesn’t support. Hi Kaushal, Thanks for sharing knowledge through this article and your support. 
Currently we ran through a problem. We updated TLS-1.2 in Windows-2012 Enterprise and we were accessing through RDP session from Windows 2008 R2-SP1. We lost our RDP session W2K8 -R2 throws an error while connecting RDP session. We used KB3080079 ( ). But still unable to open a RDP session. Any Suggestions? Its difficult to provide recommendations without much insights into the issue. I would suggest to perform checks on the client and server and ensure that TLS 1.2 is enabled for both Client and Server stacks. Take a network trace from both client and server to ensure that the SSL handshake is completing.
https://blogs.msdn.microsoft.com/kaushal/2011/10/02/support-for-ssltls-protocols-on-windows/?replytocom=4635
CC-MAIN-2017-43
en
refinedweb
Opened 8 years ago Last modified 19 months ago #8417 new enhancement CachedRepository support in TracMercurial Description I used trac with svn, and switched to mercurial. the revision table in postgres is no more updated with new changesets. So when I do a search in changesets, new changesets are not listed. Did I forgot to configure something? Attachments (6) Change History (27) comment:1 Changed 8 years ago by comment:2 Changed 8 years ago by comment:3 Changed 8 years ago by comment:4 Changed 7 years ago by comment:5 Changed 7 years ago by comment:6 Changed 7 years ago by I have developed a plugin called TracMercurialChangesetPlugin for fixing this issue. It allows you to sync revision table if you are using Mercurial. Then you can create a hook to keep it synced :) I hope it helps. I have mailed cboos about it, see If he can integrate into Trac-Mercurial. comment:7 Changed 7 years ago by 2miguel.araujo.perez That do you think about use FULLHASH as 'rev' in opposite NUMBER:SHORTHASH? I think only fullhash real unique. If repository very distributed then in different placess world changesets can be different NUMBER. comment:8 Changed 7 years ago by comment:9 Changed 7 years ago by comment:10 Changed 7 years ago by comment:11 Changed 7 years ago by comment:12 Changed 7 years ago by Changed 7 years ago by I have implemented a naive version of MecrurialCachedRepository using a backward-traversing resync which overrides the default resync algorithm. Changed 7 years ago by And this is the diff. comment:13 Changed 7 years ago by The above version is not complete yet. It's quick & dirty demo. I'm inspecting its bug(?) that newly commited/pushed changesets are not synced correctly after the resync, as well as ticket commit updates. Changed 7 years ago by Revised patch: applied the set diff algorithm by miguel.araujo.perez and now ticket commit updater works correctly Changed 7 years ago by Diff for the revised patch Changed 7 years ago by Revised again: error handling & branch display in the timeline Changed 7 years ago by Diff for the revised patch comment:14 Changed 7 years ago by comment:15 Changed 7 years ago by The remaining issue: - We need some efficient implementation like hg incoming/outgoingto find out which changesets should be synchronized. (The current "whole set comparison" would not scale well with very large repositories with more than 10k changesets.) One mistake: must delete import pdb; pdb.set_trace() in line 744 of the patched backend.py before applying the patch to the trunk. comment:16 Changed 7 years ago by Another idea: - Currently CachedRepositoryonly have one sync()method which is called by both 'resync' and 'commit ticket updater'. For the latter case, we can use hints from the commit ticket updater to determine which revisions should be synchronized since they are given as the command line arguments. We do not have to the same thing twice. We could add optional arguments to sync()method to handle it. No, I forgot to implement it ;-) More seriously, TracMercurial works quite well now even for big repositories, without a cache. However the cache would be needed for enabling the search in changesets and for getting faster and more accurate results for the timeline, so it's about time this gets implemented…
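The set-difference approach from comments 12-13, as an illustrative sketch (the function name and arguments are assumptions, not the actual Trac or TracMercurial API):

def missing_changesets(hg_repo, cached_hashes):
    """Full hashes present in the Mercurial repository but not yet in Trac's revision cache."""
    # Iterating a Mercurial localrepository yields revision numbers; hex() gives the full hash.
    in_repo = set(hg_repo[rev].hex() for rev in hg_repo)
    return in_repo - set(cached_hashes)

As comment 15 already notes, a whole-set comparison like this will not scale to repositories with tens of thousands of changesets, which is why something along the lines of hg incoming/outgoing is wanted instead.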
https://trac.edgewall.org/ticket/8417
CC-MAIN-2017-43
en
refinedweb
#include <ast.h> #include <ast.h> Inheritance diagram for unaryNode: This class represents most expression with one argument (unary operator expressions). The NodeType is Unary. Definition at line 3625 of file ast.h. Coord::Unknown Create a new unary expression. The new expression has the given operator and subexpression. The operator is given using its identifier from the parser. For most operators, this is simply the char representation. For example, we pass '+' to get the addition operator. For ambiguous operators and multiple-character operators, you need to look up the proper identifier in the Operators table. Operators Operators::table Referenced by clone(). Create a sizeof expression. This constructor differs in that it takes a type as the subexpression. [virtual] Destroy a unaryNode. idNode, commaNode, and callNode. 3736 of file ast.h. References unary. Constant expression evaluator. This method attempts to evaluate an expression at compile-time. This only yields a meaningful value when the leaves of the given expression are constants, enums, or other compile-time values (e.g., sizeof). The resulting value is stored on each exprNode, in the _value field. Each exprNode sublcass implements this method, calling it recursively when necessary. Implements exprNode. [inline] Definition at line 3699 of file ast.h. References _expr. Definition at line 3697 of file ast.h.. Definition at line 3698 of file ast.h. Definition at line 3704 of file ast.h. References _sizeof_type. Definition at line 3178 of file ast.h. References exprNode::_type. [static, inherited] Add integral promotions. This method takes an expression and calls typeNode::integral_promotions() on its type to determine if any apply. If they do, it inserts an implicit castNode above the input expression that represents this implicit conversion. [inline, virtual, inherited] Is l-value. Indicates if the expression is an l-value (that is, the left side of an assignment). Definition at line 3224. Definition at line 3226 of file ast.h. References exprNode::type(). Definition at line 3695 of file ast.h. References _op. Definition at line 3694 of file ast.h. Generate C code. Each subclass overrides this method to define how to produce the output C code. To use this method, pass an output_context and a null parent. Output a expression. Determine if parenthesis are needed. This method takes the associativity and precedence values of the enclosing expression and determines if parentheses are needed. exprNode::precedence(). Associativity and precedence. Determine the associativity and precedence of the expression. Each exprNode subclass overrides this method to provide the specific results. The default is highest precedence and left-associative. exprNode::parens() Reimplemented from exprNode. Report node count statistics. The code can be configured to gather statistics about node usage according to type. This method prints the current state of that accounting information to standard out. Definition at line 3706 of file ast.h. Definition at line 3703(). Reimplemented in operandNode. Definition at line 3179 of file ast.h.. Definition at line 3177 of file ast.h. References exprNode::_type. Referenced by tree_visitor::at_const(), exprNode::no_tdef_type(), and constNode::usual_unary_conversion_type(). Usual arithmetic conversions. This method takes two expressions and adds any casts that are necessary to make them compatible for arithmetic operations. 
It calls typeNode::usual_arithmetic_conversions(), passing the types of the expressions, to determine when the casts are needed. It inserts implicit castNode objects above the expressions for the casts. Definition at line 3183 of file ast.h. References exprNode::_value. Definition at line 3182 of file ast.h. Definition at line 3181 sub-expression Definition at line 3641 of file ast.h. Referenced by expr(), and get_expr(). the operator The operator object actually resides the Operators table. Definition at line 3637 of file ast.h. Referenced by op(). the sizeof type For sizeof expressions given with a type, this field holds that type. Definition at line 3647 of file ast.h. Referenced by get_sizeof_type(), and sizeof_type().
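Pulling the constructor description together, a hypothetical usage sketch (the parameter order and the '-' operator id are assumptions inferred from the text above, not taken from ast.h):

// Build the expression  -x  from an existing operand.
exprNode * make_negation(exprNode * operand)
{
    // Single-character operators are identified by their char representation;
    // ambiguous or multi-character operators are looked up in Operators::table instead.
    return new unaryNode('-', operand, Coord::Unknown);
}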
http://www.cs.utexas.edu/users/c-breeze/html/classunaryNode.html
CC-MAIN-2017-43
en
refinedweb
, script, and EL expressions in your template in order to generate parametrized text. Here is an example of using the system (the original snippet is missing here; a reconstructed sketch based on groovy.text.Template is shown below). The variable session is one of the default bound keys. More details are given in the documentation of groovy.servlet.ServletBinding. Here is some sample code using a servlet container: just get the latest Jetty jar, put this excerpt in a main method, organize the imports and start! Note that the servlet handler also knows how to serve *.groovy files and supports dumping. TODO Provide web.xml The TemplateServlet just works the other way around from the Groovlets
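A minimal reconstruction of that example, assuming groovy.text.SimpleTemplateEngine as the Template implementation (the exact snippet from the original page is unknown):

import groovy.text.SimpleTemplateEngine
import groovy.text.Template

def engine = new SimpleTemplateEngine()
// Both JSP-style scriptlets and ${} expressions may appear in the template text.
Template template = engine.createTemplate('Dear <%= firstname %>, your order ${order} has shipped.')
def binding = [firstname: 'Grace', order: 42]
assert template.make(binding).toString() == 'Dear Grace, your order 42 has shipped.'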
http://docs.codehaus.org/pages/viewpage.action?pageId=25056
CC-MAIN-2015-18
en
refinedweb
TechEd 2001 has arrived in Atlanta, and with it the hordes of sandaled programmers and besieged support staff. The attendees are spread over numerous hotels around the downtown area, with the conference itself being held at the Georgia World Conference Center Arriving at the airport I was told that it was very easy to find your way around - which it is. I just wasn't told how big the place is. I got off the plane and sneered at the train that went between the concourses, then sneered even more ambitiously at the moving walkways that stretch the length of the long corridors. I figured that I'm still reasonably young and fit and that a casual stroll would be good for me. From the arrival gate to the luggage claim area was a 20 minute walk. I figured this is about 2km (1.25 mile)! It was an amazing sight to watch the attendees slowly but inexorably take over the hotel foyers, bars and amenities. On Saturday afternoon the pool area was full of intensely bronzed and fit (and intensely bronzed and not-so-fit) holiday makers in tiny bathing suits, and one very pale, very skinny developer in long shorts and a T-shirt. By Sunday afternoon the view was altogether different, and a lot more disturbing. The attendees are spread over (I think) about 23 hotels and there is a steady stream of coaches provided to shuttle us to and from the conference centre. The strangest site is seeing the armed policeman standing guard at our pickup point each morning. I guess with most attendees carrying several thousand dollars worth of gadgets there could be some easy pickings. As developers we are not renowned for our commanding physical prowess in the face of danger. Commando teddy-bear test drop from the 17th floor of the Hilton. The choices for dining in downtown Atlanta around the hotels - as far as I can determine - boil down to Steak, Fajitas and Sushi. After an episode last year involving Sushi and Tequila that best remain forgotten, the choice is pretty much Sushi or Steak - but the later can be subclassified into Steak and Lager, Steak and Ale, and Steak. The adventurous can also try the burgers. Dinner time saw the area around the hotels hotels a-swarm with badged, bag carrying developers flowing between the eating establishments like ants, streaming between the establishments in visible lines, bumping into one another, forming clots at intersections, with each individual working toward the common goal of getting fed. It was a sight to behold. I was standing next to a guy who obviously was not an attendee and he looked at the hordes then shook his head and said "you know something is terribly wrong when downtown Atlanta is packed with pale skinned guys carrying laptops". Microsoft certainly knows how to put on a decent meal. Breakfast and lunch were all you can eat affairs, and in between sessions there was a constant supply of potato chips, muffins, Krispy Kream donuts, diet coke and the second best chocolate chip cookies I have ever tasted. For the health conscious (or merely guilty at heart) there was fruit, muesli bars, juice and TechEd brand water. David Cunningham flew in yesterday morning and promptly found the world longest escalator. He promised to show me tomorrow. I'm tingling with anticipation. Tuesday morning saw some very subdued developers quaffing serious amounts of water and coffee. No doubt the exertions of the night before (late night coding sessions? hearty debates about the new features in .NET? The Tabernacle?) took their toll. 
We all get Beta 2 CD's on Wednesday, but in the meantime is available for download from Microsoft. System requirements are far more modest than the original requirements for the PDC bits: a 450MHz CPU, W2K, 192Mb RAM, a 800 x 600, 256 color screen and 3Gb HDD space in total. A CD would also be handy if you plan on installing from the disks. Beta 2 is significantly different from Beta 1. Many of the namespaces have changed, and even some basic naming conventions (For example, WinForms are now Windows Forms). Everything from the System.Data namespace, delegates, keywords, and the IDE itself have all changed in degrees ranging from wide ranging API changes to more innocuous changes such as the addition or removal of underscores in names. The beta 2 IDE is much improved, both in terms of usability and stability, and companies can now create shipping applications (with a few limitations) using the ASP.NET Go Live license. Also announced at the keynote presentation was the availability of the UDDI developer tools, the Mobile Internet toolkit, and a peer-to-peer code snippet sharing service. Integrated within the IDE is a new peer-to-peer code snippet sharing service that allows a developer to enter a set of keywords in a dialog box and locate code snippets from other developer's machines. These code snippets can then be accessed across the 'net and pasted into the developers source code directly. It's essentially a Napster-style code sharing initiative. Once the keynote was over it was back to hands-on sessions and seminars. Today's talks built on yesterday's introductory talks. Breakfast and lunch were again a nice affair (mmm - cheesecake!) and after the break-out sessions there we had an 'ask the experts' open peer forum where we had the chance to speak directly to the MS guys and ask them anything from the smallest niggling question on CE SQL to questions on design and implementation of full e-commerce applications. I saw what David and I consider to be the worlds longest (and I think steepest) escalator. We rode it up and down with stupid grins on our faces. We also spent an entertaining few minutes throwing parachuted teddy bears off the top balcony at the hotel. Action photos will be posted soon. I've finally worked out the difference between "y'all" and "all y'all". Wednesday started with the usual breakfast of back bacon, eggs, fruit and something brown and unidentifiable. After that was more hands-on labs, more break-out sessions and more of the exhibitors trying everything they could to get their hands on your swipe card. The announcement of the Mobile Information Server (and related toolkit) means that developers can now write mobile applications in a device independent fashion. Extending the idea that ASP.NET applications no longer need to worry about handling the idiosyncrasies of various browsers, the Mobile Information Server releases developers from worrying about the capabilities of individual devices. If you are brave and have lots of spare time to wade through lots of marketing fluff you can read more here. Two things have really been evident in these last 2 days: Firstly, there are some seriously overweight developers, and secondly, the mood is really subdued. People aren't depressed, just, well, quiet. Maybe it's because .NET has been out for a year, so most of the attendees at least have an idea of what it's all about. This time last year we were all learning that C# was Cool and finding about about the amazing advances in ASP.NET. 
This years it's more about the fine tuning that has been going on, and a continuation of the evangelical message. Maybe it was also due to some of the higher profile guys not being in attendance. Chris Sells, Jeff Prosise and Jeff Richter weren't around, Tony Goodhew and Chris Anderson weren't there, and most disappointingly: no Kent Sharkey. The weather has been perfect, which is a huge disappointment. I was hoping for a tornado or two, or at least the remnants of a tropical cyclone. Obviously this statement is spoken with the brash bravado of someone who has never actually been near either of these two pieces of excitement. I was talking to a guy about Tornadoes and he told me a story about waiting for a flight in an airport in the south east of the States. He was waiting at the departure gate when two tornadoes were spotted. Everyone in the airport was moved into the center of the airport while the storms moved by, and when they were allowed back the plane that he had been about to board had been turned around 30 degrees. whoa. It was a Visual Basic thing - you're not really interested are you? It was pretty big, and took up the entire stadium at the conference center. I was a little worried when we entered the doors to find a whole bunch of mimes, but these were soon replaced by Blues Brothers look-alikes, jugglers, monocyclists and other performance artists. There was a ton of food and drink and a live bands, but unfortunately the accoustics were terrible, so you couldn't really hear them. The funniest thing about the whole night was that the helium balloons were all removed and popped after some guys tied beer bottles to clumps of balloons and released them. Little beer gondolas were floating around 100 feet above our heads. It was so cool. Apart from that it was a pretty quiet affair. It was a pity it was held indoors, since the weather was perfect. After being inside air conditioned conference halls all day it would have been nice to enjoy a southern summer evening outdoors. Thursday was the final day of the conference, and one that many people (me included) missed, which sucked because many of more interesting talks such as Nick Hodapp's and Ronald Laeremans' were scheduled for that day. The conference center entrance turned into a baggage warehouse. Once at the airport you could tell the attendees (I was about to say 'fellow geeks' but I figured that was harsh) by spotting the TechEd 2001 paraphernalia and VB.NET T-shirts. Overall it was a quiet affair. I talked to lots of people to gauge the general feeling and describe the tone of the conference and invariably the word was 'subdued'. Visual C++ developers in particular felt left out (again) because the 10 year anniversary of Visual Basic overshadowed everything. I think every VC++ developer in the house was gritting their teeth when speaker after speaker waxed lyrical about how wonderful and productive and powerful and scalable VB was. Hopefully PDC will bring VC++ back into the limelight. It doesn't seem 'sexy' to MS at the moment, which is nuts because VC++ is the most powerful of the .NET languages, and the only one that can be used to write native code. Server side .NET has a brilliant future and solves obvious problems, both in terms of code writing and management, and in application performance and deployment. 
Client side .NET apps may face the same uphill battle that client side Java apps are facing, so wouldn't it be prudent to push the excellent advances made in VC++ (language, compiler and IDE) to keep current developers happy and convince them that upgrading to VS.NET is a Good Thing? Maybe MS is worried this will send mixed messages about their future goals, but as a C++ developer I just want to use the best tools now, and when better tools come out I'll upgrade to them too. I wrote this while hanging out in LA after 20 hours of traveling with no sleep. I had a 10 hr layover followed by a 5hr red-eye to arrive in Toronto at 6am the following day. Needless to say I was not the most cheerful traveller at the airport. One of the first things that long haul air travel in cattle class gives you is a realisation of what it will be like when you are old and frail and living in a retirement village. The mind numbing expanse of time that soon ceases to be a moving quantity, but rather a static position with no beginning and no ending. The dimmed lights, the unchanging surroundings, and the close proximity to your neighbours giving immediate knowledge of their habits, their hopes and their lives all fuel the feeling that you have been here forever, and that you will continue to be here until your mind slowly fades away. Beyond the metaphysic there is also the stark reality of the hopelessness that sets in when you unwisely choose a window seat on a sold out flight. You sit. You watch. You wait. Your meals are bought to you in turn, and you watch with hopeless salivation as the surly stewards bring a meal that you know full well you will not enjoy, but which you nevertheless look forward to as a means of marking out the various segments of the journey. There have been meals before, and there will no doubt be meals ahead. You live in a world where there are only three states - either waiting for a meal, eating a meal, or suffering the after effects of a meal. You eat what you are given, and what is given is that same that is given to everyone else cramped in with you. You even engage in mind games with the person next to you in order to secure their blueberry crumble, only to realise that you, in turn, have lost your Caesar salad. The worst part - the very worst part - the part that makes you promise to yourself that you will never send your aging parents into a retirement home, is that once you wolf down your unpalatable meal you are forced to sit there with the decaying remains of the pasta-or-the-chicken congealing on the plate in front of you, and that the two people blocking your escape to the freedom of the isle also have the remnants of their meal similarly congealing, their tray tables down, and that you are not going anywhere until the stewards do the rounds to collect the empties. You have half an hour of concentrated bladder control to keep you occupied, and there is not a thing you can do about it. To be completely at the mercy of strangers who think of you as a seat number and not be able to do a thing about it is a sobering and mind expanding experience. If you ever feel that the time has come to put Ma and Pop in a home then you should first fly to Australia coach class - preferably using one of the cheaper, more conservative carriers - and have a good, long think about what you are about to do. 
Even when the meals are finally cleared, the tray tables returned to their upright position, and the complicated ballet of legs, arms, overhead lockers and headphone wires is negotiated, you still have nowhere to go. You cannot ring up friends and say 'I'm so bored I've started making sculptures out of my fingernail clippings'. No. Sit and stay you will. Enjoy the reruns of 'Friends' you will. Sleep comfortably you will not.
http://www.codeproject.com/Articles/1204/TechEd-Atlanta
CC-MAIN-2015-18
en
refinedweb
The script editor offers an objects/events listing pop-up, an object properties and methods list, and IntelliSense-like function parameter tooltips. I've been searching the net for a suitable and affordable solution that will allow me to embed scripting functionality in my C++ application. What I found was either not sufficient or cost a lot of money. So I've decided to make one of my own. After about a month of work and testing I've come up with this embeddable scripter. I made it a separate DLL, which is really easy to use, built around a single class, CScripter. In the header of your main application window add: #include "..\ScriptEditor\Scripter.h" Add member variable: CScripter scripter; Then when initializing add: scripter.CreateEngine("VBScript"); After that add your objects to the script: scripter.AddObject("MPlayer",(IDispatch *)m_mediaPlayer.GetControlUnknown()); scripter.AddObject("PlayButton", (IDispatch *)m_commandButton.GetControlUnknown()); scripter.AddObject("TabStrip",(IDispatch *)m_tabStrip.GetControlUnknown()); scripter.AddObject("SimpleObject",m_simpleObject.GetIDispatch(TRUE)); And optionally set the script text: scripter.scriptText = "MsgBox \"Test message\""; After that the script is ready to run; you may now do one of the following: // Run the script scripter.StartScript(); // Stop the script execution scripter.StopScript(); // Open the script editor window scripter.LaunchEditor(); // Reset the script (All added objects are removed and engine is recreated) scripter.Reset(); This is the first release so it's probably not bug free. I've done my best to track any bugs but there are always surprises. So I will post here any additions, patches and fixes as they come up.
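As a minimal sketch of how these calls fit together in a dialog-based MFC app (the dialog class, handler names and the m_mediaPlayer member are hypothetical; only the CScripter members shown above are assumed):

// Hypothetical wiring of CScripter into an MFC dialog (sketch only).
BOOL CMyDlg::OnInitDialog()
{
    CDialog::OnInitDialog();
    scripter.CreateEngine("VBScript");                    // pick the scripting language
    scripter.AddObject("MPlayer",
        (IDispatch *)m_mediaPlayer.GetControlUnknown());  // expose a control to scripts
    scripter.scriptText = "MsgBox \"Hello from script\""; // optional default script
    return TRUE;
}

void CMyDlg::OnRunScript()  { scripter.StartScript(); }   // e.g. a "Run" button handler
void CMyDlg::OnStopScript() { scripter.StopScript(); }
void CMyDlg::OnEditScript() { scripter.LaunchEditor(); }

void CMyDlg::OnSwitchEngine()
{
    scripter.Reset();                  // drops added objects and recreates the engine
    scripter.CreateEngine("JScript");  // assumption: other engine names are accepted like "VBScript"
    scripter.AddObject("MPlayer",
        (IDispatch *)m_mediaPlayer.GetControlUnknown());
}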
http://www.codeproject.com/Articles/4552/Embeddable-script-editor-for-MFC-applications?msg=1042250
CC-MAIN-2015-18
en
refinedweb
August 20, 2000 In this four-part tutorial, you will learn how to use NT Server 4.0's Performance Monitor and Microsoft Excel to monitor and analyze SQL Server performance. You will also learn how to use a SQL Server database to store your Performance Monitor logs. This tutorial assumes that you already know the basics of using Performance Monitor, Microsoft Excel, and of course, SQL Server. If you have Microsoft Excel 2000 instead, you should be able to follow along with few, if any, changes. While the Chart Mode of the Performance Monitor is not too bad a tool to visually analyze Performance Monitor results, it has a lot of limitations. Some of these limitations include the inability to easily manipulate the data, to analyze the data using various statistical functions, or to project the data into the future to help you predict future SQL Server resource needs. To make the job of analyzing and interpreting Performance Monitor data easier, we are going to learn how to use Microsoft Excel to perform this task. The focus of this article is on how to use Microsoft Excel to create charts and how to perform trend analysis using Performance Monitor data, not how to interpret the results. That will be covered in part four of this four-part tutorial. Before I begin, I'll just come out and say it: analyzing Performance Monitor data with Microsoft Excel is not the most elegant approach I have seen to analyzing data. It requires more manual work than I prefer, and it doesn't easily provide all the analysis I would like. But given my budget, and most DBAs' budgets, you may not be able to afford a better tool. I would prefer a tool dedicated to collecting and analyzing Performance Monitor data, but until then, I'll have to settle for Microsoft Excel. In the following sections you will learn the basics of how to use Microsoft Excel to create charts and how to perform trend analysis using Performance Monitor data. In order to follow this article, you should have a basic understanding of how to use Microsoft Excel. Before you can start analyzing Performance Monitor data using Microsoft Excel, you must first answer these important questions: Where to get the data from? If you have followed this series of articles, then you would know that I have previously suggested that you store your SQL Server Performance Monitor data in a SQL Server table. Storing your Performance Monitor data in SQL Server makes it convenient to store and manipulate your data. For example, you can create separate tables for each of the SQL Servers you want to monitor. And as you gather more data, you can append the data to the table, allowing you to store all of your historical data in one central location. You can also use queries to select only that data you want to export to Microsoft Excel. Of course, you don't need to store your data in SQL Server in order to analyze it with Microsoft Excel. You can store Performance Monitor data in several formats, including native Performance Monitor files, ASCII files, in a Microsoft Access database, or any database for that matter. No matter where you store your Performance Monitor data, you will need to select a location and use it as your central repository. It is important that all your data be handily available, and in a format easily accessible by Microsoft Excel. Which counters do you want to analyze? Most likely, you have collected more counters than you want to analyze. 
What you will want to do is select only a small handful of counters to analyze at any one time in Microsoft Excel. This is because putting too much data on the screen in Microsoft Excel makes it difficult to see what you are doing (the screen just gets too confusing). If you need to analyze more data than can comfortably fit on the screen, then you can analyze the data in groups of related counters. The actual number of counters you should analyze at any one time depends on your screen resolution (how much you can see on your screen) and how much data you are comfortable working with. For this article, I am going to assume you know what counters you want to analyze, so I won't mention specific ones at this time, although later you will see some examples I commonly use. But in part four of my series on Performance Monitor, I will discuss specific counters and what to look for. What time period do you want to analyze? Generally, there are three different time periods you will want to analyze: daily, monthly, and quarterly. Of course you can choose any time periods you want, but I find these three time frames useful for different reasons. Daily: A daily look lets me see what is happening on a per-hour basis, looking for daily patterns, peak times, and lull times. I am also looking for counters that indicate bottlenecks. When I am performance tuning for specific bottlenecks, I use daily data the most. I also use daily data to give me a look at how well balanced my hardware is, such as how evenly CPUs and physical disk arrays are being used. In some cases, I will even look at a range of a couple of hours if I am trying to diagnose a specific performance problem. Monthly: On a monthly basis, I am also looking for patterns, peak times, and lull times. Often, I can use the data to help me schedule database maintenance, such as indexing a database or running large DTS imports or exports. I don't usually use monthly data for bottleneck troubleshooting because the data is not granular enough. Quarterly: I use long term data for trend analysis, to help me "predict" future needs. For example, I want to predict how many users will be using my databases, how much physical disk space I will need, how much I/O capacity I will need, how much network bandwidth I will need, and so on. The more data you have here, the better your "predictions" will be. What time sampling do you want to use? As you probably know, when you use Performance Monitor to collect counter data, you can select how often data is collected. You will want to collect it often enough in order to get enough detail for daily-type analysis, but you don't want to have so much data that quarterly trend analysis gets bogged down. To get around this problem, you will want to collect data at a time interval detailed enough for daily analysis, but when you want to do monthly or quarterly analysis, you will want to aggregate it so that there is not too much data. And this is where storing your data on SQL Server comes in handy. For example, say you collect counter data every minute, and that you store this data in a SQL Server table. If you want to analyze daily data, you can select the time period you want to analyze and export it from SQL Server as is. But if you want to analyze data on a quarterly basis, you can use Transact-SQL to aggregate the data into hourly averages, and then export these to Excel. 
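For example, a query along the following lines could do the hourly roll-up before the export. The table and column names used here (PerfMonLog, SampleTime, CounterName, CounterValue) are only placeholders for whatever schema you used when storing the counter data in SQL Server:

-- Hypothetical schema: PerfMonLog(SampleTime datetime, CounterName varchar(255), CounterValue float)
-- Roll per-minute samples up into hourly averages for quarterly trend analysis.
SELECT
    CONVERT(varchar(13), SampleTime, 120) AS SampleHour,   -- 'yyyy-mm-dd hh'
    CounterName,
    AVG(CounterValue) AS AvgValue
FROM PerfMonLog
WHERE SampleTime >= '2000-01-01' AND SampleTime < '2000-04-01'   -- one quarter
GROUP BY CONVERT(varchar(13), SampleTime, 120), CounterName
ORDER BY SampleHour, CounterName

You can paste a query like this into the DTS Export Wizard (described below) instead of exporting a whole table, so only the aggregated rows end up in the Excel spreadsheet.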
If you don't want to aggregate your data using SQL Server, you can do so using a Microsoft Excel pivot table, as we will learn later in this article. You may have to experiment with different levels of granularity until you find the ones best for the types of analysis you want to perform. What scale do you want to use? Another issue you must address is what scale does each of the counters you want to analyze use. As you may know, some counters use a percent range, such as from 0% through 100%. Others use a quantity measurement, which can range from 0 through 10, or from 1 through 1,000,000. Scale is important because it is hard to analyze data that has significantly different scales at the same time. Generally, you will only want to analyze groups of data that have similar scales. If you need to analyze data that has different scales, one option is to use either SQL Server or Microsoft Excel to rescale the data so that all of the data fits the same scale. You may remember that the Performance Monitor Graph Mode does this automatically. If you do choose to rescale data, be careful to remember this, because once you begin analyzing data, it is easy to forget that you have rescaled the data, and you may misinterpret the resulting charts. Don't discount the importance of finding the best answers to these very basic questions before you begin analyzing your Performance Monitor data in Microsoft Excel, as they will greatly affect the success of your analysis. Once you answer all of the above questions, you are now ready to import your data into Microsoft Excel. As I mentioned earlier, there are many different ways to store your Performance Monitor data. For this article, I am going to assume that it is stored using SQL Server. If you are not storing your data in SQL Server, then you will have to export your data in a format that can be easily imported by Microsoft Excel. The easiest way to export Performance Monitor data from SQL Server to Microsoft Excel is to use the DTS Export Wizard, although this is not the only option. The DTS Export Wizard is handy because it steps you the process of exporting your data from SQL Server directly into a Microsoft Excel spreadsheet format. For the most part, you just need to following the screens to find out what to do. But if you are not familiar with this wizard, here are the basic steps: Using Enterprise Manager, right-click on the database that contains the data you want to export, the left-click on "All Tasks", and then left-click on "Export Data". This brings up the DTS Export Wizard. Click "Next" on the DTS Export Wizard introductory screen. In the "Choose a Data Source" screen, the "Source", "Server" name and "Database" name should already be correctly selected. If not, then select the correct options. Click "Next" to continue. In the "Choose a Destination" screen, The "Destination" option needs to be changed to "Microsoft Excel 8.0" (works for Excel 97 and 2000). In the "File Name" option, enter a path and file name for the Microsoft Excel file that will be exported. Click "Next" to continue. In the "Specify Table Copy or Query" screen, you must make a decision on what data you want to export. If you want to export all of your data (which would be unlikely in most cases) you would select the "Copy table(s) from the source database" option, which lets you select one or more entire tables to export to Microsoft Excel. 
Instead, you will probably want to select the "Use a query to specify the data to transfer" option, as this one allows you to selectively choose what data you want to export from your SQL Server table to your Microsoft Excel spreadsheet. Once you have made your choice, click "Next" to continue. Assuming you selected the "Use a query to specify the data to transfer" option in the previous step, the "Type SQL Statement" screen is displayed next, offering you two ways to enter a query in order to specify which data you want to export. The easiest way is to use the "Query Builder", which allows you to point-and-click to create a simple query selecting the data you want to export to Microsoft Excel. But if your query is complicated, such as when you want to aggregate the data before you export it, you will have to enter the SELECT statement manually in the "Query Statement" window. If you do this, I would recommend you write your SELECT statement using the Query Analyzer first, as using Query Analyzer makes writing and debugging the query much easier. Once the query is debugged, you can cut and paste it into the "Query Statement" window. Once you have entered a query (however you created it), click on "Next" to continue. In the "Select Source Tables" screen, you have the option to perform column mapping and transformations on the data. In most cases you will probably not need to do this, but it is available for advanced users. Let's assume you don't need this option, so click on "Next" to continue. You have now completed the DTS Export Wizard. At this point you can run the export immediately, or you can save it, or you can do both at the same time. If you plan on performing this same task over and over, you may want to save it as a DTS package. That way, you can edit the DTS package if you need to make any changes before the next time you use it. Let's assume for now that we want to save this DTS package as a SQL Server object, so select this option and click on "Next". In the "Save DTS Package" screen, enter a name for this DTS package and click on "Next". The "Completing the DTS Wizard" screen appears. To run the DTS package, click on "Finish". The DTS package should now run, and after several seconds, display a message telling you that it was successful. Click on the "OK" button to continue. Now that the data you selected from SQL Server has been exported to an Excel spreadsheet, you are ready to start analyzing the data graphically. In the next installment (part four of four parts), we will take a look at how to interpret SQL Server Performance Monitor counter data. Check back in September for part four.
http://www.databasejournal.com/features/mssql/print.php/1466971
CC-MAIN-2015-18
en
refinedweb
...making Linux just a little more fun!. So, where are we now? We know that some medium or other is required for communication between different processes. Similarly, when it comes to computer programs, we need some mechanism or medium for communication. Primarily, processes can use the available memory to communicate with each other. But then, the memory is completely managed by the operating system. A process will be allotted some part of the available memory for execution. Then each process will have its own unique user space. In no way will the memory allotted for one process overlap with the memory allotted for another process. Imagine what would happen otherwise! So, now the question - how do different processes with unique address space communicate with each other? The operating system's kernel, which has access to all the memory available, will act as the communication channel. Similar to our earlier example, where the glass with hot water is one process address space, the glass with cold water is another, and the glass with the larger capacity is the kernel address space, so that we pour both hot water and cold water into the glass with larger capacity. What next? There are different IPC mechanisms which come into use based on the different requirements. In terms of our water glasses, we can determine the specifics of both pouring the water into the larger glass and how it will be used after beign poured. OK, enough of glasses and water. The IPC mechanisms can be classified into the following categories as given below: Pipes were evolved in the most primitive forms of the Unix operating system. They provide unidirectional flow of communication between processes within the same system. In other words, they are half-duplex, that is, data flows in only one direction. A pipe is created by invoking the pipe system call, which creates a pair of file descriptors. These descriptors point to a pipe inode and the file descriptors are returned through the filedes argument. In the file descriptor pair, filedes[0] is used for reading whereas filedes[1] is used for writing. Let me explain a scenario where we can use the pipe system call: consider a keyboard-reader program which simply exits after any alpha-numeric character is pressed on the keyboard. We will create two processes; one of them will read characters from the keyboard, and the other will continuously check for alpha-numeric characters. Let us see how the filedes returned by pipe can be of use in this scenario: (Text version: kbdread-pipe.c.txt) /***** KEYBOARD HIT PROGRAM *****/ #include <stdio.h> #include <stdlib.h> #include <sys/types.h> #include <unistd.h> #include <pthread.h> #include <ctype.h> int filedes[2]; void *read_char() { char c; printf("Entering routine to read character.........\n"); while(1) { /* Get a character in 'c' except '\n'. */ c = getchar(); if(c == '\n') c = getchar(); write(filedes[1], &c, 1); if(isalnum(c)) { sleep(2); exit(1); } } } void *check_hit() { char c; printf("Entering routine to check hit.........\n"); while(1) { read(filedes[0], &c, 1); if(isalnum(c)) { printf("The key hit is %c\n", c); exit(1); } else { printf("key hit is %c\n", c); } } } int main() { int i; pthread_t tid1, tid2; pipe(filedes); /* Create thread for reading characters. */ i = pthread_create(&tid1, NULL, read_char, NULL); /* Create thread for checking hitting of any keyboard key. */ i = pthread_create(&tid2, NULL, check_hit, NULL); if(i == 0) while(1); return 0; } Save and compile the program as cc filename.c -lpthread. 
Run the program and check the results. Try hitting a different key every time. The read_char function simply reads a character other than '\n' from the keyboard and writes it to filedes[1]. We have the thread check_hit, which continuously checks for the character in filedes[0]. If the character in filedes[0] is an alpha-numeric character, then the character is printed and the program terminates. One major feature of pipe is that the data flowing through the communication medium is transient, that is, data once read from the read descriptor cannot be read again. Also, if we write data continuously into the write descriptor, then we will be able to read the data only in the order in which the data was written. One can experiment with that by doing successive writes or reads to the respective descriptors. So, what happens when the pipe system call is invoked? A good look at the manual entry for pipe suggests that it creates a pair of file descriptors. This suggests that the kernel implements pipe within the file system. However, pipe does not actually exist as such - so when the call is made, the kernel allocates free inodes and creates a pair of file descriptors as well as the corresponding entries in the file table which the kernel uses. Hence, the kernel enables the user to use the normal file operations like read, write, etc., which the user does through the file descriptors. The kernel makes sure that one of the descriptors is for reading and another one if for writing. I am not going to go into the details of the pipe implementation on the kernel side. For further reading, one can refer the books mentioned at the end of this article. FIFOs (first in, first out) are similar to the working of pipes. FIFOs also provide half-duplex flow of data just like pipes. The difference between fifos and pipes is that the former is identified in the file system with a name, while the latter is not. That is, fifos are named pipes. Fifos are identified by an access point which is a file within the file system, whereas pipes are identified by an access point which is simply an allotted inode. Another major difference between fifos and pipes is that fifos last throughout the life-cycle of the system, while pipes last only during the life-cycle of the process in which they were created. To make it more clear, fifos exist beyond the life of the process. Since they are identified by the file system, they remain in the hierarchy until explicitly removed using unlink, but pipes are inherited only by related processes, that is, processes which are descendants of a single process. Let us see how a fifo can be used to detect a keypress, just as we did with pipes. The same program where we previously used a pipe can be modified and implemented using a fifo. 
(Text version: write-fifo.c.txt) /***** PROGRAM THAT READS ANY KEY HIT OF THE KEYBOARD*****/ #include <stdio.h> #include <stdlib.h> #include <sys/types.h> #include <unistd.h> #include <pthread.h> #include <ctype.h> #include <sys/stat.h> #include <fcntl.h> #include <errno.h> extern int errno; void *read_char() { char c; int fd; printf("Entering routine to read character.........\n"); while(1) { c = getchar(); fd = open("fifo", O_WRONLY); if(c == '\n') c = getchar(); write(fd, &c, 1); if(isalnum(c)) { exit(1); } close(fd); } } int main() { int i; pthread_t tid1; i = mkfifo("fifo", 0666); if(i < 0) { printf("Problems creating the fifo\n"); if(errno == EEXIST) { printf("fifo already exists\n"); } printf("errno is set as %d\n", errno); } i = pthread_create(&tid1, NULL, read_char, NULL); if(i == 0) while(1); return 0; } Compile this program using cc -o write_fifo filename.c. This program reads characters (keypresses), and writes them into the special file fifo. First the program creates a fifo with read-write permissions using the function mkfifo. See the manual page for the same. If the fifo exists, then mkfifo will return the corresponding error, which is set in errno. The thread read_char continuously tries to read characters from the keyboard.Note that the fifo is opened with the O_WRONLY (write only) flag . Once it reads a character other than '\n', it writes the same into the write end of the fifo. The program that detects it is given below: (text version detect_hit.c.txt): /***** KEYBOARD HIT PROGRAM *****/ #include <stdio.h> #include <stdlib.h> #include <sys/types.h> #include <unistd.h> #include <pthread.h> #include <ctype.h> #include <errno.h> #include <fcntl.h> #include <sys/stat.h> extern int errno; void *check_hit() { char c; int fd; int i; printf("Entering routine to check hit.........\n"); while(1) { fd = open("fifo", O_RDONLY); if(fd < 0) { printf("Error opening in fifo\n"); printf("errno is %d\n", errno); continue; } i = read(fd, &c, 1); if(i < 0) { printf("Error reading fifo\n"); printf("errno is %d\n", errno); } if(isalnum(c)) { printf("The key hit is %c\n", c); exit(1); } else { printf("key hit is %c\n", c); } } } int main() { int i; i = mkfifo("fifo", 0666); if(i < 0) { printf("Problems creating the fifo\n"); if(errno == EEXIST) { printf("fifo already exists\n"); } printf("errno is set as %d\n", errno); } pthread_t tid2; i = pthread_create(&tid2, NULL, check_hit, NULL); if(i == 0) while(1); return 0; } Here, again, it first tries to create a fifo which is created if it does not exist. We then have the thread check_hit which tries to read characters from the fifo. If the read character is alphanumeric, the program terminates; otherwise the thread continues reading characters from the fifo.Here, the fifo is opened with the flag O_RDONLY. Compile this program with cc -o detect_hit filename.c. Now run the two executables in separate terminals, but in the same working directory. Irrespective of the order in which you run, look for the message fifo already exists on the console. The first program (either of the two) that you run will not give any error message for creation of the fifo. The second program that you run will definitely give you the error for creation of the fifo. In the terminal where you run write_fifo, give input to standard output from your keyboard. You will get the message regarding the key hit on the keyboard on the terminal running the executable detect_hit. Analyze the working of the two programs by hitting several keys. 
I have used two different programs for exhibiting the usage of fifos. This can be done within a single program by forking the routines which are called in the two program as threads. But I did this to show that unlike pipes, fifos can be used for communication between unrelated processes. Now try running the program again. You will get the message that the fifo already exists even when you first run either of the two programs. This shows that fifos are persistent as long as the system lives. That is, the fifos will have to be removed manually - otherwise they will be permanently recognized by the file system. This is unlike pipes which are inherited as long as the process that created the pipe is running. Once this process dies, the kernel also removes the identifiers (file descriptors) for the pipe from the the file tables. The usage is rather simple and the main advantage is that there is no need for any synchronization mechanism for accesses to the fifo. There are certain disadvantages: they can only be used for communication between processes running on the same host machine. Let us explore other IPC mechanisms to see what have they in store. Shared Memory is one of the three kinds of System V IPC mechanism which enables different processes to communicate with each other as if these processes shared the virtual address space; hence, any process sharing the memory region can read or write to it. One can imagine some part of memory being set aside for use by different processes. The System V IPC describes the use of the shared memory mechanism as consisting of four steps. Taken in order, they are: Let us examine the workings of the above system calls. Recall the keyboard hit program; we shall, once again, see another version of it, this time using the system calls associated with the shared memory mechanism. The code given below creates a shared memory area and stores the information of any key hit on the keyboard. Let us see the code first: (text version: write-shm.c.txt) #include <stdio.h> #include <stdlib.h> #include <sys/types.h> #include <sys/ipc.h> #include <sys/shm.h> #include <errno.h> #include <string.h> #include <ctype.h> extern int errno; #define SIZE 1 char *read_key; int shmid; int shared_init() { if((shmid = shmget(9999, SIZE, IPC_CREAT | 0666)) < 0) { printf("Error in shmget. errno is: %d\n", errno); return -1; } if((read_key = shmat(shmid, NULL, 0)) < 0) { printf("Error in shm attach. errno is: %d\n", errno); return -1; } return 0; } void read_char() { char c; while(1) { c = getchar(); if(c == '\n') { c = getchar(); } strncpy(read_key, &c, SIZE); printf("read_key now is %s\n", read_key); if(isalnum(*read_key)) { shmdt(read_key); shmctl(shmid, IPC_RMID, NULL); exit(1); } } } int main() { if(shared_init() < 0) { printf("Problems with shared memory\n"); exit(1); } read_char(); return 0; } Here we have a shared memory variable named read_key. The program first initializes the shared memory area read_key. This is done by generating a shared memory identifier shmid using the system call shmget. In the context of the program, the first parameter for shmget is 9999, which is the key. This key is used to allocate a shared memory segment. The second parameter, SIZE (defined as a macro with the value 1), suggests that the shared memory segment will hold only one of the type of the shared memory variable, that is, only 1 character. 
The IPC_CREAT flag (third parameter) suggests that a new shared memory segment has to be created, with read-write permissions (IPC_CREAT logically OR ed with 0666). This will return a valid shared memory segment identifier on successful allocation. The identifier will be stored in shmid. If shared memory segment allocation fails, then -1 is returned and the errno is set appropriately.The key which is used to get a shared memory segment can be generated randomly using the built-in function ftok to get a unique key. Refer to the manual page for the usage. Once the segment identifier is obtained, we have to attach the shared memory segment to some address. This is done with the shmat system call. This uses the segment identifier shmid as the first parameter. The second parameter is the address of the shared memory segment; when this is given as NULL (as in this program), the kernel will choose a suitable address. The third parameter is the flag specification which can be set if required or left as zero (see man page of shmdt for details). On success the shared memory segment is attached to read_key, otherwise -1 is returned along with the appropriate setting of the errno. If either shmget or shmat fails, the process terminates. On success from both system calls, we proceed by invoking the read_char function, which reads keyboard inputs other than '\n' ("Enter" key) and copies them to read_key in the shared memory. If the keyboard input is an alphanumeric character, the program stops reading inputs from the keyboard and the process terminates. We have another program running separately (it does not have to be in the same working directory) in the local system, which tries to read the data written in the shared memory area. The code is given below: (text version: read-shm.c.txt) #include <stdio.h> #include <stdlib.h> #include <sys/types.h> #include <sys/ipc.h> #include <sys/shm.h> #include <errno.h> #include <string.h> #include <ctype.h> extern int errno; #define SIZE 1 char *detect_key; int shmid; int shared_init() { if((shmid = shmget(9999, SIZE, 0444)) < 0) { printf("Error in shmget. errno is: %d\n", errno); return -1; } if((detect_key = shmat(shmid, NULL, SHM_RDONLY)) < 0) { printf("Error in shm attach. errno is: %d\n", errno); return -1; } // detect_key = NULL; return 0; } void detect_hit() { char c; c = *detect_key; while(1) { if(c != *detect_key) { if(isalnum(detect_key[0])) { printf("detect_key is %s\n", detect_key); shmdt(detect_key); shmctl(shmid, IPC_RMID, NULL); exit(1); } else { printf("detect_key is %s\n", detect_key); } c = *detect_key; } } } int main() { if(shared_init() < 0) { printf("Problems with shared memory\n"); exit(1); } detect_hit(); return 0; } Here, again, we have a shared memory initialization routine, which in fact does not create a new shared memory segment, but rather tries to get access to the existing shared memory segment. Compared to the previous program, the absence of IPC_CREAT flag suggests that we do not have to create a new shared memory segment. Instead, we simply have to get the corresponding segment identifier which can be used to attach the existing shared memory segment to some address. The mode 0444 restricts access to the shared memory segment to 'read only'. If no shared memory segment with key 9999 exists, we will get an error, which will be returned in errno. Once we get a valid identifier, we attach the shared memory segment to an address. 
While attaching, we use the flag SHM_RDONLY which specifies that the shared memory segment will be available only for reading. Next, we have the function detect_hit, which checks whether the pressed key was an alphanumeric character. The first program obviously has to run first; otherwise, the second program will show errors during the shared memory initialization, since it would be trying to get the identifier for a non-existent shared memory segment. The example shown here doesn't require any synchronization of access to the shared memory segment. That is because only one program writes into the shared memory and only one program reads from the shared memory area. But again, there is a problem here. What if the detection program (second one) is started long after some user has started hitting the keys (running the first program)? We will not be able to track the previously hit keys. The solution for this is left as an exercise to the readers. The entry in /proc/sysvipc/shm gives a list of shared memory in use. Readers can compare the entries before running, during running and after running the programs. Try to interpret the entry in /proc/sysvipc/shm. Once the two programs identify an alphanumeric character, they will terminate. As part of that process, the shared memory area is detached by using the system call shmdt. In fact, upon exiting the detaching is done automatically. But the shared memory segment is not destroyed. This has to be done by invoking the system call shmctl, which takes the identifier for the shared memory area as an argument, as well as the command IPC_RMID, which marks the shared memory segment as destroyed. This has to be done, otherwise the shared memory segment will persist in memory or in the swap space. At this point, observation of the entries in /proc/sysvipc/shm can be very useful. If the shared memory segment is not destroyed, the entries will reflect this. Try this by running the program without shmctl. This is the fastest IPC mechanism in the System V IPC services. However, the System V shared memory mechanism does not have any kind of scheme to ensure that one sees consistent data in the shared memory region. That is, a process can read a shared memory area at the same time another process is writing to it. The programmer can then come across inconsistent data while executing the programs. This suggests that accesses to the shared memory region have to be mutually exclusive; this is achieved via the use of the semaphore mechanism. We can make the semaphore access the memory region to lock it and then release the semaphore when done. The shared memory mechanism can be used when the processes access the shared memory areas at different times. One may wonder why we can't make the first process store the data in some file and make another process read the data from the file. But, reading data from a file involves things like accessing the disk and copying the data through the kernel via system calls. These things are not significant if we have a small amount of data to be read. But when we have large amounts of data stored in a file, then the load of the two activities mentioned above increases significantly and there is a considerable amount of reduction in the performance of the "reading program". This, again, has a solution, which is the use of memory mapping - something I'll discuss at another time. We have seen the use of the primary IPC mechanisms, and also one of the System V IPC mechanisms. We have seen some simple uses for pipes, fifos, and the shared memory mechanism. 
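As a rough sketch of the two ideas mentioned above - generating the key with ftok() rather than hard-coding 9999, and serializing access to the segment with a semaphore - something along these lines could be used. Error checking is omitted, and the path "/tmp/shmdemo" and project id 'R' are arbitrary choices, not part of the original programs:

#include <stdio.h>
#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <sys/sem.h>

#define SIZE 1

int main(void)
{
    /* Generate a key from an existing file instead of hard-coding 9999. */
    key_t key = ftok("/tmp/shmdemo", 'R');

    int shmid = shmget(key, SIZE, IPC_CREAT | 0666);
    char *data = shmat(shmid, NULL, 0);

    /* One semaphore, initialised to 1, acts as a mutex for the segment. */
    int semid = semget(key, 1, IPC_CREAT | 0666);
    union semun { int val; } arg;       /* semun must be declared by the caller */
    arg.val = 1;
    semctl(semid, 0, SETVAL, arg);

    struct sembuf lock   = {0, -1, SEM_UNDO};
    struct sembuf unlock = {0, +1, SEM_UNDO};

    semop(semid, &lock, 1);             /* enter the critical section */
    *data = 'x';                        /* read or write the shared byte */
    semop(semid, &unlock, 1);           /* leave the critical section */

    shmdt(data);
    /* A complete program would also remove the segment and the semaphore
       with shmctl(shmid, IPC_RMID, NULL) and semctl(semid, 0, IPC_RMID)
       once no process needs them any more. */
    return 0;
}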
But one may come across some very complex programs where these mechanisms will have to be used in a very strict and precise manner. Otherwise, the program, along with the programmer, will be dumped to /dev/null. There are still more things to be learned, not only by you but also by me. I shall come up with more in the next part, in which we will explore semaphores, message queues, memory mapping and sockets, and probably try to solve a few practical problems.
http://www.tldp.org/LDP/LGNET/104/ramankutty.html
CC-MAIN-2015-18
en
refinedweb
You may run into this error when trying to use the new DataPager control with a GridView or any other control other than the ListView control. For a moment I couldn't believe that the DataPager could not work with the GridView control. Then, after a few moments of research, I found out that the DataPager control requires the data control to implement the "IPageableItemContainer" interface. This is part of the System.Web.Extensions assembly that ships with .NET 3.5. This interface is currently implemented only by the ListView control. If you aren't aware, ListView and DataPager are the new server controls in ASP.NET 3.5. So, the solution is to use the DataPager control in combination with a ListView control. To read more, look up the "IPageableItemContainer" interface. Using the DataPager control with ListView The ListView control is new in ASP.NET 3.5. It allows you to customize the output that is rendered by the data control. Traditionally, a DataGrid / GridView control renders HTML tables, which can be a little heavy over the internet. The ListView allows you to define the LayoutTemplate and ItemTemplate, and optional templates such as the AlternatingItemTemplate and EmptyDataTemplate. Configuring the ListView Control 1. Start Visual Studio 2008 and select "File - New Website". 2. Choose ".NET Framework 3.5" in the dropdown on the top right of the "New Website" dialog and click "Ok" 3. This creates your website with a Default.aspx page. 4. Open the Default.aspx page and from the ToolBox, drag and drop a ListView control in the design view (The ListView control is under the "Data" section in the ToolBox. If you don't find it, make sure you selected ".NET Framework 3.5" as the version as mentioned in Point 2 above. Since this control is new in ASP.NET 3.5, it won't be available if you had selected ".NET Framework 2.0" or ".NET Framework 3.0" in the "New Website" Dialog.) 5. In the Design View, click the smart tag that appears next to the ListView control and select "DataSource" and choose "New DataSource" and then choose "Database" as the Data Source Type 
You can also manually define the Item Template and the LayOut Template. Configuring the DataPager Control 1. Once you are done with the ListView control, you can either chose "Enable Paging" and select "Next/Previous" or "Numeric". When you select this, it automatically generates the DataPager control with the following settings:- <asp:DataPager <Fields> <asp:NextPreviousPagerField </Fields> </asp:DataPager> 2. Since this is automatically placed within the "LayOut Template" of the ListView it automatically provides pagination for the ListView control. 3. You can also control the attributes of the DataPager such as PagedControlID, PageSize to manually control these settings. 4. The PagedControlID in the above case is automatically perceived as the "ListView1" within which the DataPager is placed. 5. The DataPager control can also be manually defined outside the scope of ListView1 and in that case, you need to manually set the "PagedControlID" to "ListView1" or whatever ID that is provided for the ListView control. 6. The PageSize is automatically set to 10 and you can manually override the same by providing the pagesize property. 7. The Fields need to be manually defined when the DataPager is manually added and configured to be used with the ListView control. There are 3 options, the "NextPreviousPagerField" or "NumericPagerField" or "TemplatePagerField". In case of "TemplatePagerField" you need to manually define the templates similar to custom paging. In fact it is custom paging implementation. Once you have configured the above and when you run the page, the ListView appears with the DataPager control as configured and as you check the source, you would be able to notice that since we selected "Bullted List" an <li> markup is generated instead of HTML tables, tds etc., The ListView is one control that allows a great amount of customization of the output. Scott Guthrie has written an excellent post on using the Listview to build a Product Listing page with screen shots which provides a great resource on using the List View and various customizations. Cheers !!! Some
http://geekswithblogs.net/ranganh/archive/2008/06.aspx
CC-MAIN-2015-18
en
refinedweb
[ ] Mikhail Fursov updated HARMONY-2056: ------------------------------------ Patch Info: [Patch Available] > [drlvm][jit] Jitrino.OPT's bpp.version=1 does not insert a polling code to every backedge > ----------------------------------------------------------------------------------------- > > Key: HARMONY-2056 > URL: > Project: Harmony > Issue Type: Bug > Components: DRLVM > Reporter: Mikhail Fursov > Priority: Minor > Attachments: 2056.fix > > > This bug is not critical, because it's about bbp.version=1 mode and default mode is bbp.version=6. > I found this problem while debugging more critical BBP bug and tried version=1 for my temporary needs. > In BBP documentation (in sources) I found: > // version of BBPolling: > // 0 - must be discarded in runImpl() > // 1 - insert bbpCFG at all backedges > > So I expecting to have BBP code generated for this test: > public class BBPBug { > public static void main(String[] args) { > new BBPBug().foo(); > } > void foo() { > int i=0; > while (i<100) { > synchronized (this) { > i++; > } > } > } > } > The BBP code is not generated. -- This message is automatically generated by JIRA. - If you think it was sent incorrectly contact one of the administrators: - For more information on JIRA, see:
http://mail-archives.apache.org/mod_mbox/harmony-commits/200611.mbox/%3C12341998.1162474521026.JavaMail.root@brutus%3E
CC-MAIN-2015-18
en
refinedweb
btparse - C library for parsing and processing BibTeX data files #include <btparse.h> /* Basic library initialization / cleanup */ void bt_initialize (void); void bt_free_ast (AST *ast); void bt_cleanup (void); /* Input / interface to parser */); /* AST traversal/query */); /* Splitting); /* Formatting names */); /* Construct tree from TeX groups */ bt_tex_tree * bt_build_tex_tree (char * string); void bt_free_tex_tree (bt_tex_tree **top); void bt_dump_tex_tree (bt_tex_tree *node, int depth, FILE *stream); char * bt_flatten_tex_tree (bt_tex_tree *top); /* Miscellaneous string utilities */ void bt_purify_string (char * string, btshort options); void bt_change_case (char transform, char * string, btshort options);). To understand this document and use btparse, you should already be familiar with the BibTeX language---more specifically, the BibTeX data description language. (BibTeX being the complex beast that it is, one can conceive of the term applying to the program, the data language, the particular database structure described in the original BibTeX documentation, the ".bst" formatting language, and the set of conventions embodied in the standard styles included with the BibTeX distribution. In this document, I'll stick to the first two meanings---the data language because that's what btparse deals with, and the program because it's occasionally necessary to explain differences between my parser and BibTeX's.) In particular, you should have a good idea what's going on in the following: @string{and = { and }, joe = "Blow, Joe", john = "John Smith"} @book(ourbook, author = joe # and # john, title = {Our Little Book}) If this looks like something you want to parse, but don't want to have to write your own parser for, you've come to the right place. Before going much further, though, you're going to have to learn some of the terminology I use for describing BibTeX data. Most of it's the same as you'll find in any BibTeX documentation, but it's important to be sure that we're talking about the same things here. So, some definitions: All text in a BibTeX file from the start of the file to the start of the first entry, and between entries thereafter. A string of letters, digits, and the following characters: ! $ & * + - . / : ; < > ? [ ] ^ _ ` | A "name" is a catch-all used for entry types, entry keys, and field and macro names. For BibTeX compatibility, there are slightly different rules for these four entities; currently, the only such rule actually implemented is that field and macro names may not begin with a digit. Some names in the above example: string, and. A chunk of text starting with an "at" sign ( @) at top-level, followed by a name (the entry type), an entry delimiter ( { or (), and proceeding to the matching closing delimiter. Also, the data structure that results from parsing this chunk of text. There are two entries in the above example. The name that comes right after an @ at top-level. Examples from above: string, book. A classification of entry types that allows us to group one or more entry types under the same heading. With the standard BibTeX database structure, article, book, inbook, etc. all fall under the "regular entry" metatype. Other metatypes are "macro definition" (for string entries), "preamble" (for preamble) entries, and "comment" ( comment entries). In fact, any entry whose type is not one of string, preamble, or comment is called a "regular" entry. { and }, or ( and ): the pair of characters that (almost) mark the boundaries of an entry. 
"Almost" because the start of an entry is marked by an @, not by the "entry open" delimiter. (Or just key when it's clear what we're speaking of.) The name immediately following the entry open delimiter in a regular entry, which uniquely identifies the entry. Example from above: ourbook. Only regular entries have keys. A name to the left of an equals sign in a regular or macro-definition entry. In the latter context, might also be called a macro name. Examples from above: joe, author. In a regular entry, everything between the entry delimiters except for the entry key. In a macro definition entry, everything between the entry delimiters (possibly also called a macro list). (Usually just "value".) The text that follows an equals sign ( =) in a regular or macro definition entry, up to a comma or the entry close delimiter; a list of one or more simple values joined by hash signs ( #). A string, macro, or number. (Or, sometimes, "quoted string.") A chunk of text between quotes ( ") or braces ( { and }). Braces must balance: {this is a {string} is not a BibTeX string, but {this is a {string}} is. ( "this is a {string" is also illegal, mainly to avoid the possibility of generating bogus TeX code--which BibTeX will do in certain cases.) A name that appears on the right-hand side of an equals sign (i.e. as one simple value in a compound value). Implies that this name was defined as a macro in an earlier macro definition entry, but this is only checked if btparse is being asked to expand macros to their full definitions. An unquoted string of digits. Working with btparse generally consists of passing the library some BibTeX data (or a source for some BibTeX data, such as a filename or a file pointer), which it then lexically scans, parses, and constructs an abstract syntax tree (AST) from. It returns this AST to you, and you call other btparse functions to traverse and query the tree. The contents of AST nodes are the private domain of the library, and you shouldn't go poking into them. This being C, though, there's nothing to prevent you from doing so except good manners and the possibility that I might change the AST structure in future releases, breaking any badly-behaved code. Also, it's not necessary to know the structural relationships between nodes in the AST---that's taken care of by the query/traversal functions. However, it's useful to know some of the things that btparse deposits in the AST and returns to you through those query/traversal functions. First off, each node has a "node type," which records the syntactic element corresponding to each node. For instance, the entry @book{mybook, author = "Joe Blow", title = "My Little Book"} is rooted by an "entry" node; under this would be found a "key" node (for the entry key), two "field" nodes (for the "author" and "title" fields); and associated with each field node would be a "string" node. The only time this concerns you is when you ask the library for a simple value; just looking at the text is not enough to distinguish quoted strings, numbers, and macro names, so btparse returns the nodetype as well. In addition to the nodetype, btparse records the metatype of each "entry" node. This allows you (and the library) to distinguish, say, regular entries from comment entries. Not only do they have very different structures and must therefore be traversed differently by the library, but certain traversal functions make no sense on certain entry metatypes---thus it's necessary for you to be able to make the distinction as well. 
That said, everything you need to know to work with the AST is explained in bt_traversal. btparse defines several types required for the external interface. First, it trivially defines a boolean type (along with TRUE and FALSE macros). This might affect you when including the btparse.h header in your own code---since it's not possible for the code to detect if there is already a boolean type defined, you might have to define the HAVE_BOOLEAN pre-processor token to deactivate btparse.h's typedef of boolean. Next, two enumeration types are defined: bt_metatype and bt_nodetype. Both of these are used extensively in the library itself, and are made available to users of the library because they can be found in nodes of the btparse AST (abstract syntax tree). (I.e., querying the AST can give you bt_metatype and bt_nodetype values, so the typedefs must be available to your code.) bt_metatype_t has the following values: BTE_UNKNOWN BTE_REGULAR BTE_COMMENT BTE_PREAMBLE BTE_MACRODEF which are determined by the "entry type" token. ( @string entries have the BTE_MACRODEF metatype; @comment and @preamble correspond to BTE_COMMENT and BTE_PREAMBLE; and any other entry type has the BTE_REGULAR metatype.) bt_nodetype has the following values: BTAST_UNKNOWN BTAST_ENTRY BTAST_KEY BTAST_FIELD BTAST_STRING BTAST_NUMBER BTAST_MACRO Of these, you'll only ever deal with the last three. They are returned when you query the AST for a simple value---just seeing the text isn't enough to distinguish between a quoted string, a number, and a macro, so the AST nodetype is supplied along with the text. Since BibTeX is essentially a system for glueing strings together in a wide variety of ways, the processing done to its strings is fairly important. Most of the string transformations are done outside of the lexer/parser; this reduces their complexity, and makes it easier to switch different transformations on and off. This switching is done with an "options" bitmap which can be specified on a per-entry-metatype basis. (That is, you can have one set of transformations done to the strings in all regular entries, another set done to the strings in all macro definition entries, and so on.) If you need finer control than that, it's currently unavailable outside of the library (but it's just a matter of making a couple functions available and documenting them---so bug me if you need this feature). There are three basic macros for constructing this bitmap: BTO_CONVERT Convert "number" values to strings. (The conversion is trivial, involving changing the type of the AST node representing the number from BTAST_NUMBER to BTAST_STRING. "Number" values are stored as strings of digits, just as they are in the input data.) BTO_EXPAND Expand macro invocations to the full macro text. BTO_PASTE Paste simple values together. BTO_COLLAPSE Collapse whitespace according to the BibTeX rules. For instance, supplying BTO_CONVERT | BTO_EXPAND as the string options bitmap for the BTE_REGULAR metatype means that all simple values in "regular" entries will be converted to strings: numbers will simply have their "nodetype" changed, and macros will be expanded. Nothing else will be done to the simple values, though---they will not be concatenated, nor will whitespace be collapsed. See the bt_set_stringopts() and bt_parse_*() functions in bt_input for more information on the various options for parsing; see bt_postprocess for details on the post-processing. 
The following code is a skeletal example of using the btparse library: #include <btparse.h> int main (void) { bt_initialize (); /* process some data */ bt_cleanup (); exit (0); } Please note the call to bt_initialize(); this is very important! Without it, the library may crash or fail mysteriously. You must call bt_initialize() before calling any other btparse functions. bt_cleanup() just frees the memory allocated by bt_initialize(); if you are careful to call it before exiting, and bt_free_ast() on any abstract syntax trees generated by btparse when you are done with them, then your program shouldn't have any memory leaks. (Unless they're due to your own code, of course!) btparse has several inherent limitations that are due to the lexical scanner and parser generated by PCCTS 1.x. In short, the scanner and parser are both heavily dependent on global variables, meaning that thread safety -- or even the ability to have two files open and being parsed at the same time -- is well-nigh impossible. This will not change until I get with the times and adopt ANTLR 2.0, the successor to PCCTS -- presuming of course that it can generate more modular C scanners and parsers. Another limitation that is due to PCCTS: entries with a large number of fields (more than about 90, if each field value is just a single string) will cause the parser to crash. This is unavoidable due to the parser using statically-allocated stacks for attributes and abstract-syntax tree nodes. I could increase the static allocation, but that would just decrease the likelihood of encountering the problem, not make it go away. Again, the chances of this changing as long as I'm using PCCTS 1.x are nil. Apart from those inherent limitations, there are no known bugs in btparse. Any segmentation faults or bus errors from the library should be considered bugs. They probably result from using the library incorrectly (eg. attempting to interleave the parsing of two files), but I do make an attempt to catch all such mistakes, and if I've missed any I'd like to know about it. Any memory leaks from the library are also a concern; as long as you are conscientious about calling the cleanup functions ( bt_free_ast() and bt_cleanup()), then the library shouldn't leak. To read and parse BibTeX data files, see bt_input. To traverse the syntax tree that results, see bt_traversal. To learn what is done to values in parsed entries, and how to customize that munging, see bt_postprocess. To learn how btparse deals with strings, see bt_strings (oops, I haven't written this one yet!). To manipulate and access the btparse macro table, see bt_macros. For splitting author names and lists "the BibTeX way" using btparse, bt_split_names. To put author names back together again, see bt_format_names. Miscellaneous functions for processing strings "the BibTeX way": bt_misc. A semi-formal language definition is in bt_language. Greg Ward <gward@pythontOOL home page, where you can get up-to-date information about btparse (and download the latest version) is You will also find the latest version of Text::BibTeX, the Perl library that provides a high-level front-end to btparse, there. btparse is needed to build Text::BibTeX, and must be downloaded separately. Both libraries are also available on CTAN (the Comprehensive TeX Archive Network,) and CPAN (the Comprehensive Perl Archive Network,). Look in biblio/bibtex/utils/btOOL/ on CTAN, and authors/Greg_Ward/ on CPAN. 
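As a slightly fuller sketch, a program that walks the entries of a file might look like the following. The parsing and traversal calls used here are the ones documented in bt_input and bt_traversal (check those pages for the exact prototypes), and the file name refs.bib is just an example:

#include <stdio.h>
#include <btparse.h>

int main (void)
{
   AST *entries, *entry, *field;
   char *fieldname;
   boolean ok;

   bt_initialize ();

   /* post-processing for regular entries: convert, expand, paste, collapse */
   bt_set_stringopts (BTE_REGULAR,
                      BTO_CONVERT | BTO_EXPAND | BTO_PASTE | BTO_COLLAPSE);

   entries = bt_parse_file ("refs.bib", 0, &ok);

   entry = NULL;
   while ((entry = bt_next_entry (entries, entry)))
   {
      if (bt_entry_metatype (entry) != BTE_REGULAR)
         continue;                      /* skip @string, @comment, @preamble */

      printf ("%s (%s)\n", bt_entry_key (entry), bt_entry_type (entry));

      field = NULL;
      while ((field = bt_next_field (entry, field, &fieldname)))
         printf ("  field: %s\n", fieldname);
   }

   bt_free_ast (entries);
   bt_cleanup ();
   return 0;
}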
For example, either of those archive directories will get you to the latest version of Text::BibTeX and btparse -- but of course, you should always access busy sites like CTAN and CPAN through a mirror.
http://search.cpan.org/~ambs/Text-BibTeX-0.69/btparse/doc/btparse.pod
CC-MAIN-2015-18
en
refinedweb
Any suggestions how I can distribute millions of files from 1 server to X number of other servers? I'm looking more into an algorithm on how to decide which server to send the file to.

Perhaps look at a distributed filesystem like GlusterFS. It sounds like it will meet all your requirements and will probably be more reliable than something that you hack up yourself.

Despite your impossible requirements, I'll scribble down my thoughts for other people in the future who aren't so hamstrung, based on my experiences doing this for Github. Distributing data across a number of locations (be they partitions, machines, data centres) based on a hash is a dangerous undertaking, for two reasons:

On the other hand, having a lookup table for all your files makes these problems go away. When you say "no database", I'm betting you insert an implicit "SQL" before "database". However, there is a whole other world of databases out there that have nothing to do with SQL, and they are perfect for this situation. They're known as "key-value stores", and if you're dead keen on going ahead with building this boondoggle yourself, then I'd highly recommend using one (I've got experience with Redis, but they all seem pretty reasonable). Ultimately, though, if you go ahead with the "all hashes, all the time" system and then hack around the problems inherent in it (there are solutions, just not real awesome ones) all you will end up with, at the end of the day, is a half-assed, botchy, non-feature-complete version of GlusterFS. If you need a large amount of storage, growable over time, distributed across multiple physical machines, in a single namespace, I really would recommend it over anything you can build yourself.

If you still want to hack it, do an md5sum on each file and then hash the output to your X boxes. If you have two boxes: 0*-7* go to box one, 8*-f* go to box two. Or if you have 16 boxes: 00*-0f* go to box one, 10*-1f* go to box two, and so on. This works best for box counts that are powers of two (2, 4, 8, 16, ...). Keep in mind that shuffling things off is all nice and good, but you'll want to keep an index somewhere if you also need to retrieve this info (where did I put foo.txt??). A flat file pickle (in python) would work, but it won't scale as well as a DB for large amounts of data.

Can the other servers also send files? Are you in a "safe" environment? The Rocks clusters installation process has to fill rack after rack of compute nodes, each one installed on the fly from an initial image. Doing that linearly or through a single server would be a bottleneck. Rocks uses instead a little system called Avalanche, where the install images are served using p2p; as nodes come up, they also become servers that will be used to install new nodes. The result is a tree of servers, and the install images cascade through the racks very quickly. The overall latency is a logarithm of the number of nodes, multiplied by the time to install one node (the base for the logarithm depends on how many other nodes can be served from one that is already installed; log base 20 wouldn't be surprising...). You could imagine a similar strategy for copying out your files, but only if the destination servers would be willing to trust other servers for their copy.
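The md5sum-prefix scheme suggested a couple of answers above is easy to sketch in code. The following C program is illustrative only: it buckets on a simple FNV-1a hash of the file name rather than an md5sum of the file contents, and the 16-box count is just an assumed value, but the idea is the same -- a stable hash taken modulo the number of boxes plays the same role as the leading-hex-digit rule described above.

   /* Illustrative sketch only: choose a destination box for a file by
    * hashing its name and bucketing the hash.  A real deployment would
    * hash file contents (e.g. with md5sum); FNV-1a is used here only to
    * keep the example self-contained.  NUM_BOXES is an assumed value. */
   #include <stdint.h>
   #include <stdio.h>

   #define NUM_BOXES 16            /* works best as a power of two */

   static uint64_t fnv1a (const char *s)
   {
       uint64_t h = 0xcbf29ce484222325ULL;     /* FNV offset basis */
       while (*s) {
           h ^= (unsigned char) *s++;
           h *= 0x100000001b3ULL;              /* FNV prime        */
       }
       return h;
   }

   static unsigned box_for (const char *filename)
   {
       /* With a power-of-two box count this has the same effect as the
        * prefix rule above, except that it buckets on the low bits of
        * the hash instead of its leading hex digit. */
       return (unsigned) (fnv1a (filename) % NUM_BOXES);
   }

   int main (void)
   {
       const char *files[] = { "foo.txt", "bar.jpg", "baz.dat" };
       unsigned i;
       for (i = 0; i < 3; i++)
           printf ("%s -> box %u\n", files[i], box_for (files[i]));
       return 0;
   }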
http://serverfault.com/questions/97673/distribute-files-across-x-number-of-servers/97676
CC-MAIN-2015-18
en
refinedweb
EMF Compare/Release Review/2.1.0

Kepler Release Review - EMF Compare 2.1

Release Highlights

EMF Compare 2.0 was a full overhaul of the design, architecture and code of the project. EMF Compare 2.1 comes to fill the functionality gap between 1.3 and 2.0, especially regarding content matching and graphical comparisons.

New and noteworthy

Content Matching

EMF Compare 2.0 only supported models that presented IDs of some sort: XMI ID, functional ID... 2.1 re-introduces support for other models through a content matching strategy. By default, EMF Compare will try and match objects through their ID, delegating to the content matcher if none can be found. This can be altered to avoid using identifiers entirely when needed.

Graphical comparison

EMF Compare 1.3 provided basic support for GMF diagram comparisons and difference visualization. This basic implementation was dropped altogether in the 2.0 release as the architecture overhaul made it obsolete. 2.1 re-introduces the graphical comparison with much more complete support on all aspects: comparison, visualization, merging... Here are a few key points of these enhancements:

- Uses the same formalism as the semantic comparison: color, interest areas and place-holders, highlighting on selection.
- Displays one difference at a time (on double-click) but with its context

Conflict detection

The conflict detection has been largely revamped in order to properly detect complex conflicts and pseudo conflicts (identical changes on both sides of the comparison are conflicting, but can be auto-merged... and as such are detected as "pseudo" conflicts). This also allows for much better results with extensions such as the UML or GMF specific supports.

Enhanced UI

The comparison UI has been enhanced to provide much more precise and intelligible information. Among these improvements, the most notable are:

- A number of grouping options - This feature allows you to group differences together in the structural view according to a set predicate. By default, EMF Compare provides three distinct grouping strategies:
  - Default: Do not try and group differences together, display them as they were detected.
  - By Kind: Group differences according to their kind (additions, deletions, moves, changes).
  - By Side: Group differences according to their side: left or right, and a special group regrouping differences in conflict with other differences.
- A comprehensive set of filters - This feature allows you to filter differences out of the structural view according to a set predicate. By default, EMF Compare provides five distinct filters:
  - Pseudo conflicts differences: Filter out all pseudo conflicts differences (only in 3-way comparison). Enabled by default.
  - Identical elements: Filter out all identical elements (elements with no differences). Enabled by default.
- Background comparisons - All comparisons launched from the UI are now executed as background operations. This will avoid the usual "frozen" Eclipse for long-running comparisons.
- Direct edit - Textual differences can now be directly edited in the comparison editor for users that would rather use a mix of the left and right values instead of taking one or the other.

Customizable UI

One of the main limitations of the 1.* stream was its "locked" UI. EMF Compare 2.1 introduces a lot of customization options for the visualization of the differences and model elements.
- Users can introduce new filters through the org.eclipse.emf.compare.rcp.ui.filters extension point. Filters can be contributed to either hide or show differences given a predicate. - New grouping options can be contributed through the org.eclipse.emf.compare.rcp.ui.groups extension point. The groups can be used to group differences together in the "structure" (top half) panel of the comparison UI. - Label providers can be contributed to override or extend the behavior of the default EMF label providers through the org.eclipse.emf.compare.rcp.ui.accessorFactory extension point. This can be used to change the way elements are displayed in the compare UI, both in the structure (top half) or content (bottom half) panels of the comparison UI. Minimized Scope EMF Compare 2.0 introduced the mandatory architecture and API for the project to handle large input models... Yet only provided a default scope which loaded everything in memory and compared the input models as a whole. 2.1 builds upon these APIs to introduce a scoping mechanism capable of treating model fragments as first-class citizens, minimizing the input logical model to the bare minimal while never loading the whole model in-memory unless all of its fragments have changed. For example, assuming that we need to compare the three following models : Each of the three sides is an EMF model composed of 7 fragments. Origin is the common ancestor of left and right. The blue-colored fragments are those that actually present differences (so D and G have been modified in the "left" copy while only B has been modified in the "right" copy). In order to compare these three models together, we would normally need to load all 21 fragments in memory. However, with the help of the synchronization model we can narrow it down to the modified fragments only. What we'll really load, then, are the following fragments for each three sides : In other words, we are actually only loading 9 fragments out of the initial 21. This amounts to 58%. API A number of new APIs have been opened in order to facilitate the programmatic use of EMF Compare and consumption of its output comparison model. Pretty Printer The comparison model contains a lot of information, which might be hard to consume or read. 2.1 introduces APIs that will provide more user-friendly descriptions for both differences and matched elements. For example, here are the results for a given difference : System.out.println(diff); - ReferenceChangeSpec{reference=EClass.eSuperTypes, value=EClass@3190dc79 TitledItem, parentMatch=MatchSpec{left=EClass@72b398da Book, right=EClass@675926d1 Book, origin=EClass@28d4ff95 Book, #differences=3, #submatches=5}, match of value=MatchSpec{left=<null>, right=EClass@3190dc79 TitledItem, origin=<null>, #differences=1, #submatches=1}, kind=ADD, source=RIGHT, state=UNRESOLVED} EMFComparePrettyPrinter.printDifference(diff, System.out) - value TitledItem has been remotely added to reference eSuperTypes of object Book Comparison Configuration Most aspects of the comparison process can be customized before launching the comparison itself. All customizations can be made in a similar way through the EMFCompare class, they will be described in depth on the Developper guide. 
For example, replacing the Match engine would be done through:

   IMatchEngine customMatchEngine = new MyMatchEngine(...);
   EMFCompare.builder().setMatchEngine(customMatchEngine).build().compare(scope);

Or replacing the Diff engine could be done with:

   IDiffEngine customDiffEngine = new MyDiffEngine(...);
   EMFCompare.builder().setDiffEngine(customDiffEngine).build().compare(scope);

Separation of the RCP, IDE and core concepts

EMF Compare 2.1 further enhances the separation of concerns that 2.0 introduced, so that the core of the comparison process is all located in a single, fully standalone plugin that does not depend on Eclipse in the least. Furthermore, all code that does not depend on the IDE has been extracted into "rcp" namespaces; both RCP and RCP UI are in this case separated into their own isolated plugins.

Mergers extensibility

The mergers have been greatly enhanced since the 1.* stream of EMF Compare, though 2.0 did not allow for easy overriding, replacing or inheriting of the default mergers. EMF Compare 2.1 solves these concerns by introducing a more flexible merging mechanism, supporting any kind of combination of existing/new/overridden mergers along with a "batch" merging mechanism. Clients can refer to IMerger.Registry for more information on extending the default behavior.

Performance Enhancements

EMF Compare 2.1 introduced a minimized scoping mechanism, which greatly enhances both the memory footprint and comparison time of all comparisons targeting fragmented models. Furthermore, both aspects (memory footprint and overall time) have been fully profiled and enhanced for large model comparisons; this makes for a noticeable improvement on all comparisons, whether the input models are fragmented or not.
http://wiki.eclipse.org/index.php?title=EMF_Compare/ReleaseReview/Kepler&oldid=337000
CC-MAIN-2015-18
en
refinedweb
OLPC:Wiki From OLPC This is a wiki page about this editable site. We are currently running MediaWiki Version 1.13.3. If you are new to wikis, please read the Wiki getting started page for tips, tutorials, and people who can help you get going. Experienced wiki users, see our Style guide for how you can make your edits more helpful and efficient. For cross-population of the wiki with updates from RT, trac and the like, see OLPC:Wiki integration. Please leave notes about your wiki experiences, and comments or suggestions about this wiki, here. For more on children using wikis, see wikis for children. For more about wikis, see Ward Cunningham's lovely rundown of the principles he followed when developing the first wiki. skins The default skin has been changed to Shikiwiki. Common skins available are listed below: Shikiwiki This is a monobook clone developed by Simon Dorner, Helga Schmidt, daja77, and Sj. bugs and requests - A better-colored XO-icon for the user-login, along the top-right nav. The colors of the current one are a little bit off. - IE issues - The current layout breaks on certain pages in IE. For instance, when inside a pop-up window... (noticed when testing the new contributors-program database; see user:aaronk for details). Bug reporting and IE testing are appreciated. - The top tabs are transparent in IE; not intended. - The "translate" form dropdown is rounded in Safari, and looks out of place. - There is little in the way of hovertext, and a number of images lack alt tags. - Alignment of the namespace tab ('article', 'project' et al) is slightly imperfect; at some text-sizes it doesn't align with the body border. Monobook Monobook remains the default mediawiki skin, is close to shikiwiki, and the most robust of the other available skins. Plugins and extensions 2008 - Semantic MediaWiki and Semantic Forms - see Semantic MediaWiki - Newuserlog - new users have their own log now. useful long-term extension. - UserLoginLog - big brother, or your little brother, can observe logins as well as edits. Implemented to help note and evaluate vandalism; may be short-lived. - Cite - Now you can use <ref></ref> and <references/> to your heart's content. - Examples: see Guia OLPC Peru Parte II 2007 minor extensions written by user:sj - iframe - ability to 'embed' other web sites inside of wiki, sort of a 'portal' effect. - Examples: See MediaWiki. - ref - add footnote references to dialog or research text. - Examples: See wikipedia. - Youtube - put a video-id between <youtube> tags - Google Video : put a video id between <gvideo> - inputbox - prefill a target page with a template, to help people start a new type of page or otherwise give them an easy form within the wiki to fill out. - Examples: see starting a page and User talk:Sj for examples. - gitembed - You can now embed plaintext documents from git, such as readme files and man pages, config pages, specs, &c, using the <gitembed> extension. - OLPCgitembedurl - Examples: see Talk:rainbow - Next up: adding a patch to the extension so that it knows how to find the revision # of the document it's transcluding. Sj talk - The CSS for this extension is inappropriate; we should tuck the 'Welcome to gitembed' header beneath the main display rather than to the left of it. --Michael Stone 18:24, 4 June 2008 (EDT) - gspread - you can embed a google spreadsheet in a wiki page. Example: see OLPC talk:Wiki. - GSpreadEmbed - traclink - You can now embed a link to a trac ticket from the wiki, using <trac>NNNN</trac> or <trac>NNNN description</trac>. 
- olpcTracLink - Example : I can describe ticket #2129 (adding BlockParty to images) without elaborate linking to the original, and it finds the right page on dev. ImageMap ImageMap, Here is an example: <imagemap> Image:Foo.jpg|200px|picture of a foo poly 131 45 213 41 210 110 127 109 [[Display]] poly 104 126 105 171 269 162 267 124 [[Keyboard]] # Comment : rect takes two corners. rect 15 95 94 176 [[learning learning]] # Comment : circles are center + radius circle 57 57 20 [[OLPC|one laptop per child]] desc bottom-left </imagemap> Suggestions and comments Losing summaries I find that when I edit pages and add links, the process of asking me an arithmetic question frequently discards the summary that I entered, so I have to scroll down and enter it again. I haven't been able to determine a pattern to when this happens.--Mokurai - Does it ask you an arithmetic question when you are logged in? It is only supposed to ask when the edits are anonymous (part of the Mediawiki anti-spam measures). --Walter 08:25, 22 October 2006 (EDT) Better CAPTCHA As everyone knows, a text based captcha is insecure. In the case of this wiki, it is also annoying. Thus the solution: reCAPTCHA. Not only does it have an audio alternative, but it also helps digitize books. ffm 12:50, 1 January 2008 (EST) Searching for "FORTH" The Wiki Search function does not find the word "forth" on any page or page title. The Go function correctly goes to the page entitled "FORTH". My problem is that I have to use Google to search for references to the FORTH programming language in order to make them into links, now that I have created a FORTH page. forth site:wiki.laptop.org --Mokurai 01:55, 22 October 2006 (EDT) -) Make wiki Search CasE INsensitive We will all be better served if both Go and Search are not sensitive to case. Google works that way, so practically every participant here will be LESS surprised if case is ignored. How can this objective be pursued? Nitpicker 19:06, 9 December 2006 (EST) -) See also Wiki corrections - wiki edit toolbar is missing when editing pages. - the 'G1G1' logo in bottom right corner is out of date, no g1g1 program anymore.
http://wiki.laptop.org/go/Wiki/lang-es
CC-MAIN-2015-18
en
refinedweb
CVE 2010-2946 fs/jfs/xattr.c in the Linux kernel before 2.6.35.2 does not properly handle a certain legacy format for storage of extended attributes, which might allow local users to bypass intended xattr namespace restrictions via an "os2." substring at the beginning of a name. See the CVE page on Mitre.org for more details.
https://bugs.launchpad.net/bugs/cve/2010-2946
CC-MAIN-2015-18
en
refinedweb
% % (c) The AQUA Project, Glasgow University, 1993-1998 % \section[SimplUtils]{The simplifier utilities} \begin{code} module SimplUtils ( -- Rebuilding mkLam, mkCase, prepareAlts, bindCaseBndr, -- Inlining, preInlineUnconditionally, postInlineUnconditionally, activeInline, activeRule, inlineMode, -- The continuation type SimplCont(..), DupFlag(..), ArgInfo(..), contIsDupable, contResultType, contIsTrivial, contArgs, dropArgs, countValArgs, countArgs, splitInlineCont, mkBoringStop, mkLazyArgStop, contIsRhsOrArg, interestingCallContext, interestingArgContext, interestingArg, mkArgInfo, abstractFloats ) where #include "HsVersions.h" import SimplEnv import DynFlags import StaticFlags import CoreSyn import qualified CoreSubst import PprCore import CoreFVs import CoreUtils import CoreArity ( etaExpand, exprEtaExpandArity ) import CoreUnfold import Name import Id import Var ( isCoVar ) import NewDemand import SimplMonad import Type hiding( substTy ) import Coercion ( coercionKind ) import TyCon import Unify ( dataConCannotMatch ) import VarSet import BasicTypes import Util import MonadUtils import Outputable import FastString import Data.List \end{code} %************************************************************************ %* * The SimplCont type %* * %************************************************************************ A SimplCont allows the simplifier to traverse the expression in a zipper-like fashion. The SimplCont represents the rest of the expression, "above" the point of interest. You can also think of a SimplCont as an "evaluation context", using that term in the way it is used for operational semantics. This is the way I usually think of it, For example you'll often see a syntax for evaluation context looking like C ::= [] | C e | case C of alts | C `cast` co That's the kind of thing we are doing here, and I use that syntax in the comments. -- C `cast` co OutCoercion -- The coercion simplified SimplCont | ApplyTo -- C arg DupFlag InExpr SimplEnv -- The argument and its static env SimplCont | Select -- case C of alts DupFlag InId [InAlt] SimplEnv -- The case binder, alts, and subst-env SimplCont -- The two strict forms have no DupFlag, because we never duplicate them | StrictBind -- (\x* \xs. e) C InId [InBndr] -- let x* = [] in e InExpr SimplEnv -- is a special case SimplCont | StrictArg -- e C OutExpr -- e; *always* of form (Var v `App1` e1 .. 
`App` en) CallCtxt -- Whether *this* argument position is interesting ArgInfo -- Whether the function at the head of e has rules, etc SimplCont -- plus strictness flags for *further* args data ArgInfo = ArgInfo { ai_rules :: Bool, -- Function has rules (recursively) -- => be keener to inline in all args ai_strs :: [Bool], -- Strictness of arguments -- Usually infinite, but if it is finite it guarantees -- that the function diverges after being given -- that number of args ai_discs :: [Int] -- Discounts for arguments; non-zero => be keener to inline -- Always infinite } instance Outputable SimplCont where ppr (Stop interesting) = ptext (sLit "Stop") <> brackets (ppr interesting) ppr (ApplyTo dup arg _ cont) = ((ptext (sLit "ApplyTo") <+> ppr dup <+> pprParendExpr arg) {- $$ nest 2 (pprSimplEnv se) -}) $$ ppr cont ppr (StrictBind b _ _ _ cont) = (ptext (sLit "StrictBind") <+> ppr b) $$ ppr cont ppr (StrictArg f _ _ cont) = (ptext (sLit "StrictArg") <+> ppr f) $$ ppr cont ppr (Select dup bndr alts _") ------------------- mkBoringStop :: SimplCont mkBoringStop = Stop BoringCtxt mkLazyArgStop :: CallCtxt -> SimplCont mkLazyArgStop cci = Stop cci ------------------- contIsRhsOrArg :: SimplCont -> Bool contIsRhsOrArg (Stop {}) = True contIsRhsOrArg (StrictBind {}) = True contIsRhsOrArg (StrictArg {}) = True contIsRhsOrArg _ = False ------------------- contIsDupable :: SimplCont -> Bool contIsDupable (Stop {}) = True contIsDupable (ApplyTo OkToDup _ _ _) = True contIsDupable (Select OkToDup _ _ _ _) = True (CoerceIt _ cont) = contIsTrivial cont contIsTrivial _ = False ------------------- contResultType :: SimplEnv -> OutType -> SimplCont -> OutType contResultType env ty cont = go cont ty where subst_ty se ty = substTy (se `setInScope` env) ty go (Stop {}) ty = ty go (CoerceIt co cont) _ = go cont (snd (coercionKind co)) go (StrictBind _ bs body se cont) _ = go cont (subst_ty se (exprType (mkLams bs body))) go (StrictArg fn _ _ cont) _ = go cont (funResultTy (exprType fn)) go (Select _ _ alts se cont) _ = go cont (subst_ty se (coreAltsType alts)) go (ApplyTo _ arg se cont) ty = go cont (apply_to_arg ty arg se) apply_to_arg ty (Type ty_arg) se = applyTy ty (subst_ty se ty_arg) apply_to_arg ty _ _ = funResultTy ty ------------------- countValArgs :: SimplCont -> Int countValArgs (ApplyTo _ (Type _) _ cont) = countValArgs cont countValArgs (ApplyTo _ _ _ cont) = 1 + countValArgs cont countValArgs _ = 0 countArgs :: SimplCont -> Int countArgs (ApplyTo _ _ _ cont) = 1 + countArgs cont countArgs _ = 0 contArgs :: SimplCont -> ([OutExpr], SimplCont) -- Uses substitution to turn each arg into an OutExpr contArgs cont = go [] cont where go args (ApplyTo _ arg se cont) = go (substExpr se arg : args) cont go args cont = (reverse args, cont) dropArgs :: Int -> SimplCont -> SimplCont dropArgs 0 cont = cont dropArgs n (ApplyTo _ _ _ cont) = dropArgs (n-1) cont dropArgs n other = pprPanic "dropArgs" (ppr n <+> ppr other) -------------------- splitInlineCont :: SimplCont -> Maybe (SimplCont, SimplCont) -- Returns Nothing if the continuation should dissolve an InlineMe Note -- Return Just (c1,c2) otherwise, -- where c1 is the continuation to put inside the InlineMe -- and c2 outside -- Example: (__inline_me__ (/\a. 
e)) ty -- Here we want to do the beta-redex without dissolving the InlineMe -- See test simpl017 (and Trac #1627) for a good example of why this is important splitInlineCont (ApplyTo dup (Type ty) se c) | Just (c1, c2) <- splitInlineCont c = Just (ApplyTo dup (Type ty) se c1, c2) splitInlineCont cont@(Stop {}) = Just (mkBoringStop, cont) splitInlineCont cont@(StrictBind {}) = Just (mkBoringStop, cont) splitInlineCont _ = Nothing -- NB: we dissolve an InlineMe in any strict context, -- not just function aplication. -- E.g. foldr k z (__inline_me (case x of p -> build ...)) -- Here we want to get rid of the __inline_me__ so we -- can float the case, and see foldr/build -- -- However *not* in a strict RHS, else we get -- let f = __inline_me__ (\x. e) in ...f... -- Now if f is guaranteed to be called, hence a strict binding -- we don't thereby want to dissolve the __inline_me__; for -- example, 'f' might be a wrapper, so we'd inline the worker interesting (Select _ bndr _ _ _) | isDeadBinder bndr = CaseCtxt | otherwise = ArgCtxt False 2 -- If the binder is used, this -- is like a strict let interesting (ApplyTo _ arg _ cont) | isTypeArg arg = interesting cont | otherwise = ValAppCtxt -- Can happen if we have (f Int |> co) y -- If f has an INLINE prag we need to give it some -- motivation to inline. See Note [Cast then apply] -- in CoreUnfold interesting (StrictArg _ cci _ _) = cci interesting (StrictBind {}) = BoringCtxt interesting (Stop cci) =. ------------------- mkArgInfo :: Id -> Int -- Number of value args -> SimplCont -- Context of the call -> ArgInfo mkArgInfo fun n_val_args call_cont | n_val_args < idArity fun -- Note [Unsaturated functions] = ArgInfo { ai_rules = False , ai_strs = vanilla_stricts , ai_discs = vanilla_discounts } | otherwise = ArgInfo { ai_rules = interestingArgContext fun call_cont , ai_strs = add_type_str (idType fun) arg_stricts , ai_discs = arg_discounts } where vanilla_discounts, arg_discounts :: [Int] vanilla_discounts = repeat 0 arg_discounts = case idUnfolding fun of CoreUnfolding _ _ _ _ _ (UnfoldIfGoodArgs _ discounts _ _) -> discounts ++ vanilla_discounts _ -> vanilla_discounts vanilla_stricts, arg_stricts :: [Bool] vanilla_stricts = repeat False arg_stricts = case splitStrictSig (idNewStrictness fun) of (demands, result_info) | not (demands `lengthExceeds` n_val_args) -> -- Enough args, use the strictness given. -- For bottoming functions we used to pretend that the arg -- is lazy, so that we don't treat the arg as an -- interesting context. This avoids substituting -- top-level bindings for (say) strings into -- calls to error. But now we are more careful about -- inlining lone variables, so its ok (see SimplUtils.analyseCont) if isBotRes result_info then map isStrictDmd demands -- Finite => result is bottom else map isStrictDmd demands ++ vanilla_stricts | otherwise -> WARN( True, text "More demands than arity" <+> ppr fun <+> ppr (idArity fun) <+> ppr n_val_args <+> ppr demands ) vanilla_stricts -- Not enough args, or no strictness add_type_str :: Type -> [Bool] -> [Bool] -- If the function arg types are strict, record that in the 'strictness bits' -- No need to instantiate because unboxed types (which dominate the strict -- types) can't instantiate type variables. 
-- add_type_str is done repeatedly (for each call); might be better -- once-for-all in the function -- But beware primops/datacons with no strictness add_type_str _ [] = [] add_type_str fun_ty strs -- Look through foralls | Just (_, fun_ty') <- splitForAllTy_maybe fun_ty -- Includes coercions = add_type_str fun_ty' strs add_type_str fun_ty (str:strs) -- Add strict-type info | Just (arg_ty, fun_ty') <- splitFunTy_maybe fun_ty = (str || isStrictType arg_ty) : add_type_str fun_ty' strs add_type_str _ strs = strs {- Note [Unsaturated functions] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Consider (test eyeball/inline4) x = a:as y = f x where f has arity 2. Then we do not want to inline 'x', because it'll just be floated out again. Even if f has lots of discounts on its first argument -- it must be saturated for these to kick in -} interestingArgContext :: Id -> SimplCont -> Bool -- If the argument has form (f x y), where x,y are boring, -- and f is marked INLINE, then we don't want to inline f. -- But if the context of the argument is -- g (f x y) -- where g has rules, then we *do* want to inline f, in case it -- exposes a rule that might fire. Similarly, if the context is -- h (g (f x x)) -- where h has rules, then we do want to inline f; hence the -- call_cont argument to interestingArgContext -- -- The interesting_arg_ctxt flag makes this happen; if it's -- set, the inliner gets just enough keener to inline f -- regardless of how boring f's arguments are, if it's marked INLINE -- -- The alternative would be to *always* inline an INLINE function, -- regardless of how boring its context is; but that seems overkill -- For example, it'd mean that wrapper functions were always inlined interestingArgContext fn call_cont = idHasRules fn || go call_cont where go (Select {}) = False go (ApplyTo {}) = False go (StrictArg _ cci _ _) = interesting cci go (StrictBind {}) = False -- ?? go (CoerceIt _ c) = go c go (Stop cci) = interesting cci interesting (ArgCtxt rules _) = rules interesting _ = False \end{code} %************************************************************************ %* * \subsection{Decisions about inlining} %* * %************************************************************************ Inlining is controlled partly by the SimplifierMode switch. This has two settings: SimplGently (a) Simplifying before specialiser/full laziness (b) Simplifiying inside INLINE pragma (c) Simplifying the LHS of a rule (d) Simplifying a GHCi expression or Template Haskell splice SimplPhase n _ Used at all other times The key thing about SimplGently is that it does no call-site inlining.. \begin{code} inlineMode :: SimplifierMode inlineMode = SimplGently \end{code} It really is important to switch off inlinings inside such expressions. Consider the following example let f = \pq -> BIG in let g = \y -> f y y {-# INLINE g #-} in ...g...g...g...g...g... Now, if that's the ONLY occurrence of f, it). It's also important not to inline a worker back into a wrapper. A wrapper looks like wraper = inline_me (\x -> ...worker... ). Note that the result is that we do very little simplification inside an InlineMe. all xs = foldr (&&) True xs any p = all . map p {-# INLINE any #-} Problem: any won't get deforested, and so if it's exported and the importer doesn't use the inlining, (eg passes it as an arg) then we won't get deforestation at all. We havn't solved this problem yet! 
= <rhs> THEN doing the inlining should not change the occurrence info for the free vars of <rhs> ---------------------------------------------- For example, it's tempting to look at trivial binding like x = y and inline it unconditionally. But suppose x is used many times, but this is the unique occurrence of y. Then inlining x would change y's occurrence info, which breaks the invariant. It matters: y might have a BIG rhs, which will now be dup'd at every occurrenc of x. Even RHSs labelled InlineMe aren't caught here, because there might be no benefit from inlining at the call site. [Sept 01] Don't unconditionally inline a top-level thing, because that can simply make a static thing into something built dynamically. E.g. x = (a,b) main = \s -> h x [Remember that we treat \s as a one-shot lambda.] No point in inlining x unless there is something interesting about the call site. But watch out: if you aren't careful, some useful foldr/build fusion can be lost (most notably in spectral/hartel/parstof) because the foldr didn't see the build. Doing the dynamic allocation isn't a big deal, in fact, but losing the fusion can be. But the right thing here seems to be to do a callSiteInline based on the fact that there is something interesting about the call site (it's strict). Hmm. That seems a bit fragile. Conclusion: inline top level things gaily until Phase 0 (the last phase), at which point don't. \begin{code} preInlineUnconditionally :: SimplEnv -> TopLevelFlag -> InId -> InExpr -> Bool preInlineUnconditionally env top_lvl bndr rhs | not active = False | opt_SimplNoPreInlining = False | otherwise = case idOccInfo bndr of IAmDead -> True -- Happens in ((\x.1) v) OneOcc in_lam True int_cxt -> try_once in_lam int_cxt _ -> False where phase = getMode env active = case phase of SimplGently -> isAlwaysActive act SimplPhase n _ -> isActive n act act = idInlineActivation bndr try_once in_lam int_cxt -- There's one textual occurrence | not in_lam = isNotTopLevel top_lvl || early_phase | otherwise = int_cxt && canInlineInLam rhs -- Be very careful before inlining inside a lambda, becuase (Note _ e) = canInlineInLam e canInlineInLam _ = False early_phase = case phase of SimplPhase -> InId -- The binder (an OutId would be fine too) -> OccInfo -- From the InId -> OutExpr -> Unfolding -> Bool postInlineUnconditionally env top_lvl bndr occ_info rhs unfolding | not active = False | isLoopBreaker occ_info = False -- If it's a loop-breaker of any kind, don't inline -- because it might be referred to "earlier" | isExportedId bndr = False | ... -- I'm not sure how important this is in practice OneOcc in_lam _one_br int_cxt -- OneOcc => no code-duplication issue -> smallEnoughToInline unfolding -- Small enough to dup -- ToDo: consider discount on smallEnoughToInline if int_cxt is true -- --. && ((isNotTopLevel top_lvl && not in_lam) || -- But ... _ -> False -- Here's an example that we don't handle well: -- let f = if b then Left (\x.BIG) else Right (\y.BIG) -- in \y. ....case f of {...} .... -- Here f is used just once, and duplicating the case work is fine (exprIsCheap). -- But -- - We can't preInlineUnconditionally because that woud invalidate -- the occ info for b. -- - We can't postInlineUnconditionally because the RHS is big, and -- that risks exponential behaviour -- - We can't call-site inline, because the rhs is big -- Alas! 
where active = case getMode env of SimplGently -> isAlwaysActive act SimplPhase n _ -> isActive n act act = idInlineActivation bndr activeInline :: SimplEnv -> OutId -> Bool activeInline env id = case getMode env of SimplGently -> False -- No inlining at all when doing gentle stuff, -- except for local things that occur once (pre/postInlineUnconditionally) -- The reason is that too little clean-up happens if you -- don't inline use-once things. Also a bit of inlining is *good* for -- full laziness; it can expose constant sub-expressions. -- Example in spectral/mandel/Mandel.hs, where the mandelset -- function gets a useful let-float if you inline windowToViewport -- NB: we used to have a second exception, for data con wrappers. -- On the grounds that we use gentle mode for rule LHSs, and -- they match better when data con wrappers are inlined. -- But that only really applies to the trivial wrappers (like (:)), -- and they are now constructed as Compulsory unfoldings (in MkId) -- so they'll happen anyway. SimplPhase n _ -> isActive n act where act = idInlineActivation id activeRule :: DynFlags -> SimplEnv -> Maybe (Activation -> Bool) -- Nothing => No rules at all} %************************************************************************ %* * Rebuilding a lambda %* * %************************************************************************ \begin{code} mkLam :: SimplEnv -> [OutBndr] -> OutExpr -> SimplM OutExpr -- mkLam tries three things -- a) eta reduction, if that gives a trivial expression -- b) eta expansion [only if there are some value lambdas] mkLam _b [] body = return body mkLam _env bndrs body = do { dflags <- getDOptsSmpl ; mkLam' dflags bndrs body } where mkLam' :: DynFlags -> [OutBndr] -> OutExpr -> SimplM OutExpr mkLam' dflags bndrs (Cast body co) | not (any bad bndrs) -- Note [Casts and lambdas] = do { lam <- mkLam' dflags bndrs body ; return (mkCoer, any isRuntimeVar bndrs = do { let body' = tryEtaExpansion dflags body ; return (mkLams bndrs body') } | otherwise = return (mkLams bndrs body) \end{code} Note [Casts and lambdas] ~~~~~~~~~~~~~~~~~~~~~~~~ Consider (\x. (\y. e) `cast` g1) `cast` g2 There is a danger here that the two lambdas look separated, and the full laziness pass might float an expression to between the two. So this equation in mkLam' floats the g1 out, thus: (\x. e `cast` g1) --> (\x.e) `cast` (tx -> g1) where x:tx. In general, this floats casts outside lambdas, where (I hope) they might meet and cancel with some other cast: \x. e `cast` co ===> (\x. e) `cast` (tx -> co) /\a. e `cast` co ===> (/\a. e) `cast` (/\a. co) /\g. e `cast` co ===> (/\g. e) `cast` (/\g. co) (if not (g `in` co)) Notice that it works regardless of 'e'. Originally it worked only if 'e' was itself a lambda, but in some cases that resulted in fruitless iteration in the simplifier. A good example was when compiling Text.ParserCombinators.ReadPrec, where we had a definition like (\x. Get `cast` g) where Get is a constructor with nonzero arity. Then mkLam eta-expanded the Get, and the next iteration eta-reduced it, and then eta-expanded it again. Note also the side condition for the case of coercion binders. It does not make sense to transform /\g. e `cast` g ==> (/\g.e) `cast` (/\g.g) because the latter is not well-kinded. -- | all isTyVar bndrs, -- Only for big lambdas contIsRhs cont -- Only try the rhs type-lambda floating -- if this is indeed a right-hand side; otherwise -- we end up floating the thing out, only for float-in -- to float it right back in again! 
= do (floats, body') <- tryRhsTyLam env bndrs body return (floats, mkLams bndrs body') -} %************************************************************************ %* * Eta reduction %* * %************************************************************************, we always want to reduce (/\a -> f a) to f This came up in a RULE: foldr (build (/\a -> g a)) did not match foldr (build (/\b -> ...something complex...)) The type checker can insert these eta-expanded versions, with both type and dictionary lambdas; hence the slightly ad-hoc isDictId *ndr] -> OutExpr -> Maybe OutExpr tryEtaReduce bndrs body = go (reverse bndrs) body where incoming_arity = count isId bndrs go (b : bs) (App fun arg) | ok_arg b arg = go bs fun -- Loop round go [] fun | ok_fun fun = Just fun -- Success! go _ _ = Nothing -- Failure! -- ok_arg b arg = varToCoreExpr b `cheapEqExpr` arg \end{code} %************************************************************************ %* * Eta expansion %* * %************************************************************************ We go for: f = \x1..xn -> N ==> f = \x1..xn y1..ym -> N y1..ym (n >= 0) where (in both cases) * The xi can include type variables * The yi are all value variables * N is a NORMAL FORM (i.e. no redexes anywhere) wanting a suitable number of extra args. The biggest reason for doing this is for cases like f = \x -> case x of True -> \y -> e1 False -> \y -> e2 Here we want to get the lambdas together. A good exmaple is the nofib program fibheaps, which gets 25% more allocation if you don't do this eta-expansion. We may have to sandwich some coerces between the lambdas to make the types work. exprEtaExpandArity looks through coerces when computing arity; and etaExpand adds the coerces as necessary when actually computing the expansion. \begin{code} tryEtaExpansion :: DynFlags -> OutExpr -> OutExpr -- There is at least one runtime binder in the binders tryEtaExpansion dflags body = etaExpand fun_arity body where fun_arity = exprEtaExpandArity dflags body \end{code} %************************************************************************ %* * \subsection{Floating lets out of big lambdas} %* * %************************************************************************ Note [Floating and type abstraction] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Consider this: x = /\a. C e1 e2 We'd like to float this to y1 = /\a. e1 y2 = /\a. e2 x = /\a. C (y1 a) (y2 a) for the usual reasons: we want to inline x rather vigorously. You may think that this kind of thing is rare. But in some programs it is common. For example, if you do closure conversion you might get: data a :-> b = forall e. (e -> a -> b) :$ e f_cc :: forall a. a :-> a f_cc = /\a. (\e. id a) :$ () Now we really want to inline that f_cc thing so that the construction of the closure goes away. So I have elaborated simplLazyBind to understand right-hand sides that look like /\ a1..an. body and treat them specially. The real work is done in SimplUtils.abstractFloats, but there is quite a bit of plumbing in simplLazyBind as well. The same transformation is good when there are lets in the body: /\abc -> let(rec) x = e in } If we abstract this wrt the tyvar we then can't do the case inline as we would normally do. That's why the whole transformation is part of the same process that floats let-bindings and constructor arguments out of RHSs. In particular, it is guarded by the doFloatFromRhs call in simplLazyBind. 
\begin{code} abstractFloats :: [OutTyVar] -> SimplEnv -> OutExpr -> SimplM ([OutBind], OutExpr) abstractFloats main_tvs body_env body = ASSERT( notNull body_floats ) do { (subst, float_binds) <- mapAccumLM abstract empty_subst body_floats ; return (float_binds, CoreSubst.substExpr subst body) } where main_tv_set = mkVarSet main_tvs body_floats = getFloats body_env empty_subst = CoreSubst.mkEmptySubst (seInScope body_env) abstract :: CoreSubst.Subst -> OutBind -> SimplM (CoreSubst.Subst, OutBind) abstract subst (NonRec id rhs) = do { (poly_id, poly_app) <- mk_poly tvs_here id ; let poly_rhs = mkLams tvs_here rhs' subst' = CoreSubst.extendIdSubst subst id poly_app ; return (subst', (NonRec poly_id poly_rhs)) } where rhs' = CoreSubst.substExpr subst rhs tvs_here | any isCoVar main_tvs = main_tvs -- Note [Abstract over coercions] | otherwise = varSetElems (main_tv_set `intersectVarSet` exprSomeFreeVars isTyVar rhs') -- Abstract only over the type variables free in the rhs -- wrt which the new binding is abstracted. But the naive -- approach of abstract wrt the tyvars free in the Id's type -- abstract subst (Rec prs) = do { (poly_ids, poly_apps) <- mapAndUnzipM (mk_poly tvs_here) ids ; let subst' = CoreSubst.extendSubstList subst (ids `zip` poly_apps) poly_rhss = [mkLams tvs_here (CoreSubst.substExpr subst' rhs) | rhs <- rhss] ; return (subst', Rec (poly_ids `zip` poly_rhss)) } where (ids,rhss) = unzip prs -- For a recursive group, it's a bit of a pain to work out the minimal -- set of tyvars over which to abstract: -- /\ a b c. let x = ...a... in -- letrec { p = ...x...q... -- q = .....p...b... } in -- ... -- Since 'x' is abstracted over 'a', the {p,q} group must be abstracted -- over 'a' (because x is replaced by (poly_x a)) as well as 'b'. -- Since it's a pain, we just use the whole set, which is always safe -- -- If you ever want to be more selective, remember this bizarre case too: -- x::a = x -- Here, we must abstract 'x' over 'a'. tvs_here = main_tvs mk_poly tvs_here var = do { uniq <- getUniqueM ; let poly_name = setNameUnique (idName var) uniq -- Keep same name poly_ty = mkForAllTys tvs_here (idType var) -- But new type of course poly_id = transferPolyIdInfo var tvs_here $ -- Note [transferPolyIdInfo] in Id.lhs mkLocalId poly_name poly_ty ; return (poly_id, mkTyApps (Var poly_id) (mkTyVarTys tvs_here)) } -- In the olden days, it was crucial to copy the occInfo of the original var, -- because we were looking at occurrence-analysed but as yet unsimplified code! -- In particular, we mustn't lose the loop breakers. BUT NOW we are looking --. \end{code} Note [Abstract over coercions] ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ If a coercion variable (g :: a ~ Int) is free in the RHS, then so is the type variable a. Rather than sort this mess out, we simply bale out and abstract wrt all the type variables if any of them are coercion variables. Historical note: if you use let-bindings instead of a substitution, beware of this: -- %* * %************************************************************************ prepareAlts tries these things: 1.) 2. Case merging:. The case where transformation (1) showed up was like this (lib/std/PrelCError.lhs):! Note [Dead binders] ~~~~~~~~~~~~~~~~~~~~ We do this *here*, looking at un-simplified alternatives, because we have to check that r doesn't mention the variables bound by the pattern in each alternative, so the binder-info is rather useful. 
\begin{code} prepareAlts :: SimplEnv -> OutExpr -> OutId -> [InAlt] -> SimplM ([AltCon], [InAlt]) prepareAlts env scrut case_bndr' alts = do { dflags <- getDOptsSmpl ; alts <- combineIdenticalAlts case_bndr' alts ; dflags env -------------------------------------------------- -- 1. Merge identical branches -------------------------------------------------- combineIdenticalAlts :: OutId -> [InAlt] -> SimplM [InAlt] combineIdenticalAlts case_bndr ((_con1,bndrs1,rhs1) : con_alts) | all isDeadBinder bndrs1, -- Remember the default length filtered_alts < length con_alts -- alternative comes first -- Also Note [Dead binders] = do { tick (AltMerge case_bndr) ; return ((DEFAULT, [], rhs1) : filtered_alts) } where filtered_alts = filter keep con_alts keep (_con,bndrs,rhs) = not (all isDeadBinder bndrs && rhs `cheapEqExpr` rhs1) combineIdenticalAlts _ alts = return alts ------------------------------------------------------------------------- -- Prepare the default alternative ------------------------------------------------------------------------- prepareDefault :: DynFlags -> SimplEnv ->, -- And becuase case-merging can cause many to show up ------- Merge nested cases ---------- prepareDefault dflags env outer_bndr _bndr_ty imposs_cons (Just deflt_rhs) | dopt Opt_CaseMerge dflags , Case (Var inner_scrut_var) inner_bndr _ inner_alts <- deflt_rhs , DoneId inner_scrut_var' <- substId env inner_scrut_var -- Remember, inner_scrut_var is an InId, but outer_bndr is an OutId , inner_scrut_var' == outer_bndr -- NB: the substId means that if the outer scrutinee was a -- variable, and inner scrutinee is the same variable, -- then inner_scrut_var' will be outer_bndr -- via the magic of simplCaseBinder = do { tick (CaseMerge outer_bndr) ; let munge_rhs rhs = bindCaseBndr inner_bndr (Var outer_bndr) rhs ; return [(con, args, munge_rhs rhs) | (con, args, rhs) <- inner_alts, not (con `elem` imposs_cons) ] -- NB: filter out any imposs_cons. Example: -- case x of -- A -> e1 -- DEFAULT -> case x of -- A -> e2 -- B -> e3 -- When we merge, we must ensure that e1 takes -- precedence over e2 as the value for A! } -- Warning: don't call prepareAlts recursively! -- Firstly, there's no point, because inner alts have already had -- mkCase applied to them, so they won't have a case in their default -- Secondly, if you do, you get an infinite loop, because the bindCaseBndr -- in munge_rhs may put a case into the DEFAULT branch! ---------) -- This can legitimately happen for type families, so don't report that = pprTrace "prepareDefault" (ppr case_bndr <+> ppr tycon) $ return [(DEFAULT, [], deflt_rhs)] --------- Catch-all cases ----------- prepareDefault _dflags _env _case_bndr _bndr_ty _imposs_cons (Just deflt_rhs) = return [(DEFAULT, [], deflt_rhs)] prepareDefault _dflags _env _case_bndr _bndr_ty _imposs_cons Nothing = return [] -- No default branch \end{code} ================================================================================= mkCase tries these things 1. Eliminate the case altogether if possible 2. Case-identity: case e of ===> e True -> True; False -> False and similar friends. \begin{code} mkCase :: OutExpr -> OutId -> [OutAlt] -- Increasing order -> SimplM OutExpr -------------------------------------------------- -- 2. 
Identity case -------------------------------------------------- mkCase scrut case_bndr alts -- Identity case | all identity_alt alts = do tick (CaseIdentity case_bndr) return (re_cast scrut) where identity_alt (con, args, rhs) = check_eq con args (de_cast rh = map Type (tyConAppArgs (idType case_bndr)) -- de_cast (Cast e _) = e de_cast e = e re_cast scrut = case head alts of (_,_,Cast _ co) -> Cast scrut co _ -> scrut -------------------------------------------------- -- Catch-all -------------------------------------------------- mkCase :: Id -> CoreExpr -> CoreExpr -> CoreExpr bindCaseBndr bndr rhs body | isDeadBinder bndr = body | otherwise = bindNonRec bndr rhs body \end{code}
https://downloads.haskell.org/~ghc/6.12.2/docs/html/libraries/ghc-6.12.2/src/SimplUtils.html
CC-MAIN-2015-18
en
refinedweb
While profiling a bit of legacy code last night, I noticed some strange differences in how fast it is to get a value from a dictionary with a default value. I decided to investigate further with some quick and dirty benchmarks.

Test

For my comparison, both a positive and a negative lookup will be performed from each. The first method uses the dict.get method, passing in a default:

    def with_get():
        result1 = a_dict.get(-1, None)
        result2 = a_dict.get(900, None)

The second method catches the KeyError exception to return a default:

    def with_exception():
        try:
            result1 = a_dict[-1]
        except KeyError:
            result1 = None
        try:
            result2 = a_dict[900]
        except KeyError:
            result2 = None

The third and last method uses the in operator to return a default:

    def with_in():
        result1 = a_dict[-1] if -1 in a_dict else None
        result2 = a_dict[900] if 900 in a_dict else None

The full benchmarking script that I created can be found here. It isn't hard science, but I think it's good enough for general comparisons.

Results

With python 2.7:

    2.7 (r27:82508, Jul 3 2010, 20:17:05)
    [GCC 4.0.1 (Apple Inc. build 5493)]
    with_get(): 0.702752828598
    with_exception(): 2.69198894501
    with_in(): 0.579782009125

With python 3.2 (for giggles):

    3.2 (r32:88452, Feb 20 2011, 11:12:31)
    [GCC 4.2.1 (Apple Inc. build 5664)]
    with_get(): 0.49797797203063965
    with_exception(): 0.8870129585266113
    with_in(): 0.4416959285736084

A couple things surprise me about this. First, I thought exceptions were faster in python, but it is roughly five times slower to use an exception. Second, the example with in hits the dictionary twice, and yet it is the fastest. I am guessing that the dict.get method must do something similar internally.

Conclusion

Use the in operator whenever possible. The convenience of dict.get is worth the very small time difference in my mind, but the exception method is unacceptably slow. The good news is that the exception performance issues were obviously addressed to some extent in python 3 and are now only twice as slow. Maybe this is common knowledge already, but it took me by surprise. I am interested to hear of other methods that are commonly used (if any). Feel free to fork the gist and give it a spin.
http://www.bigjason.com/blog/python-dict-unscientific-benchmarks-lookup-methods
CC-MAIN-2015-18
en
refinedweb
15 February 2010 13:38 [Source: ICIS news] MUMBAI (ICIS news)--India will soon initiate an anti-dumping investigation into polypropylene (PP) imports from South Korea, Taiwan and the US, market sources said on Monday. The investigation is being launched following an application made by Indian producers last year. A market source said letters had been issued to the embassies of the three countries and a notification was likely to be issued soon. “It will take a few months for the investigation to be completed. We will know about provisional anti-dumping duties, if any, by May or June,” said a second market source. The Indian government has yet to present its final findings on an anti-dumping probe that was launched last year into PP imports from Provisional duties lasting. The second market source said a notification on the extension of provisional duties by two months was likely to be released soon.
http://www.icis.com/Articles/2010/02/15/9334868/india-to-probe-pp-imports-from-s-korea-taiwan-and-the-us.html
CC-MAIN-2015-18
en
refinedweb
03 August 2012 07:26 [Source: ICIS news] SINGAPORE (ICIS)--Qatar International Petroleum Marketing (Tasweeq) has floated a tender late on Thursday to sell one-year term supplies of plant condensate and full-range naphtha for October 2012 to September 2013, traders said on Friday. For each grade, 360,000-600,000 tonnes of material will be offered, out of which one cargo of 30,000 tonnes or 50,000 tonnes will be lifted by the buyer from the Ras Laffan port each month, the traders said. The one-year term contract will be priced on a naphtha FOB (free on board) basis. Besides the above pricing method, buyers are also allowed to submit their bids on a CIF (cost, insurance and freight), CFR (cost & freight) or DES (delivered ex-ship) basis, the traders said. For the DES term, a port of destination has to be defined, the traders added. Submission of bids will close on 13 August by 13:00 hours. In early June, Tasweeq closed a one-year term supply for July 2012 to June 2013 for both full-range naphtha and plant condensate at a premium of $25-26/tonne (€20.50-21.30/tonne)
http://www.icis.com/Articles/2012/08/03/9583538/qatars-tasweeq-floats-tender-to-sell-oct-12-sep-13-term.html
CC-MAIN-2015-18
en
refinedweb
sorry, the versioning is rather baroque. the bug is fixed in version >= 2.2.22.14 theres a readme file inside each jar that has this number in it, in case theres any doubt Nick Bower wrote: > Ok. Could you clarify the version number? I'm on 2.2.22 now - you mentioned > 2.22.14 just now. Is there that much difference between the two? > > > John Caron wrote: >> release 2.22.14 should fix this problem. be sure to call flush() after >> writing and before reading >> >> Nick Bower wrote: >>> Adding an ncFile.flush() to the submitted test case as suggested does >>> not resolve the problem. I think it needs to be marked as a bug >>> until a documented way around it surfaces. >>> >>> >>> Ethan Davis wrote: >>> >>>> Hi Nick, >>>> >>>> I'm not sure if this is a recommended way but you might try to >>>> flush() the NetcdfFileWriteable before reading. >>>> >>>> John might have another answer. He is out of the office this week so >>>> his email may be spotty. >>>> >>>> Hope that helps, >>>> >>>> Ethan >>>> >>>> Nick Bower wrote: >>>> >>>>> Hello. Is the lack of response here because this is not considered >>>>> a bug? Is there a another recommended way perhaps to safely >>>>> increase a dimension and invalidate the memory cache in the process? >>>>> >>>>> Nick >>>>> >>>>> >>>>> Nick Bower wrote: >>>>> >>>>> >>>>>> I've found a problem in which cache/memory and disk shape >>>>>> information about variables will disagree with v2.2.22 of Java >>>>>> Netcdf library. >>>>>> >>>>>> When you add a new value to a variable, automatically increasing >>>>>> the length of a dimension, subsequent reads can throw EOFException >>>>>> because RandomAccessFile is instructed to read more values than >>>>>> the file contains - the cached and actual shapes disagree. >>>>>> >>>>>> I've created a runnable test case below to explain and demonstrate >>>>>> success and failure conditions. >>>>>> >>>>>> I am getting around this now by not interleaving read/write >>>>>> operations on variables, but instead reading all variables' data >>>>>> to memory, then performing any writes I need to after. 
>>>>>> >>>>>> TestInsertRecord.java: >>>>>> >>>>>> >>>>>> package com.metoceanengineers.datafeeds.netcdf.test; >>>>>> >>>>>> import java.io.File; >>>>>> import java.io.IOException; >>>>>> import java.text.DateFormat; >>>>>> import java.text.SimpleDateFormat; >>>>>> >>>>>> import junit.framework.TestCase; >>>>>> import ucar.ma2.Array; >>>>>> import ucar.ma2.ArrayInt; >>>>>> import ucar.ma2.DataType; >>>>>> import ucar.nc2.Dimension; >>>>>> import ucar.nc2.NetcdfFileWriteable; >>>>>> >>>>>> public class TestInsertRecord extends TestCase { >>>>>> DateFormat dateFormat = new SimpleDateFormat("yyyyMMdd HHMM"); >>>>>> protected NetcdfFileWriteable createNc(String prefix) throws >>>>>> IOException { >>>>>> File mainline = File.createTempFile(prefix+"-", ".nc"); >>>>>> NetcdfFileWriteable mainlineNc = >>>>>> NetcdfFileWriteable.createNew(mainline.getAbsolutePath(), false); >>>>>> >>>>>> Dimension recordsDim = >>>>>> mainlineNc.addUnlimitedDimension("records"); >>>>>> Dimension timeDims[] = {recordsDim}; >>>>>> Dimension var1Dims[] = {recordsDim}; // 1D >>>>>> mainlineNc.addVariable("time", DataType.INT, timeDims); >>>>>> mainlineNc.addVariable("var1", DataType.INT, var1Dims); >>>>>> >>>>>> mainlineNc.create(); >>>>>> return mainlineNc; >>>>>> } >>>>>> >>>>>> >>>>>> protected String getNcInstance() throws Exception { >>>>>> >>>>>> NetcdfFileWriteable mainlineNc = createNc("testfile"); >>>>>> int[] origin = {0}; >>>>>> ArrayInt.D1 timeArr = new ArrayInt.D1(2); >>>>>> timeArr.set(0, (int)dateFormat.parse("20071130 >>>>>> 0924").getTime()); >>>>>> timeArr.set(1, (int)dateFormat.parse("20071130 >>>>>> 0926").getTime()); >>>>>> mainlineNc.write("time", origin, timeArr); >>>>>> ArrayInt.D1 var1Arr = new ArrayInt.D1(2); >>>>>> var1Arr.set(0, 10); >>>>>> var1Arr.set(1, 12); >>>>>> mainlineNc.write("var1", origin, var1Arr); >>>>>> >>>>>> mainlineNc.close(); >>>>>> >>>>>> return mainlineNc.getLocation(); >>>>>> } >>>>>> /** >>>>>> * Append new data to end of existing variables. >>>>>> * >>>>>> * @throws Exception >>>>>> */ >>>>>> public void testAppendWorksOk() throws Exception { >>>>>> String ncFilename = getNcInstance(); >>>>>> NetcdfFileWriteable ncFile = >>>>>> NetcdfFileWriteable.openExisting(ncFilename, false); >>>>>> >>>>>> /* >>>>>> * Append value (20071130 0924, 11) into (time, var1) >>>>>> */ >>>>>> ArrayInt.D1 newTimeValue = new ArrayInt.D1(1); >>>>>> newTimeValue.set(0, (int)dateFormat.parse("20071130 >>>>>> 0925").getTime()); >>>>>> >>>>>> ArrayInt.D1 newVarValue = new ArrayInt.D1(1); >>>>>> newVarValue.set(0, 11); >>>>>> int[] origin = {2}; >>>>>> >>>>>> /* The first write will expand the variables, >>>>>> * but second write ok as we're just writing >>>>>> * and not reading */ >>>>>> ncFile.write("time", origin, newTimeValue); >>>>>> ncFile.write("var1", origin, newVarValue); >>>>>> assertEquals(3, >>>>>> ncFile.findDimension("records").getLength()); >>>>>> } >>>>>> >>>>>> /** >>>>>> * Test insertion of a record in between the 2 existing >>>>>> * records by reading the existing tail, inserting new data >>>>>> * and re-appending. 
>>>>>> * >>>>>> * Triggers EOFException through interleaved read/writes >>>>>> * >>>>>> * @throws Exception >>>>>> */ >>>>>> public void testInsertFails() throws Exception { >>>>>> String ncFilename = getNcInstance(); >>>>>> NetcdfFileWriteable ncFile = >>>>>> NetcdfFileWriteable.openExisting(ncFilename, false); >>>>>> >>>>>> ArrayInt.D1 newTimeValue = new ArrayInt.D1(1); >>>>>> newTimeValue.set(0, (int)dateFormat.parse("20071130 >>>>>> 0925").getTime()); >>>>>> >>>>>> ArrayInt.D1 newVarValue = new ArrayInt.D1(1); >>>>>> newVarValue.set(0, 11); >>>>>> /* Going to insert at 1, so read existing value, >>>>>> * write down new one, and re-append old tail. >>>>>> */ >>>>>> int[] insertPointOrigin = {1}; >>>>>> int[] appendOrigin = {2}; >>>>>> int[] shape = {1}; >>>>>> Array tailTime = >>>>>> ncFile.findVariable("time").read(insertPointOrigin, shape); >>>>>> ncFile.write("time", insertPointOrigin, newTimeValue); >>>>>> ncFile.write("time", appendOrigin, tailTime); >>>>>> /* Next line excepts - why? Because the last write >>>>>> above at >>>>>> * records index 2 triggers an increase in the CACHED/MEMORY >>>>>> * length of all variables to 3, but on disk it's still the >>>>>> * original length 2. >>>>>> * >>>>>> * Therefore we get EOFException. >>>>>> */ >>>>>> Array tailVar1 = >>>>>> ncFile.findVariable("var1").read(insertPointOrigin, shape); >>>>>> ncFile.write("var1", insertPointOrigin, newVarValue); >>>>>> ncFile.write("var1", appendOrigin, tailVar1); >>>>>> assertEquals(3, >>>>>> ncFile.findDimension("records").getLength()); >>>>>> } >>>>>> >>>>>> } >>>>>> >>>>>> >>>>>> >>>>> _______________________________________________ >>>>> netcdf-java mailing list >>>>> netcdf-java@xxxxxxxxxxxxxxxx >>>>> For list information or to unsubscribe, visit: >>>>> >>> >>> _______________________________________________ >>> netcdf-java mailing list >>> netcdf-java@xxxxxxxxxxxxxxxx >>> For list information or to unsubscribe, visit: >>> > > _______________________________________________ > netcdf-java mailing list > netcdf-java@xxxxxxxxxxxxxxxx > For list information or to unsubscribe, visit: > netcdf-javalist information: netcdf-javalist netcdf-javaarchives:
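A side note for anyone hitting the same EOFException: with 2.2.22.14 the pattern that works is to flush() after the writes and before the next read, as suggested earlier in the thread. A minimal sketch against the TestInsertRecord class quoted above (exception handling exactly as in the original tests):

    public void testInsertWithFlush() throws Exception {
        String ncFilename = getNcInstance();
        NetcdfFileWriteable ncFile = NetcdfFileWriteable.openExisting(ncFilename, false);

        ArrayInt.D1 newTimeValue = new ArrayInt.D1(1);
        newTimeValue.set(0, (int) dateFormat.parse("20071130 0925").getTime());
        ArrayInt.D1 newVarValue = new ArrayInt.D1(1);
        newVarValue.set(0, 11);

        int[] insertPointOrigin = {1};
        int[] appendOrigin = {2};
        int[] shape = {1};

        // read the old tail, overwrite it, re-append it
        Array tailTime = ncFile.findVariable("time").read(insertPointOrigin, shape);
        ncFile.write("time", insertPointOrigin, newTimeValue);
        ncFile.write("time", appendOrigin, tailTime);
        ncFile.flush();   // sync the cached record count with the file before reading again

        Array tailVar1 = ncFile.findVariable("var1").read(insertPointOrigin, shape);
        ncFile.write("var1", insertPointOrigin, newVarValue);
        ncFile.write("var1", appendOrigin, tailVar1);
        ncFile.flush();

        assertEquals(3, ncFile.findDimension("records").getLength());
        ncFile.close();
    }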
http://www.unidata.ucar.edu/mailing_lists/archives/netcdf-java/2007/msg00264.html
CC-MAIN-2015-18
en
refinedweb
Sascha Brawer <address@hidden> wrote on Fri, 19 Mar 2004 12:05:55 +0100: > [loading service providers from "META-INF/services/*" in external JARs] >2. Which namespace? >3. The unit tests (see attached ForMauve.tar.gz) run checks on >gnu.classpath.ServiceFactory. This seems a bit unclean, since Mauve was >once meant to be independent of Classpath. A possibility would be to >change the tests so that they check the public API >javax.imageio.spi.ServiceRegistry.lookupProviders Since there seem to be no objects with respect to the gnu.classpath namespace, I'll commit gnu.classpath.ServiceFactory this Wednesday. The Mauve tests, which I'll also commit on Wednesday, will run checks on the public API javax.imageio.spi.ServiceRegistry.lookupProviders, not on something specific to Classpath. Thus, I'll also commit a mostly stubbed javax.imageio.spi.ServiceRegistry, whose only implemented method will be lookupProviders. Please tell me by Tuesday in case you don't agree. -- Sascha Sascha Brawer, address@hidden,
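For reference, the checks discussed above go through the public API only. A rough sketch of what such a check looks like — the provider interface used here (CharsetProvider) is just an arbitrary example of a service registered via META-INF/services entries on the class path; any such interface works the same way:

    import javax.imageio.spi.ServiceRegistry;
    import java.nio.charset.spi.CharsetProvider;
    import java.util.Iterator;

    public class LookupProvidersCheck
    {
      public static void main(String[] args)
      {
        // asks the registry for every implementation named in
        // META-INF/services/java.nio.charset.spi.CharsetProvider on the class path
        Iterator it = ServiceRegistry.lookupProviders(CharsetProvider.class);
        while (it.hasNext())
          System.out.println("provider: " + it.next().getClass().getName());
      }
    }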
http://lists.gnu.org/archive/html/classpath/2004-03/msg00173.html
CC-MAIN-2015-18
en
refinedweb
SNA::Network - A toolkit for Social Network Analysis Version 0.20 Quick summary of what the module does. use SNA::Network; my $net = SNA::Network->new(); $net->create_node_at_index(index => 0, name => 'A'); $net->create_node_at_index(index => 1, name => 'B'); $net->create_edge(source_index => 0, target_index => 1, weight => 1); ... SNA::Network is a bundle of modules for network algorithms, specifically designed for the needs of Social Network Analysis (SNA), but can be used for any other graph algorithms of course. It represents a standard directed and weighted network, which can also be used as an undirected and/or unweighted network of course. It is freely extensible by using own hash entries. Data structures have been designed for SNA-typical sparse network operations, and consist of Node and Edge objects, linked via references to each other. Functionality is implemented in sub-modules in the SNA::Network namespace, and all methods are imported into Network.pm. So you can read the documentation in the sub-modules and call the methods from your SNA::Network instance. Methods are called with named parameter style, e.g. $net->method( param1 => value1, param2 => value2, ...); Only in cases, where methods have only one parameter, this one is passed by value. This module was implemented mainly because I had massive problems understanding the internal structures of Perl's Graph module. Despite it uses lots of arrays instead of hashes for attributes and bit setting for properties, it was terribly slow for my purposes, especially in network manipulation (consistent node removal). It currently has much more features and plugins though, and is suitable for different network types. This package is focussing on directed networks only, with the possibility to model undirected ones as well. Creates a new empty network. There are no parameters. After creation, use methods to add nodes and edges, or load a network from a file. Creates a node at the given index. Pass node attributes as additional named parameters, index is mandatory. Returns the created SNA::Network::Node object. Creates a node at the next index. Pass node attributes as additional named parameters, index is forbidden. Returns the created SNA::Network::Node object with the right index field. Creates a new edge between nodes with the given source_index and target_index. A weight is optional, it defaults to 1. Pass any additional attributes as key/value pairs. Returns the created SNA::Network::Edge object. Returns the array of SNA::Network::Node objects belonging to this network. Returns the SNA::Network::Node object at the given index. Returns the array of SNA::Network::Edge objects belonging to this network. Returns the sum of all weights of the SNA::Network::Edge objects belonging to this network. Delete the passed node objects. These have to be sorted by index! All related edges get deleted as well. Indexes get restored after this operation. Delete the passed edge objects. Returns an array reference containing SNA::Network::CommunityStructure objects, which were identified by a previously executed community identification algorithm, usually the SNA::Network::Algorithm::Louvain algorithm. With a hierarchical identification algorithm, the array containts the structures of the different levels from the finest-granular structure at index 0 to the most coarsely-granular structure at the last index. If no such algorithm had been executed, it returns undef. 
Return a list of SNA::Network::Community objects, which were identified by a previously executed community identification algorithm, usually the SNA::Network::Algorithm::Louvain algorithm. If no such algorithm was executed, returns undef. Return the modularity value based on the current communities of the network, which were identified by a previously executed community identification algorithm, usually the SNA::Network::Algorithm::Louvain algorithm. If no such algorithm was executed, returns undef. This package can be extenden with plugins, which gives you the possibility, to add your own algorithms, filters, and so on. Each class found in the namespace SNA::Network::Plugin will be imported into the namespace of SNA::Network, and each class found in the namespace SNA::Network::Node::Plugin will be imported into the namespace of SNA::Network::Node. With this mechanism, you can add methods to these classes. For example: package SNA::Network::Plugin::Foo; use warnings; use strict; require Exporter; use base qw(Exporter); our @EXPORT = qw(foo); sub foo { my ($self) = @_; # $self is a reference to our network object # do something with it here ... } adds a new foo method to SNA::Network. Darko Obradovic, <dobradovic at gmx.de> Please report any bugs or feature requests to bug-sna-network module has been developed as part of my work at the German Research Center for Artificial Intelligence (DFKI). This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
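For illustration, once a plugin such as the Foo example above is installed under the SNA::Network::Plugin namespace, its exported subs can be called like native methods on any network object — a short sketch using only calls shown in the SYNOPSIS:

    use SNA::Network;

    my $net = SNA::Network->new();
    $net->create_node_at_index(index => 0, name => 'A');
    $net->create_node_at_index(index => 1, name => 'B');
    $net->create_edge(source_index => 0, target_index => 1, weight => 1);

    # foo() was imported from SNA::Network::Plugin::Foo at load time
    $net->foo();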
http://search.cpan.org/~obradovic/SNA-Network/lib/SNA/Network.pm
CC-MAIN-2015-18
en
refinedweb
EDT:Discussion topics from the language meetings Contents What the heck is this page? I've pasted in my notes from the EDT language meetings, where we discuss what should and should not be part of EGL in EDT. In some cases I've gone back and made updates, but beware! this might be very out-of-date. Topics that need more discussion are italicized. June 16 Meeting Tim's list of differences between RBD and EDT - not all types supported? - math rules, overlfow, etc. will be native to the target language - nullable is not a value of a variable (no AS/ISA nullable type) - not all part types (datatable) - not all statements (openui, forward, move) - operators (new bit operators, all ops semantic is defined by the types) - types are defined in egl, including operators - system libraries...e.g. System not SysLib...they won't all be there, they're just types - would rather put functions in types, in place of StrLib, DateTimeLib, etc. - stereotypes from RBD may or may not be there Core vs. Everything else Core is the language minus all the data types Core is a grammar, syntax and semantics of execution Core includes the statements Order of execution...and so on Test framework *could* define its own types, just for the purposes of the test The core spec will talk about "architypes" for primitive types without being specific, for example there should be a type to represent true and false (otherwise can't talk about if/while/etc.) but the spec won't say exactly what Boolean has to look like. Also have literals for numbers and string, but that doesn't mean those are in core Core includes the stereotypes because they're necessary for defining your types We don't have to support all of the conversions that RBD supports (don't like string to numeric, date/time, etc.) (don't like conversions controlled by variables) Tim will start writing the core spec very soon Parts are really just types Parts in core: everything...program library handler record datatable dataitem etc. But we don't have to support all of them in all of the implementations All of our types outside of core should be under org.eclipse.edt, there is no egl.* for things outside of core. Take the case of int divided by int. Java's native / yields int, but in JS the fractional part is kept. The return type of the / operator comes from the int.egl file. We could have one int.egl for Java and another for JS, or we could pick one and make the generators deal with it. Which is better? This has a big impact on our test framework. Tim says take what's supported by JS in RBD and use it as the basis for EDT Java & JS. June 17 Meeting We're starting to go over each part of EGL now...most of that's in the EGL Language page. We should decide what can be thrown, and when. Tim might have some new ideas about array initializers. Should the arg list in main take string[] or string...? What if I don't want any args? Allow more than one main? The parser can't tell if foo(3) is a type or a function invocation so Tim considers a change to the declaration syntax, maybe by adding a colon as in "x : foo(3);" no it looks like a label on a function call. Have to change the parser, allow types & function calls in both places & then do a semantic check later. Should we allow ? on reference types? If it's there they can be null, otherwise they can't be null. We'll talk about subtypes (SQL, CSV, Exception, etc.) when we discuss stereotypes. Since we're supporting top-level functions, we should support IncludeReferencedFunctions. 
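To make the int-division point concrete, a hypothetical EGL fragment (nothing here is agreed syntax, it just states the question): the observable result depends entirely on how int.egl defines the / operator for the target platform.

    function divisionExample()
        x int = 7;
        y int = 2;
        z decimal(4,2) = x / y; // 3.00 with Java-style integer division, 3.50 if the fraction is kept as JavaScript does
    end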
July 5 Meeting We mentioned the desire to keep things "pretty much" in line with RBD & CE. One option is to leave stuff out of "base EDT" but also create a separate build/package which includes many of the other features. revisiting some decisions from last meeting... still no structured records, we will have a way to call native cobol/rpg/etc without them still no date, time, char, etc. do support some variations of move: between two references, move-byname, move-for ...move-byname is crazy for structured records, the way it's done now isn't very clean rather than set empty and set initial, have setEmpty and setInitial functions on records and handlers and arrays and anything else...this needs more discussion! literals: null, bool, numbers, string, string w/ux prefix, bytes literal is 0x folllowed by a hex string for example 0x03AB is bytes(2) operators and expressions <<, >>, >>> bitwise shifting on bigint, smallint, int...and shift= ~ for bitwise not on bigint, smallint, int...and ~= add ternary ?: hurray some way to put an annotation on something without putting it in curlies normally you wrote x int { myAnnotation = 3 }; another way to do this is x int {@myAnnotation{3}}; in EDT we will allow that to be outside of curlies and before the declaration, for example @myAnnotation{3} x int; don't allow substring as an L-value or an inout parameter as and isa: can't include ? in the type nullable types: it's not a flag, the thing can really be null, so if you say myNullRecord.field it'll throw a NullValueException operators will behave differently with nullable types than in RBD: they'll throw a NullValueException if an operand is null no support for is/not like and matches are replaced by functions on string for future discussion: regular expression matching on string in operator is replaced by a function Array, but can't do "value in myRecordArray.field" July 6 Meeting we're talking about conversions result of conversion errors? -- document them, as we said for operators and expressions implicit conversions? (e.g. what RBD does for int = boolean, string = any) they're not in the model until the generator or tooling calls makeCompatible() support conversions between non-primitive types? (record to record?) 
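A tiny illustration of the new operators mentioned above, assuming C-style spellings (none of this is final syntax):

    count int = 5;
    mask int = count << 2;          // new shift operators on smallint/int/bigint
    flipped int = ~mask;            // new bitwise not
    label string = mask > 10 ? "big" : "small"; // the new ?: operator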
Tim sez: no defaultNumericFormat, defaultMoneyFormat, defaultTimestampFormat, defaultDateFormat, defaultTimeFormat Tim sez: don't use them, can call a library function called "format" to do this, we could have an RBDString type which does this under the covers SUPPORTED CONVERSIONS all to any any to all boolean to string: gives true/false string to timestamp: the string is parsed, fields come from the timestamp's declaration, we'll pick a format...no conversion to ts w/o pattern allowed because it's ambiguous string to numbers: follow our literal syntax for numbers string to bytes: use the underlying bit pattern to make a bytes value timestamp to string: we'll pick a format all numbers to all numbers all numbers to bytes...use the number's bit pattern...if the bytes has a length it must match the size of the number in bytes bytes to numbers...same in reverse BYTES = BYTES...valid even when only one has a length, and when both have a length & they're different...truncate longer values on the right...if the source is shorter then don't pad just don't update what was there before...so if your bytes(3) is 0x123456 and you assign it a bytes(1) of 0x99 then the bytes(3) ends up with 0x993456 BYTES COMPARED-TO BYTES...they have to be the same size, compare bytes from left to right, until you find a difference, the operand with a one instead of a zero is greater (UNSIGNED!) OVERFLOWS...normally a conversion doesn't have any special check for overflow, we do whatever the underlying language does...there will be special syntax for overflow checking...what is the syntax, and what happens on overflow when you're checking for it? our philosophy is that we can't make edt behave the same in every environment when there are edge cases...so if you're really concerned about overflows you have to tell us specifically to check for it...places where we truncate are an overflow?...support the RBD option to round instead of truncate?...syntax for a checked assignment looks like a function call, but it's a builtin thing, an operation it's OK to use substring operator on a bytes converting xml, json, format function for ts->string, numbers->string, and so on...we'll have libraries for this kind of thing (needs to be defined in the future) a timestamp declared without a pattern means it can hold any timestamp, there's no implicit pattern date/time math will be done with functions defined on timestamp lots of RBD system functions will move to the types, for example DateTimeLib stuff should be functions on timestamps we need to discuss all libraries we need, and the functions in all the types we haven't talked about yet July 7 Meeting We're talking about stereotypes and annotations, we'll say no to most of them now, and add them back (possibly changed) as we decide that we need their features Program stereotypes: only support BasicProgram, but remove msgTablePrefix Library stereotypes: none supported (we thought about using native libraries for DLLs, but decided ExternalTypes should be used instead) Record stereotypes: support Exception Handler stereotypes: support RUIHandler, RUIWidget, BirtHandler ExternalType stereotypes: support NativeType, JavaObject, JavaScriptObject, HostProgram Since we're supporting top-level functions, we should support IncludeReferencedFunctions. It's an annotation in RBD but should be a field of the stereotypes in EDT. Some interesting ideas and questions from Tim. 1. Should we add class as a part type? It's in the model already. 
It lets you have single inheritance and implement interfaces. It could replace handler altogether. Example: class Square extends Rectangle implements Shape, TwoDimensional private lengthOfSide int; function area() returns( int ) return ( lengthOfSide * 2 ); end end 2. Should we expand the way you can use the embed keyword? Ways to make it more powerful: embed things that aren't records, use it to embed fields from other parts, etc. 3. Dynamic access lets you look up fieds, why not functions too? 4. Types as values, such as having a field of type Record. Similar to the Class class in Java. Helpful in defining annotations and stereotypes, since they refer generically to records and fields. Soon, very soon, we need to go through all of the types defined in EDT and RBD, come up with all of the functions & fields & libraries & types for EDT. Now talking about Language Compliance Testing. What exactly should we test? EDT has a core of EGL, and our own extensions. Just about everything we're doing here falls into the category of extensions. The core is always the same. Compliance testing of core stuff, for example the if statement always runs the "then" block when the condition is true. How variables are initialized. The order of execution. Can't actually test these things without using a particular extension. Only testing core isn't very interesting or useful. But maybe Core needs to be specific about the architypes so real tests can be written. The tests shouldn't be coded to run in a particular environment or expect results which only come from a particular environment. We decided to start writing tests based on the API/extension stuff, not Core. Base our expectations on what RBD does. We can't expect to write tests for somebody else's extensions. But we can test that the implementation of the architypes works properly, for example all booleans must do boolean things. Tests can be generated from the MOF model. July 8 Meeting egl.lang is EDT CORE, includes definitions of only ~4 types (string, int, boolean, ...), keep these as compact as possible and then we extend them in eglx.edt (which is NOT CORE), and put the other types there too the extensions will include much of what we really want the type to have (more conversions, functions, operations, etc.) RBD types and libraries (FUTURE) can go in eglx.rbd each type should be in its own file fully commented in egldoc including what is thrown & why move RBD system functions from Libs into types when it makes sense to do so we need *Libs for things that don't belong in types continue to use SysLib but verify it only contains "system" things August 23 Meeting 1. Reference-type variables support nullability Yes! Reference variables declared with a question mark can be null, and their initial value is null. Reference variables declared without a question mark are never null, and their initial value is an object of the appropriate type. Curly braces on the declaration of a nullable reference variable will cause it to be initialized to a non-null value. A validation check will be added to ensure that things which can't be instantiated (interfaces & things with private default constructors) must be declared nullable. Constructors can be in ETs and (soon) handlers. This change means people have to add ? on their reference type variables when moving code from RBD. 2. Changes to arrays 2a.. 2b.. 2c. Initializing arrays with set-values blocks In EDT, };". 3. 
Equality operators In the past == and != on reference variables have always tested if the two variables point to the same underlying value. We made string, number, timestamp (with no pattern), and decimal (with no length) be references in EDT, but we want == and != to do a string/numeric comparison. We discussed adding === and !== operators for testing if two variables point to the same thing. It wouldn't be possible to redefine them, just as you can't redefine the meaning of = for assignment. We decided not to add these operators right now. We might add them in the future.
http://wiki.eclipse.org/EDT:Discussion_topics_from_the_language_meetings
CC-MAIN-2015-18
en
refinedweb
Hi all, I have a quick question about the database globalization. I have a database currently running on WE8ISO88559P1 and we want to convert to either UTF8 or WE8MSWIN1252????? 1. Is there any issue to do that ???? 2. How, simply doing EXP/IMP???? thanks, May depend on your db version. 8i or better should be OK. 8.0 supported it but had some nasty file creation problems. Just make sure your exp/imp client's NLS_LANG is set right first. The western european char set maps easily to utf8. We did have one off-the-shelf application (SalesLogix - CRM) complain about UTF8, so we switched it back. Best of Luck, Last edited by KenEwald; 03-10-2004 at 02:48 PM. "I do not fear computers. I fear the lack of them." Isaac Asimov Oracle Scirpts DBA's need Originally posted by KenEwald We did have one off-the-shelf application (SalesLogix - CRM) complain about UTF8, so we switched it back. thanks for your prompt reply. what do you mean by one of your application complain about UTF8 and you have to swich it back??? What did you switch to??? Do you know any issues with changing the characterset to UTF8 or WE8MSWIN1252?????? Also, I read on Asktom and it said we may have to modify the plsql code as follow: Your plsql routines may will have to change -- your data model may well have to change. You'll find that in utf, european characters (except ascii -- 7bit data) all take 2 bytes. That varchar2(80) you have in your database? It might only hold 40 characters of eurpean data (or even less of other kinds of data). It is 80 bytes (you can use the new 9i syntax varchar2( N char ) -- it'll allocate in characters, not bytes). So, you could find your 80 character description field cannot hold 80 characters. You might find that x := a || b; fails -- with string to long in your plsql code due to the increased size. You might find that your string intensive routines run slower (substr(x,1,80) is no longer byte 1 .. byte 80 -- Oracle has to look through the string to find where characters start and stop -- it is more complex) chr(10) and chr(13) should work find, they are simple ASCII. On clob -- same impact as on varchar2, same issues. thanks Last edited by learning_bee; 03-10-2004 at 02:58 PM. We had to switch from UTF8 back to Western European (WE8ISO8859P1). It's a packaged app, so we couldn't figure out what they were doing. Come to think of it, we did have a problem with JDBC drivers and Thai character convertion. We upgraded the JDBC drivers to match the db version and that fixed it. (9.2.0.4) Other than that, I don't know of any other character set conversion issues, but better check MetaLink for bugs on your specific version. Best of luck, I'd be curious to know it goes. kenwald, thanks, I am running some test now and this is what I do. I export the WE8ISO88559P1 database, below is part of the exp log: so I did create a new database and it's on UTF8, below is the part of my import: import done in WE8ISO8859P1 character set and AL16UTF16 NCHAR character set import server uses UTF8 character set (possible charset conversion) IMP-00046: using FILESIZE value from export file of 1992294400 look like Oracle does the data conversion it self, my import still running with no error yet. Assume the import finish, with no error on the log, will I lose the data at all ??? or I am ok Sounds like you're on your way. Yes the conversion is automatic. 
If it can't convert you'll get an error that looks something like this: INVALID CHARACTER ENCOUNTERED IN: FAILUTF8CONV Not to worry, WE8 is a subset of UTF8, I'd be real suprised if you have any errors. Kenwald, thanks for all the input. I also did another test based on one of the post in asktom. Basically, I change the NLS_LANG on the client machine to UTF8 and do the export from there and I got the error: IMP-00091 ...questionable statistics, I guess I can't do that b/c the database is on WE8ISO88559P1. I've seen that before and it's never been a problem. I suppose the statistics would be questionable because rowsize calculations or something can't be accurately calculated. Statistics can be re-generated easily enough. -Ken in addition to this thread, I also had the additional question about the NLS_DATABASE_PARAMETERS. Below is the query about the NLS_DATABASE_PARAMETERS: PARAMETER VALUE NLS_LANGUAGE AMERICAN NLS_TERRITORY AMERICA NLS_CURRENCY $ NLS_ISO_CURRENCY AMERICA NLS_NUMERIC_CHARACTERS ., NLS_CHARACTERSET WE8ISO8859P.2.0.4.0 as you see the data format was in DD-MON_RR, but when I do the query on sysdata from dual I got: SYSDATE 3/11/2004 1:37:18.000 PM the question is so where this sysdate is driven by??? b/c it's not the same as the data format in NLS_DATABASE_PARAMETERS right overwrites left: NLS_DATABASE_PARAMETERS -> NLS_INSTANCE_PARAMETERS -> NLS_SESSION_PARAMETERS database comes from ? instance comes from init.ora session comes from alter session or dbms_session.set_nls Forum Rules
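Couple of quick checks that go with the advice above (plain data-dictionary views, nothing fancy):

    -- what the database itself is set to
    SELECT parameter, value
      FROM nls_database_parameters
     WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');

    -- and on the client, before running exp/imp (Windows syntax)
    -- set NLS_LANG=AMERICAN_AMERICA.WE8ISO8859P1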
http://www.dbasupport.com/forums/showthread.php?41787-ORA-01031-insufficient-privileges-message&goto=nextnewest
CC-MAIN-2015-18
en
refinedweb
The class that performs the "observing" must implement the java.util.Observer interface. There is a single method: public void update(Observable obj, Object arg) In the following example, a Manager observes an Employee. Suppose a change in salary occurs in Employee. It then notifies the observers using setChanged() and notifyObservers(). Manager's update() method is called to inform him that Employee has changed. public class Employee extends Observable{ float salary; public setSalary(float newSalary){ salary=newSalary; // salary has changed setChanged(); // mark that this object has changed, MANDATORY notifyObservers(this, new Float(salary)); // notify all observers, MANDATORY } } public class Manager implements Observer{ public void update(Observable obj, Object arg){ System.out.println("A change has happened and new salary is " + arg); } } public class Demo(){ public static void main(String[] args){ Employee e = new Employee(); Manager m = new Manager(); e.addObserver(m); // register for observing } } Please enable Javascript in your browser, before you post the comment! Now Javascript is disabled. Your name/nickname Your email WebSite Subject (Maximum characters: 1200). You have 1200 characters left.
http://www.devx.com/tips/Tip/22592
CC-MAIN-2015-18
en
refinedweb
12 September 2012 08:53 [Source: ICIS news] KOLKATA (ICIS)--Plans to revive debt-ridden Madras Fertilizer Ltd (MFL) are being stalled as its Iranian stakeholder opposes a loan-to-equity scheme, in which the Indian government will end up taking a bigger stake in the company, a government official from India said on Wednesday. Naftiran Intertrade Ltd, an affiliate of National Iranian Oil Company that holds a 25.77% stake in MFL, has posed objections to the plan, the official from India’s Department of Fertilizer said. “The Indian government has offered conversion of MFL’s existing loan and interest liabilities of $70m (€55m) into equity as part of revival of the company, but the resultant increase in Indian government’s equity stake in the company has been opposed by Naftiran,” the official said. According to the Department of Fertilizer, the Indian government currently holds a majority stake of 59.50% in MFL. “MFL had drawn up plans to switch feedstock to natural gas to reduce costs and resume production of complex NPK [nitrogen phosphorous potassium] fertilizer and the project was scheduled for completion by end-2013,” the official said. Production at the company’s 2,250 tonnes/day NPK facility has been halted since 2008-2009. “But the project was dependent on reduction of loan and interest liability of the loss-making MFL and with a co-promoter opposing the loan conversion, the scheduled completion would be delayed,” he added. As per the Sick Industries Companies Act, MFL was referred to the Board for Industrial and Financial Reconstruction (BIFR) in 2009. The company had incurred a loss of $91m in the year ending 31 March 2012. According to filing made to BIFR, MFL had cited antiquated technology and plant, high cost of feedstock and high energy consumption as primary causes for posting losses. To meet it feedstock requirement of 1.54 mmscmd (million metric standard cubic metres per day) of natural gas, the company is negotiating for supplies from Reliance Industries offshore Krishna ?xml:namespace> But as contingency plans, MFL is also exploring options of sourcing natural gas from Indian Oil Corp’s (IOC) proposed liquefied natural gas (LNG) terminal at Ennore, near Chennai and Gail India’s LNG terminal at In Manali, which is located 20 kilometres from Chennai, MFL’s urea plant with a 486,750 tonne/year capacity is currently running at full
http://www.icis.com/Articles/2012/09/12/9594686/indias-madras-fertilizer-revival-plan-stalled-by-iran.html
CC-MAIN-2015-18
en
refinedweb
The new Kernel Transaction Manager (KTM) in Windows Vista allows you to perform a number of file and/or registry operations, and have them take effect in an indivisible ("all or nothing") fashion called a "transaction". Although invisible to the end-user, adding transaction support to the Windows registry and file system was a significant step along the road to increased robustness, especially in the face of extreme circumstances (e.g. electricity failures). If you can't even "imagine" using a database that didn't support transactions, or wouldn't dream of formatting your disk with FAT instead of NTFS, then the KTM is for you … This article is broken into the following sections: No new technology would be complete without adding a few more acronyms to the lexicon. The fundamental benefit that the use of transactions brings to the table, is reducing the number of possible outcomes you have to consider to two (i.e. complete success or total failure); while letting the transaction resource manager (RM) handle the difficult job of ensuring that your operations occur in an "all or nothing" fashion. The use of transactions originated in databases. As time has gone by, the use of transactions has slowly spread around the world of computer science: Transactions sound sort of interesting, but should I be using them? In almost all cases, using transactions will increase the robustness to your software. Even if your software is completely free of bugs (just pretend) your application can still fail while executing any line of code (e.g. the electricity goes off or a poorly written driver crashes the entire system). Here are a few common scenarios that would benefit greatly from transactions: In general, transactions are appropriate any time you are performing multiple registry or file operation, particularly if those files or registry entries reference one another. Essentially, transactions ensure that the data your application deals with (i.e. registry keys, files, etc) are left in a consistent state when something goes wrong. Windows Vista introduces a transacted file system and a transacted registry, allowing you to commit your changes to the file system and/or registry in one atomic operation. If anything interrupts the transaction (e.g. your application crashes or exits, the power fails, Windows crashes) while the transaction is in progress (i.e. a transaction that has been created, but not committed) the transaction will be rolled-back. At your discretion, you can also rollback the transaction within your code. You should also be aware that a transaction can be cancelled by the system at any time (e.g. if the system runs low on resources). Essentially, a transaction is performed as in the following pseudo-code: HANDLE hTrans = ::CreateTransaction(...) // create the transaction ::MoveFileTransacted(oldPath, newPath, ..., hTrans); // (...and other file and registry function calls) ::CommitTransaction(hTrans); // OR ::RollbackTransaction(hTrans); An MSDN article lists all of the new transacted file functions. It also has a list of functions whose signature is the same, but the behaviour changes because of transactions (e.g. FindNextFile will work on the transacted "view" of the file system, if you started your file-finding operation with a call to FindFirstFileTransacted). FindNextFile FindFirstFileTransacted In the context of Windows transactions, a resource manager is a "system" that provides the transaction support. 
A resource manager takes care of the roll-back procedure when things go wrong, and makes sure all commits occur atomically. TxF, TxR and SQL Server are all examples of resources managers. Writing a resource manager is a very advanced topic, and is beyond the scope of this article. More information can be found in the MSDN articles Writing a Resource Manager and Programming Considerations For Writing Resource Managers. You know those times when you've got a "brilliant" idea; you're all enthusiastic and start coding it up, and then an hour into coding you realise that although your idea will be very elegant for the simple case, it has the potential to cause big problems - so you hit "undo checkout" … The original implementation of transactions on Vista (i.e. during development, when it was called Longhorn) followed an "implicit" model. Essentially, you created a transaction, called all of the file and registry functions you wanted, and then committed (or rolled back) the transaction, as in the following pseudo-code: ::CreateTransaction // same as in the explicit model ::SetCurrentTransaction // bind this thread to the transaction // (so the file and registry calls in the next step know whether to be // transacted or not (in the case of a multi-threaded app) // NOTE: SetCurrentTransaction is NOT present in the explicit model ::MoveFile(oldPath, newPath); // (...and other file and registry function calls) // call the original file and registry functions // ALL calls to the various file and registry functions // will form part of the transaction ::CommitTransaction // same as in explicit model // OR ::RollbackTransaction // same as in explicit model Somewhere around Q3 of 1996, Microsoft decided to, figuratively, rollback their implementation of transaction support, and they moved to the current "explicit" model. In the explicit model, you call the create and (commit or rollback) functions as before. The significant difference is that the explicit model introduces a whole set of new functions, which are essentially duplicates of the file and registry functions, except they also take a handle to a transaction as a parameter. "....." This author would also add that the implicit model would not have allowed you to have two transactions open simultaneously (which could certainly be the case if you were calling some library code). Because this change was fairly late in the development cycle, the unfortunate side effect is that there are numerous outdated articles floating around the internet. (e.g. July 2006 MSDN article, Because we can blog, etc) The moral of the story? Take those "This article is based on a prerelease version of Windows Vista. All information herein is subject to change" warning banners at the top of the MSDN article seriously. Just as in the world of databases, the effect of a transaction is easy to understand when there is only one process modifying the items covered by the transaction. When there are multiple processes attempting to access the object covered by a transaction, the rules for what happens gets slightly more messy. With respect to interactions, it can be helpful to think of transactions as an advanced form of sharing permissions. Just as the way you open a file (e.g. with FILE_SHARE_READ and/or FILE_SHARE_WRITE) effects how other processes can access the file, so do transactions. Looking at an example may help clarify the situation... 
FILE_SHARE_READ FILE_SHARE_WRITE In the following image, we can see the effect of our transaction, both within our own program, and interaction with outside processes... Here are the operations, in order, that created the image: FindFirstFileEx HANDLE CreateTransaction The code provided includes a C++ class KTMTransaction (in the files KTM.h and KTM.cpp) The goal of this class is to enable the best of both worlds. Your application will work on previous versions of Windows, but it will work "better" on Windows Vista. Without any changes to the code you write, the function calls in this class will form a transaction if the application is running on Windows Vista, and will not be transacted if run on earlier versions of Windows (which isn't THAT bad - it's how your code works right now anyway). KTMTransaction The class wraps all of the new transacted functions, and provides an interface that is identical to the existing Win32 functions you are familiar with. All of the functions in the class take the exact same parameters (in the same order) as the regular Win32 functions of the same name, so you can just use MSDN help for details on the parameters. (e.g. you just call the class's DeleteFile function and the class will call either ::DeleteFileTransacted or ::DeleteFile, as appropriate). DeleteFile ::DeleteFileTransacted ::DeleteFile The class has no #include dependencies and all function calls to the transacted functions are done via LoadLibrary and GetProcAddress so that: #include LoadLibrary GetProcAddress The code supports both UNICODE and ANSI builds. UNICODE ANSI The relevant public portions of the class are listed here: class KTMTransaction { public: KTMTransaction(); ~KTMTransaction(); // causes rollback if you do not call Commit bool RollBack(); // returns true for success bool Commit(); // returns true for success // handle to current transaction HANDLE GetTransaction(); ///////////////////////////////////////////// // NOTE: The transacted functions take the // exact same parameters (in the same order) as // the regular Win32 functions of the same name. ///////////////////////////////////////////// // File Functions BOOL CopyFile( ... ); BOOL CopyFileEx( ... ); BOOL CreateDirectoryEx( ... ); BOOL CreateHardLink( ... ); HANDLE CreateFile( ... ); BOOL DeleteFile( ... ); HANDLE FindFirstFileEx( ... ); DWORD GetCompressedFileSize( ... ); BOOL GetFileAttributesEx( ... ); DWORD GetFullPathName( ... ); DWORD GetLongPathName( ... ); BOOL MoveFileEx( ... ); BOOL MoveFileWithProgress( ... ); BOOL RemoveDirectory( ... ); BOOL SetFileAttributes( ... ); // Registry Functions LONG RegCreateKeyEx( ... ); LONG RegDeleteKey( ... ); LONG RegOpenKeyEx( ... ); }; Here is an example of how to use the class: // imagine you have 3 files (a.txt, b.txt, c.txt) // sitting in a folder KTMTransaction trans(); trans.DeleteFile( "a.txt" ); trans.DeleteFile( "b.txt" ); // pretend the electricity fails here trans.DeleteFile( "c.txt" ); trans.Commit(); What happens when the electricity comes back on? On Windows XP (and earlier) files a.txt and b.txt will be deleted, and c.txt will still be present. On Windows Vista none of the files will be deleted (the KTM will rollback the transaction on startup). An example application is provided (built as a Visual Studio 2005 project). The application should run on versions of Windows before Vista (e.g. 2000 or XP), however the transaction buttons will not have any effect on these Operating Systems. 
Here are some great links for more information on transactions: This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL) #if (_WIN32_WINNT >= 0x0500) ... #endif // (_WIN32_WINNT >= 0x0500) BOOL KTMTransaction::MoveFileEx(LPCTSTR lpExistingFileName, LPCTSTR lpNewFileName, DWORD dwFlags){ // Overload - just calls other function if (UseTransactedFunctions()){ return m_ProcAddress_MoveFileTransacted(lpExistingFileName, lpNewFileName, NULL, NULL, dwFlags, m_transaction); }else{ return ::MoveFileEx(lpExistingFileName, lpNewFileName, dwFlags); } } #if (_WIN32_WINNT >= 0x0500) BOOL KTMTransaction::MoveFileWithProgress(LPCTSTR lpExistingFileName, LPCTSTR lpNewFileName, LPPROGRESS_ROUTINE lpProgressRoutine, LPVOID lpData, DWORD dwFlags) { if(UseTransactedFunctions()){ return m_ProcAddress_MoveFileTransacted(lpExistingFileName, lpNewFileName, lpProgressRoutine, lpData, dwFlags, m_transaction); }else{ return ::MoveFileWithProgress(lpExistingFileName, lpNewFileName, lpProgressRoutine, lpData, dwFlags); } } #endif // (_WIN32_WINNT >= 0x0500) General News Suggestion Question Bug Answer Joke Rant Admin Use Ctrl+Left/Right to switch messages, Ctrl+Up/Down to switch threads, Ctrl+Shift+Left/Right to switch pages.
http://www.codeproject.com/Articles/17917/Vista-KTM-Transaction-Management-in-Vista-and-Beyo?fid=394277&df=10000&mpp=10&noise=1&prof=True&sort=Position&view=Quick&spc=Relaxed
CC-MAIN-2015-18
en
refinedweb
A quick peak at the Applet Viewer Window displaying the applets in large icon mode. Now viewing in details view. Notice the name and description displayed for each applet. And finally notice an applet that has just been launched by double clicking its icon. This demonstrates the applets are really being hosted by the applet engine. This article assumes a basic understanding of writing unmanaged C/C++ dynamic link libraries and exporting functions from those libraries. Also basic understanding of using P/Invoke to access unmanaged libraries will benefit the reader, but I will attempt to explain as much as possible. The purpose of this article is to discuss several problems a developer might face when attempting to mix unmanaged and managed code. A problem that is all too often encountered when attempting to interact with the current Windows API from a managed language such as C# or VB.NET. In this article, I will discuss the problems I faced and how I solved the problems using a combination of managed and unmanaged code, C# and VC++ respectively. Here is a brief overview of what you can learn by reading this article: sizeof So let’s begin by discussing why I decided to write this article and code. Being the curious coder I am, I am always intrigued by the underlying implementation of features in the Windows operating system. Control panel applets have always been a somewhat uncovered topic for some reason, yeah there is documentation on them in the MSDN library, but rarely any good working examples. Let along how they actually work. Before I set out to write this article, I already had a pretty good understanding of how applets are written, having written several as a professional developer in the past. However, it wasn’t until I stepped into shell development that I became completely curious just how Windows pulled off these wonderfully useful creatures. As days went by developing my shell, I came across many methods for actually launching control panel applets from code. Most of the implementations involved hard coding references to rundll32.exe to call the Control_RunDLL function with various arguments to launch off control panel applets. This always bothered me, because this was far from dynamic, you had to know about the applets ahead of time, at least somewhat to figure out how to launch them. I decided I wanted a means to enumerate and host the applets just like Windows. Control_RunDLL So having said the why, let’s discuss what an applet actually is. By the way, all of the information presented here is my personal dissection of the documentation found in the MSDN library. Control panel applets are nothing special, simply .dlls built with a special extension, .cpl, and placed into the Windows system directory. If you were to attempt to write one, you would have to choose a language that creates unmanaged DLLs and allows for exporting of unmanaged functions. Something that C# and VB just don’t do, so look to C++ or Delphi to pull this off. It’s not all that hard, but it’s beyond the scope of the article. Now that we know what an applet is, an unmanaged DLL compiled with a .cpl extension, let's look at how they work. Digging into the documentation, you will discover the CplApplet function. It is the sole function an applet must export from its library to interface with Windows. The function looks like this: CplApplet LONG CPlApplet(HWND hWnd, UINT uMsg, LPARAM lParam1, LPARAM lParam2); The function is very similar to the WndProc functions behind all windows. 
All communication to applets occurs through this function. You do not need to understand anything more about this function for now. If you are interested, do a search for CPlApplet in the MSDN library and you will have the pleasure of translating this wonderful function just as the rest of us were forced to do. WndProc CPlApplet Ok, so let’s review what we know about applets. This will be the foundation of how we discover and communicate with all applets: Just when life is looking easy, and you are saying how hard can this be, in walks the first problem. How do we call this function from unmanaged code? As you know, you can call unmanaged functions from DLLs using System.Runtime.InteropServices.DllImportAttribute, the problem lies in the fact that you have to know the name of the library while coding. So how do we load unmanaged DLLs on the fly and call that unmanaged function if we can’t define its entry point ahead of time. System.Runtime.InteropServices.DllImportAttribute The answer lies in several functions. LoadLibrary, FreeLibrary, and GetProcAddress. We will use LoadLibrary to load the applet’s .cpl file (a.k.a. nothing more than a .dll) by its filename, and use GetProcAddress to get an unmanaged function pointer to the CplApplet function. FreeLibrary will be used to release the DLL once we are finished using it. This is a standard practice, and any of you who have done dynamic function pointers can probably skip ahead just a bit. However, I can remember when this was a magic bag of voodoo magic, and needed a little explanation. LoadLibrary FreeLibrary GetProcAddress Let’s look at how we can do this. First we will need to search the Windows System directory for all files ending with the .cpl extension. This is very easy to do using the methods and classes in the System.IO namespace. Here is the method that will do the grunt work of discovering these on the fly. Let’s take a look. But first let me break down the classes that we will be working with, the classes I have created to make the magic happen. Very briefly they are… System.IO AppletEngine AppletLibrary Applet The AppletEngine class contains the following method that will allow us to find the applet libraries. public FileInfo[] FindAppletLibraries(string path) { DirectoryInfo di = new DirectoryInfo(path); if (di != null) { return di.GetFiles("*.cpl"); } return new FileInfo[] {}; } This will allow us to be returned an array of FileInfo objects that contain information about the files that fit our search. This is all pretty standard stuff and shouldn’t cause any questions as of yet. If it does, refer to the docs on MSDN or my source code and I’m sure the lights will come on quickly. FileInfo Now that we have discovered the files that end with .cpl, we will assume them to be all applet libraries. Let’s look at how we can use LoadLibrary and GetProcAddress to load them and get that function pointer so we can communicate with the applets. We simply need to loop through the FileInfo objects and call LoadLibrary on the filename to load the DLL, and assuming that succeeds, we can call GetProcAddress to return a function pointer to the CplApplet function. Here is a snippet from the AppletLibrary constructor that implements this algorithm. 
public AppletLibrary(string path, IntPtr hWndCpl) { _path = path; _hWndCpl = hWndCpl; _applets = new ArrayList(); if (!System.IO.File.Exists(path)) throw new System.IO.FileNotFoundException ("No applet could be found in the specified path.", path); _library = LoadLibrary(path); if (base.IsNullPtr(_library)) throw new Exception("Failed to load the library '" + _path + "'"); _appletProc = GetProcAddress(_library, "CPlApplet"); if (base.IsNullPtr(_appletProc)) throw new Exception("Failed to load CPlApplet proc for the library '" + _path + "'"); this.Initialize(); } Let’s discuss just what this code snippet is doing. First off, it will try and call LoadLibrary on the path to the file, this should be something like C:\Windows\System\SomeApplet.cpl. The method will return an IntPtr which is a handle to the library. Look at the MSDN docs for more info on this, I’d rather let the creators explain it. If the function succeeds, the IntPtr will be something other than IntPtr.Zero. Once we have a handle to the library, we can call GetProcAddress with the handle and the name of the function to get yet another IntPtr which is an unmanaged function pointer. IntPtr IntPtr.Zero Here now we are faced with a rather tricky problem. How do you call an unmanaged function pointer from managed code? At first glance the answer seems simple, we use delegates. However correct that solution may seem, I have not been able to uncover a means of creating a delegate in managed code to an unmanaged function pointer. Several methods in the Marshal class look promising, namely GetUnmanagedThunkForManagedMethodPtr. Here I will admit defeat because I cannot for the life of me figure out how to work this method. The docs are no help, and I simply got tired of racking my brain to figure it out. I am hoping that someone will read this article and come up with a solution for what I am about to do next. Part of this article, by the way, I am assigning to the rest of you interop gurus to help me figure that method out. I’m certain it can be done, it’s just not worth my time to waste any more time trying to figure it out. If someone does, please let me know! Marshal GetUnmanagedThunkForManagedMethodPtr Enter our own trickery. I decided the easiest way to do this would be to create a small unmanaged C++ DLL that could call it for us. Calling function pointers in C++ is as easy as declaring integers to the rest of the managed world. So breaking open a Win32 project and setting its project type to dynamic link library, I created a DLL to do this work for me. I called it AppletProxy.dll for lack of a better term, because it will act as a proxy between our managed code in C# and the unmanaged function exported by the applet. I am not going to cover how to create unmanaged DLLs here, that is also beyond the scope of the article. If you are really interested, the source code should provide you with a very simple example to learn from, and as always, I’m around for questioning if you get stuck. Here is what the unmanaged function looks like that will be the key to calling the unmanaged function pointers for us. 
LONG APIENTRY ForwardCallToApplet(APPLET_PROC pAppletProc, HWND hwndCpl, UINT msg, LPARAM lParam1, LPARAM lParam2) { if (pAppletProc != NULL) return pAppletProc(hwndCpl, msg, lParam1, lParam2); // call the unmanaged function pointer, this is the same // as calling a regular function, except we’re using // the variable instead of a function name return 0L; } Ok, I know a lot of you are looking at this and thinking, what in the heck is this guy doing? I don’t understand the syntax of this stuff, and what’s with all the funky data types. I’m hoping that’s not the case, because if it is, you should really go open MSDN and look up the data types. They are all readily available in the docs. This method will accept an unmanaged function pointer and call the method it points to and return us the result. Now that we have our proxy function to call the unmanaged function pointer, you might be wondering how we call that from our managed code. We simple use P/Invoke and define the entry point just like any other API. Here is how to do just that. [DllImport("AppletProxy")] public static extern int ForwardCallToApplet(IntPtr appletProc, IntPtr hWndCpl, AppletMessages message, IntPtr lParam1, IntPtr lParam2); One of the problems I encountered when I started trying to call API functions were the difference in data types. I had a real problem trying to figure out what an HWND or LPARAM translated to in managed code. Here is a quick reference for you newbies that will help you out when trying to translate functions from C/C++ to managed code. HWND LPARAM HANDLE HINSTANCE HICON System.IntPtr DWORD LONG BOOL System.Int32 int LPSTR LPTSTR LPCTSTR System.String System.Text.StringBuilder I hope this helps, because I know for a while I was constantly heading off to the C header files to find the underlying definitions and then doing some research on MSDN to figure out what the data type was supposed to be declared as in managed code. The main thing to remember in my opinion is that if it starts with "H", it’s most likely a handle of some sort which maps nicely to System.IntPtr. The specific implementations of course may vary from time to time, but as a general guideline these have worked out just fine. Now that we have covered a few tricky concepts, let’s move back into some functionality discussions. At this point, we can load any applet we want, and call the CplApplet function to communicate with the applet. But what exactly are we going to pass to this function to get the results we want. There are several things that you’ll notice Windows presents for applets, by looking at the Windows Control Panel in Windows Explorer. There are icons, and text for all the applets. How does it get those? Answer: By calling the CplApplet function with specific messages and getting specific results back. Applets are designed (or should be designed) to provide a name and description as well as an icon to be displayed for the user. Let’s look at how we can recreate this functionality and talk to our applets. A quick look at the docs on MSDN and we find a set of messages and several structures that are used to communicate with the applet. Another problem is coming, but it’s not nearly as rough as any previous ones, but if you didn’t know what to do, it could be really a world ending problem. Don’t worry I will show you how to deal with it, but let’s see what these messages and structures look like first. 
To communicate with the applet we will send a message, and optionally a pointer to a structure to receive information back from the applet. The structures are where our last hurdle occurs. Here are the definitions in managed code: /// <summary> /// The standard Control Panel Applet Information structure /// </summary> [StructLayout(LayoutKind.Sequential)] public struct CPLINFO { /// <summary> /// The resource Id of the icon the applet wishes to display /// </summary> public int IconResourceId; /// <summary> /// The resource Id of the name the applet wishes to display /// </summary> public int NameResourceId; /// <summary> /// The resource Id of the information the /// applet wishes to display (aka. Description) /// </summary> public int InformationResourceId; /// <summary> /// A pointer to applet defined data /// </summary> public IntPtr AppletDefinedData; /// <summary> /// A simple override to display some debugging information /// about the resource ids returned from each applet /// </summary> /// <returns></returns> public override string ToString() { return string.Format( "IconResourceId: {0}, NameResourceId: {1}, InformationResourceId: {2}, AppletDefinedData: {3}", IconResourceId.ToString(), NameResourceId.ToString(), InformationResourceId.ToString(), AppletDefinedData.ToInt32().ToString("X")); } } /// <summary> /// The advanced Control Panel Applet Information structure /// </summary> [StructLayout(LayoutKind.Sequential, CharSet=CharSet.Ansi)] public struct NEWCPLINFO { /// <summary> /// The size of the NEWCPLINFO structure /// </summary> public int Size; /// <summary> /// This field is unused /// </summary> public int Flags; /// <summary> /// This field is unused /// </summary> public int HelpContext; /// <summary> /// A pointer to applet defined data /// </summary> public IntPtr AppletDefinedData; /// <summary> /// A handle to an icon that the applet wishes to display /// </summary> public IntPtr hIcon; /// <summary> /// An array of chars that contains the name /// that the applet wishes to display /// </summary> [MarshalAs(UnmanagedType.ByValTStr, SizeConst=32)] public string NameCharArray; /// <summary> /// An array of chars that contains the information /// that the applet wishes to display /// </summary> [MarshalAs(UnmanagedType.ByValTStr, SizeConst=64)] public string InfoCharArray; /// <summary> /// An array of chars that contains the help file that /// the applet wishes to display for further help /// </summary> [MarshalAs(UnmanagedType.ByValTStr, SizeConst=128)] public string HelpFileCharArray; } There are really two kickers here. The first is in the CPLINFO structure. Once returned to us, it contains the integer IDs of resources in the applet’s resource file that contain either the string for a name or description, or an icon. A quick glance at the docs, I realized we could extract this information using LoadString and LoadImage. However, it states clearly that you should use the MAKEINTRESOURCE macro on the resource ID before passing it to the LoadString or LoadImage functions. I dug through the header files and discovered a really nasty looking conversion. I won’t even bring it up, because I think the Windows developers did it just to screw with the rest of the world, a.k.a. anyone not programming in C/C++! It’s funknasty and I don’t know how to convert it to C#, believe me I tried. 
Here is what it looks like in the header file: CPLINFO LoadString LoadImage MAKEINTRESOURCE #define IS_INTRESOURCE(_r) (((ULONG_PTR)(_r) >> 16) == 0) #define MAKEINTRESOURCEA(i) (LPSTR)((ULONG_PTR)((WORD)(i))) #define MAKEINTRESOURCEW(i) (LPWSTR)((ULONG_PTR)((WORD)(i))) #ifdef UNICODE #define MAKEINTRESOURCE MAKEINTRESOURCEW #else #define MAKEINTRESOURCE MAKEINTRESOURCEA #endif // !UNICODE Now if any of you can translate that to C#, again please clue me in. I’d love to know. I consider myself pretty good at translating, but again I may have just been up too late or had too much Mountain Dew to operate at the level required to translate this. So like my previous solution to the function pointers, I went back to my C DLL and made another wrapper function so that I could just use the real deal and be done with it. Here is what the wrapper functions look like: HICON APIENTRY LoadAppletIcon(HINSTANCE hInstance, int resId) { return ::LoadIcon(hInstance, MAKEINTRESOURCE(resId)); } HANDLE APIENTRY LoadAppletImage(HINSTANCE hInstance, int resId, int width, int height) { return ::LoadImage(hInstance, MAKEINTRESOURCE(resId), IMAGE_ICON, width, height, LR_DEFAULTCOLOR); } Ok, now that we can load the strings and icons from the resources of the applet, we hit the last of our hurdles. This probably caused me more trouble than any of them, but ended up being the easiest to solve once I understood how to tackle the problem. Like I stated before, the structures above will be used to pass information back to us when we call into the applet. The CPLINFO structure is really straight forward, but the NEWCPLINFO structure is kinda different. Some applets expose dynamic information, based on some changing resource for example. Something like wi-fi or some network or disk resource the description or icon might need to be changed. So we have to refresh the information each time using the NEWCPLINFO structure. As I started translating the structure to C# I discovered the following definition in the docs. A fixed length char[] of predefined length. As you may or may not know you cannot pre-initialize public fields in managed structures. And I didn’t understand how to get my structure to map to the real one because of this limitation. NEWCPLINFO char[] typedef struct tagNEWCPLINFO { DWORD dwSize; DWORD dwFlags; DWORD dwHelpContext; LONG_PTR lpData; HICON hIcon; TCHAR szName[32]; TCHAR szInfo[64]; TCHAR szHelpFile[128]; } NEWCPLINFO, *LPNEWCPLINFO; Take the szName field for our example. How do we define a fixed length array of characters in our structure? The answer lies in the MarshalAs attribute. We can define our field as a string and let the P/Invoke services marshal the data as in a specific format. This is what we can do to get the desired marshalling to occur. We will define our field to be marshaled as an array of null terminated characters with a fixed array size. szName MarshalAs [MarshalAs(UnmanagedType.ByValTStr, SizeConst=32)] public string NameCharArray; A small side note is in the string conversions. Do we use ANSI or Unicode? For the most part Unicode is the data type of choice for strings, but because we don’t know ahead of time the real struct uses the TCHAR type which will map to the appropriate type when compiled based on the various #defines in the C/C++ header files. We don’t have that luxury, but we do have a solution. 
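Circling back to the MAKEINTRESOURCE puzzle for a moment: as far as I can tell, the macro never allocates anything; it just smuggles the 16-bit resource id into a pointer-sized value. So a managed stand-in can be a plain cast. This is my own reading, not code from the article's download, and the helper name is made up:

using System;

static class ResourceId
{
    // MAKEINTRESOURCE(i) is (LPTSTR)(ULONG_PTR)(WORD)i, so the id simply rides
    // in the low 16 bits of a pointer-sized value; no macro magic is required.
    public static IntPtr MakeIntResource(int resId)
    {
        return new IntPtr((ushort)resId);
    }
}

With a helper like that, LoadString and LoadImage could be declared to take an IntPtr for the resource name and be called directly from managed code. Anyway, back to the string fields and how to marshal them.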
We will apply the StructLayout attribute to our structures to define sequential layout of the fields, which will keep the fields in the order we specify, (the CLR tends to optimize structure fields to what it feels is optimal, which can really hose us when dealing with pointers and unmanaged code, so we need to tell it that it needs to leave our fields in the order we define) and also the character set to use for any strings it encounters. This is accomplished by the following: struct TCHAR #define StructLayout [StructLayout(LayoutKind.Sequential, CharSet=CharSet.Ansi)] public struct NEWCPLINFO Having piddled around with the various character sets, I discovered the strings are displayed correctly using the ANSI charset on my Windows XP pro box on down to my 98 test machine. I don’t know if this is the right choice, hopefully someone will step up and put a final yes or no to this question. Again I’m completely open for suggestions as this was a learning experience for me as well. I hope to have all of my questions answered and sleep comfortable in my bed at some point in the next 30 years, but that’s only going to happen when all of my Windows API questions have been answered. And at the rate I find questions, I’ll either have to get a job at Microsoft or steal some of the source to get at the answers. I’m quite frankly open for both at this point. So many hours of my life have been wasted trying to understand the minds of the guys that made those APIs, but again I digress. Ok, we’re nearing the completion of our translations and definitions, so let’s look at how to communicate with the applets. The applets expect a certain order of messages. The first call to CPlApplet should initialize the applet and let it know we are loading. The second call will tell us how many applets are actually contained in the .cpl file. This is a one to many relationship as one DLL could have an unlimited number of applets contained inside. The third and fourth calls will inquire the information from the applet. After getting the information we need, we can ask the applet to show its UI by sending it a double click message (don’t stress this is normal, and I’ve made an enumeration for the messages. I renamed them to fit my .NET likings but if you are interested, find the cpl.h header file and look at the CPL_INIT, CPL_GETCOUNT, CPL_INQUIRE, CPL_NEWINQUIRE, CPL_DBLCLK, CPL_CLOSE, CPL_STOP messages for details). Here is the message enumeration that I translated from the MSDN docs: CPL_INIT CPL_GETCOUNT CPL_INQUIRE CPL_NEWINQUIRE CPL_DBLCLK CPL_CLOSE CPL_STOP public enum AppletMessages { Initialize = 1, /* This message is sent to indicate CPlApplet() was found. */ /* lParam1 and lParam2 are not defined. */ /* Return TRUE or FALSE indicating whether the control panel should proceed. */ GetCount = 2, /* This message is sent to determine the number of applets to be displayed. */ /* lParam1 and lParam2 are not defined. */ /* Return the number of applets you wish to display in the control */ /* panel window. */ Inquire = 3, /* This message is sent for information about each applet. */ /* A CPL SHOULD HANDLE BOTH THE CPL_INQUIRE AND CPL_NEWINQUIRE MESSAGES. */ /* The developer must not make any assumptions about the order */ /* or dependance of CPL inquiries. */ /* lParam1 is the applet number to register, a value from 0 to */ /* (CPL_GETCOUNT - 1). lParam2 is a far ptr to a CPLINFO structure. 
*/ /* Fill in CPLINFO's IconResourceId, NameResourceId, InformationResourceId and AppletDefinedData fields with */ /* the resource id for an icon to display, name and description string ids, */ /* and a long data item associated with applet #lParam1. This information */ /* may be cached by the caller at runtime and/or across sessions. */ /* To prevent caching, see CPL_DYNAMIC_RES, above. */ Select = 4, /* The CPL_SELECT message has been deleted. */ DoubleClick = 5, /* This message is sent when the applet's icon has been */ /* double-clicked upon. lParam1 is the applet number which was selected. */ /* lParam2 is the applet's AppletDefinedData value. */ /* This message should initiate the applet's dialog box. */ Stop = 6, /* This message is sent for each applet when the control panel is exiting. */ /* lParam1 is the applet number. lParam2 is the applet's AppletDefinedData value. */ /* Do applet specific cleaning up here. */ Exit = 7, /* This message is sent just before the control panel calls FreeLibrary. */ /* lParam1 and lParam2 are not defined. */ /* Do non-applet specific cleaning up here. */ NewInquire = 8 /* Same as CPL_INQUIRE execpt lParam2 is a pointer to a NEWCPLINFO struct. */ /* A CPL SHOULD HANDLE BOTH THE CPL_INQUIRE AND CPL_NEWINQUIRE MESSAGES. */ /* The developer must not make any assumptions about the */ /* order or dependance of CPL inquiries. */ } There is a lot of stuff I’m skipping because it’s kinda dull. I know you all are interested in seeing something running. But if you are curious, read through the comments in my code and the docs in MSDN to see the exact steps the applets are expecting. So let’s begin by initializing an applet library and finding out how many applets are inside. The following snippet from the AppletLibrary class demonstrates how this is achieved: public void Initialize() { if (this.CPlApplet(AppletMessages.Initialize, IntPtr.Zero, IntPtr.Zero) == (int)BOOL.TRUE) { int count = this.CPlApplet(AppletMessages.GetCount, IntPtr.Zero, IntPtr.Zero); // System.Diagnostics.Trace.WriteLine // (string.Format("{0} applets found in '{1}'", count, _path)); for(int i = 0; i < count; i++) { Applet applet = new Applet(this, i); System.Diagnostics.Trace.WriteLine(applet.ToString()); _applets.Add(applet); } } } Take note of the class hierarchy here. AppletLibrary contains Applets. One to many. The AppletLibrary contains a property that exposes an ArrayList of Applet objects. I purposefully used ArrayList as I do not want the reader getting confused by any custom collection classes. Believe me, if this were production code, that would be a strongly typed collection class either by implementing ICollection and others, or by inheriting from CollectionBase. That again is outside the scope of this article, so try and stay focused on the problems at hand, and not with my coding style or means for enumerating sub objects. ArrayList ICollection CollectionBase Now that we know how many applets are actually inside an applet library, we need to extract the information from the applet so we can display its name, a short description, and an icon for the applet. What good would this do us if we couldn’t show it to the users like Windows does, right? Now, take a look a the AppletLibrary class. This is where the remainder of our discussion lies. As I stated before, the applets will pass us information back in structures. If the applet is using static resources, we will have to extract them. 
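One quick aside before we dig into extraction: the enumeration above also defines the teardown messages, and the docs expect them in a particular order (Stop once per applet, then Exit once per library, just before FreeLibrary is called). Here is a sketch of what the mirror image of Initialize() might look like; the property names are hypothetical and the class in the download may differ:

public void Shutdown()
{
    // CPL_STOP once for every applet, handing back its applet-defined data
    foreach (Applet applet in _applets)
    {
        this.CPlApplet(AppletMessages.Stop, new IntPtr(applet.Index), applet.AppletDefinedData);
    }

    // CPL_EXIT exactly once per library, right before the module is unloaded
    this.CPlApplet(AppletMessages.Exit, IntPtr.Zero, IntPtr.Zero);
}

With that noted, back to pulling the name, description, and icon out of each applet.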
So let's see how this is accomplished by yet another snippet of code, and talk about the caveats. This is the fun stuff I think, pulling off some pointers and marshalling! Here I will demonstrate a small scenario that uses an unsafe code block, and another similar safe means to achieve the same ends using P/Invoke that does not require marking the code as unsafe. Keep in mind, our goal is to stay in the managed world as much as possible, to allow us to reap the benefits of garbage collection and type safety. There are plenty of days left to track down bugs because it was cool to pin pointers using the fixed statement and cast byte* around like they were Mountain Dew cans. Yeah I can do it, but it kinda defeats the purpose of a clean type safe managed language, so I avoid it if at all possible. There are probably a lot of cocky guys around who want to do it just to be cool, but believe me I was one of those guys, till deadlines hit and CEOs are asking when my app is going to stop crashing at random times. Pointers are cool, not using pointers is cooler. Trust me when I say life in a managed world is good, very good.

The first block that uses unsafe code actually requires the unsafe statement simply to demonstrate an alternative to the sizeof() function we're all so used to. The sizeof() operator is a pretty standard means for determining the size of a structure in bytes. Unfortunately, it must be used in unsafe code. Its alternative is Marshal.SizeOf, which does not require unsafe code statements. Here, have a look for yourself. This code is going to call the applet and ask for its information, and then use the pointers returned to cast into our managed structures.

public void Inquire()
{
    unsafe
    {
        _info = new CPLINFO();
        _infoPtr = Marshal.AllocHGlobal(sizeof(CPLINFO));
        Marshal.StructureToPtr(_info, _infoPtr, true);

        if (!base.IsNullPtr(_infoPtr))
        {
            _appletLibrary.CPlApplet(AppletMessages.Inquire, new IntPtr(_appletIndex), _infoPtr);
            _info = (CPLINFO)Marshal.PtrToStructure(_infoPtr, typeof(CPLINFO));

            if (!this.IsUsingDynamicResources)
            {
                this.ExtractNameFromResources();
                this.ExtractDescriptionFromResources();
                this.ExtractIconFromResources();
            }
            else
            {
                this.NewInquire();
            }
        }
    }
}

public void NewInquire()
{
    // unsafe
    // {
    _dynInfo = new NEWCPLINFO();
    _dynInfo.Size = Marshal.SizeOf(_dynInfo);
    _dynInfoPtr = Marshal.AllocHGlobal(_dynInfo.Size);
    Marshal.StructureToPtr(_dynInfo, _dynInfoPtr, true);

    if (!base.IsNullPtr(_dynInfoPtr))
    {
        _appletLibrary.CPlApplet(AppletMessages.NewInquire, new IntPtr(_appletIndex), _dynInfoPtr);
        _dynInfo = (NEWCPLINFO)Marshal.PtrToStructure(_dynInfoPtr, typeof(NEWCPLINFO));

        _smallImage = Bitmap.FromHicon(_dynInfo.hIcon);
        _largeImage = Bitmap.FromHicon(_dynInfo.hIcon);
        _name = _dynInfo.NameCharArray.ToString();
        _description = _dynInfo.InfoCharArray.ToString();
    }
    // }
}

To get back the structure from the pointer returned by the applet, we first have to allocate a block of unmanaged memory for the structure. This can be achieved by calling Marshal.AllocHGlobal, which hands us memory from the native heap (not the stack, and not the garbage-collected heap). Keep in mind that any time we allocate unmanaged memory this way, we have to free that memory back up, otherwise we have yet another crappy app with a memory leak. That's just no good for anyone, because the native heap is a finite resource shared with everything else in the process. Leak it long enough and, well, better start thinking about rebooting and explaining why your programs run like a fat man in a marathon. They start out strong enough, but end up taking a taxi across the finish line. That's just no way to be.
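Since the point about pairing every allocation with a free matters so much, here is a minimal sketch of what that cleanup can look like. The method name is mine and hypothetical; this is not the article's actual DisposableObject source, just an illustration of where Marshal.FreeHGlobal fits:

protected void FreeInquireBuffers()
{
    if (_infoPtr != IntPtr.Zero)
    {
        Marshal.FreeHGlobal(_infoPtr);    // releases the CPLINFO buffer
        _infoPtr = IntPtr.Zero;
    }
    if (_dynInfoPtr != IntPtr.Zero)
    {
        Marshal.FreeHGlobal(_dynInfoPtr); // releases the NEWCPLINFO buffer
        _dynInfoPtr = IntPtr.Zero;
    }
}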
Because of the memory allocation occurring, all of my classes inherit from DisposableObject. That is just my simple wrapper for implementing IDisposable and putting some wrappers around pointer checks. So look to the overrides in the AppletLibrary and Applet classes to see resources being disposed properly by overriding the abstract method in the DisposableObject class. Once we have the information back out of the applets, we’re free to display it and wait for the user to open an applet. You can programmatically open an applet by calling the Open method on the Applet class. Here is the code for that: Marshal.AllocHGlobal DisposableObject IDisposable Open public void Open() { IntPtr userData = (this.IsUsingDynamicResources ? _info.AppletDefinedData : _dynInfo.AppletDefinedData); int result = _appletLibrary.CPlApplet(AppletMessages.DoubleClick, new IntPtr(_appletIndex), userData); if (result != 0) { System.ComponentModel.Win32Exception e = new System.ComponentModel.Win32Exception(); System.Diagnostics.Trace.WriteLine(e); } } Notice that we are passing a pointer back to the applet. Each applet can define data in a pointer that we must pass back to when we open and close the applet. If the result of sending the CPL_DBLCLK method returns 0, then everything went OK according to MSDN. However, this call blocks until the applet’s dialog closes, and I’ve seen cases where it fails, by result being non-zero, even after the applet shows its dialog. I am currently trying to figure this out, but the docs aren’t much help. I have noticed that certain applets always seem to fail according to the result of this call, even though they appear to work correctly. I’ve tried to catch and look at the exception, but most times it’s not much help. Try it out and see what your results are. Put a break point on the trace and launch an applet. I’m really quite curious to what the deal with my results are. Again I’m hoping someone can pick up the ball and help me figure this out here. I’ve cleared a lot of pretty tricky stuff I think, I just hate to get stumped this far along with the project to give up and say we just got lucky. On a side note, the Display Properties applet quit working on my system. I don’t know why. I was working in this code base, but then I changed the window handle around. Look at the docs and source to see, if you don’t understand what I’m saying, then you probably can’t help me, so no worries k? LOL. Like I said, this is supposed to be a learning experience for all of us, I’m learning too. I have never seen any code to do what I’m trying to do before online, so feel free to tear me up if you want. Go find another example if you can, I’d love to see it! Seriously, this might have been easier to write having something to turn to other than the cryptic header files and MSDN docs to help me out. Well it’s time that you downloaded the source and demo projects if you haven’t done so already. Nothing feels quite as good as running a project and seeing some results. So go have fun playing with the project and stepping around through some interesting coding ideas. I hope this helps some of you overcome some similar hurdles. I had a lot of fun writing the code and almost as much fun trying to explain it in this article. This is my official first posting online, probably will have many more, just gotta find the time. I’ve held back for sooooo long on posting from lack of time, not lack of ideas or knowledge. Look for more posts in the future. 
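One footnote on the Open() puzzle above, and this is purely a guess on my part: the ternary pulls AppletDefinedData from _info when IsUsingDynamicResources is true, yet in that case the data was filled in by NewInquire() into _dynInfo (and the other way around for static resources). A variant worth trying when chasing those non-zero results; the method name is hypothetical and this is not the article's code:

public void OpenVariant()
{
    IntPtr userData = this.IsUsingDynamicResources
        ? _dynInfo.AppletDefinedData   // dynamic resources: data came back via CPL_NEWINQUIRE
        : _info.AppletDefinedData;     // static resources: data came back via CPL_INQUIRE

    int result = _appletLibrary.CPlApplet(AppletMessages.DoubleClick, new IntPtr(_appletIndex), userData);
    if (result != 0)
    {
        System.Diagnostics.Trace.WriteLine(new System.ComponentModel.Win32Exception());
    }
}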
Also, this being my first article, if you don't mind, give me some feedback, and possibly a vote. Have some kindness on my writing abilities as an author of articles, because as you may know I write code, not books. So chances are high that the article reads poorly, but hopefully the code is solid!

Let's review a few interesting concepts that we've covered: calling an unmanaged function pointer from managed code (here via the C proxy DLL; Marshal.GetUnmanagedThunkForManagedMethodPtr is the related API for the opposite direction, exposing a managed method to unmanaged callers), and loading resources by id with the MAKEINTRESOURCE macro. On runtimes that provide Marshal.GetDelegateForFunctionPointer, the proxy call can be expressed with delegates instead:

delegate int CallApplet1(IntPtr _hWndCpl, AppletMessages message, IntPtr lParam1, IntPtr lParam2);
delegate int CallApplet2(IntPtr _hWndCpl, AppletMessages message);

public int CPlApplet(AppletMessages message, IntPtr lParam1, IntPtr lParam2)
{
    CallApplet1 _Call = (CallApplet1)Marshal.GetDelegateForFunctionPointer(_appletProc, typeof(CallApplet1));
    return _Call(_hWndCpl, message, lParam1, lParam2);
}

public int CPlApplet(AppletMessages message)
{
    CallApplet2 _Call = (CallApplet2)Marshal.GetDelegateForFunctionPointer(_appletProc, typeof(CallApplet2));
    return _Call(_hWndCpl, message);
}

Likewise, LoadImage can be declared so that the MAKEINTRESOURCE cast happens inline, with no C wrapper at all:

public const int LR_DEFAULTCOLOR = 0x0000;

[DllImport("user32.dll", SetLastError = true, CharSet = CharSet.Auto)]
static extern IntPtr LoadImage(IntPtr hinst, IntPtr lpszName, uint uType, int cxDesired, int cyDesired, uint fuLoad);

public IntPtr LoadAppletImage(IntPtr lib, int resource, int w, int h)
{
    return LoadImage(lib, new IntPtr((long)((short)resource)), 1, w, h, LR_DEFAULTCOLOR);
}

For reference, the macros those casts are standing in for:

#define IS_INTRESOURCE(_r) (((ULONG_PTR)(_r) >> 16) == 0)
#define MAKEINTRESOURCEW(i) (LPWSTR)((ULONG_PTR)((WORD)(i)))
http://www.codeproject.com/Articles/6105/Enumerate-and-Host-Control-Panel-Applets-using-C?fid=33551&df=90&mpp=25&noise=1&prof=True&sort=Position&view=Quick&spc=Relaxed&select=2067185&fr=1
CC-MAIN-2015-18
en
refinedweb
Figure 1 Figure 2 Welcome to my very first article (Thanks to Marc Clifton & others for guidelines to writing articles on CP!). *Takes deep breath* I hope it conforms to the CP guidelines! The article is about showing how to solve a dilemma that I experienced when I was developing an application, and it governs the usage of combo boxes! Ahhh! This pesky control by Microsoft seems a bit too.... contrived, to say the least, and caused me grief in trying to solve the problem with combo boxes. Basically, the problem with combo boxes is twofold: Look at Figure 1 above to see what I mean. I know, I know, the screenshot looks simple, but what if you have a window that contains many controls, and a combo box happens to be near the edge of the client window and have a situation like the one as shown in the above screenshot! Hand on heart, it's not nice looking, is it? Now, take a look at Figure 2 to see the horizontal scrolling in place. Now, the end user who would be looking at this, can safely scroll across to see if that pertinent selected item contains whatever is relevant, without fear of cluttering up the overall display. Sure, I can manually decrease the dropdown width and still it can be seen! The first part of the solution involves having to figure out the length of the largest string in terms of pixels, which is easy enough. The last part is having to insert a horizontal scrollbar to the dropdown box, and this can make the overall look of the application more polished. (See Figure 2 above.) Some of the code highlighted here can be found in the source archive. I have included links to the relevant articles, at the bottom of the page, for reference. There're a few pre-requisites. First, knowledge of using the Win32 API is vital, P/Invoke a must have! Secondly, a knowledge of translating from VB.NET to C# (more about this in a second!). . Lastly, loads of patience, trial & error, plenty of coffee and cigarettes, and a good reading/looking up MSDN!! <g> In order to determine the length of the largest string, it is not in string length we're talking about here, it is in terms of pixels. Have a look at this section of code which calculates the length in pixels for a range of list items within the Items collection of the combo box. (See Figure 3.) Items #region GetLargestTextExtent - Obtain largest string in pixels private void GetLargestTextExtent(System.Windows.Forms.ComboBox cbo, ref int largestWidth){ int maxLen = -1; if (cbo.Items.Count >= 1){ using (Graphics g = cbo.CreateGraphics()){ int vertScrollBarWidth = 0; if (cbo.Items.Count > cbo.MaxDropDownItems){ vertScrollBarWidth = SystemInformation.VerticalScrollBarWidth; } for (int nLoopCnt = 0; nLoopCnt < cbo.Items.Count; nLoopCnt++){ int newWidth = (int) g.MeasureString(cbo.Items[nLoopCnt].ToString(), cbo.Font).Width + vertScrollBarWidth; if (newWidth > maxLen) { maxLen = newWidth; } } } } largestWidth = maxLen; } #endregion Figure 3 Typical incantation of the above would be: #region cboBoxStandard_DropDown Event Handler private void cboBoxStandard_DropDown(object sender, System.EventArgs e) { int pw = -1; this.GetLargestTextExtent(this.cboBoxStandard, ref pw); this.cboBoxStandard.DropDownWidth = pw; } #endregion Figure 4 In Figure 4, the code consists of a combo box named cboBoxStandard and the DropDown event handler is wired up! Now, that's the first part of the problem solved, which will produce the result as shown in Figure 1. 
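As a side note, on framework versions that have it (2.0 and later), TextRenderer.MeasureText is worth trying in place of Graphics.MeasureString, since it measures with GDI the way the native list window actually draws text. A hedged variation of the routine above (drop it into the same form class; it needs System, System.Drawing and System.Windows.Forms):

// Alternative width measurement; my own variation, not code from the article's download.
private int GetLargestTextExtentGdi(System.Windows.Forms.ComboBox cbo)
{
    int maxLen = 0;
    int vertScrollBarWidth = (cbo.Items.Count > cbo.MaxDropDownItems)
        ? SystemInformation.VerticalScrollBarWidth : 0;

    foreach (object item in cbo.Items)
    {
        Size extent = TextRenderer.MeasureText(cbo.GetItemText(item), cbo.Font);
        maxLen = Math.Max(maxLen, extent.Width + vertScrollBarWidth);
    }
    return maxLen;
}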
cboBoxStandard DropDown The tricky part, is having to get the handle of the actual dropdown box; in C# speak, handle refers to SomeControl.Handle (it can also refer to a System.IntPtr type when using P/Invoke). In Win32 API, it is HWND which is a 32-bit double word otherwise known as a DWORD. SomeControl.Handle System.IntPtr HWND DWORD Great! I have the combo box's handle, but that is as far as it goes in the eyes of .NET. Looking at the following information found in the Microsoft's KB, INFO article: Q262954 titled 'The parts of a Windows Combo Box and How they Relate': There're actually three windows combined to form a combo box, well I never.... a Combo Box control whose Windows class is 'ComboBox', an Edit control whose Windows class is 'Edit' and finally, a list box whose Windows class is 'ComboLBox'. For the uninitiated, a Windows class from Win32 API point of view, is how each and every window is registered in the Windows system. That is to say, it is not an OO (Object Oriented) thing. ComboBox Edit ComboLBox Righto! OK...ummm...hmmm...this 'ComboLBox' is what I'm interested in. In fact, it is the same as an ordinary standard list box, but contained in a window depending on the style of the combo box, i.e., Simple, DropDown or DropDownList. To recap: Right, next part is how to get at that list box's handle...so I dug deep within MSDN after a cuppa too many, with a few cigarettes included. There's a neat Win32 API function which does exactly what I needed to achieve this... GetComboBoxInfo which returns a reference to a structure called ComboBoxInfo. In Win32 API speak, it returns a pointer to a structure COMBOBOXINFO. See Figure 5 for the declaration of the function which is commonly used in C/C++ family of Win32 development. GetComboBoxInfo ComboBoxInfo COMBOBOXINFO // Win32 API Function as per MSDN docs BOOL GetComboBoxInfo(HWND hwndCombo, PCOMBOBOXINFO pcbi); // // C#'s equivalent Function. [DllImport("user32")] public static extern bool GetComboBoxInfo(IntPtr hwndCombo, ref ComboBoxInfo info); // #region RECT struct [StructLayout(LayoutKind.Sequential)] public struct RECT { public int Left; public int Top; public int Right; public int Bottom; } #endregion #region ComboBoxInfo Struct [StructLayout(LayoutKind.Sequential)] public struct ComboBoxInfo { public int cbSize; public RECT rcItem; public RECT rcButton; public IntPtr stateButton; public IntPtr hwndCombo; public IntPtr hwndEdit; public IntPtr hwndList; // That's what I'm interested in.... } #endregion Figure 5 I included the StructLayout attribute to guarantee the values will go into the right offsets during the P/Invoke call, as using P/Invoke marshals the data from managed to unmanaged boundaries and back again. I wrapped up this function into a simple method as shown in Figure 6. StructLayout private bool InitComboBoxInfo(System.Windows.Forms.ComboBox cbo){ this.cbi = new ComboBoxInfo(); this.cbi.cbSize = Marshal.SizeOf(this.cbi); if (!GetComboBoxInfo(cbo.Handle, ref this.cbi)){ return false; } return true; } Figure 6 this.cbi is a global variable within the form's class. We call new on it to get a block of memory assigned to the variable, and we use Marshal.SizeOf() to pre-fill the cbiSize field of that structure prior to the call via P/Invoke. Some structures which are passed into Win32 API functions require this prior to P/Invoke. Check with the MSDN or pinvoke.net. 
Then pass it into the Win32 API function via P/Invoke, so that it is guaranteed that the block of memory gets filled up after the trip to the unmanaged world. If the call fails, we bail out, and the combo box will have standard default behavior after doing a simple check on the bool value returned in certain places! Great! this.cbi new Marshal.SizeOf() cbiSize bool Now, that we have the list box's handle, next part is 'sticking in the horizontal scroll bar'. More coffee and cigarettes, more reading...until I came across an article written in MSDN's December 2000 edition 'ActiveX and Visual Basic: Enhance the Display of Long Text Strings in a Combobox or Listbox'. In the article, the author described how to achieve the above code in Figure 3 using VB 6. It provided the inspiration to do what I needed to do exactly, albeit it was in VB 6. Look at Figure 7 to see the classic VB 6 code. Private Const WS_HSCROLL = &H100000 Dim lWindowStyle As Long lWindowStyle = GetWindowLong(List1.hwnd, GWL_STYLE) lWindowStyle = lWindowStyle Or WS_HSCROLL SetLastError 0 lWindowStyle = SetWindowLong(List1.hwnd, GWL_STYLE, lWindowStyle) Figure 7. It was a matter of translating the code directly to C#'s equivalent, as shown in Figure 8. [DllImport("user32")] public static extern int GetWindowLong(IntPtr hwnd, int nIndex); [DllImport("user32")] public static extern int SetWindowLong(IntPtr hwnd, int nIndex, int dwNewLong); public const int WS_HSCROLL = 0x100000; public const int GWL_STYLE = (-16); int listStyle = GetWindowLong(this.cbi.hwndList, GWL_STYLE); listStyle |= WS_HSCROLL; listStyle = SetWindowLong(this.cbi.hwndList, GWL_STYLE, listStyle); Figure 8. That section of code can be found in the cboBoxEnhanced_DropDown event handler. Basically, what the above code does is, it adjusts the style of the list box to include a Window Style Horizontal SCROLLbar. Every control has a default style, which is a combination of bits, that defines the behavior of the control and how Windows handles the default behavior or processing of events. In this instance, I extract the original bit-mask for the list box's handle using Win32 API Function GetWindowLong via P/Invoke. Then I perform a bit-wise OR on the mask itself to include the horizontal scrollbar, then call SetWindowLong via P/Invoke again. cboBoxEnhanced_DropDown GetWindowLong SetWindowLong The constants can be found in the SDK; if you have Visual Studio 2003, it can be found in the VC7\PlatformSDK\Include. A browse around the C/C++ header file winuser.h is where constants can be found; for common controls it is commctrl.h. If you don't have Visual Studio, why not try get the Borland C++ 5.5 Compiler (Command Line only - which includes the SDK stuff). Note the use of this.cbi.hwndList in the above Figure 8 (this.cbi.hwndList was obtained in the above Figure 6)! That's how the horizontal scroll bar gets inserted into the list box. Next, we need to notify the list box's horizontal scrollbar so that the scrolling magic can take place. To achieve that, another Win32 API function call is required, our friend SendMessage. this.cbi.hwndList SendMessage [DllImport("user32")] public static extern int SendMessage(IntPtr hwnd, int wMsg, int wParam, IntPtr lParam); public const int LB_SETHORIZONTALEXTENT = 0x194; // Set the horizontal extent for the listbox! SendMessage(this.cbi.hwndList, LB_SETHORIZONTALEXTENT, this.pixelWidth, IntPtr.Zero); Figure 9. So that's it...or so I thought....scrolling works just fine, the scrollbar's thumb-tracking doesn't work...damn... 
even more cups of coffee...OK...I realized that I need to subclass this list box and take care of the horizontal scrolling...more searching around until I came across a very fine article here on CP ' Subclassing in .NET -The pure .NET way' by Sameers (theAngrycodeR), which was written using VB.NET. It would be helpful if I could divert you to read the article and to understand how his code works. It is impressive! Thanks Sameers for publishing your article, without it, this wouldn't have been achieved! Here's the translation of the VB.NET code into C#, as shown in Figure 10. I enhanced it slightly by changing the constructor and adding message crackers (a legacy from the Win 3.1 days when wParam and lParam were used to hold two 16 bit values within a long data type - which was C/C++'s datatype of 32 bit value at the time). Of course, this is an excellent example of how events/delegates comes into play here. wParam lParam long #region SubClass Classing Handler Class public class SubClass : System.Windows.Forms.NativeWindow{ public delegate void SubClassWndProcEventHandler(ref System.Windows.Forms.Message m); public event SubClassWndProcEventHandler SubClassedWndProc; private bool IsSubClassed = false; public SubClass(IntPtr Handle, bool _SubClass){ base.AssignHandle(Handle); this.IsSubClassed = _SubClass; } public bool SubClassed{ get{ return this.IsSubClassed; } set{ this.IsSubClassed = value; } } protected override void WndProc(ref Message m) { if (this.IsSubClassed){ OnSubClassedWndProc(ref m); } base.WndProc (ref m); } #region HiWord Message Cracker public int HiWord(int Number) { return ((Number >> 16) & 0xffff); } #endregion #region LoWord Message Cracker public int LoWord(int Number) { return (Number & 0xffff); } #endregion #region MakeLong Message Cracker public int MakeLong(int LoWord, int HiWord) { return (HiWord << 16) | (LoWord & 0xffff); } #endregion #region MakeLParam Message Cracker public IntPtr MakeLParam(int LoWord, int HiWord) { return (IntPtr) ((HiWord << 16) | (LoWord & 0xffff)); } #endregion private void OnSubClassedWndProc(ref Message m){ if (SubClassedWndProc != null){ this.SubClassedWndProc(ref m); } } } #endregion Figure 10. Every control, no matter what, is inherited from NativeWindow which is the essence of how the .NET wrappers within the FCL work for all sorts of controls. There's one caveat emptor that I must mention regarding this class, it does not work for components such as ToolTips (BTW, its handle is not exposed at all! - Can somebody explain how to get at handle for controls such as Tooltips?). So now, it is a matter of deriving an instance of this class and passing in the this.cbi.hwndList into the class' constructor, create the event handler, and then we're in business.. NativeWindow // Within the Constructor of the Form. this.gotCBI = this.InitComboBoxInfo(this.cboBoxEnhanced); if (this.gotCBI){ this.cboListRect = new RECT(); this.si = new SCROLLINFO(); this.scList = new SubClass(this.cbi.hwndList, false); this.scList.SubClassedWndProc += new testform.SubClass.SubClassWndProcEventHandler(scList_SubClassedWndProc); } Figure 11. RECT and SCROLLINFO are structures which hold the rectangle region and scrolling information (surprise, surprise) respectively. You'll see why I initialized/instantiated the variables...hint, hint, subclass... 
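Before stepping through the handler, note that the snippets that follow rely on a handful of declarations that never appear in the article text. For completeness, here is what they typically look like; these are my reconstruction from the standard winuser.h definitions rather than the author's exact source, and RECT is the struct already declared back in Figure 5:

using System;
using System.Runtime.InteropServices;

public class NativeScrollDecls
{
    [StructLayout(LayoutKind.Sequential)]
    public struct SCROLLINFO
    {
        public int cbSize;     // set to Marshal.SizeOf(typeof(SCROLLINFO)) before use
        public int fMask;      // combination of SIF_* flags
        public int nMin;
        public int nMax;
        public int nPage;
        public int nPos;
        public int nTrackPos;
    }

    // scroll bar selector and window messages
    public const int SB_HORZ = 0;
    public const int WM_SIZE = 0x0005;
    public const int WM_HSCROLL = 0x0114;

    // WM_HSCROLL request codes (LOWORD of wParam)
    public const int SB_LINEUP = 0;
    public const int SB_LINEDOWN = 1;
    public const int SB_PAGEUP = 2;
    public const int SB_PAGEDOWN = 3;
    public const int SB_THUMBPOSITION = 4;

    // SCROLLINFO.fMask flags
    public const int SIF_RANGE = 0x0001;
    public const int SIF_PAGE = 0x0002;
    public const int SIF_POS = 0x0004;

    [DllImport("user32")]
    public static extern int SetScrollInfo(IntPtr hwnd, int fnBar, ref SCROLLINFO lpsi, bool fRedraw);

    [DllImport("user32")]
    public static extern int GetClientRect(IntPtr hwnd, ref RECT lpRect);
}

With those values in place, the handler below should compile as shown.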
RECT SCROLLINFO private void scList_SubClassedWndProc(ref Message m) { switch (m.Msg){ case WM_SIZE: GetClientRect(this.cbi.hwndList, ref this.cboListRect); this.xNewSize = this.scList.LoWord(m.LParam.ToInt32()); this.xMaxScroll = Math.Max(this.pixelWidth - this.xNewSize, 0); this.xCurrentScroll = Math.Min(this.xCurrentScroll, this.xMaxScroll); this.si.cbSize = Marshal.SizeOf(this.si); this.si.nMax = this.xMaxScroll; this.si.nMin = this.xMinScroll; this.si.nPos = this.xCurrentScroll; this.si.nPage = this.xNewSize; this.si.fMask = SIF_RANGE | SIF_PAGE | SIF_POS; SetScrollInfo(this.cbi.hwndList, SB_HORZ, ref this.si, false); break; case WM_HSCROLL: int xDelta = 0; int xNewPos = 0; int modulo = (this.xNewSize > this.pixelWidth) ? (this.xNewSize % this.pixelWidth) : (this.pixelWidth % this.xNewSize); switch (this.scList.LoWord(m.WParam.ToInt32())){ case SB_PAGEUP: xNewPos = this.xCurrentScroll - modulo; break; case SB_PAGEDOWN: xNewPos = this.xCurrentScroll + modulo; break; case SB_LINEUP: xNewPos = this.xCurrentScroll - 1; break; case SB_LINEDOWN: xNewPos = this.xCurrentScroll + 1; break; case SB_THUMBPOSITION: xNewPos = this.scList.HiWord(m.WParam.ToInt32()); break; default: xNewPos = this.xCurrentScroll; break; } xNewPos = Math.Max(0, xNewPos); xNewPos = Math.Min(xMaxScroll, xNewPos); if (xNewPos == this.xCurrentScroll) break; xDelta = xNewPos - this.xCurrentScroll; this.xCurrentScroll = xNewPos; this.si.cbSize = Marshal.SizeOf(this.si); this.si.fMask = SIF_POS; this.si.nPos = this.xCurrentScroll; SetScrollInfo(this.cbi.hwndList, SB_HORZ, ref this.si, true); break; } } Figure 12. Even more Win32 API function calls come into play here...well, that sounds like an overstatement, in truth that's two APIs here! APIs used here are SetScrollInfo, GetClientRect, which can be seen in the above Figure 12... SetScrollInfo GetClientRect WM_SIZE The scrolling is not 100% accurate, download and take a look at the demo app and play with the thumb tracking and arrow buttons...that's it. From there on, the sky's the limit, and of course, you can put all of this into an extender control if you so desire. There, it wasn't too hard, was it?...a bit of ingenuity, persistence, and patience does indeed pay off! In the source archive, I have included two radio buttons and two labels, so it is slightly different to the screenshot in the above. It essentially changes the dropdown style at runtime to convince myself that the code works for both styles: DropDown and DropDownList. No error checking is done. If you use this code, please put in error checking to make it production-ready! This hacking took me three days + nights. While hacking this, initially, I tried to display a tooltip depending on the cursor position whilst the dropdown list box is visible, and the tooltip never showed up, it took me ages to figure out why - but I discovered through the MSDN archive, that apparently, the tooltip's window (in which the tooltip text is contained) has lower precedence than the dropdown list box, i.e., z-order of the window is such that the tooltip's window appears on the bottom of other windows. Hence the dropdown portion is on top of it and the tooltip will never show up! That I didn't know....but it is interesting because I initially made an attempt to simply bring the tooltip's window to the foreground via the Win32 API function call SetWindowPos. But then I was caught out as I realized that the handle of the tooltip wasn't exposed publicly...I don't know why...but that's for another day..... 
SetWindowPos The other thing, is that you might question - would subclassing the actual combo box work? To my amusement - with the above code in place, the combo box, get this...did not get any of the WM_HSCROLL messages.... funny this is, after investigating via Spy++, deciphering the hexadecimal messages flashing past my eyes, scrolling off the screen, and coming to a conclusion, the mouse capturing is taking place within the dropdown box and hence all mouse messages were sent to the dropdown box. That explains how the combo box got the focus on the dropdown box when the dropdown style is set to DropDownList or plain DropDown. WM_HSCROLL Yeah, I admit my code ain't reliable, i.e., the scrolling, but hey it works! I did put this into an extender control class, and learnt a very important lesson, if you intend to develop a custom combo control using the code like above, be sure that the combo box's parent handle is set to the form at design time, otherwise bizarre problems will appear, such as subclassing not firing, the horizontal scrollbar not appearing etc. In fact, GetWindowLong fails with an error code of 0, and a quick check to Marshal.GetLastWin32Error() informs me of error code 1400 which is 'Invalid Windows Handle', and SetWindowLong fails!! Bear in mind, that you would have to drop a plain Win combo box when in design view, and in the code, change it to match that of the user control/extender control and you should be OK. If you would like to see a working example of the extender control code, which you can add to the toolbox in VS 2003, drag and drop it on to the form etc., let me know! Marshal.GetLastWin32Error() Tip: It would be best to create an event handler for the DropDownStyleChanged and put the call to InitComboBoxInfo in there, as you would have to call it anyway in order to ensure that the dropdown box's handle is up-to-date and to instantiate a fresh instance of the subclass to match that of the up-to-date handle. Otherwise, you'll run into the similar situations and problems like I did regarding invalid handles and bizarre problems! DropDownStyleChanged InitComboBoxInfo A side bonus that I discovered when I put the above code in place was I got automatic vertical scrolling when the high-light was at the bottom of the dropdown box, whether that was because I have a mouse-wheel-type of mouse - I don't know! Another plus, if you have DrawMode set to OwnerDrawFixed with plain fonts and fancy colors for backgrounds etc., it works a treat. With images, you're on your own. DrawMode OwnerDrawFixed Final note: I was surprised at how much I have learnt from hacking this Combo Box. If I were ever to look at such a difficult control like this Combo Box again, I'd be feeling like 'Oh no! Not another pesky control *sigh*'...... Initial version.
http://www.codeproject.com/Articles/9455/Hacking-the-Combo-Box-to-give-it-horizontal-scroll?msg=1094668
CC-MAIN-2015-18
en
refinedweb
Centralised storage, file sharing and storage consolidation with iSCSI and Fibre Channel technology Next generation scale-out NAS featuring up to 4PB in a single namespace, industry-leading TCO and policy-based data reduction. Optimize file storage with EqualLogic FS7600 and FS7610 NAS appliances and the scalability, flexibility and efficiency of the new FluidFS v3 file system. Low cost, high-performance file storage, healthcare and life sciences, media and entertainment, video surveillance, oil and gas exploration. Organizations looking to unify storage through a flexible, highly available scale-out SAN and NAS solution. Dell Compellent FS8600 FS7600 and 7610
http://www.dell.com/ba/business/p/network-file-storage/product-compare
CC-MAIN-2015-18
en
refinedweb
Hi, I'm running into an issue with brep.MergeCoplanarFaces. The attached black polysurface is slightly kinked, but apparently still close enough for the 2 faces to be regarded as coplanar. However, after merging the faces the trimming edges are out of tolerance. If I brep.Repair the merged brep I do get correct trimming edges, however there is a deviation from the initial surface of 0.459, which exceeds the file tolerance. The command _MergeFace, on the other hand, will correctly state: "Unable to move edges within face tolerance, nothing done".

How do I go about making sure the merge was within tolerance? Should I check edge deviations myself, or am I missing some additional RhinoCommon functionality? What coplanarity tolerance is used?

Thanks
-Willem

below example file and script
merge_coplanar.3dm (48.2 KB)

import rhinoscriptsyntax as rs
import scriptcontext as sc

obj = rs.GetObject('select brep to merge coplanars')
if obj:
    brep = rs.coercebrep(obj)
    tolerance = sc.doc.ModelAbsoluteTolerance
    brep.MergeCoplanarFaces(tolerance)
    id = sc.doc.Objects.AddBrep(brep)
    rs.ObjectColor(id, [255, 0, 0])
    brep.Repair(tolerance)  # make sure the edges are correct
    id = sc.doc.Objects.AddBrep(brep)
    rs.ObjectColor(id, [20, 255, 0])
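A minimal sketch of the kind of manual check mentioned above, assuming BrepEdge.Tolerance reports each edge's fitting tolerance (an assumption on my part, not verified against this exact case):

def edges_out_of_tolerance(brep, tolerance):
    # return the indices of edges whose tolerance exceeds the document tolerance
    return [edge.EdgeIndex for edge in brep.Edges if edge.Tolerance > tolerance]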
https://discourse.mcneel.com/t/brep-mergecoplanarfaces-out-of-tolerance/64329
CC-MAIN-2022-33
en
refinedweb
[Solved] Wrong cosine? Why?

Hello everybody. I need to calculate the cosine of an angle. I used the math.cos() function, but the result I am getting is wrong. Can anyone tell me why? Thank you. I am attaching the code:

import math
print(math.cos(60))

In the console, the result is:

-0.9524129804151563

Otherwise, can you tell me another method with which to calculate the cosine? Thank you

RTFM...
Return the cosine of x radians.

returns its results in radians

The angle has to be given in radians, but cos returns a number between -1 and +1

thank you so much. I had foolishly forgotten ...
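For completeness, a minimal sketch of the fix discussed above, assuming the 60 was meant as degrees:

import math

print(math.cos(math.radians(60)))    # 0.5000000000000001
print(math.cos(60 * math.pi / 180))  # same thing, spelled out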
https://forum.omz-software.com/topic/6961/solved-wrong-cosine-why/1
CC-MAIN-2022-33
en
refinedweb
Numerical Variables

Numerical variables, also known as quantitative variables, are the type of data that represent something measurable or countable, like a frequency, a measurement, etc. Another attribute of numerical variables is that they are always numbers that can be placed in a meaningful order with consistent intervals. As examples of quantitative variables we may mention:

- Weight
- Height
- Sales
- Production units
- Movie Ratings

Discrete and continuous

Numerical variables may be either discrete or continuous. Discrete values are the result of counting, like when we count how many goals a football team has scored in a season. Here, the data take certain numerical values, like 60, 65, 72, and so on. On the other hand, continuous values are the result of a measurement. For instance, we may measure the weights in kilograms of football team players, and the data will assume continuous values inside a range, like 84.1kg, 74.89483kg.

Buckets and bins

Buckets and bins are the way we may organize the numerical data collected in a meaningful order with consistent intervals, so we can analyze it and draw insights from it. For example, we might collect the number of movies produced in the 20th Century and put them in buckets of 10 years, and as a result, we could see the evolution of the Movie Industry in the last century.

But in this article, we will demonstrate a bit of numerical data using the Kaggle Google Play Store Apps dataset from Lavanya Gupta, as we did in the article about Categorical Variables. Using pandas, we will load the dataset, but only the Rating column, which is a typical numerical variable. The users rated the Apps from 1.0 to 5.0.

import pandas as pd
import plotly.express as px
from collections import Counter

df = pd.read_csv("./data/googleplaystore.csv", usecols=['Rating'])

# Drop missing values
df.dropna(axis=0, inplace=True)
ratings = df.Rating

# Drop an outlier rating of 19.0 (from some error)
ratings.drop(10472, inplace=True)

# Plot a histogram
fig = px.histogram(ratings, x='Rating',
                   title='Google Play Store Apps Ratings',
                   template="simple_white")
fig.show()

Histogram

The chart we see above is a Histogram, which looks like the Bar Chart we plotted in the Categorical Variables post, but actually they have some important differences. In a Histogram there is no space between the bars, and the intervals are equally spaced, as expected for numerical values. The shape of the histogram already gives us useful information. The histogram above is left-skewed (it has a tail to the left), so we may conclude that most Apps were well evaluated, because the highest rectangles are on the right side of the histogram, where we have the highest rates (between 4.0 and 5.0). Other shapes a histogram can have are right-skewed, symmetric, bimodal, and uniform. Perhaps we will see more examples of histogram shapes in the next posts!

References

courses.lumenlearning.com | 1.2 Data: Quantitative Data & Qualitative Data 🔎
online.stat.psu.edu | 1.1.1 - Categorical & Quantitative Variables 🔎
YouTube | Brandon Foltz | Statistics 101: Descriptive Statistics, Histograms 🔎
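As a small follow-up to the buckets-and-bins idea above, here is a toy sketch of binning with pandas; the release years are invented purely for illustration:

import pandas as pd

# invented release years, grouped into 10-year buckets
years = pd.Series([1905, 1912, 1938, 1954, 1967, 1981, 1999])
decades = pd.cut(years, bins=range(1900, 2011, 10), right=False)
print(decades.value_counts().sort_index())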
https://practicaldev-herokuapp-com.global.ssl.fastly.net/thalesbruno/numerical-variables-169c
CC-MAIN-2022-33
en
refinedweb
"validate phone number with dial code laravel 8" Code Answers

phone number validation, laravel
php by Handsome Hamerkop on Oct 24 2021

'phone' => 'required|regex:/(01)[0-9]{9}/'

Source: stackoverflow.com

validate phone number with dial code laravel 8
php by Tyagi420 on Feb 02 2022

// The original snippet was missing its closing bracket and compared the value
// itself (not its length) against 15; the repaired reading below assumes the
// intent was to cap the total digit count (number plus dial code) at 15.
'mobile' => ['required', function ($attribute, $value, $fail) use ($parameters) {
    if ((strlen($value) + strlen($parameters['code'])) > 15) {
        $fail('Mobile number is too long');
    }
}],
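A hedged alternative to the snippets above, using only core Laravel validation; the pattern and field name are illustrative, so adjust them to the dial codes you actually accept:

// expects values like +8801712345678: a '+', a 1-4 digit dial code, then 8-12 digits
$request->validate([
    'mobile' => ['required', 'regex:/^\+[1-9]\d{0,3}[0-9]{8,12}$/'],
]);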
phone number validation in laravel 8. phone no validation laravel 10 to 12 digit grameenphone number validation laravel get time of a date laravel phone number required laravel validation phone number tyoe in lravel function date laravel format phone number before validating in laravel phone validation laravel 8 laravel created date time DateTime in laravel package how to declare dateandtime in laravel model laravel taking time and date time column laravel datertime laravel laravel database datetime date('') laravel phone validation in request laravel phone number laravel html laravel time datatype dateTime method laravel datetime laravel eloquent datetime library laravel laravel country phone number validation set time and dae datatype in Larave Browse PHP Answers by Framework Symfony Laravel Zend CodeIgniter CakePHP Drupal Wordpress Yii More “Kinda” Related Answers View All PHP Answers » human readable date laravel laravel model without timestamps timestamp false in laravel get start of month end of month carbon how to calculate age in laravel get today records in laravel get the today data laravel laravel where created_at today carbon start of week minus one day get age with carbon in laravel Carbon 1 is deprecated, see how to migrate to Carbon 2. You can run './vendor/bin/upgrade-carbon' to get help in updating carbon and other frameworks and libraries that depend on it. laravel carbon time format AM PM carbon start of day carbon add minutes get current datatime laravel laravel carbon get year number date casting from datetime to d-m-Y laravel date casting from datetime to d-m-Y laravel using cast display date time twig laravel carbon today date format laravel parse string to date blade current year get year in laravel 8 importing current year in laravel blade formate date using carbon in laravel blade date_default_timezone_set for india in php laravel carbon diffForHumans get record of last 24 hours in laravel laravel convert timestamp to date laravel between dates laravel eloquent whereDateBetween laravel eloquent date range get current date laravel laravel where creation is today carbon Carbon add 3 hours carbon add days from specific date current timestamp carbon php carbon get timestamp carbon parse subday carbon parse sunday 30 days ago carbon day 30 days ago carbon date from format carbon datetime test online carbon now format how to set field type of date of birth in laravel carbon minus 1 day get current month record in laravel laravel carbon get month number laravel carbon count days between dates laravel carbon create date from string datetime format laravel laravel blade date format laravel time format add days to date with laravel indian time laravel whereyear laravel carbon time ago laravel carbon parse from format laravel carbon human readable laravel time to human redable format laravel current timestamp php carbon convert string to date carbon laravel use how to set timezone for iran in laravel carbon add days use Class 'Carbon' inside view different days in carbon laravel between different dates laravel date rule before 18 years ago laravel now date laravel end date greater than start date validation insert timestamps manually in laravel Convert Carbon Seconds Into Days Hours Minute laravel blade time difference carbon finer laravel carbon first day of month carbon in laravel diff for seconds laravel carbon carbon date minus days laravel new date add seconds to datetime carbon carbon to mysql datetime laravel date default now Carbon Format date with timezone in views 
Laravel carbon date format carbon add few hours laravel difference between current time and created time laravel date set timezone how to separate date and time in laravel string to carbon get only date in laravel How to get only year, month and day from timestamp in Laravel Get date without time in laravel carbon 2 days ago laravel subdays carbor sub day how to get the number of days in the current month using carbon change returning datetime timezone to recalculate with user timezone laravel current date in carbon counting time execution duration in time laravel laravel created_at where date format carbon get time format time laravel php carbon from timestamp laravel 8 date format time duration calculation laravel date format in laravel month name day name eloquent where date between laravel date between carbon two day ago laravel nigerian time zone format date in laravel using carbon carbon parse timestamp carbon get today's month laravel form in 24 hours format datediff in hour query builder laravel laravel datatable format date column carbon months between dates carbon set locale laravel laravel 8 created at format laravel get week dates find curren monday in laravel carbon get start of week end of week carbon get records from Sunday to Sunday laravel laravel timestamp carbon difference between two dates laravel subtract date minus day from carbon date date format change in laravel Carbon Add Days To Date In Laravel Laravel validating birthdate by 13 years old laravel insert timestamp now show created_at as normal date laravel blade carbon last day of month in timestamp carbon equal dates laravel 8 date difference in days how to get local current time in laravel laravel seconds to hours minutes seconds Carbon Add Months To Date In Laravel change minutes in to hours carbon format seconds to human readable carbon convert date to timestamp in laravel builder show time laravel changing created_at to short date time laravel created_at changing laravel count group by date laravel count by date php laravel between dates excel extract date from dd mm yyyy laravel blade carbon laravel d m y to y-m-d carbon subtract two dates current time input field in laravel form get am pm 12 hour timee laravel carbon format date in laravel datetime blade laravel date format change in laravel blade get data based on date in laravel convert time to 24 hour format laravel carbon diff check if date between two dates laravel phone number validation, laravel date time laravel laravel carbon isoformat carbon subdays laravel carbon set timezone fetch data based on month and year in laravel invalid datetime format laravel total days between two dates carbon carbon get day name from date laravel carbon set laravel local time to indonesia time left laravel seconds carbon previous day carbon random future date Get only time from timestamp in laravel array of dates laravel search by date using carbon laravel laravel 8 blade get days and hours ago Date Format Conversion in controller or Blade file how convert the date and time to integer in laravel change the date format in laravel view page laravel datepicker date format format date laravel timestamp view date diff in laravel Day of Week Using carbon library carbon create from format currency format in laravel how to get yearly chart in laravel carbon greater than laravel validate datetime with datetime-local laravel get data from this year carbon check if date is greater laravel compare date timestamp get value by today yesterday in laravel laravel carbon created_at 
date in current month carbon compare same date laravel capsule schema datatime CURRENT_TIMESTAMP laravel carbon get day name carbon get day name carbon between hours Get All dates of a month with laravel carbon diffinhours with minutes carbon custom timestamp column laravel laravel timezone Laravel: Set timestamp column to current timestamp Change date format on view - laravel laravel 8 carbon if date is today carbon if date is today laravel set date format laravel carbon date format 2 days left format in laravel seprate day and year from laravel to timestamp how to get previous date in laravel Carbon Add Hours In Laravel carbon date time laravel Difference in seconds between datetime php difference between two dates in seconds how check the time of operation in laravel get array of last 3 dates with carbon get all the between dates from start and end date carbon php carbon select dates except get recoed between two datetime laravel carbon add and subtract carbon this month first day laravel date format valdiate remove time from date in carbon get dates between two dates using specific interval using carbon blade format date submonth carbon carbon get month from date find the next 7 date data in laravel eloquent carbon now set timezone get current month laravel Carbon Add Years To Date In Laravel php/Laravel check if date is passed laravel set timezone dynamically packagist carbon php carbon get last year data in php carbon laravel wrong timestamp laravel timestamp not updating increase date in laravel laravel Join columns of Day, Month , Year to calculate age carbon in laravel documentation How To Substract And Add Hours In Laravel Using Carabon? Laravel display the date the participation was created tomorrow carbon laravel Update First and Last Day of Previous Month with Carbon laravel carbon subtract minutes to current time Carbon\Traits\Units.php:69 carbon get difference between two dates in years and months Calculate the remaining days on view Laravel, negative days if date has passed carbon carbon parse timestamp in model laravel carbon check sunday laravel carbon y-m-d new laravel 8 project laravel artisan clear cache laravel ui auth laravel ui laravel auth laravel storage get file path laravel file path make controller laravel 8 with resource pluck array in laravel laravel debugbar use if in laravel blade CLI to create a new laravel project create laravel project with composer laravel/ui laravel encrypt decrypt call controller function from another controller laravel csrf token laravel install laravel laravel asset setcookie in laravel 8 Could not open input file: artisan clear laravel cache InvalidArgumentException Please provide a valid cache path. laravel please provide a valid cache path Please provide a valid cache path. laravel folder permission .
https://www.codegrepper.com/code-examples/php/validate+phone+number+with+dial+code+laravel+8
CC-MAIN-2022-33
en
refinedweb
Passwords¶ Vapor includes a password hashing API to help you store and verify passwords securely. This API is configurable based on environment and supports asynchronous hashing. Configuration¶ To configure the Application's password hasher, use app.passwords. import Vapor app.passwords.use(...) Bcrypt¶ To use Vapor's Bcrypt API for password hashing, specify .bcrypt. This is the default. app.passwords.use(.bcrypt) Bcrypt will use a cost of 12 unless otherwise specified. You can configure this by passing the cost parameter. app.passwords.use(.bcrypt(cost: 8)) Plaintext¶ Vapor includes an insecure password hasher that stores and verifies passwords as plaintext. This should not be used in production but can be useful for testing. switch app.environment { case .testing: app.passwords.use(.plaintext) default: break } Hashing¶ To hash passwords, use the password helper available on Request. let digest = try req.password.hash("vapor") Password digests can be verified against the plaintext password using the verify method. let bool = try req.password.verify("vapor", created: digest) The same API is available on Application for use during boot. let digest = try app.password.hash("vapor") Async¶ Password hashing algorithms are designed to be slow and CPU intensive. Because of this, you may want to avoid blocking the event loop while hashing passwords. Vapor provides an asynchronous password hashing API that dispatches hashing to a background thread pool. To use the asynchronous API, use the async property on a password hasher. req.password.async.hash("vapor").map { digest in // Handle digest. } // or let digest = try await req.password.async.hash("vapor") Verifying digests works similarly: req.password.async.verify("vapor", created: digest).map { bool in // Handle result. } // or let result = try await req.password.async.verify("vapor", created: digest) Calculating hashes on background threads can free your application's event loops up to handle more incoming requests.
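As a hedged illustration of how the pieces above fit together, here is a minimal route sketch; the route path and the Credentials payload type are invented for this example and are not part of the documentation. It hashes a submitted password on the background thread pool and immediately verifies it against the digest it just produced.

import Vapor

// Hypothetical request payload used only for this sketch.
struct Credentials: Content {
    let password: String
}

func routes(_ app: Application) throws {
    app.post("password-demo") { req async throws -> String in
        let creds = try req.content.decode(Credentials.self)
        // Hash off the event loop rather than blocking it.
        let digest = try await req.password.async.hash(creds.password)
        // Verify the plaintext password against the digest we just created.
        let matches = try await req.password.async.verify(creds.password, created: digest)
        return matches ? "verified" : "mismatch"
    }
}

In a real application you would persist the digest on your user record at signup and call verify against that stored value at login.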
https://docs.vapor.codes/fr/security/passwords/
CC-MAIN-2022-33
en
refinedweb
Description Scheduler fix for a procedure holding locks: the proc was added back to the queue, but the queue was not re-added to the run queue (runq), so the procedure was never picked up again. Attachments Issue Links - blocks HBASE-16744 Procedure V2 - Lock procedures to allow clients to acquire locks on tables/namespaces/regions - Closed - is depended upon by HBASE-16813 Procedure v2 - Move ProcedureEvent to hbase-procedure module - Closed
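The patch itself is not included in this issue text, so the Java sketch below is only an illustration (all class and method names are invented, not the real hbase-procedure code) of the kind of fix the description implies: when a procedure is pushed back onto its queue, the queue must also rejoin the run queue or no worker will ever poll it.

import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical types; the real classes live in the hbase-procedure module.
interface Procedure { }

final class ProcQueue {
    private final Deque<Procedure> procs = new ArrayDeque<>();
    boolean inRunQueue;                       // tracked so the queue is not added twice

    void add(Procedure proc) { procs.addLast(proc); }
}

final class SimpleSched {
    private final Deque<ProcQueue> runq = new ArrayDeque<>();

    // The bug described above: the proc was put back on its queue, but the queue
    // itself was never re-added to the runq, so workers never polled it again.
    synchronized void addBack(Procedure proc, ProcQueue queue) {
        queue.add(proc);
        if (!queue.inRunQueue) {              // the missing step
            queue.inRunQueue = true;
            runq.addLast(queue);
        }
        notifyAll();                          // wake any worker waiting for work
    }
}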
https://issues.apache.org/jira/browse/HBASE-16735
CC-MAIN-2022-33
en
refinedweb
Panja Patchi Sastram Software [EXCLUSIVE] Download Panja Patchi Sastram Software Download Panchapakshi Software Downloads and Software : Pentimento. Vergaala Shathra Sutra: The Vedic Shatr Agama (From Elements and Panjapakshi) || Free download · Panja Patchi Sastram Software Downloadlexon_35. Yvaneswara sastram software download.. New. Download. The software is made compatible with Windows 7. 31 Oct 2011 – 4 min – Uploaded by Dinesh Tiwari Download · Pancha Patchi Sastram Software Download. Book 2 is the yellow chapter. Download · Panja Patchi Sastram Software. software”. The auctioneer has a smile on his face because he knows that. 21 Jun 2014 — 24 min – Uploaded by R SagarI never thought that I would come across a software that does all the mathematics in. 28 Dec 2013 – 8 min – Uploaded by Vignan Bhamidipatty Pancha patchi sastram software download Modern. Balachandra adangal. All collected datas were entered using MS access/excel software onto computer. 12 Dec 2010 — 8 min – Uploaded by Download · Pancha Patchi Sastram Software Downloadfree download · Panja Patchi Sastram Software Download. book 2 is the yellow chapter. Download · Panja Patchi Sastram Software. software”. The auctioneer has a smile on his face because he knows that. View and Download Avaash Mobile User Guide booklet — MS Windows XP/Vista/Windows 7. Mobile. DVD with MP4 720p : Pancha/pakshi software.. By clicking the Download Button, the software will download and install onto the. 22 Jul 2016 – 3 min – Uploaded by Mahanagar. Pancha/Pakshi Software. Book 2 is the yellow chapter. Download · Panja Patchi Sastram Software. software”. The auctioneer has a smile on his face because he knows that. Accounting eBooks Software, Accountancy Books, Accounting eBooks, eBook Software for Accountancy For Free. Download · Panja Patchi Sastram Software Downloadebook accounting information systems by marshall romney rar 42 loader iclass b9b9. An illustration of two photographs. Images. A bird book Download Kbuzlenia Balboa Escutar Player Naruto Shippuden The Legend of the Ninja: Chapter 652,. in the third episode of the anime series. After the events of the. In. Microsoft application of PDF files with extra ink and reference tools. XXX. 535994173. He is a great component of show. Here, he. There is another good episode for Naruto Shippuden… Download Free Software… My Items v1.1 (Jun 6, 2009). /* This file is part of dnSpy dnSpy is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version. dnSpnSpy. If not, see . */ using System; using dnSpy.AsmEditor.DnlibDialogs; using dnSpy.AsmEditor.Properties; using dnSpy.AsmEditor.TvSettings; using dnSpy.AsmEditor.XmlRpc; using dnSpy.AsmEditor.XmlRpc.Calls; namespace dnSpy.AsmEditor.ViewHelpers { /// /// Returns the value of as a string. /// public static string GetXDocumentString(in XmlAsmNode asmNode, bool selectContent = true) { if (asmNode == null) 37a470d65a termsrv.dll has an unknown checksum sp1 fpwin pro 6 full version download ReaSoft Development reaConverter Pro 7.4 Cracked Kickin’ It – A colpi di karate 720p torrent Onekey Ghost Win 7 32bit Estadistica Para Negocios Y Economia 11 Edicion Anderson Sweeney Williams Pdf Descargar Gratis EURODENT 2000.rar Kamen Rider Battride War Pc Down Silhouette Studio Design Edition Torrent Hard Reset Tablet Sonivox
https://haitiliberte.com/advert/panja-patchi-sastram-software-exclusive-download/
CC-MAIN-2022-33
en
refinedweb
Outputs commands to the terminal. Curses Library (libcurses.a) #include <curses.h> int putp(const char *str); int tputs(const char *str, int affcnt, int (*putfunc)(int)); These subroutines output commands contained in the terminfo database to the terminal. The putp subroutine is equivalent to tputs(str, 1, putchar). The output of the putp subroutine always goes to stdout, not to the fildes specified in the setupterm subroutine. The tputs subroutine outputs str to the terminal. The str argument must be a terminfo string variable or the return value from the tgetstr, tgoto, tigestr, or tparm subroutines. The affcnt argument is the number of lines affected, or 1 if not applicable. If the terminfo database indicates that the terminal in use requires padding after any command in the generated string, the tputs subroutine inserts pad characters into the string that is sent to the terminal, at positions indicated by the terminfo database. The tputs subroutine outputs each character of the generated string by calling the user-supplied putfunc subroutine (see below). The user-supplied putfunc subroutine (specified as an argument to the tputs subroutine is either putchar or some other subroutine with the same prototype. The tputs subroutine ignores the return value of the putfunc subroutine. Upon successful completion, these subroutines return OK. Otherwise, they return ERR. For the putp subroutine: To call the tputs(my_string , 1, putchar) subroutine, enter: char *my_string; putp(my_string); For the tputs subroutine: int_my_putchar(); tputs(clear_screen, 1 ,my_putchar); int_my_putchar(); tputs(tparm(cursor_address, 18, 40), 1, my_putchar); This subroutine is part of Base Operating System (BOS) Runtime. The doupdate, is_linetouched, putchar, tgetent, tigetflag, tputs subroutines. Curses Overview for Programming in AIX Version 4.3 General Programming Concepts: Writing and Debugging Programs. List of Curses Subroutines in AIX Version 4.3 General Programming Concepts: Writing and Debugging Programs. Understanding Terminals with Curses in AIX Version 4.3 General Programming Concepts: Writing and Debugging Programs.
http://ps-2.kev009.com/tl/techlib/manuals/adoclib/libs/basetrf2/putp.htm
CC-MAIN-2022-33
en
refinedweb
Extending Iridium With Custom Step Definitions Iridium makes it easy for developers to build in their own custom step definitions, removing the need to write custom code gluing Cucumber and WebDriver together. Developers can take advantage of the work done in Iridium to create their own step definitions, which can then be run by Iridium alongside (or completely replacing) the standard set of steps provided by Iridium. To demonstrate this, we'll create a simple extension with a single step definition that logs a message. To start with, you'll need to create a GPG key, which is used to sign the extension JAR files. The Building chapter of the Getting Started Guide has details on how to create a GPG key. You'll also need to sign up to the Sonatype Open Source Maven publishing programme. This is a service provided by Sonatype that allows open source developers to publish artifacts to the central Maven repositories, which is incredibly useful given that Maven is the de facto method for sharing Java artifacts. Once you have created your GPG key and have a Sonatype account, those details go in the file ~/.gradle/gradle.properties. With that done, we can create the Java project. We'll start with the Gradle build script in the build.gradle file. buildscript { repositories { mavenLocal() mavenCentral() maven { url '' } maven { url '' } } dependencies { classpath 'com.matthewcasperson:build:0.+' } } apply plugin: 'com.matthewcasperson.build.iridiumextension' This build file takes advantage of a custom Gradle plugin that handles everything you need to create an Iridium extension. You can find the source code to this plugin here. By using the Gradle plugin, things like Maven repos, Maven publishing, dependencies and code style checking are handled for you. One of the requirements enforced by the plugin is the need to specify a number of properties of the extension. This is done in the gradle.properties file. # Update these properties to reflect the details of the extension Group=com.matthewcasperson ArchivesBaseName=example-extension Version=0.0.1-SNAPSHOT MavenName=Iridium Example Extension MavenDescription=A demo of an extension that can be used with the Iridium testing application MavenURL= MavenSCMConnection=scm:git: MavenSCMURL= MavenLicenseName=MIT MavenLicenseURL= MavenDeveloperID=mcasperson MavenDeveloperName=Matthew Casperson MavenDeveloperEMail=matthewcasperson@gmail.com To make it easy for others to build your project, create a Gradle wrapper with the command gradle wrapper --gradle-version 2.12 This saves some scripts in your project that are intended to be checked into source control. Running these scripts will download the appropriate version of Gradle onto the user's local PC if it doesn't already exist, and means that other developers don't need Gradle installed to build your project. Now we create the class that will hold our step definition. The class needs to be in a package under au.com.agic.apptestingext.steps. This is because Iridium has been configured to scan all classes under au.com.agic.apptestingext.steps for step definitions.
package au.com.agic.apptestingext.steps; import org.slf4j.Logger; import org.slf4j.LoggerFactory; import cucumber.api.java.en.When; /** * Cucumber steps exist in a POJO */ public class Example { private static final Logger LOGGER = LoggerFactory.getLogger(Example.class); /** * Annotate a public method with @And, @But, @When, @Given or @Then, and cucumber * will recognise the method as a Gherkin step * * @param message The message to be written to the log */ @When("I write \"(.*?)\" as a log message") public void writeExampleLogMessage(final String message) { LOGGER.info(message); } } The @When annotation takes a regular expression that is matched to any steps in a feature script. The groups in the regular expression are passed into the method as parameters. In this case we have one regaular expression group, which takes the message to be logged, and this is passed to the message parameter. Inside the method, we use these parameters however we like. Our example takes the message and logs it to the console. Once you are happy with the code, publish it to Sonatype with the command: ./gradlew build uploadArchive This uploads the artifact to the Sonatype staging repo. Note that while it is in the staging repo, the artifact is not available to other developers unless they specifically add the staging repo to their build scripts. Our Gradle plugin does add the staging repo though, so anyone using this plugin will have access to your artifact. Publishing to the central Maven repo (which makes the artifact globally available) is covered by the Sonatype docs. With your Iridium extension in the staging Maven repo, we now need to incorporate it into a build of Iridium. To do this, clone the Iridium repo, and the following line to the build.gradle file: dependencies { // Existing dependencies go here... compile group: 'com.matthewcasperson', name: 'example-extension', version: '0.0.1-SNAPSHOT' } Doing so adds your new extension JAR as a dependency of the Iridium project. You can now build a self contained uberjar with the command: ./gradlew clean shadowJar --refresh-dependencies The resulting JAR file in the build/libs directory now contains your JAR file with the step definitions. You can reference these step definitions from a feature script like any of the step definitions included with Iridium. Feature: Local Test Scenario: Local Scenario And I write "Hello World" as a log message The source code to this example extension can be found here. Published at DZone with permission of Matthew Casperson, DZone MVB. See the original article here. Opinions expressed by DZone contributors are their own.
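As an additional illustration beyond the original article, a step definition can capture more than one group; each capture group becomes a method parameter in order, and Cucumber converts numeric groups automatically. The class below is an invented example in the same style as the article's code.

package au.com.agic.apptestingext.steps;

import cucumber.api.java.en.When;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class RepeatExample {

    private static final Logger LOGGER = LoggerFactory.getLogger(RepeatExample.class);

    /**
     * Example with two capture groups: the first is passed through as a String,
     * the second is converted to an Integer.
     *
     * @param message The message to write
     * @param times   How many times to write it
     */
    @When("I write \"(.*?)\" to the log (\\d+) times")
    public void writeRepeatedLogMessage(final String message, final Integer times) {
        for (int i = 0; i < times; i++) {
            LOGGER.info(message);
        }
    }
}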
https://dzone.com/articles/extending-iridium-with-custom-step-definitions?utm_medium=feed&utm_source=feedpress.me&utm_campaign=Feed%3A+dzone%2Fwebdev
CC-MAIN-2022-33
en
refinedweb
MVC with configurable controller By: Charles When an application gets large you cannot stick to bare-bones MVC. You have to extend it somehow to deal with these complexities. One mechanism of extending MVC that has found widespread adoption is based on a configurable controller Servlet. The MVC arrangement with a configurable controller servlet is shown in the figure below. When the HTTP request arrives from the client, the Controller Servlet looks up a properties file to decide on the right Handler class for the HTTP request. This Handler class is referred to as the Request Handler. The Request Handler contains the presentation logic for that HTTP request, including business logic invocation. In other words, the Request Handler does everything that is needed to handle the HTTP request. The only difference so far from bare-bones MVC is that the controller servlet looks up a properties file to instantiate the Handler instead of calling it directly. Figure: MVC with configurable controller Servlet. // Configurable Controller Servlet Implementation public class MyControllerServlet extends HttpServlet { private Properties props; public void init(ServletConfig config) throws ServletException { try { props = new Properties(); props.load(new FileInputStream("C:/file.properties")); } catch (IOException ioe) { throw new ServletException(ioe); } } public void doGet(HttpServletRequest httpRequest, HttpServletResponse httpResponse) throws ServletException, IOException { String urlPath = httpRequest.getPathInfo(); String reqhandlerClassName = (String) props.get(urlPath); HandlerInterface handlerInterface = (HandlerInterface) Class.forName(reqhandlerClassName).newInstance(); String nextView = handlerInterface.execute(httpRequest); RequestDispatcher rd = getServletContext().getRequestDispatcher(nextView); rd.forward(httpRequest, httpResponse); } } At this point you might be wondering how the controller servlet would know to instantiate the appropriate Handler. The answer is simple: two different HTTP requests cannot have the same URL, so the URL uniquely identifies each HTTP request on the server side, and each URL needs a unique Handler. In simpler terms, there is a one-to-one mapping between the URL and the Handler class. This information is stored as key-value pairs in the properties file. The Controller Servlet loads the properties file on startup to find the appropriate Request Handler for each incoming URL request. The controller servlet uses Java Reflection to instantiate the Request Handler. However, there must be some sort of commonality between the Request Handlers for the servlet to instantiate them generically. The commonality is that all Request Handler classes implement a common interface; let us call this common interface the Handler Interface. In its simplest form, the Handler Interface has one method, say execute(). The controller servlet reads the properties file to instantiate the Request Handler as shown in the program above. The Controller Servlet instantiates the Request Handler in the doGet() method and invokes the execute() method on it using Java Reflection. The execute() method invokes the appropriate business logic from the middle tier and then selects the next view to be presented to the user. The controller servlet forwards the request to the selected JSP view. All this happens in the doGet() method of the controller servlet. The doGet() method lifecycle never changes; what changes is the Request Handler's execute() method.
You may not have realized it, but you just saw how Struts works in a nutshell! Struts is a configurable, controller-servlet-based MVC framework that executes predefined methods in the handler objects. Instead of using a properties file as we did in this example, Struts uses XML to store this mapping information.
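For completeness, here is a sketch of what the Handler Interface and a concrete Request Handler referenced above might look like; apart from HandlerInterface itself, the class names and the example properties mapping are invented for illustration.

import java.util.Collections;
import java.util.List;
import javax.servlet.http.HttpServletRequest;

// Common contract implemented by every Request Handler.
public interface HandlerInterface {
    // Runs the presentation logic (and any business logic calls) for one URL
    // and returns the path of the next JSP view to forward to.
    String execute(HttpServletRequest request);
}

// A concrete Request Handler, mapped in the properties file, for example:
//   /listCustomers.do=com.example.ListCustomersHandler
class ListCustomersHandler implements HandlerInterface {
    public String execute(HttpServletRequest request) {
        // In a real handler this line would call the middle tier (DAO, EJB, etc.).
        List customers = Collections.emptyList();
        request.setAttribute("customers", customers);   // expose data to the view
        return "/listCustomers.jsp";                    // next view to render
    }
}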
https://java-samples.com/showtutorial.php?tutorialid=351
CC-MAIN-2022-33
en
refinedweb
TextBlob is a Python library that can be used to process textual data. It works well for sentiment analysis, tokenization, spelling correction, and many other natural language processing tasks. In this article, I'll walk you through a tutorial on TextBlob in Python. What is TextBlob in Python? TextBlob is an open-source Python library that is very easy to use for processing text data. It offers many built-in methods for common natural language processing tasks. Some of the tasks where I prefer it over other Python libraries are spelling correction, part-of-speech tagging, and text classification, but it can be used for various NLP tasks such as: - Noun phrase extraction - Part-of-speech tagging - Sentiment analysis - Text classification - Tokenization - Word and phrase frequencies - Parsing - n-grams - Word inflection - Spelling correction I hope you now understand which types of problems the TextBlob library is suited for. In the section below, I will take you through a tutorial on TextBlob in Python. TextBlob in Python (Tutorial) If you have never used this Python library before, you can easily install it with the pip command: pip install textblob. Now let's see how to use it by performing some common natural language processing tasks. I'll start by using it to analyze the sentiment of a text: from textblob import TextBlob # Sentiment Analysis text = TextBlob("I hope you are enjoying this tutorial.") print(text.sentiment) Sentiment(polarity=0.5, subjectivity=0.6) Now let's look at how to do tokenization with this library: # Tokenization text = TextBlob("I am a fan of Apple Products") print(text.words) ['I', 'am', 'a', 'fan', 'of', 'Apple', 'Products'] Sentiment analysis and tokenization are very common tasks and are already offered by many Python libraries. But one task that is less common in other Python NLP libraries is spelling correction. So let's see how to correct spellings with Python: # Spelling Correction text = TextBlob("I love Machne Learnin") print(text.correct()) I love Machine Learning Summary So this is how you can use this library in Python to perform various natural language processing tasks. You can learn more about this library from here. I hope you liked this article on a tutorial on TextBlob in Python. Feel free to ask your valuable questions in the comments section below.
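As a small addendum to the tutorial above (not part of the original article), the feature list also mentions noun phrase extraction, part-of-speech tagging and n-grams; a quick sketch of those calls follows. The exact output can vary, and noun phrase extraction needs the extra corpora installed with python -m textblob.download_corpora.

from textblob import TextBlob

blob = TextBlob("TextBlob makes natural language processing simple and fun.")

# Noun phrase extraction (requires the NLTK corpora to be downloaded first)
print(blob.noun_phrases)      # e.g. ['textblob', 'language processing']

# Part-of-speech tags as (word, tag) pairs
print(blob.tags)              # e.g. [('TextBlob', 'NNP'), ('makes', 'VBZ'), ...]

# Word n-grams
print(blob.ngrams(n=2))       # e.g. [WordList(['TextBlob', 'makes']), ...]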
https://thecleverprogrammer.com/2021/05/07/textblob-in-python-tutorial/
CC-MAIN-2022-33
en
refinedweb
TorchMetrics in PyTorch Lightning¶ TorchMetrics was originally created as part of PyTorch Lightning, a powerful deep learning research framework designed for scaling models without boilerplate. Note TorchMetrics always offers compatibility with the last 2 major PyTorch Lightning versions, but we recommend to always keep both frameworks up-to-date for the best experience. While TorchMetrics was built to be used with native PyTorch, using TorchMetrics with Lightning offers additional benefits: Modular metrics are automatically placed on the correct device when properly defined inside a LightningModule. This means that your data will always be placed on the same device as your metrics. No need to call .to(device)anymore! Native support for logging metrics in Lightning using self.log inside your LightningModule. The .reset()method of the metric will automatically be called at the end of an epoch. The example below shows how to use a metric in your LightningModule: class MyModel(LightningModule): def __init__(self): ... self.accuracy = torchmetrics.Accuracy() def training_step(self, batch, batch_idx): x, y = batch preds = self(x) ... # log step metric self.accuracy(preds, y) self.log('train_acc_step', self.accuracy) ... def training_epoch_end(self, outs): # log epoch metric self.log('train_acc_epoch', self.accuracy) Metric logging in Lightning happens through the self.log or self.log_dict method. Both methods only support the logging of scalar-tensors. While the vast majority of metrics in torchmetrics returns a scalar tensor, some metrics such as ConfusionMatrix, ROC, MeanAveragePrecision, ROUGEScore return outputs that are non-scalar tensors (often dicts or list of tensors) and should therefore be dealt with separately. For info about the return type and shape please look at the documentation for the compute method for each metric you want to log. Logging TorchMetrics¶ Logging metrics can be done in two ways: either logging the metric object directly or the computed metric values. When Metric objects, which return a scalar tensor are logged directly in Lightning using the LightningModule self.log method, Lightning will log the metric based on on_step and on_epoch flags present in self.log(...). If on_epoch is True, the logger automatically logs the end of epoch metric value by calling .compute(). Note sync_dist, sync_dist_op, sync_dist_group, reduce_fx and tbptt_reduce_fx flags from self.log(...) don’t affect the metric logging in any manner. The metric class contains its own distributed synchronization logic. This however is only true for metrics that inherit the base class Metric, and thus the functional metric API provides no support for in-built distributed synchronization or reduction functions. class MyModule(LightningModule): def __init__(self): ... self.train_acc = torchmetrics.Accuracy() self.valid_acc = torchmetrics.Accuracy() def training_step(self, batch, batch_idx): x, y = batch preds = self(x) ... self.train_acc(preds, y) self.log('train_acc', self.train_acc, on_step=True, on_epoch=False) def validation_step(self, batch, batch_idx): logits = self(x) ... self.valid_acc(logits, y) self.log('valid_acc', self.valid_acc, on_step=True, on_epoch=True) As an alternative to logging the metric object and letting Lightning take care of when to reset the metric etc. you can also manually log the output of the metrics. class MyModule(LightningModule): def __init__(self): ... 
self.train_acc = torchmetrics.Accuracy() self.valid_acc = torchmetrics.Accuracy() def training_step(self, batch, batch_idx): x, y = batch preds = self(x) ... batch_value = self.train_acc(preds, y) self.log('train_acc_step', batch_value) def training_epoch_end(self, outputs): self.train_acc.reset() def validation_step(self, batch, batch_idx): logits = self(x) ... self.valid_acc.update(logits, y) def validation_epoch_end(self, outputs): self.log('valid_acc_epoch', self.valid_acc.compute()) self.valid_acc.reset() Note that logging metrics this way will require you to manually reset the metrics at the end of the epoch yourself. In general, we recommend logging the metric object to make sure that metrics are correctly computed and reset. Additionally, we highly recommend that the two ways of logging are not mixed as it can lead to wrong results. Note When using any Modular metric, calling self.metric(...) or self.metric.forward(...) serves the dual purpose of calling self.metric.update() on its input and simultaneously returning the metric value over the provided input. So if you are logging a metric only on epoch-level (as in the example above), it is recommended to call self.metric.update() directly to avoid the extra computation. class MyModule(LightningModule): def __init__(self): ... self.valid_acc = torchmetrics.Accuracy() def validation_step(self, batch, batch_idx): logits = self(x) ... self.valid_acc.update(logits, y) self.log('valid_acc', self.valid_acc, on_step=True, on_epoch=True) Common Pitfalls¶ The following contains a list of pitfalls to be aware of: If using metrics in data parallel mode (dp), the metric update/logging should be done in the <mode>_step_endmethod (where <mode>is either training, validationor test). This is because dpsplit the batches during the forward pass and metric states are destroyed after each forward pass, thus leading to wrong accumulation. In practice do the following: class MyModule(LightningModule): def training_step(self, batch, batch_idx): data, target = batch preds = self(data) # ... return {'loss': loss, 'preds': preds, 'target': target} def training_step_end(self, outputs): # update and log self.metric(outputs['preds'], outputs['target']) self.log('metric', self.metric) Modular metrics contain internal states that should belong to only one DataLoader. In case you are using multiple DataLoaders, it is recommended to initialize a separate modular metric instances for each DataLoader and use them separately. The same holds for using seperate metrics for training, validation and testing. class MyModule(LightningModule): def __init__(self): ... self.val_acc = nn.ModuleList([torchmetrics.Accuracy() for _ in range(2)]) def val_dataloader(self): return [DataLoader(...), DataLoader(...)] def validation_step(self, batch, batch_idx, dataloader_idx): x, y = batch preds = self(x) ... self.val_acc[dataloader_idx](preds, y) self.log('val_acc', self.val_acc[dataloader_idx]) Mixing the two logging methods by calling self.log("val", self.metric)in {training}/{val}/{test}_stepmethod and then calling self.log("val", self.metric.compute())in the corresponding {training}/{val}/{test}_epoch_endmethod. Because the object is logged in the first case, Lightning will reset the metric before calling the second line leading to errors or nonsense results. Calling self.log("val", self.metric(preds, target))with the intention of logging the metric object. 
Because self.metric(preds, target) corresponds to calling the forward method, this will return a tensor and not the metric object. Such logging will be wrong in this case. Instead it is important to separate the call into separate lines: def training_step(self, batch, batch_idx): x, y = batch preds = self(x) ... # log step metric self.accuracy(preds, y) # compute metrics self.log('train_acc_step', self.accuracy) # log metric object
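For the non-scalar metrics mentioned earlier (for example ConfusionMatrix), self.log cannot be used directly. One pattern — a suggestion on our part, not an official recipe from these docs — is to update the metric during the step and compute/reset it yourself at epoch end, assuming the torchmetrics v0.8 ConfusionMatrix(num_classes=...) signature.

import torchmetrics
from pytorch_lightning import LightningModule


class ConfMatModule(LightningModule):
    def __init__(self):
        super().__init__()
        self.confmat = torchmetrics.ConfusionMatrix(num_classes=3)

    def validation_step(self, batch, batch_idx):
        x, y = batch
        preds = self(x).softmax(dim=-1)
        self.confmat.update(preds, y)   # accumulate only; nothing is logged here

    def validation_epoch_end(self, outputs):
        cm = self.confmat.compute()     # a [3, 3] tensor, not loggable via self.log
        print(cm)                       # or log per-cell values / save as an artifact
        self.confmat.reset()            # manual reset, since we bypass self.log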
https://torchmetrics.readthedocs.io/en/v0.8.0/pages/lightning.html
CC-MAIN-2022-33
en
refinedweb
Many methods exist in the java.lang.Character class, each used for a different purpose. Some methods are useful for checking whether a character is a digit or a letter, or whether it is an uppercase or lowercase letter. At the same time, some methods can convert the case of a letter from uppercase to lowercase or vice versa. The present method, compareTo(), is useful to compare two Character objects. Internally, this method uses the compareTo() method of the Comparable interface; this is possible because the Character class implements Comparable. Following is the method signature as defined in the java.lang.Character class. - public int compareTo(Character c1): Compares two Character objects numerically. Returns an integer value equal to the difference of their ASCII values. The following Java Character compareTo() example illustrates this. public class CharacterFunctionDemo { public static void main(String args[]) { Character c1 = new Character('A'); // ASCII value 65 Character c2 = new Character('C'); // ASCII value 67 System.out.println("compareTo() with characters A and A: " + c1.compareTo(c1)); // 65-65 System.out.println("compareTo() with characters A and C: " + c1.compareTo(c2)); // 65-67 System.out.println("compareTo() with characters C and A: " + c2.compareTo(c1)); // 67-65 } } The program prints 0, -2 and 2 for the three comparisons (the original article shows this output as a screenshot).
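Because compareTo() supplies the natural ordering of Character, Character objects can also be sorted or compared inside collections; a small additional example (not from the original article) is shown below.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class CharacterSortDemo {
    public static void main(String[] args) {
        List<Character> letters = new ArrayList<Character>();
        Collections.addAll(letters, 'd', 'a', 'c', 'b');

        Collections.sort(letters);                      // uses Character.compareTo() internally
        System.out.println(letters);                    // [a, b, c, d]

        System.out.println(Collections.max(letters));   // d (largest according to compareTo)
    }
}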
https://way2java.com/java-lang/class-character/java-character-comparetocharacter-c1-example/
CC-MAIN-2022-33
en
refinedweb
USBH_Ep_TypeDef Struct Reference USB HOST endpoint status data. #include <em_usb.h> USB HOST endpoint status data. A host application should not manipulate the contents of this struct. Field Documentation ◆ setup A SETUP package. ◆ setupErrCnt Error counter for SETUP transfers. ◆ epDesc Endpoint descriptor. ◆ parentDevice The device the endpoint belongs to. ◆ type Endpoint type. ◆ packetSize Packet size, current transfer. ◆ hcOut Host channel number assigned for OUT transfers. ◆ hcIn Host channel number assigned for IN transfers. ◆ in Endpoint direction. ◆ toggle Endpoint data toggle. ◆ state Endpoint state. ◆ addr Endpoint address. ◆ buf Transfer buffer. ◆ xferCompleted Transfer completion flag. ◆ xferStatus Transfer status. ◆ xferCompleteCb Transfer completion callback function. ◆ xferred Number of bytes transferred. ◆ remaining Number of bytes remaining. ◆ timeout Transfer timeout.
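Since the struct is read-only from the application's point of view, at most you might peek at a few fields for debugging. The sketch below only reads members documented above and deliberately casts them for printing, because the exact field types are an assumption here and must be checked against em_usb.h; it is not code from the reference itself.

#include <stdio.h>
#include "em_usb.h"

/* Debug helper: print transfer progress of an endpoint without modifying it. */
static void dump_endpoint(const USBH_Ep_TypeDef *ep)
{
  if (ep == NULL)
    return;

  printf("ep addr=0x%02lx state=%ld xferred=%ld remaining=%ld status=%ld\n",
         (unsigned long)ep->addr,
         (long)ep->state,
         (long)ep->xferred,
         (long)ep->remaining,
         (long)ep->xferStatus);
}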
https://docs.silabs.com/gecko-platform/3.1/middleware/api/struct-u-s-b-h-ep-type-def
CC-MAIN-2022-33
en
refinedweb
How to Run Hornet On Kubernetes This page explains how to run IOTA mainnet Hornet nodes in a Kubernetes (K8s) environment. Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. K8s services, support, and tools are widely available on multiple cloud providers. If you are not familiar with K8s we recommend you to start by learning the K8s technology. Introduction Running Hornet mainnet nodes on K8s can enjoy all the advantages of a declarative, managed, portable and automated container-based environment. However, as Hornet is a stateful service with several persistence, configuration and peering requirements, the task can be challenging. To overcome it, the IOTA Foundation under the one-click-tangle repository umbrella is providing K8s recipes and associated scripts that intend to educate developers on how nodes can be automatically deployed, peered and load balanced in a portable way. This script allows you to run sets of Hornet instances "in one click" in your K8s' environment of choice and also provides a blueprint with the best practices K8s administrators can leverage when deploying production-ready environments. Deploying Using the “One Click” Script For running the one click script you need to get access to a K8s cluster. For local development, we recommend microk8s. Instructions on how to install it can be found here. You may also need to enable the ingress add-on on micro-k8s by running microk8s.enable ingress. You will also need to properly configure the kubectl command-line tool to get access to your cluster. You can pass the following parameters as variables on the command line to the one-click script: NAMESPACE: The namespace where the one-click script will create the K8s objects. tangleby default. PEER: A multipeer address that will be used to peer your nodes with. If you do not provide an address, auto-peering will be configured for the set's first Hornet Node ( hornet-0). INSTANCES: The number of Hornet instances to be deployed. 1by default. INGRESS_CLASS: The class associated with the Ingress object that will be used to externally expose the Node API endpoint so that it can be load balanced. It can depend on the target K8s environment. nginxby default. You can deploy a Hornet Node using the default parameter values by running the following command: hornet-k8s.sh deploy After executing the script, different Kubernetes objects will be created under the tangle namespace, as enumerated and depicted below. You can see the kubectl instruction to get more details about them. kubectl get namespaces NAME STATUS AGE default Active 81d tangle Active 144m kube-node-lease Active 81d kube-public Active 81d kube-system Active 81d - A StatefulSet named hornet-setthat controls the different Hornet instances and enables scaling them. kubectl get statefulset -n tangle -o=wide NAME READY AGE CONTAINERS IMAGES hornet-set 1/1 20h hornet gohornet/hornet:1.1.3 - One Pod per Hornet Node bound to our StatefulSet. A pod is an artifact that executes the Hornet Docker container. kubectl get pods -n tangle NAME READY STATUS RESTARTS AGE hornet-set-0 1/1 Running 0 20h You may have noticed that the pod's name is the concatenation of the name of the Statefulset hornet-set plus an index indicating the pod number in the set (in this case 0). If you scaled your StatefulSet to 2, you would have two pods ( hornet-set-0 and hornet-set-1). 
- One Persistent Volume Claim bound to each instance of the StatefulSet. It is used to permanently store all the files corresponding to the internal databases and snapshots of a Hornet Node. kubectl get pvc -n tangle -o=wide NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE hornet-ledger-hornet-set-0 Bound pvc-905fe9c7-6a10-4b29-a9fd-a405fd49a5fd 20Gi RWO standard 157m The name of the Persistent Volume Claim is the concatenation of hornet-ledger plus the name of the bound Pod, hornet-set-0 in our case. - Service objects: - One Service Node Port object exposes the REST API of the nodes. It is a load balancer to port 14625of all the Nodes. - One Service Node Port object per Hornet instance (in this example, just one) which exposes as a "Node Port" the gossip, dashboard, and auto-peering endpoints. kubectl get services -n tangle -o=wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR hornet-0 NodePort 10.60.4.75 <none> 15600:30744/TCP,8081:30132/TCP,14626:32083/UDP 19h statefulset.kubernetes.io/pod-name=hornet-set-0 hornet-rest NodePort 10.60.3.96 <none> 14265:31480/TCP 19h app=hornet You can run kubectl describe services -n tangle to get more details about the endpoints supporting the referred Services. The name of the Services is important as it will allow you to address Hornet Nodes by DNS name within the cluster. For instance, if you want to peer a Hornet Node within the cluster, you can refer to it with the name of its bound Service, for example, hornet-0. - An Ingress controller intended to expose the load-balanced Hornet REST API endpoint outside the cluster, under the /apipath. For convenience, the dashboard corresponding to the first Hornet in the StatefulSet ( hornet-0) is also exposed through the /path. kubectl get ingress -n tangle -o=wide NAME CLASS HOSTS ADDRESS PORTS AGE hornet-ingress <none> * 34.1.1.1 80 21h In the example above, you can observe that the public IP address of the load balancer associated with the Ingress Controller is shown. This will happen when you deploy on a commercial, public cloud service. - A ConfigMap that contains the configuration applied to each Hornet Node, including the peering configuration. Remember that your Hornet nodes, which belong to a StatefulSet, are peered among them. kubectl get configmap -n tangle -o=wide NAME DATA AGE hornet-config 4 19h kube-root-ca.crt 1 19h Likewise, you can run kubectl describe configmap hornet-config to obtain more details about the ConfigMap. Secrets of the Nodes (keys, etc.). Two secrets are created: hornet-secret: Contains secrets related to the dashboard credentials (hash and salt). hornet-private-key: Contains the Ed25519 private keys of each node. kubectl get secrets -n tangle -o=wide NAME TYPE DATA AGE default-token-fks6m kubernetes.io/service-account-token 3 20h hornet-private-key Opaque 1 20h hornet-secret Opaque 2 20h This blueprint does not provide Network Policies. However, in a production environment, they should be defined so that Pods are properly restricted to perform outbound connections or receive inbound connections. Accessing Your Hornet Node Once you have deployed your Hornet Node on the cluster, you will want to access it from the outside. Fortunately, that is easy as you have already created K8s Services of type Node Port. This means that your Hornet Node will be accessible through certain ports published on the K8s machine (worker node in K8s terminology) where Hornet is actually running. 
If you execute: kubectl get services -n tangle hornet-0 NodePort 10.60.4.75 <none> 15600:30744/TCP,8081:30132/TCP,14626:32083/UDP 20h hornet-rest NodePort 10.60.3.96 <none> 14265:31480/TCP 20h In the example above, the REST API endpoint of your Hornet Node will be accessible through the port 31480 of a K8s worker. Likewise, the Hornet dashboard will be exposed on the port 30744. If you are running microk8s locally in your machine, you will typically have only one K8s machine running as a virtual machine. Usually, the IP address of the virtual machine is 192.168.64.2. You can double-check the IP address by displaying your current kubectl configuration running the following command: kubectl config view | grep server You should receive an output similar to the endpoint of the K8s API Server. server: Additionally, you can get access to your Hornet Node REST API endpoint through the external load balancer defined by the Ingress Controller. If you are using a local configuration, this will not make much difference as the machine where the Ingress Controller lives is the same as the Service machine (more details at). However, in the case of a real environment provided by a public cloud provider, your Ingress controller will usually be mapped to a load balancer exposed through a public IP address. You can find more information in the commercial public cloud environment's specifics section. Remember that it might take a while for your Hornet Pods to be running and ready Working With Multiple Instances If you want to work with multiple instances, you can scale your current K8s StatefulSet by running: INSTANCES=2 hornet-k8s.sh scale If the cluster has enough resources, a new Hornet Node will automatically be spawned and peered with your original one. You will notice that one more Pod ( hornet-set-1) will be running: kubectl get pods -n tangle -o=wide NAME READY STATUS RESTARTS AGE hornet-set-0 1/1 Running 0 24h hornet-set-1 1/1 Running 0 24h However, if your cluster does not have enough resources, the new POD will still be listed but its status will be Pending: hornet-set-1 0/1 Pending 0 2m12s You can find more details on the reasons why the new Pod is not running by executing: kubectl describe pods/hornet-set-1 -n tangle If your Pod is running properly, a new Persistent Volume will be listed as well: kubectl get pvc -n tangle -o=wide hornet-ledger-hornet-set-0 Bound pvc-905fe9c7-6a10-4b29-a9fd-a405fd49a5fd 20Gi RWO standard 24h hornet-ledger-hornet-set-1 Bound pvc-95b3b566-4602-4a36-8b1b-5e6bf75e5c6f 20Gi RWO standard 24h And an additional Service hornet-1: kubectl get services -n tangle -o=wide NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE hornet-0 NodePort 10.60.4.75 <none> 15600:30744/TCP,8081:30132/TCP,14626:32083/UDP 24h hornet-1 NodePort 10.60.7.44 <none> 15600:32184/TCP,8081:31776/TCP,14626:31729/UDP 24h hornet-rest NodePort 10.60.3.96 <none> 14265:31480/TCP 24h The REST service will be load balancing two Pods. You can verify this by running the following command: kubectl describe services/hornet-rest -n tangle Name: hornet-rest Namespace: tangle Labels: app=hornet-api source=one-click-tangle Selector: app=hornet Type: NodePort IP Family Policy: SingleStack IP Families: IPv4 IP: 10.60.3.96 IPs: 10.60.3.96 Port: rest 14265/TCP TargetPort: 14265/TCP NodePort: rest 31480/TCP Endpoints: 10.56.0.18:14265,10.56.9.32:14265 Session Affinity: None External Traffic Policy: Cluster If your hornet-0 node is synced, hornet-1 should also be synced as hornet-0 and hornet-1 will have peered. 
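Besides the dashboards mentioned next, you can also check synchronization from the command line; the sketch below assumes the standard Hornet /api/v1/info route (not quoted in this tutorial) and simply asks the load-balanced REST Service from inside the cluster.

# Run a throwaway pod and query the load-balanced REST endpoint from within the cluster
kubectl run api-check --rm -it --image=curlimages/curl -n tangle --restart=Never -- \
  curl -s http://hornet-rest:14265/api/v1/info

# The returned JSON includes fields such as isHealthy and latestMilestoneIndex,
# which should match across hornet-0 and hornet-1 once they are peered and synced.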
You can also verify this by connecting to the corresponding dashboards.
Deep Dive. The "One-Click" Script Internals
In this section, you can find the internals of our blueprints for deploying Hornet Nodes on K8s. The figure below depicts the target deployment architecture behind our proposed blueprint: the K8s objects used and their relationships. The following sections provide more details about them and about the K8s manifests that declare them (available at the repository). The label source=one-click-tangle is used to mark these K8s objects, which will live under a specific Namespace (named tangle by default).
StatefulSet hornet-set
The hornet.yaml source file contains the definition of the StatefulSet (hornet-set) that templates and controls the execution of the Hornet Pods. The StatefulSet is also bound to a volumeClaimTemplate so that each Hornet Node in the set can be bound to its own K8s Persistent Volume. The StatefulSet is labeled as source=one-click-tangle and the selector used for the Pods is app=hornet. Additionally, the StatefulSet is bound to the Service hornet-rest.
The template contains the Pod definition, which declares different volumes:
- configuration, which is mapped to the hornet-config ConfigMap.
- private-key, which is mapped to the hornet-private-key Secret.
- secrets-volume, an emptyDir internal volume where the Hornet Node private key will actually be copied.
The Pod definition within the StatefulSet contains one initialization container (create-volumes) and one regular container (hornet). The initialization container is in charge of preparing the corresponding volumes so that the hornet container volume mounts are ready to be used, with the proper files inside and suitable permissions. The initialization container copies the Hornet Node private key and peering configuration so that each Hornet is bound to its own private key and peering details.
The hornet container declares the following volume mounts, which are key for the hornet container to run properly within its Pod:
- /app/config.json against the configuration volume.
- /app/p2pstore against the p2pstore subfolder of the hornet-ledger Persistent Volume.
- /app/p2pstore/identity.key against the transient, internal secrets-volume of the Pod.
- /app/peering.json against the peering subfolder of the hornet-ledger Persistent Volume. This is necessary as the peering configuration is dynamic, and new peers might be added during the lifecycle of the Hornet Node.
- /app/mainnetdb against the mainnetdb subfolder of the hornet-ledger Persistent Volume to store the database files.
- /app/snapshots/mainnet against the snapshots subfolder of the hornet-ledger Persistent Volume to store snapshots.
The Pod template configuration also declares extra configuration details such as liveness and readiness probes, security contexts, and links to other resources such as the Secret that defines the dashboard credentials, mapped into environment variables.
Services
Two different kinds of Services are used in our blueprint:
- A Node Port Service hornet-rest (declared by the hornet-rest-service.yaml manifest) that is bound to the StatefulSet and the port 14265 of the Hornet Nodes. Its purpose is to expose the REST API endpoint of the Hornet Nodes. The endpoint Pods of such a Service are labeled as app=hornet.
- One Node Port Service (hornet-0, hornet-1, ..., hornet-n) per Hornet Node, declared by the hornet-service.yaml manifest. These Node Port Services expose access to the individual dashboard, gossip, and auto-peering endpoints of each node, as in the abridged sketch shown below.
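As a rough illustration (this is an abridged sketch, not the literal manifest; see hornet-service.yaml in the repository for the authoritative definition, and note the port numbers simply follow the defaults used throughout this article), the per-node Service for the first Pod looks roughly like this:
# Abridged, illustrative sketch of a per-node Service
apiVersion: v1
kind: Service
metadata:
  name: hornet-0
  namespace: tangle
spec:
  type: NodePort
  externalTrafficPolicy: Local
  selector:
    statefulset.kubernetes.io/pod-name: hornet-set-0
  ports:
    - name: gossip
      port: 15600
      protocol: TCP
    - name: dashboard
      port: 8081
      protocol: TCP
    - name: autopeering
      port: 14626
      protocol: UDP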
Thus, each of these Services is bound to one and only one Hornet Node. For this purpose, its configuration includes externalTrafficPolicy: Local and a selector named statefulset.kubernetes.io/pod-name: hornet-set-x, where x corresponds to the Pod number of the Hornet Node the Service is bound to. Under the hood, the one-click script takes care of creating as many Services of this type as needed.
Ingress Controller hornet-ingress
The Ingress Controller hornet-ingress is configured so that the hornet-rest Service can be externally load-balanced. There are two path mappings: /api, whose backend is the hornet-rest Service, and /, whose backend is the dashboard of the hornet-0 Service. The latter exists purely for convenience in this blueprint. In the default configuration, the kubernetes.io/ingress.class is nginx, but you can override that for specific cloud environments (see below).
ConfigMap and Secrets
For ConfigMaps and Secrets, there are no YAML definition files, as they are created on the fly through the kubectl command line. They are created from a config directory automatically generated by the "one-click" script. You can see the contents of those objects by running the following command:
kubectl get configmap/hornet-config -n tangle -o=yaml
The same goes for the Hornet dashboard credentials (all the nodes share the same admin credentials):
kubectl get secrets/hornet-secret -n tangle -o=yaml
As well as for the Nodes' private keys:
kubectl get secrets/hornet-private-key -n tangle -o=yaml
Commercial Public Cloud Environments Specifics
Google Kubernetes Engine (GKE)
The deployment recipes are fully portable to the GKE public cloud environment. You will only need to ensure that the Ingress Controller is correctly annotated with kubernetes.io/ingress.class: gce. You can do this by executing the following command:
kubectl annotate -f hornet-ingress.yaml -n $NAMESPACE --overwrite kubernetes.io/ingress.class=gce
Alternatively, if you are using the "one-click" script, you can simply execute the following command and the script will perform the annotation during the deployment process:
INGRESS_CLASS=gce hornet-k8s.sh deploy
The process of deploying an external load balancer by a public cloud provider can take a while.
If you want to get access to the Service Node Ports, you will need a cluster with public K8s workers. You can determine the public IP addresses of your K8s workers by running:
kubectl get nodes -o=wide
Then, you can determine on which K8s worker your Hornet Pod is running by executing the following command (the default NAMESPACE is tangle):
kubectl get pods -n $NAMESPACE -o=wide
Once you determine the worker and its IP address, you can access each Hornet Node by knowing the Node Ports declared by the corresponding Service. You can do this by running the following command:
kubectl get services -n $NAMESPACE
Once you know the port, you will have to create firewall rules so that the port is reachable. That can be done using the gcloud tool. For instance, if your Hornet Node's dashboard is mapped to port 34200 and the public IP address of your K8s worker is 1.1.1.1:
gcloud compute firewall-rules create test-hornet-dashboard --allow tcp:34200
Now, you can open up a browser and load http://1.1.1.1:34200 to access the Hornet Node's dashboard. You may also have to look into encrypting Secrets when moving to a production-ready system.
Amazon Elastic Kubernetes Service (EKS)
The deployment recipes are fully portable to the EKS commercial public cloud environment.
However, there are certain preparation steps (including IAM permission grants) that have to be executed on your cluster so that the Ingress Controller is properly mapped to an AWS Application Load Balancer (ALB).
Additionally, as with the GKE environment, you can access your Hornet Nodes through their Service Node Ports. The procedure requires a cluster with public workers and security groups configured so that traffic is enabled to the corresponding Service Node Ports.
You will need to follow several preparation steps on your cluster to map the Ingress Controller objects to AWS Application Load Balancers. Please read these documents and follow the corresponding instructions on your cluster:
- AWS Docs - Create a kubeconfig for Amazon EKS
- AWS Docs - Application load balancing on Amazon EKS
- AWS Docs - AWS Load Balancer Controller
- Kubernetes Docs - AWS Load Balancer Controller
You will also need to annotate your Ingress Controller with the following:
- kubernetes.io/ingress.class=alb
- alb.ingress.kubernetes.io/scheme=internet-facing
- alb.ingress.kubernetes.io/subnets: a comma-separated list of the IDs of the subnets that can actually host the Services being load balanced, for instance subnet-aa1649cc, subnet-a656cffc, subnet-fdf3dcb5.
Remember that you can annotate your Ingress Controller by running kubectl annotate.
If you have made all the preparations and annotations properly, you will be able to find the DNS name of your external load balancer when you execute the following command (please note it can take a while for DNS servers to sync up):
kubectl get ingress -n $NAMESPACE -o=wide
NAME CLASS HOSTS ADDRESS PORTS AGE
hornet-ingress <none> * xyz.eu-west-1.elb.amazonaws.com 80 71m
Conclusion
Reference recipes are key to facilitating the deployment of IOTA mainnet Hornet nodes. The IOTA Foundation provides them as a blueprint that can be customized by developers and administrators on their journey towards a production-ready deployment. The reference recipes have been designed with portability and simplicity in mind and have been tested successfully on some popular commercial public cloud environments.
https://wiki.iota.org/introduction/how_tos/mainnet_hornet_node_k8s
CC-MAIN-2022-33
en
refinedweb
Python is a powerful programming language with a robust library of functions and other resources for faster development. Its built-in functions are a great support to any Python programmer, but sometimes they are not sufficient to implement your logic and you have to write your own. For such cases you can create User Defined Functions. This post is about how to create and call Python User Defined Functions.
Python User Defined Functions - Syntax
def Function-Name(Argument list):
    statements to define function body
    [return statement]
To define a function you will always use the keyword def. It is followed by the name of the user defined function and the list of arguments in parentheses. The argument list is followed by a colon ":". Recall that blocks in Python are defined with the help of indentation. To define the function body statements, press Enter (to go to the next line of code) and the Tab key (to indent the statement). The return statement is the last statement of the function; it returns the value evaluated within the function.
Example - a user defined function to add two numbers passed as parameters:
def addnums(x, y):   # name of the function and two arguments in parentheses
    c = x + y        # expression to sum the two numbers
    print(c)         # print the added value stored in variable c
OR
def addnums(x, y):   # name of the function and two arguments in parentheses
    c = x + y        # expression to sum the two numbers
    return c         # return the added value stored in variable c
Calling Python User Defined Functions
The created function can be called by writing the function name and passing the required parameters. The function can be called after its definition in the saved Python file. Another method is to call it from the Python IDLE command prompt. If the code runs without errors, it will display the result.
Syntax
Function-name(actual parameter list)
Example - calling the function addnums:
addnums(40, 50)
Note: a function created with the def keyword at the IDLE command prompt is only available for the current session. If you close IDLE, or whichever tool you use to run Python commands, the function defined in that session will not be available in the next one. So it is always advisable to save your functions in Python files. This way you can reuse the user defined functions by importing the saved Python files (modules) into other programs.
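For instance, assuming the version of addnums that returns a value has been saved in a file called myfuncs.py (the file and module names here are purely illustrative), it can be imported and reused from another program:
# another_script.py - reusing a user defined function saved in myfuncs.py
from myfuncs import addnums

result = addnums(40, 50)   # call the imported function
print(result)              # displays 90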
https://csveda.com/python-user-defined-functions/
CC-MAIN-2022-33
en
refinedweb
OpenGL Programming/Modern OpenGL Tutorial 03
Attributes: pass additional vertex information
attribute_name = "v_color";
attribute_v_color = glGetAttribLocation(program, attribute_name);
if (attribute_v_color == -1) {
  cerr << "Could not bind attribute " << attribute_name << endl;
  return false;
}
uniform_name = "fade";
uniform_fade = glGetUniformLocation(program, uniform_name);
if (uniform_fade == -1) {
  cerr << "Could not bind uniform " << uniform_name << endl;
  return false;
}
Note: we could even target a specific array element in the shader code with uniform_name, e.g. "my_array[1]"! In addition, for a uniform, we also explicitly set its non-varying value. Let's request, in render, that the triangle be only slightly opaque:
glUniform1f(uniform_fade, 0.1);
We can now use this variable in our fragment shader:
varying vec3 f_color;
uniform float fade;
void main(void) {
  gl_FragColor = vec4(f_color.r, f_color.g, f_color.b, fade);
}
Note: if you don't use the uniform in your code, glGetUniformLocation will not see it, and will fail.
OpenGL ES 2 portability
In the previous section, we mentioned that GLES2 requires precision hints. These hints tell OpenGL how much precision we want for our data. The precision can be:
- lowp
- mediump
- highp
For instance, lowp can often be used for colors, and precision is implicitly highp in vertex shaders. For fragment shaders, highp might not be available, which can be tested using the GL_FRAGMENT_PRECISION_HIGH macro[2]. We can improve our shader loader so it defines a default precision on GLES2, and ignores precision identifiers on OpenGL 2.1 (so we can still set the precision for a specific variable if needed):
GLuint res = glCreateShader(type);
// GLSL version
const char* version;
int profile;
SDL_GL_GetAttribute(SDL_GL_CONTEXT_PROFILE_MASK, &profile);
if (profile == SDL_GL_CONTEXT_PROFILE_ES)
  version = "#version 100\n";  // OpenGL ES 2.0
else
  version = "#version 120\n";  // OpenGL 2.1
// GLES2 precision specifiers
const char* precision;
precision = "#ifdef GL_ES \n"
            "# ifdef GL_FRAGMENT_PRECISION_HIGH \n"
            "  precision highp float; \n"
            "# else \n"
            "  precision mediump float; \n"
            "# endif \n"
            "#else \n"
            // Ignore unsupported precision specifiers
            "# define lowp \n"
            "# define mediump \n"
            "# define highp \n"
            "#endif \n";
const GLchar* sources[] = { version, precision, source };
glShaderSource(res, 3, sources, NULL);
Keep in mind that the GLSL compiler will count these prepended lines in its line count when displaying error messages. Setting #line 0 sadly does not reset this compiler line count.
Refreshing the display
Now it would be quite nice if the transparency could vary back and forth. To achieve this:
- we can check the number of seconds since the user started the application; SDL_GetTicks() / 1000 gives that
- apply the maths sin function on it (the sin function goes back and forth between -1 and +1 every 2*PI ≈ 6.28 units of time)
- before rendering the scene, prepare a logic function to update its state.
In mainLoop, let's call the logic function before the render:
logic();
render(window);
Let's add a new logic function:
void logic() {
  // alpha 0->1->0 every 5 seconds
  float cur_fade = sinf(SDL_GetTicks() / 1000.0 * (2*3.14) / 5) / 2 + 0.5;
  glUseProgram(program);
  glUniform1f(uniform_fade, cur_fade);
}
Also remove the call to glUniform1f in render.
https://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Tutorial_03
CC-MAIN-2022-33
en
refinedweb